Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas
2005-11-01
Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or "lipid rafts" are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
Structure and physical properties of biomembranes and model membranes
NASA Astrophysics Data System (ADS)
Hianik, T.
2006-12-01
Biomembranes belong to the most important structures of the cell and the cell organelles. They play not only the structural role of a barrier separating the external and internal parts of the cell, but also contain various functional molecules, such as receptors, ionic channels, carriers and enzymes. The cell membrane also preserves the non-equilibrium state in a cell, which is crucial for maintaining its excitability and other signaling functions. The growing interest in biomembranes is also due to their unique physical properties. From a physical point of view, biomembranes, which are composed of a lipid bilayer into which integral proteins are incorporated and on whose surface peripheral proteins and polysaccharides are anchored, represent a liquid crystal of smectic type. Biomembranes are characterized by anisotropy of structural and physical properties. The complex structure of biomembranes makes the study of their physical properties rather difficult. Therefore, several model systems that mimic the structure of biomembranes were developed. Among them, lipid monolayers at an air-water interface, bilayer lipid membranes (BLM), supported bilayer lipid membranes (sBLM) and liposomes are the best known. This work is an introduction to the "physical world" of biomembranes and their models. After an introduction to membrane structure and the history of its establishment, the physical properties of biomembranes and their models are presented stepwise. The main focus is on the properties of lipid monolayers, BLM, sBLM and liposomes, which have been studied in the most detail. This contribution has a tutorial character that may be useful for undergraduate and graduate students in the areas of biophysics, biochemistry, molecular biology and bioengineering; however, it also contains original work of the author, his co-workers and PhD students, which may be useful for specialists working in the field of biomembranes and model membranes.
Ojwang', Loice M; Cook, Robert L
2013-08-01
The interaction of humic acids (HAs) with 1-palmitoyl-2-oleoyl-Sn-glycero-3-phosphocholine (POPC) large unilamellar vesicle (LUV) model biomembrane system was studied by fluorescence spectroscopy. HAs from aquatic and terrestrial (including coal) sources were studied. The effects of HA concentration and temperature over environmentally relevant ranges of 0 to 20 mg C/L and 10 to 30 °C, respectively, were investigated. The dosage studies revealed that the aquatic Suwannee River humic acid (SRHA) causes an increased biomembrane perturbation (percent leakage of the fluorescent dye, Sulforhodamine B) over the entire studied concentration range. The two terrestrial HAs, namely Leonardite humic acid (LAHA) and Florida peat humic acid (FPHA), at concentrations above 5 mg C/L, show a decrease or a plateau effect attributable to the competition within the HA mixture and/or the formation of "partial aggregates". The temperature studies revealed that biomembrane perturbation increases with decreasing temperature for all three HAs. Kinetic studies showed that the membrane perturbation process is complex with both fast and slow absorption (sorption into the bilayer) components and that the slow component could be fitted by first order kinetics. A mechanism based on "lattice errors" within the POPC LUVs is put forward to explain the fast and slow components. A rationale behind the concentration and temperature findings is provided, and the environmental implications are discussed. PMID:23805776
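The slow absorption component above is reported to follow first-order kinetics; this can be sketched numerically. The rate constant, plateau, and time points below are hypothetical illustrations, not values from the study:

```python
import math

def leakage(t, L_inf, k):
    """First-order (single-exponential) approach to a leakage plateau."""
    return L_inf * (1.0 - math.exp(-k * t))

def fit_first_order(times, values, L_inf):
    """Estimate the rate constant k from L(t) = L_inf * (1 - exp(-k t)).

    Log-linearisation gives ln(1 - L/L_inf) = -k t, so k is the negative
    slope of a zero-intercept least-squares fit to the transformed data.
    """
    num = den = 0.0
    for t, L in zip(times, values):
        y = math.log(1.0 - L / L_inf)
        num += t * y
        den += t * t
    return -num / den

# Hypothetical slow-component data (minutes, % dye leakage)
k_true, plateau = 0.05, 40.0
times = [5.0, 10.0, 20.0, 40.0, 80.0]
data = [leakage(t, plateau, k_true) for t in times]
k_est = fit_first_order(times, data, plateau)  # recovers ~0.05 per minute
```

In practice the plateau itself would be fitted rather than assumed, but the log-linear form makes the first-order hypothesis easy to check visually.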
Interactions of PAMAM dendrimers with negatively charged model biomembranes.
Yanez Arteta, Marianna; Ainalem, Marie-Louise; Porcar, Lionel; Martel, Anne; Coker, Helena; Lundberg, Dan; Chang, Debby P; Soltwedel, Olaf; Barker, Robert; Nylander, Tommy
2014-11-13
We have investigated the interactions between cationic poly(amidoamine) (PAMAM) dendrimers of generation 4 (G4), a potential gene transfection vector, with net-anionic model biomembranes composed of different ratios of zwitterionic phosphocholine (PC) and anionic phospho-L-serine (PS) phospholipids. Two types of model membranes were used: solid-supported bilayers, prepared with lipids carrying palmitoyl-oleoyl (PO) and diphytanoyl (DPh) acyl chains, and free-standing bilayers, formed at the interface between two aqueous droplets in oil (droplet interface bilayers, DIBs) using the DPh-based lipids. G4 dendrimers were found to translocate through POPC:POPS bilayers deposited on silica surfaces. The charge density of the bilayer affects translocation, which is reduced when the ionic strength increases. This shows that the dendrimer-bilayer interactions are largely controlled by their electrostatic attraction. The structure of the solid-supported bilayers remains intact upon translocation of the dendrimer. However, the amount of lipids in the bilayer decreases and dendrimer/lipid aggregates are formed in bulk solution, which can be deposited on the interfacial layers upon dilution of the system with dendrimer-free solvent. Electrophysiology measurements on DIBs confirm that G4 dendrimers cross the lipid membranes containing PS, which then become more permeable to ions. The obtained results have implications for PAMAM dendrimers as delivery vehicles to cells. PMID:25310456
Pignatello, R.; Musumeci, T.; Basile, L.; Carbone, C.; Puglisi, G.
2011-01-01
After its systemic administration, a drug encounters many different biological membranes on its way through the body. From circulating macrophage cells to the vessel endothelium, to more complex absorption barriers, the interaction of a biomolecule with these membranes largely determines its rate and time of biodistribution in the body and at the target sites. Therefore, investigating the phenomena occurring at cell membranes, as well as how membranes interact differently with drugs under physiological or pathological conditions, is important to elucidate the molecular basis of many diseases and to identify new potential therapeutic strategies. Naturally, the complexity of the structure and functions of biological and cell membranes has pushed researchers toward proposing and validating simpler two- and three-dimensional membrane models, whose utility and drawbacks will be discussed. This review also describes the analytical methods used to study the interactions of bioactive compounds with biological membrane models, with particular emphasis on calorimetric techniques. These studies can be considered a powerful tool for medicinal chemistry and pharmaceutical technology, both in designing new drugs and in optimizing the activity and safety profiles of compounds already in therapeutic use. PMID:21430952
Transfer kinetics from colloidal drug carriers and liposomes to biomembrane models: DSC studies
Sarpietro, Maria Grazia; Castelli, Francesco
2011-01-01
The release of bioactive molecules from different delivery systems has been studied. We have proposed a protocol based on a system able to take up a bioactive molecule as it is released over time, resembling an in vivo situation; for this purpose we used biomembrane models represented by multilamellar and unilamellar vesicles. The delivery system loaded with the bioactive molecule was put in contact with the biomembrane model, and the release was evaluated by following the effect of the bioactive molecule on the thermotropic behavior of the biomembrane model, comparing the results with those obtained when the pure drug interacts with the biomembrane model. The differential scanning calorimetry technique was employed. Depending on the delivery system used, our approach permits evaluation of the effect of different parameters on bioactive molecule release, such as pH, drug loading degree, delivery system swelling, crosslinking agent, degree of crosslinking, and delivery system side chains. PMID:21430957
Interaction of α-Hexylcinnamaldehyde with a Biomembrane Model: A Possible MDR Reversal Mechanism.
Sarpietro, Maria Grazia; Di Sotto, Antonella; Accolla, Maria Lorena; Castelli, Francesco
2015-05-22
The ability of the naturally derived compound α-hexylcinnamaldehyde (1) to interact with biomembranes and to modulate their permeability has been investigated as a strategy to reverse multidrug resistance (MDR) in cancer cells. Dimyristoylphosphatidylcholine (DMPC) multilamellar vesicles (MLVs) were used as biomembrane models, and differential scanning calorimetry was applied to measure the effect of 1 on the thermotropic behavior of DMPC MLVs. The effect of an aqueous medium or a lipid carrier on the uptake of 1 by the biomembrane was also characterized. Furthermore, taking into account that MDR is strictly regulated by redox signaling, the pro-oxidant and/or antioxidant effects of 1 were evaluated by the crocin-bleaching assay in both hydrophilic and lipophilic environments. Compound 1 was uniformly distributed in the phospholipid bilayers and interacted strongly with DMPC MLVs, intercalating among the phospholipid acyl chains and thus decreasing their cooperativity. The lipophilic medium allowed the absorption of 1 into the phospholipid membrane. In the crocin-bleaching assay, the substance produced no pro-oxidant effects in either hydrophilic or lipophilic environments; conversely, a significant inhibition of AAPH-induced oxidation was exerted in the hydrophilic medium. These results suggest a possible role of 1 as a chemopreventive and chemosensitizing agent for fighting cancer. PMID:25893313
Development of a Nonionic Azobenzene Amphiphile for Remote Photocontrol of a Model Biomembrane.
Benedini, Luciano A; Sequeira, M Alejandra; Fanani, Maria Laura; Maggio, Bruno; Dodero, Verónica I
2016-05-01
We report the synthesis and characterization of a simple nonionic azoamphiphile, C12OazoE3OH, which behaves as an optically controlled molecule both alone and in a biomembrane environment. First, Langmuir monolayer and Brewster angle microscopy (BAM) experiments showed that pure C12OazoE3OH enriched in the (E) isomer was able to form a solidlike mesophase even at low surface pressure, associated with supramolecular organization of the azobenzene derivative at the interface. On the other hand, pure C12OazoE3OH enriched in the (Z) isomer formed a less solidlike monolayer due to the bent geometry around the azobenzene moiety. Second, C12OazoE3OH mixes well into a biological membrane model, Lipoid S75 (up to 20 mol%), and photoisomerization among the lipids proceeded smoothly depending on light conditions. It is proposed that the cross-sectional area of the hydroxyl triethyleneglycol head of C12OazoE3OH inhibits azobenzene H-aggregation in the model membrane; thus, the tail conformational change upon photoisomerization is transferred efficiently to the lipid membrane. We showed that the lipid membrane effectively senses the azobenzene geometrical change, photomodulating properties such as the compressibility modulus, transition temperature, and morphology. In addition, photomodulation proceeds with a color change from yellow to orange, providing the possibility to monitor the system externally. Finally, Gibbs monolayers showed that C12OazoE3OH is able to penetrate the highly packed biomembrane model; thus, C12OazoE3OH might be used as a photoswitchable molecular probe in real systems. PMID:27070294
Chen, Yu; Yang, Yumin; Liao, Qingping; Yang, Wei; Ma, Wanfeng; Zhao, Jian; Zheng, Xionggao; Yang, Yang; Chen, Rui
2016-10-01
Cervical erosion is a common disease among women. The loop electrosurgical excision procedure (LEEP) has been widely used in the treatment of cervical diseases. However, there are no effective wound dressings for postoperative care to protect the wound area from further infection, leading to increased secretion and longer healing times. Iodine is a widely used inorganic antibacterial agent with many advantages, but carriers that form stable iodine-complex antibacterial agents are lacking. In the present study, a novel iodine carrier, carboxymethyl chitosan-g-(poly(sodium acrylate)-co-polyvinylpyrrolidone) (CMCTS-g-(PAANa-co-PVP)), was prepared by graft copolymerization of sodium acrylate (AANa) and N-vinylpyrrolidone (NVP) onto a carboxymethyl chitosan (CMCTS) skeleton. The resulting structure combines the prominent properties of the poly(sodium acrylate) (PAANa) anionic polyelectrolyte segment with the good iodine-complexing ability of the polyvinylpyrrolidone (PVP) segment, while retaining the bioactivity of CMCTS. The properties of the complex, CMCTS-g-(PAANa-co-PVP)-I2, were studied. In vitro experiments show that it has broad-spectrum effects against viruses, fungi, and gram-positive and gram-negative bacteria. A cervical antibacterial biomembrane (CABM) containing the CMCTS-g-(PAANa-co-PVP)-I2 complex was prepared. Iodine release from the CABM is pH-dependent. Clinical trial results indicate that CABM is more effective than conventional treatment in postoperative care after the LEEP operation. PMID:27287120
Travelling lipid domains in a dynamic model for protein-induced pattern formation in biomembranes
NASA Astrophysics Data System (ADS)
John, Karin; Bär, Markus
2005-06-01
Cell membranes are composed of a mixture of lipids. Many biological processes require the formation of spatial domains in the lipid distribution of the plasma membrane. We have developed a mathematical model that describes the dynamic spatial distribution of acidic lipids in response to the presence of GMC proteins and regulating enzymes. The model encompasses diffusion of lipids and GMC proteins, electrostatic attraction between acidic lipids and GMC proteins, as well as the kinetics of membrane attachment/detachment of GMC proteins. If the lipid-protein interaction is strong enough, phase separation occurs in the membrane as a result of free energy minimization and protein/lipid domains are formed. The picture changes if a constant activity of enzymes is included in the model. We chose the myristoyl-electrostatic switch as a regulatory module. It consists of a protein kinase C that phosphorylates and removes the GMC proteins from the membrane and a phosphatase that dephosphorylates the proteins and enables them to rebind to the membrane. For sufficiently high enzymatic activity, the phase separation is replaced by travelling domains of acidic lipids and proteins. The latter active process is typical for nonequilibrium systems. It allows for a faster restructuring and polarization of the membrane since it acts on a larger length scale than the passive phase separation. The travelling domains can be pinned by spatial gradients in the activity; thus the membrane is able to detect spatial cues and can adapt its polarity dynamically to changes in the environment.
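The myristoyl-electrostatic switch described above can be caricatured as a three-state kinetic cycle (membrane-bound, cytosolic-phosphorylated, cytosolic-unphosphorylated). This is a sketch with illustrative rate constants, not the authors' reaction-diffusion model, which additionally couples these kinetics to diffusion and electrostatic attraction:

```python
def switch_cycle(k_attach, k_kinase, k_phosphatase, dt=0.01, t_end=50.0):
    """Forward-Euler integration of the protein cycle:
    bound --(kinase)--> phosphorylated --(phosphatase)--> unphosphorylated
    --(attachment)--> bound.  Total protein is conserved by construction."""
    b, p, u = 1.0, 0.0, 0.0  # fractions: bound, phosphorylated, unphosphorylated
    for _ in range(int(t_end / dt)):
        db = k_attach * u - k_kinase * b
        dp = k_kinase * b - k_phosphatase * p
        du = k_phosphatase * p - k_attach * u
        b, p, u = b + dt * db, p + dt * dp, u + dt * du
    return b, p, u

# With equal rates the cycle relaxes to equal occupancy of the three states
b, p, u = switch_cycle(1.0, 1.0, 1.0)
```

The steady state of this cycle is what the "constant enzymatic activity" in the model perturbs; the travelling domains emerge only once the spatial terms are added.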
Neves, Ana Rute; Nunes, Cláudia; Reis, Salette
2015-09-01
Resveratrol has been widely studied because of its pleiotropic effects in cancer therapy, neuroprotection, and cardioprotection. It is believed that the interaction of resveratrol with biological membranes may play a key role in its therapeutic activity. The capacity of resveratrol to partition into lipid bilayers, its possible location within the membrane, and the influence of this compound on membrane fluidity were investigated using membrane mimetic systems composed of egg L-α-phosphatidylcholine (EPC), cholesterol (CHOL), and sphingomyelin (SM). The results showed that resveratrol has greater affinity for the EPC bilayers than for the EPC:CHOL [4:1] and EPC:CHOL:SM [1:1:1] membrane models. The increased difficulty in penetrating tightly packed membranes is also demonstrated by fluorescence quenching of probes and by fluorescence anisotropy measurements. Resveratrol may be involved in the regulation of cell membrane fluidity, thereby contributing to cell homeostasis. PMID:26237152
Ahlers, M; Grainger, D W; Herron, J N; Lim, K; Ringsdorf, H; Salesse, C
1992-01-01
Three model biomembrane systems, monolayers, micelles, and vesicles, have been used to study the influence of chemical and physical variables of hapten presentation at membrane interfaces on antibody binding. Hapten recognition and binding were monitored for the anti-fluorescein monoclonal antibody 4-4-20 generated against the hapten, fluorescein, in these membrane models as a function of fluorescein-conjugated lipid architecture. Specific recognition and binding in this system are conveniently monitored by quenching of fluorescein emission upon penetration of fluorescein into the antibody's active site. Lipid structure was shown to play a large role in affecting antibody quenching. Interestingly, the observed degrees of quenching were nearly independent of the lipid membrane model studied, but directly correlated with the chemical structure of the lipids. In all cases, the antibody recognized and quenched most efficiently a lipid based on dioctadecylamine where fluorescein is attached to the headgroup via a long, flexible hydrophilic spacer. Dipalmitoyl phosphatidylethanolamine containing a fluorescein headgroup demonstrated only partial binding/quenching. Egg phosphatidylethanolamine with a fluorescein headgroup showed no susceptibility to antibody recognition, binding, or quenching. Formation of two-dimensional protein domains upon antibody binding to the fluorescein-lipids in monolayers is also presented. Chemical and physical requirements for these antibody-hapten complexes at membrane surfaces have been discussed in terms of molecular dynamics simulations based on recent crystallographic models for this antibody-hapten complex (Herron et al., 1989. Proteins Struct. Funct. Genet. 5:271-280). PMID:1420916
Oliver, Miquel; Bauzá, Antonio; Frontera, Antonio; Miró, Manuel
2016-07-01
Experimental sensing schemes and thermodynamic in-silico studies are combined holistically in this manuscript to give new insights into the bioavailability of environmental contaminants via permeation across lipid nanoparticles (liposomes) as a mimic of biological membranes. Using Prodan and Laurdan as fluorescent membrane probes, phosphatidylcholine-based unilamellar liposomes are harnessed to investigate the membranotropic effects of alkyl esters of p-hydroxybenzoic acid and triclosan in vitro on the basis of steady-state fluorescence anisotropy, light scattering, and generalized polarization measurements. The feasibility of using these analytical responses to ascertain differences in temperature-dependent contaminant bioavailability is investigated in detail. High-level density functional theory (DFT) calculations (RI-BP86-D3/def2-SVP) were employed to investigate noncovalent 1:1 complexes of the fluorescent probes and emerging contaminants with dipalmitoylphosphatidylcholine, as a minimalist model of a lipid nanoparticle, to evaluate both the interaction energies and the geometries of the complexes. This information can be related to the degree of penetration of the guest across the lipid bilayer. Our experimental results, supported by in-silico DFT calculations and ecotoxicological data, lead us to conclude that simple analytical measurements of liposomal changes in lipid packing, permeability, and fluidity are appropriate to foresee the potential bioavailability and toxicity of emerging contaminants. PMID:27243463
Andreani, Tatiana; Miziara, Leonardo; Lorenzón, Esteban N; de Souza, Ana Luiza R; Kiill, Charlene P; Fangueiro, Joana F; Garcia, Maria L; Gremião, Palmira D; Silva, Amélia M; Souto, Eliana B
2015-06-01
The present paper focuses on the development and characterization of silica nanoparticles (SiNP) coated with hydrophilic polymers as mucoadhesive carriers for oral administration of insulin. SiNP were prepared by sol-gel technology under mild conditions and coated with different hydrophilic polymers, namely, chitosan, sodium alginate or poly(ethylene glycol) (PEG) with low and high molecular weight (PEG 6000 and PEG 20000) to increase the residence time at intestinal mucosa. The mean size and size distribution, association efficiency, insulin structure and insulin thermal denaturation have been determined. The mean nanoparticle diameter ranged from 289 nm to 625 nm with a PI between 0.251 and 0.580. The insulin association efficiency in SiNP was recorded above 70%. After coating, the association efficiency of insulin increased up to 90%, showing the high affinity of the protein to the hydrophilic polymer chains. Circular dichroism (CD) indicated that no conformation changes of insulin structure occurred after loading the peptide into SiNP. Nano-differential scanning calorimetry (nDSC) showed that SiNP shifted the insulin endothermic peak to higher temperatures. The influence of coating on the interaction of nanoparticles with dipalmitoylphosphatidylcholine (DPPC) biomembrane models was also evaluated by nDSC. The increase of ΔH values suggested a strong association of non-coated SiNP and those PEGylated nanoparticles coated with DPPC polar heads by forming hydrogen bonds and/or by electrostatic interaction. The mucoadhesive properties of nanoparticles were examined by studying the interaction with mucin in aqueous solution. SiNP coated with alginate or chitosan showed high contact with mucin. On the other hand, non-coated SiNP and PEGylated SiNP showed lower interaction with mucin, indicating that these nanoparticles can interdiffuse across mucus network. The results of the present work provide valuable data in assessing the in vitro performance of insulin
Single-Molecule Analysis of Biomembranes
NASA Astrophysics Data System (ADS)
Schmidt, Thomas; Schütz, Gerhard J.
Biomembranes are more than just a cell's envelope - as the interface to a cell's surroundings they carry key signalling functions. Consequently, membranes are highly complex structures: they host about a thousand different types of lipids and about half of the proteome, whose interactions have to be orchestrated appropriately for the various signalling purposes. In particular, knowledge of the nanoscopic organization of the plasma membrane appears critical for understanding the regulation of interactions between membrane proteins. The high localization precision of ˜20 nm combined with a high time resolution of ˜1 ms has made single-molecule tracking an excellent technology for obtaining insights into membrane nanostructures, even in a live-cell context. In this chapter, we highlight concepts for achieving superresolution by single-molecule imaging, summarize tools for data analysis, and review applications on artificial and live cell membranes.
Casadó, Ana; Giuffrida, M Chiara; Sagristá, M Lluïsa; Castelli, Francesco; Pujol, Montserrat; Alsina, M Asunción; Mora, Margarita
2016-02-01
CPT-11 and SN-38 are camptothecins with strong antitumor activity. Nevertheless, their severe side effects and the chemical instability of their lactone ring have called into question the usual forms of their administration and have focused current research on the development of new, suitable pharmaceutical formulations. This work presents a biophysical study of the interfacial interactions of CPT-11 and SN-38 with membrane mimetic models, using monolayer techniques and differential scanning calorimetry. The aim is to gain new insights into bilayer mechanics after drug incorporation and to optimize the design of drug delivery systems based on the formation of stable bilayer structures. Moreover, to our knowledge, the molecular interactions between camptothecins and phospholipids have not been investigated in detail, despite their importance in the context of drug action. The results show that neither CPT-11 nor SN-38 disturbs the structure of the complex liposome bilayers, despite their different solubilities; that CPT-11, positively charged at its piperidine group, interacts electrostatically with DOPS, stabilizing the incorporation of a high percentage of CPT-11 into liposomes; and that SN-38 establishes weak repulsive interactions with lipid molecules that modify the compressibility of the bilayer without significantly affecting either the lipid collapse pressure or the miscibility pattern of drug-lipid mixed monolayers. The suitability of a binary and a ternary lipid mixture for encapsulating SN-38 and CPT-11, respectively, has been demonstrated. PMID:26656185
Phase Studies of Model Biomembranes: Complex Behavior of DSPC/DOPC/Cholesterol
Zhao, Jiang; Wu, Jing; Heberle, Frederick A.; Mills, Thalia T.; Klawitter, Paul; Huang, Grace; Costanza, Greg; Feigenson, Gerald W.
2009-01-01
We have undertaken a series of experiments to examine the behavior of individual components of cell membranes. Here we report an initial stage of these experiments, in which the properties of a chemically simple lipid mixture are carefully mapped onto a phase diagram. Four different experimental methods were used to establish the phase behavior of the 3-component mixture DSPC/DOPC/chol: (1) confocal fluorescence microscopy observation of giant unilamellar vesicles, GUVs; (2) FRET from perylene to C20:0-DiI; (3) fluorescence of dilute dyes C18:2-DiO and C20:0-DiI; and (4) wide angle x-ray diffraction. This particular 3-component mixture was chosen, in part, for a high level of immiscibility of the components in order to facilitate solving the phase behavior at all compositions. At 23 °C, a large fraction of the possible compositions for this mixture give rise to a solid phase. A region of 3-phase coexistence of {Lα + Lβ + Lo} was detected and defined based on a combination of fluorescence microscopy of GUVs, FRET, and dilute C20:0-DiI fluorescence. At very low cholesterol concentrations, the solid phase is the tilted-chain phase Lβ′. Most of the phase boundaries have been determined to within a few percent of the composition. Measurements of the perturbations of the boundaries of this accurate phase diagram could serve as a means to understand the behaviors of a range of added lipids and proteins. PMID:17825247
Statistical Thermodynamics of Biomembranes
Devireddy, Ram V.
2010-01-01
An overview of the major issues involved in the statistical thermodynamic treatment of phospholipid membranes at the atomistic level is summarized: thermodynamic ensembles, initial configuration (or the physical system being modeled), force field representation, as well as the representation of long-range interactions. This is followed by a description of the various ways the simulated ensembles can be analyzed: area per lipid, mass density profiles, radial distribution functions (RDFs), water orientation profile, deuterium order parameter, free energy profiles and void (pore) formation; with particular focus on the results obtained from our recent molecular dynamics (MD) simulations of phospholipids interacting with dimethylsulfoxide (Me2SO), a commonly used cryoprotective agent (CPA). PMID:19460363
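Of the analyses listed above, the radial distribution function is the most algorithmic; a generic in-plane (2D) sketch using random points and the minimum-image convention is shown below. It illustrates the technique only, not the authors' analysis code:

```python
import math
import random

def rdf_2d(points, box, r_max, n_bins):
    """Radial distribution function g(r) for 2D points in a periodic box.

    For an uncorrelated (ideal) system g(r) -> 1; peaks above 1 would reveal
    preferred in-plane spacings, e.g. between lipid headgroups.
    """
    n = len(points)
    rho = n / (box * box)            # number density
    dr = r_max / n_bins
    hist = [0] * n_bins
    for i in range(n):
        xi, yi = points[i]
        for j in range(i + 1, n):
            dx = xi - points[j][0]
            dy = yi - points[j][1]
            dx -= box * round(dx / box)  # minimum-image convention
            dy -= box * round(dy / box)
            r = math.hypot(dx, dy)
            if r < r_max:
                hist[int(r / dr)] += 1
    g = []
    for b, count in enumerate(hist):
        shell = math.pi * ((b + 1) ** 2 - b ** 2) * dr ** 2  # annulus area
        g.append(2.0 * count / (n * rho * shell))            # 2x: i<j pairs only
    return g

random.seed(1)
pts = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(200)]
g = rdf_2d(pts, box=10.0, r_max=3.0, n_bins=10)  # ~1 in every bin for random points
```

In a real trajectory analysis the histogram is additionally averaged over frames, and `r_max` is kept below half the box length so the minimum-image convention remains valid.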
Kinetics of hole nucleation in biomembrane rupture
NASA Astrophysics Data System (ADS)
Evans, Evan; Smith, Benjamin A.
2011-09-01
The core component of a biological membrane is a fluid-lipid bilayer held together by interfacial-hydrophobic and van der Waals interactions, which are balanced for the most part by acyl chain entropy confinement. If biomembranes are subjected to persistent tensions, an unstable (nanoscale) hole will emerge at some time to cause rupture. Because of the large energy required to create a hole, thermal activation appears to be requisite for initiating a hole and the activation energy is expected to depend significantly on mechanical tension. Although models exist for the kinetic process of hole nucleation in tense membranes, studies of membrane survival have failed to cover the ranges of tension and lifetime needed to critically examine nucleation theory. Hence, rupturing giant (~20 μm) membrane vesicles ultra-slowly to ultra-quickly with slow to fast ramps of tension, we demonstrate a method to directly quantify kinetic rates at which unstable holes form in fluid membranes, at the same time providing a range of kinetic rates from <0.01 to >100 s-1. Measuring lifetimes of many hundreds of vesicles, each tensed by precision control of micropipette suction, we have determined the rates of failure for vesicles made from several synthetic phospholipids plus 1:1 mixtures of phospho- and sphingo-lipids with cholesterol, all of which represent prominent constituents of eukaryotic cell membranes. Plotted on a logarithmic scale, the failure rates for vesicles are found to rise dramatically with an increase in tension. Converting the experimental profiles of kinetic rates into changes of activation energy versus tension, we show that the results closely match expressions for thermal activation derived from a combination of meso-scale theory and molecular-scale simulations of hole formation. Moreover, we demonstrate a generic approach to transform analytical fits of activation energies obtained from rupture experiments into energy landscapes characterizing the process of hole
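The measured rise of failure rate with tension can be sketched with a minimal activation model: assume (hypothetically) that the activation energy falls linearly with tension, so the nucleation rate grows exponentially, and a linear tension ramp then shifts the most probable rupture tension logarithmically with loading rate. All parameter values below are illustrative:

```python
import math

def rupture_rate(tension, k0, tau_beta):
    """Thermally activated hole-nucleation rate, k = k0 * exp(tension / tau_beta),
    i.e. an activation energy (in kT) assumed to decrease linearly with tension."""
    return k0 * math.exp(tension / tau_beta)

def most_probable_rupture_tension(ramp_rate, k0, tau_beta):
    """Peak of the rupture-tension distribution for a ramp tau = ramp_rate * t,
    valid in the limit ramp_rate >> k0 * tau_beta."""
    return tau_beta * math.log(ramp_rate / (k0 * tau_beta))

# Faster ramps let a vesicle survive to higher tension before failing
slow_ramp = most_probable_rupture_tension(0.01, k0=1e-4, tau_beta=1.0)
fast_ramp = most_probable_rupture_tension(10.0, k0=1e-4, tau_beta=1.0)
```

This logarithmic dependence of rupture tension on loading rate is the signature the slow-to-fast tension ramps in the experiment are designed to probe.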
Lipid Biomembrane in Ionic Liquids
NASA Astrophysics Data System (ADS)
Yoo, Brian; Jing, Benxin; Shah, Jindal; Maginn, Ed; Zhu, Y. Elaine; Department of Chemical and Biomolecular Engineering Team
2014-03-01
Ionic liquids (ILs) have recently been explored as new "green" chemicals in several chemical and biomedical processes. In pursuit of understanding their toxicity toward aquatic and terrestrial organisms, we have examined the interaction of ILs with lipid bilayers as model cell membranes. Experimentally, by fluorescence microscopy, we have directly observed the disruption of lipid bilayers by added ILs. Depending on the concentration, alkyl chain length, and anion hydrophobicity of the ILs, their interaction with lipid bilayers leads to the formation of micelles, fibrils, and multilamellar vesicles of IL-lipid complexes. By MD computer simulations, we have confirmed that ILs insert into lipid bilayers and modify the spatial organization of lipids in the membrane. The combined experimental and simulation results correlate well with bioassay results of IL-induced suppression of bacterial growth, suggesting a possible mechanism behind IL toxicity. National Science Foundation, Center for Research Computing at Notre Dame.
Biomembranes research using thermal and cold neutrons.
Heberle, F A; Myles, D A A; Katsaras, J
2015-11-01
In 1932 James Chadwick discovered the neutron using a polonium source and a beryllium target (Chadwick, 1932). In a letter to Niels Bohr dated February 24, 1932, Chadwick wrote: "whatever the radiation from Be may be, it has most remarkable properties." Where it concerns hydrogen-rich biological materials, the "most remarkable" property is the neutron's differential sensitivity for hydrogen and its isotope deuterium. Such differential sensitivity is unique to neutron scattering, which, unlike X-ray scattering, arises from nuclear forces. Consequently, the coherent neutron scattering length can experience a dramatic change in magnitude and phase as a result of resonance scattering, imparting sensitivity to both light and heavy atoms, and in favorable cases to their isotopic variants. This article describes recent biomembranes research using a variety of neutron scattering techniques. PMID:26241882
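The hydrogen/deuterium sensitivity described above is usually quantified as a scattering length density (SLD); the sketch below contrasts H2O and D2O using standard tabulated coherent scattering lengths and an approximate water molecular volume:

```python
# Coherent neutron scattering lengths in femtometres (standard tabulated values)
B_COH = {"H": -3.739, "D": 6.671, "O": 5.803}

def sld(atoms, volume_A3):
    """Scattering length density in units of 1e-6 / Angstrom^2.

    SLD = (sum of coherent scattering lengths) / molecular volume;
    1 fm / A^3 = 1e-5 A^-2 = 10 in units of 1e-6 A^-2.
    """
    b_total = sum(B_COH[a] for a in atoms)  # total coherent length, fm
    return b_total * 10.0 / volume_A3

V_WATER = 30.0  # approximate molecular volume of water, A^3

sld_h2o = sld(["H", "H", "O"], V_WATER)  # negative: hydrogen's b < 0 dominates
sld_d2o = sld(["D", "D", "O"], V_WATER)  # strongly positive
```

The sign flip between H2O (about -0.56e-6 A^-2) and D2O (about +6.4e-6 A^-2) is what makes contrast variation by H/D substitution so powerful for biomembrane studies.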
The decreasing of corn root biomembrane penetration for acetochlor with vermicompost amendment
NASA Astrophysics Data System (ADS)
Sytnyk, Svitlana; Wiche, Oliver
2016-04-01
One of the topical environmental security issues is the management and control of anthropogenic (artificially synthesized) chemical agents. The development of protection systems against the toxic effects of herbicides should be based on studies of biological indication mechanisms that identify stressor effects in organisms. Lipid degradation is a non-specific reaction to exogenous chemical agents, so it is important to study the responses of lipid components depending on the stressor type. We studied physiological and biochemical characteristics of lipid metabolism under the action of herbicides of the chloroacetamide group. Corn at different stages of ontogenesis was used as the test object in model laboratory and microfield experiments. Cattle manure treated with the earthworm Eisenia fetida was used as a compost amendment in the chernozem (black soil)-corn system. Acetochlor was found to cause: a decrease in the content of sterols, phospholipids, phosphatidylcholines and phosphatidylethanolamines; an increased pool of free fatty acids and phosphatidic acids, associated with intensified hydrolysis; stimulation of lipase activity at low stressor concentrations and its inhibition at high stressor levels; a decrease in polyenoic free fatty acids, indicating biomembrane degradation; accumulation of phospholipid degradation products (phosphatidic acids); decreased concentrations of high-molecular compounds (phosphatidylcholine and phosphatidylinositol); and a change in the ratio of unsaturated to saturated free fatty acids in biomembrane structures. It was established that incorporation of vermicompost into the black soil at a dose of 0.4 kg/m2 led to restoration of corn root biomembranes, and decreased root biomembrane penetration by acetochlor was observed in the vermicompost trial. A second antidote effect of the compost substances is activation of soil microorganisms.
Goto, Thiago E; Lopes, Carla C; Nader, Helena B; Silva, Anielle C A; Dantas, Noelio O; Siqueira, José R; Caseli, Luciano
2016-07-01
Cadmium selenide (CdSe) magic-sized quantum dots (MSQDs) are semiconductor nanocrystals with stable luminescence that are feasible for biomedical applications, especially for in vivo and in vitro imaging of tumor cells. In this work, we investigated the specific interaction of CdSe MSQDs with tumorigenic and non-tumorigenic cells using Langmuir monolayers and Langmuir-Blodgett (LB) films of lipids as membrane models for diagnosis of cancerous cells. Surface pressure-area isotherms and polarization modulation reflection-absorption spectroscopy (PM-IRRAS) showed an intrinsic interaction between the quantum dots, inserted in the aqueous subphase, and Langmuir monolayers constituted either of selected lipids or of tumorigenic and non-tumorigenic cell extracts. The films were transferred to solid supports to obtain microscopic images, providing information on their morphology. Similarity between films with different compositions representing cell membranes, with or without the quantum dots, was evaluated by atomic force microscopy (AFM) and confocal microscopy. This study demonstrates that the affinity of quantum dots for models representing cancer cells permits the use of these systems as devices for cancer diagnosis. PMID:27107554
Brown, T. W.
2011-04-15
The same complex matrix model calculates both tachyon scattering for the c=1 noncritical string at the self-dual radius and certain correlation functions of operators which preserve half the supersymmetry in N=4 super-Yang-Mills theory. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces.
Performance of skeleton-reinforced biomembranes in locomotion
NASA Astrophysics Data System (ADS)
Zhu, Qiang; Shoele, Kourosh
2008-11-01
Skeleton-reinforced biomembranes are ubiquitous in nature and play critical roles in many biological functions. Representative examples include insect wings, cell membranes, and mollusk nacres. In this study we focus on the ray fins of fish and investigate the effects of anisotropic flexibility on their performance. Employing a fluid-structure interaction algorithm by coupling a boundary-element model with a nonlinear structural model, we examined the dynamics of a membrane that is geometrically and structurally similar to a caudal fin. Several locomotion modes that closely resemble caudal fin kinematics reported in the literature are applied. Our results show that the flexibility of the fin significantly increases its capacity of thrust generation, manifested as increased efficiency, reduced transverse force, and reduced sensitivity to kinematic parameters. This design also makes the fin more controllable and deployable. Despite simplifications made in this model in terms of fin geometry, internal structure, and kinematics, detailed features of the simulated flow field are consistent with observations and speculations based upon Particle Image Velocimetry (PIV) measurements of flow around live fish.
Technology Transfer Automated Retrieval System (TEKTRAN)
Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...
NASA Technical Reports Server (NTRS)
Figueroa-Feliciano, Enectali
2004-01-01
We have developed a software suite that models complex calorimeters in the time and frequency domain. These models can reproduce all measurements that we currently do in a lab setting, like IV curves, impedance measurements, noise measurements, and pulse generation. Since all these measurements are modeled from one set of parameters, we can fully describe a detector and characterize its behavior. This leads to a model that can be used effectively for engineering and design of detectors for particular applications.
Liu, Ying; Zhang, Zhen; Zhang, Quanxuan; Baker, Gregory L.; Worden, R. Mark
2013-01-01
Engineered nanomaterials (ENM) have desirable properties that make them well suited for many commercial applications. However, a limited understanding of how ENM’s properties influence their molecular interactions with biomembranes hampers efforts to design ENM that are both safe and effective. This paper describes the use of a tethered bilayer lipid membrane (tBLM) to characterize biomembrane disruption by functionalized silica-core nanoparticles. Electrochemical impedance spectroscopy was used to measure the time trajectory of tBLM resistance following nanoparticle exposure. Statistical analysis of parameters from an exponential resistance decay model was then used to quantify and analyze differences between the impedance profiles of nanoparticles that were unfunctionalized, amine-functionalized, or carboxyl-functionalized. All of the nanoparticles triggered a decrease in membrane resistance, indicating nanoparticle-induced disruption of the tBLM. Hierarchical clustering allowed the potency of nanoparticles for reducing tBLM resistance to be ranked in the order amine > carboxyl ~ bare silica. Dynamic light scattering analysis revealed that tBLM exposure triggered minor coalescence for bare and amine-functionalized silica nanoparticles but not for carboxyl-functionalized silica nanoparticles. These results indicate that the tBLM method can reproducibly characterize ENM-induced biomembrane disruption and can distinguish the BLM-disruption patterns of nanoparticles that are identical except for their surface functional groups. The method provides insight into mechanisms of molecular interaction involving biomembranes and is suitable for miniaturization and automation for high-throughput applications to help assess the health risk of nanomaterial exposure or identify ENM having a desired mode of interaction with biomembranes. PMID:24060565
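The exponential resistance-decay analysis described in this abstract can be sketched numerically: R(t) = R_inf + (R0 - R_inf)·exp(-t/τ). The sketch below fits synthetic, noisy data; all parameter values are illustrative choices, not values from the study, and the plateau R_inf is treated as known so the model linearizes.

```python
import numpy as np

# Synthetic tBLM-style resistance trajectory (illustrative parameters)
rng = np.random.default_rng(0)
R0, R_inf, tau = 1.0e6, 2.0e5, 300.0                # ohm, ohm, s (assumed)
t = np.linspace(0.0, 1000.0, 60)                    # s
R = R_inf + (R0 - R_inf) * np.exp(-t / tau)
R = R * (1.0 + 0.01 * rng.standard_normal(t.size))  # 1% measurement noise

# With the plateau R_inf treated as known, the model linearizes:
# ln(R - R_inf) = ln(R0 - R_inf) - t / tau, a straight line in t.
y = np.log(R - R_inf)
slope, intercept = np.polyfit(t, y, 1)
tau_fit = -1.0 / slope
R0_fit = R_inf + np.exp(intercept)

print(f"fitted tau = {tau_fit:.0f} s, fitted R0 = {R0_fit:.3g} ohm")
```

In practice a nonlinear least-squares fit of all three parameters would be used when the plateau is not known in advance; the linearized version keeps the sketch dependency-free.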
Eggeling, Christian; Honigmann, Alf
2016-10-01
Biological membranes are complex composites of lipids, proteins and sugars, which catalyze a myriad of vital cellular reactions in a spatiotemporally tightly controlled manner. Our understanding of the organization principles of biomembranes is limited mainly by the challenge of measuring distributions and interactions of lipids and proteins within the complex environment of living cells. With the recent advent of super-resolution optical microscopy (or nanoscopy), non-invasive live-cell fluorescence observation techniques have now reached the molecular scale. Since in silico molecular dynamics (MD) simulation techniques are also improving to study larger and more complex systems, we can now start to integrate live-cell and in silico experiments to develop a deeper understanding of biomembranes. In this review we summarize recent progress in measuring lipid-protein interactions in living cells and give examples of how MD simulations can complement and upgrade the experimental data. This article is part of a Special Issue entitled: Biosimulations edited by Ilpo Vattulainen and Tomasz Róg. PMID:27039279
Debating complexity in modeling
Hunt, Randall J.; Zheng, Chunmiao
1999-01-01
As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.
Biomembranes in atomistic and coarse-grained simulations
NASA Astrophysics Data System (ADS)
Pluhackova, Kristyna; Böckmann, Rainer A.
2015-08-01
The architecture of biological membranes is tightly coupled to the localization, organization, and function of membrane proteins. The organelle-specific distribution of lipids allows for the formation of functional microdomains (also called rafts) that facilitate the segregation and aggregation of membrane proteins and thus shape their function. Molecular dynamics simulations provide direct access to the formation, structure, and dynamics of membrane microdomains at the molecular scale, and to the specific interactions among lipids and proteins on timescales from picoseconds to microseconds. This review focuses on the latest developments of biomembrane force fields for both atomistic and coarse-grained molecular dynamics (MD) simulations, and on the different levels of coarsening of biomolecular structures. It also briefly introduces scale-bridging methods applicable to biomembrane studies, and highlights selected recent applications.
Elasticity of biomembranes studied by dynamic light scattering
NASA Astrophysics Data System (ADS)
Fujime, Satoru; Miyamoto, Shigeaki
1991-05-01
Combination of osmotic swelling and dynamic light scattering makes it possible to measure the elastic modulus of biomembranes. By this technique we have observed a drastic increase in membrane flexibility on activation of Na/glucose cotransporters in membrane vesicles prepared from brush-borders of rat small intestine, and on activation by micromolar [Ca2+] of exocytosis in secretory granules isolated from rat pancreatic acinar cells and bovine adrenal chromaffin cells.
Evaluation of the mechanism of skin enhancing surfactants on the biomembrane of shed snake skin.
Wonglertnirant, Nanthida; Ngawhirunpat, Tanasait; Kumpugdee-Vollrath, Mont
2012-01-01
The aim of the present work was to investigate the effects of different surfactants at various concentrations as skin penetration enhancers through the biomembrane of the shed skin of Naja kaouthia. Additionally, the enhancement mechanism(s) of each class of surfactant were evaluated using physical characterization techniques, such as scanning electron microscopy (SEM), attenuated total reflectance-Fourier transform infrared (ATR-FTIR) spectroscopy, and small and wide angle X-ray scattering (SWAXS). Our results showed that skin permeability increased with increasing concentrations of surfactants, and the degree of increase was higher for the model hydrophilic permeant, deuterium oxide (D2O), than for the lipophilic permeant, ketoprofen (KP). The ionic surfactants, sodium lauryl sulfate (SLS) and cetyl trimethyl ammonium bromide (CTAB), demonstrated higher enhancement ability than the non-ionic surfactant polyoxyethylene (20) sorbitan mono-oleate (Tween 80), which was consistent with the results of the physical characterization studies. Increasing amounts of permeated drug resulted in an increase in membrane interactions. From our observations, it can be assumed that SLS and CTAB localize inside the biomembrane and thereby enhance drug permeation mainly through interactions with intercellular lipids in the stratum corneum (SC) and the creation of a perturbed microenvironment among the lipid alkyl chains and polar head groups. PMID:22466556
Response of biomembrane domains to external stimuli
NASA Astrophysics Data System (ADS)
Urbancic, Iztok
To enrich our knowledge about membrane domains, new measurement techniques with extended spatial and temporal windows are being vigorously developed by combining various approaches. Following such efforts of the scientific community, we set up fluorescence microspectroscopy (FMS), bridging two well established methods: fluorescence microscopy, which enables imaging of the samples with spatial resolution down to 200 nm, and fluorescence spectroscopy that provides molecular information of the environment at nanometer and nanosecond scale. The combined method therefore allows us to localize this type of information with the precision suitable for studying various cellular structures. Faced with weak available fluorescence signals, we have put considerable efforts into optimization of measurement processes and analysis of the data. By introducing a novel acquisition scheme and by fitting the data with a mathematical model, we preserved the spectral resolution, characteristic for spectroscopic measurements of bulk samples, also at microscopic level. We have at the same time overcome the effects of photobleaching, which had previously considerably distorted the measured spectral lineshape of photosensitive dyes and consequently hindered the reliability of FMS. Our new approach has therefore greatly extended the range of applicable environmentally sensitive probes, which can now be designed to better accommodate the needs of each particular experiment. Moreover, photobleaching of fluorescence signal can now even be exploited to obtain new valuable information about molecular environment of the probes, as bleaching rates of certain probes also depend on physical and chemical properties of the local surroundings. In this manner we increased the number of available spatially localized spectral parameters, which becomes invaluable when investigating complex biological systems that can only be adequately characterized by several independent variables. Applying the developed
NASA Astrophysics Data System (ADS)
Yi, Zheng
Bio-membranes of natural living cells are made of bilayers of phospholipid molecules embedded with other constituents, such as cholesterol and membrane proteins, which help to accomplish a broad range of functions. Vesicles made of lipid bilayers can serve as good model systems for bio-membranes; these systems have therefore been extensively characterized, and much is known about their shape, size, porosity and functionality. In this dissertation we report studies of the effects of phospholipid conformation, such as the hydrocarbon number and the presence of double bonds in the hydrophobic tails, on the dynamics of phospholipid bilayers studied by the neutron spin echo (NSE) technique. We have investigated how lidocaine, the most medically used local anesthetic (LA), influences the structural and dynamical properties of model bio-membranes by small angle neutron scattering (SANS), NSE and differential scanning calorimetry (DSC). To investigate the influence of phospholipid conformation on bio-membranes, the bending elasticities κc of seven saturated and monounsaturated phospholipid bilayers were investigated by NSE spectroscopy. The κc of phosphatidylcholines (PCs) in the liquid crystalline (Lα) phase ranges from 0.38×10^-19 J for 1,2-dimyristoyl-sn-glycero-3-phosphocholine (14:0 PC) to 0.64×10^-19 J for 1,2-dieicosenoyl-sn-glycero-3-phosphocholine (20:1 PC). It was confirmed that when the area modulus KA varies little with chain unsaturation or length, the elastic ratio (κc/KA)^(1/2) of the bilayers varies linearly with the lipid hydrophobic thickness d. For the study of the influence of LA on bio-membranes, SANS measurements were performed on 14:0 PC bilayers with different concentrations of lidocaine to determine the bilayer thickness dL as a function of lidocaine concentration. NSE was used to study the influence of lidocaine on the bending elasticity of 14:0 PC bilayers in the Lα and ripple gel (Pβ′) phases. Our results confirmed that the molecules of
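The reported linear relation between (κc/KA)^(1/2) and the hydrophobic thickness d can be illustrated numerically. In the commonly used polymer-brush picture this relation takes the form κc = KA·d²/24; the KA value below is an assumed typical fluid-phase area modulus, not a value from the dissertation, so the implied thicknesses are only a plausibility check.

```python
import math

# kappa_c values quoted in the abstract (J); K_A is an ASSUMED typical
# fluid-phase area-compressibility modulus, used only for illustration.
K_A = 0.23  # N/m (assumption)
kappa_c = {"14:0 PC": 0.38e-19, "20:1 PC": 0.64e-19}

for lipid, kc in kappa_c.items():
    ratio = math.sqrt(kc / K_A)      # elastic ratio (kappa_c / K_A)^(1/2), m
    d = math.sqrt(24.0 * kc / K_A)   # implied hydrophobic thickness, m
    print(f"{lipid}: (kappa_c/K_A)^0.5 = {ratio:.2e} m, d ~ {d * 1e9:.1f} nm")
```

The implied thicknesses come out in the 2-3 nm range, consistent with the hydrocarbon thickness of fluid PC bilayers, which is what the quoted linear relation expresses.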
57 Fe Mössbauer probe of spin crossover thin films on a bio-membrane
NASA Astrophysics Data System (ADS)
Naik, Anil D.; Garcia, Yann
2012-03-01
The illustrious complex [Fe(ptz)6](BF4)2 (ptz = 1-propyltetrazole) (1), produced in the form of submicron crystals and a thin film on an Allium cepa membrane, was probed by 57Fe Mössbauer spectroscopy in order to follow its intrinsic spin crossover. In addition to a weak signal corresponding to the neat SCO compound, significant amounts of other iron compounds were found that could have morphed from 1 due to specific host-guest interactions on the lipid bilayer of the bio-membrane. Further complementary information about the biogenic role of the membrane was obtained from variable-temperature Mössbauer spectroscopy on a ~5% enriched [57Fe(H2O)6](BF4)2 salt on this membrane.
NASA Astrophysics Data System (ADS)
Akdim, Mohamed Reda
2003-09-01
Nowadays plasmas are used for various applications such as the fabrication of silicon solar cells, integrated circuits, coatings, and dental cleaning. In the case of a processing plasma, e.g. for the fabrication of amorphous silicon solar cells, a mixture of silane and hydrogen gas is injected into a reactor. These gases are decomposed by making a plasma. A plasma with a low degree of ionization (typically 10^-5) is usually made in a reactor containing two electrodes driven by a radio-frequency (RF) power source in the megahertz range. Under the right circumstances the radicals, neutrals and ions can react further to produce nanometer-sized dust particles. The particles can stick to the surface and thereby contribute to a higher deposition rate. Another possibility is that the nanometer-sized particles coagulate and form larger, micron-sized particles. These particles obtain a high negative charge due to their large radius and are usually trapped in a radio-frequency plasma. The electric field present in the discharge sheaths causes the entrapment. Such plasmas are called dusty or complex plasmas. In this thesis numerical models are presented which describe dust in reactive and nonreactive plasmas. We started with the development of a simple one-dimensional silane fluid model in which a dusty radio-frequency silane/hydrogen discharge is simulated. In the model, discharge quantities like the fluxes, densities and electric field are calculated self-consistently. A radius and an initial density profile for the spherical dust particles are given, and the charge and density of the dust are calculated with an iterative method. During the transport of the dust, its charge is kept constant in time. The dust influences the electric field distribution through its charge and the density of the plasma through recombination of positive ions and electrons at its surface. In the model this process gives an extra production of silane radicals, since the growth of dust is
Intelligent biomembranes for nicotine releases by radiation curing
NASA Astrophysics Data System (ADS)
Nakayama, Hiroshi; Kaetsu, Isao; Uchida, Kumao; Oishibashi, Manabu; Matsubara, Yoshio
2003-06-01
The authors have studied stimuli-responsive polyelectrolyte and polyampholyte hydrogels, as well as thermo-responsive copolymer hydrogels. Recently, the authors have applied these hydrogels as radiation-curable intelligent coatings for gating drug-release channels. One such application is the coating of a drug-containing membrane to start and stop drug release by on-off switching of stimulations. Applications to practical intelligent biomembranes, such as a glucose-responsive nicotine-release membrane and a temperature-responsive nicotine-release membrane, were investigated, and their functions, as well as the effects of several factors on the release profiles, were demonstrated.
Modeling complexity in biology
NASA Astrophysics Data System (ADS)
Louzoun, Yoram; Solomon, Sorin; Atlan, Henri; Cohen, Irun. R.
2001-08-01
Biological systems, unlike physical or chemical systems, are characterized by the very inhomogeneous distribution of their components. The immune system, in particular, is notable for self-organizing its structure. Classically, the dynamics of natural systems have been described using differential equations. But, differential equation models fail to account for the emergence of large-scale inhomogeneities and for the influence of inhomogeneity on the overall dynamics of biological systems. Here, we show that a microscopic simulation methodology enables us to model the emergence of large-scale objects and to extend the scope of mathematical modeling in biology. We take a simple example from immunology and illustrate that the methods of classical differential equations and microscopic simulation generate contradictory results. Microscopic simulations generate a more faithful approximation of the reality of the immune system.
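The abstract's point that classical differential equations and microscopic simulation can generate contradictory results is easy to reproduce in a toy birth-death model: the mean-field ODE always predicts growth when the birth rate exceeds the death rate, while individual-based runs starting from a single individual frequently go extinct. The rates and population sizes below are arbitrary illustrative choices, not taken from the paper's immunological example.

```python
import numpy as np

BIRTH, DEATH = 1.0, 0.9   # per-capita rates; ODE predicts growth since b > d

def ode_population(n0=1.0, t_end=40.0, dt=0.01):
    """Mean-field dn/dt = (BIRTH - DEATH) * n, integrated with Euler steps."""
    n = n0
    for _ in range(int(t_end / dt)):
        n += (BIRTH - DEATH) * n * dt
    return n

def stochastic_extinct(rng, n0=1, t_end=40.0):
    """Gillespie run of the same birth-death rules; True if the lineage dies out."""
    n, t = n0, 0.0
    while 0 < n < 10_000 and t < t_end:
        total_rate = (BIRTH + DEATH) * n
        t += rng.exponential(1.0 / total_rate)       # waiting time to next event
        n += 1 if rng.random() < BIRTH / (BIRTH + DEATH) else -1
    return n == 0

rng = np.random.default_rng(42)
runs = 500
extinct_fraction = sum(stochastic_extinct(rng) for _ in range(runs)) / runs

print(f"ODE population at t = 40: {ode_population():.1f}")
print(f"fraction of stochastic runs extinct: {extinct_fraction:.2f}")
```

The deterministic model grows without bound, yet roughly nine in ten microscopic realizations die out (branching-process theory gives an extinction probability of d/b = 0.9 from a single founder), illustrating how discreteness and fluctuations can qualitatively change the outcome.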
Tools for characterizing biomembranes : final LDRD report.
Alam, Todd Michael; Stevens, Mark; Holland, Gregory P.; McIntyre, Sarah K.
2007-10-01
A suite of experimental nuclear magnetic resonance (NMR) spectroscopy tools was developed to investigate lipid structure and dynamics in model membrane systems. By utilizing both multinuclear and multidimensional NMR experiments, a range of different intra- and inter-molecular contacts were probed within the membranes. Examples on pure single-component lipid membranes and on the canonical raft-forming mixture DOPC/SM/Chol are presented. A unique gel-phase pretransition in SM was also identified and characterized using these NMR techniques. In addition, molecular dynamics simulations probing the hydrogen-bonding network unique to sphingomyelin-containing membranes were evaluated as a function of temperature, and are discussed.
The thermodynamics of simple biomembrane mimetic systems
Raudino, Antonio; Sarpietro, Maria Grazia; Pannuzzo, Martina
2011-01-01
Insight into the forces governing a system is essential for understanding its behavior and function. Thermodynamic investigations provide a wealth of information that is not, or is hardly, available from other methods. This article reviews thermodynamic approaches and assays to measure collective properties such as heat absorption/emission and volume variations. These methods can be successfully applied to the study of lipid vesicles (liposomes) and biological membranes. With respect to instrumentation, differential scanning calorimetry, pressure perturbation calorimetry, isothermal titration calorimetry, dilatometry, and acoustic techniques aimed at measuring isothermal and adiabatic two- and three-dimensional compressibilities are considered. Applications of these techniques to lipid systems include the measurement of different thermodynamic parameters and a detailed characterization of thermotropic, barotropic, and lyotropic phase behavior. The membrane binding and/or partitioning of solutes (proteins, peptides, drugs, surfactants, ions, etc.) can also be quantified and modeled. Many thermodynamic assays are available for studying the effect of proteins and other additives on membranes, characterizing non-ideal mixing, domain formation, bilayer stability, curvature strain, permeability, solubilization, and fusion. Studies of membrane proteins in lipid environments elucidate lipid-protein interactions in membranes. Finally, a plethora of relaxation phenomena toward equilibrium thermodynamic structures can also be investigated. The systems are described in terms of enthalpic and entropic forces, equilibrium constants, heat capacities, partial volume changes, and volume and area compressibility, also shedding light on the stability of the structures and the molecular origin and mechanism of the structural changes. PMID:21430953
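As a small numerical companion to the calorimetric methods surveyed above: a DSC excess-heat-capacity peak yields the transition enthalpy by integrating Cp_excess over temperature, and the transition midpoint from the peak position. The Gaussian peak below is synthetic, with arbitrary illustrative parameters; real lipid endotherms are typically sharper and asymmetric.

```python
import numpy as np

T = np.linspace(300.0, 330.0, 601)               # temperature grid, K
Tm_true, dH_true, width = 314.0, 25_000.0, 1.5   # K, J/mol, K (synthetic)
cp_excess = (dH_true / (width * np.sqrt(2.0 * np.pi))
             * np.exp(-0.5 * ((T - Tm_true) / width) ** 2))  # J/(mol K)

# Transition enthalpy = area under the excess heat capacity peak
# (trapezoidal rule written out for portability across NumPy versions)
dH = float(np.sum(0.5 * (cp_excess[1:] + cp_excess[:-1]) * np.diff(T)))
# Transition midpoint = temperature of the peak maximum
Tm = float(T[np.argmax(cp_excess)])

print(f"Delta H = {dH:.0f} J/mol at T_m = {Tm:.1f} K")
```

With a real thermogram the same two steps apply after baseline subtraction; a van't Hoff analysis of the peak shape would additionally give the cooperativity of the transition.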
Field theoretical approach for bio-membrane coupled with flow field
NASA Astrophysics Data System (ADS)
Oya, Y.; Kawakatsu, T.
2013-02-01
Shape deformation of bio-membranes in a flow field is a well-known phenomenon in biological systems, for example red blood cells in blood vessels. To simulate such deformation with a field-theoretical approach, we derived the dynamical equation of a phase field for the membrane shape and coupled it with the Navier-Stokes equation for the flow field. In 2-dimensional simulations, we found that a bio-membrane in a Poiseuille flow takes a parachute shape similar to that of red blood cells.
[Effects of selective extraction on microorganisms on biomembrane in natural water body].
Li, Yu; Chen, Jiejiang; Haiyan, Ma; Hua, Xiuyi; Dong, Deming; Guo, Shuhai
2006-02-01
Using the methods of direct viable count and plate count, this paper studied the effects of different selective extractants on the bacteria, algae and protozoa on the biomembrane in a natural water body. The results indicated that the stronger the extraction ability of the selective extractant, the fewer living microorganisms remained on the biomembrane after extraction. Compared with the control, the percentages of living microorganisms on the biomembrane were 27.6%, 14.1% and 0.01%, respectively, after extraction with hydroxylamine hydrochloride (0.01 mol x L(-1) NH2OH.HCl + 0.01 mol x L(-1) HNO3), sodium dithionite (0.4 mol x L(-1) Na2S2O4, pH 6.0), and acidified ammonium oxalate. Very few bacteria were left after extraction with nitric acid (15% HNO3), and no microorganisms could be detected after extraction with H2O2/HNO3, suggesting that the use of selective extractants affected the activity of the biomembrane. With the decreasing amount of microorganisms on the biomembrane after treatment with selective extractants, the adsorption of heavy metals by the biomembrane was gradually depressed. PMID:16706056
Banthiya, Swathi; Pekárová, Mária; Kuhn, Hartmut; Heydeck, Dagmar
2015-10-15
Pseudomonas aeruginosa (PA) expresses a secreted lipoxygenase (LOX), which oxygenates free arachidonic acid predominantly to 15S-H(p)ETE. The enzyme is capable of binding phospholipids at its active site and physically interacts with model membranes. However, its membrane oxygenase activity has not been quantified. To address this question, we overexpressed PA-LOX as intracellular his-tag fusion protein in Escherichia coli, purified it to electrophoretic homogeneity and compared its biomembrane oxygenase activity with that of rabbit ALOX15. We found that both enzymes were capable of oxygenating mitochondrial membranes to specific oxygenation products and 13S-H(p)ODE and 15S-H(p)ETE esterified to phosphatidylcholine and phosphatidylethanolamine were identified as major oxygenation products. When normalized to similar linoleic acid oxygenase activity, the rabbit enzyme exhibited a much more effective mitochondrial membrane oxygenase activity. In contrast, during long-term incubations (24 h) with red blood cells PA-LOX induced significant (50%) hemolysis whereas rabbit ALOX15 was more or less ineffective. These data indicate the principle capability of PA-LOX of oxygenating membrane bound phospholipids which is likely to alter the barrier function of the biomembranes. Although the membrane oxygenase activity was lower than the fatty acid oxygenase activity of PA-LOX red blood cell membrane oxygenation might be of biological relevance for P. aeruginosa septicemia. PMID:26361973
Complex Networks in Psychological Models
NASA Astrophysics Data System (ADS)
Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.
We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes on a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and to unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns in the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through normal and neurotic behavior to creativity.
Dynamics of biomembranes with active multiple-state inclusions.
Chen, Hsuan-Yi; Mikhailov, Alexander S
2010-03-01
Nonequilibrium dynamics of biomembranes with active multiple-state inclusions is considered. The inclusions represent protein molecules which perform cyclic internal conformational motions driven by the energy brought with adenosine triphosphate (ATP) ligands. As protein conformations cyclically change, this induces hydrodynamical flows and also directly affects the local curvature of a membrane. On the other hand, variations in the local curvature of the membrane modify the transition rates between conformational states in a protein, leading to a feedback in the considered system. Moreover, active inclusions can move diffusively through the membrane so that their surface concentration varies. The kinetic description of this system is constructed and the stability of the uniform stationary state is analytically investigated. We show that, as the rate of supply of chemical energy is increased above a certain threshold, this uniform state becomes unstable and stationary or traveling waves spontaneously develop in the system. Such waves are accompanied by periodic spatial variations of the membrane curvature and the inclusion density. For typical parameter values, their characteristic wavelengths are of the order of hundreds of nanometers. For traveling waves, the characteristic frequency is of the order of a thousand Hz or less. The predicted instabilities are possible only if at least three internal inclusion states are present. PMID:20365764
Morphological and Physical Analysis of Natural Phospholipids-Based Biomembranes
Jacquot, Adrien; Francius, Grégory; Razafitianamaharavo, Angelina; Dehghani, Fariba; Tamayol, Ali; Linder, Michel; Arab-Tehrany, Elmira
2014-01-01
Background Liposomes are currently an important part of biological, pharmaceutical, medical and nutritional research, as they are considered to be among the most effective carriers for the introduction of various types of bioactive agents into target cells. Scope of Review In this work, we study the lipid organization and mechanical properties of biomembranes made of marine and plant phospholipids. Membranes based on phospholipids extracted from rapeseed and salmon are studied in the form of liposomes and as supported lipid bilayers. Dioleylphosphatidylcholine (DOPC) and dipalmitoylphosphatidylcholine (DPPC) are used as references to determine the lipid organization of marine and plant phospholipid-based membranes. Atomic force microscopy (AFM) imaging and force spectroscopy measurements are performed to investigate the membranes' topography at the micrometer scale and to determine their mechanical properties. Major Conclusions The mechanical properties of the membranes are correlated with the fatty acid composition, the morphology, the electrophoretic mobility and the membrane fluidity. Soft and homogeneous mechanical properties are evidenced for salmon phospholipid membranes containing various polyunsaturated fatty acids. In contrast, rapeseed membranes exhibited phase segregation and greater mechanical strength than the marine phospholipid-based membranes. General Significance This paper provides new information on the nanomechanical and morphological properties of membranes in the form of liposomes by AFM. The originality of this work is to characterize the physico-chemical properties of nanoliposomes from natural sources containing various fatty acids and polar head groups. PMID:25238543
Interaction of holothurian triterpene glycoside with biomembranes of mouse immune cells.
Pislyagin, E A; Gladkikh, R V; Kapustina, I I; Kim, N Yu; Shevchenko, V P; Nagaev, I Yu; Avilov, S A; Aminin, D L
2012-09-01
The in vitro interactions between the triterpene glycoside cucumarioside A(2)-2, isolated from the Far-Eastern holothurian Cucumaria japonica, and mouse splenocyte and peritoneal macrophage biomembranes were studied. Multiple experimental approaches were employed, including determination of biomembrane microviscosity, membrane potential and Ca(2+) signaling, and radioligand binding assays. Cucumarioside A(2)-2 exhibited a strong cytotoxic effect in the micromolar concentration range and pronounced immunomodulatory activity in the nanomolar concentration range. It was established that cucumarioside A(2)-2 effectively interacted with immune cells and increased the cellular biomembrane microviscosity. This interaction led to a dose-dependent reversible shift in cellular membrane potential and temporary biomembrane depolarization, and an increase in [Ca(2+)](i) in the cytoplasm. It is suggested that there are at least two binding sites for [(3)H]-cucumarioside A(2)-2 on cellular membranes corresponding to different biomembrane components: a low-affinity site corresponding to membrane cholesterol that is responsible for the cytotoxic properties, and a high-affinity site corresponding to a hypothetical receptor that is responsible for immunostimulation. PMID:22683181
Modeling Wildfire Incident Complexity Dynamics
Thompson, Matthew P.
2013-01-01
Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs, and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding the factors influencing suppression decisions and, in turn, their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management. PMID:23691014
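The expected-time-until-containment computation described above follows from standard absorbing Markov chain theory: collect the transient-to-transient transition probabilities in a matrix Q, and the row sums of the fundamental matrix N = (I - Q)^-1 give expected absorption times. A minimal sketch, with transition probabilities invented for illustration rather than estimated from Forest Service data:

```python
import numpy as np

# Illustrative daily transition matrix over incident complexity levels,
# with "contained" as an absorbing state. Probabilities are hypothetical.
# States: 0 = lower complexity, 1 = higher complexity, 2 = contained.
P = np.array([
    [0.70, 0.10, 0.20],   # lower complexity: may escalate or be contained
    [0.15, 0.75, 0.10],   # higher complexity: containment is slower
    [0.00, 0.00, 1.00],   # contained: absorbing state
])

# Q is the transient-to-transient block; the fundamental matrix
# N = (I - Q)^-1 has row sums equal to expected days until containment.
Q = P[:2, :2]
N = np.linalg.inv(np.eye(2) - Q)
expected_days = N.sum(axis=1)
print(expected_days)  # expected days to containment from each transient state
```

With these numbers, a fire starting at higher complexity takes longer on average to reach containment, as one would expect.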
Hepatocellular biomembrane peroxidation in copper-induced injury
Homer, B.L.
1986-01-01
The pathogenesis of Cu-induced hepatocellular biomembrane peroxidation was studied in male Fischer rats by analyzing hepatic morphologic alterations, measuring the activity of hepatic free radical scavenger enzymes, and determining the distribution of hepatic cytosolic Cu bound to high and low molecular weight proteins. Seventy-five weanling rats were divided into 3 groups of 25 each and injected once daily with either 6.25 mg/kg or 12.5 mg/kg cupric chloride, or 0.2 ml/100 gm saline. Five rats from each group were killed after 3, 14, 28, 42, and 70 consecutive days of injections. The level of malondialdehyde was elevated after 3 days of Cu injections and continued to increase until it peaked in the high-dose group after 28 days and in the low-dose group after 42 days. The density of catalase-containing peroxisomes was reduced in Cu-treated rats, correlating with a reduced activity of hepatic catalase. Catalase activity in Cu-treated rats was reduced after 3 days, and always remained < or = to the activity in control rats. The activity of glutathione peroxidase in high-dose rats always was < or = to the level in control rats, while the activity in control rats always was < or = to the level in low-dose rats. Meanwhile, the activity of superoxide dismutase increased in Cu-treated rats after 28 days. The concentration of cytosolic low molecular weight protein-bound Cu was elevated after 3 days in both Cu-treated groups and continued to increase, leveling off or peaking after 42 days. Regression analysis and in vitro studies, involving the peroxidation of erythrocyte ghost membranes, demonstrated that Cu bound to low molecular weight proteins was less likely to induce lipoperoxidation than copper bound to high molecular weight proteins.
Explosion modelling for complex geometries
NASA Astrophysics Data System (ADS)
Nehzat, Naser
A literature review suggested that the combined effects of fuel reactivity, obstacle density, ignition strength, and confinement result in flame acceleration and subsequent pressure build-up during a vapour cloud explosion (VCE). Models for the prediction of propagating flames in hazardous areas, such as coal mines, oil platforms, storage and chemical process areas, fall into two classes. One class involves the use of Computational Fluid Dynamics (CFD). This approach has been utilised by several researchers. The other approach relies upon a lumped parameter approach as developed by Baker (1983). The former approach is restricted by the appropriateness of sub-models and the numerical stability requirements inherent in the computational solution. The latter approach raises significant questions regarding the validity of the simplification involved in representing the complexities of a propagating explosion. This study was conducted to investigate and improve the Computational Fluid Dynamics (CFD) code EXPLODE, which has been developed by Green et al. (1993) for use on practical gas explosion hazard assessments. The code employs a numerical method for solving partial differential equations using finite volume techniques. Verification exercises, involving comparison with analytical solutions for the classical shock-tube and with experimental (small-scale, medium and large-scale) results, demonstrate the accuracy of the code and the new combustion models but also identify differences between predictions and the experimental results. The project has resulted in a developed version of the code (EXPLODE2) with new combustion models for simulating gas explosions. Additional features of this program include the physical models necessary to simulate the combustion process using alternative combustion models, improvements to the numerical accuracy and robustness of the code, and special input for simulation of different gas explosions. The present code has the capability of
A physical interpretation of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Moayeri, MohamadMehdi; Pande, Saket
2015-04-01
It is intuitive that the instability of a hydrological system representation, in the sense of how perturbations in input forcings translate into perturbations in the hydrologic response, may depend on its hydrological characteristics. Responses of unstable systems are thus complex to model. We interpret complexity in this context and define complexity as a measure of instability in hydrological system representation. We provide algorithms to quantify model complexity in this context. We use the Sacramento soil moisture accounting model (SAC-SMA) parameterized for MOPEX basins and quantify the complexities of the corresponding models. Relationships between hydrologic characteristics of MOPEX basins, such as location, precipitation seasonality index, slope, hydrologic ratios, saturated hydraulic conductivity and NDVI, and the respective model complexities are then investigated. We hypothesize that the complexities of basin-specific SAC-SMA models correspond to the aforementioned hydrologic characteristics, thereby suggesting that model complexity, in the context presented here, may have a physical interpretation.
Teacher Modeling Using Complex Informational Texts
ERIC Educational Resources Information Center
Fisher, Douglas; Frey, Nancy
2015-01-01
Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word-solving and comprehension strategies. Included is a planning guide for think-alouds.
Formation of Biomembrane Microarrays with a Squeegee-based Assembly Method
Wittenberg, Nathan J.; Johnson, Timothy W.; Jordan, Luke R.; Xu, Xiaohua; Warrington, Arthur E.; Rodriguez, Moses; Oh, Sang-Hyun
2014-01-01
Lipid bilayer membranes form the plasma membranes of cells and define the boundaries of subcellular organelles. In nature, these membranes are heterogeneous mixtures of many types of lipids, contain membrane-bound proteins and are decorated with carbohydrates. In some experiments, it is desirable to decouple the biophysical or biochemical properties of the lipid bilayer from those of the natural membrane. Such cases call for the use of model systems such as giant vesicles, liposomes or supported lipid bilayers (SLBs). Arrays of SLBs are particularly attractive for sensing applications and mimicking cell-cell interactions. Here we describe a new method for forming SLB arrays. Submicron-diameter SiO2 beads are first coated with lipid bilayers to form spherical SLBs (SSLBs). The beads are then deposited into an array of micro-fabricated submicron-diameter microwells. The preparation technique uses a "squeegee" to clean the substrate surface, while leaving behind SSLBs that have settled into microwells. This method requires no chemical modification of the microwell substrate, nor any particular targeting ligands on the SSLB. Microwells are occupied by single beads because the well diameter is tuned to be just larger than the bead diameter. Typically, more than 75% of the wells are occupied, while the rest remain empty. In buffer, SSLB arrays display long-term stability of greater than one week. Multiple types of SSLBs can be placed in a single array by serial deposition, and the arrays can be used for sensing, which we demonstrate by characterizing the interaction of cholera toxin with ganglioside GM1. We also show that phospholipid vesicles without the bead supports and biomembranes from cellular sources can be arrayed with the same method, and cell-specific membrane lipids can be identified. PMID:24837169
"Computational Modeling of Actinide Complexes"
Balasubramanian, K
2007-03-07
We will present our recent studies on computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the results of free energies in solution and the mechanism of deprotonation have been a topic of considerable uncertainty. We have computed deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO{sub 2}(H{sub 2}O){sub 5}{sup 2+}, NpO{sub 2}(H{sub 2}O){sub 6}{sup +}, and PuO{sub 2}(H{sub 2}O){sub 5}{sup 2+} complexes. Our computed Gibbs free energy (7.27 kcal/m) in solution for the first time agrees with the experiment (7.1 kcal
Capturing Complexity through Maturity Modelling
ERIC Educational Resources Information Center
Underwood, Jean; Dillon, Gayle
2004-01-01
The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…
Does increased hydrochemical model complexity decrease robustness?
NASA Astrophysics Data System (ADS)
Medici, C.; Wade, A. J.; Francés, F.
2012-05-01
The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained considering a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods of inconsistent good and poor fits between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases, over-parameterisation occurs, but the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models. Increased complexity was justifiable for modelling river-system hydrochemistry.
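The GLUE procedure mentioned above can be summarized as: sample parameter sets by Monte Carlo, score each simulation with an informal likelihood measure, retain the "behavioural" runs above a threshold, and form likelihood-weighted 5%/95% bounds. A schematic sketch follows, in which the toy model y = a·x and the likelihood measure 1/(1 + SSE) are placeholders for an actual hydrochemical model and likelihood choice:

```python
import random

# Schematic GLUE sketch (illustrative only): the "model" y = a*x and the
# informal likelihood are stand-ins for a real hydrochemical model.
def glue_bounds(obs_x, obs_y, n_samples=5000, threshold=0.5, seed=1):
    rng = random.Random(seed)
    behavioural = []
    for _ in range(n_samples):
        a = rng.uniform(0.0, 4.0)                  # Monte Carlo parameter sample
        sim = [a * x for x in obs_x]
        sse = sum((s - o) ** 2 for s, o in zip(sim, obs_y))
        like = 1.0 / (1.0 + sse)                   # informal likelihood measure
        if like > threshold:                       # keep behavioural runs only
            behavioural.append((like, a))
    # Likelihood-weighted 5%/95% quantiles of the retained parameter values.
    behavioural.sort(key=lambda t: t[1])
    total = sum(l for l, _ in behavioural)
    cum, lo, hi = 0.0, None, None
    for l, a in behavioural:
        cum += l / total
        if lo is None and cum >= 0.05:
            lo = a
        if hi is None and cum >= 0.95:
            hi = a
    return lo, hi

# Synthetic observations generated by a "true" slope of about 2.
lo, hi = glue_bounds([1, 2, 3], [2.0, 4.1, 5.9])
```

In a real application the quantiles would be taken over simulated outputs at each time step, giving the uncertainty bands on the streamwater series rather than on a single parameter.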
Complexity and Uncertainty in Soil Nitrogen Modeling
NASA Astrophysics Data System (ADS)
Ajami, N. K.; Gu, C.
2009-12-01
Model uncertainty is rarely considered in the field of biogeochemical modeling. The standard biogeochemical modeling approach is to proceed with one selected model whose complexity level is deemed "right" based on data availability. However, other plausible models can give dissimilar answers to the scientific question at hand using the same set of data. Relying on a single model can lead to underestimation of the uncertainty associated with the results and therefore to unreliable conclusions. A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models with multiple levels of complexity. The aim of this study is twofold: first, to explore the impact of a model's complexity level on the accuracy of the end results, and second, to introduce a probabilistic multi-model strategy in the context of a process-based biogeochemical model. We developed three different versions of a biogeochemical model, TOUGHREACT-N, with various complexity levels. Each of these models was calibrated against the observed data from a tomato field in Western Sacramento County, California, and considered two different weighting sets on the objective function. This way we created a set of six ensemble members. The Bayesian Model Averaging (BMA) approach was then used to combine these ensemble members by the likelihood that an individual model is correct given the observations. The results clearly indicate the need to consider a multi-model ensemble strategy over a single model selection in biogeochemical modeling.
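The BMA combination step weights each ensemble member by its likelihood given the observations and averages the member predictions with those weights. A minimal sketch, in which the log-likelihood scores and member predictions are invented for illustration and are not taken from the TOUGHREACT-N study:

```python
import math

# Hypothetical log-likelihood score for each ensemble member, as would be
# obtained from calibration against observations.
log_likelihoods = [-120.4, -118.9, -125.1]

# Subtract the maximum before exponentiating for numerical stability, then
# normalize: under equal model priors, w_k is proportional to exp(log L_k).
m = max(log_likelihoods)
raw = [math.exp(ll - m) for ll in log_likelihoods]
weights = [r / sum(raw) for r in raw]

# The BMA prediction is the likelihood-weighted average of member outputs.
predictions = [3.1, 2.8, 3.4]   # hypothetical member predictions
bma_prediction = sum(w * p for w, p in zip(weights, predictions))
```

Note how the member with the highest likelihood dominates the average while the clearly inferior member contributes almost nothing, which is the sense in which BMA discounts implausible complexity levels.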
Fock spaces for modeling macromolecular complexes
NASA Astrophysics Data System (ADS)
Kinney, Justin
Large macromolecular complexes play a fundamental role in how cells function. Here I describe a Fock space formalism for mathematically modeling these complexes. Specifically, this formalism allows ensembles of complexes to be defined in terms of elementary molecular "building blocks" and "assembly rules." Such definitions avoid the massive redundancy inherent in standard representations, in which all possible complexes are manually enumerated. Methods for systematically computing ensembles of complexes from a list of components and interaction rules are described. I also show how this formalism readily accommodates coarse-graining. Finally, I introduce diagrammatic techniques that greatly facilitate the application of this formalism to both equilibrium and non-equilibrium biochemical systems.
Molecular simulation and modeling of complex I.
Hummer, Gerhard; Wikström, Mårten
2016-07-01
Molecular modeling and molecular dynamics simulations play an important role in the functional characterization of complex I. With its large size and complicated function, linking quinone reduction to proton pumping across a membrane, complex I poses unique modeling challenges. Nonetheless, simulations have already helped in the identification of possible proton transfer pathways. Simulations have also shed light on the coupling between electron and proton transfer, thus pointing the way in the search for the mechanistic principles underlying the proton pump. In addition to reviewing what has already been achieved in complex I modeling, we aim here to identify pressing issues and to provide guidance for future research to harness the power of modeling in the functional characterization of complex I. This article is part of a Special Issue entitled Respiratory complex I, edited by Volker Zickermann and Ulrich Brandt. PMID:26780586
Selecting model complexity in learning problems
Buescher, K.L.; Kumar, P.R.
1993-10-01
To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of "simple" candidate models using part of the data and then selects from among these by using the rest of the data.
Scaffolding in Complex Modelling Situations
ERIC Educational Resources Information Center
Stender, Peter; Kaiser, Gabriele
2015-01-01
The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…
Role models for complex networks
NASA Astrophysics Data System (ADS)
Reichardt, J.; White, D. R.
2007-11-01
We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
Agent-based modeling of complex infrastructures
North, M. J.
2001-06-01
Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.
Numerical models of complex diapirs
NASA Astrophysics Data System (ADS)
Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.
1993-12-01
Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which a diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, moderate erosion rate to "wine glass" diapirs and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat". These spread lateral sheets.
Complex system modelling for veterinary epidemiology.
Lanzas, Cristina; Chen, Shi
2015-02-01
The use of mathematical models has a long tradition in infectious disease epidemiology. The nonlinear dynamics and complexity of pathogen transmission pose challenges in understanding its key determinants, in identifying critical points, and designing effective mitigation strategies. Mathematical modelling provides tools to explicitly represent the variability, interconnectedness, and complexity of systems, and has contributed to numerous insights and theoretical advances in disease transmission, as well as to changes in public policy, health practice, and management. In recent years, our modelling toolbox has considerably expanded due to the advancements in computing power and the need to model novel data generated by technologies such as proximity loggers and global positioning systems. In this review, we discuss the principles, advantages, and challenges associated with the most recent modelling approaches used in systems science, the interdisciplinary study of complex systems, including agent-based, network and compartmental modelling. Agent-based modelling is a powerful simulation technique that considers the individual behaviours of system components by defining a set of rules that govern how individuals ("agents") within given populations interact with one another and the environment. Agent-based models have become a recent popular choice in epidemiology to model hierarchical systems and address complex spatio-temporal dynamics because of their ability to integrate multiple scales and datasets. PMID:25449734
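The agent-based approach described above, defining rules for how individual agents interact with one another and the environment, can be sketched with a minimal transmission model. This is an illustrative toy, not any model from the review; all parameters are invented:

```python
import random

# Minimal agent-based sketch of disease transmission: each agent is
# susceptible (S), infected (I), or recovered (R). A simple contact rule
# governs transmission; beta and gamma are hypothetical rates.
def step(states, beta, gamma, rng):
    new = list(states)
    n_inf = states.count("I")
    for i, s in enumerate(states):
        if s == "S" and rng.random() < beta * n_inf / len(states):
            new[i] = "I"        # infection through random mixing
        elif s == "I" and rng.random() < gamma:
            new[i] = "R"        # recovery
    return new

rng = random.Random(0)
states = ["I"] * 5 + ["S"] * 95      # 5 initial infections in 100 agents
for _ in range(100):
    states = step(states, beta=0.4, gamma=0.1, rng=rng)
n_recovered = states.count("R")
```

Replacing the uniform-mixing contact rule with a contact network, or giving agents positions from GPS or proximity-logger data, yields the network and spatially explicit variants discussed in the review.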
Modelling Canopy Flows over Complex Terrain
NASA Astrophysics Data System (ADS)
Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.
2016-06-01
Recent studies of flow over forested hills have been motivated by a number of important applications including understanding CO_2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland with complex terrain and heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.
Building phenomenological models of complex biological processes
NASA Astrophysics Data System (ADS)
Daniels, Bryan; Nemenman, Ilya
2009-11-01
A central goal of any modeling effort is to make predictions regarding experimental conditions that have not yet been observed. Overly simple models will not be able to fit the original data well, but overly complex models are likely to overfit the data and thus produce bad predictions. Modern quantitative biology modeling efforts often err on the complexity side of this balance, using myriads of microscopic biochemical reaction processes with a priori unknown kinetic parameters to model relatively simple biological phenomena. In this work, we show how Bayesian model selection (which is mathematically similar to a low-temperature expansion in statistical physics) can be used to build coarse-grained, phenomenological models of complex dynamical biological processes, which have better predictive power than microscopically correct, but poorly constrained, mechanistic molecular models. We illustrate this on the example of a multiply-modifiable protein molecule, which is a simplified description of multiple biological systems, such as immune receptors and the RNA polymerase complex. Our approach is similar in spirit to the phenomenological Landau expansion for the free energy in the theory of critical phenomena.
SUMMARY OF COMPLEX TERRAIN MODEL EVALUATION
The Environmental Protection Agency conducted a scientific review of a set of eight complex terrain dispersion models. TRC Environmental Consultants, Inc. calculated and tabulated a uniform set of performance statistics for the models using the Cinder Cone Butte and Westvaco Luke...
Explicit stress integration of complex soil models
NASA Astrophysics Data System (ADS)
Zhao, Jidong; Sheng, Daichao; Rouainia, M.; Sloan, Scott W.
2005-10-01
In this paper, two complex critical-state models are implemented in a displacement finite element code. The two models are used for structured clays and sands, and are characterized by multiple yield surfaces, plastic yielding within the yield surface, and complex kinematic and isotropic hardening laws. The consistent tangent operators - which lead to quadratic convergence when used in a fully implicit algorithm - are difficult to derive or may not even exist. The stress integration scheme used in this paper is based on the explicit Euler method with automatic substepping and error control. This scheme employs the classical elastoplastic stiffness matrix and requires only the first derivatives of the yield function and plastic potential. This explicit scheme is used to integrate the two complex critical-state models - the sub/super-loading surfaces model (SSLSM) and the kinematic hardening structure model (KHSM). Various boundary-value problems are then analysed. The results for the two models are compared with each other, as well as with those from standard Cam-clay models. The accuracy and efficiency of the scheme applied to the complex models are also investigated.
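A minimal sketch of the explicit substepping idea, assuming a toy one-component stress-strain law rather than either critical-state model (the stiffness function, tolerance, and step-control constants below are illustrative only):

```python
import numpy as np

def integrate_explicit(dsig_deps, sig0, deps, tol=1e-5):
    """Explicit modified-Euler stress integration with automatic substepping
    and error control (in the spirit of Sloan-type schemes): estimate the
    local error from the spread between the two Euler slopes, reject and
    shrink the substep if it exceeds `tol`, otherwise accept and grow it."""
    sig = np.asarray(sig0, dtype=float).copy()
    T, dT = 0.0, 1.0                      # pseudo-time over the strain increment
    while T < 1.0:
        dT = min(dT, 1.0 - T)
        k1 = dsig_deps(sig) * (dT * deps)
        k2 = dsig_deps(sig + k1) * (dT * deps)
        err = 0.5 * np.linalg.norm(k2 - k1) / max(np.linalg.norm(sig + k1), 1.0)
        if err > tol:                     # reject: shrink the substep and retry
            dT *= max(0.9 * np.sqrt(tol / err), 0.1)
            continue
        sig += 0.5 * (k1 + k2)            # accept the second-order update
        T += dT
        dT *= min(0.9 * np.sqrt(tol / max(err, 1e-16)), 2.0)
    return sig

# Hypothetical one-component law (not SSLSM/KHSM): tangent stiffness that
# softens exponentially with stress, dsigma/deps = E0 * exp(-sigma / s0).
# The exact response is sigma = s0 * log(1 + E0 * eps / s0).
E0, s0 = 100.0, 50.0
sig = integrate_explicit(lambda s: E0 * np.exp(-s / s0), [0.0], 1.0)
```

Only first derivatives of the constitutive law are needed, which is the practical advantage the abstract highlights over implicit schemes.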
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-01-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
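The central bookkeeping idea can be caricatured in a few lines: represent each component as a vector of structural-group counts and let reaction rules act on those vectors, so a network can be grown mechanically from a seed molecule. The three-group basis and the single "dealkylation" rule below are invented simplifications, not the actual structure-oriented lumping group set:

```python
# Hypothetical three-group basis, far simpler than the real structure-oriented
# lumping vector: (aromatic_rings, alkyl_CH2_groups, methyl_groups).
def name(m):
    rings, ch2, me = m
    return f"A{rings}R{ch2}me{me}"

def dealkylate(m):
    """Toy reaction rule: remove one -CH2- from the side chain.
    Returns the product vector, or None when the rule no longer applies."""
    rings, ch2, me = m
    return (rings, ch2 - 1, me) if ch2 > 0 else None

# Grow a reaction network mechanically from a single seed component.
seed = (1, 4, 1)                         # a pentyl-benzene-like construct
network, frontier = [], [seed]
while frontier:
    m = frontier.pop()
    p = dealkylate(m)
    if p is not None:
        network.append((name(m), name(p)))   # one specific reaction
        frontier.append(p)
# network now holds the chain A1R4me1 -> A1R3me1 -> ... -> A1R0me1
```

With more group types and more rules, the same frontier loop is what makes "tens of thousands of specific reactions" tractable to generate automatically.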
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed by borrowing an economic concept, the Law of Diminishing Returns, illustrated by the enjoyment derived from eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). In this way, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Balancing model complexity and measurements in hydrology
NASA Astrophysics Data System (ADS)
Van De Giesen, N.; Schoups, G.; Weijs, S. V.
2012-12-01
The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, it was quickly realized that by increasing model complexity one could fit basically any dataset, but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for this lack of interest, the more valid of which will be presented here. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model
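The inequality I(X;Z(Y)) ≤ I(X;Y) is easy to check numerically. In the hedged sketch below, X is a hidden state, Y a noisy observation, and Z a deterministic coarsening of Y standing in for a "model"; because Z is a function of Y, the empirical mutual informations must obey the Data Processing Inequality exactly (all distributions here are synthetic, not hydrological data):

```python
import numpy as np
from collections import Counter

def mutual_info(a, b):
    """Empirical mutual information (in nats) of two discrete sequences."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum(c / n * np.log((c / n) / ((pa[u] / n) * (pb[v] / n)))
               for (u, v), c in pab.items())

rng = np.random.default_rng(0)
x = rng.integers(0, 4, 5000)                # hidden state
y = (x + rng.integers(0, 2, 5000)) % 4      # noisy observation of the state
z = y // 2                                  # lossy deterministic "model" of y

# Data Processing Inequality: processing y cannot add information about x.
assert mutual_info(x, z) <= mutual_info(x, y) + 1e-9
```

Any further processing of the observations, however sophisticated, can at best preserve the information they carry about the underlying state.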
Multifaceted Modelling of Complex Business Enterprises.
Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David
2015-01-01
We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, management planning data, expert knowledge, and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591
Slip complexity in earthquake fault models.
Rice, J R; Ben-Zion, Y
1996-01-01
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size. PMID:11607669
Minimum-complexity helicopter simulation math model
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
Constructing minimal models for complex system dynamics
NASA Astrophysics Data System (ADS)
Barzel, Baruch; Liu, Yang-Yu; Barabási, Albert-László
2015-05-01
One of the strengths of statistical physics is the ability to reduce macroscopic observations into microscopic models, offering a mechanistic description of a system's dynamics. This paradigm, rooted in Boltzmann's gas theory, has found applications from magnetic phenomena to subcellular processes and epidemic spreading. Yet, each of these advances was the result of decades of meticulous model building and validation, which is impossible to replicate in most complex biological, social or technological systems that lack accurate microscopic models. Here we develop a method to infer the microscopic dynamics of a complex system from observations of its response to external perturbations, allowing us to construct the most general class of nonlinear pairwise dynamics that are guaranteed to recover the observed behaviour. The result, which we test against both numerical and empirical data, is an effective dynamic model that can predict the system's behaviour and provide crucial insights into its inner workings.
Modeling acuity for optotypes varying in complexity.
Watson, Andrew B; Ahumada, Albert J
2012-01-01
Watson and Ahumada (2008) described a template model of visual acuity based on an ideal-observer limited by optical filtering, neural filtering, and noise. They computed predictions for selected optotypes and optical aberrations. Here we compare this model's predictions to acuity data for six human observers, each viewing seven different optotype sets, consisting of one set of Sloan letters and six sets of Chinese characters, differing in complexity (Zhang, Zhang, Xue, Liu, & Yu, 2007). Since optical aberrations for the six observers were unknown, we constructed 200 model observers using aberrations collected from 200 normal human eyes (Thibos, Hong, Bradley, & Cheng, 2002). For each condition (observer, optotype set, model observer) we estimated the model noise required to match the data. Expressed as efficiency, performance for Chinese characters was 1.4 to 2.7 times lower than for Sloan letters. Efficiency was weakly and inversely related to perimetric complexity of optotype set. We also compared confusion matrices for human and model observers. Correlations for off-diagonal elements ranged from 0.5 to 0.8 for different sets, and the average correlation for the template model was superior to that of a geometrical moment model with a comparable number of parameters (Liu, Klein, Xue, Zhang, & Yu, 2009). The template model performed well overall. Estimated psychometric function slopes matched the data, and noise estimates agreed roughly with those obtained independently from contrast sensitivity to Gabor targets. For optotypes of low complexity, the model accurately predicted relative performance. This suggests the model may be used to compare acuities measured with different sets of simple optotypes. PMID:23024356
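A stripped-down version of such a template observer (with made-up 5x5 glyphs in place of Sloan letters or Chinese characters, and no optical or neural filtering) shows the basic machinery: classification by maximum likelihood against noise-free templates, with performance falling as noise grows:

```python
import numpy as np

rng = np.random.default_rng(4)
templates = rng.integers(0, 2, (10, 25)).astype(float)   # ten 5x5 binary glyphs

def percent_correct(noise_sd, trials=2000):
    """Template-matching observer: maximum-likelihood choice among the
    noise-free templates under additive Gaussian pixel noise."""
    hits = 0
    bias = 0.5 * (templates ** 2).sum(axis=1)    # the -||t||^2/2 term of the ML rule
    for _ in range(trials):
        k = rng.integers(0, 10)                  # true optotype
        stim = templates[k] + rng.normal(0, noise_sd, 25)
        hits += np.argmax(templates @ stim - bias) == k
    return hits / trials

pc_low_noise = percent_correct(0.3)    # near-ceiling performance expected
pc_high_noise = percent_correct(2.0)   # performance degraded by noise
```

In the full model the noise standard deviation is the free parameter fitted per observer and optotype set, and efficiency comparisons follow from those fits.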
The Kuramoto model in complex networks
NASA Astrophysics Data System (ADS)
Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen
2016-01-01
Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to reviewing the main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
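For concreteness, a minimal numerical sketch of the Kuramoto model on a network: Euler integration of dθ_i/dt = ω_i + K Σ_j A_ij sin(θ_j - θ_i) on a random graph, comparing the coherence r below and above synchronization. The graph density, frequency spread, and coupling values are arbitrary choices for the demonstration:

```python
import numpy as np

def order_parameter(K, A, omega, theta0, dt=0.01, steps=4000):
    """Euler-integrate dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)
    and return the final coherence r = |mean(exp(i*theta))|."""
    theta = theta0.copy()
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling)
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(2)
n = 50
A = (rng.random((n, n)) < 0.2).astype(float)      # Erdos-Renyi-style adjacency
A = np.triu(A, 1)
A = A + A.T                                       # symmetric, no self-loops
omega = rng.normal(0.0, 0.5, n)                   # natural frequencies
theta0 = rng.uniform(0.0, 2.0 * np.pi, n)

r_weak = order_parameter(0.01, A, omega, theta0)    # below threshold: incoherent
r_strong = order_parameter(0.5, A, omega, theta0)   # above threshold: synchronized
```

Replacing A with other topologies (scale-free, modular, and so on) is exactly the kind of experiment the review surveys analytically and numerically.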
Modelling biological complexity: a physical scientist's perspective
Coveney, Peter V; Fowler, Philip W
2005-01-01
We discuss the modern approaches of complexity and self-organization to understanding dynamical systems and how these concepts can inform current interest in systems biology. From the perspective of a physical scientist, it is especially interesting to examine how the differing weights given to philosophies of science in the physical and biological sciences impact the application of the study of complexity. We briefly describe how the dynamics of the heart and circadian rhythms, canonical examples of systems biology, are modelled by sets of nonlinear coupled differential equations, which have to be solved numerically. A major difficulty with this approach is that all the parameters within these equations are not usually known. Coupled models that include biomolecular detail could help solve this problem. Coupling models across large ranges of length- and time-scales is central to describing complex systems and therefore to biology. Such coupling may be performed in at least two different ways, which we refer to as hierarchical and hybrid multiscale modelling. While limited progress has been made in the former case, the latter is only beginning to be addressed systematically. These modelling methods are expected to bring numerous benefits to biology, for example, the properties of a system could be studied over a wider range of length- and time-scales, a key aim of systems biology. Multiscale models couple behaviour at the molecular biological level to that at the cellular level, thereby providing a route for calculating many unknown parameters as well as investigating the effects at, for example, the cellular level, of small changes at the biomolecular level, such as a genetic mutation or the presence of a drug. The modelling and simulation of biomolecular systems is itself very computationally intensive; we describe a recently developed hybrid continuum-molecular model, HybridMD, and its associated molecular insertion algorithm, which point the way towards the
Intrinsic curvature hypothesis for biomembrane lipid composition: a role for nonbilayer lipids.
Gruner, S M
1985-01-01
A rationale is presented for the mix of "bilayer" and "nonbilayer" lipids, which occurs in biomembranes. A theory for the Lα-HII phase transition and experimental tests of the theory are reviewed. It is suggested that the phase behavior is largely the result of a competition between the tendency for certain lipid monolayers to curl and the hydrocarbon packing strains that result. The tendency to curl is quantitatively given by the intrinsic radius of curvature, Ro, which minimizes the bending energy of a lipid monolayer. When bilayer (large Ro) and nonbilayer (small Ro) lipids are properly mixed, the resulting layer has a value of Ro that is at the critical edge of bilayer stability. In this case, bilayers may be destabilized by the protein-mediated introduction of hydrophobic molecules, such as dolichol. An x-ray diffraction investigation of the effect of dolichol on such a lipid mixture is described. This leads to the hypothesis that biomembranes homeostatically adjust their intrinsic curvatures to fall into an optimum range. Experimental strategies for testing the hypothesis are outlined. PMID:3858841
How useful are complex flood damage models?
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno
2014-04-01
We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured in terms of the number of explanatory variables, which varies from 1 variable up to 10 variables singled out from 28 candidate variables. Model validation is based on empirical damage data, with observation uncertainty taken into consideration. The comparison of model predictive performance shows that additional explanatory variables besides the water depth improve the predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, the reliability of Bayesian network-based predictions in space-time transfer is larger than for the remaining models, and the uncertainties associated with damage predictions are reflected more completely.
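The spatial-temporal transfer test can be mimicked on synthetic data: fit damage models of different complexity on one "event" and score them on another. The data-generating process below (water depth plus a building-quality covariate) is invented for illustration, not the German damage data used in the study:

```python
import numpy as np

rng = np.random.default_rng(3)

def synthetic_event(n):
    """Invented data-generating process: relative damage in [0, 1] driven by
    water depth (m) and a building-quality covariate."""
    depth = rng.uniform(0.0, 3.0, n)
    quality = rng.uniform(0.0, 1.0, n)
    damage = np.clip(0.25 * depth - 0.3 * quality + rng.normal(0, 0.05, n), 0, 1)
    return np.column_stack([depth, quality]), damage

def fit_and_predict(Xtr, ytr, Xte, cols):
    """Least-squares linear damage model using the selected explanatory columns."""
    A = np.column_stack([Xtr[:, cols], np.ones(len(Xtr))])
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.column_stack([Xte[:, cols], np.ones(len(Xte))]) @ coef

Xtr, ytr = synthetic_event(500)     # "calibration" flood event
Xte, yte = synthetic_event(500)     # "transfer" flood event
mae_depth_only = np.mean(np.abs(fit_and_predict(Xtr, ytr, Xte, [0]) - yte))
mae_two_vars = np.mean(np.abs(fit_and_predict(Xtr, ytr, Xte, [0, 1]) - yte))
# The extra explanatory variable should improve transfer accuracy here.
```

When the generating process genuinely depends on a second variable, the depth-only model carries an irreducible transfer error, which parallels the paper's finding that variables beyond water depth pay off.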
Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Reddy, C. J.
2011-01-01
This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.
Synthetic seismograms for a complex crustal model
NASA Astrophysics Data System (ADS)
Sandmeier, K.-J.; Wenzel, F.
1986-01-01
The algorithm of the original Reflectivity Method has been vectorized and implemented on a CDC CYBER 205 computer. Calculation times are shortened by a factor of 20 to 30 compared with a general purpose computer with a capacity of several million floating point operations per second (MFLOP). The rapid calculation of synthetic seismograms for complex models, high frequency sources and all offset ranges is a prerequisite for modeling not only particular phases but the whole observed wavefield. As an example we model refraction data of the Black Forest, Southwest Germany, and are able to derive rather tight constraints on the physical properties of the lower crust.
Dual-resolution molecular dynamics simulation of antimicrobials in biomembranes
Orsi, Mario; Noro, Massimo G.; Essex, Jonathan W.
2011-01-01
Triclocarban and triclosan, two potent antibacterial molecules present in many consumer products, have been subject to growing debate on a number of issues, particularly in relation to their possible role in causing microbial resistance. In this computational study, we present molecular-level insights into the interaction between these antimicrobial agents and hydrated phospholipid bilayers (taken as a simple model for the cell membrane). Simulations are conducted by a novel ‘dual-resolution’ molecular dynamics approach which combines accuracy with efficiency: the antimicrobials, modelled atomistically, are mixed with simplified (coarse-grain) models of lipids and water. A first set of calculations is run to study the antimicrobials' transfer free energies and orientations as a function of depth inside the membrane. Both molecules are predicted to preferentially accumulate in the lipid headgroup–glycerol region; this finding, which reproduces corresponding experimental data, is also discussed in terms of a general relation between solute partitioning and the intramembrane distribution of pressure. A second set of runs involves membranes incorporated with different molar concentrations of antimicrobial molecules (up to one antimicrobial per two lipids). We study the effects induced on fundamental membrane properties, such as the electron density, lateral pressure and electrical potential profiles. In particular, the analysis of the spontaneous curvature indicates that increasing antimicrobial concentrations promote a ‘destabilizing’ tendency towards non-bilayer phases, as observed experimentally. The antimicrobials' influence on the self-assembly process is also investigated. The significance of our results in the context of current theories of antimicrobial action is discussed. PMID:21131331
Human driven transitions in complex model ecosystems
NASA Astrophysics Data System (ADS)
Harfoot, Mike; Newbold, Tim; Tittinsor, Derek; Purves, Drew
2015-04-01
Human activities have been observed to be impacting ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying the functioning of ecosystems, represent only a small proportion of organisms and do not adequately capture complex non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate the boundaries of complex ecosystems under human perturbation. We explore several drivers including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model., PLoS Biol. 12, e1001841 (2014).
A Practical Philosophy of Complex Climate Modelling
NASA Technical Reports Server (NTRS)
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
Intrinsic Uncertainties in Modeling Complex Systems.
Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.
2014-09-01
Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
Simulating Complex Modulated Phases Through Spin Models
NASA Astrophysics Data System (ADS)
Selinger, Jonathan V.; Lopatina, Lena M.; Geng, Jun; Selinger, Robin L. B.
2009-03-01
We extend the computational approach for studying striped phases on curved surfaces, presented in the previous talk, to two new problems involving complex modulated phases. First, we simulate a smectic liquid crystal on an arbitrary mesh by mapping the director field onto a vector spin and the density wave onto an Ising spin. We can thereby determine how the smectic phase responds to any geometrical constraints, including hybrid boundary conditions, patterned substrates, and disordered substrates. This method may provide a useful tool for designing ferroelectric liquid crystal cells. Second, we explore a model of vector spins on a flat two-dimensional (2D) lattice with long-range antiferromagnetic interactions. This model generates modulated phases with surprisingly complex structures, including 1D stripes and 2D periodic cells, which are independent of the underlying lattice. We speculate on the physical significance of these structures.
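As an illustrative caricature of how long-range antiferromagnetic interactions favour modulated order (not the authors' simulation, which uses vector spins on arbitrary 2D meshes; the 1D Ising chain, couplings, and exponent here are all assumed for demonstration):

```python
def energy(spins, j_nn=1.0, g=2.0, alpha=2.0):
    # ferromagnetic nearest-neighbour term plus a long-range antiferromagnetic tail
    n = len(spins)
    e = -j_nn * sum(spins[i] * spins[i + 1] for i in range(n - 1))
    e += g * sum(spins[i] * spins[j] / (j - i) ** alpha
                 for i in range(n) for j in range(i + 1, n))
    return e

uniform = [1] * 20                       # unmodulated state
alternating = [(-1) ** i for i in range(20)]  # simplest modulated state
```

With these (hypothetical) parameters the competition between the two terms already makes the modulated configuration energetically preferable to the uniform one, independent of the lattice details.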
Industrial Source Complex (ISC) dispersion model. Software
Schewe, G.; Sieurin, E.
1980-01-01
The model updates various EPA dispersion model algorithms and combines them in two computer programs that can be used to assess the air quality impact of emissions from the wide variety of source types associated with an industrial source complex. The ISC Model short-term program ISCST, an updated version of the EPA Single Source (CRSTER) Model uses sequential hourly meteorological data to calculate values of average concentration or total dry deposition for time periods of 1, 2, 3, 4, 6, 8, 12 and 24 hours. Additionally, ISCST may be used to calculate 'N' is 366 days. The ISC Model long-term computer program ISCLT, a sector-averaged model that updates and combines basic features of the EPA Air Quality Display Model (AQDM) and the EPA Climatological Dispersion Model (CDM), uses STAR Summaries to calculate seasonal and/or annual average concentration or total deposition values. Both the ISCST and ISCLT programs make the same basic dispersion-model assumptions. Additionally, both the ISCST and ISCLT programs use either a polar or a Cartesian receptor grid...Software Description: The programs are written in the FORTRAN IV programming language for implementation on a UNIVAC 1110 computer and also on medium-to-large IBM or CDC systems. 65,000k words of core storage are required to operate the model.
Noncommutative complex Grosse-Wulkenhaar model
Hounkonnou, Mahouton Norbert; Samary, Dine Ousmane
2008-11-18
This paper stands for an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc 956(2007) 55-60], for the NC complex Grosse-Wulkenhaar model. It provides with an extension of a recent work [Physics Letters B 653(2007) 343-345]. The local conservation of energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.
Molecular Rationale for Improved Dynamic Nuclear Polarization of Biomembranes.
Smith, Adam N; Twahir, Umar T; Dubroca, Thierry; Fanucci, Gail E; Long, Joanna R
2016-08-18
Dynamic nuclear polarization (DNP) enhanced solid-state NMR can provide orders of magnitude in signal enhancement. One of the most important aspects of obtaining efficient DNP enhancements is the optimization of the paramagnetic polarization agents used. To date, the most utilized polarization agents are nitroxide biradicals. However, the efficiency of these polarization agents is diminished when used with samples other than small molecule model compounds. We recently demonstrated the effectiveness of nitroxide labeled lipids as polarization agents for lipids and a membrane embedded peptide. Here, we systematically characterize, via electron paramagnetic resonance (EPR) spectroscopy, the dynamics of and the dipolar couplings between nitroxide labeled lipids under conditions relevant to DNP applications. Complemented by DNP enhanced solid-state NMR measurements at 600 MHz/395 GHz, a molecular rationale for the efficiency of nitroxide labeled lipids as DNP polarization agents is developed. Specifically, optimal DNP enhancements are obtained when the nitroxide moiety is attached to the lipid choline headgroup and local nitroxide concentrations yield an average e(-)-e(-) dipolar coupling of 47 MHz. On the basis of these measurements, we propose a framework for development of DNP polarization agents optimal for membrane protein structure determination. PMID:27434371
The noisy voter model on complex networks
NASA Astrophysics Data System (ADS)
Carro, Adrián; Toral, Raúl; San Miguel, Maxi
2016-04-01
We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured.
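A minimal discrete-time sketch of the noisy voter model described above. This is an illustrative simulation only (the paper's treatment is analytical and rate-based); the network size, noise parameter, and step count are all hypothetical:

```python
import random

def erdos_renyi(n, p, rng):
    # build an uncorrelated random network as adjacency lists
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs

def noisy_voter(nbrs, a, steps, rng):
    # binary-state dynamics: with prob. a a node adopts a random state (noise),
    # otherwise it copies a randomly chosen neighbour (imitation)
    n = len(nbrs)
    state = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        if rng.random() < a:
            state[i] = rng.randint(0, 1)
        elif nbrs[i]:
            state[i] = state[rng.choice(nbrs[i])]
    return sum(state) / n  # fraction of nodes in state 1

rng = random.Random(1)
nbrs = erdos_renyi(200, 0.05, rng)
m = noisy_voter(nbrs, a=0.05, steps=50_000, rng=rng)
```

Running this for networks with the same mean degree but different degree variance is the kind of numerical experiment against which the paper's beyond-mean-field predictions can be checked.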
Complexity of groundwater models in catchment hydrological models
NASA Astrophysics Data System (ADS)
Attinger, Sabine; Herold, Christian; Kumar, Rohini; Mai, Juliane; Ross, Katharina; Samaniego, Luis; Zink, Matthias
2015-04-01
In catchment hydrological models, groundwater is usually modeled very simply: it is conceptualized as a linear reservoir that receives water from the upper unsaturated-zone reservoir and releases water to the river system as baseflow. Baseflow is only a minor component of the total river flow, and groundwater reservoir parameters are therefore difficult to estimate inversely from river flow data alone. In addition, the modelled values of the absolute height of the water filling the groundwater reservoir, in other words the groundwater levels, are of limited meaning, owing to the coarse (or absent) spatial resolution of the groundwater compartment and to the fact that only river flow data are used for calibration. The talk focuses on the question: What complexity, in terms of model structure and spatial resolution, is necessary to characterize groundwater processes and groundwater responses adequately in distributed catchment hydrological models? Starting from a spatially distributed catchment hydrological model with a groundwater compartment conceptualized as a linear reservoir, we stepwise increase the groundwater model complexity and its spatial resolution to investigate which resolution, which complexity and which data are needed to reproduce baseflow and groundwater level data adequately.
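The linear-reservoir conceptualization that serves as the talk's starting point can be sketched in a few lines. The recharge series, reservoir coefficient k, and time step are illustrative assumptions, not values from the talk:

```python
def linear_reservoir(recharge, k, s0=0.0, dt=1.0):
    # linear reservoir: dS/dt = R - k*S, baseflow Q = k*S (explicit Euler)
    s, baseflow = s0, []
    for r in recharge:
        q = k * s
        baseflow.append(q)
        s += dt * (r - q)
    return baseflow

# 30 steps of unit recharge followed by 70 dry steps
q = linear_reservoir([1.0] * 30 + [0.0] * 70, k=0.1)
```

The output shows the two behaviours the abstract alludes to: baseflow rises toward the recharge rate while the reservoir fills, then recesses exponentially (by a factor 1-k*dt per step) once recharge stops, which is why baseflow alone constrains k so weakly.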
Complex Constructivism: A Theoretical Model of Complexity and Cognition
ERIC Educational Resources Information Center
Doolittle, Peter E.
2014-01-01
Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…
Describing Ecosystem Complexity through Integrated Catchment Modeling
NASA Astrophysics Data System (ADS)
Shope, C. L.; Tenhunen, J. D.; Peiffer, S.
2011-12-01
Land use and climate change have been implicated in reduced ecosystem services (e.g., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models identify sensitive local-scale parameters, which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study is that the range of local-scale model parameter results can be used to define the sensitivity and uncertainty of the large-scale watershed model. Further, this example shows how research can be structured to yield scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.
Magnetic modeling of the Bushveld Igneous Complex
NASA Astrophysics Data System (ADS)
Webb, S. J.; Cole, J.; Letts, S. A.; Finn, C.; Torsvik, T. H.; Lee, M. D.
2009-12-01
Magnetic modeling of the 2.06 Ga Bushveld Complex presents special challenges due to a variety of magnetic effects. These include strong remanence in the Main Zone and extremely high magnetic susceptibilities in the Upper Zone, which exhibit self-demagnetization. Recent palaeomagnetic results have resolved a long-standing discrepancy between age data, which constrain the emplacement to within 1 million years, and older palaeomagnetic data, which suggested ~50 million years for emplacement. The new palaeomagnetic results agree with the age data and present a single consistent pole, as opposed to a long polar wander path, for the Bushveld for all of the Zones and all of the limbs. These results also pass a fold test, indicating the Bushveld Complex was emplaced horizontally, lending support to arguments for connectivity. The magnetic signature of the Bushveld Complex provides an ideal mapping tool, as the UZ has high susceptibility values and is well layered, showing up as distinct anomalies on new high-resolution magnetic data. However, this signature is similar to the highly magnetic BIFs found in the Transvaal and in the Witwatersrand Supergroups. Through careful mapping using new high-resolution aeromagnetic data, we have been able to map the Bushveld UZ in complicated geological regions and identify a characteristic signature with well defined layers. The Main Zone, which has a more subdued magnetic signature, does have a strong remanent component and exhibits several magnetic reversals. The magnetic layers of the UZ contain layers of magnetitite with as much as 80-90% pure magnetite with large crystals (1-2 cm). While these layers are not strongly remanent, they have extremely high magnetic susceptibilities, and the self-demagnetization effect must be taken into account when modeling these layers. Because the Bushveld Complex is so large, the geometry of the Earth’s magnetic field relative to the layers of the UZ Bushveld Complex changes orientation, creating
Structured analysis and modeling of complex systems
NASA Technical Reports Server (NTRS)
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from Airborne Warning and Control System (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Lab on a Biomembrane: Rapid prototyping and manipulation of 2D fluidic lipid bilayers circuits
Ainla, Alar; Gözen, Irep; Hakonen, Bodil; Jesorka, Aldo
2013-01-01
Lipid bilayer membranes are among the most ubiquitous structures in the living world, with intricate structural features and a multitude of biological functions. It is attractive to recreate these structures in the laboratory, as this allows mimicking and studying the properties of biomembranes and their constituents, and to specifically exploit the intrinsic two-dimensional fluidity. Even though diverse strategies for membrane fabrication have been reported, the development of related applications and technologies has been hindered by the unavailability of both versatile and simple methods. Here we report a rapid prototyping technology for two-dimensional fluidic devices, based on in-situ generated circuits of phospholipid films. In this “lab on a molecularly thin membrane”, various chemical and physical operations, such as writing, erasing, functionalization, and molecular transport, can be applied to user-defined regions of a membrane circuit. This concept is an enabling technology for research on molecular membranes and their technological use. PMID:24067786
The Intermediate Complexity Atmospheric Research Model
NASA Astrophysics Data System (ADS)
Gutmann, Ethan; Clark, Martyn; Rasmussen, Roy; Arnold, Jeffrey; Brekke, Levi
2015-04-01
The high-resolution, non-hydrostatic atmospheric models often used for dynamical downscaling are extremely computationally expensive, and, for a certain class of problems, their complexity hinders our ability to ask key scientific questions, particularly those related to hydrology and climate change. For changes in precipitation in particular, an atmospheric model grid spacing capable of resolving the structure of mountain ranges is of critical importance, yet such simulations cannot currently be performed with an advanced regional climate model for long time periods, over large areas, and forced by many climate models. Here we present the newly developed Intermediate Complexity Atmospheric Research model (ICAR), capable of simulating critical atmospheric processes two to three orders of magnitude faster than a state-of-the-art regional climate model. ICAR uses a simplified dynamical formulation based on linear theory, combined with the circulation field from a low-resolution climate model. The resulting three-dimensional wind field is used to advect heat and moisture within the domain, while sub-grid physics (e.g. microphysics) are processed by standard and simplified physics schemes from the Weather Research and Forecasting (WRF) model. ICAR is tested in comparison to WRF by downscaling a climate change scenario over the Colorado Rockies. Both atmospheric models predict increases in precipitation across the domain with a greater increase on the western half. In contrast, statistically downscaled precipitation using multiple common statistical methods predicts decreases in precipitation over the western half of the domain. Finally, we apply ICAR to multiple CMIP5 climate models and scenarios with multiple parameterization options to investigate the importance of uncertainty in sub-grid physics as compared to the uncertainty in the large scale climate scenario. ICAR is a useful tool for climate change and weather forecast downscaling, particularly for orographic
Interactions of a Tetrazine Derivative with Biomembrane Constituents: A Langmuir Monolayer Study.
Nakahara, Hiromichi; Hagimori, Masayori; Mukai, Takahiro; Shibata, Osamu
2016-07-01
Tetrazine (Tz) is expected to be used for bioimaging and as an analytical reagent. It is known to react very rapidly with trans-cyclooctene in water. Here, to understand the interaction between Tz and biomembrane constituents, we first investigated the interfacial behavior of a newly synthesized Tz derivative comprising a C18-saturated hydrocarbon chain (rTz-C18) using a Langmuir monolayer spread at the air-water interface. Surface pressure (π)-molecular area (A) and surface potential (ΔV)-A isotherms were measured for monolayers of rTz-C18 and biomembrane constituents such as dipalmitoylphosphatidylcholine (DPPC), dipalmitoylphosphatidylglycerol (DPPG), dipalmitoylphosphatidylethanolamine (DPPE), palmitoyl sphingomyelin (PSM), and cholesterol (Ch). The lateral interaction between rTz-C18 and the lipids was thermodynamically elucidated from the excess Gibbs free energy of mixing and the two-dimensional phase diagram. The binary monolayers, except for the Ch system, indicated high miscibility or affinity. In particular, rTz-C18 was found to interact more strongly with DPPE, which is a major constituent of the inner surface of cell membranes. The phase behavior and morphology upon monolayer compression were investigated by using Brewster angle microscopy (BAM), fluorescence microscopy (FM), and atomic force microscopy (AFM). The BAM and FM images of the DPPC/rTz-C18, DPPG/rTz-C18, and PSM/rTz-C18 systems exhibited a coexistence state of two different liquid-condensed domains derived mainly from monolayers of phospholipids and phospholipids-rTz-C18. From these morphological observations, it is worth noting that rTz-C18 can interact with only a limited amount of the lipids, with the exception of DPPE. PMID:27280946
Soares, Diana Gabriela; Rosseto, Hebert Luís; Basso, Fernanda Gonçalves; Scheffel, Débora Salles; Hebling, Josimeri; Costa, Carlos Alberto de Souza
2016-01-01
The development of biomaterials capable of driving dental pulp stem cell differentiation into odontoblast-like cells able to secrete reparative dentin is the goal of current conservative dentistry. In the present investigation, a biomembrane (BM) composed of a chitosan/collagen matrix embedded with calcium-aluminate microparticles was tested. The BM was produced by mixing collagen gel with a chitosan solution (2:1), and then adding bioactive calcium-aluminate cement as the mineral phase. An inert material (polystyrene) was used as the negative control. Human dental pulp cells were seeded onto the surface of the materials, and cytocompatibility was evaluated in terms of cell proliferation and cell morphology, assessed after 1, 7, 14 and 28 days in culture. Odontoblastic differentiation was evaluated by measuring alkaline phosphatase (ALP) activity, total protein production, gene expression of DMP-1/DSPP, and mineralized nodule deposition. The pulp cells were able to attach onto the BM surface and spread, displaying a faster proliferative rate at initial periods than that of the control cells. The BM also induced more intense ALP activity and protein production at 14 days, and higher gene expression of DSPP and DMP-1 at 28 days, leading to the deposition of about five times more mineralized matrix than in the control group. Therefore, the experimental biomembrane induced the differentiation of pulp cells into odontoblast-like cells featuring a highly secretory phenotype. This innovative bioactive material can drive other protocols for dental pulp exposure treatment by inducing the regeneration of dentin tissue mediated by resident cells. PMID:27119587
Lee, Tzong-Hsien; Hirst, Daniel J; Aguilar, Marie-Isabel
2015-09-01
Biomolecular-membrane interactions play a critical role in the regulation of many important biological processes such as protein trafficking, cellular signalling and ion channel formation. Peptide/protein-membrane interactions can also destabilise and damage the membrane which can lead to cell death. Characterisation of the molecular details of these binding-mediated membrane destabilisation processes is therefore central to understanding cellular events such as antimicrobial action, membrane-mediated amyloid aggregation, and apoptotic protein induced mitochondrial membrane permeabilisation. Optical biosensors have provided a unique approach to characterising membrane interactions allowing quantitation of binding events and new insight into the kinetic mechanism of these interactions. One of the most commonly used optical biosensor technologies is surface plasmon resonance (SPR) and there have been an increasing number of studies reporting the use of this technique for investigating biophysical analysis of membrane-mediated events. More recently, a number of new optical biosensors based on waveguide techniques have been developed, allowing membrane structure changes to be measured simultaneously with mass binding measurements. These techniques include dual polarisation interferometry (DPI), plasmon waveguide resonance spectroscopy (PWR) and optical waveguide light mode spectroscopy (OWLS). These techniques have expanded the application of optical biosensors to allow the analysis of membrane structure changes during peptide and protein binding. This review provides a theoretical and practical overview of the application of biosensor technology with a specific focus on DPI, PWR and OWLS to study biomembrane-mediated events and the mechanism of biomembrane disruption. This article is part of a Special Issue entitled: Lipid-protein interactions. PMID:26009270
Modeling the human prothrombinase complex components
NASA Astrophysics Data System (ADS)
Orban, Tivadar
Thrombin generation is the culminating stage of the blood coagulation process. Thrombin is obtained from prothrombin (the substrate) in a reaction catalyzed by the prothrombinase complex (the enzyme). The prothrombinase complex is composed of factor Xa (the enzyme) and factor Va (the cofactor), associated in the presence of calcium ions on a negatively charged cell membrane. Factor Xa alone can activate prothrombin to thrombin; however, the rate of conversion is not physiologically relevant for survival. Incorporation of factor Va into prothrombinase accelerates the rate of prothrombinase activity 300,000-fold and provides the physiological pathway of thrombin generation. The long-term goal of the current proposal is to provide the necessary support for advancing studies to design potential drug candidates that may be used to avoid the development of deep venous thrombosis in high-risk patients. The short-term goals of the present proposal are (1) to propose a model of a mixed asymmetric phospholipid bilayer, (2) to expand the incomplete model of human coagulation factor Va and study its interaction with the phospholipid bilayer, (3) to create a homology model of prothrombin, and (4) to study the dynamics of interaction between prothrombin and the phospholipid bilayer.
Membrane associated complexes in calcium dynamics modelling
NASA Astrophysics Data System (ADS)
Szopa, Piotr; Dyzma, Michał; Kaźmierczak, Bogdan
2013-06-01
Mitochondria not only govern energy production, but are also involved in crucial cellular signalling processes. They are one of the most important organelles determining the Ca2+ regulatory pathway in the cell. Several mathematical models explaining these mechanisms were constructed, but only a few of them describe the interplay between calcium concentrations in the endoplasmic reticulum (ER), cytoplasm and mitochondria. Experiments measuring calcium concentrations in mitochondria and the ER suggested the existence of cytosolic microdomains with locally elevated calcium concentration in the nearest vicinity of the outer mitochondrial membrane. These intermediate physical connections between ER and mitochondria are called MAM (mitochondria-associated ER membrane) complexes. We propose a model with a direct calcium flow from ER to mitochondria, which may be justified by the existence of MAMs, and perform detailed numerical analysis of the effect of this flow on the type and shape of calcium oscillations. The model is partially based on the Marhl et al. model. We have numerically found that stable oscillations exist for a considerable set of parameter values. However, for some parameter sets the oscillations disappear and the trajectories of the model tend to a steady state with a very high calcium level in mitochondria. This can be interpreted as an early step in an apoptotic pathway.
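The bookkeeping of a three-compartment model with a direct ER-to-mitochondria (MAM) flux can be sketched as below. To be clear, this is not the Marhl-based model of the paper: the fluxes here are linear toy terms and every rate constant is a hypothetical placeholder; the sketch only shows how the MAM term enters the balance and that total calcium is conserved:

```python
def step(ca, rates, dt=1e-3):
    # ca = [cytosol, ER, mitochondria]; rates are hypothetical first-order constants
    cyt, er, mit = ca
    j_er_cyt = rates["er_out"] * er - rates["er_in"] * cyt     # ER <-> cytosol exchange
    j_mit_cyt = rates["mit_out"] * mit - rates["mit_in"] * cyt  # mito <-> cytosol exchange
    j_er_mit = rates["mam"] * er                                # direct ER -> mito (MAM) flux
    return [cyt + dt * (j_er_cyt + j_mit_cyt),
            er + dt * (-j_er_cyt - j_er_mit),
            mit + dt * (-j_mit_cyt + j_er_mit)]

ca = [0.1, 10.0, 0.2]   # illustrative initial concentrations
rates = {"er_out": 0.5, "er_in": 2.0, "mit_out": 0.1, "mit_in": 1.0, "mam": 0.3}
for _ in range(10_000):
    ca = step(ca, rates)
```

Because every flux appears with opposite signs in exactly two compartments, total calcium is conserved, while the MAM term steadily shifts calcium from the ER into the mitochondria, the behaviour whose extreme case the paper links to apoptosis.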
Wind modelling over complex terrain using CFD
NASA Astrophysics Data System (ADS)
Avila, Matias; Owen, Herbert; Folch, Arnau; Prieto, Luis; Cosculluela, Luis
2015-04-01
The present work deals with the numerical CFD modelling of onshore wind farms in the context of High Performance Computing (HPC). The CFD model involves the numerical solution of the Reynolds-Averaged Navier-Stokes (RANS) equations together with a k-ε turbulence model and the energy equation, specially designed for Atmospheric Boundary Layer (ABL) flows. The aim is to predict the wind velocity distribution over complex terrain, using a model that includes meteorological data assimilation, thermal coupling, forested canopy and Coriolis effects. The modelling strategy involves automatic mesh generation, terrain data assimilation and generation of boundary conditions for the inflow wind flow distribution up to the geostrophic height. The CFD model has been implemented in Alya, an HPC multi-physics parallel solver able to run with thousands of processors with optimal scalability, developed at the Barcelona Supercomputing Center. The implemented thermal stability and canopy physical model was developed by Sogachev in 2012. The k-ε equations are of nonlinear convection-diffusion-reaction type. The implemented numerical scheme consists of a stabilized finite element formulation based on the variational multiscale method, which is known to be stable for this kind of turbulence equations. We present a numerical formulation that stresses the robustness of the solution method, tackling common problems that produce instability. The iterative strategy and linearization scheme are discussed. They are intended to avoid negative values of diffusion during the iterative process, which may lead to divergence of the scheme. These problems are addressed by acting on the coefficients of the reaction and diffusion terms and on the turbulent variables themselves. The k-ε equations are highly nonlinear. Complex terrain induces transient flow instabilities that may preclude the convergence of computer flow simulations based on steady state formulation of the
Modeling the relational complexities of symptoms.
Dolin, R H
1994-12-01
Realization of the value of reliable codified medical data is growing at a rapid rate. Symptom data in particular have been shown to be useful in decision analysis and in the determination of patient outcomes. Electronic medical record systems are emerging, and attempts are underway to define the structure and content of these systems to support the storage of all medical data. The underlying models upon which these systems are being built continue to be strengthened by a deeper understanding of the complex information they are to store. This report analyzes symptoms as they might be recorded in free text notes and presents a high-level conceptual data model representation of this domain. PMID:7869941
Inexpensive Complex Hand Model Twenty Years Later.
Frenger, Paul
2015-01-01
Twenty years ago the author unveiled his inexpensive complex hand model, which reproduced every motion of the human hand. A control system programmed in the Forth language operated its actuators and sensors. Follow-on papers for this popular project were subsequently presented in Texas, Canada and Germany. From this hand grew the author's meter-tall robot (nicknamed ANNIE: Android With Neural Networks, Intellect and Emotions). It received machine vision, facial expressiveness, speech synthesis and speech recognition; a simian version also received a dexterous ape foot. New artificial intelligence features included op-amp neurons for OCR and simulated emotions, hormone emulation, endocannabinoid receptors, fear-trust-love mechanisms, a Grandmother Cell recognizer and artificial consciousness. Simulated illnesses included narcotic addiction, autism, PTSD, fibromyalgia and Alzheimer's disease. The author has given 13 robotics-AI presentations at NASA in Houston since 2006. A meter-tall simian robot was proposed with gripping hand-feet for use with space vehicles and to explore distant planets and moons. Also proposed were: intelligent motorized exoskeletons for astronaut force multiplication; a cognitive prosthesis to detect and alleviate decreased crew mental performance; and a gynoid robot medic to tend astronauts in deep-space missions. What began as a complex hand model evolved into an innovative robot-AI within two decades. PMID:25996742
Complex Educational Design: A Course Design Model Based on Complexity
ERIC Educational Resources Information Center
Freire, Maximina Maria
2013-01-01
Purpose: This article aims at presenting a conceptual framework which, theoretically grounded on complexity, provides the basis to conceive of online language courses that intend to respond to the needs of students and society. Design/methodology/approach: This paper is introduced by reflections on distance education and on the paradigmatic view…
Using Perspective to Model Complex Processes
Kelsey, R.L.; Bisset, K.R.
1999-04-04
The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.
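The perspective idea above can be sketched with a toy object model: one underlying object, several named views that each expose only the attributes relevant to a given state of matter. The class and attribute names below are hypothetical illustrations, not the authors' actual knowledge representation.

```python
# Minimal sketch of perspective-based views on a modeled substance.
# All names here are illustrative, not from the authors' system.

class Perspective:
    """A named view that selects which attributes of an object are visible."""
    def __init__(self, name, visible_attrs):
        self.name = name
        self.visible_attrs = visible_attrs

    def view(self, obj):
        """Return only the attributes this perspective exposes."""
        return {a: getattr(obj, a) for a in self.visible_attrs if hasattr(obj, a)}

class Water:
    """One object; which attributes matter depends on the phase/state."""
    def __init__(self):
        self.temperature = 300.0      # K
        self.density = 997.0          # kg/m^3 (liquid)
        self.vapor_pressure = 3.2e3   # Pa

liquid = Perspective("liquid", ["temperature", "density"])
gas = Perspective("gas", ["temperature", "vapor_pressure"])

w = Water()
print(liquid.view(w))  # {'temperature': 300.0, 'density': 997.0}
print(gas.view(w))     # {'temperature': 300.0, 'vapor_pressure': 3200.0}
```

A historic record, in the same spirit, would be a list of such views captured at successive time steps, allowing backtracking to any earlier state.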
Ants (Formicidae): models for social complexity.
Smith, Chris R; Dolezal, Adam; Eliyahu, Dorit; Holbrook, C Tate; Gadau, Jürgen
2009-07-01
The family Formicidae (ants) is composed of more than 12,000 described species that vary greatly in size, morphology, behavior, life history, ecology, and social organization. Ants occur in most terrestrial habitats and are the dominant animals in many of them. They have been used as models to address fundamental questions in ecology, evolution, behavior, and development. The literature on ants is extensive, and the natural history of many species is known in detail. Phylogenetic relationships for the family, as well as within many subfamilies, are known, enabling comparative studies. Their ease of sampling and ecological variation makes them attractive for studying populations and questions relating to communities. Their sociality and variation in social organization have contributed greatly to an understanding of complex systems, division of labor, and chemical communication. Ants occur in colonies composed of tens to millions of individuals that vary greatly in morphology, physiology, and behavior; this variation has been used to address proximate and ultimate mechanisms generating phenotypic plasticity. Relatedness asymmetries within colonies have been fundamental to the formulation and empirical testing of kin and group selection theories. Genomic resources have been developed for some species, and a whole-genome sequence for several species is likely to follow in the near future; comparative genomics in ants should provide new insights into the evolution of complexity and sociogenomics. Future studies using ants should help establish a more comprehensive understanding of social life, from molecules to colonies. PMID:20147200
Physical modelling of the nuclear pore complex
Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.
2013-01-01
Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.
Reducing Spatial Data Complexity for Classification Models
Ruta, Dymitr; Gabrys, Bogdan
2007-11-29
Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with a Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the
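The class-density machinery underlying PLDC can be illustrated with a plain Parzen-window (Gaussian kernel) density classifier. The sketch below shows only the generic density estimate and classification rule, not the published condensation algorithm; the data and bandwidth are invented.

```python
import math

# Generic Parzen-window (kernel density) classifier in 1D, stdlib only.
# Illustrates the class-density idea behind PLDC-style methods; it is
# NOT the published PLDC condensation algorithm.

def parzen_density(x, samples, h=0.5):
    """Gaussian-kernel density estimate at x from a list of samples."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

def classify(x, classes, h=0.5):
    """Assign x to the class whose Parzen density estimate is highest."""
    return max(classes, key=lambda c: parzen_density(x, classes[c], h))

# Two toy classes of labelled points on the line.
classes = {"A": [0.0, 0.2, 0.4], "B": [2.0, 2.2, 2.4]}
print(classify(0.3, classes))  # A
print(classify(2.1, classes))  # B
```

In PLDC the analogous class-density function additionally acts as a potential field guiding how points move and merge during condensation.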
40 CFR 80.45 - Complex emissions model.
Code of Federal Regulations, 2012 CFR
2012-07-01
40 Protection of Environment, § 80.45 (Regulation of Fuels and Fuel Additives, Reformulated Gasoline): Complex emissions model. (a) Definition... fuel which is being evaluated for its emissions performance using the complex model; OXY = ...
Comparison of Two Pasture Growth Models of Differing Complexity
Technology Transfer Automated Retrieval System (TEKTRAN)
Two pasture growth models that share many common features but differ in model complexity have been developed for incorporation into the Integrated Farm System Model (IFSM). Major differences between models include the explicit representation of roots in the more complex model, and their effects on c...
Koynova, Rumiana; MacDonald, Robert C.
2010-01-18
A viewpoint now emerging is that a critical factor in lipid-mediated transfection (lipofection) is the structural evolution of lipoplexes upon interacting and mixing with cellular lipids. Here we report our finding that lipid mixtures mimicking biomembrane lipid compositions are superior to pure anionic liposomes in their ability to release DNA from lipoplexes (cationic lipid/DNA complexes), even though they have a much lower negative charge density (and thus lower capacity to neutralize the positive charge of the lipoplex lipids). Flow fluorometry revealed that the portion of DNA released after a 30-min incubation of the cationic O-ethylphosphatidylcholine lipoplexes with the anionic phosphatidylserine or phosphatidylglycerol was 19% and 37%, respectively, whereas a mixture mimicking biomembranes (MM: phosphatidylcholine/phosphatidylethanolamine/phosphatidylserine/cholesterol 45:20:20:15 w/w) and polar lipid extract from bovine liver released 62% and 74%, respectively, of the DNA content. A possible reason for this superior power in releasing DNA by the natural lipid mixtures was suggested by structural experiments: while pure anionic lipids typically form lamellae, the natural lipid mixtures exhibited a surprising predilection to form nonlamellar phases. Thus, the MM mixture arranged into lamellar arrays at physiological temperature, but began to convert to the hexagonal phase at a slightly higher temperature, ≈40-45 °C. A propensity to form nonlamellar phases (hexagonal, cubic, micellar) at close to physiological temperatures was also found with the lipid extracts from natural tissues (from bovine liver, brain, and heart). This result reveals that electrostatic interactions are only one of the factors involved in lipid-mediated DNA delivery. The tendency of lipid bilayers to form nonlamellar phases has been described in terms of bilayer 'frustration' which imposes a nonzero intrinsic curvature of the two opposing monolayers. Because the stored curvature
Analytical models for complex swirling flows
NASA Astrophysics Data System (ADS)
Borissov, A.; Hussain, V.
1996-11-01
We develop a new class of analytical solutions of the Navier-Stokes equations for swirling flows and suggest ways to predict and control such flows occurring in various technological applications. We view momentum accumulation on the axis as a key feature of swirling flows and consider vortex-sink flows on curved axisymmetric surfaces with an axial flow. We show that these solutions model swirling flows in a cylindrical can, whirlpools, tornadoes, and cosmic swirling jets. The singularity of these solutions on the flow axis is removed by matching them with near-axis Schlichting and Long's swirling jets. The matched solutions model flows with very complex patterns, consisting of up to seven separation regions with recirculatory 'bubbles' and vortex rings. We apply the matched solutions to computing flows in the Ranque-Hilsch tube, in the meniscus of electrosprays, in vortex breakdown, and in an industrial vortex burner. The simple analytical solutions allow a clear understanding of how different control parameters affect the flow and guide selection of optimal parameter values for desired flow features. These solutions permit extension to other problems (such as heat transfer and chemical reaction) and have the potential of being significantly useful for further detailed investigation by direct or large-eddy numerical simulations as well as laboratory experimentation.
Discrete Element Modeling of Complex Granular Flows
NASA Astrophysics Data System (ADS)
Movshovitz, N.; Asphaug, E. I.
2010-12-01
Granular materials occur almost everywhere in nature, and are actively studied in many fields of research, from the food industry to planetary science. One approach to the study of granular media, the continuum approach, attempts to find a constitutive law that determines the material's flow, or strain, under applied stress. The main difficulty with this approach is that granular systems exhibit different behavior under different conditions, behaving at times as an elastic solid (e.g. a pile of sand), at times as a viscous fluid (e.g. when poured), or even as a gas (e.g. when shaken). Even if all these physics are accounted for, numerical implementation is made difficult by the wide and often discontinuous ranges in continuum density and sound speed. A different approach is Discrete Element Modeling (DEM). Here the goal is to directly model every grain in the system as a rigid body subject to various body and surface forces. The advantage of this method is that it treats all of the above regimes in the same way, and can easily deal with a system moving back and forth between regimes. But as a granular system typically contains a multitude of individual grains, the direct integration of the system can be very computationally expensive. For this reason most DEM codes are limited to spherical grains of uniform size. However, spherical grains often cannot replicate the behavior of real-world granular systems. A simple pile of spherical grains, for example, relies on static friction alone to keep its shape, while in reality a pile of irregular grains can maintain a much steeper angle by interlocking force chains. In the present study we employ a commercial DEM, Nvidia's PhysX Engine, originally designed for the game and animation industry, to simulate complex granular flows with irregular, non-spherical grains. This engine runs as a multithreaded process and can be GPU accelerated. We demonstrate the code's ability to physically model granular materials in the three regimes.
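The grain-level force law that DEM codes integrate can be sketched with a standard spring-dashpot normal contact. The one-dimensional two-grain collision below is a generic textbook-style illustration with arbitrary parameters; it is not the PhysX implementation used in the study.

```python
# Minimal 1D soft-sphere DEM sketch: two grains collide head-on and the
# normal contact force is a linear spring plus a dashpot (damping).
# All parameter values are arbitrary illustrations.

def dem_collision(x1=0.0, x2=1.0, v1=1.0, v2=-1.0, r=0.3, m=1.0,
                  k=1e4, c=5.0, dt=1e-4, steps=20000):
    for _ in range(steps):
        overlap = (2 * r) - (x2 - x1)
        f = 0.0
        if overlap > 0:                     # grains in contact
            rel_v = v1 - v2                 # positive while approaching
            f = k * overlap + c * rel_v     # spring + dashpot, acting on grain 2 (+x)
            f = max(f, 0.0)                 # contact force cannot be tensile
        v1 += (-f / m) * dt                 # semi-implicit Euler: velocities first,
        v2 += (f / m) * dt
        x1 += v1 * dt                       # then positions
        x2 += v2 * dt
    return x1, x2, v1, v2

x1, x2, v1, v2 = dem_collision()
print(v1 < 0 < v2)  # True: the grains rebound and separate
```

Irregular grains add rotational degrees of freedom and tangential (friction) contact laws on top of this normal-force skeleton, which is what enables the interlocking force chains mentioned above.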
Modeling competitive substitution in a polyelectrolyte complex
Peng, B.; Muthukumar, M.
2015-12-28
We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain by another, longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain must be sufficiently longer than the chain being displaced to effect the substitution. Yet making the invading chain longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
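The Langevin dynamics methodology referred to above can be sketched for a single bead in a harmonic well: the equation of motion combines a conservative force, a friction term, and a random thermal force. The integrator below is a generic illustration with invented parameters, not the authors' simulation code; with kT = 0 the noise term vanishes and the bead deterministically relaxes to the potential minimum.

```python
import math
import random

# Generic Langevin dynamics integrator for one bead in a harmonic well,
# sketching the scheme class used in coarse-grained chain simulations.
# Parameters are illustrative only.

def langevin_trajectory(x0=1.0, v0=0.0, k=1.0, m=1.0, gamma=1.0,
                        kT=0.0, dt=0.01, steps=5000, seed=42):
    rng = random.Random(seed)
    x, v = x0, v0
    sigma = math.sqrt(2.0 * gamma * kT / dt)   # white-noise amplitude
    for _ in range(steps):
        force = -k * x - gamma * v + sigma * rng.gauss(0.0, 1.0)
        v += (force / m) * dt                  # semi-implicit Euler step
        x += v * dt
    return x

# With kT = 0 the dynamics are purely damped and the bead ends at x ~ 0:
print(abs(langevin_trajectory()) < 1e-3)  # True
```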
Biomembrane-mimicking lipid bilayer system as a mechanically tunable cell substrate
Lin, C. Y.; Auernheimer, V.; Naumann, C.; Goldmann, W. H.; Fabry, B.
2014-01-01
Cell behavior such as cell adhesion, spreading, and contraction critically depends on the elastic properties of the extracellular matrix. It is not known, however, how cells respond to viscoelastic or plastic material properties that more closely resemble the mechanical environment that cells encounter in the body. In this report, we employ viscoelastic and plastic biomembrane-mimicking cell substrates. The compliance of the substrates can be tuned by increasing the number of polymer-tethered bilayers. This leaves the density and conformation of adhesive ligands on the top bilayer unaltered. We then observe the response of fibroblasts to these property changes. For comparison, we also study the cells on soft polyacrylamide and hard glass surfaces. Cell morphology, motility, cell stiffness, contractile forces and adhesive contact size all decrease on more compliant matrices but are less sensitive to changes in matrix dissipative properties. These data suggest that cells are able to feel and respond predominantly to the effective matrix compliance, which arises as a combination of substrate and adhesive ligand mechanical properties. PMID:24439398
Monzel, Cornelia; Schmidt, Daniel; Seifert, Udo; Smith, Ana-Sunčana; Merkel, Rudolf; Sengupta, Kheya
2016-05-25
We probe the bending fluctuations of bio-membranes using highly deflated giant unilamellar vesicles (GUVs) bound to a substrate by a weak potential arising from generic interactions. The substrate is either homogeneous, with GUVs bound only by the weak potential, or is chemically functionalized with a micro-pattern of very strong specific binders. In both cases, the weakly adhered membrane is seen to be confined at a well-defined distance above the surface while it continues to fluctuate strongly. We quantify the fluctuations of the weakly confined membrane at the substrate proximal surface as well as of the free membrane at the distal surface of the same GUV. This strategy enables us to probe in detail the damping of fluctuations in the presence of the substrate, and to independently measure the membrane tension and the strength of the generic interaction potential. Measurements were done using two complementary techniques - dynamic optical displacement spectroscopy (DODS, resolution: 20 nm, 10 μs), and dual wavelength reflection interference contrast microscopy (DW-RICM, resolution: 4 nm, 50 ms). After accounting for the spatio-temporal resolution of the techniques, an excellent agreement between the two measurements was obtained. For both weakly confined systems we explore in detail the link between fluctuations on the one hand and membrane tension and the interaction potential on the other hand. PMID:27142463
Wang, Tianshu; Liu, Jiyang; Ren, Jiangtao; Wang, Jin; Wang, Erkang
2015-10-01
A hybrid composite constructed of a phospholipid bilayer membrane, gold nanoparticles and graphene was prepared and used as a matrix for microperoxidase-11 (MP11) immobilization. The direct electrochemistry and corresponding bioelectrocatalysis of the enzyme electrode were further investigated. Phospholipid bilayer membrane-protected gold nanoparticles (AuNPs) were assembled on polyelectrolyte-functionalized graphene sheets through electrostatic attraction to form a hybrid bionanocomposite. Owing to the biocompatible microenvironment provided by the mimetic biomembrane, microperoxidase-11 entrapped in this matrix well retained its native structure and exhibited high bioactivity. Moreover, the AuNPs-graphene assemblies could efficiently promote direct electron transfer between the immobilized MP11 and the substrate electrode. The as-prepared enzyme electrode presented good direct electrochemistry and electrocatalytic responses to the reduction of hydrogen peroxide (H2O2). The resulting H2O2 biosensor showed a wide linear range (2.0×10⁻⁵-2.8×10⁻⁴ M), a low detection limit (2.6×10⁻⁶ M), and good reproducibility and stability. Furthermore, this sensor was used for real-time detection of H2O2 dynamically released from the tumor cells MCF-7 in response to a pro-inflammatory stimulant. PMID:26078181
Wound healing modulation by a latex protein-containing polyvinyl alcohol biomembrane.
Ramos, Márcio V; de Alencar, Nylane Maria N; de Oliveira, Raquel S B; Freitas, Lyara B N; Aragão, Karoline S; de Andrade, Thiago Antônio M; Frade, Marco Andrey C; Brito, Gerly Anne C; de Figueiredo, Ingrid Samantha T
2016-07-01
In a previous study, we performed the chemical characterization of a polyvinyl alcohol (PVA) membrane supplemented with latex proteins (LP) displaying wound healing activity, and its efficacy as a delivery system was demonstrated. Here, we report on aspects of the mechanism underlying the performance of the PVA-latex protein biomembrane on wound healing. LP-PVA, but not PVA, induced more intense leukocyte (neutrophil) migration and mast cell degranulation during the inflammatory phase of the cicatricial process. Likewise, LP-PVA induced an increase in key markers and mediators of the inflammatory response (myeloperoxidase activity, nitric oxide, TNF, and IL-1β). These results demonstrated that LP-PVA significantly accelerates the early phase of the inflammatory process by upregulating cytokine release. This remarkable effect improves the subsequent phases of the healing process. The polyvinyl alcohol membrane was fully absorbed as an inert support while LP was shown to be active. It is therefore concluded that the LP-PVA is a suitable bioresource for biomedical engineering. PMID:27037828
Drvenica, Ivana T; Bukara, Katarina M; Ilić, Vesna Lj; Mišić, Danijela M; Vasić, Borislav Z; Gajić, Radoš B; Đorđević, Verica B; Veljović, Đorđe N; Belić, Aleksandar; Bugarski, Branko M
2016-07-01
The present study investigated preparation of bovine and porcine erythrocyte membranes from slaughterhouse blood as bio-derived materials for delivery of dexamethasone-sodium phosphate (DexP). The obtained biomembranes, i.e., ghosts were characterized in vitro in terms of morphological properties, loading parameters, and release behavior. For the last two, an UHPLC/-HESI-MS/MS based analytical procedure for absolute drug identification and quantification was developed. The results revealed that loading of DexP into both type of ghosts was directly proportional to the increase of drug concentration in the incubation medium, while incubation at 37°C had statistically significant effect on loaded amount of DexP (P < 0.05). The encapsulation efficiency was about fivefold higher in porcine compared to bovine ghosts. Insight into ghosts' surface morphology by field emission-scanning electron microscopy and atomic force microscopy confirmed that besides inevitable effects of osmosis, DexP inclusion itself had no observable additional effect on the morphology of the ghosts carriers. DexP release profiles were dependent on erythrocyte ghost type and amount of residual hemoglobin. However, sustained DexP release was achieved and shown over 3 days from porcine ghosts and 5 days from bovine erythrocyte ghosts. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1046-1055, 2016. PMID:27254304
Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino
2014-02-11
Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations. PMID:26580046
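The role of the force decomposition can be sketched with the standard pairwise (virial) contribution to the stress tensor: for central pair forces, f_ij is parallel to r_ij, so the tensor is symmetric by construction, which is the property the central force decomposition guarantees. The snippet below is a generic illustration of that formula, not the authors' implementation.

```python
# Pairwise-virial contribution to the stress tensor for central forces,
# sigma_ab = -(1/V) * sum over pairs of f_ij,a * r_ij,b.
# A minimal illustration, not the authors' local-stress code.

def pair_virial_stress(positions, pair_force, volume):
    """Accumulate the pairwise virial stress over all particle pairs."""
    n = len(positions)
    sigma = [[0.0] * 3 for _ in range(3)]
    for i in range(n):
        for j in range(i + 1, n):
            rij = [positions[j][d] - positions[i][d] for d in range(3)]
            fij = pair_force(rij)           # force on j due to i
            for a in range(3):
                for b in range(3):
                    sigma[a][b] -= fij[a] * rij[b] / volume
    return sigma

# Central force example: a linear spring between every pair of particles,
# so f_ij is parallel to r_ij by construction.
def spring(rij, k=2.0):
    return [-k * c for c in rij]

s = pair_virial_stress([[0, 0, 0], [1, 0, 0], [0, 1, 0]], spring, volume=1.0)
print(all(abs(s[a][b] - s[b][a]) < 1e-12 for a in range(3) for b in range(3)))  # True
```

A non-central decomposition (e.g. one arising from a multibody dihedral potential) would feed this same formula force components not parallel to r_ij, and the symmetry then no longer holds automatically.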
Clinical complexity in medicine: A measurement model of task and patient complexity
Islam, R.; Weir, C.; Fiol, G. Del
2016-01-01
Background: Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective: The objective of this paper is to develop an integrated approach to understanding and measuring clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods: Three clinical Infectious Disease teams were observed, audio-recorded and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. Results: The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion: The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
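The inter-rater reliability statistic used in the Methods, Cohen's kappa, corrects observed agreement for the agreement expected by chance. Below is a generic stdlib implementation with made-up example ratings; it is not the authors' analysis code.

```python
# Cohen's kappa for two raters over the same items:
# kappa = (p_observed - p_expected) / (1 - p_expected),
# where p_expected is chance agreement from each rater's label frequencies.

def cohens_kappa(rater1, rater2):
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    labels = set(rater1) | set(rater2)
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    p_exp = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1 - p_exp)

# Invented example: two raters code six items as high/low complexity.
r1 = ["hi", "hi", "lo", "lo", "hi", "lo"]
r2 = ["hi", "hi", "lo", "hi", "hi", "lo"]
print(round(cohens_kappa(r1, r2), 3))  # 0.667
```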
Power Curve Modeling in Complex Terrain Using Statistical Models
NASA Astrophysics Data System (ADS)
Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.
2014-12-01
Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
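The paper's central comparison, whether predictors beyond the hub-height wind speed improve power prediction, can be illustrated on synthetic data with ordinary least squares. The study itself used neural networks, random forests and Gaussian process models on measured wind-farm data; everything below (data, coefficients, the shear predictor) is invented for illustration.

```python
# Toy illustration: adding a second atmospheric predictor (here, a made-up
# wind-shear value) lowers the fit error versus hub-height speed alone.
# Stdlib-only ordinary least squares via the normal equations.

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def ols_fit(X, y):
    """Least-squares weights from the normal equations (X^T X) w = X^T y."""
    nf = len(X[0])
    A = [[sum(x[i] * x[j] for x in X) for j in range(nf)] for i in range(nf)]
    b = [sum(x[i] * yi for x, yi in zip(X, y)) for i in range(nf)]
    return solve(A, b)

def rmse(X, y, w):
    return (sum((sum(wi * xi for wi, xi in zip(w, x)) - yi) ** 2
                for x, yi in zip(X, y)) / len(y)) ** 0.5

# Synthetic data: "power" depends on hub speed AND shear across the rotor.
speeds = [4, 5, 6, 7, 8, 9, 10, 11]
shears = [0.1, 0.3, 0.1, 0.3, 0.1, 0.3, 0.1, 0.3]
power = [1.0 * s + 5.0 * a for s, a in zip(speeds, shears)]

X1 = [[1.0, s] for s in speeds]                      # intercept + hub speed
X2 = [[1.0, s, a] for s, a in zip(speeds, shears)]   # ... + shear predictor
w1, w2 = ols_fit(X1, power), ols_fit(X2, power)
print(rmse(X2, power, w2) < rmse(X1, power, w1))  # True: extra predictor helps
```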
Spatiotemporal Organization of Spin-Coated Supported Model Membranes
NASA Astrophysics Data System (ADS)
Simonsen, Adam Cohen
All cells of living organisms are separated from their surroundings and organized internally by means of flexible lipid membranes. In fact, there is consensus that the minimal requirements for self-replicating life processes include the following three features: (1) information carriers (DNA, RNA), (2) a metabolic system, and (3) encapsulation in a container structure [1]. Therefore, encapsulation can be regarded as an essential part of life itself. In nature, membranes are highly diverse interfacial structures that compartmentalize cells [2]. While prokaryotic cells only have an outer plasma membrane and a less-well-developed internal membrane structure, eukaryotic cells have a number of internal membranes associated with the organelles and the nucleus. Many of these membrane structures, including the plasma membrane, are complex layered systems, but with the basic structure of a lipid bilayer. Biomembranes contain hundreds of different lipid species in addition to embedded or peripherally associated membrane proteins and connections to scaffolds such as the cytoskeleton. In vitro, lipid bilayers are spontaneously self-organized structures formed by a large group of amphiphilic lipid molecules in aqueous suspensions. Bilayer formation is driven by the entropic properties of the hydrogen bond network in water in combination with the amphiphilic nature of the lipids. The molecular shapes of the lipid constituents play a crucial role in bilayer formation, and only lipids with approximately cylindrical shapes are able to form extended bilayers. The bilayer structure of biomembranes was discovered by Gorter and Grendel in 1925 [3] using monolayer studies of lipid extracts from red blood cells. Later, a number of conceptual models were developed to rationalize the organization of lipids and proteins in biological membranes. One of the most celebrated is the fluid-mosaic model by Singer and Nicolson (1972) [4]. According to this model, the lipid bilayer component of
Modeling Complex Workflow in Molecular Diagnostics
Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan
2010-01-01
One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844
APPLICATION OF SURFACE COMPLEXATION MODELS TO SOIL SYSTEMS
Technology Transfer Automated Retrieval System (TEKTRAN)
Chemical surface complexation models were developed to describe potentiometric titration and ion adsorption data on oxide minerals. These models provide molecular descriptions of adsorption using an equilibrium approach that defines surface species, chemical reactions, mass and charge balances and ...
Dispersion Modeling in Complex Urban Systems
Models are used to represent real systems in an understandable way. They take many forms. A conceptual model explains the way a system works. In environmental studies, for example, a conceptual model may delineate all the factors and parameters for determining how a particle move...
Specifying and Refining a Complex Measurement Model.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…
Turbulence modeling for complex hypersonic flows
NASA Technical Reports Server (NTRS)
Huang, P. G.; Coakley, T. J.
1993-01-01
The paper presents results of calculations for a range of 2D turbulent hypersonic flows using two-equation models. The baseline models and the model corrections required for good hypersonic-flow predictions will be illustrated. Three experimental data sets were chosen for comparison. They are: (1) the hypersonic flare flows of Kussoy and Horstman, (2) a 2D hypersonic compression corner flow of Coleman and Stollery, and (3) the ogive-cylinder impinging shock-expansion flows of Kussoy and Horstman. Comparisons with the experimental data have shown that baseline models under-predict the extent of flow separation but over-predict the heat transfer rate near flow reattachment. Modifications to the models are described which remove the above-mentioned deficiencies. Although we have restricted the discussion only to the selected baseline models in this paper, the modifications proposed are universal and can in principle be transferred to any existing two-equation model formulation.
Studying complex chemistries using PLASIMO's global model
NASA Astrophysics Data System (ADS)
Koelman, PMJ; Tadayon Mousavi, S.; Perillo, R.; Graef, WAAD; Mihailova, DB; van Dijk, J.
2016-02-01
The Plasimo simulation software is used to construct a Global Model of a CO2 plasma. A DBD plasma between two coaxial cylinders is considered, which is driven by a triangular input power pulse. The plasma chemistry is studied during this power pulse and in the afterglow. The model consists of 71 species that interact in 3500 reactions. Preliminary results from the model are presented. The model has been validated by comparing its results with those presented in Kozák et al. (Plasma Sources Science and Technology 23(4) p. 045004, 2014). A good qualitative agreement has been reached; potential sources of remaining discrepancies are extensively discussed.
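A global (volume-averaged) model of this kind reduces the plasma chemistry to a coupled set of rate equations for the species densities. The sketch below illustrates the idea on a deliberately tiny, invented two-reaction CO2 chemistry with made-up rate constants and densities — not the 71-species, 3500-reaction set used in the paper:

```python
# Toy volume-averaged ("global") model: the plasma chemistry is reduced
# to rate equations dn_i/dt = sum of (stoichiometry x reaction rate).
# Species set, rate constants and step sizes are invented for illustration.

def derivatives(n, k):
    # Two toy reactions:
    #   e + CO2 -> e + CO + O   (dissociation, rate k[0]*n_e*n_CO2)
    #   CO + O  -> CO2          (recombination, rate k[1]*n_CO*n_O)
    r_diss = k[0] * n["e"] * n["CO2"]
    r_rec = k[1] * n["CO"] * n["O"]
    return {
        "e": 0.0,                    # electrons conserved in this toy set
        "CO2": -r_diss + r_rec,
        "CO": r_diss - r_rec,
        "O": r_diss - r_rec,
    }

def integrate(n0, k, dt=1e-9, steps=50000):
    # Forward-Euler time stepping; a production code would use a stiff solver.
    n = dict(n0)
    for _ in range(steps):
        d = derivatives(n, k)
        for s in n:
            n[s] += dt * d[s]
    return n

n0 = {"e": 1e16, "CO2": 1e22, "CO": 0.0, "O": 0.0}
final = integrate(n0, k=(1e-15, 1e-16))
```

Carbon is conserved by construction (n_CO2 + n_CO stays fixed), which is a useful sanity check on any hand-built chemistry set.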
Multiscale Computational Models of Complex Biological Systems
Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.
2014-01-01
Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247
Information, complexity and efficiency: The automobile model
Allenby, B.
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
Sensitivity Analysis in Complex Plasma Chemistry Models
NASA Astrophysics Data System (ADS)
Turner, Miles
2015-09-01
The purpose of a plasma chemistry model is the prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large--a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: Which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure--the so-called Morris method--to identify sensitive rate constants. We investigate the topical example of helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 "Biomedical Applications of Atmospheric Pressure Plasmas."
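The Morris method referred to above ranks parameters by the mean absolute "elementary effect" of a one-at-a-time perturbation, averaged over random base points. A minimal sketch on an invented toy model follows (a real implementation uses structured trajectories and normalized parameter ranges; the model and coefficients here are illustrative):

```python
import random

def morris_mu_star(f, n_params, n_trajectories=50, delta=0.1, seed=0):
    # Crude Morris screening: for each parameter, average the absolute
    # "elementary effect" of a one-at-a-time step of size delta, taken
    # from random base points in the unit hypercube.
    rng = random.Random(seed)
    mu = [0.0] * n_params
    for _ in range(n_trajectories):
        x = [rng.random() * (1 - delta) for _ in range(n_params)]
        f0 = f(x)
        for i in range(n_params):
            xi = list(x)
            xi[i] += delta
            mu[i] += abs(f(xi) - f0) / delta
    return [m / n_trajectories for m in mu]

# Invented "chemistry model": the output depends strongly on the first
# two rate constants and only weakly on the remaining eight.
def model(k):
    return 10.0 * k[0] + 5.0 * k[1] + 0.01 * sum(k[2:])

mu_star = morris_mu_star(model, n_params=10)
ranked = sorted(range(10), key=lambda i: -mu_star[i])  # most influential first
```

For this linear toy model the screening recovers the coefficients exactly; in a real chemistry model the ranking identifies the small set of rate constants worth refining.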
Modeling Power Systems as Complex Adaptive Systems
Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.
2004-12-30
Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.
Uniform surface complexation approaches to radionuclide sorption modeling
Turner, D.R.; Pabalan, R.T.; Muller, P.; Bertetti, F.P.
1995-12-01
Simplified surface complexation models, based on a uniform set of model parameters have been developed to address complex radionuclide sorption behavior. Existing data have been examined, and interpreted using numerical nonlinear least-squares optimization techniques to determine the necessary binding constants. Simplified modeling approaches have generally proven successful at simulating and predicting radionuclide sorption on (hydr)oxides and aluminosilicates over a wide range of physical and chemical conditions.
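The kind of nonlinear least-squares optimization described can be sketched on a toy example: fitting a single binding constant of a Langmuir-type isotherm (a one-parameter stand-in for a full surface complexation model) to synthetic sorption data by iterative grid refinement. All numbers are illustrative:

```python
def langmuir(c, K, qmax=1.0):
    # Langmuir-type isotherm: a one-parameter stand-in for a full
    # surface complexation model (illustrative only).
    return qmax * K * c / (1.0 + K * c)

# Synthetic "sorption data" generated with a known K = 2.0
conc = [0.1, 0.5, 1.0, 2.0, 5.0]
obs = [langmuir(c, 2.0) for c in conc]

def sse(K):
    # Sum of squared residuals between model and data
    return sum((langmuir(c, K) - q) ** 2 for c, q in zip(conc, obs))

# Crude 1-D nonlinear least squares: repeatedly refine a grid around
# the current best estimate (SSE is unimodal in K for this problem).
lo, hi = 0.01, 10.0
best_K = lo
for _ in range(40):
    grid = [lo + (hi - lo) * i / 20 for i in range(21)]
    best_K = min(grid, key=sse)
    span = (hi - lo) / 20
    lo, hi = max(1e-6, best_K - span), best_K + span
```

A production fit would use a Levenberg-Marquardt-style solver over several binding constants simultaneously; the refinement loop above is only meant to show the objective being minimized.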
Integrated Modeling of Complex Optomechanical Systems
NASA Astrophysics Data System (ADS)
Andersen, Torben; Enmark, Anita
2011-09-01
Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons; first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of groundbased optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top ranked specialists found their way to Kiruna and we believe that these proceedings will prove valuable during much future work.
The sigma model on complex projective superspaces
NASA Astrophysics Data System (ADS)
Candu, Constantin; Mitev, Vladimir; Quella, Thomas; Saleur, Hubert; Schomerus, Volker
2010-02-01
The sigma model on projective superspaces $\mathbb{CP}^{S-1|S}$ gives rise to a continuous family of interacting 2D conformal field theories which are parametrized by the curvature radius R and the theta angle θ. Our main goal is to determine the spectrum of the model, non-perturbatively as a function of both parameters. We succeed in doing so for all open boundary conditions preserving the full global symmetry of the model. In string theory parlance, these correspond to volume-filling branes that are equipped with a monopole line bundle and connection. The paper consists of two parts. In the first part, we approach the problem within the continuum formulation. Combining combinatorial arguments with perturbative studies and some simple free field calculations, we determine a closed formula for the partition function of the theory. This is then tested numerically in the second part. There we extend the proposal of [
A simple model clarifies the complicated relationships of complex networks
NASA Astrophysics Data System (ADS)
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-08-01
Real-world networks such as the Internet and WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits for understanding the networks. Because different models used very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. By this model and the revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation.
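One way such an optimisation-based growth mechanism can be sketched (loosely in the spirit of heuristically optimized trade-off models; the cost function and parameters below are illustrative, not the authors' actual model): each new node attaches to the existing node minimizing a weighted sum of geometric distance and hop count to the root. Varying the weight moves the outcome between star-like and geometric, tree-like networks:

```python
import math
import random

def grow_network(n, alpha, seed=0):
    # Each new node i attaches to the existing node j minimising
    #   alpha * dist(i, j) + hops(j)
    # where hops(j) is j's hop count to the root (node 0).
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    hops = [0]
    parent = [None]
    for i in range(1, n):
        j = min(range(i),
                key=lambda j: alpha * math.dist(pts[i], pts[j]) + hops[j])
        parent.append(j)
        hops.append(hops[j] + 1)
    return parent, hops

# Small alpha: distance barely matters, so every node attaches to the
# root (a star, the ultra-small-world extreme).
parent_star, _ = grow_network(200, alpha=0.01)
# Large alpha: distance dominates, yielding a geometric, tree-like network.
parent_geo, hops_geo = grow_network(200, alpha=1000.0)
```

A single tunable trade-off thus interpolates between qualitatively different network classes, which is the point the abstract makes about one model producing many traits.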
Improving phylogenetic regression under complex evolutionary models.
Mazel, Florent; Davies, T Jonathan; Georges, Damien; Lavergne, Sébastien; Thuiller, Wilfried; Peres-Neto, Pedro R
2016-02-01
Phylogenetic Generalized Least Squares (PGLS) is the tool of choice among phylogenetic comparative methods to measure the correlation between species features such as morphological and life-history traits or niche characteristics. In its usual form, it assumes that the residual variation follows a homogeneous model of evolution across the branches of the phylogenetic tree. Since a homogeneous model of evolution is unlikely to be realistic in nature, we explored the robustness of the phylogenetic regression when this assumption is violated. We did so by simulating a set of traits under various heterogeneous models of evolution, and evaluating the statistical performance (type I error [the percentage of tests based on samples that incorrectly rejected a true null hypothesis] and power [the percentage of tests that correctly rejected a false null hypothesis]) of classical phylogenetic regression. We found that PGLS has good power but unacceptable type I error rates. This finding is important since this method has been increasingly used in comparative analyses over the last decade. To address this issue, we propose a simple solution based on transforming the underlying variance-covariance matrix to adjust for model heterogeneity within PGLS. We suggest that heterogeneous rates of evolution might be particularly prevalent in large phylogenetic trees, while most current approaches assume a homogeneous rate of evolution. Our analysis demonstrates that overlooking rate heterogeneity can result in inflated type I errors, thus misleading comparative analyses. We show that it is possible to correct for this bias even when the underlying model of evolution is not known a priori. PMID:27145604
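The proposed fix, transforming the variance-covariance matrix before regression, can be illustrated in the special case of a diagonal covariance (no shared branch lengths), where generalized least squares reduces to weighting each species by the inverse of its rate-dependent variance. A full PGLS would instead whiten with the Cholesky factor of the non-diagonal phylogenetic covariance matrix; the trait data below are invented:

```python
def wls_slope(x, y, v):
    # GLS with a diagonal covariance matrix V = diag(v): whitening each
    # observation by 1/sqrt(v_i) reduces to weighted least squares.
    # (Full PGLS whitens with the Cholesky factor of the non-diagonal
    # phylogenetic covariance matrix instead.)
    w = [1.0 / vi for vi in v]
    sw = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yb = sum(wi * yi for wi, yi in zip(w, y)) / sw
    num = sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
    den = sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x))
    return num / den

# Invented trait data with true slope about 2; the last species evolves
# under a faster rate, so its residual variance is inflated and the
# transformation down-weights it.
x = [0.0, 1.0, 2.0, 3.0]
y = [0.1, 2.1, 3.9, 6.1]
v = [1.0, 1.0, 1.0, 4.0]
slope = wls_slope(x, y, v)
```

The regression coefficient is the same algebra as ordinary least squares, just applied after the variance transformation, which is why the correction slots into existing PGLS machinery.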
A musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.
Optimal Complexity of Nonlinear Rainfall-Runoff Models
NASA Astrophysics Data System (ADS)
Schoups, G.; Vrugt, J.; van de Giesen, N.; Fenicia, F.
2008-12-01
Identification of an appropriate level of model complexity to accurately translate rainfall into runoff remains an unresolved issue. The model has to be complex enough to generate accurate predictions, but not too complex such that its parameters cannot be reliably estimated from the data. Earlier work with linear models (Jakeman and Hornberger, 1993) concluded that a model with 4 to 5 parameters is sufficient. However, more recent results with a nonlinear model (Vrugt et al., 2006) suggest that 10 or more parameters may be identified from daily rainfall-runoff time-series. The goal here is to systematically investigate optimal complexity of nonlinear rainfall-runoff models, yielding accurate models with identifiable parameters. Our methodology consists of four steps: (i) a priori specification of a family of model structures from which to pick an optimal one, (ii) parameter optimization of each model structure to estimate empirical or calibration error, (iii) estimation of parameter uncertainty of each calibrated model structure, and (iv) estimation of prediction error of each calibrated model structure. For the first step we formulate a flexible model structure that allows us to systematically vary the complexity with which physical processes are simulated. The second and third steps are achieved using a recently developed Markov chain Monte Carlo algorithm (DREAM), which minimizes calibration error yielding optimal parameter values and their underlying posterior probability density function. Finally, we compare several methods for estimating prediction error of each model structure, including statistical methods based on information criteria and split-sample calibration-validation. Estimates of parameter uncertainty and prediction error are then used to identify optimal complexity for rainfall-runoff modeling, using data from dry and wet MOPEX catchments as case studies.
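Step (iv), estimating prediction error via information criteria, can be sketched with polynomial models of increasing complexity fitted by ordinary least squares; AIC trades goodness of fit against the number of parameters. The data and candidate models below are illustrative, not the hydrological model family of the study:

```python
import math

def polyfit(x, y, deg):
    # Least-squares polynomial fit via the normal equations, solved by
    # Gaussian elimination with partial pivoting (fine for tiny systems).
    n = deg + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c]
                              for c in range(r + 1, n))) / A[r][r]
    return coef

def aic(x, y, deg):
    # AIC = n*ln(SSE/n) + 2k balances calibration error against
    # the number of parameters k = deg + 1.
    coef = polyfit(x, y, deg)
    sse = sum((yi - sum(cj * xi ** j for j, cj in enumerate(coef))) ** 2
              for xi, yi in zip(x, y))
    return len(x) * math.log(sse / len(x)) + 2 * (deg + 1)

# Synthetic data from a linear process plus small fixed "noise"
x = [0.5 * i for i in range(20)]
noise = [0.03 * (-1) ** i for i in range(20)]
y = [2.0 + 1.5 * xi + e for xi, e in zip(x, noise)]
lin = polyfit(x, y, 1)
scores = {d: aic(x, y, d) for d in (0, 1, 2, 3)}
```

The under-parameterized constant model scores far worse than the linear one, while extra polynomial terms reduce the residual only marginally and pay the 2-per-parameter penalty, mirroring the accuracy-versus-identifiability trade-off discussed above.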
Blueprints for Complex Learning: The 4C/ID-Model.
ERIC Educational Resources Information Center
van Merrienboer, Jeroen J. G.; Clark, Richard E.; de Croock, Marcel B. M.
2002-01-01
Describes the four-component instructional design system (4C/ID-model) developed for the design of training programs for complex skills. Discusses the structure of training blueprints for complex learning and associated instructional methods, focusing on learning tasks, supportive information, just-in-time information, and part-task practice.…
Classrooms as Complex Adaptive Systems: A Relational Model
ERIC Educational Resources Information Center
Burns, Anne; Knox, John S.
2011-01-01
In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…
Prequential Analysis of Complex Data with Adaptive Model Reselection†
Clarke, Jennifer; Clarke, Bertrand
2010-01-01
In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e. the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias–variance tradeoff in statistical modeling. PMID:20617104
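The core idea, a convex combination of predictors whose weights adapt to observed forecasting performance, can be sketched with exponential weighting of two fixed predictors. The paper's ACAPs additionally re-choose the models in each average at every time step, which is omitted here; the series and forecasters are synthetic:

```python
import math

def combine_online(stream, pred_a, pred_b, eta=0.5):
    # Convex combination of two forecasters; each forecaster's weight
    # decays exponentially with its accumulated squared prediction error.
    wa = wb = 1.0
    preds = []
    for t, y in enumerate(stream):
        pa, pb = pred_a(t), pred_b(t)
        w = wa / (wa + wb)
        preds.append(w * pa + (1 - w) * pb)   # predict before seeing y
        wa *= math.exp(-eta * (pa - y) ** 2)
        wb *= math.exp(-eta * (pb - y) ** 2)
    return preds, wa / (wa + wb)

# Synthetic series; predictor A matches the generating process, B is biased.
data = [math.sin(0.3 * t) for t in range(100)]
preds, w_final = combine_online(
    data,
    pred_a=lambda t: math.sin(0.3 * t),
    pred_b=lambda t: math.sin(0.3 * t) + 0.5,
)
```

The weight on the better forecaster converges toward one, so late predictions track the series closely even though the combination started at an uninformed 50/50 split.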
Size and complexity in model financial systems.
Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M
2012-11-01
The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020
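The counterparty-default channel in such a model can be sketched as a simple loss cascade on an invented interbank exposure matrix: a bank fails once losses from failed counterparties exceed its capital, and the failure of a large, well-connected bank propagates further than that of a small one. The paper's other channels (liquidity hoarding, asset-price contagion, confidence effects) are omitted, and all balance-sheet numbers are made up:

```python
def cascade(exposures, capital, initial_default):
    # exposures[i][j]: amount bank i is owed by bank j. Bank i fails once
    # its losses from failed counterparties exceed its capital buffer.
    n = len(capital)
    failed = {initial_default}
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in failed and \
                    sum(exposures[i][j] for j in failed) > capital[i]:
                failed.add(i)
                changed = True
    return failed

# Invented 5-bank system: bank 0 is large and well connected (banks 1 and 2
# hold big claims on it); banks 3 and 4 are small and peripheral.
exposures = [
    [0, 1, 1, 1, 1],
    [4, 0, 0, 0, 0],
    [4, 0, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 1, 0, 0],
]
capital = [5, 3, 3, 3, 3]
big_hit = cascade(exposures, capital, initial_default=0)    # -> {0, 1, 2}
small_hit = cascade(exposures, capital, initial_default=3)  # -> {3}
```

Even in this toy setting the collapse of the well-connected bank takes down its creditors while the peripheral bank's failure stays contained, the qualitative asymmetry the abstract highlights.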
Effect of Fengycin, a Lipopeptide Produced by Bacillus subtilis, on Model Biomembranes
Deleu, Magali; Paquot, Michel; Nylander, Tommy
2008-01-01
Fengycin is a biologically active lipopeptide produced by several Bacillus subtilis strains. The lipopeptide is known to develop antifungal activity against filamentous fungi and to have hemolytic activity 40-fold lower than that of surfactin, another lipopeptide produced by B. subtilis. The aim of this work is to use complementary biophysical techniques to reveal the mechanism of membrane perturbation by fengycin. These include: 1), the Langmuir trough technique in combination with Brewster angle microscopy to study the lipopeptide penetration into monolayers; 2), ellipsometry to investigate the adsorption of fengycin onto supported lipid bilayers; 3), differential scanning calorimetry to determine the thermotropic properties of lipid bilayers in the presence of fengycin; and 4), cryogenic transmission electron microscopy, which provides information on the structural organization of the lipid/lipopeptide system. From these experiments, the mechanism of fengycin action appears to be based on a two-state transition controlled by the lipopeptide concentration. One state is the monomeric, not deeply anchored and nonperturbing lipopeptide, and the other state is a buried, aggregated form, which is responsible for membrane leakage and bioactivity. The mechanism, thus, appears to be driven mainly by the physicochemical properties of the lipopeptide, i.e., its amphiphilic character and affinity for lipid bilayers. PMID:18178659
Orientation of Tie-Lines in the Phase Diagram of DOPC:DPPC:Cholesterol Model Biomembranes
Uppamoochikkal, Pradeep; Tristram-Nagle, Stephanie; Nagle, John F.
2010-01-01
We report the direction of tie-lines of coexisting phases in a ternary diagram of DOPC:DPPC:Cholesterol lipid bilayers, which has been a system of interest in the discussion of biological rafts. For coexisting Ld and Lo phases we find that the orientation angle α of the tie-lines increases as the cholesterol concentration increases and it also increases as temperature increases from T=15 °C to T=30 °C. Results at lower cholesterol concentrations support the existence of a different 2-phase coexistence region of Ld and So phases and the existence of a 3-phase region separating the two 2-phase regions. Our method uses the X-ray lamellar D-spacings observed in oriented bilayers as a function of varying hydration. Although this method does not obtain the ends of the tie-lines, it gives precise values (±1°) of their angles α in the ternary phase diagram. PMID:20968281
Amirkavei, Mooud; Kinnunen, Paavo K J
2016-02-01
In order to obtain molecular-level insight into the biophysics of the apoptosis-promoting phospholipid 1-palmitoyl-2-azelaoyl-sn-glycero-3-phosphocholine (PazePC), we studied its partitioning into different lipid phases by isothermal titration calorimetry (ITC). To aid the interpretation of these data for PazePC, we additionally characterized by both ITC and fluorescence spectroscopy the fluorescent phospholipid analog 1-palmitoyl-2-{6-[(7-nitro-2-1,3-benzoxadiazol-4-yl)amino]hexanoyl}-sn-glycero-3-phosphocholine (NBD-C6-PC), which similarly to PazePC can adopt an extended conformation in lipid bilayers. With the NBD-hexanoyl chain reversing its direction and extending out of the bilayer into the aqueous space, 7-nitro-2,1,3-benzoxadiazol-4-yl (NBD) becomes accessible to the water-soluble dithionite, which reduces it to a non-fluorescent product. Our results suggest that these phospholipid derivatives first partition and penetrate into the outer bilayer leaflet of liquid-disordered phase liposomes composed of unsaturated 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC). Upon increasing the PazePC and NBD-C6-PC content to 2 mol% of the total lipid, flip-flop from the outer into the inner bilayer leaflet commences. Interestingly, the presence of 40 mol% cholesterol in POPC liposomes did not abrogate the partitioning of PazePC into the liquid-ordered phase. In contrast, only insignificant partitioning of PazePC and NBD-C6-PC into sphingomyelin/cholesterol liposomes was evident, highlighting a specific membrane permeability barrier function of this particular lipid composition against oxidatively truncated PazePC, and emphasizing the importance of detailed characterization of the biophysical properties of membranes found in different cellular organelles in terms of providing barriers for lipid-mediated cellular signals in processes such as apoptosis. Our data suggest that NBD-C6-PC represents a useful fluorescent probe for studying the cellular dynamics of oxidized phospholipid species such as PazePC. PMID:26656184
Barzyk, Wanda; Rogalska, Ewa; Więcław-Czapla, Katarzyna
2013-01-01
Three antimicrobial peptides derived from bovine milk proteins were examined with regard to penetration into insoluble monolayers formed with 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) or 1,2-dipalmitoyl-sn-glycero-3-phospho-rac-(1-glycerol) sodium salt (DPPG). Effects on surface pressure (Π) and electric surface potential (ΔV) were measured, Π with a platinum Wilhelmy plate and ΔV with a vibrating plate. The penetration measurements were performed under stationary diffusion conditions and upon compression of the monolayers. The two types of measurement showed greatly different effects of the peptide-lipid interactions. Results of the stationary penetration show that the peptide interactions with the DPPC monolayer are weak, repulsive, and nonspecific, while the interactions with the DPPG monolayer are significant, attractive, and specific. These results are in accord with the fact that antimicrobial peptides disrupt bacterial membranes (negatively charged) while showing no significant effect on host membranes (neutral). No such discrimination was revealed by the compression isotherms. The latter indicate that squeezing the penetrant out of the monolayer upon compression does not allow the penetration equilibrium to be established, so the monolayer remains supersaturated with the penetrant and shows an under-equilibrium orientation over practically the entire compression range. PMID:24455264
Reassessing Geophysical Models of the Bushveld Complex in 3D
NASA Astrophysics Data System (ADS)
Cole, J.; Webb, S. J.; Finn, C.
2012-12-01
Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al., 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri et al., 2001; Webb et al., 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions, which is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account the effects of variations in the geometry and geophysical properties of lithologies in a full three-dimensional sense, and these variations affect the shape and amplitude of the calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential-field modelling software, including the crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less
A mechanistic model of the cysteine synthase complex.
Feldman-Salit, Anna; Wirtz, Markus; Hell, Ruediger; Wade, Rebecca C
2009-02-13
Plants and bacteria assimilate and incorporate inorganic sulfur into organic compounds such as the amino acid cysteine. Cysteine biosynthesis involves a bienzyme complex, the cysteine synthase (CS) complex. The CS complex is composed of the enzymes serine acetyl transferase (SAT) and O-acetyl-serine-(thiol)-lyase (OAS-TL). Although it is experimentally known that formation of the CS complex influences cysteine production, the exact biological function of the CS complex, the mechanism of reciprocal regulation of the constituent enzymes, and the structure of the complex are still poorly understood. Here, we used docking techniques to construct a model of the mitochondrial CS complex of Arabidopsis thaliana. The three-dimensional structures of the enzymes were modeled by comparative techniques. The C-termini of SAT, missing in the template structures but crucial for CS formation, were modeled de novo. Diffusional encounter complexes of SAT and OAS-TL were generated by rigid-body Brownian dynamics simulation. By incorporating experimental constraints during Brownian dynamics simulation, we identified complexes consistent with experiments. Selected encounter complexes were refined by molecular dynamics simulation to generate structures of bound complexes. We found that although a stoichiometric ratio of six OAS-TL dimers to one SAT hexamer in the CS complex is geometrically possible, binding energy calculations suggest that, consistent with experiments, a ratio of only two OAS-TL dimers to one SAT hexamer is more likely. Computational mutagenesis of residues in OAS-TL that are experimentally significant for CS formation hindered the association of the enzymes due to a less-favorable electrostatic binding free energy. Since the enzymes from A. thaliana were expressed in Escherichia coli, the cross-species binding of SAT and OAS-TL from E. coli and A. thaliana was explored. The results showed that reduced cysteine production might be due to a cross-binding of A. thaliana
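The constraint-guided selection of encounter complexes can be illustrated with a minimal sketch. The anchor coordinates, cutoff distance, and mock candidate poses below are invented for illustration; the idea is simply that experimentally derived distance constraints prune the diffusional encounter complexes produced by a rigid-body Brownian dynamics run.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "anchor" point on the SAT C-terminus; in the real study
# the constraints come from mutagenesis and binding experiments.
sat_anchor = np.array([0.0, 0.0, 0.0])

def satisfies_constraint(otl_anchor, dmax=15.0):
    """Keep an encounter complex only if the partner anchor lies within
    dmax (angstroms) of the SAT anchor -- a crude stand-in for the
    experimental constraints applied during Brownian dynamics."""
    return np.linalg.norm(otl_anchor - sat_anchor) <= dmax

# Mock OAS-TL anchor positions from a pretend BD run (uniform box).
candidates = rng.uniform(-40.0, 40.0, size=(1000, 3))
kept = [c for c in candidates if satisfies_constraint(c)]
print(f"{len(kept)} of {len(candidates)} encounter complexes kept")
```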
NASA Astrophysics Data System (ADS)
He, Bing; Yuan, Lan; Dai, Wenbing; Gao, Wei; Zhang, Hua; Wang, Xueqing; Fang, Weigang; Zhang, Qiang
2016-03-01
Interest in the use of nanotechnology for biomedical applications is increasing at an unprecedented rate. Nanosystems intended for clinical use must generally cross the primary biological barrier formed by epithelial cells. However, little is currently known about the influence of the dynamic bio-adhesion of nanosystems on bio-membranes, or on endocytosis and transcytosis. This was investigated here using polymer nanoparticles (PNs) and MDCK epithelial cells as the models. Firstly, the adhesion of PNs on cell membranes was found to be time-dependent, with a shift in both location and dispersion pattern: from the lateral adhesion of mainly mono-dispersed PNs initially to the apical coverage of PN aggregates later. Then, it was interesting to observe that the dynamic bio-adhesion of PNs affected only their endocytosis but not their transcytosis. Importantly, the endocytosis of PNs was not a constant process. A GM1-dependent CDE (caveolae-dependent endocytosis) pathway was dominant in the preliminary stage, followed by the co-existence of a CME (clathrin-mediated endocytosis) pathway for the PN aggregates at a later stage, in accordance with the adhesion features of the PNs, suggesting that the PN adhesion patterns modify the endocytosis pathways. Next, PN adhesion was found to affect the structure of cell junctions by altering extra- and intra-cellular calcium levels, leading to enhanced paracellular transport of small molecules, though not enough to obviously increase the passage of the PNs themselves. Finally, FRAP and other techniques all demonstrated a clear impact of PN adhesion on the membrane conformation, independent of the adhesion location and time, which might lower the threshold for the internalization of PNs, even of their aggregates. Generally, these findings confirm that the transport pathway mechanism of PNs through epithelial cells is rather
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system concepts are proposed, developed, and tested within the system design process. The National Aeronautics and Space Administration (NASA) uses many different types of M&S approaches for predicting human-system interactions, especially early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities, including airflow, flight-path, aircraft, scheduling, human performance (HPM), and bioinformatics models, among a host of others, that are used for predicting whether proposed designs will meet specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural/task models. The challenge to model comprehensibility grows as the number of integrated models and the requisite fidelity of the procedural sets increase. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model's performance. This will be exemplified in a recent MIDAS v5 application model, and plans for future model refinements will be presented.
Modeling of protein binary complexes using structural mass spectrometry data
Kamal, J.K. Amisha; Chance, Mark R.
2008-01-01
In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684
Geometric modeling of subcellular structures, organelles, and multiprotein complexes
Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei
2013-01-01
Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and for curvature estimation. Methods for volumetric meshing are also presented. Because developments in computer science and mathematics offer multiple algorithmic choices at each stage of geometric modeling, we discuss the rationale behind the design and selection of the various algorithms. Analytical models are designed to test the computational accuracy and convergence of the proposed algorithms. Finally, we select a set of six cryo-electron microscopy data sets representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and to explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
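The surface-area and enclosed-volume calculations on a triangle mesh can be sketched in a few lines (a minimal stand-in for the paper's algorithms, checked against an analytical shape in the same spirit as the paper's accuracy tests):

```python
import numpy as np

def area_and_volume(vertices, triangles):
    """Surface area and enclosed volume of a closed, consistently
    outward-oriented triangle mesh.  Area sums |cross|/2 per face;
    volume applies the divergence theorem via the signed tetrahedra
    spanned by each face and the origin."""
    v = np.asarray(vertices, dtype=float)
    area, volume = 0.0, 0.0
    for i, j, k in triangles:
        a, b, c = v[i], v[j], v[k]
        area += 0.5 * np.linalg.norm(np.cross(b - a, c - a))
        volume += np.dot(a, np.cross(b, c)) / 6.0
    return area, abs(volume)

# Analytical test shape: unit right tetrahedron,
# exact volume 1/6 and exact area 3/2 + sqrt(3)/2.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
area, vol = area_and_volume(verts, faces)
print(area, vol)
```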
Between complexity of modelling and modelling of complexity: An essay on econophysics
NASA Astrophysics Data System (ADS)
Schinckus, C.
2013-09-01
Econophysics is an emerging field dealing with complex systems and emergent properties. A deeper analysis of themes studied by econophysicists shows that research conducted in this field can be decomposed into two different computational approaches: “statistical econophysics” and “agent-based econophysics”. This methodological scission complicates the definition of the complexity used in econophysics. Therefore, this article aims to clarify what kind of emergences and complexities we can find in econophysics in order to better understand, on one hand, the current scientific modes of reasoning this new field provides; and on the other hand, the future methodological evolution of the field.
Using fMRI to Test Models of Complex Cognition
ERIC Educational Resources Information Center
Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam
2008-01-01
This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…
Tips on Creating Complex Geometry Using Solid Modeling Software
ERIC Educational Resources Information Center
Gow, George
2008-01-01
Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…
Network model of bilateral power markets based on complex networks
NASA Astrophysics Data System (ADS)
Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li
2014-06-01
The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that captures its characteristics is urgently needed. Such a model has been lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price-advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is somewhat long. Moreover, the topological stability of the BPT network is tested. The results show that the network is topologically robust to random member failures but fragile against deliberate attacks, and that it can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
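The network statistics cited above (clustering coefficient and average path length) can be computed directly from an adjacency structure; below is a stdlib-only sketch on a toy graph (the four-node network is invented purely for illustration):

```python
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all connected node pairs (BFS)."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t, d in dist.items():
            if t != s:
                total += d
                pairs += 1
    return total / pairs

def clustering(adj, u):
    """Local clustering coefficient: fraction of neighbour pairs linked."""
    nbrs = list(adj[u])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Toy transaction network: hub node 0 trades with everyone (a crude
# "price advantage" pattern); members 1 and 2 also trade directly.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(avg_path_length(adj))   # 4/3
print(clustering(adj, 0))     # 1/3
```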
Evapotranspiration model of different complexity for multiple land cover types
Technology Transfer Automated Retrieval System (TEKTRAN)
A comparison between half-hourly and daily measured and computed evapotranspiration (ET) using three models of different complexity, namely the Priestley-Taylor (P-T), reference Penman-Monteith (P-M), and Common Land Model (CLM) was conducted using three AmeriFlux sites under different land cover an...
Zebrafish as an emerging model for studying complex brain disorders
Kalueff, Allan V.; Stewart, Adam Michael; Gerlai, Robert
2014-01-01
The zebrafish (Danio rerio) is rapidly becoming a popular model organism in pharmacogenetics and neuropharmacology. Both larval and adult zebrafish are currently used to increase our understanding of brain function, dysfunction, and their genetic and pharmacological modulation. Here we review the developing utility of zebrafish in the analysis of complex brain disorders (including, for example, depression, autism, psychoses, drug abuse and cognitive disorders), also covering zebrafish applications towards the goal of modeling major human neuropsychiatric and drug-induced syndromes. We argue that zebrafish models of complex brain disorders and drug-induced conditions have become a rapidly emerging critical field in translational neuropharmacology research. PMID:24412421
Simulating complex intracellular processes using object-oriented computational modelling.
Johnson, Colin G; Goldman, Jacki P; Gullick, William J
2004-11-01
The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation. PMID:15302205
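A toy illustration of the object-oriented style described above, using hypothetical Receptor/Ligand classes (the EGFR/EGF names and the binding probability are illustrative only): each object encapsulates its own state, and a system-level property, here receptor occupancy, emerges from the interactions between objects.

```python
import random

class Molecule:
    """Base class: every simulated species carries a name."""
    def __init__(self, name):
        self.name = name

class Receptor(Molecule):
    """A receptor object that can bind at most one ligand object."""
    def __init__(self):
        super().__init__("EGFR")          # hypothetical receptor
        self.bound = None

    def try_bind(self, ligand, p_bind, rng):
        # Binding succeeds stochastically if the site is free.
        if self.bound is None and rng.random() < p_bind:
            self.bound = ligand

rng = random.Random(42)
receptors = [Receptor() for _ in range(100)]
ligands = [Molecule("EGF") for _ in range(100)]  # hypothetical ligand

# One mixing step: each ligand encounters one random receptor.
for lig in ligands:
    rng.choice(receptors).try_bind(lig, p_bind=0.5, rng=rng)

occupied = sum(r.bound is not None for r in receptors)
print(f"{occupied} of 100 receptors occupied after one mixing step")
```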
González-Henríquez, C M; Pizarro-Guerra, G C; Córdova-Alarcón, E N; Sarabia-Vallejos, M A; Terraza-Inostroza, C A
2016-03-01
Hydrogel films possess the ability to retain water and deliver it to a phospholipid bilayer composed mainly of DPPC (1,2-dipalmitoyl-sn-glycero-3-phosphocholine); moisture in the medium favors the stability of an artificial biomembrane subjected to repetitive heating cycles. This hypothesis holds when the hydrogel film used as a scaffold presents a flat surface morphology and a high capacity for water release. In contrast, when the sample presents a wrinkled topography (periodic undulations), the free lateral molecular movement of the bilayer is reduced, disfavoring the occurrence of clear phases and phase transitions with applied temperature. Hydrogel films were prepared using HEMA (2-hydroxyethyl methacrylate) with different crosslinking agents and initiators. The reaction mixture was spread over hydrophilic silicon wafers using the spin-coating technique. The resulting films were then exposed to UV light, favoring polymer chain crosslinking and interactions between the hydrogel and the substrate; this process is also known to generate a tensile stress mismatch between different hydrogel strata, producing an out-of-plane net force that generates ordered undulations or collapsed crystals at the surface. DPPC bilayers were then deposited on the hydrogel using the Langmuir-Blodgett technique. The surface morphology was characterized in order to clarify the behavior of these films. The data corroborate the stability of the DPPC membrane, making it possible to detect phases and phase transitions by ellipsometric methods and atomic force microscopy owing to the high hydration level. This system is intended for use as a biosensor through the insertion of transmembrane proteins or peptides that detect minimal variations of some analyte in the environment; the stability and behavior of the artificial biomembrane are fundamental for this purpose. PMID:26855412
Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.
Taha, Mohamed; Khan, Imran; Coutinho, João A P
2016-04-01
With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps protect human cells against the oxidative damage implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as the primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass-electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2±0.1) K. The overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted by the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solution. PMID:26827296
Multiscale Model for the Assembly Kinetics of Protein Complexes.
Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao
2016-02-01
The assembly of proteins into high-order complexes is a general mechanism for these biomolecules to implement their versatile functions in cells. Natural evolution has developed various assembly pathways for specific protein complexes to maintain their stability and proper activities. Previous studies have provided numerous examples of the misassembly of protein complexes leading to severe biological consequences. Although the research focusing on protein complexes has started to move beyond the static representation of quaternary structures to the dynamic aspect of their assembly, the current understanding of the assembly mechanism of protein complexes is still largely limited. To tackle this problem, we developed a new multiscale modeling framework. This framework combines a lower-resolution rigid-body-based simulation with a higher-resolution Cα-based simulation method so that protein complexes can be assembled with both structural details and computational efficiency. We applied this model to a homotrimer and a heterotetramer as simple test systems. Consistent with experimental observations, our simulations indicated very different kinetics between protein oligomerization and dimerization. The formation of protein oligomers is a multistep process that is much slower than dimerization but thermodynamically more stable. Moreover, we showed that even the same protein quaternary structure can have very diverse assembly pathways under different binding constants between subunits, which is important for regulating the functions of protein complexes. Finally, we revealed that the binding between subunits in a complex can be synergistically strengthened during assembly without considering allosteric regulation or conformational changes. Therefore, our model provides a useful tool to understand the general principles of protein complex assembly. PMID:26738810
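The qualitative difference between dimerization and multistep oligomerization can be illustrated with a simple mass-action sketch (not the paper's multiscale model; the rate constant and units are arbitrary): the trimer must pass through a dimer intermediate, so its formation lags behind dimerization.

```python
def simulate(kon, steps=20000, dt=1e-3):
    """Forward-Euler integration of mass-action assembly:
    monomer + monomer -> dimer, then dimer + monomer -> trimer."""
    m, d, t = 1.0, 0.0, 0.0       # concentrations, arbitrary units
    for _ in range(steps):
        r1 = kon * m * m          # dimerization rate
        r2 = kon * d * m          # trimer-completion rate
        m += dt * (-2 * r1 - r2)  # two monomers per dimer, one per trimer step
        d += dt * (r1 - r2)
        t += dt * r2
    return m, d, t

m, d, t = simulate(kon=1.0)
print(f"monomer={m:.3f}  dimer={d:.3f}  trimer={t:.3f}")
```

Note that total monomer mass m + 2d + 3t is conserved exactly by this scheme, which is a handy sanity check on the rate expressions.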
Pedigree models for complex human traits involving the mitochondrial genome.
Schork, N J; Guo, S W
1993-01-01
Recent biochemical and molecular-genetic discoveries concerning variations in human mtDNA have suggested a role for mtDNA mutations in a number of human traits and disorders. Although the importance of these discoveries cannot be emphasized enough, the complex natures of mitochondrial biogenesis, mutant mtDNA phenotype expression, and the maternal inheritance pattern exhibited by mtDNA transmission make it difficult to develop models that can be used routinely in pedigree analyses to quantify and test hypotheses about the role of mtDNA in the expression of a trait. In the present paper, we describe complexities inherent in mitochondrial biogenesis and genetic transmission and show how these complexities can be incorporated into appropriate mathematical models. We offer a variety of likelihood-based models which account for the complexities discussed. The derivation of our models is meant to stimulate the construction of statistical tests for putative mtDNA contribution to a trait. Results of simulation studies which make use of the proposed models are described. The results of the simulation studies suggest that, although pedigree models of mtDNA effects can be reliable, success in mapping chromosomal determinants of a trait does not preclude the possibility that mtDNA determinants exist for the trait as well. Shortcomings inherent in the proposed models are described in an effort to expose areas in need of additional research. PMID:8250048
Emulator-assisted data assimilation in complex models
NASA Astrophysics Data System (ADS)
Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas
2016-08-01
Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty in the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy for alleviating this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
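The Lorenz-95 system used in these experiments is compact enough to reproduce in a few lines; below is a standard fourth-order Runge-Kutta integration (the parameter values are the conventional ones for this model, not necessarily those of the study):

```python
import numpy as np

def lorenz95_step(x, forcing=8.0, dt=0.05):
    """One RK4 step of the Lorenz-95 model,
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    with cyclic indexing handled by np.roll."""
    def f(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Conventional 40-variable state, perturbed away from the x_i = F
# fixed point so the chaotic dynamics develop.
x = np.full(40, 8.0)
x[19] += 0.01
for _ in range(200):          # 10 model time units
    x = lorenz95_step(x)
print(x[:3])
```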
Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling
NASA Technical Reports Server (NTRS)
Mog, Robert A.
1997-01-01
Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.
Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach
Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...
Complex groundwater flow systems as traveling agent models
Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis
2014-01-01
Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and we provide a set of new partial differential equations for groundwater flow. PMID:25337455
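The 1/f character of a time series is typically diagnosed from the slope of its log-log power spectrum; a sketch using a synthetic pink-noise signal (the data-generation step is illustrative, standing in for the field data of the study):

```python
import numpy as np

def spectral_slope(signal, dt=1.0):
    """Least-squares slope of log power vs log frequency;
    a 1/f process gives a slope near -1."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, dt)[1:]             # drop zero frequency
    power = np.abs(np.fft.rfft(signal))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return slope

# Synthetic check: shape white noise into a 1/f spectrum by scaling
# its Fourier amplitudes by f^(-1/2), so power scales as 1/f.
rng = np.random.default_rng(1)
n = 4096
white = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, 1.0)
f[0] = 1.0                                         # avoid divide-by-zero at DC
pink = np.fft.irfft(white / np.sqrt(f), n=n)
print(spectral_slope(pink))                        # close to -1
```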
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model
On explicit algebraic stress models for complex turbulent flows
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Speziale, C. G.
1992-01-01
Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
Synchronization Experiments With A Global Coupled Model of Intermediate Complexity
NASA Astrophysics Data System (ADS)
Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin
2013-04-01
In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solutions of all other models in the ensemble. The goal is to obtain, through a proper choice of connection strengths, a synchronized state that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water, and momentum, and the ocean is integrated using weighted-average surface fluxes. In particular, we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble-mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
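The nudging scheme described above can be illustrated with a toy example: two imperfect Lorenz-63 models, differing in one parameter, are nudged toward each other with connection strength k. A minimal Python sketch with illustrative parameter values, not the intermediate-complexity model of the study:

```python
import numpy as np

def lorenz(state, sigma, rho, beta):
    # Standard Lorenz-63 right-hand side.
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run_supermodel(k, steps=20000, dt=0.001):
    """Integrate two imperfect Lorenz models nudged toward each other with
    connection strength k; return the mean state difference over the second
    half of the run (first half discarded as transient)."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal(3)
    b = a + 0.1 * rng.standard_normal(3)  # slightly different initial state
    diffs = []
    for i in range(steps):
        # Two "imperfect" models: slightly different rho parameters.
        da = lorenz(a, 10.0, 28.0, 8.0 / 3.0) + k * (b - a)
        db = lorenz(b, 10.0, 30.0, 8.0 / 3.0) + k * (a - b)
        a = a + dt * da
        b = b + dt * db
        if i >= steps // 2:
            diffs.append(np.linalg.norm(a - b))
    return np.mean(diffs)

uncoupled = run_supermodel(k=0.0)
coupled = run_supermodel(k=50.0)
print(coupled, uncoupled)
```

With k = 0 the chaotic trajectories drift apart to attractor-scale separation; a sufficiently strong connection keeps the two solutions synchronized, with the parameter mismatch leaving only a small residual difference.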
Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models
NASA Technical Reports Server (NTRS)
Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.
1996-01-01
An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of a K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.
A Complex Systems Model Approach to Quantified Mineral Resource Appraisal
Gettings, M.E.; Bultman, M.W.; Fisher, F.S.
2004-01-01
For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
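The first modeling stage above, a cognitive map with signed (+1, 0, -1) links evaluated under scenarios, can be sketched in a few lines. The concepts, link signs, and sigmoid squashing function below are illustrative assumptions, not taken from the appraisal models themselves:

```python
import numpy as np

def run_fcm(W, state, steps=20):
    """Iterate a fuzzy cognitive map: concept activations are repeatedly
    pushed through the signed link matrix W and squashed to (0, 1)."""
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-(W.T @ state)))  # sigmoid squashing
    return state

# Illustrative 4-concept map with (+1, 0, -1) links; W[i, j] is the link
# from concept i to concept j (concepts and signs are made up):
# 0: favorable geology, 1: exploration, 2: resource estimate, 3: land-use conflict
W = np.array([
    [0,  1,  1,  0],   # geology promotes exploration and the estimate
    [0,  0,  1,  0],   # exploration sharpens the estimate
    [0,  0,  0,  1],   # a large estimate raises conflict
    [0, -1,  0,  0],   # conflict suppresses exploration
], dtype=float)

init = np.array([1.0, 0.5, 0.5, 0.5])
base = run_fcm(W, init)

# Scenario: remove the suppressing link and re-evaluate the map.
W2 = W.copy()
W2[3, 1] = 0.0
alt = run_fcm(W2, init)
print(alt[1] > base[1])  # exploration settles higher without suppression
```

Comparing scenarios in this way yields the ranking of link importance described above: links whose removal most changes the settled activations matter most.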
Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith
2008-09-01
The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
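A random Boolean network of the kind used above as an idealized complex-system model can be prototyped compactly. This sketch (node count, connectivity, and random truth tables chosen for illustration) demonstrates the finite-state property that guarantees every trajectory ends on an attractor:

```python
import random

def random_boolean_network(n, k, seed=0):
    """Random Boolean network: each node reads k randomly chosen inputs
    through a random truth table (a standard idealized model)."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    # Synchronous update: every node applies its truth table at once.
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(len(state))
    )

# Deterministic dynamics on a finite state space must eventually revisit a
# state, closing an attractor cycle:
inputs, tables = random_boolean_network(n=10, k=2)
state = (0, 1) * 5
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state, inputs, tables)
    t += 1
print("attractor length:", t - seen[state])
```

Robustness questions of the kind analyzed in the study amount to asking how such attractors change when truth tables or wiring are perturbed.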
SEE Rate Estimation: Model Complexity and Data Requirements
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2008-01-01
Statistical methods outlined in [Ladbury, TNS2007] can be generalized for Monte Carlo rate calculation methods. Two Monte Carlo approaches: a) the rate is based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; b) the rate is calculated from a model fit to SEE data, with statistical analysis very similar to the CREME96 case. Information theory allows simultaneous consideration of multiple models of differing complexity: a) the model with the lowest AIC usually has the greatest predictive power; b) model averaging using AIC weights may give better performance if several models perform similarly well; and c) rates can be bounded at a given confidence level over multiple models, as well as over the parameter space of a single model.
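The AIC-based comparison sketched in the points above can be made concrete: AIC = 2k - 2 ln L penalizes model complexity, and Akaike weights turn AIC differences into relative model probabilities. The log-likelihoods and parameter counts below are hypothetical fit results, not SEE data:

```python
import math

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L; lower is better."""
    return 2 * n_params - 2 * log_likelihood

def akaike_weights(aics):
    """Relative likelihood of each model, normalized to sum to 1."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical fits of three rate models to the same data set:
# (log-likelihood, number of free parameters)
fits = [(-10.0, 2), (-9.5, 4), (-9.4, 6)]
aics = [aic(ll, k) for ll, k in fits]
weights = akaike_weights(aics)
print(aics)
print(weights)
```

Here the extra parameters of the larger models buy too little likelihood, so the simplest model carries the largest weight; a weight-averaged rate (point b) or a bound over all three models (point c) follows directly from these weights.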
Multikernel linear mixed models for complex phenotype prediction.
Weissbrod, Omer; Geiger, Dan; Rosset, Saharon
2016-07-01
Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
Turing instability in reaction-diffusion models on complex networks
NASA Astrophysics Data System (ADS)
Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya
2016-09-01
In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
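For a two-species reaction-diffusion system, the stability analysis described above reduces to checking each eigenvalue of the network Laplacian. A minimal sketch, where the reaction Jacobian, graph size, and edge probability are illustrative rather than taken from the paper:

```python
import numpy as np

def er_laplacian(n, p, seed=0):
    """Graph Laplacian L = D - A of an Erdős-Rényi random graph."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < p, 1)
    A = (upper | upper.T).astype(float)
    return np.diag(A.sum(axis=1)) - A

def turing_unstable(J, Du, Dv, eigs):
    """A homogeneous steady state with stable reaction Jacobian J is
    Turing-unstable if, for some Laplacian eigenvalue lam, the matrix
    J - lam*diag(Du, Dv) acquires an eigenvalue with positive real part.
    For a 2x2 system with negative trace this happens exactly when the
    determinant turns negative."""
    for lam in eigs:
        if np.linalg.det(J - lam * np.diag([Du, Dv])) < 0:
            return True
    return False

# Illustrative activator-inhibitor Jacobian, stable without diffusion
# (trace = -3 < 0, det = 2 > 0):
J = np.array([[1.0, -2.0],
              [3.0, -4.0]])
eigs = np.linalg.eigvalsh(er_laplacian(60, 0.1))
print(turing_unstable(J, Du=1.0, Dv=1.0, eigs=eigs))   # equal diffusion: stable
print(turing_unstable(J, Du=0.05, Dv=1.0, eigs=eigs))  # fast inhibitor: unstable
```

With equal diffusion coefficients no eigenvalue destabilizes the steady state; a sufficiently fast inhibitor opens an unstable band of eigenvalues, and whether a given network's spectrum intersects that band depends on its architecture, which is the dependence the paper maps out.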
Complex solutions for the scalar field model of the Universe
NASA Astrophysics Data System (ADS)
Lyons, Glenn W.
1992-08-01
The Hartle-Hawking proposal is implemented for Hawking's scalar field model of the Universe. For this model the complex saddle-point geometries required by the semiclassical approximation to the path integral cannot simply be deformed into real Euclidean and real Lorentzian sections. Approximate saddle points are constructed which are fully complex and have contours of real Lorentzian evolution. The semiclassical wave function is found to give rise to classical spacetimes at late times and extra terms in the Hamilton-Jacobi equation do not contribute significantly to the potential.
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications. PMID:21599256
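The scheme's key property, a unique topology from randomly initialized parameters, can be mimicked in a toy version. The construction below (random node positions and per-node ripple radii as the initialized parameters) is an illustrative simplification, not the authors' model:

```python
import numpy as np

def ripple_network(n, seed=0):
    """Toy deterministic ripple-spreading construction (illustrative, not the
    authors' exact scheme): node coordinates and per-node ripple radii are
    the randomly initialized parameters; given them, the topology follows
    deterministically, linking i and j if the ripple spreading from either
    node reaches the other before it fades."""
    rng = np.random.default_rng(seed)
    xy = rng.random((n, 2))                # node positions in the unit square
    radius = 0.1 + 0.2 * rng.random(n)     # how far each node's ripple travels
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    A = (d < radius[:, None]) | (d < radius[None, :])
    np.fill_diagonal(A, False)
    return A.astype(int)

A1 = ripple_network(100)
A2 = ripple_network(100)
print((A1 == A2).all())      # same parameters -> identical topology
print(A1.sum() // 2, "edges")
```

The full parameter vector (here: 2n coordinates plus n radii) describes the topology exactly, which is the memory argument of point (iii): it is far smaller than the n-by-n adjacency matrix for large n.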
A Compact Model for the Complex Plant Circadian Clock
De Caluwé, Joëlle; Xiao, Qiying; Hermans, Christian; Verbruggen, Nathalie; Leloup, Jean-Christophe; Gonze, Didier
2016-01-01
The circadian clock is an endogenous timekeeper that allows organisms to anticipate and adapt to the daily variations of their environment. The plant clock is an intricate network of interlocked feedback loops, in which transcription factors regulate each other to generate oscillations with expression peaks at specific times of the day. Over the last decade, mathematical modeling approaches have been used to understand the inner workings of the clock in the model plant Arabidopsis thaliana. Those efforts have produced a number of models of ever increasing complexity. Here, we present an alternative model that combines a low number of equations and parameters, similar to the very earliest models, with the complex network structure found in more recent ones. This simple model describes the temporal evolution of the abundance of eight clock gene mRNA/protein and captures key features of the clock on a qualitative level, namely the entrained and free-running behaviors of the wild type clock, as well as the defects found in knockout mutants (such as altered free-running periods, lack of entrainment, or changes in the expression of other clock genes). Additionally, our model produces complex responses to various light cues, such as extreme photoperiods and non-24 h environmental cycles, and can describe the control of hypocotyl growth by the clock. Our model constitutes a useful tool to probe dynamical properties of the core clock as well as clock-dependent processes. PMID:26904049
Simple and complex models for studying muscle function in walking.
Pandy, Marcus G
2003-09-29
While simple models can be helpful in identifying basic features of muscle function, more complex models are needed to discern the functional roles of specific muscles in movement. In this paper, two very different models of walking, one simple and one complex, are used to study how muscle forces, gravitational forces and centrifugal forces (i.e. forces arising from motion of the joints) combine to produce the pattern of force exerted on the ground. Both the simple model and the complex one predict that muscles contribute significantly to the ground force pattern generated in walking; indeed, both models show that muscle action is responsible for the appearance of the two peaks in the vertical force. The simple model, an inverted double pendulum, suggests further that the first and second peaks are due to net extensor muscle moments exerted about the knee and ankle, respectively. Analyses based on a much more complex, muscle-actuated simulation of walking are in general agreement with these results; however, the more detailed model also reveals that both the hip extensor and hip abductor muscles contribute significantly to vertical motion of the centre of mass, and therefore to the appearance of the first peak in the vertical ground force, in early single-leg stance. This discrepancy in the model predictions is most probably explained by the difference in model complexity. First, movements of the upper body in the sagittal plane are not represented properly in the double-pendulum model, which may explain the anomalous result obtained for the contribution of a hip-extensor torque to the vertical ground force. Second, the double-pendulum model incorporates only three of the six major elements of walking, whereas the complex model is fully 3D and incorporates all six gait determinants. In particular, pelvic list occurs primarily in the frontal plane, so there is the potential for this mechanism to contribute significantly to the vertical ground force, especially
Surface complexation modeling of inositol hexaphosphate sorption onto gibbsite.
Ruyter-Hooley, Maika; Larsson, Anna-Carin; Johnson, Bruce B; Antzutkin, Oleg N; Angove, Michael J
2015-02-15
The sorption of Inositol hexaphosphate (IP6) onto gibbsite was investigated using a combination of adsorption experiments, (31)P solid-state MAS NMR spectroscopy, and surface complexation modeling. Adsorption experiments conducted at four temperatures showed that IP6 sorption decreased with increasing pH. At pH 6, IP6 sorption increased with increasing temperature, while at pH 10 sorption decreased as the temperature was raised. (31)P MAS NMR measurements at pH 3, 6, 9 and 11 produced spectra with broad resonance lines that could be de-convoluted with up to five resonances (+5, 0, -6, -13 and -21ppm). The chemical shifts suggest the sorption process involves a combination of both outer- and inner-sphere complexation and surface precipitation. Relative intensities of the observed resonances indicate that outer-sphere complexation is important in the sorption process at higher pH, while inner-sphere complexation and surface precipitation are dominant at lower pH. Using the adsorption and (31)P MAS NMR data, IP6 sorption to gibbsite was modeled with an extended constant capacitance model (ECCM). The adsorption reactions that best described the sorption of IP6 to gibbsite included two inner-sphere surface complexes and one outer-sphere complex: ≡AlOH + IP₆¹²⁻ + 5H⁺ ↔ ≡Al(IP₆H₄)⁷⁻ + H₂O, ≡3AlOH + IP₆¹²⁻ + 6H⁺ ↔ ≡Al₃(IP₆H₃)⁶⁻ + 3H₂O, ≡2AlOH + IP₆¹²⁻ + 4H⁺ ↔ (≡AlOH₂)₂²⁺(IP₆H₂)¹⁰⁻. The inner-sphere complex involving three surface sites may be considered to be equivalent to a surface precipitate. Thermodynamic parameters were obtained from equilibrium constants derived from surface complexation modeling. Enthalpies for the formation of inner-sphere surface complexes were endothermic, while the enthalpy for the outer-sphere complex was exothermic. The entropies for the proposed sorption reactions were large and positive suggesting that changes in solvation of species play a major role in driving
(Relatively) Simple Models of Flow in Complex Terrain
NASA Astrophysics Data System (ADS)
Taylor, Peter; Weng, Wensong; Salmon, Jim
2013-04-01
The term, "complex terrain" includes both topography and variations in surface roughness and thermal properties. The scales that are affected can differ and there are some advantages to modeling them separately. In studies of flow in complex terrain we have developed 2 D and 3 D models of atmospheric PBL boundary layer flow over roughness changes, appropriate for longer fetches than most existing models. These "internal boundary layers" are especially important for understanding and predicting wind speed variations with distance from shorelines, an important factor for wind farms around, and potentially in, the Great Lakes. The models can also form a base for studying the wakes behind woodlots and wind turbines. Some sample calculations of wind speed evolution over water and the reduced wind speeds behind an isolated woodlot, represented simply in terms of an increase in surface roughness, will be presented. Note that these models can also include thermal effects and non-neutral stratification. We can use the model to deal with 3-D roughness variations and will describe applications to both on-shore and off-shore situations around the Great Lakes. In particular we will show typical results for hub height winds and indicate the length of over-water fetch needed to get the full benefit of siting turbines over water. The linear Mixed Spectral Finite-Difference (MSFD) and non-linear (NLMSFD) models for surface boundary-layer flow over complex terrain have been extended to planetary boundary-layer flow over topography This allows for their use for larger scale regions and increased heights. The models have been applied to successfully simulate the Askervein hill experimental case and we will show examples of applications to more complex terrain, typical of some Canadian wind farms. Output from the model can be used as an alternative to MS-Micro, WAsP or other CFD calculations of topographic impacts for input to wind farm design software.
Predictive modelling of complex agronomic and biological systems.
Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J
2013-09-01
Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. PMID:23777295
Computer models of complex multiloop branched pipeline systems
NASA Astrophysics Data System (ADS)
Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.
2013-11-01
This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks when the latter are considered as single hydraulic systems. On the basis of multivariant calculations, the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of their new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on the example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
Modeling the propagation of mobile malware on complex networks
NASA Astrophysics Data System (ADS)
Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue
2016-08-01
In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follows the power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that guarantees the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below the unity, and the global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity conduce to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
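In degree-based mean-field theory, the spreading threshold for such epidemic models is the ratio of the first to the second moment of the degree distribution, which shrinks as the network grows more heterogeneous. A short sketch (the sampling scheme, degree cutoffs, and exponents are illustrative) reproduces the qualitative conclusion that lower power-law exponents favor spreading:

```python
import numpy as np

def sis_threshold(degrees):
    """Degree-based mean-field spreading threshold <k>/<k^2>: below it the
    malware-free equilibrium is stable; heterogeneity (large <k^2>) lowers it."""
    k = np.asarray(degrees, dtype=float)
    return k.mean() / (k ** 2).mean()

def power_law_degrees(n, gamma, kmin=2, kmax=1000, seed=0):
    """Sample n degrees from P(k) ~ k^-gamma on kmin..kmax (illustrative)."""
    rng = np.random.default_rng(seed)
    ks = np.arange(kmin, kmax + 1)
    p = ks ** (-gamma)
    return rng.choice(ks, size=n, p=p / p.sum())

t_hetero = sis_threshold(power_law_degrees(100000, gamma=2.3))
t_homo = sis_threshold(power_law_degrees(100000, gamma=3.5))
print(t_hetero, t_homo)
print(t_hetero < t_homo)  # lower exponent -> heavier tail -> easier spreading
```

For exponents below 3 the second moment diverges with network size, driving the threshold toward zero, which matches the paper's finding that lower power-law exponents benefit malware spreading.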
Petri net model for analysis of concurrently processed complex algorithms
NASA Technical Reports Server (NTRS)
Stoughton, John W.; Mielke, Roland R.
1986-01-01
This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
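The basic Petri-net mechanics, tokens in places enabling and consumed by transitions, can be captured in a few lines. The places and transitions below model a made-up two-operation join in the spirit of data-driven decomposition, not the architecture of the paper:

```python
# Minimal Petri-net interpreter (illustrative): places hold tokens, and a
# transition is enabled when each of its input places holds enough tokens.
def enabled(marking, transition):
    pre, _ = transition
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, transition):
    # Consume tokens from input places, deposit tokens into output places.
    pre, post = transition
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Two concurrent operations feeding a join (made-up place/transition names):
marking = {"in1": 1, "in2": 1, "mid1": 0, "mid2": 0, "out": 0}
t_op1 = ({"in1": 1}, {"mid1": 1})
t_op2 = ({"in2": 1}, {"mid2": 1})
t_join = ({"mid1": 1, "mid2": 1}, {"out": 1})

for t in (t_op1, t_op2, t_join):
    if enabled(marking, t):
        marking = fire(marking, t)
print(marking)  # the join fires only after both operations complete
```

The data/control flow of an algorithm maps onto such nets directly: operations become transitions, data dependencies become places, and concurrency is wherever two transitions are enabled by disjoint input places.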
Modeling complex diffusion mechanisms in L12-structured compounds
NASA Astrophysics Data System (ADS)
Zacate, M. O.; Lape, M.; Stufflebeam, M.; Evenson, W. E.
2010-04-01
We report on a procedure developed to create stochastic models of hyperfine interactions for complex diffusion mechanisms and demonstrate its application to simulate perturbed angular correlation spectra for the divacancy and 6-jump cycle diffusion mechanisms in L12-structured compounds.
Performance of Random Effects Model Estimators under Complex Sampling Designs
ERIC Educational Resources Information Center
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
The Complex Model of Television Viewing and Educational Achievement.
ERIC Educational Resources Information Center
Razel, Micha
2001-01-01
Meta-analyzed data from six national studies of elementary through high school students to determine the relationship between amount of television viewing and educational achievement. According to a complex viewing-achievement model, for small amounts of viewing, achievement increased with viewing, but as viewing increased beyond a certain point,…
Conceptual Complexity, Teaching Style and Models of Teaching.
ERIC Educational Resources Information Center
Joyce, Bruce; Weil, Marsha
The focus of this paper is on the relative roles of personality and training in enabling teachers to carry out the kinds of complex learning models which are envisioned by curriculum reformers in the social sciences. The paper surveys some of the major research done in this area and concludes that: 1) Most teachers do not manifest the complex…
Surface complexation modeling of americium sorption onto volcanic tuff.
Ding, M; Kelkar, S; Meijer, A
2014-10-01
Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. It was assumed in applying the model to volcanic rocks from Yucca Mountain, that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/Kg) values were calculated as function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants, derived in this study, allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways. PMID:24963803
Fischer and Schrock Carbene Complexes: A Molecular Modeling Exercise
ERIC Educational Resources Information Center
Montgomery, Craig D.
2015-01-01
An exercise in molecular modeling that demonstrates the distinctive features of Fischer and Schrock carbene complexes is presented. Semi-empirical calculations (PM3) demonstrate the singlet ground electronic state, restricted rotation about the C-Y bond, the positive charge on the carbon atom, and hence, the electrophilic nature of the Fischer…
Catastrophe, Chaos, and Complexity Models and Psychosocial Adjustment to Disability.
ERIC Educational Resources Information Center
Parker, Randall M.; Schaller, James; Hansmann, Sandra
2003-01-01
Rehabilitation professionals may unknowingly rely on stereotypes and specious beliefs when dealing with people with disabilities, despite the formulation of theories that suggest new models of the adjustment process. Suggests that Catastrophe, Chaos, and Complexity Theories hold considerable promise in this regard. This article reviews these…
The Complexity of Developmental Predictions from Dual Process Models
ERIC Educational Resources Information Center
Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.
2011-01-01
Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…
A random interacting network model for complex networks
NASA Astrophysics Data System (ADS)
Goswami, Bedartha; Shekatkar, Snehal M.; Rheinwalt, Aljoscha; Ambika, G.; Kurths, Jürgen
2015-12-01
We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms: degree-based preferential node selection and degree-assortative link placement are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems.
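The two-step construction can be sketched with stand-in choices for both functions: degree-proportional selection fitness, and a linkage probability that decreases with the difference in the nodes' degree ranks (their relative importance). Both functions are assumptions for illustration, not the paper's definitions:

```python
import numpy as np

def rain_links(deg_a, deg_b, n_links, seed=0):
    """Illustrative RAIN-style sketch: (i) pick one node from each network
    with probability proportional to degree (preferential selection), then
    (ii) keep the link with probability given by how similar the two nodes'
    normalized degree ranks are (assortative placement)."""
    rng = np.random.default_rng(seed)
    pa = deg_a / deg_a.sum()
    pb = deg_b / deg_b.sum()
    # Normalized degree ranks in [0, 1] as a proxy for relative importance.
    rank_a = np.argsort(np.argsort(deg_a)) / (len(deg_a) - 1)
    rank_b = np.argsort(np.argsort(deg_b)) / (len(deg_b) - 1)
    links = set()
    while len(links) < n_links:
        i = rng.choice(len(deg_a), p=pa)
        j = rng.choice(len(deg_b), p=pb)
        if rng.random() < 1.0 - abs(rank_a[i] - rank_b[j]):  # linkage probability
            links.add((i, j))
    return links

# Two small degree sequences standing in for a pair of networks:
deg_a = np.array([1, 2, 2, 3, 5, 8], dtype=float)
deg_b = np.array([1, 1, 2, 4, 6], dtype=float)
links = rain_links(deg_a, deg_b, n_links=5)
print(sorted(links))
```

Swapping in different fitness or linkage functions at these two steps is exactly the hypothesis testing about interaction mechanisms that the model is designed to support.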
On Using Meta-Modeling and Multi-Modeling to Address Complex Problems
ERIC Educational Resources Information Center
Abu Jbara, Ahmed
2013-01-01
Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
NASA Technical Reports Server (NTRS)
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
Boolean modeling of collective effects in complex networks.
Norrell, Johannes; Socolar, Joshua E S
2009-06-01
Complex systems are often modeled as Boolean networks in attempts to capture their logical structure and reveal its dynamical consequences. Approximating the dynamics of continuous variables by discrete values and Boolean logic gates may, however, introduce dynamical possibilities that are not accessible to the original system. We show that large random networks of variables coupled through continuous transfer functions often fail to exhibit the complex dynamics of corresponding Boolean models in the disordered (chaotic) regime, even when each individual function appears to be a good candidate for Boolean idealization. A suitably modified Boolean theory explains the behavior of systems in which information does not propagate faithfully down certain chains of nodes. Model networks incorporating calculated or directly measured transfer functions reported in the literature on transcriptional regulation of genes are described by the modified theory. PMID:19658525
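The Boolean idealization discussed above replaces continuous dynamics with lookup tables over binary node states. A generic random Boolean network with synchronous updates can be sketched as follows; the construction (k random inputs per node, a random truth table over the 2**k input states) is the standard Kauffman-style setup, not the specific networks analyzed in the paper.

```python
import random

def random_boolean_network(n, k=2, seed=1):
    """Build a random Boolean network: each node gets k random input
    nodes and a random lookup table over the 2**k input combinations."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node reads its inputs' current states
    and looks up its next value in its Boolean rule table."""
    new = []
    for ins, table in zip(inputs, tables):
        idx = 0
        for b in ins:
            idx = (idx << 1) | state[b]
        new.append(table[idx])
    return new
```

Iterating `step` from a random initial state and watching whether perturbations grow or die out is the usual way the ordered/disordered (chaotic) regimes mentioned in the abstract are probed.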
Entropy, complexity, and Markov diagrams for random walk cancer models
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-01-01
The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
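The abstract's pipeline, a row-stochastic transition matrix whose steady state is matched to the autopsy distribution and whose entropy is then computed, can be sketched generically. This is an illustration of the two standard computations involved (stationary distribution by power iteration, Shannon entropy in bits), with made-up numbers rather than the paper's fitted matrices.

```python
import math

def steady_state(P, iters=2000):
    """Power-iterate a row-stochastic transition matrix P to its
    stationary distribution (assumes the chain is ergodic)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy(pi):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in pi if p > 0)
```

For a toy two-site chain `P = [[0.9, 0.1], [0.5, 0.5]]` the stationary distribution is (5/6, 1/6); a flatter steady state gives higher entropy, which is the sense in which "high entropy" cancers spread over many sites more evenly.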
Complex 2D matrix model and geometrical map on the complex-Nc plane
NASA Astrophysics Data System (ADS)
Nawa, Kanabu; Ozaki, Sho; Nagahiro, Hideko; Jido, Daisuke; Hosaka, Atsushi
2013-08-01
We study the parameter dependence of the internal structure of resonance states by formulating a complex two-dimensional (2D) matrix model, where the two dimensions represent two levels of resonances. We calculate a critical value of the parameter at which a "nature transition" with character exchange occurs between two resonance states, from the viewpoint of geometry on complex-parameter space. Such a critical value is useful for identifying the internal structure of resonance states with variation of the parameter in the system. We apply the model to analyze the internal structure of hadrons with variation of the color number N_c from infinity to the realistic value 3. By regarding 1/N_c as the variable parameter in our model, we calculate a critical color number of the nature transition between hadronic states in terms of a quark-antiquark pair and a mesonic molecule as exotics from the geometry on the complex-N_c plane. For large-N_c effective theory, we employ the chiral Lagrangian induced by holographic QCD with a D4/D8/\overline{D8} multi-D-brane system in type IIA superstring theory.
Modeling the respiratory chain complexes with biothermokinetic equations - the case of complex I.
Heiske, Margit; Nazaret, Christine; Mazat, Jean-Pierre
2014-10-01
The mitochondrial respiratory chain plays a crucial role in energy metabolism and its dysfunction is implicated in a wide range of human diseases. In order to understand the global expression of local mutations in the rate of oxygen consumption or in the production of adenosine triphosphate (ATP) it is useful to have a mathematical model in which the changes in a given respiratory complex are properly modeled. Our aim in this paper is to provide thermodynamics-respecting and structurally simple equations to represent the kinetics of each isolated complex which can, assembled in a dynamical system, also simulate the behavior of the respiratory chain, as a whole, under a large set of different physiological and pathological conditions. On the example of the reduced nicotinamide adenine dinucleotide (NADH)-ubiquinol-oxidoreductase (complex I) we analyze the suitability of different types of rate equations. Based on our kinetic experiments we show that very simple rate laws, as those often used in many respiratory chain models, fail to describe the kinetic behavior when applied to a wide concentration range. This led us to adapt rate equations containing the essential parameters of enzyme kinetics, maximal velocities and Henri-Michaelis-Menten-like constants (KM and KI), to satisfactorily simulate these data. PMID:25064016
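A rate law of the kind the abstract describes, built from maximal velocities and Henri-Michaelis-Menten-like constants, can be illustrated with the generic reversible form below. This is the textbook reversible Michaelis-Menten equation, not the specific complex I equations derived in the paper; the function and parameter names are my own.

```python
def reversible_mm_rate(s, p, vmax_f, vmax_r, km_s, km_p):
    """Generic reversible Henri-Michaelis-Menten rate law:

        v = (Vf*S/KmS - Vr*P/KmP) / (1 + S/KmS + P/KmP)

    s, p: substrate and product concentrations
    vmax_f, vmax_r: forward and reverse maximal velocities
    km_s, km_p: Michaelis constants for substrate and product

    The net rate is positive in the forward direction, vanishes at
    the concentration ratio where the two numerator terms balance,
    and saturates at high concentrations, behavior a simple
    irreversible law cannot reproduce over a wide range.
    """
    return (vmax_f * s / km_s - vmax_r * p / km_p) / (1 + s / km_s + p / km_p)
```

Fitting the V and KM parameters of such equations to titration data over a wide concentration range is the kind of adaptation the authors report for complex I.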
Complex Behavior in Simple Models of Biological Coevolution
NASA Astrophysics Data System (ADS)
Rikvold, Per Arne
We explore the complex dynamical behavior of simple predator-prey models of biological coevolution that account for interspecific and intraspecific competition for resources, as well as adaptive foraging behavior. In long kinetic Monte Carlo simulations of these models we find quite robust 1/f-like noise in species diversity and population sizes, as well as power-law distributions for the lifetimes of individual species and the durations of quiet periods of relative evolutionary stasis. In one model, based on the Holling Type II functional response, adaptive foraging produces a metastable low-diversity phase and a stable high-diversity phase.
Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
Miller, Gregory H.; Forest, Gregory
2011-12-22
We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.
Modeling of Carbohydrate Binding Modules Complexed to Cellulose
Nimlos, M. R.; Beckham, G. T.; Bu, L.; Himmel, M. E.; Crowley, M. F.; Bomble, Y. J.
2012-01-01
Modeling results are presented for the interaction of two carbohydrate binding modules (CBMs) with cellulose. The family 1 CBM from Trichoderma reesei's Cel7A cellulase was modeled using molecular dynamics to confirm that this protein selectively binds to the hydrophobic (100) surface of cellulose fibrils and to determine the energetics and mechanisms for locating this surface. Modeling was also conducted of binding of the family 4 CBM from the CbhA complex from Clostridium thermocellum. There is a cleft in this protein, which may accommodate a cellulose chain that is detached from crystalline cellulose. This possibility is explored using molecular dynamics.
Hill, Renee J.; Chopra, Pradeep; Richardi, Toni
2012-01-01
Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338
Bridging Mechanistic and Phenomenological Models of Complex Biological Systems
Transtrum, Mark K.; Qiu, Peng
2016-01-01
The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48 parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior. PMID:27187545
Modeling of ion complexation and extraction using substructural molecular fragments
Solov'ev; Varnek; Wipff
2000-05-01
A substructural molecular fragment (SMF) method has been developed to model the relationships between the structure of organic molecules and their thermodynamic parameters of complexation or extraction. The method is based on the splitting of a molecule into fragments, and on calculations of their contributions to a given property. It uses two types of fragments: atom/bond sequences and "augmented atoms" (atoms with their nearest neighbors). The SMF approach is tested on physical properties of C2-C9 alkanes (boiling point, molar volume, molar refraction, heat of vaporization, surface tension, melting point, critical temperature, and critical pressures) and on octanol/water partition coefficients. Then, it is applied to the assessment of (i) complexation stability constants of alkali cations with crown ethers and phosphoryl-containing podands, and of beta-cyclodextrins with mono- and 1,4-disubstituted benzenes, and (ii) solvent extraction constants for the complexes of uranyl cation by phosphoryl-containing ligands. PMID:10850791
Complexity and robustness in hypernetwork models of metabolism.
Pearcy, Nicole; Chuzhanova, Nadia; Crofts, Jonathan J
2016-10-01
Metabolic reaction data is commonly modelled using a complex network approach, whereby nodes represent the chemical species present within the organism of interest, and connections are formed between those nodes participating in the same chemical reaction. Unfortunately, such an approach provides an inadequate description of the metabolic process in general, as a typical chemical reaction will involve more than two nodes, thus risking oversimplification of the system of interest in a potentially significant way. In this paper, we employ a complex hypernetwork formalism to investigate the robustness of bacterial metabolic hypernetworks by extending the concept of a percolation process to hypernetworks. Importantly, this provides a novel method for determining the robustness of these systems and thus for quantifying their resilience to random attacks/errors. Moreover, we performed a site percolation analysis on a large cohort of bacterial metabolic networks and found that hypernetworks that evolved in more variable environments displayed increased levels of robustness and topological complexity. PMID:27354314
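The robustness measure described above, a percolation process where nodes are removed at random and the surviving connected structure is measured, can be sketched for an ordinary graph; the paper extends the same idea to hypernetworks, which this illustration does not attempt. Function names and the adjacency-dict representation are my own.

```python
import random

def largest_component(adj):
    """Size of the largest connected component of an adjacency dict
    mapping node -> list of neighbor nodes."""
    seen, best = set(), 0
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            comp += 1
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, comp)
    return best

def site_percolation(adj, keep_frac, seed=0):
    """Site percolation: keep each node independently with probability
    keep_frac, then return the largest surviving component as a
    fraction of the original network size."""
    rng = random.Random(seed)
    kept = {v for v in adj if rng.random() < keep_frac}
    sub = {v: [w for w in adj[v] if w in kept] for v in kept}
    return largest_component(sub) / len(adj) if kept else 0.0
```

Sweeping `keep_frac` from 1 down to 0 and plotting the surviving fraction gives the robustness curve; networks whose curve stays high under heavy node loss are the "robust" ones in the abstract's sense.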
Parameter uncertainty and interaction in complex environmental models
NASA Astrophysics Data System (ADS)
Spear, Robert C.; Grieb, Thomas M.; Shang, Nong
1994-11-01
Recently developed models for the estimation of risks arising from the release of toxic chemicals from hazardous waste sites are inherently complex both structurally and parametrically. To better understand the impact of uncertainty and interaction in the high-dimensional parameter spaces of these models, the set of procedures termed regional sensitivity analysis has been extended and applied to the groundwater pathway of the MMSOILS model. The extension consists of a tree-structured density estimation technique which allows the characterization of complex interaction in that portion of the parameter space which gives rise to successful simulation. Results show that the parameter space can be partitioned into small, densely populated regions and relatively large, sparsely populated regions. From the high-density regions one can identify the important or controlling parameters as well as the interaction between parameters in different local areas of the space. This new tool can provide guidance in the analysis and interpretation of site-specific application of these complex models.
Mathematical modelling of complex contagion on clustered networks
NASA Astrophysics Data System (ADS)
O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James
2015-09-01
The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the "complex contagion" effects of social reinforcement are important in such diffusion, in contrast to "simple" contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe, as in Centola's experiments, faster diffusion on clustered topologies than on random networks.
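The social-reinforcement mechanism the abstract describes, adoption only after two or more neighbors have adopted, is commonly simulated as a threshold contagion. The sketch below is a generic deterministic threshold model for illustration, not the paper's analytical approach or its stochastic variant.

```python
def complex_contagion(adj, seeds, threshold=2):
    """Deterministic threshold contagion on an adjacency dict.

    A node adopts once at least `threshold` of its neighbors have
    adopted (social reinforcement). Setting threshold=1 reduces this
    to simple, disease-like contagion, where a single exposure
    suffices. Iterates to a fixed point and returns the adopter set.
    """
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in adopted:
                if sum(1 for w in adj[v] if w in adopted) >= threshold:
                    adopted.add(v)
                    changed = True
    return adopted
```

On a triangle with a pendant node, seeding two triangle vertices lets the third adopt (two reinforcing neighbors) while the pendant node, with only one neighbor, never does; this is exactly why triangles help complex contagion but are irrelevant to simple contagion.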
An Adaptive Complex Network Model for Brain Functional Networks
Gomez Portillo, Ignacio J.; Gleiser, Pablo M.
2009-01-01
Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small world topological structure, characterized by highly integrated modules connected sparsely by long range links. Recent studies showed that other topological properties such as the degree distribution and the presence (or absence) of a hierarchical structure are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network scale-free non-hierarchical networks with well defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure, with a truncated power law degree distribution. PMID:19738902
Yue, Tongtao; Sun, Mingbin; Zhang, Shuai; Ren, Hao; Ge, Baosheng; Huang, Fang
2016-06-29
After the synthesis of transmembrane peptides/proteins (TMPs), their insertion into a lipid bilayer is a fundamental biophysical process. Moreover, correct orientations of TMPs in membranes determine the normal functions they play in relevant cellular activities. In this study, we have established a method to determine the orientation of TMPs in membranes. This method is based on the use of TAMRA, a fluorescent molecule with high extinction coefficient and fluorescence quantum yield, to act as a fluorescent probe and tryptophan as a quencher. Fluorescence quenching indicates that the model peptide displays membrane orientation with the N terminus outside and the C terminus inside dominantly. To elucidate the underlying mechanism, we performed molecular dynamics simulations. Our simulations suggest that both membrane insertion and the orientation of TMPs are determined by complex competition and cooperation between hydrophobic and electrostatic interactions. After initial membrane anchorage via electrostatic interactions of the charged residues with the lipid headgroups, further insertion is hindered by unfavorable interactions between the polar residues and lipid tails, which result in an energy barrier. Nevertheless, such a finite energy barrier is reduced by hydrophobic interactions between the non-polar residues and lipid tails. Moreover, a transient terminal flipping was captured to facilitate the membrane insertion. Once the inserted terminus reaches the opposite lipid headgroups, the hydrophobic interactions cooperate with the electrostatic interactions to complete the membrane insertion process. PMID:27302083
NASA Astrophysics Data System (ADS)
Jing, Benxin; Lan, Nan; Zhu, Y. Elaine
2013-03-01
An explosion in research activity using ionic liquids (ILs) as new "green" chemicals in several chemical and biomedical processes has created an urgent need to understand their impact in terms of their transport and toxicity towards aquatic organisms. Although a few experimental toxicology studies have reported that some ionic liquids become more toxic as their hydrophobicity increases while others do not, our understanding of the molecular-level mechanism of IL toxicity remains poor. In this talk, we will discuss our recent study of the interaction of ionic liquids with model cell membranes. We have found that ILs can induce morphological change in lipid bilayers when a critical concentration is exceeded, leading to swelling and tube-like formation of lipid bilayers. The critical concentration shows a strong dependence on the length of the hydrocarbon tails and on hydrophobic counterions. By SAXS, Langmuir-Blodgett (LB), and fluorescence microscopy measurements, we have confirmed that tube-like lipid complexes result from the insertion of ILs with long hydrocarbon chains to minimize the hydrophobic interaction with aqueous media. This finding could give insight into the modification and adoption of ILs for the engineering of micro-organisms.
A Model Study of Complex Behavior in the Belousov-Zhabotinskii Reaction.
NASA Astrophysics Data System (ADS)
Lindberg, David Mark
1988-12-01
We have studied the complex oscillatory behavior in a model of the Belousov-Zhabotinskii (BZ) reaction in a continuously-fed stirred tank reactor (CSTR). The model consisted of a set of nonlinear ordinary differential equations derived from a reduced mechanism of the chemical system. These equations were integrated numerically on a computer, which yielded the concentrations of the constituent chemicals as functions of time. In addition, solutions were tracked as functions of a single parameter, the stability of the solutions was determined, and bifurcations of the solutions were located and studied. The intent of this study was to use this BZ model to explore further a region of complex oscillatory behavior found in experimental investigations, the most thorough of which revealed an alternating periodic-chaotic (P-C) sequence of states. A P-C sequence was discovered in the model which showed the same qualitative features as the experimental sequence. In order to better understand the P-C sequence, a detailed study was conducted in the vicinity of the P-C sequence, with two experimentally accessible parameters as control variables. This study mapped out the bifurcation sets, and included examination of the dynamics of the stable periodic, unstable periodic, and chaotic oscillatory motion. Observations made from the model results revealed a rough symmetry which suggests a new way of looking at the P-C sequence. Other nonlinear phenomena uncovered in the model were boundary and interior crises, several codimension-two bifurcations, and similarities in the shapes of areas of stability for periodic orbits in two-parameter space. Each earlier model study of this complex region involved only a limited one-parameter scan and had limited success in producing agreement with experiments. In contrast, for those regions of complex behavior that have been studied experimentally, the observations agree qualitatively with our model results. Several new predictions of the model
Cx-02 Program, workshop on modeling complex systems
Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.
2003-01-01
This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, geology and geophysics work has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking of complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years in the past. Not only do geologists think in the abstract by virtue of the vast time spans; most of the evidence is out of sight. A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.
Lateral organization of complex lipid mixtures from multiscale modeling
NASA Astrophysics Data System (ADS)
Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.
2010-02-01
The organizational properties of complex lipid mixtures can give rise to functionally important structures in cell membranes. In model membranes, ternary lipid-cholesterol (CHOL) mixtures are often used as representative systems to investigate the formation and stabilization of localized structural domains ("rafts"). In this work, we describe a self-consistent mean-field model that builds on molecular dynamics simulations to incorporate multiple lipid components and to investigate the lateral organization of such mixtures. The model predictions reveal regions of bimodal order on ternary plots that are in good agreement with experiment. Specifically, we have applied the model to ternary mixtures composed of dioleoylphosphatidylcholine:18:0 sphingomyelin:CHOL. This work provides insight into the specific intermolecular interactions that drive the formation of localized domains in these mixtures. The model makes use of molecular dynamics simulations to extract interaction parameters and to provide chain configuration order parameter libraries.
Mechanistic modeling confronts the complexity of molecular cell biology.
Phair, Robert D
2014-11-01
Mechanistic modeling has the potential to transform how cell biologists contend with the inescapable complexity of modern biology. I am a physiologist-electrical engineer-systems biologist who has been working at the level of cell biology for the past 24 years. This perspective aims 1) to convey why we build models, 2) to enumerate the major approaches to modeling and their philosophical differences, 3) to address some recurrent concerns raised by experimentalists, and then 4) to imagine a future in which teams of experimentalists and modelers build, and subject to exhaustive experimental tests, models covering the entire spectrum from molecular cell biology to human pathophysiology. There is, in my view, no technical obstacle to this future, but it will require some plasticity in the biological research mind-set. PMID:25368428
Paradigms of Complexity in Modelling of Fluid and Kinetic Processes
NASA Astrophysics Data System (ADS)
Diamond, P. H.
2006-10-01
The need to discuss and compare a wide variety of models of fluid and kinetic processes is motivated by the astonishingly wide variety of complex physical phenomena which occur in plasmas in nature. Such phenomena include, but are not limited to: turbulence, turbulent transport and mixing, reconnection and structure formation. In this talk, I will review how various fluid and kinetic models come to grips with the essential physics of these phenomena. For example, I will discuss how the idea of a turbulent cascade and the concept of an "eddy" are realized quite differently in fluid and Vlasov models. Attention will be placed primarily on physical processes, the physics content of various models, and the consequences of choices in model construction, rather than on the intrinsic mathematical structure of the theories. Examples will be chosen from fusion, laboratory, space and astrophysical plasmas.
Structuring temporal sequences: comparison of models and factors of complexity.
Essens, P
1995-05-01
Two stages for structuring tone sequences have been distinguished by Povel and Essens (1985). In the first, a mental clock segments a sequence into equal time units (clock model); in the second, intervals are specified in terms of subdivisions of these units. The present findings support the clock model in that it predicts human performance better than three other algorithmic models. Two further experiments in which clock and subdivision characteristics were varied did not support the hypothesized effect of the nature of the subdivisions on complexity. A model focusing on the variations in the beat-anchored envelopes of the tone clusters was proposed. Errors in reproduction suggest a dual-code representation comprising temporal and figural characteristics. The temporal part of the representation is based on the clock model but specifies, in addition, the metric of the level below the clock. The beat-tone-cluster envelope concept was proposed to specify the figural part. PMID:7596749
RHIC injector complex online model status and plans
Schoefer, V.; Ahrens, L.; Brown, K.; Morris, J.; Nemesure, S.
2009-05-04
An online modeling system is being developed for the RHIC injector complex, which consists of the Booster, the AGS and the transfer lines connecting the Booster to the AGS and the AGS to RHIC. Historically the injectors have been operated using static values from design specifications or offline model runs, but tighter beam optics constraints required by polarized proton operations (e.g., accelerating with near-integer tunes) have necessitated a more dynamic system. An online model server for the AGS has been implemented using MAD-X [1] as the model engine, with plans to extend the system to the Booster and the injector transfer lines and to add the option of calculating optics using the Polymorphic Tracking Code (PTC) [2] as the model engine.
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-01-01
Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582
Reduced Complexity Modeling (RCM): toward more use of less
NASA Astrophysics Data System (ADS)
Paola, Chris; Voller, Vaughan
2014-05-01
Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem to offer a way to test systems for this property but, to our knowledge, have not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion - independence of the process in question from details of the underlying physics - underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its
Engineering complex topological memories from simple Abelian models
NASA Astrophysics Data System (ADS)
Wootton, James R.; Lahtinen, Ville; Doucot, Benoit; Pachos, Jiannis K.
2011-09-01
In three spatial dimensions, particles are limited to either bosonic or fermionic statistics. Two-dimensional systems, on the other hand, can support anyonic quasiparticles exhibiting richer statistical behaviors. An exciting proposal for quantum computation is to employ anyonic statistics to manipulate information. Since such statistical evolutions depend only on topological characteristics, the resulting computation is intrinsically resilient to errors. The so-called non-Abelian anyons are most promising for quantum computation, but their physical realization may prove to be complex. Abelian anyons, however, are easier to understand theoretically and realize experimentally. Here we show that complex topological memories inspired by non-Abelian anyons can be engineered in Abelian models. We explicitly demonstrate the control procedures for the encoding and manipulation of quantum information in specific lattice models that can be implemented in the laboratory. This bridges the gap between requirements for anyonic quantum computation and the potential of state-of-the-art technology.
An Ontology for Modeling Complex Inter-relational Organizations
NASA Astrophysics Data System (ADS)
Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel
This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. The paper first reviews the enterprise ontology literature to position our proposal and to state its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.
Polygonal Shapes Detection in 3d Models of Complex Architectures
NASA Astrophysics Data System (ADS)
Benciolini, G. B.; Vitti, A.
2015-02-01
A sequential application of two global models defined in a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation, derived by a specific processing of the auxiliary function, are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step describes the edges of an object by a piecewise almost-linear approximation of the input edges and detects sharp changes in their first derivative, i.e., corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension to refine the description of the edges. Among all the boundaries detected by the Mumford-Shah model, those whose shape is close to a polygon are selected by retaining only the boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, derived from 3D geometric data, that can be considered polygons. The procedure is suitable for, but not limited to, the detection of objects such as the footprints of polygonal buildings, building facade boundaries, or window contours. The procedure is applied to a height model of the building of the Engineering
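For reference, the Mumford-Shah segmentation described above is typically minimized via an elliptic (Ambrosio-Tortorelli type) approximation in which an auxiliary function z marks the discontinuities; a generic form of the functional is (notation illustrative, not the authors'):

```latex
\min_{u,\,z}\; \int_\Omega (u-g)^2 \,\mathrm{d}x
  + \alpha \int_\Omega z^2 \,\lvert \nabla u \rvert^2 \,\mathrm{d}x
  + \beta \int_\Omega \left( \epsilon \,\lvert \nabla z \rvert^2
  + \frac{(1-z)^2}{4\epsilon} \right) \mathrm{d}x
```

where g is the input height data, u its piecewise smooth approximation, and z drops toward 0 where u changes sharply, so that the near-zero set of z traces the detected edges.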
Thermodynamic model to describe miscibility in complex fluid systems
Guerrero, M.I.
1982-01-01
In the basic studies of tertiary oil recovery, it is necessary to describe the phase diagrams of mixtures of hydrocarbons, surfactants and brine. It has been observed that certain features of those phase diagrams, such as the appearance of 3-phase regions, can be correlated to ultra-low interfacial tensions. In this work, a simple thermodynamic model is described. The phase diagram obtained is qualitatively identical to that of real, more complex systems. 13 references.
Complex polysaccharides as PCR inhibitors in feces: Helicobacter pylori model.
Monteiro, L; Bonnemaison, D; Vekris, A; Petry, K G; Bonnet, J; Vidal, R; Cabrita, J; Mégraud, F
1997-04-01
A model was developed to study inhibitors present in feces which prevent the use of PCR for the detection of Helicobacter pylori. A DNA fragment amplified with the same primers as H. pylori was used to spike samples before extraction by a modified QIAamp tissue method. Inhibitors, separated on an Ultrogel AcA44 column, were characterized. Inhibitors in feces are complex polysaccharides possibly originating from vegetable material in the diet. PMID:9157172
Modeling of Interaction of Hydraulic Fractures in Complex Fracture Networks
NASA Astrophysics Data System (ADS)
Kresse, O.; Wu, R.; Weng, X.; Gu, H.; Cohen, C.
2011-12-01
A recently developed unconventional fracture model (UFM) is able to simulate complex fracture network propagation in a formation with pre-existing natural fractures. Multiple fracture branches can propagate at the same time and intersect/cross each other. Each open fracture exerts additional stresses on the surrounding rock and adjacent fractures, which is often referred to as "stress shadow" effect. The stress shadow can cause significant restriction of fracture width, leading to greater risk of proppant screenout. It can also alter the fracture propagation path and drastically affect fracture network patterns. It is hence critical to properly model the fracture interaction in a complex fracture model. A method for computing the stress shadow in a complex hydraulic fracture network is presented. The method is based on an enhanced 2D Displacement Discontinuity Method (DDM) with correction for finite fracture height. The computed stress field is compared to 3D numerical simulation in a few simple examples and shows the method provides a good approximation for the 3D fracture problem. This stress shadow calculation is incorporated in the UFM. The results for simple cases of two fractures are presented that show the fractures can either attract or expel each other depending on their initial relative positions, and compares favorably with an independent 2D non-planar hydraulic fracture model. Additional examples of both planar and complex fractures propagating from multiple perforation clusters are presented, showing that fracture interaction controls the fracture dimension and propagation pattern. In a formation with no or small stress anisotropy, fracture interaction can lead to dramatic divergence of the fractures as they tend to repel each other. However, when stress anisotropy is large, the fracture propagation direction is dominated by the stress field and fracture turning due to fracture interaction is limited. However, stress shadowing still has a strong effect
Termination of Multipartite Graph Series Arising from Complex Network Modelling
NASA Astrophysics Data System (ADS)
Latapy, Matthieu; Phan, Thi Ha Duong; Crespelle, Christophe; Nguyen, Thanh Qui
Intense activity is nowadays devoted to the definition of models capturing the properties of complex networks. Among the most promising approaches, it has been proposed to model these graphs via their clique incidence bipartite graphs. However, this approach has, until now, severe limitations resulting from its incapacity to reproduce a key property of this object: the overlapping nature of cliques in complex networks. In order to overcome these limitations we propose to encode the structure of clique overlaps in a network through a process that iteratively factorises the maximal bicliques between the upper level and the other levels of a multipartite graph. We show that the most natural definition of this factorising process leads to infinite series for some instances. Our main result is to design a restriction of this process that terminates for any arbitrary graph. Moreover, we show that the resulting multipartite graph has remarkable combinatorial properties and is closely related to another fundamental combinatorial object. Finally, we show that, in practice, this multipartite graph is computationally tractable and has a size that makes it suitable for complex network modelling.
Modeling high-resolution broadband discourse in complex adaptive systems.
Dooley, Kevin J; Corman, Steven R; McPhee, Robert D; Kuhn, Timothy
2003-01-01
Numerous researchers and practitioners have turned to complexity science to better understand human systems. Simulation can be used to observe how the microlevel actions of many human agents create emergent structures and novel behavior in complex adaptive systems. In such simulations, communication between human agents is often modeled simply as message passing, where a message or text may transfer data, trigger action, or inform context. Human communication involves more than the transmission of texts and messages, however. Such a perspective is likely to limit the effectiveness and insight that we can gain from simulations, and complexity science itself. In this paper, we propose a model of how close analysis of discursive processes between individuals (high-resolution), which occur simultaneously across a human system (broadband), dynamically evolve. We propose six different processes that describe how evolutionary variation can occur in texts-recontextualization, pruning, chunking, merging, appropriation, and mutation. These process models can facilitate the simulation of high-resolution, broadband discourse processes, and can aid in the analysis of data from such processes. Examples are used to illustrate each process. We make the tentative suggestion that discourse may evolve to the "edge of chaos." We conclude with a discussion concerning how high-resolution, broadband discourse data could actually be collected. PMID:12876447
A Simple Model for Complex Dynamical Transitions in Epidemics
NASA Astrophysics Data System (ADS)
Earn, David J. D.; Rohani, Pejman; Bolker, Benjamin M.; Grenfell, Bryan T.
2000-01-01
Dramatic changes in patterns of epidemics have been observed throughout this century. For childhood infectious diseases such as measles, the major transitions are between regular cycles and irregular, possibly chaotic epidemics, and from regionally synchronized oscillations to complex, spatially incoherent epidemics. A simple model can explain both kinds of transitions as the consequences of changes in birth and vaccination rates. Measles is a natural ecological system that exhibits different dynamical transitions at different times and places, yet all of these transitions can be predicted as bifurcations of a single nonlinear model.
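The single nonlinear model in question is a standard seasonally forced SIR system in which birth and vaccination rates enter the susceptible recruitment term. The sketch below is a minimal illustration with roughly measles-like parameter values; all values are assumptions for illustration, not the authors' fitted ones.

```python
import math

# Minimal seasonally forced SIR model with per-capita birth rate mu and
# vaccination coverage p at birth. Population is normalized to 1 and the
# death rate equals the birth rate, so s + i + r stays constant.

def seasonal_beta(t, beta0=500.0 / 365.0, amp=0.1):
    """Transmission rate with mild seasonal (school-term-like) forcing."""
    return beta0 * (1.0 + amp * math.cos(2.0 * math.pi * t / 365.0))

def sir_step(s, i, r, beta, gamma=1.0 / 13.0, mu=0.02 / 365.0, p=0.0, dt=0.1):
    """One forward-Euler step (time unit: days)."""
    ds = mu * (1.0 - p) - beta * s * i - mu * s   # unvaccinated births enter S
    di = beta * s * i - gamma * i - mu * i        # infection and recovery
    dr = mu * p + gamma * i - mu * r              # vaccinated births enter R
    return s + ds * dt, i + di * dt, r + dr * dt

s, i, r, t = 0.06, 1e-4, 1.0 - 0.06 - 1e-4, 0.0
for _ in range(10 * 3650):  # ten years at dt = 0.1 days
    s, i, r = sir_step(s, i, r, seasonal_beta(t))
    t += 0.1
```

Lowering the birth rate mu or raising the vaccination coverage p reduces susceptible recruitment; in the paper's analysis, this is what moves the system across bifurcations between regular cycles and irregular epidemics.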
The evaluative imaging of mental models - Visual representations of complexity
NASA Technical Reports Server (NTRS)
Dede, Christopher
1989-01-01
The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can be best managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.
2013-01-01
Despite a long history in medical and dental application, the molecular mechanism and precise site of action are still arguable for local anesthetics. Their effects are considered to be induced by acting on functional proteins, on membrane lipids, or on both. Local anesthetics primarily interact with sodium channels embedded in cell membranes to reduce the excitability of nerve cells and cardiomyocytes or produce a malfunction of the cardiovascular system. However, the membrane protein-interacting theory cannot explain all of the pharmacological and toxicological features of local anesthetics. The administered drug molecules must diffuse through the lipid barriers of nerve sheaths and penetrate into or across the lipid bilayers of cell membranes to reach the acting site on transmembrane proteins. Amphiphilic local anesthetics interact hydrophobically and electrostatically with lipid bilayers and modify their physicochemical property, with the direct inhibition of membrane functions, and with the resultant alteration of the membrane lipid environments surrounding transmembrane proteins and the subsequent protein conformational change, leading to the inhibition of channel functions. We review recent studies on the interaction of local anesthetics with biomembranes consisting of phospholipids and cholesterol. Understanding the membrane interactivity of local anesthetics would provide novel insights into their anesthetic and cardiotoxic effects. PMID:24174934
Hybrid Structural Model of the Complete Human ESCRT-0 Complex
Ren, Xuefeng; Kloer, Daniel P.; Kim, Young C.; Ghirlando, Rodolfo; Saidi, Layla F.; Hummer, Gerhard; Hurley, James H.
2009-03-31
The human Hrs and STAM proteins comprise the ESCRT-0 complex, which sorts ubiquitinated cell surface receptors to lysosomes for degradation. Here we report a model for the complete ESCRT-0 complex based on the crystal structure of the Hrs-STAM core complex, previously solved domain structures, hydrodynamic measurements, and Monte Carlo simulations. ESCRT-0 expressed in insect cells has a hydrodynamic radius of R_H = 7.9 nm and is a 1:1 heterodimer. The 2.3 Å crystal structure of the ESCRT-0 core complex reveals two domain-swapped GAT domains and an antiparallel two-stranded coiled-coil, similar to yeast ESCRT-0. ESCRT-0 typifies a class of biomolecular assemblies that combine structured and unstructured elements, and have dynamic and open conformations to ensure versatility in target recognition. Coarse-grained Monte Carlo simulations constrained by experimental R_H values for ESCRT-0 reveal a dynamic ensemble of conformations well suited for diverse functions.
Hybrid structural model of the complete human ESCRT-0 complex.
Ren, Xuefeng; Kloer, Daniel P; Kim, Young C; Ghirlando, Rodolfo; Saidi, Layla F; Hummer, Gerhard; Hurley, James H
2009-03-11
The human Hrs and STAM proteins comprise the ESCRT-0 complex, which sorts ubiquitinated cell surface receptors to lysosomes for degradation. Here we report a model for the complete ESCRT-0 complex based on the crystal structure of the Hrs-STAM core complex, previously solved domain structures, hydrodynamic measurements, and Monte Carlo simulations. ESCRT-0 expressed in insect cells has a hydrodynamic radius of RH = 7.9 nm and is a 1:1 heterodimer. The 2.3 Angstroms crystal structure of the ESCRT-0 core complex reveals two domain-swapped GAT domains and an antiparallel two-stranded coiled-coil, similar to yeast ESCRT-0. ESCRT-0 typifies a class of biomolecular assemblies that combine structured and unstructured elements, and have dynamic and open conformations to ensure versatility in target recognition. Coarse-grained Monte Carlo simulations constrained by experimental RH values for ESCRT-0 reveal a dynamic ensemble of conformations well suited for diverse functions. PMID:19278655
Parameter estimation for distributed parameter models of complex, flexible structures
NASA Technical Reports Server (NTRS)
Taylor, Lawrence W., Jr.
1991-01-01
Distributed parameter modeling of structural dynamics has been limited to simple spacecraft configurations because of the difficulty of handling several distributed parameter systems linked at their boundaries. Although other computer software is able to generate such models of complex, flexible spacecraft, unfortunately none of it is suitable for parameter estimation. Because of this limitation, the computer software PDEMOD is being developed for the express purposes of modeling, control system analysis, parameter estimation, and structure optimization. PDEMOD is capable of modeling complex, flexible spacecraft which consist of a three-dimensional network of flexible beams and rigid bodies. Each beam has bending (Bernoulli-Euler or Timoshenko) in two directions, torsion, and elongation degrees of freedom. The rigid bodies can be attached to the beam ends at any angle or body location. PDEMOD is also capable of performing parameter estimation based on matching experimental modal frequencies and static deflection test data. The underlying formulation and the results of using this approach for test data of the Mini-MAST truss will be discussed. The resulting accuracy of the parameter estimates when using such limited data can impact significantly the instrumentation requirements for on-orbit tests.
A model of the proton translocation mechanism of complex I.
Treberg, Jason R; Brand, Martin D
2011-05-20
Despite decades of speculation, the proton pumping mechanism of complex I (NADH-ubiquinone oxidoreductase) is unknown and continues to be controversial. Recent descriptions of the architecture of the hydrophobic region of complex I have resolved one vital issue: this region appears to have multiple proton transporters that are mechanically interlinked. Thus, transduction of conformational changes to drive the transmembrane transporters linked by a "connecting rod" during the reduction of ubiquinone (Q) can account for two or three of the four protons pumped per NADH oxidized. The remaining proton(s) must be pumped by direct coupling at the Q-binding site. Here, we present a mixed model based on a crucial constraint: the strong dependence on the pH gradient across the membrane (ΔpH) of superoxide generation at the Q-binding site of complex I. This model combines direct and indirect coupling mechanisms to account for the pumping of the four protons. It explains the observed properties of the semiquinone in the Q-binding site, the rapid superoxide production from this site during reverse electron transport, its low superoxide production during forward electron transport except in the presence of inhibitory Q-analogs and high protonmotive force, and the strong dependence of both modes of superoxide production on ΔpH. PMID:21454533
A Model of the Proton Translocation Mechanism of Complex I
Treberg, Jason R.; Brand, Martin D.
2011-01-01
Despite decades of speculation, the proton pumping mechanism of complex I (NADH-ubiquinone oxidoreductase) is unknown and continues to be controversial. Recent descriptions of the architecture of the hydrophobic region of complex I have resolved one vital issue: this region appears to have multiple proton transporters that are mechanically interlinked. Thus, transduction of conformational changes to drive the transmembrane transporters linked by a “connecting rod” during the reduction of ubiquinone (Q) can account for two or three of the four protons pumped per NADH oxidized. The remaining proton(s) must be pumped by direct coupling at the Q-binding site. Here, we present a mixed model based on a crucial constraint: the strong dependence on the pH gradient across the membrane (ΔpH) of superoxide generation at the Q-binding site of complex I. This model combines direct and indirect coupling mechanisms to account for the pumping of the four protons. It explains the observed properties of the semiquinone in the Q-binding site, the rapid superoxide production from this site during reverse electron transport, its low superoxide production during forward electron transport except in the presence of inhibitory Q-analogs and high protonmotive force, and the strong dependence of both modes of superoxide production on ΔpH. PMID:21454533
Semiotic aspects of control and modeling relations in complex systems
Joslyn, C.
1996-08-01
A conceptual analysis of the semiotic nature of control is provided with the goal of elucidating its nature in complex systems. Control is identified as a canonical form of semiotic relation of a system to its environment. As a form of constraint between a system and its environment, its necessary and sufficient conditions are established, and the stabilities resulting from control are distinguished from other forms of stability. These result from the presence of semantic coding relations, and thus the class of control systems is hypothesized to be equivalent to that of semiotic systems. Control systems are contrasted with models, which, while they have the same measurement functions as control systems, do not necessarily require semantic relations because of the lack of the requirement of an interpreter. A hybrid construction of models in control systems is detailed. Towards the goal of considering the nature of control in complex systems, the possible relations among collections of control systems are considered. Powers' arguments on conflict among control systems and the possible nature of control in social systems are reviewed, and reconsidered based on our observations about hierarchical control. Finally, we discuss the necessary semantic functions which must be present in complex systems for control in this sense to be present at all.
A qualitative model of human interaction with complex dynamic systems
NASA Technical Reports Server (NTRS)
Hess, Ronald A.
1987-01-01
A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in consort to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.
An ice sheet model of reduced complexity for paleoclimate studies
NASA Astrophysics Data System (ADS)
Neff, B.; Born, A.; Stocker, T. F.
2015-08-01
IceBern2D is a vertically integrated ice sheet model to investigate the ice distribution on long timescales under different climatic conditions. It is forced by simulated fields of surface temperature and precipitation of the last glacial maximum and present day climate from a comprehensive climate model. This constant forcing is adjusted to changes in ice elevation. Bedrock sinking and sea level are a function of ice volume. Due to its reduced complexity and computational efficiency, the model is well-suited for extensive sensitivity studies and ensemble simulations on extensive temporal and spatial scales. It shows good quantitative agreement with standardized benchmarks on an artificial domain (EISMINT). Present day and last glacial maximum ice distributions on the Northern Hemisphere are also simulated with good agreement. Glacial ice volume in Eurasia is underestimated due to the lack of ice shelves in our model. The efficiency of the model is utilized by running an ensemble of 400 simulations with perturbed model parameters and two different estimates of the climate at the last glacial maximum. The sensitivity to the imposed climate boundary conditions and the positive degree day factor β, i.e., the surface mass balance, outweighs the influence of parameters that disturb the flow of ice. This justifies the use of simplified dynamics as a means to achieve computational efficiency for simulations that cover several glacial cycles. The sensitivity of the model to changes in surface temperature is illustrated as a hysteresis based on 5 million year long simulations.
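The positive degree day (PDD) factor β mentioned above converts the annual sum of above-freezing temperatures into melt. A minimal sketch of such a surface mass balance scheme follows; the melt factor and snow threshold are illustrative values, not IceBern2D's.

```python
# Illustrative positive-degree-day (PDD) surface mass balance, of the kind
# used in reduced-complexity ice sheet models. Parameter values are assumed.

def surface_mass_balance(temps_monthly, precip_annual, beta=0.008, t_snow=0.0):
    """Annual mass balance (m water eq.): accumulation minus PDD melt.

    temps_monthly : 12 monthly mean surface temperatures (deg C)
    precip_annual : annual precipitation (m water equivalent)
    beta          : melt per positive degree day (m w.e. per deg C per day)
    t_snow        : temperature below which precipitation falls as snow
    """
    days_per_month = 365.0 / 12.0
    # Positive degree days: summed temperature excess above the melting point.
    pdd = sum(max(t, 0.0) * days_per_month for t in temps_monthly)
    # Snow accumulates only in months cold enough for solid precipitation.
    accumulation = precip_annual * sum(
        days_per_month for t in temps_monthly if t <= t_snow) / 365.0
    return accumulation - beta * pdd
```

A cold site with any precipitation then has a positive balance (ice grows), while a warm site melts far more than it accumulates; this is the basic sensitivity to β that the 400-member ensemble probes.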
Integrated Bayesian network framework for modeling complex ecological issues.
Johnson, Sandra; Mengersen, Kerrie
2012-07-01
The management of environmental problems is multifaceted, requiring varied and sometimes conflicting objectives and perspectives to be considered. Bayesian network (BN) modeling facilitates the integration of information from diverse sources and is well suited to tackling the management challenges of complex environmental problems. However, combining several perspectives in one model can lead to large, unwieldy BNs that are difficult to maintain and understand. Conversely, an oversimplified model may lead to an unrealistic representation of the environmental problem. Environmental managers require the current research and available knowledge about an environmental problem of interest to be consolidated in a meaningful way, thereby enabling the assessment of potential impacts and different courses of action. Previous investigations of the environmental problem of interest may have already resulted in the construction of several disparate ecological models. On the other hand, the opportunity may exist to initiate this modeling. In the first instance, the challenge is to integrate existing models and to merge the information and perspectives from these models. In the second instance, the challenge is to include different aspects of the environmental problem incorporating both the scientific and management requirements. Although the paths leading to the combined model may differ for these 2 situations, the common objective is to design an integrated model that captures the available information and research, yet is simple to maintain, expand, and refine. BN modeling is typically an iterative process, and we describe a heuristic method, the iterative Bayesian network development cycle (IBNDC), for the development of integrated BN models that are suitable for both situations outlined above. The IBNDC approach facilitates object-oriented BN (OOBN) modeling, arguably viewed as the next logical step in adaptive management modeling, and that embraces iterative development
Emission spectra of LH2 complex: full Hamiltonian model
NASA Astrophysics Data System (ADS)
Heřman, Pavel; Zapletal, David; Horák, Milan
2013-05-01
In the present contribution we study the absorption and steady-state fluorescence spectra of a ring molecular system that can model the B850 ring of the peripheral light-harvesting complex LH2 from the purple bacterium Rhodopseudomonas acidophila (Rhodoblastus acidophilus). LH2 is a highly symmetric ring of nine pigment-protein subunits, each containing two transmembrane polypeptide helices and three bacteriochlorophylls (BChl). Uncorrelated diagonal static disorder with a Gaussian distribution (fluctuations of local excitation energies) is used in our simulations simultaneously with diagonal dynamic disorder (interaction with a bath) in the Markovian approximation. We compare the calculated absorption and steady-state fluorescence spectra obtained within the full Hamiltonian model of the B850 ring with our previous results calculated within the nearest-neighbour approximation model, and also with experimental data.
3D model of amphioxus steroid receptor complexed with estradiol
Baker, Michael E.; Chang, David J.
2009-08-28
The origins of signaling by vertebrate steroids are not fully understood. An important advance was the report that an estrogen-binding steroid receptor [SR] is present in amphioxus, a basal chordate with a body plan similar to that of vertebrates. To investigate the evolution of estrogen binding to steroid receptors, we constructed a 3D model of the amphioxus SR complexed with estradiol. This 3D model indicates that although the SR is activated by estradiol, some interactions between estradiol and human ERα are not conserved in the SR, which can explain the low affinity of estradiol for the SR. These differences between the SR and ERα in the steroid-binding domain are sufficient to suggest that another steroid is the physiological regulator of the SR. The 3D model predicts that mutation of Glu-346 to Gln will increase the affinity of testosterone for the amphioxus SR and help elucidate the evolution of steroid binding to nuclear receptors.
Equilibrium modeling of trace metal transport from Duluth complex rockpile
Kelsey, P.D.; Klusman, R.W.; Lapakko, K.
1996-12-31
Geochemical modeling was used to predict weathering processes and the formation of trace metal-adsorbing secondary phases in a waste rock stockpile containing Cu-Ni ore mined from the Duluth Complex, MN. Amorphous ferric hydroxide was identified as a secondary phase within the pile, from observation and geochemical modeling of the weathering process. Due to the high content of cobalt, copper, nickel, and zinc in the primary minerals of the waste rock and in the effluent, it was hypothesized that the predicted and observed ferric hydroxide precipitate would adsorb small quantities of these trace metals. This was verified using sequential extractions and simulated using adsorption geochemical modeling. It was concluded that the trace metals were adsorbed in small quantities, and adsorption onto the amorphous ferric hydroxide was in decreasing order of Cu > Ni > Zn > Co. The low degree of adsorption was due to low-pH water and competition for adsorption sites with other ions in solution.
A two-level complex network model and its application
NASA Astrophysics Data System (ADS)
Yang, Jianmei; Wang, Wenjie; Chen, Guanrong
2009-06-01
This paper investigates the competitive relationships and rivalry of industrial markets, using Chinese household electrical appliance firms as a platform for the study. Common complex network models are one-level networks in a layered classification, whereas this paper formulates and evaluates a new two-level network model: the first level is the whole unweighted, undirected network, useful for macro-analysis of the industrial market structure, while the second level is a local weighted, directed network capable of micro-analysis of inter-firm rivalry in the market. The relationship between firms is determined by objective factors, whereas their actions are largely subjective; the idea of this paper is to consider the objective relationship and the subjective actions constrained by it simultaneously, but at different levels of the model, an approach that may be applicable to many real applications.
Rumor spreading model considering hesitating mechanism in complex social networks
NASA Astrophysics Data System (ADS)
Xia, Ling-Ling; Jiang, Guo-Ping; Song, Bo; Song, Yu-Rong
2015-11-01
The study of rumor spreading has become an important issue on complex social networks. On the basis of prior studies, we propose a modified susceptible-exposed-infected-removed (SEIR) model with a hesitating mechanism that accounts for the attractiveness and fuzziness of the content of rumors. We derive mean-field equations to characterize the dynamics of the SEIR model on both homogeneous and heterogeneous networks. A steady-state analysis is then conducted to investigate the spreading threshold and the final rumor size. Simulations on both artificial and real networks show that a decrease of fuzziness can effectively increase the spreading threshold of the SEIR model and reduce the maximum rumor influence. In addition, the spreading threshold is independent of the attractiveness of the rumor. Simulation results also show that the speed of rumor spreading obeys the relation "BA network > WS network", whereas the final scale of spreading obeys the opposite relation.
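The mean-field dynamics of such an SEIR rumor model can be sketched for a homogeneous network. All transition rates below are assumed for illustration (the paper's exact equations for the hesitating mechanism may differ): exposed (hesitating) individuals either start spreading, at a rate tied to attractiveness, or drop the rumor, at a rate tied to fuzziness:

```python
def seir_rumor(lam=0.3, beta=0.4, delta=0.1, sigma=0.2, k=6, t_max=200.0, dt=0.01):
    """Euler integration of illustrative homogeneous mean-field SEIR rumor
    equations. lam (spreading), beta (attractiveness), delta (fuzziness-driven
    drop-out) and sigma (stifling) are assumed values, not the paper's."""
    S, E, I, R = 0.99, 0.0, 0.01, 0.0  # ignorant, hesitating, spreader, stifler fractions
    for _ in range(int(t_max / dt)):
        s_to_e = lam * k * S * I          # ignorants hear the rumor and hesitate
        e_to_i = beta * E                 # attractiveness: hesitaters start spreading
        e_to_r = delta * E                # fuzziness: hesitaters drop the rumor
        i_to_r = sigma * k * I * (I + R)  # spreaders meet spreaders/stiflers and stop
        S += -s_to_e * dt
        E += (s_to_e - e_to_i - e_to_r) * dt
        I += (e_to_i - i_to_r) * dt
        R += (e_to_r + i_to_r) * dt
    return S, E, I, R
```

The final stifler fraction R plays the role of the final rumor size analyzed in the steady-state treatment.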
Multiagent model and mean field theory of complex auction dynamics
NASA Astrophysics Data System (ADS)
Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng
2015-09-01
Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, lowest unique bid auction (LUBA) systems, a recently emerged type of online auction game. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of the winner's attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean-field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
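The winner-selection mechanics of a LUBA round are easy to simulate. The sketch below uses an assumed exponential-like bid rule as a stand-in for the paper's attractiveness-field model; it only illustrates how the lowest unique bid is determined:

```python
import random
from collections import Counter

def luba_round(n_agents=50, max_bid=100, rng=random):
    """One lowest-unique-bid auction round: every agent submits an integer
    bid, and the lowest bid chosen by exactly one agent wins. The
    exponential-like bid distribution is an illustrative assumption, not
    the paper's stochastic bidding rule."""
    bids = [min(int(rng.expovariate(0.15)) + 1, max_bid) for _ in range(n_agents)]
    unique = sorted(b for b, c in Counter(bids).items() if c == 1)
    return unique[0] if unique else None
```

If every submitted price collides with another agent's bid, the round has no winner, which is why low bids are attractive but risky in LUBA.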
IDMS: inert dark matter model with a complex singlet
NASA Astrophysics Data System (ADS)
Bonilla, Cesar; Sokolowska, Dorota; Darvishi, Neda; Diaz-Cruz, J. Lorenzo; Krawczyk, Maria
2016-06-01
We study an extension of the inert doublet model (IDM) that includes an extra complex singlet of scalar fields, which we call the IDMS. In this model there are three Higgs particles, among them a SM-like Higgs particle, and the lightest neutral scalar, from the inert sector, remains a viable dark matter (DM) candidate. We assume a non-zero complex vacuum expectation value for the singlet, so that the visible sector can introduce extra sources of CP violation. We construct the scalar potential of the IDMS, assuming an exact Z2 symmetry, with the new singlet being Z2-even, as well as a softly broken U(1) symmetry, which allows a reduced number of free parameters in the potential. In this paper we explore the foundations of the model, in particular the masses and interactions of scalar particles for a few benchmark scenarios. Constraints from collider physics, in particular from the Higgs signal observed at the Large Hadron Collider with M_h ≈ 125 GeV, as well as constraints from DM experiments, such as relic density measurements and direct detection limits, are included in the analysis. We observe significant differences with respect to the IDM in relic density values from additional annihilation channels, interference and resonance effects due to the extended Higgs sector.
Troposphere-lower-stratosphere connection in an intermediate complexity model.
NASA Astrophysics Data System (ADS)
Ruggieri, Paolo; King, Martin; Kucharski, Fred; Buizza, Roberto; Visconti, Guido
2016-04-01
The dynamical coupling between the troposphere and the lower stratosphere has been investigated using a low-top, intermediate complexity model provided by the Abdus Salam International Centre for Theoretical Physics (SPEEDY). The key question that we wanted to address is whether a simple model like SPEEDY can be used to understand troposphere-stratosphere interactions, e.g. forced by changes of sea-ice concentration in polar Arctic regions. Three sets of experiments have been performed. Firstly, a potential vorticity perspective has been applied to understand the wave-like forcing of the troposphere on the stratosphere and to provide quantitative information on the sub-seasonal variability of the coupling. Then, the zonally asymmetric, near-surface response to a lower-stratospheric forcing has been analysed in a set of forced experiments with an artificial heating imposed in the extra-tropical lower stratosphere. Finally, the sensitivity of the lower-stratosphere response to tropospheric initial conditions has been examined. Results indicate that SPEEDY captures the physics of the troposphere-stratosphere connection but also show a lack of stratospheric variability. Results also suggest that intermediate-complexity models such as SPEEDY could be used to investigate the effects that surface forcing (e.g. due to sea-ice concentration changes) has on the troposphere and the lower stratosphere.
Preconditioning the bidomain model with almost linear complexity
NASA Astrophysics Data System (ADS)
Pierre, Charles
2012-01-01
The bidomain model is widely used in electro-cardiology to simulate the spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction-diffusion equations coupled with an ODE system. Its discretisation displays an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model therefore are associated with high computational costs. In this paper we propose a preconditioning for the bidomain model, either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n log^α(n) (for some constant α), which is the optimal complexity for such problems.
Surface complexation model of uranyl sorption on Georgia kaolinite
Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.
2004-01-01
The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 μmol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite.
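The core of a non-electrostatic SCM of this kind is one mass-action law per surface reaction. A minimal sketch for a single generic site, of the form >SOH + UO2(2+) ⇌ >SOUO2(+) + H+ (the log K and site concentration below are illustrative assumptions, not the paper's fitted values):

```python
def fraction_sorbed(pH, logK=-1.0, site_conc=1e-4):
    """Fraction of trace uranyl sorbed for >SOH + UO2(2+) <-> >SOUO2(+) + H+,
    assuming surface sites in excess (trace metal) and no aqueous
    complexation. logK and site_conc (mol/l) are illustrative, not the
    optimised values of the paper's two-site model."""
    H = 10.0 ** (-pH)
    ratio = (10.0 ** logK) * site_conc / H  # [>SOUO2+] / [UO2(2+)]
    return ratio / (1.0 + ratio)
```

The rising sorbed fraction with pH reproduces the low-pH side of the uptake edge described above; capturing the decrease at high pH requires the aqueous carbonate complexes and the ternary surface complex of the full model.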
A complex mathematical model of the human menstrual cycle.
Reinecke, Isabel; Deuflhard, Peter
2007-07-21
Despite the fact that more than 100 million women worldwide use birth control pills and that the subject concerns half of the world's population, the menstrual cycle has so far received comparatively little attention in the field of mathematical modeling. The term menstrual cycle comprises the processes of the control system in the female body that, under healthy circumstances, lead to ovulation at regular intervals, thus making reproduction possible. If this is not the case, or ovulation is not desired, the question arises how this control system can be influenced, for example, by hormonal treatments. In order to be able to cover a vast range of external manipulations, the mathematical model must comprise the main components where the processes belonging to the menstrual cycle occur, as well as their interrelations. A system of differential equations serves as the mathematical model, describing the dynamics of hormones, enzymes, receptors, and follicular phases. Since the processes take place in different parts of the body and influence each other with a certain delay, passing over to delay differential equations is deemed a reasonable step. The pulsatile release of the gonadotropin-releasing hormone (GnRH) is controlled by a complex neural network. We choose to model the pulse time points of this GnRH pulse generator by a stochastic process. The focus in this paper is on the model development. This rather elaborate mathematical model is the basis for a detailed analysis and could be helpful for possible drug design. PMID:17448501
Modeling the Propagation of Mobile Phone Virus under Complex Network
Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei
2014-01-01
A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper the propagation of mobile phone viruses is modeled to understand how particular factors can affect it and to design effective containment strategies to suppress mobile phone viruses. Two different propagation models of mobile phone viruses on complex networks are proposed. One describes the propagation of user-tricking viruses, and the other describes the propagation of vulnerability-exploiting viruses. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted. Through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively. PMID:25133209
An ice sheet model of reduced complexity for paleoclimate studies
NASA Astrophysics Data System (ADS)
Neff, Basil; Born, Andreas; Stocker, Thomas F.
2016-04-01
IceBern2D is a vertically integrated ice sheet model to investigate the ice distribution on long timescales under different climatic conditions. It is forced by simulated fields of surface temperature and precipitation of the Last Glacial Maximum and present-day climate from a comprehensive climate model. This constant forcing is adjusted to changes in ice elevation. Due to its reduced complexity and computational efficiency, the model is well suited for extensive sensitivity studies and ensemble simulations on extensive temporal and spatial scales. It shows good quantitative agreement with standardized benchmarks on an artificial domain (EISMINT). Present-day and Last Glacial Maximum ice distributions in the Northern Hemisphere are also simulated with good agreement. Glacial ice volume in Eurasia is underestimated due to the lack of ice shelves in our model. The efficiency of the model is utilized by running an ensemble of 400 simulations with perturbed model parameters and two different estimates of the climate at the Last Glacial Maximum. The sensitivity to the imposed climate boundary conditions and the positive degree-day factor β, i.e., the surface mass balance, outweighs the influence of parameters that disturb the flow of ice. This justifies the use of simplified dynamics as a means to achieve computational efficiency for simulations that cover several glacial cycles. Hysteresis simulations over 5 million years illustrate the stability of the simulated ice sheets to variations in surface air temperature.
Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain
NASA Technical Reports Server (NTRS)
Kao, David; Kramer, Marc; Chaderjian, Neal
2005-01-01
Release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture fluid flow-terrain interaction as well as point-dispersal downstream flow paths. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities to allow for rapid emergency response as well as for developing a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massively parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of Northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.
Order parameter in complex dipolar structures: Microscopic modeling
NASA Astrophysics Data System (ADS)
Prosandeev, S.; Bellaiche, L.
2008-02-01
Microscopic models have been used to reveal the existence of an order parameter that is associated with many complex dipolar structures in magnets and ferroelectrics. This order parameter involves a double cross product of the local dipoles with their positions. It provides a measure of subtle microscopic features, such as the helicity of the two domains inherent to onion states, curvature of the dipolar pattern in flower states, or characteristics of sets of vortices with opposite chirality (e.g., distance between the vortex centers and/or the magnitude of their local dipoles).
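The order parameter described above involves a double cross product of the local dipoles with their positions. As a minimal related diagnostic (an assumed simplification, not the paper's exact definition), the toroidal moment g = (1/2N) Σ r_i × p_i already distinguishes a dipole vortex from a uniformly polarized state:

```python
import numpy as np

def toroidal_moment(positions, dipoles):
    """g = (1/2N) * sum_i r_i x p_i, with r_i measured from the centroid.
    Illustrative diagnostic only; the order parameter of Prosandeev and
    Bellaiche is a related double cross product of dipoles and positions."""
    r = positions - positions.mean(axis=0)
    return np.cross(r, dipoles).sum(axis=0) / (2 * len(positions))

# Ring of 64 dipole sites in the xy-plane
n = 64
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
pos = np.c_[np.cos(theta), np.sin(theta), np.zeros(n)]
vortex = np.c_[-np.sin(theta), np.cos(theta), np.zeros(n)]  # tangential dipoles (a vortex)
uniform = np.tile([1.0, 0.0, 0.0], (n, 1))                  # all dipoles along x
```

For the vortex the moment points out of the plane with magnitude 1/2, while for the uniform state it vanishes by symmetry; the double cross product refines such quantities to also capture onion-state helicity and flower-state curvature.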
The modeling of complex continua: Fundamental obstacles and grand challenges
Not Available
1993-01-01
The research is divided into: discontinuities and adaptive computation, chaotic flows, dispersion of flow in porous media, and nonlinear waves and nonlinear materials. The research program has emphasized innovative computation and theory. The approach depends on abstracting mathematical concepts and computational methods from individual applications to a wide range of problems involving complex continua. The generic difficulties in the modeling of continua that guide this abstraction are multiple length and time scales, microstructures (bubbles, droplets, vortices, crystal defects), and chaotic or random phenomena described by a statistical formulation.
Does model performance improve with complexity? A case study with three hydrological models
NASA Astrophysics Data System (ADS)
Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano
2015-04-01
In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
The Eemian climate simulated by two models of different complexities
NASA Astrophysics Data System (ADS)
Nikolova, Irina; Yin, Qiuzhen; Berger, Andre; Singh, Umesh; Karami, Pasha
2013-04-01
The Eemian period, also known as MIS-5, experienced a warmer-than-today climate, reduced ice sheets, and significant sea-level rise. These features make the Eemian well suited for evaluating climate models forced with astronomical and greenhouse gas forcings different from today's. In this work, we present the Eemian climate simulated by two climate models of different complexities, LOVECLIM (the LLN Earth system model of intermediate complexity) and CCSM3 (the NCAR atmosphere-ocean general circulation model). Feedbacks from sea ice, vegetation, monsoon and ENSO phenomena are discussed to explain the regional similarities/dissimilarities in both models with respect to the pre-industrial (PI) climate. Significant warming (cooling) over almost all the continents during boreal summer (winter) leads to a largely increased (reduced) seasonal contrast in the northern (southern) hemisphere, mainly due to the much higher (lower) insolation received by the whole Earth in boreal summer (winter). The Arctic is warmer than at PI throughout the whole year, resulting from its much higher summer insolation and its remnant effect in the following fall-winter through the interactions between atmosphere, ocean and sea ice. Regional discrepancies exist in the sea-ice formation zones between the two models. Excessive sea-ice formation in CCSM3 results in intense regional cooling. In both models an intensified African monsoon and the vegetation feedback are responsible for the summer cooling in North Africa and on the Arabian Peninsula. Over India the precipitation maximum is found further west, while in Africa the precipitation maximum migrates further north. Trees and grassland expand north in the Sahel/Sahara, with trees more abundant in the results from LOVECLIM than from CCSM3. A mix of forest and grassland occupies the continents and expands deep into the high northern latitudes, in line with proxy records. Desert areas reduce significantly in the Northern Hemisphere, but increase in North
Complex 3D crustal model of Asia region
NASA Astrophysics Data System (ADS)
Baranov, A. A.
2009-04-01
Southern and Central Asia is a tectonically complex region shaped by the great collision between the Asian and Indian plates, and its evolution is strongly related to the active subduction along the Pacific border. The previous global crustal model for the Asia region (CRUST 2.0) has a resolution of 2x2 degrees. The AsCRUST-08 model (Baranov et al., 2008) of Central and Southern Asia, with a resolution of 1x1 degree, was substantially improved in several regions, and we built an integrated model of the crust for the Asia region. We also added several regions of northern Eurasia, such as Mongolia, Kazakhstan and others. For regions such as the Red Sea and Dead Sea, northern China, and southern India we built regional maps with more detailed resolution. We used data from deep seismic reflection, refraction and receiver function studies from published papers. The existing data were verified and cross-checked. As a first result, we demonstrate a new Moho map for the region. The complex crustal model consists of three layers: upper, middle and lower crust. Besides depth to the boundaries, we provide average P-wave velocities in the upper, middle and lower parts of the crystalline crust. Limits for Vp velocities are: 5.5-6.2 km/s for the upper crust, 6.0-6.6 km/s for the middle crust, and 6.6-7.5 km/s for the lower crust. We also recalculated seismic P-velocity data to density in the crustal layers using rheology properties and geology data. Conclusions: the Moho map and the velocity structure of the crust are much more heterogeneous than in the previous maps CRUST 2.0 (Bassin et al., 2000) and CRUST 5.1 (Mooney et al., 1998). Our model offers a starting point for numerical modeling of deep structures by allowing correction for crustal effects beforehand and resolving the trade-off with mantle heterogeneities. This model will be used as a starting point in gravity modeling of the lithosphere and mantle structure. [1] A. Baranov et al., First steps towards a new crustal model of South and Central Asia, Geophysical Research Abstracts, Vol. 10, EGU2008-A-05313
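The abstract mentions converting seismic P-wave velocity to density in the crustal layers. One widely used empirical relation for crustal rocks is Brocher's (2005) polynomial fit to the Nafe-Drake curve; the abstract's rheology- and geology-based conversion may well differ, so this is only an illustrative choice:

```python
def brocher_density(vp_km_s):
    """Nafe-Drake curve fit (Brocher, 2005): density in g/cm^3 as a function
    of Vp in km/s, valid for crustal rocks with Vp roughly 1.5-8.5 km/s.
    Shown as a standard illustrative conversion, not the one used in the
    abstract above."""
    v = vp_km_s
    return (1.6612 * v - 0.4721 * v**2 + 0.0671 * v**3
            - 0.0043 * v**4 + 0.000106 * v**5)
```

For mid-crustal velocities around 6 km/s this gives densities near 2.7 g/cm^3, increasing monotonically toward lower-crustal values, which is the kind of layer-wise density profile needed as a starting point for the gravity modeling mentioned in the abstract.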
The independent spreaders involved SIR Rumor model in complex networks
NASA Astrophysics Data System (ADS)
Qian, Zhen; Tang, Shaoting; Zhang, Xiao; Zheng, Zhiming
2015-07-01
Recent studies of rumor or information diffusion in complex networks show that, in contrast to the traditional understanding, individuals who participate in rumor spreading within one network do not always get the rumor from their neighbors. They can obtain the rumor from different sources, such as online social networks, and then publish it on their personal sites. In our paper, we discuss this phenomenon in complex networks by adopting the concept of independent spreaders. Rather than getting the rumor from neighbors, independent spreaders learn it from other channels. We further develop the classic "ignorant-spreaders-stiflers" or SIR model of the rumor diffusion process in complex networks. A steady-state analysis is conducted to investigate the final spectrum of rumor spreading under various spreading rates, stifling rates, densities of independent spreaders and average degrees of the network. Results show that independent spreaders effectively enhance the rumor diffusion process by delivering the rumor to regions far away from the currently infected regions. And although rumor spreading in SF networks is faster than in ER networks, the final size of rumor spreading in ER networks is larger than in SF networks.
A Range Based Method for Complex Facade Modeling
NASA Astrophysics Data System (ADS)
Adami, A.; Fregonese, L.; Taffurelli, L.
2011-09-01
3D modelling of architectural heritage does not follow a single well-defined path; it proceeds through different algorithms and digital forms according to the geometric complexity of the object, the main goal of the representation, and the starting data. Even when the process starts from the same data, such as a point cloud acquired by laser scanner, there are different ways to realize a digital model. In particular, we can choose between two different approaches: the mesh and the solid model. In the first case the complexity of the architecture is represented by a dense net of triangular surfaces that approximates the real surface of the object. In the opposite case, the 3D digital model can be realized using simple geometrical shapes, sweeping algorithms and Boolean operations. These two models are clearly not the same, and each is characterized by peculiarities in the way of modelling (the choice of a particular triangulation algorithm, or quasi-automatic modelling from known shapes) and in the final result (a more detailed and complex mesh versus a simpler, approximate solid model). Usually the intended final representation and the possibilities for publishing lead to one way or the other. In this paper we suggest a semiautomatic process for building 3D digital models of the facades of complex architecture, to be used, for example, in city models or other large-scale representations. This way of modelling also guarantees small files that can be published on the web or transmitted. The modelling procedure starts from laser scanner data, which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These scans have to be registered in a single reference system using targets surveyed by topography, and then filtered in order to obtain a well-controlled and homogeneous point cloud of
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang
2015-12-01
Groundwater modelers face the challenge of assigning representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort in data collection for models with a finer resolution of heterogeneities, there is little guidance on how much data is required to justify a given level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) an effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amounts of available data by constructing a model confusion matrix, which indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
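The idea of letting the data decide the justified level of model complexity can be sketched with a toy example. The study uses full Bayesian model evidence; a minimal stand-in, shown here under that simplifying assumption, is to score nested models of increasing complexity with the BIC approximation to the evidence and convert the scores into posterior model weights. All names and the polynomial test case below are illustrative, not the authors' setup:

```python
import numpy as np

def bic_weights(x, y, degrees):
    """Posterior model weights from the BIC approximation to the Bayesian
    model evidence, assuming Gaussian errors and equal prior model odds.

    Each candidate model is a polynomial of a given degree; higher degree
    means more complexity, penalized by the k*log(n) term in the BIC."""
    n = len(y)
    bics = []
    for d in degrees:
        coef = np.polyfit(x, y, d)
        resid = y - np.polyval(coef, x)
        sigma2 = np.mean(resid ** 2)
        k = d + 2                      # d+1 polynomial coeffs + noise variance
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        bics.append(-2 * loglik + k * np.log(n))
    bics = np.array(bics)
    rel = np.exp(-0.5 * (bics - bics.min()))
    return rel / rel.sum()
```

With data generated by a linear law plus noise, the weights concentrate on the linear model: extra complexity is not justified by the data, mirroring the paper's conclusion that hydraulic tomography data need not justify a geostatistical description.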
A subsurface model of the beaver meadow complex
NASA Astrophysics Data System (ADS)
Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.
2015-12-01
Wet meadows are a vital component of arid and semi-arid environments. These valley-spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may potentially mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed causes ranging from cattle grazing to climate change to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process-based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study done in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process - all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environment. This imaging has given us a unique view into the patterns and processes responsible for these landforms, and may shed further light on the role of beaver in shaping these landscapes.
Complex Wall Boundary Conditions for Modeling Combustion in Catalytic Channels
NASA Astrophysics Data System (ADS)
Zhu, Huayang; Jackson, Gregory
2000-11-01
Monolith catalytic reactors for exothermic oxidation are being used in automobile exhaust clean-up and ultra-low emissions combustion systems. The reactors present a unique coupling between mass, heat, and momentum transport in a channel flow configuration. The use of porous catalytic coatings along the channel wall presents a complex boundary condition when modeled with the two-dimensional channel flow. This current work presents a 2-D transient model for predicting the performance of catalytic combustion systems for methane oxidation on Pd catalysts. The model solves the 2-D compressible transport equations for momentum, species, and energy, which are solved with a porous washcoat model for the wall boundary conditions. A time-splitting algorithm is used to separate the stiff chemical reactions from the convective/diffusive equations for the channel flow. A detailed surface chemistry mechanism is incorporated for the catalytic wall model and is used to predict transient ignition and steady-state conversion of CH4-air flows in the catalytic reactor.
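The time-splitting idea the abstract describes — advancing the stiff chemistry and the transport with separate, appropriate integrators — can be sketched on a toy 1-D reaction-diffusion problem. This is a deliberate simplification of the paper's 2-D channel model with detailed surface chemistry; the function name, the periodic boundary, and all parameters are illustrative assumptions:

```python
import math

def split_step_reaction_diffusion(u, dt, dx, D, k):
    """One Lie-splitting step for u_t = D u_xx - k u with a stiff linear sink.

    Substep 1 advances diffusion with a cheap explicit update (stable for
    D*dt/dx**2 <= 0.5); substep 2 integrates the stiff reaction exactly over
    dt, standing in for a dedicated stiff chemistry solver. Splitting lets
    each subproblem use its own integrator, as in the paper's algorithm.
    Periodic boundary conditions are assumed for simplicity."""
    n = len(u)
    lap = [(u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]) / dx ** 2
           for i in range(n)]
    u = [u[i] + dt * D * lap[i] for i in range(n)]   # substep 1: diffusion
    decay = math.exp(-k * dt)                        # substep 2: exact reaction
    return [ui * decay for ui in u]
```

For a spatially uniform initial state the diffusion substep is inert, so the total mass decays exactly as exp(-k t), which makes the splitting easy to verify.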
Velocity response curves demonstrate the complexity of modeling entrainable clocks.
Taylor, Stephanie R; Cheever, Allyson; Harmon, Sarah M
2014-12-21
Circadian clocks are biological oscillators that regulate daily behaviors in organisms across the kingdoms of life. Their rhythms are generated by complex systems, generally involving interlocked regulatory feedback loops. These rhythms are entrained by the daily light/dark cycle, ensuring that the internal clock time is coordinated with the environment. Mathematical models play an important role in understanding how the components work together to function as a clock which can be entrained by light. For a clock to entrain, it must be possible for it to be sped up or slowed down at appropriate times. To understand how biophysical processes affect the speed of the clock, one can compute velocity response curves (VRCs). Here, in a case study involving the fruit fly clock, we demonstrate that VRC analysis provides insight into a clock's response to light. We also show that biochemical mechanisms and parameters together determine a model's ability to respond realistically to light. The implication is that, if one is developing a model and its current form has an unrealistic response to light, then one must reexamine one's model structure, because searching for better parameter values is unlikely to lead to a realistic response to light. PMID:25193284
Modeling pedestrian's conformity violation behavior: a complex network based approach.
Zhou, Zhuping; Hu, Qizhou; Wang, Wei
2014-01-01
Pedestrian injuries and fatalities present a problem all over the world. Pedestrian conformity violation behaviors, which lead to many pedestrian crashes, are common phenomena at signalized intersections in China. The concepts and metrics of complex networks are applied to analyze the structural characteristics and evolution rules of the pedestrian network of conformity violation crossings. First, a network of pedestrians crossing the street is established, and the network's degree distributions are analyzed. Then, using the basic idea of the SI model, a spreading model of pedestrian illegal crossing behavior is proposed. Finally, through simulation analysis, trends in pedestrians' illegal crossing behavior are obtained for different network structures and different spreading rates. Some conclusions are drawn: as the waiting time increases, more pedestrians will join in the violation crossing once a pedestrian first crosses on red. Pedestrians' conformity violation behavior also increases as the spreading rate increases. PMID:25530755
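The SI spreading mechanism underlying the paper's model has a simple mean-field form. As an illustrative sketch (not the authors' network simulation), the fraction of "infected" (violating) pedestrians follows a logistic curve whose growth rate scales with the spreading rate and the average degree; the function name and parameter values are assumptions:

```python
import math

def si_fraction(t, beta, k_avg, i0=0.01):
    """Mean-field SI model on a homogeneous network.

    di/dt = beta * <k> * i * (1 - i), with closed-form logistic solution.
    t     : time
    beta  : spreading rate per contact
    k_avg : average degree of the network
    i0    : initial fraction of violators
    Returns the violating fraction i(t)."""
    r = beta * k_avg
    e = math.exp(r * t)
    return i0 * e / (1.0 - i0 + i0 * e)
```

The closed form makes the paper's qualitative conclusion immediate: at any fixed time the violating fraction is monotone in the spreading rate, and it grows toward saturation as waiting time accumulates.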
Dynamic workflow model for complex activity in intensive care unit.
Bricon-Souf, N; Renard, J M; Beuscart, R
1998-01-01
Cooperation is very important in medical care, especially in the Intensive Care Unit (ICU), where the difficulties increase due to the urgency of the work. Workflow systems are considered well adapted to modeling productive work in business processes. We aim to introduce this approach into the health care domain. We have proposed a conversation-based workflow to model the therapeutic plan in the ICU [1]. But in such a complex field, the flexibility of the workflow system is essential for the system to be usable. In this paper, we focus on the main points used to increase this dynamicity. We report on affecting roles, highlighting information, and controlling the system. We propose some solutions and describe our prototype in the ICU. PMID:10384452
Complex fluid flow modeling with SPH on GPU
NASA Astrophysics Data System (ADS)
Bilotta, Giuseppe; Hérault, Alexis; Del Negro, Ciro; Russo, Giovanni; Vicari, Annamaria
2010-05-01
We describe an implementation of the Smoothed Particle Hydrodynamics (SPH) method for the simulation of complex fluid flows. The algorithm is entirely executed on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) developed by NVIDIA, fully exploiting their computational power. An increase of one to two orders of magnitude in simulation speed over equivalent CPU code is achieved. A complete model of the flow of a complex fluid such as lava is challenging from the modeling, numerical and computational points of view. The natural topographic irregularities, the dynamic free boundaries, and phenomena such as solidification, the presence of floating solid bodies or other obstacles, and their possible fragmentation make the problem difficult to solve using traditional numerical methods (finite volumes, finite elements): the need to refine the discretization grid near high gradients, when possible, is computationally expensive and offers often inadequate control of the error; for real-world applications, moreover, the information needed for grid refinement may not be available (e.g. because the Digital Elevation Models are too coarse); boundary tracking is also problematic with Eulerian discretizations, more so with complex fluids, due to the presence of internal boundaries arising from fluid inhomogeneity and solidification fronts. An alternative approach is offered by mesh-free particle methods, which solve most of the problems connected to the dynamics of complex fluids in a natural way. Particle methods discretize the fluid using nodes which are not forced onto a given topological structure: boundary treatment is therefore implicit and automatic; the freedom of movement of the particles also permits the treatment of deformations without incurring any significant penalty; finally, the accuracy is easily controlled by the insertion of new particles where needed. Our team has developed a new model based on the
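For orientation, the core SPH operation — estimating fields by kernel-weighted sums over particles, with no grid — can be sketched in a few lines of CPU Python. The paper's contribution is the CUDA/GPU implementation and the complex-fluid physics, both of which this sketch omits; the 2-D setting, the O(n²) neighbor loop, and the function names are simplifying assumptions:

```python
import math

def w_cubic_spline(r, h):
    """Standard 2-D cubic-spline SPH kernel with support radius 2h
    (normalization constant 10/(7*pi*h**2))."""
    q = r / h
    sigma = 10.0 / (7.0 * math.pi * h * h)
    if q < 1.0:
        return sigma * (1.0 - 1.5 * q * q + 0.75 * q ** 3)
    if q < 2.0:
        return sigma * 0.25 * (2.0 - q) ** 3
    return 0.0

def sph_density(positions, masses, h):
    """Density at each particle by kernel summation: rho_i = sum_j m_j W(|r_ij|, h).

    Brute-force O(n^2) pair loop; real codes (and the GPU version the paper
    describes) use neighbor lists or cell grids instead."""
    n = len(positions)
    rho = [0.0] * n
    for i in range(n):
        xi, yi = positions[i]
        for j in range(n):
            xj, yj = positions[j]
            r = math.hypot(xi - xj, yi - yj)
            rho[i] += masses[j] * w_cubic_spline(r, h)
    return rho
```

Because every particle contributes independently to every sum, this kernel-summation structure maps naturally onto GPU threads, which is what makes SPH attractive for CUDA implementations.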
Industrial processing of complex fluids: Formulation and modeling
Scovel, J.C.; Bleasdale, S.; Forest, G.M.; Bechtel, S.
1997-08-01
The production of many important commercial materials involves the evolution of a complex fluid through a cooling phase into a hardened product. Textile fibers, high-strength fibers (KEVLAR, VECTRAN), plastics, chopped-fiber compounds, and fiber-optic cable are such materials. Industry desires to replace experiments with on-line, real-time models of these processes. Solutions to the problems are not just a matter of technology transfer, but require a fundamental description and simulation of the processes. The goals of the project are to develop models that can be used to optimize macroscopic properties of the solid product, to identify sources of undesirable defects, and to seek boundary-temperature and flow-and-material controls to optimize desired properties.
Management of complex immunogenetics information using an enhanced relational model.
Barsalou, T; Sujansky, W; Herzenberg, L A; Wiederhold, G
1991-10-01
Flow cytometry has become a technique of paramount importance in the armamentarium of the scientist in such domains as immunogenetics. In the PENGUIN project, we are currently developing the architecture for an expert database system to facilitate the design of flow-cytometry experiments. This paper describes the core of this architecture--a methodology for managing complex biomedical information in an extended relational framework. More specifically, we exploit a semantic data model to enhance relational databases with structuring and manipulation tools that take more domain information into account and provide the user with an appropriate level of abstraction. We present specific applications of the structural model to database schema management, data retrieval and browsing, and integrity maintenance. PMID:1743006
González-Henríquez, C M; Sarabia-Vallejos, M A
2015-09-01
DPPC bilayers were deposited over thin hydrogel scaffolds using the Langmuir-Blodgett technique (DPPC thickness ∼6.2 nm). Wrinkled hydrogel films were used to maintain a moist environment in order to enhance DPPC bilayer stability. Polymer mixtures were prepared using HEMA (as a base monomer) and DEGDMA, PEGDA575, PEGDA700 or AAm (as crosslinking agents); a thermal initiator was added to obtain a final pre-hydrogel (oligomer) with a viscosity adequate for thin-film formation. This mixture was deposited as wrinkled films/fibers over hydrophilic silicon wafers using an electrospinning technique. These samples were then exposed to UV light to trigger photopolymerization, generating crosslinking bonds between hydrogel chains; this process also generated remnant surface stresses in the films that favored wrinkle formation. In the cases where DEGDMA and AAm were used as crosslinking agents, HEMA was added in higher amounts. The resultant polymer film surface showed homogeneous layering with some small isolated clusters. When PEGDA575/700 was used as the crosslinking agent, we observed the formation of wrinkled polymer thin films, composed of main and secondary chains (with different dimensions). Moreover, water absorption and release were found to be mediated by surface morphology, ordering and film thickness. The thermal behavior of the biomembranes was examined using ellipsometry under controlled heating cycles, allowing phases and phase transitions to be detected through slight thickness variations with respect to temperature. Atomic force microscopy was used to determine surface roughness changes with temperature; temperature was varied sufficiently for the detection and recording of DPPC phase limits. Contact angle measurements corroborated and quantified system wettability, supporting the theory that wrinkled hydrogel films act to enhance DPPC bilayer stability during thermal cycles. PMID:26206414
Fish locomotion: insights from both simple and complex mechanical models
NASA Astrophysics Data System (ADS)
Lauder, George
2015-11-01
Fishes are well-known for their ability to swim and maneuver effectively in the water, and recent years have seen great progress in understanding the hydrodynamics of aquatic locomotion. But studying freely-swimming fishes is challenging due to difficulties in controlling fish behavior. Mechanical models of aquatic locomotion have many advantages over studying live animals, including the ability to manipulate and control individual structural or kinematic factors, easier measurement of forces and torques, and the ability to abstract complex animal designs into simpler components. Such simplifications, while not without their drawbacks, facilitate interpretation of how individual traits alter swimming performance and the discovery of underlying physical principles. In this presentation I will discuss the use of a variety of mechanical models for fish locomotion, ranging from simple flexing panels to complex biomimetic designs incorporating flexible, actively moved, fin rays on multiple fins. Mechanical devices have provided great insight into the dynamics of aquatic propulsion and, integrated with studies of locomotion in freely-swimming fishes, provide new insights into how fishes move through the water.
Phase-separation models for swimming enhancement in complex fluids
NASA Astrophysics Data System (ADS)
Man, Yi; Lauga, Eric
2015-08-01
Swimming cells often have to self-propel through fluids displaying non-Newtonian rheology. While past theoretical work seems to indicate that stresses arising from complex fluids should systematically hinder low-Reynolds number locomotion, experimental observations suggest that locomotion enhancement is possible. In this paper we propose a physical mechanism for locomotion enhancement of microscopic swimmers in a complex fluid. It is based on the fact that microstructured fluids will generically phase-separate near surfaces, leading to the presence of low-viscosity layers, which promote slip and decrease viscous friction near the surface of the swimmer. We use two models to address the consequence of this phase separation: a nonzero apparent slip length for the fluid and then an explicit modeling of the change of viscosity in a thin layer near the swimmer. Considering two canonical setups for low-Reynolds number locomotion, namely the waving locomotion of a two-dimensional sheet and that of a three-dimensional filament, we show that phase-separation systematically increases the locomotion speeds, possibly by orders of magnitude. We close by confronting our predictions with recent experimental results.
Modeling the complex pathology of Alzheimer's disease in Drosophila.
Fernandez-Funez, Pedro; de Mena, Lorena; Rincon-Limas, Diego E
2015-12-01
Alzheimer's disease (AD) is the leading cause of dementia and the most common neurodegenerative disorder. AD is mostly a sporadic disorder and its main risk factor is age, but mutations in three genes that promote the accumulation of the amyloid-β (Aβ42) peptide revealed the critical role of amyloid precursor protein (APP) processing in AD. Neurofibrillary tangles enriched in tau are the other pathological hallmark of AD, but the lack of causative tau mutations still puzzles researchers. Here, we describe the contribution of a powerful invertebrate model, the fruit fly Drosophila melanogaster, to uncover the function and pathogenesis of human APP, Aβ42, and tau. APP and tau participate in many complex cellular processes, although their main function is microtubule stabilization and the to-and-fro transport of axonal vesicles. Additionally, expression of secreted Aβ42 induces prominent neuronal death in Drosophila, a critical feature of AD, making this model a popular choice for identifying intrinsic and extrinsic factors mediating Aβ42 neurotoxicity. Overall, Drosophila has made significant contributions to better understand the complex pathology of AD, although additional insight can be expected from combining multiple transgenes, performing genome-wide loss-of-function screens, and testing anti-tau therapies alone or in combination with Aβ42. PMID:26024860
Alpha Decay in the Complex-Energy Shell Model
Betan, R. Id
2012-01-01
Background: Alpha emission from a nucleus is a fundamental decay process in which the alpha particle formed inside the nucleus tunnels out through the potential barrier. Purpose: We describe alpha decay of 212Po and 104Te by means of the configuration interaction approach. Method: To compute the preformation factor and penetrability, we use the complex-energy shell model with a separable T = 1 interaction. The single-particle space is expanded in a Woods-Saxon basis that consists of bound and unbound resonant states. Special attention is paid to the treatment of the norm kernel appearing in the definition of the formation amplitude, which guarantees the normalization of the channel function. Results: Without explicitly considering the alpha-cluster component in the wave function of the parent nucleus, we reproduce the experimental alpha-decay width of 212Po and predict an upper limit of T1/2 = 5.5 × 10⁷ s for the half-life of 104Te. Conclusions: The complex-energy shell model in a large valence configuration space is capable of providing a microscopic description of the alpha decay of heavy nuclei having two valence protons and two valence neutrons outside the doubly magic core. The inclusion of the proton-neutron interaction between the valence nucleons is likely to shorten the predicted half-life of 104Te.
ERIC Educational Resources Information Center
Dagne, Getachew A.; Brown, C. Hendricks; Howe, George W.
2007-01-01
This article presents new methods for modeling the strength of association between multiple behaviors in a behavioral sequence, particularly those involving substantively important interaction patterns. Modeling and identifying such interaction patterns becomes more complex when behaviors are assigned to more than two categories, as is the case…
Complex Geometry Creation and Turbulent Conjugate Heat Transfer Modeling
Bodey, Isaac T; Arimilli, Rao V; Freels, James D
2011-01-01
The multiphysics capabilities of COMSOL provide the necessary tools to simulate the turbulent thermal-fluid aspects of the High Flux Isotope Reactor (HFIR). Version 4.1, and later, of COMSOL provides three different turbulence models: the standard k-ε closure model, the low Reynolds number (LRN) k-ε model, and the Spalart-Allmaras model. The LRN model meets the needs of the nominal HFIR thermal-hydraulic requirements for 2D and 3D simulations. COMSOL also has the capability to create complex geometries. The circular involute fuel plates used in the HFIR require the use of algebraic equations to generate an accurate geometrical representation in the simulation environment. The best-estimate simulation results show that the maximum fuel plate clad surface temperatures are lower than those predicted by the legacy thermal safety code used at HFIR by approximately 17 K. The best-estimate temperature distribution determined by COMSOL was then used to determine the necessary increase in the magnitude of the power density profile (PDP) to produce a similar clad surface temperature as compared to the legacy thermal safety code. It was determined and verified that a 19% power increase was sufficient to bring the two temperature profiles to relatively good agreement.
Wind Power Curve Modeling in Simple and Complex Terrain
Bulaevskaya, V.; Wharton, S.; Irons, Z.; Qualley, G.
2015-02-09
Our previous work on wind power curve modeling using statistical models focused on a location with a moderately complex terrain in the Altamont Pass region in northern California (CA). The work described here is the follow-up to that work, but at a location with a simple terrain in northern Oklahoma (OK). The goal of the present analysis was to determine the gain in predictive ability afforded by adding information beyond the hub-height wind speed, such as wind speeds at other heights, as well as other atmospheric variables, to the power prediction model at this new location and compare the results to those obtained at the CA site in the previous study. While we reach some of the same conclusions at both sites, many results reported for the CA site do not hold at the OK site. In particular, using the entire vertical profile of wind speeds improves the accuracy of wind power prediction relative to using the hub-height wind speed alone at both sites. However, in contrast to the CA site, the rotor equivalent wind speed (REWS) performs almost as well as the entire profile at the OK site. Another difference is that at the CA site, adding wind veer as a predictor significantly improved the power prediction accuracy. The same was true for that site when air density was added to the model separately instead of using the standard air density adjustment. At the OK site, these additional variables result in no significant benefit for the prediction accuracy.
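The comparison the study performs — predicting power from the hub-height wind speed alone versus from the full vertical profile — can be sketched with ordinary least squares on synthetic data. The actual study uses more sophisticated statistical models, and all variable names and numbers below are illustrative assumptions:

```python
import numpy as np

def fit_power_model(X, p):
    """Least-squares linear power model p ≈ [1, X] @ beta.
    Columns of X are predictors (e.g. wind speeds at several heights)."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, p, rcond=None)
    return beta

def rmse(X, p, beta):
    """Root-mean-square prediction error of a fitted model."""
    A = np.column_stack([np.ones(len(X)), X])
    return float(np.sqrt(np.mean((p - A @ beta) ** 2)))
```

Since the hub-height-only model is nested inside the full-profile model, the full model can never fit the training data worse, and when power genuinely depends on speeds at several heights the gap is substantial, mirroring the profile-vs-hub-height comparison in the study.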
Lupus Nephritis: Animal Modeling of a Complex Disease Syndrome Pathology
McGaha, Tracy L; Madaio, Michael P.
2014-01-01
Nephritis as a result of autoimmunity is a common morbidity associated with Systemic Lupus Erythematosus (SLE). There is substantial clinical and industry interest in medicinal intervention in the SLE nephritic process; however, clinical trials to specifically treat lupus nephritis have not resulted in complete and sustained remission in all patients. Multiple mouse models have been used to investigate the pathologic interactions between autoimmune reactivity and SLE pathology. While several models bear a remarkable similarity to SLE-driven nephritis, each has limitations that can make the task of choosing the appropriate model for a particular aspect of SLE pathology challenging. This is not surprising given the variable and diverse nature of the human disease. In many respects, features among murine strains mimic some (but never all) of the autoimmune and pathologic features of lupus patients. Although this diversity often limits universal conclusions relevant to pathogenesis, it provides insights into the complex processes that result in phenotypic manifestations of nephritis. Thus nephritis represents a microcosm of systemic disease, with variable lesions and clinical features. In this review, we discuss some of the most commonly used models of lupus nephritis (LN) and immune-mediated glomerular damage, examining their relative strengths and weaknesses, which may provide insight into the human condition. PMID:25722732
Stepwise building of plankton functional type (PFT) models: A feasible route to complex models?
NASA Astrophysics Data System (ADS)
Frede Thingstad, T.; Strand, Espen; Larsen, Aud
2010-01-01
We discuss the strategy of building models of the lower part of the planktonic food web in a stepwise manner: starting with few plankton functional types (PFTs) and adding resolution and complexity while carrying along the insight and results gained from simpler models. A central requirement for PFT models is that they allow sustained coexistence of the PFTs. Here we discuss how this identifies a need to consider predation, parasitism and defence mechanisms together with nutrient acquisition and competition. Although the stepwise addition of complexity is assumed to be useful and feasible, a rapid increase in complexity strongly calls for alternative approaches able to model emergent system-level features without a need for detailed representation of all the underlying biological detail.
Spectroscopic studies of molybdenum complexes as models for nitrogenase
Walker, T.P.
1981-05-01
Because biological nitrogen fixation requires Mo, there is interest in inorganic Mo complexes which mimic the reactions of nitrogen-fixing enzymes. Two such complexes are the dimer Mo₂O₄(cysteine)₂²⁻ and trans-Mo(N₂)₂(dppe)₂ (dppe = 1,2-bis(diphenylphosphino)ethane). The ¹H and ¹³C NMR spectra of solutions of Mo₂O₄(cys)₂²⁻ are described. It is shown that in aqueous solution the cysteine ligands assume at least three distinct configurations. A step-wise dissociation of the cysteine ligand is proposed to explain the data. The extended X-ray absorption fine structure (EXAFS) of trans-Mo(N₂)₂(dppe)₂ is described and compared to the EXAFS of MoH₄(dppe)₂. The spectra are fitted to amplitude and phase parameters developed at Bell Laboratories. On the basis of this analysis, one can determine (1) that the dinitrogen complex contains nitrogen and the hydride complex does not, and (2) the correct Mo-N distance. This is significant because the Mo in both complexes is coordinated by four P atoms which dominate the EXAFS. A similar sort of interference is present in nitrogenase due to S coordination of the Mo in the enzyme. This model experiment indicates that, given adequate signal-to-noise ratios, the presence or absence of dinitrogen coordination to Mo in the enzyme may be determined by EXAFS using existing data-analysis techniques. A new reaction between Mo₂O₄(cys)₂²⁻ and acetylene is described to the extent it is presently understood. A strong EPR signal is observed, suggesting the production of stable Mo(V) monomers. EXAFS studies support this suggestion. The Mo K-edge is described; the edge data suggest that Mo(VI) is also produced in the reaction. Ultraviolet spectra suggest that cysteine is released in the course of the reaction.
Polysaccharide-Protein Complexes in a Coarse-Grained Model.
Poma, Adolfo B; Chwastyk, Mateusz; Cieplak, Marek
2015-09-10
We construct two variants of coarse-grained models of three hexaoses: one based on the centers of mass of the monomers and the other associated with the C4 atoms. The latter is found to be better defined and more suitable for studying interactions with proteins described within α-C based models. We determine the corresponding effective stiffness constants through all-atom simulations and two statistical methods. One method is the Boltzmann inversion (BI) and the other, named energy-based (EB), involves direct monitoring of energies as a function of the variables that define the stiffness potentials. The two methods are generally consistent in their account of the stiffness. We find that the elastic constants differ between the hexaoses and are noticeably different from those determined for the crystalline cellulose Iβ. The nonbonded couplings through hydrogen bonds between different sugar molecules are modeled by the Lennard-Jones potentials and are found to be stronger than the hydrogen bonds in proteins. We observe that the EB method agrees with other theoretical and experimental determinations of the nonbonded parameters much better than BI. We then consider the hexaose-Man5B catalytic complexes and determine the contact energies between their C4 and α-C atoms. These interactions are found to be stronger than the proteinic hydrogen bonds: about four times as strong for cellohexaose and two times as strong for mannohexaose. The fluctuational dynamics of the coarse-grained complexes are found to be compatible with previous all-atom studies by Bernardi et al. PMID:26291477
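The Boltzmann inversion step used above to extract stiffness constants can be sketched in a few lines: sample a coordinate from an equilibrium simulation, histogram it, invert the distribution to an effective potential, and fit a parabola. The harmonic test data and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def boltzmann_inversion(samples, kBT=1.0, bins=60):
    # Histogram the coordinate, invert the distribution to an effective
    # potential U(q) = -kBT ln P(q), then fit U = a q^2 + b q + c over the
    # well-sampled region; the harmonic stiffness is k = 2a.
    p, edges = np.histogram(samples, bins=bins, density=True)
    q = 0.5 * (edges[:-1] + edges[1:])
    mask = p > 0.01 * p.max()             # drop noisy, rarely visited bins
    a, b, c = np.polyfit(q[mask], -kBT * np.log(p[mask]), 2)
    return 2.0 * a

rng = np.random.default_rng(0)
k_true = 50.0                             # assumed stiffness, kBT units
samples = rng.normal(0.0, np.sqrt(1.0 / k_true), 200_000)
k_est = boltzmann_inversion(samples)      # recovers roughly k_true
```

In an actual coarse-graining workflow, `samples` would come from the all-atom trajectory of the bond length, angle, or dihedral whose stiffness potential is being parameterized.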
Modeling Cu2+-Aβ complexes from computational approaches
NASA Astrophysics Data System (ADS)
Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona
2015-09-01
Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere with amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important for a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that subsequently allow the determination of physicochemical properties of interest, such as their redox potential.
Information-driven modeling of protein-peptide complexes.
Trellet, Mikael; Melquiond, Adrien S J; Bonvin, Alexandre M J J
2015-01-01
Despite their biological importance in many regulatory processes, protein-peptide recognition mechanisms are difficult to study experimentally at the structural level because of the inherent flexibility of peptides and the often transient interactions on which they rely. Complementary methods like biomolecular docking are therefore required. The prediction of the three-dimensional structure of protein-peptide complexes raises unique challenges for computational algorithms, as exemplified by the recent introduction of protein-peptide targets in the blind international experiment CAPRI (Critical Assessment of PRedicted Interactions). Conventional protein-protein docking approaches often struggle with the high flexibility of peptides, whose short sizes impede protocols and scoring functions developed for larger interfaces. On the other hand, protein-small ligand docking methods are unable to cope with the larger number of degrees of freedom in peptides compared to small molecules, and with the typically reduced information available to define the binding site. In this chapter, we describe a protocol to model protein-peptide complexes using the HADDOCK web server, working through a test case to illustrate every step. The flexibility challenge that peptides represent is dealt with by combining elements of conformational selection and induced fit molecular recognition theories. PMID:25555727
Complex dynamics in the Oregonator model with linear delayed feedback
NASA Astrophysics Data System (ADS)
Sriram, K.; Bernard, S.
2008-06-01
The Belousov-Zhabotinsky (BZ) reaction can display a rich dynamics when a delayed feedback is applied. We used the Oregonator model of the oscillating BZ reaction to explore the dynamics brought about by a linear delayed feedback. The time-delayed feedback can generate a succession of complex dynamics: period-doubling bifurcation route to chaos; amplitude death; fat, wrinkled, fractal, and broken tori; and mixed-mode oscillations. We observed that this dynamics arises due to a delay-driven transition, or toggling of the system between large and small amplitude oscillations, through a canard bifurcation. We used a combination of numerical bifurcation continuation techniques and other numerical methods to explore the dynamics in the strength of feedback-delay space. We observed that the period-doubling and quasiperiodic route to chaos span a low-dimensional subspace, perhaps due to the trapping of the trajectories in the small amplitude regime near the canard; and the trapped chaotic trajectories get ejected from the small amplitude regime due to a crowding effect to generate chaotic-excitable spikes. We also qualitatively explained the observed dynamics by projecting a three-dimensional phase portrait of the delayed dynamics on the two-dimensional nullclines. This is the first instance in which it is shown that the interaction of delay and canard can bring about complex dynamics.
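A minimal sketch of the setup described above: the two-variable reduced Oregonator integrated by explicit Euler with a constant-history delay buffer. The feedback form k·(x(t−τ) − x(t)) and every parameter value here are illustrative assumptions (weak feedback, standard textbook Oregonator constants), not the paper's settings; exploring the bifurcations would require scanning k and τ with a proper DDE solver.

```python
# Two-variable reduced Oregonator with a linear delayed feedback term.
eps, q, f = 0.04, 2e-4, 1.0        # standard reduced-Oregonator parameters
k, tau, dt = 0.05, 0.5, 1e-5       # weak feedback; illustrative values
nlag = round(tau / dt)             # delay expressed in time steps
steps = 800_000                    # integrate to t = 8

xs, zs = [0.1], [0.1]
for t in range(steps):
    x, z = xs[-1], zs[-1]
    # Constant initial history: x(t) = x(0) for t < 0.
    x_lag = xs[t - nlag] if t >= nlag else xs[0]
    dx = (x * (1.0 - x) - f * z * (x - q) / (x + q)) / eps + k * (x_lag - x)
    dz = x - z
    xs.append(x + dt * dx)
    zs.append(z + dt * dz)
```

The small step size is needed because the x equation is stiff near the lower branch of its nullcline; the trajectory exhibits the relaxation-oscillation spike of the unperturbed Oregonator, gently perturbed by the delay term.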
Neurocomputational Model of EEG Complexity during Mind Wandering
Ibáñez-Molina, Antonio J.; Iglesias-Parro, Sergio
2016-01-01
Mind wandering (MW) can be understood as a transient state in which attention drifts from an external task to internal self-generated thoughts. MW has been associated with the activation of the Default Mode Network (DMN). In addition, it has been shown that the activity of the DMN is anti-correlated with activation in brain networks related to the processing of external events (e.g., Salience network, SN). In this study, we present a mean field model based on weakly coupled Kuramoto oscillators. We simulated the oscillatory activity of the entire brain and explored the role of the interaction between the nodes from the DMN and SN in MW states. External stimulation was added to the network model in two opposite conditions. Stimuli could be presented when oscillators in the SN showed more internal coherence (synchrony) than in the DMN, or, on the contrary, when the coherence in the SN was lower than in the DMN. The resulting phases of the oscillators were analyzed and used to simulate EEG signals. Our results showed that the structural complexity from both simulated and real data was higher when the model was stimulated during periods in which DMN was more coherent than the SN. Overall, our results provided a plausible mechanistic explanation to MW as a state in which high coherence in the DMN partially suppresses the capacity of the system to process external stimuli. PMID:26973505
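The coherence measure central to the study above comes directly from the Kuramoto framework: the modulus of the complex order parameter of a population of phase oscillators. The sketch below simulates one homogeneous population in the mean-field coupling form and computes that synchrony measure; the two-network DMN/SN structure and the stimulation protocol of the paper are not reproduced, and all parameter values are illustrative assumptions.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    # Mean-field Kuramoto update: the coupling acts through the complex
    # order parameter r = |r| e^{i psi}, so each oscillator is pulled
    # toward the population mean phase.
    r = np.mean(np.exp(1j * theta))
    pull = K * np.abs(r) * np.sin(np.angle(r) - theta)
    return theta + dt * (omega + pull)

def coherence(theta):
    # Synchrony measure |r| in [0, 1]: 0 = incoherent, 1 = fully locked.
    return np.abs(np.mean(np.exp(1j * theta)))

rng = np.random.default_rng(0)
n = 200                                   # oscillators in one population
theta = rng.uniform(0.0, 2 * np.pi, n)
omega = rng.normal(0.0, 0.1, n)           # natural frequency spread
for _ in range(2000):
    theta = kuramoto_step(theta, omega, K=1.0, dt=0.05)
R = coherence(theta)                      # high: K is well above critical
```

In the paper's setting, a comparison of this coherence between the DMN and SN node groups would decide when external stimuli are delivered.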
Dynamic workflow model for complex activity in intensive care unit.
Bricon-Souf, N; Renard, J M; Beuscart, R
1999-01-01
Co-operation is very important in medical care, especially in the Intensive Care Unit (ICU), where difficulties increase due to the urgency of the work. Workflow systems are considered well suited to modeling productive work in business processes. We aim to introduce this approach in the health care domain. We have proposed a conversation-based workflow to model the therapeutic plan in the ICU [1]. But in such a complex field, the flexibility of the workflow system is essential for the system to be usable. We have concentrated on three main points usually proposed in workflow models that suffer from a lack of dynamicity: static links between roles and actors, global notification of information changes, and lack of human control over the system. In this paper, we focus on the main points used to increase dynamicity. We report on affecting roles, highlighting information, and controlling the system. We propose some solutions and describe our prototype in the ICU. PMID:10193884
Deposition parameterizations for the Industrial Source Complex (ISC3) model
Wesely, Marvin L.; Doskey, Paul V.; Shannon, J. D.
2002-06-01
Improved algorithms have been developed to simulate the dry and wet deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex version 3 (ISC3) model system. The dry deposition velocities (concentrations divided by downward flux at a specified height) of the gaseous HAPs are modeled with algorithms adapted from existing dry deposition modules. The dry deposition velocities are described in a conventional resistance scheme, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. The role of soil moisture variations in affecting the uptake of gases through vegetative plant leaf stomata is assessed with the relative available soil moisture, which is estimated with a rudimentary budget of soil moisture content. Some of the procedures and equations are simplified to be commensurate with the type and extent of information on atmospheric and surface conditions available to the ISC3 model system user. For example, standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed that provides a means of evaluating the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves.
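The "conventional resistance scheme" mentioned above treats the resistances as acting in series, like resistors in a circuit. A minimal sketch, with illustrative resistance magnitudes (the actual ISC3 parameterizations of each resistance are far more detailed):

```python
def deposition_velocity(r_a, r_b, r_c):
    # Series-resistance form of the dry deposition velocity:
    #   v_d = 1 / (Ra + Rb + Rc)
    # Ra: aerodynamic resistance, Rb: quasi-laminar sublayer resistance,
    # Rc: bulk surface (canopy) resistance, all in s/m; v_d in m/s.
    return 1.0 / (r_a + r_b + r_c)

# Illustrative magnitudes for a moderately rough vegetated surface:
v_d = deposition_velocity(50.0, 30.0, 120.0)   # -> 0.005 m/s
```

In the full scheme, Rc itself is built from parallel pathways (stomatal, cuticular, ground), each modulated by the meteorological and land-use inputs the abstract describes.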
Critical noise of majority-vote model on complex networks
NASA Astrophysics Data System (ADS)
Chen, Hanshuang; Shen, Chuansheng; He, Gang; Zhang, Haifeng; Hou, Zhonghuai
2015-02-01
The majority-vote model with noise is one of the simplest nonequilibrium statistical models that has been extensively studied in the context of complex networks. However, an understanding of the relationship between the critical noise where the order-disorder phase transition takes place and the topology of the underlying networks is still lacking. In this paper, we use the heterogeneous mean-field theory to derive the rate equation governing the model's dynamics, which can analytically determine the critical noise fc in the limit of infinite network size N →∞ . The result shows that fc depends on the ratio of
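The abstract above is truncated in this record, but the model itself is simple to state. As a rough illustration, the sketch below runs the noisy majority-vote update on a random directed k-neighbour graph (an illustrative stand-in for the complex networks of the paper); with noise f well below critical, an ordered initial state keeps a large magnetization. Sizes and parameters are assumptions for the demo.

```python
import numpy as np

def majority_vote_sweep(spins, neighbors, f, rng):
    # One Monte Carlo sweep of the noisy majority-vote rule: each site adopts
    # the sign of its local majority with probability 1 - f, and the opposite
    # sign with probability f (ties broken at random).
    for i in rng.permutation(len(spins)):
        s = np.sign(spins[neighbors[i]].sum())
        if s == 0:
            s = rng.choice([-1, 1])
        spins[i] = s if rng.random() > f else -s

rng = np.random.default_rng(1)
n, k, f = 400, 6, 0.05                  # illustrative; f below critical noise
# Random directed k-neighbour graph as a stand-in for a complex network
neighbors = [rng.choice(np.delete(np.arange(n), i), size=k, replace=False)
             for i in range(n)]
spins = np.ones(n, dtype=int)           # start from the ordered state
for _ in range(200):
    majority_vote_sweep(spins, neighbors, f, rng)
m = abs(spins.mean())                   # order parameter (magnetization)
```

Sweeping f upward and recording the stationary m locates the order-disorder transition numerically, which is what the paper's mean-field rate equation predicts analytically.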
A model for navigational errors in complex environmental fields.
Postlethwaite, Claire M; Walker, Michael M
2014-12-21
Many animals are believed to navigate using environmental signals such as light, sound, odours and magnetic fields. However, animals rarely navigate directly to their target location, but instead make a series of navigational errors which are corrected during transit. In previous work, we introduced a model showing that differences between an animal's 'cognitive map' of the environmental signals used for navigation and the true nature of these signals caused a systematic pattern in orientation errors when navigation begins. The model successfully predicted the pattern of errors seen in previously collected data from homing pigeons, but underestimated the amplitude of the errors. In this paper, we extend our previous model to include more complicated distortions of the contour lines of the environmental signals. Specifically, we consider the occurrence of critical points in the fields describing the signals. We consider three scenarios and compute orientation errors as parameters are varied in each case. We show that the occurrence of critical points can be associated with large variations in initial orientation errors over a small geographic area. We discuss the implications that these results have on predicting how animals will behave when encountering complex distortions in any environmental signals they use to navigate. PMID:25149368
Integrated modeling tool for performance engineering of complex computer systems
NASA Technical Reports Server (NTRS)
Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar
1989-01-01
This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.
Uncertainty evaluation in numerical modeling of complex devices
NASA Astrophysics Data System (ADS)
Cheng, X.; Monebhurrun, V.
2014-10-01
Numerical simulation is an efficient tool for exploring and understanding the physics of complex devices, e.g. mobile phones. For meaningful results, it is important to evaluate the uncertainty of the numerical simulation. Uncertainty quantification in specific absorption rate (SAR) calculation using a full computer-aided design (CAD) mobile phone model is a challenging task. Since a typical SAR numerical simulation is computationally expensive, the traditional Monte Carlo (MC) simulation method proves inadequate. The unscented transformation (UT) is an alternative and numerically efficient method herein investigated to evaluate the uncertainty in the SAR calculation using the realistic models of two commercially available mobile phones. The electromagnetic simulation process is modeled as a nonlinear mapping with the uncertainty in the inputs, e.g. the relative permittivity values of the mobile phone material properties, inducing an uncertainty in the output, e.g. the peak spatial-average SAR value. The numerical simulation results demonstrate that UT may be a potential candidate for the uncertainty quantification in SAR calculations since only a few simulations are necessary to obtain results similar to those obtained after hundreds or thousands of MC simulations.
Complex events in a fault model with interacting asperities
NASA Astrophysics Data System (ADS)
Dragoni, Michele; Tallarico, Andrea
2016-08-01
The dynamics of a fault with heterogeneous friction is studied by employing a discrete fault model with two asperities of different strengths. The average values of stress, friction and slip on each asperity are considered and the state of the fault is described by the slip deficits of the asperities as functions of time. The fault has three different slipping modes, corresponding to the asperities slipping one at a time or simultaneously. Any seismic event produced by the fault is a sequence of n slipping modes. According to initial conditions, seismic events can be different sequences of slipping modes, implying different moment rates and seismic moments. Each event can be represented geometrically in the state space by an orbit that is the union of n damped Lissajous curves. We focus our interest on events that are sequences of two or more slipping modes: they show a complex stress interchange between the asperities and a complex temporal pattern of slip rate. The initial stress distribution producing these events is not uniform on the fault. We calculate the stress drop, the moment rate and the frequency spectrum of the events, showing how these quantities depend on initial conditions. These events have the greatest seismic moments that can be produced by fault slip. As an example, we model the moment rate of the 1992 Landers, California, earthquake that can be described as the consecutive failure of two asperities, one of which has double the strength of the other, and evaluate the evolution of stress distribution on the fault during the event.
Inverse Problems in Complex Models and Applications to Earth Sciences
NASA Astrophysics Data System (ADS)
Bosch, M. E.
2015-12-01
The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via direct acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks
NASA Astrophysics Data System (ADS)
Kanevski, Mikhail
2015-04-01
The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three dimensional monthly precipitation data or monthly wind speeds embedded into a 13 dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all possible models N [in case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying the leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN with their ability to select features and efficient modelling of complex high dimensional data can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
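The Nadaraya-Watson estimator named above is the core of a GRNN: each prediction is a kernel-weighted mean of the training targets. A minimal sketch on synthetic data (the features, targets, and bandwidth are illustrative assumptions, not the precipitation/wind data of the study):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma):
    # Nadaraya-Watson kernel regression, the estimator underlying GRNN:
    # prediction = sum_i w_i y_i / sum_i w_i with Gaussian weights w_i.
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 2))            # two synthetic geo-features
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=200)
pred = grnn_predict(X, y, X[:5], sigma=0.2)     # sigma = kernel bandwidth
```

Replacing the scalar `sigma` with a vector of per-feature bandwidths gives the anisotropic kernel: features whose optimal bandwidth grows very large contribute nothing to the distance, which is how adaptive GRNN performs feature selection.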
Projection- vs. selection-based model reduction of complex hydro-ecological models
NASA Astrophysics Data System (ADS)
Galelli, S.; Giuliani, M.; Castelletti, A.; Alsahaf, A.
2014-12-01
Projection-based model reduction is one of the most popular approaches used for the identification of reduced-order models (emulators). It is based on the idea of sampling from the original model various values, or snapshots, of the state variables, and then using these snapshots in a projection scheme to find a lower-dimensional subspace that captures the majority of the variation of the original model. The model is then projected onto this subspace and solved, yielding a computationally efficient emulator. Yet, this approach may unnecessarily increase the complexity of the emulator, especially when only a few state variables of the original model are relevant with respect to the output of interest. This is the case of complex hydro-ecological models, which typically account for a variety of water quality processes. On the other hand, selection-based model reduction uses the information contained in the snapshots to select the state variables of the original model that are relevant with respect to the emulator's output, thus allowing for model reduction. This provides a better trade-off between fidelity and model complexity, since the irrelevant and redundant state variables are excluded from the model reduction process. In this work we address these issues by presenting an exhaustive experimental comparison between two popular projection- and selection-based methods, namely Proper Orthogonal Decomposition (POD) and Dynamic Emulation Modelling (DEMo). The comparison is performed on the reduction of DYRESM-CAEDYM, a 1D hydro-ecological model used to describe the in-reservoir water quality conditions of Tono Dam, an artificial reservoir located in western Japan. Experiments on two different output variables (i.e. chlorophyll-a concentration and release water temperature) show that DEMo allows obtaining the same fidelity as POD while reducing the number of state variables in the emulator.
Time to change from a simple linear model to a complex systems model
2016-01-01
A simple linear model testing hypotheses based on one-to-one relationships has been used to find the causative factors of diseases. However, we now know that not just one, but many factors from different systems, such as chemical exposure, genes, epigenetic changes, and proteins, are involved in the pathogenesis of chronic diseases such as diabetes mellitus. So, with the availability of modern technologies to understand the intricate relations among complex systems, we need to move forward by adopting a complex systems model. PMID:27158003
Thermophysical Model of S-complex NEAs: 1627 Ivar
NASA Astrophysics Data System (ADS)
Crowell, Jenna L.; Howell, Ellen S.; Magri, Christopher; Fernandez, Yan R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J.
2015-11-01
We present updates to the thermophysical model of asteroid 1627 Ivar. Ivar is an Amor class near Earth asteroid (NEA) with a taxonomic type of Sqw [1] and a rotation period of 4.795162 ± 5.4 × 10^-6 hours [2]. In 2013, our group observed Ivar in radar, in CCD lightcurves, and in the near-IR’s reflected and thermal regimes (0.8 - 4.1 µm) using the Arecibo Observatory’s 2380 MHz radar, the Palmer Divide Station’s 0.35m telescope, and the SpeX instrument at the NASA IRTF respectively. Using these radar and lightcurve data, we generated a detailed shape model of Ivar using the software SHAPE [3,4]. Our shape model reveals more surface detail compared to earlier models [5] and we found Ivar to be an elongated asteroid with the maximum extended length along the three body-fixed coordinates being 12 x 11.76 x 6 km. For our thermophysical modeling, we have used SHERMAN [6,7] with input parameters such as the asteroid’s IR emissivity, optical scattering law and thermal inertia, in order to complete thermal computations based on our shape model and the known spin state. We then create synthetic near-IR spectra that can be compared to our observed spectra, which cover a wide range of Ivar’s rotational longitudes and viewing geometries. As has been noted [6,8], the use of an accurate shape model is often crucial for correctly interpreting multi-epoch thermal emission observations. We will present what SHERMAN has let us determine about the reflective, thermal, and surface properties for Ivar that best reproduce our spectra. From our derived best-fit thermal parameters, we will learn more about the regolith, surface properties, and heterogeneity of Ivar and how those properties compare to those of other S-complex asteroids. References: [1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Crowell, J. et al. 2015, LPSC 46 [3] Magri C. et al. 2007, Icarus 186, 152-177 [4] Crowell, J. et al. 2014, AAS/DPS 46 [5] Kaasalainen, M. et al. 2004, Icarus 167, 178-196 [6] Crowell, J. et
Dealing with uncertainty in ecosystem models: lessons from a complex salmon model.
McElhany, Paul; Steel, E Ashley; Avery, Karen; Yoder, Naomi; Busack, Craig; Thompson, Brad
2010-03-01
Ecosystem models have been developed for assessment and management in a wide variety of environments. As model complexity increases, it becomes more difficult to trace how imperfect knowledge of internal model parameters, data inputs, or relationships among parameters might impact model results, affecting predictions and subsequent management decisions. Sensitivity analysis is an essential component of model evaluation, particularly when models are used to make management decisions. Results should be expressed as probabilities and should realistically account for uncertainty. When models are particularly complex, this can be difficult to do and to present in ways that do not obfuscate essential results. We conducted a sensitivity analysis of the Ecosystem Diagnosis and Treatment (EDT) model, which predicts salmon productivity and capacity as a function of ecosystem conditions. We used a novel "structured sensitivity analysis" approach that is particularly useful for very complex models or those with an abundance of interconnected parameters. We identified small, medium, and large plausible ranges for both input data and model parameters. Using a Monte Carlo approach, we explored the variation in output, prediction intervals, and sensitivity indices, given these plausible input distributions. The analyses indicated that, as a consequence of internal parameter uncertainty, EDT productivity and capacity predictions lack the precision needed for many management applications. However, EDT prioritization of reaches for preservation or restoration was more robust to given input uncertainties, indicating that EDT may be more useful as a relative measure of fish performance than as an absolute measure. Like all large models, if EDT output is to be used as input to other models or management tools it is important to explicitly incorporate the uncertainty and sensitivity analyses into such secondary analyses. Sensitivity analyses should become standard operating procedure for
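The Monte Carlo machinery described above (plausible input ranges, propagated to prediction intervals and sensitivity indices) can be sketched on a toy response. Everything below is a hypothetical stand-in: the three inputs, their ranges, the multiplicative response, and the crude correlation-based index are illustrative assumptions, not the EDT model's actual structure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical "plausible ranges" for three inputs:
capacity = rng.uniform(800.0, 1200.0, n)   # habitat capacity
survival = rng.uniform(0.02, 0.08, n)      # stage survival rate
quality = rng.uniform(0.5, 1.0, n)         # habitat quality multiplier
recruits = capacity * survival * quality   # stand-in model response

# 95% prediction interval for the output under input uncertainty
lo, hi = np.percentile(recruits, [2.5, 97.5])

def s1(x, y):
    # Crude first-order sensitivity index: squared correlation with output.
    return np.corrcoef(x, y)[0, 1] ** 2

idx = {"capacity": s1(capacity, recruits),
       "survival": s1(survival, recruits),
       "quality": s1(quality, recruits)}
```

Here survival dominates the index simply because its relative range is widest, which is the kind of diagnosis a structured sensitivity analysis is meant to surface before results feed into management decisions.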
Reliable Modeling of the Electronic Spectra of Realistic Uranium Complexes
Tecmer, Pawel; Govind, Niranjan; Kowalski, Karol; De Jong, Wibe A.; Visscher, Lucas
2013-07-21
We present an EOMCCSD (equation of motion coupled cluster with singles and doubles) study of excited states of the small [UO2]2+ and [UO2]+ model systems as well as the larger UVIO2(saldien) complex. In addition, the triples contribution within the EOMCCSDT and CR-EOMCCSD(T) (completely renormalized EOMCCSD with non-iterative triples) approaches for the [UO2]2+ and [UO2]+ systems, as well as the active-space variant of the CR-EOMCCSD(T) method, CR-EOMCCSd(t), for the UVIO2(saldien) molecule are investigated. The coupled cluster data were employed as benchmarks to choose the "best" appropriate exchange-correlation functional for subsequent time-dependent density functional (TD-DFT) studies on the transition energies for closed-shell species. Furthermore, the influence of the saldien ligands on the electronic structure and excitation energies of the [UO2]+ molecule is discussed. The electronic excitations as well as their oscillator dipole strengths modeled with the TD-DFT approach using the CAM-B3LYP exchange-correlation functional for [UVO2(saldien)]- with explicit inclusion of two DMSO molecules are in good agreement with the experimental data of Takao et al. [Inorg. Chem. 49, 2349-2359 (2010)].
Electromagnetic modelling of Ground Penetrating Radar responses to complex targets
NASA Astrophysics Data System (ADS)
Pajewski, Lara; Giannopoulos, Antonis
2014-05-01
This work deals with the electromagnetic modelling of composite structures for Ground Penetrating Radar (GPR) applications. It was developed within the Short-Term Scientific Mission ECOST-STSM-TU1208-211013-035660, funded by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The authors define a set of test concrete structures, hereinafter called cells. The size of each cell is 60 x 100 x 18 cm and the content varies with growing complexity, from a simple cell with a few rebars of different diameters embedded in concrete at increasing depths, to a final cell with a quite complicated pattern, including a layer of tendons between two overlying meshes of rebars. Other cells, of intermediate complexity, contain PVC ducts (air filled or hosting rebars), steel objects commonly used in civil engineering (such as a pipe, an angle bar, a box section and a U-channel), as well as void and honeycombing defects. One of the cells has a steel mesh embedded in it, overlying two rebars placed diagonally across the corners of the structure. Two cells include a couple of rebars bent into a right angle and placed on top of each other, with a square/round circle lying at the base of the concrete slab. Inspiration for some of these cells is taken from the very interesting experimental work presented in Ref. [1]. For each cell, a subset of models with growing complexity is defined, starting from a simple representation of the cell and ending with a more realistic one. In particular, the model's complexity increases from the geometrical point of view, as well as in terms of how the constitutive parameters of the involved media and GPR antennas are described. Some cells can be simulated in both two and three dimensions; the concrete slab can be approximated as a finite-thickness layer having infinite extension on the transverse plane, thus neglecting how edges affect radargrams, or else its finite size can be fully taken into account. The permittivity of concrete can be
Reliable modeling of the electronic spectra of realistic uranium complexes
NASA Astrophysics Data System (ADS)
Tecmer, Paweł; Govind, Niranjan; Kowalski, Karol; de Jong, Wibe A.; Visscher, Lucas
2013-07-01
We present an EOMCCSD (equation-of-motion coupled cluster with singles and doubles) study of excited states of the small [UO2]2+ and [UO2]+ model systems as well as the larger UVIO2(saldien) complex. In addition, the triples contribution within the EOMCCSDT and CR-EOMCCSD(T) (completely renormalized EOMCCSD with non-iterative triples) approaches for the [UO2]2+ and [UO2]+ systems, as well as the active-space variant of the CR-EOMCCSD(T) method, CR-EOMCCSd(t), for the UVIO2(saldien) molecule, are investigated. The coupled cluster data were employed as a benchmark to choose the most appropriate exchange-correlation functional for subsequent time-dependent density functional theory (TD-DFT) studies of the transition energies of closed-shell species. Furthermore, the influence of the saldien ligands on the electronic structure and excitation energies of the [UO2]+ molecule is discussed. The electronic excitations, as well as their oscillator strengths, modeled with the TD-DFT approach using the CAM-B3LYP exchange-correlation functional for [UVO2(saldien)]- with explicit inclusion of two dimethyl sulfoxide molecules, are in good agreement with the experimental data of Takao et al. [Inorg. Chem. 49, 2349 (2010), 10.1021/ic902225f].
The complexity of model checking for belief revision and update
Liberatore, P.; Schaerf, M.
1996-12-31
One of the main challenges in the formal modeling of common-sense reasoning is the ability to cope with the dynamic nature of the world. Among the approaches put forward to address this problem are belief revision and update. Given a knowledge base T, representing our knowledge of the "state of affairs" of the world of interest, it is possible that we are led to trust another piece of information P, possibly inconsistent with the old one T. The aim of revision and update operators is to characterize the revised knowledge base T' that incorporates the new formula P into the old one T while preserving consistency and, at the same time, avoiding the loss of too much information in the process. In this paper we study the computational complexity of one of the main computational problems of belief revision and update: deciding if an interpretation M is a model of the revised knowledge base.
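The model-checking question studied here can be made concrete with a small brute-force sketch. The fragment below is an illustrative example, not the authors' formalism: it uses Dalal's distance-based operator, one of several revision operators studied in this literature, under which an interpretation M is a model of the revised base T * P iff M satisfies P and lies at minimal Hamming distance from the models of T.

```python
from itertools import product

def models(formula, atoms):
    """Enumerate the truth assignments over `atoms` satisfying `formula`.
    A formula is represented as a Python predicate over a dict of truth values."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def hamming(m1, m2):
    """Number of atoms on which two interpretations disagree."""
    return sum(m1[a] != m2[a] for a in m1)

def is_model_of_revision(M, T, P, atoms):
    """Check M |= T * P under Dalal's revision operator: M must satisfy P
    and be at minimal Hamming distance from the models of T."""
    if not P(M):
        return False
    mods_T, mods_P = models(T, atoms), models(P, atoms)
    if not mods_T:              # T inconsistent: the revised base is just P
        return True
    dist = lambda m: min(hamming(m, mt) for mt in mods_T)
    return dist(M) == min(dist(mp) for mp in mods_P)

# Example: T = (a and b), new information P = (not a).
atoms = ["a", "b"]
T = lambda v: v["a"] and v["b"]
P = lambda v: not v["a"]
M = {"a": False, "b": True}     # flips only a, keeping b
print(is_model_of_revision(M, T, P, atoms))  # → True
```

The brute force enumerates all 2^n assignments, which is precisely why the paper's complexity analysis matters: for realistic knowledge bases this exhaustive check is infeasible.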
Finite element modeling of piezoelectric elements with complex electrode configuration
NASA Astrophysics Data System (ADS)
Paradies, R.; Schläpfer, B.
2009-02-01
It is well known that the material properties of piezoelectric materials strongly depend on the state of polarization of the individual element. While an unpolarized material exhibits mechanically isotropic material properties in the absence of global piezoelectric capabilities, the piezoelectric material properties become transversally isotropic with respect to the polarization direction after polarization. Therefore, for evaluating piezoelectric elements the material properties, including the coupling between the mechanical and the electromechanical behavior, should be addressed correctly. This is of special importance for the micromechanical description of piezoelectric elements with interdigitated electrodes (IDEs). The best known representatives of this group are active fiber composites (AFCs), macro fiber composites (MFCs) and the radial field diaphragm (RFD), respectively. While the material properties are available for a piezoelectric wafer with a homogeneous polarization perpendicular to its plane as postulated in the so-called uniform field model (UFM), the same information is missing for piezoelectric elements with more complex electrode configurations like the above-mentioned ones with IDEs. This is due to the inhomogeneous field distribution which does not automatically allow for the correct assignment of the material, i.e. orientation and property. A variation of the material orientation as well as the material properties can be accomplished by including the polarization process of the piezoelectric transducer in the finite element (FE) simulation prior to the actual load case to be investigated. A corresponding procedure is presented which automatically assigns the piezoelectric material properties, e.g. elasticity matrix, permittivity, and charge vector, for finite element models (FEMs) describing piezoelectric transducers according to the electric field distribution (field orientation and strength) in the structure. A corresponding code has been
Chitosan and alginate types of bio-membrane in fuel cell application: An overview
NASA Astrophysics Data System (ADS)
Shaari, N.; Kamarudin, S. K.
2015-09-01
The major problems of polymer electrolyte membrane fuel cell technology that need to be highlighted are fuel crossovers (e.g., methanol or hydrogen leaking across fuel cell membranes), CO poisoning, low durability, and high cost. Chitosan and alginate-based biopolymer membranes have recently been used to solve these problems with promising results. Current research in biopolymer membrane materials and systems has focused on the following: 1) the development of novel and efficient biopolymer materials; and 2) increasing the processing capacity of membrane operations. Consequently, research on chitosan- and alginate-based biopolymers seeks to enhance fuel cell performance by improving proton conductivity and membrane durability and by reducing fuel crossover and electro-osmotic drag. There are four groups of chitosan-based membranes (categorized according to their reaction and preparation): self-cross-linked and salt-complexed chitosans, chitosan-based polymer blends, chitosan/inorganic filler composites, and chitosan/polymer composites. There are only three alginate-based membranes that have been synthesized for fuel cell application. This work aims to review the state-of-the-art in the growth of chitosan and alginate-based biopolymer membranes for fuel cell applications.
Sikder, Md. Kabir Uddin; Stone, Kyle A.; Kumar, P. B. Sunil; Laradji, Mohamed
2014-01-01
We investigate the combined effects of transmembrane proteins and the subjacent cytoskeleton on the dynamics of phase separation in multicomponent lipid bilayers using computer simulations of a particle-based implicit solvent model for lipid membranes with soft-core interactions. We find that microphase separation can be achieved by the protein confinement by the cytoskeleton. Our results have relevance to the finite size of lipid rafts in the plasma membrane of mammalian cells. PMID:25106608
Atmospheric Modelling for Air Quality Study over the complex Himalayas
NASA Astrophysics Data System (ADS)
Surapipith, Vanisa; Panday, Arnico; Mukherji, Aditi; Banmali Pradhan, Bidya; Blumer, Sandro
2014-05-01
An atmospheric modelling system has been set up at the International Centre for Integrated Mountain Development (ICIMOD) for the assessment of air quality across the Himalayan mountain ranges. The Weather Research and Forecasting (WRF) model version 3.5 has been implemented over a regional domain stretching across 4995 x 4455 km2, centred at Ichhyakamana, ICIMOD's newly established mountain-peak station (1860 m) in central Nepal, and covering terrain from sea level to Everest (8848 m). Simulation is carried out for the winter period December 2012 to February 2013, coinciding with the intensive SusKat field campaign, during which at least 7 super stations collected meteorological and chemical parameters at various sites. The very complex terrain requires a high horizontal resolution (1 × 1 km2), which is achieved by nesting the domain of interest, e.g. the Kathmandu Valley, into 3 coarser ones (27, 9, and 3 km resolution). Model validation is performed against the field data as well as satellite data, and the challenge of capturing the necessary atmospheric processes is discussed, before moving forward with the fully coupled chemistry module (WRF-Chem), with local and regional emission databases as input. The effort aims at a better understanding of the atmospheric processes and the air quality impact on the mountain population, as well as the impact of long-range transport, particularly of black carbon aerosol deposition, on the radiative budget over the Himalayan glaciers. The higher rate of snowcap melting and the shrinkage of permafrost noticed by glaciologists are a concern. Better prediction will supply crucial information for forming proper mitigation and adaptation strategies to protect lives across the Himalayas in a changing climate.
Critical noise of majority-vote model on complex networks.
Chen, Hanshuang; Shen, Chuansheng; He, Gang; Zhang, Haifeng; Hou, Zhonghuai
2015-02-01
The majority-vote model with noise is one of the simplest nonequilibrium statistical models that has been extensively studied in the context of complex networks. However, a general relationship between the critical noise at which the order-disorder phase transition takes place and the topology of the underlying networks is still lacking. In this paper, we use heterogeneous mean-field theory to derive the rate equation governing the model's dynamics, which analytically determines the critical noise f(c) in the limit of infinite network size N→∞. The result shows that f(c) depends on the ratio of 〈k〉 to 〈k(3/2)〉, where 〈k〉 and 〈k(3/2)〉 are the average degree and the 3/2-order moment of the degree distribution, respectively. Furthermore, we consider the finite-size effect, where stochastic fluctuations must be taken into account. To this end, we derive the Langevin equation and obtain the potential of the corresponding Fokker-Planck equation. This allows us to calculate the effective critical noise f(c)(N), at which the susceptibility is maximal, in finite-size networks. We find that f(c)-f(c)(N) decays with N in a power-law way and vanishes for N→∞. All the theoretical results are confirmed by extensive Monte Carlo simulations in random k-regular networks, Erdős-Rényi random networks, and scale-free networks. PMID:25768561
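The dynamics analyzed in this abstract can be simulated directly. The sketch below is a minimal illustration, not the authors' analytical treatment: each randomly chosen node adopts its neighborhood majority opinion with probability 1 - f and the opposite with probability f; low noise keeps the system ordered (|m| near 1), while noise near 1/2 destroys order. The graph constructor is a simple stand-in for the random networks used in the paper.

```python
import random

def random_graph(n, k, rng):
    """Toy random network: each node draws at least k partners; edges are symmetric."""
    nbrs = [set() for _ in range(n)]
    for i in range(n):
        while len(nbrs[i]) < k:
            j = rng.randrange(n)
            if j != i:
                nbrs[i].add(j)
                nbrs[j].add(i)
    return [sorted(s) for s in nbrs]

def majority_vote_step(spins, neighbors, f, rng):
    """One asynchronous update: a random node takes its neighborhood majority
    with probability 1 - f and the opposite with probability f (ties random)."""
    i = rng.randrange(len(spins))
    s = sum(spins[j] for j in neighbors[i])
    maj = rng.choice((-1, 1)) if s == 0 else (1 if s > 0 else -1)
    spins[i] = -maj if rng.random() < f else maj

rng = random.Random(7)
N, k, steps = 200, 6, 100_000
neighbors = random_graph(N, k, rng)
results = {}
for f in (0.05, 0.45):
    spins = [1] * N                     # start from the fully ordered state
    for _ in range(steps):
        majority_vote_step(spins, neighbors, f, rng)
    results[f] = abs(sum(spins)) / N    # order parameter |m|
    print(f"f = {f}: |m| = {results[f]:.2f}")
```

Sweeping f on networks with different degree distributions, and locating the susceptibility peak for several N, is how one would check the finite-size scaling of f(c)(N) described above.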
Modeling CO2 Migration at Sleipner Using Models of Varying Complexity
NASA Astrophysics Data System (ADS)
Bandilla, K.; Celia, M. A.; Leister, E.; Guo, B.
2014-12-01
The goal of geologic carbon sequestration (GCS) is to store carbon dioxide (CO2) in the subsurface for time periods on the order of thousands of years. To ensure the safe storage of CO2 in the subsurface, the migration of CO2 and resident brine needs to be predicted. Mathematical modeling is an important tool to predict the migration of both CO2 and brine. Many modeling approaches with different levels of complexity have been applied to the problem of GCS ranging from simple analytic solutions to full three-dimensional reservoir simulators. The choice of modeling approach is often a function of the spatial and temporal scales of the problem, reservoir properties, data availability, available computational resources, and the familiarity of the modeler with a specific modeling approach. The Utsira Formation off the coast of Norway is the target formation of the Sleipner Project, where approximately 1 million tons of CO2 are injected per year. The Utsira Sand consists of a Pliocene sandstone with high permeability and porosity, interbedded with thin mudstone layers that act as baffles for vertical flow. CO2 is injected at the bottom of the formation and collects under the mudstone baffles as it migrates to the top of the formation. The layer of sandstone between the topmost mudstone baffle and caprock is termed the 9th layer. Geometrical and petro-physical data of the 9th layer have been made publicly available, and are the basis for this modeling study. In this study we apply a series of models with different levels of model complexity to the 9th layer of the Utsira Sand. The list of modeling approaches includes (from least complex to most complex): macroscopic invasion percolation model, numerical vertical-equilibrium model with sharp-interface, numerical vertical-equilibrium model with capillary transition zone, vertically-integrated model with dynamic vertical pressure and saturation reconstruction, and full three-dimensional model. The models are compared based on
Biomembrane phospholipid-oxide surface interactions: crystal chemical and thermodynamic basis.
Sahai, Nita
2002-08-15
Quartz has the least favored surface among many oxides for bacterial attachment and for lipid bilayer or micelle interactions. Tetrahedrally coordinated crystalline silica polymorphs are membranolytic toward liposomes, lysosomes, erythrocytes, and macrophages. Amorphous silica, the octahedral silica polymorph (stishovite), and oxides such as Al2O3, Fe2O3, and TiO2 are less cytotoxic. Existing theories for membrane rupture that invoke interactions between oxide surfaces and cell membrane phospholipids (PLs) do not adequately explain these differences in the membranolytic potential of the oxides. The author presents a crystal chemical, thermodynamic model for the initial interaction of oxide surfaces with the quaternary ammonium component of the PL's polar head group. The model includes solvation energy changes and electrostatic forces during adsorption, represented by the dielectric constant of the solid and the charge-to-radius ratio of the adsorbing solute. The nature of oxide-solute interactions compared with oxide-water, solute-water, and water-water interactions determines the membranolytic activity of the oxide, where the solute is TMA+, the quaternary ammonium moiety. Significant membrane rupture, as on quartz, requires unfavorable adsorption entropy (DeltaS(ads,TMA+) < 0) to maximize disruption of normal membrane structure and requires a favorable Gibbs free energy of exchange between TMA+ and the ambient Na+ ions (DeltaG(exc,TMA+/Na+) = DeltaG(ads,TMA+) - DeltaG(ads,Na+) < 0) to maximize the extent of membrane affected. For amorphous silica, DeltaS(ads,TMA+) > 0, so disruption of structure is limited, even though DeltaG(exc,TMA+/Na+) is < 0. Stishovite and other oxides have DeltaS(ads,TMA+) < 0, but DeltaG(exc,TMA+/Na+) is > 0 at the acidic to circumneutral pHs of cellular and subcellular organelle fluids. The model predicts the correct sequence of membranolytic ability: quartz >= amorphous SiO2 > Al2O3 > Fe2O3 > TiO2. The model thus explains the relatively poor adhesion
Thermophysical Model of S-complex NEAs: 1627 Ivar
NASA Astrophysics Data System (ADS)
Crowell, Jenna; Howell, Ellen S.; Magri, Christopher; Fernandez, Yanga R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J., Jr.
2016-01-01
We present an updated thermophysical model of 1627 Ivar, an Amor-class near-Earth asteroid (NEA) with a taxonomic type of Sqw [1]. Ivar's large size and close approach to Earth in 2013 (minimum distance 0.32 AU) provided an opportunity to observe the asteroid over many different viewing angles for an extended period of time, which we have utilized to generate a shape and thermophysical model of Ivar, allowing us to discuss the implications that these results have on the regolith of this asteroid. Using the software SHAPE [2,3], we updated the nonconvex shape model of Ivar, which was constructed by Kaasalainen et al. [4] using photometry. We incorporated 2013 radar data and CCD lightcurves, using the Arecibo Observatory's 2380 MHz radar and the 0.35 m telescope at the Palmer Divide Station respectively, to create a shape model with higher surface detail. We found Ivar to be elongated with maximum extended lengths along principal axes of 12 x 5 x 6 km and a rotation period of 4.795162 ± 5.4 × 10^-6 hours [5]. In addition to these radar data and lightcurves, we also observed Ivar in the near-IR using the SpeX instrument at the NASA IRTF. These data cover a wide range of Ivar's rotational longitudes and viewing geometries. We have used SHERMAN [6,7] with input parameters such as the asteroid's IR emissivity, optical scattering law, and thermal inertia, in order to complete thermal computations based on our shape model and known spin state. Using this procedure, we find which reflective, thermal, and surface properties best reproduce the observed spectra. This allows us to characterize properties of the asteroid's regolith and study heterogeneity of the surface. We will compare these results with those of other S-complex asteroids to better understand this asteroid type and the uniqueness of 1627 Ivar. [1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Magri, C. et al. 2011, Icarus 214, 210-227. [3] Crowell, J. et al. 2014, AAS/DPS 46 [4] Kaasalainen, M. et al. 2004, Icarus 167, 178
A New Approach to Modelling Student Retention through an Application of Complexity Thinking
ERIC Educational Resources Information Center
Forsman, Jonas; Linder, Cedric; Moll, Rachel; Fraser, Duncan; Andersson, Staffan
2014-01-01
Complexity thinking is relatively new to education research and has rarely been used to examine complex issues in physics and engineering education. Issues in higher education such as student retention have been approached from a multiplicity of perspectives and are recognized as complex. The complex system of student retention modelling in higher…
Modelling Complex Systems by Integration of Agent-Based and Dynamical Systems Models
NASA Astrophysics Data System (ADS)
Bosse, Tibor; Sharpanskykh, Alexei; Treur, Jan
Existing models for complex systems are often based on quantitative, numerical methods such as Dynamical Systems Theory (DST) [Port and Gelder 1995]. Such approaches often use numerical variables to describe global aspects and specify how they affect each other over time. An advantage of such approaches is that numerical approximation methods and software are available for simulation.
NASA Astrophysics Data System (ADS)
Gong, Wei; Duan, Qingyun; Li, Jianduo; Wang, Chen; Di, Zhenhua; Ye, Aizhong; Miao, Chiyuan; Dai, Yongjiu
2016-03-01
Parameter specification is an important source of uncertainty in large, complex geophysical models. These models generally have multiple model outputs that require multiobjective optimization algorithms. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this paper, a multiobjective adaptive surrogate modeling-based optimization (MO-ASMO) algorithm is introduced that aims to reduce computational cost while maintaining optimization effectiveness. Geophysical dynamic models usually have a prior parameterization scheme derived from the physical processes involved, and our goal is to improve all of the objectives by parameter calibration. In this study, we developed a method for directing the search processes toward the region that can improve all of the objectives simultaneously. We tested the MO-ASMO algorithm against NSGA-II and SUMO with 13 test functions and a land surface model - the Common Land Model (CoLM). The results demonstrated the effectiveness and efficiency of MO-ASMO.
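The surrogate-assisted loop described above alternates cheap searches on an approximate model with occasional expensive true-model runs. The sketch below is a deliberately simplified stand-in, not the published MO-ASMO algorithm: it uses an inverse-distance-weighted surrogate, a crude sum-of-objectives scalarization for the inner search, and hypothetical two-objective test functions, but it shows the overall structure of fit, search, evaluate, refit.

```python
import math
import random

def true_objectives(x):
    """Stand-in for an expensive model run (hypothetical 2-objective problem)."""
    f1 = x[0] ** 2 + x[1] ** 2
    f2 = (x[0] - 1.0) ** 2 + x[1] ** 2
    return (f1, f2)

def surrogate(samples, x):
    """Cheap inverse-distance-weighted estimate of both objectives at x."""
    num, den = [0.0, 0.0], 0.0
    for xs, fs in samples:
        w = 1.0 / (math.dist(xs, x) + 1e-9) ** 2
        den += w
        num[0] += w * fs[0]
        num[1] += w * fs[1]
    return (num[0] / den, num[1] / den)

def dominates(a, b):
    """Pareto dominance: a is no worse in all objectives and better in one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def surrogate_mo_loop(n_init=20, n_iter=15, seed=0):
    rng = random.Random(seed)
    rand_x = lambda: (rng.uniform(-2, 2), rng.uniform(-2, 2))
    samples = [(x, true_objectives(x)) for x in (rand_x() for _ in range(n_init))]
    for _ in range(n_iter):
        # inner search on the surrogate only; the real MO-ASMO instead steers
        # candidates toward improving all objectives simultaneously
        cands = [rand_x() for _ in range(500)]
        best = min(cands, key=lambda x: sum(surrogate(samples, x)))
        samples.append((best, true_objectives(best)))   # one expensive run
    # non-dominated subset of everything evaluated so far
    return [s for s in samples
            if not any(dominates(t[1], s[1]) for t in samples)]

front = surrogate_mo_loop()
print(f"{len(front)} non-dominated points found")
```

The computational saving comes from the budget accounting: each iteration costs one true-model evaluation plus many surrogate calls, whereas NSGA-II-style algorithms would spend a true evaluation on every candidate.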
Monzel, C.; Schmidt, D.; Kleusch, C.; Kirchenbüchler, D.; Seifert, U.; Smith, A-S; Sengupta, K.; Merkel, R.
2015-01-01
Stochastic displacements or fluctuations of biological membranes are increasingly recognized as an important aspect of many physiological processes, but hitherto their precise quantification in living cells was limited due to a lack of tools to accurately record them. Here we introduce a novel technique—dynamic optical displacement spectroscopy (DODS), to measure stochastic displacements of membranes with unprecedented combined spatiotemporal resolution of 20 nm and 10 μs. The technique was validated by measuring bending fluctuations of model membranes. DODS was then used to explore the fluctuations in human red blood cells, which showed an ATP-induced enhancement of non-Gaussian behaviour. Plasma membrane fluctuations of human macrophages were quantified to this accuracy for the first time. Stimulation with a cytokine enhanced non-Gaussian contributions to these fluctuations. Simplicity of implementation, and high accuracy make DODS a promising tool for comprehensive understanding of stochastic membrane processes. PMID:26437911
Neves, Ana Rute; Nunes, Cláudia; Reis, Salette
2016-01-01
Resveratrol is a polyphenol compound with great value in cancer therapy, cardiovascular protection, and neurodegenerative disorders. The mechanism by which resveratrol exerts such pleiotropic effects is not yet clear, and there is a pressing need to understand the influence of this compound on the regulation of lipid domain formation in membrane structure. The aim of the present study was to reveal potential molecular interactions between resveratrol and the lipid rafts found in cell membranes by means of Förster resonance energy transfer, DPH fluorescence quenching, and a Triton X-100 detergent resistance assay. Liposomes composed of egg phosphatidylcholine, cholesterol, and sphingomyelin were used as model membranes. The results revealed that resveratrol induces phase separation and the formation of liquid-ordered domains in bilayer structures. The formation of such tightly packed lipid rafts is important for different signal transduction pathways, through the regulation of membrane-associated proteins, which can justify several pharmacological activities of this compound. PMID:26456556
Effect of Phosphatidic Acid on Biomembrane: Experimental and Molecular Dynamics Simulations Study.
Kwolek, Urszula; Kulig, Waldemar; Wydro, Paweł; Nowakowska, Maria; Róg, Tomasz; Kepczynski, Mariusz
2015-08-01
We consider the impact of phosphatidic acid (namely, 1,2-dioleoyl-sn-glycero-3-phosphate, DOPA) on the properties of a zwitterionic (1,2-dipalmitoyl-sn-glycero-3-phosphocholine, DPPC) bilayer used as a model system for protein-free cell membranes. For this purpose, experimental measurements were performed using differential scanning calorimetry and the Langmuir monolayer technique at physiological pH. Moreover, atomistic-scale molecular dynamics (MD) simulations were performed to gain information on the mixed bilayer's molecular organization. The results of the monolayer studies clearly showed that the DPPC/DOPA mixtures are nonideal and the interactions between lipid species change from attractive, at low contents of DOPA, to repulsive, at higher contents of that component. In accordance with these results, the MD simulations demonstrated that both monoanionic and dianionic forms of DOPA have an ordering and condensing effect on the mixed bilayer at low concentrations. For the DOPA monoanions, this is the result of both (i) strong electrostatic interactions between the negatively charged oxygen of DOPA and the positively charged choline groups of DPPC and (ii) conformational changes of the lipid acyl chains, leading to their tight packing according to the so-called "umbrella model", in which large headgroups of DPPC shield the hydrophobic part of DOPA (the conical shape lipid) from contact with water. In the case of the DOPA dianions, cation-mediated clustering was observed. Our results provide a detailed molecular-level description of the lipid organization inside the mixed zwitterionic/PA membranes, which is fully supported by the experimental data. PMID:26167676
How Good Are Statistical Models at Approximating Complex Fitness Landscapes?
du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian
2016-09-01
Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
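The regression models evaluated here, with terms for single mutations and pairwise interactions, can be sketched on a toy binary-sequence landscape (a hypothetical stand-in for the quasi-empirical RNA landscape; the fitting is plain least squares by stochastic gradient descent, not the authors' pipeline). The gap between training and test R^2 is the quantity of interest: it measures how well a sparse sample generalizes beyond the sequences actually measured.

```python
import itertools
import random

random.seed(0)
L = 5
# Toy epistatic landscape: additive fitness plus random noise, one value
# per binary sequence of length L (32 "genotypes" in total).
table = {seq: sum(seq) + random.gauss(0, 1)
         for seq in itertools.product([0, 1], repeat=L)}

def features(seq):
    """Intercept, single-site terms, and all pairwise interaction terms."""
    pairs = [seq[i] * seq[j] for i, j in itertools.combinations(range(L), 2)]
    return [1.0] + list(seq) + pairs

def fit(train, epochs=2000, lr=0.05):
    """Least-squares fit of the single + pairwise model by SGD."""
    w = [0.0] * len(features(train[0][0]))
    for _ in range(epochs):
        for seq, y in train:
            phi = features(seq)
            err = sum(wi * p for wi, p in zip(w, phi)) - y
            w = [wi - lr * err * p for wi, p in zip(w, phi)]
    return w

def r_squared(w, data):
    preds = [sum(wi * p for wi, p in zip(w, features(s))) for s, _ in data]
    ys = [y for _, y in data]
    ybar = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, preds))
    ss_tot = sum((y - ybar) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

data = list(table.items())
random.shuffle(data)
train, test = data[:20], data[20:]   # a sparse sample of the landscape
w = fit(train)
r2_train, r2_test = r_squared(w, train), r_squared(w, test)
print(f"train R^2 = {r2_train:.2f}, test R^2 = {r2_test:.2f}")
```

Repeating this with different sampling regimes, random versus systematic versus sequences drawn from an evolving population, is the comparison the abstract describes.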
Structural models for alkali-metal complexes of polyacetylene
NASA Astrophysics Data System (ADS)
Murthy, N. S.; Shacklette, L. W.; Baughman, R. H.
1990-02-01
Structural models for a stage-2 complex are proposed for polyacetylene doped with less than about 0.1 potassium or rubidium atoms per carbon. These structures utilize as a basic motif an alkali-metal column surrounded by four planar-zig-zag polyacetylene chains, a structure found at the highest dopant levels. In the new stage-2 structures, each polyacetylene chain neighbors only one alkali-metal column, so the phase contains four polymer chains per alkali-metal column. Basic structural aspects for stage-1 and stage-2 structures are now established for both potassium- and rubidium-doped polyacetylene. X-ray-diffraction and electrochemical data show that undoped and doped phases coexist at low dopant concentrations (<0.06 K atom per C). X-ray-diffraction data, down to a Bragg spacing of 1.3 Å, for polyacetylene heavily doped with potassium (0.125-0.167 K atom per C) is fully consistent with our previously proposed stage-1 tetragonal unit cell containing two polyacetylene chains per alkali-metal column. There is no evidence for our samples requiring a distortion to a monoclinic unit cell as reported by others for heavily doped samples. The nature of structural transformations and the relationship between structure and electronic properties are discussed for potassium-doped polyacetylene.
Modeling the complex dynamics of enzyme-pathway coevolution.
Schütte, Moritz; Skupin, Alexander; Segrè, Daniel; Ebenhöh, Oliver
2010-12-01
Metabolic pathways must have coevolved with the corresponding enzyme gene sequences. However, the evolutionary dynamics ensuing from the interplay between metabolic networks and genomes is still poorly understood. Here, we present a computational model that generates putative evolutionary walks on the metabolic network using a parallel evolution of metabolic reactions and their catalyzing enzymes. Starting from an initial set of compounds and enzymes, we expand the metabolic network iteratively by adding new enzymes with a probability that depends on their sequence-based similarity to already present enzymes. Thus, we obtain simulated time courses of chemical evolution in which we can monitor the appearance of new metabolites, enzyme sequences, or even entire organisms. We observe that new enzymes do not appear gradually but rather in clusters which correspond to enzyme classes. A comparison with Brownian motion dynamics indicates that our system displays biased random walks similar to diffusion on the metabolic network with long-range correlations. This suggests that a quantitative molecular principle may underlie the appearance of punctuated equilibrium dynamics, whereby enzymes occur in bursts rather than by phyletic gradualism. Moreover, the simulated time courses lead to a putative time-order of enzyme and organism appearance. Among the patterns we detect in these evolutionary trends is a significant correlation between the time of appearance and their enzyme repertoire size. Hence, our approach to metabolic evolution may help understand the rise in complexity at the biochemical and genomic levels. PMID:21198127
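The iterative network-expansion procedure described above can be caricatured in a few lines. The sketch below uses hypothetical toy data and a crude string-overlap similarity, not the authors' model: a reaction becomes available once its substrates are producible, and its enzyme is accepted with a probability that grows with sequence similarity to the current repertoire, so dissimilar enzyme classes can remain excluded while similar ones appear in bursts.

```python
import random

def expand_network(reactions, seed_compounds, seed_enzymes, similarity,
                   steps=500, rng=random.Random(1)):
    """Toy network-expansion walk: each step, pick a feasible reaction
    (enzyme absent, substrates producible) and accept its enzyme with a
    probability given by its best similarity to the current repertoire.
    Returns the acceptance time of each enzyme that appeared."""
    compounds, enzymes, times = set(seed_compounds), set(seed_enzymes), {}
    for t in range(steps):
        feasible = [(e, subs, prods) for e, subs, prods in reactions
                    if e not in enzymes and subs <= compounds]
        if not feasible:
            break
        e, subs, prods = rng.choice(feasible)
        p = max(similarity(e, e2) for e2 in enzymes)
        if rng.random() < p:
            enzymes.add(e)
            compounds |= prods
            times[e] = t
    return times

def sim(a, b):
    """Crude stand-in for sequence similarity: Jaccard overlap of characters."""
    return len(set(a) & set(b)) / len(set(a) | set(b))

# Hypothetical reactions: (enzyme "sequence", substrates, products).
reactions = [("abc", {"X"}, {"Y"}), ("abd", {"Y"}, {"Z"}),
             ("pqr", {"Y"}, {"W"}), ("abe", {"Z"}, {"V"})]
times = expand_network(reactions, seed_compounds={"X"},
                       seed_enzymes={"abz"}, similarity=sim)
print("order of appearance:", sorted(times, key=times.get))
```

Here the "pqr" enzyme shares no characters with the repertoire, so its acceptance probability is zero and its branch of the network never opens, a miniature of the clustered, class-by-class appearance the simulations report.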
Modeling the complex dynamics of enzyme-pathway coevolution
NASA Astrophysics Data System (ADS)
Schütte, Moritz; Skupin, Alexander; Segrè, Daniel; Ebenhöh, Oliver
2010-12-01
Metabolic pathways must have coevolved with the corresponding enzyme gene sequences. However, the evolutionary dynamics ensuing from the interplay between metabolic networks and genomes is still poorly understood. Here, we present a computational model that generates putative evolutionary walks on the metabolic network using a parallel evolution of metabolic reactions and their catalyzing enzymes. Starting from an initial set of compounds and enzymes, we expand the metabolic network iteratively by adding new enzymes with a probability that depends on their sequence-based similarity to already present enzymes. Thus, we obtain simulated time courses of chemical evolution in which we can monitor the appearance of new metabolites, enzyme sequences, or even entire organisms. We observe that new enzymes do not appear gradually but rather in clusters which correspond to enzyme classes. A comparison with Brownian motion dynamics indicates that our system displays biased random walks similar to diffusion on the metabolic network with long-range correlations. This suggests that a quantitative molecular principle may underlie the appearance of punctuated equilibrium dynamics, whereby enzymes occur in bursts rather than by phyletic gradualism. Moreover, the simulated time courses lead to a putative time-order of enzyme and organism appearance. Among the patterns we detect in these evolutionary trends is a significant correlation between the time of appearance and their enzyme repertoire size. Hence, our approach to metabolic evolution may help understand the rise in complexity at the biochemical and genomic levels.
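The similarity-biased expansion rule described in this abstract can be caricatured in a few lines; the string "sequences" and the positional-match similarity below are invented stand-ins for the sequence-based similarity the model actually uses.

```python
import random

# Caricature of similarity-biased metabolic-network expansion: at each
# step a candidate enzyme is accepted with probability equal to its
# maximum similarity to enzymes already present. Enzymes are modeled as
# short strings; similarity = fraction of matching positions (invented).

def similarity(a, b):
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def expand(seed_enzymes, candidates, steps, seed=1):
    rng = random.Random(seed)
    present = list(seed_enzymes)
    appearance_order = []  # simulated time course of enzyme appearance
    for _ in range(steps):
        cand = rng.choice(candidates)
        if cand in present:
            continue
        p = max(similarity(cand, e) for e in present)
        if rng.random() < p:
            present.append(cand)
            appearance_order.append(cand)
    return present, appearance_order

seeds = ["ACDE", "ACDF"]
pool = ["ACDG", "ACEG", "WXYZ", "ACDE"]
present, order = expand(seeds, pool, steps=200)
```

Enzymes similar to those already present join quickly while dissimilar ones ("WXYZ" here) never do, which is the mechanism behind the clustered, burst-like appearance of enzyme classes the abstract reports.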
O'Leary, T J; Ross, P D; Lieber, M R; Levin, I W
1986-01-01
Cyclosporine A (CSA)-dipalmitoylphosphatidylcholine (DPPC) interactions were investigated using scanning calorimetry, infrared spectroscopy, and Raman spectroscopy. CSA reduced both the temperature and the maximum heat capacity of the lipid bilayer gel-to-liquid crystalline phase transition; the relationship between the shift in transition temperature and CSA concentration indicates that the peptide does not partition ideally between DPPC gel and liquid crystalline phases. This nonideality can be accounted for by excluded volume interactions between peptide molecules. CSA exhibited a similar but much more pronounced effect on the pretransition; at concentrations of 1 mol % CSA the amplitude of the pretransition was less than 20% of its value in the pure lipid. Raman spectroscopy confirmed that the effects of CSA on the phase transitions are not accompanied by major structural alterations in either the lipid headgroup or acyl chain regions at temperatures away from the phase changes. Both infrared and Raman spectroscopic results demonstrated that CSA in the lipid bilayer exists largely in a beta-turn conformation, as expected from single crystal x-ray data; the lipid phase transition does not induce structural alterations in CSA. Although the polypeptide significantly affects DPPC model membrane bilayers, CSA neither inhibited hypotonic hemolysis nor caused erythrocyte hemolysis, in contrast to many chemical agents that are believed to act through membrane-mediated pathways. Thus, agents, such as CSA, that perturb phospholipid phase transitions do not necessarily cause functional changes in cell membranes. PMID:3755063
An ytterbium(III) complex of daunomycin, a model metal complex of anthracycline antibiotics
Ming, Li-June; Wei, Xiangdong
1994-10-12
Here, the authors report on structural studies of a daunomycin-Yb³⁺ complex. Daunomycin is a prospective anthracycline antibiotic. Both optical and NMR spectroscopy are used in the structural investigation.
González, Carmen M; Pizarro-Guerra, Guadalupe; Droguett, Felipe; Sarabia, Mauricio
2015-10-01
Organic thin film deposition presents a multiplicity of challenges. Most notably, issues of layer thickness control, homogeneity and subsequent characterization have not yet been fully resolved. Phospholipid bilayers are frequently used to model cell membranes. Bilayers can be disrupted by changes in mechanical stress, pH and temperature. The strategy presented in this article is based on a thermal study of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) through analysis of slight changes in material thickness. The sample was prepared by depositing X- or Y-type DPPC bilayers using the Langmuir-Blodgett technique over a silicon wafer. Thus, molecular inclination degree, mobility and stability of phases and their respective phase transitions were observed and analyzed through ellipsometric techniques during heating cycles, and corroborated by Grazing Incidence X-ray Diffraction and Atomic Force Microscopy measurements. DPPC functional group vibrations were detected by Raman spectral analysis. Scanning Electron Microscopy with a Field Emission gun (FE-SEM) and conventional SEM micrographs were also used to characterize sample morphology, demonstrating that homogeneous bilayer formations coexist with some vesicles or micelles at the surface level. Contact angle measurements corroborate DPPC surface wettability, which is mainly related to the surface treatment methods of the silicon wafer used to create either a hydrophilic or hydrophobic substrate surface. Also, shifting and intensity changes of certain functional groups in the Raman spectra confirm the presence of water between DPPC layers. Signal analysis detects certain interdigitation between aliphatic chains. These studies form the basis of future biosensors based on proteins or antimicrobial peptides stabilized in phospholipid bilayers over thin hydrogel films as a moist scaffold. PMID:26150275
Three-dimensional modeling of canopy flow in complex terrain
NASA Astrophysics Data System (ADS)
Xu, X.; Yi, C.; Montagnani, L.
2013-12-01
Flows within and just above a forest canopy over mountainous terrain are highly complicated and substantially influence the biosphere-atmosphere exchange of mass and energy. Because of significant spatial variation, canopy flow in complex terrain is poorly understood from point-based tower measurements alone. We employ a numerical model integrated with biogenic CO2 processes to examine the impacts of topography, canopy structure, and synoptic atmospheric motion on canopy flow and associated CO2 transport in an alpine forest, with special focus on stable nocturnal conditions when biogenic CO2 emission is active. Our model prediction is in better agreement with tower measurements when a background synoptic wind is present, which leads to better large-scale mixing, whereas local slope flow in the modeled domain is purely thermally driven because the surrounding mountain-valley circulation is neglected. Our results show that the large-scale synoptic wind is modified by local slope-canopy flow within and just above the canopy. When the synoptic wind is down-slope (Figure 1a), a recirculation forms on the downwind slope, with cool air and a high accumulation of CO2 in front of tall and dense vegetation. When the synoptic wind is up-slope (Figure 1b), canopy flow at the higher elevation of the slope is in the same direction as the synoptic wind, while canopy flow at the lower part of the slope blows down-slope. The up-slope wind causes better mixing in the canopy and leads to smaller CO2 accumulation close to the slope surface. The local down-slope wind (Figure 1c) causes a rich and deep CO2 build-up in the downwind direction on the lower slope. Our numerical results demonstrate that a three-dimensional CFD approach is a useful tool for understanding the relationships between tower-point measurements and the surrounding field distributions. Acknowledgement: This research was supported by NSF Grants ATM-0930015, CNS-0958379 & CNS-0855217, PSC-CUNY ENHC-42-64 & CUNY HPCC.
Parker, G T
2011-01-01
This paper extends previous work comparing the response of water quality models under uncertainty. A new model, the River Water Quality Model no. 1 (RWQM1), is compared to two commonly used water quality models from previous work. Additionally, the effect of conceptual model scaling within a single modelling framework, as allowed by RWQM1, is explored under uncertainty. Model predictions are examined against real-world data for the Potomac River, with Generalized Likelihood Uncertainty Estimation used to assess model response surfaces to uncertainty. Generally, it was found that there are tangible model characteristics that are closely tied to model complexity, and thresholds for these characteristics were discussed. This novel work has yielded not only an illustrative example but also a conceptually scalable water quality modelling tool, alongside defined metrics to assess when scaling is required under uncertainty. The resulting framework holds substantial, unique promise for a new generation of modelling tools that are capable of addressing classically intractable problems. PMID:21252443
Using SysML to model complex systems for security.
Cano, Lester Arturo
2010-08-01
As security systems integrate more information technology, their design has tended to become more complex. Some of the most difficult issues in designing Complex Security Systems (CSS) are: capturing requirements, defining hardware interfaces, defining software interfaces, and integrating technologies such as radio systems, Voice over IP systems, and situational awareness systems.
Modeling the self-similarity in complex networks based on Coulomb's law
NASA Astrophysics Data System (ADS)
Zhang, Haixin; Wei, Daijun; Hu, Yong; Lan, Xin; Deng, Yong
2016-06-01
Recently, the self-similarity of complex networks has attracted much attention. The fractal dimension of complex networks is an open issue, and hub repulsion plays an important role in fractal topologies. This paper models the repulsion among the nodes of a complex network in the calculation of the network's fractal dimension. Coulomb's law is adopted to represent the repulsion between two nodes of the network quantitatively. A new method to calculate the fractal dimension of complex networks is proposed. The Sierpinski triangle network and some real complex networks are investigated. The results illustrate that the new model of self-similarity of complex networks is reasonable and efficient.
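The Coulomb-style repulsion described in this abstract can be sketched minimally by treating node degrees as charges and shortest-path length as distance; this specific mapping is an illustrative assumption, and the paper's exact formulation may differ.

```python
from collections import deque

# Toy Coulomb-like node repulsion: charge = node degree, distance =
# shortest-path length, repulsion = k * q_u * q_v / d**2. The degree-as-
# charge mapping is an illustrative assumption, not the paper's exact rule.

def shortest_path_len(adj, u, v):
    seen, queue = {u}, deque([(u, 0)])
    while queue:
        node, d = queue.popleft()
        if node == v:
            return d
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, d + 1))
    return None  # disconnected

def repulsion(adj, u, v, k=1.0):
    d = shortest_path_len(adj, u, v)
    if not d:
        return 0.0  # same node or disconnected
    return k * len(adj[u]) * len(adj[v]) / d ** 2

# Star plus a tail: hub 0 linked to 1..3, and 3 linked to 4.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0, 4], 4: [3]}
f_hub = repulsion(adj, 0, 3)  # hub vs mid-degree neighbour, distance 1
f_far = repulsion(adj, 0, 4)  # hub vs distant leaf, distance 2
```

High-degree nodes close to each other repel most strongly, which captures the hub-repulsion intuition that drives fractal topology in the paper's box-covering analysis.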
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2010-01-01
Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation represent the physics of the behavior in its entirety accurately. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor equation has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points, the initial and final points. The exponent describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation. A unique solution to this problem is a multi-factor equation of product form. Each factor is of the form (1 - x_i/x_f)^e_i, where x_i is the initial value, usually at ambient conditions, x_f the final value, and e_i the exponent that makes the represented curve monotonic while meeting the initial and final values. The exponents are either evaluated from test data or by technical judgment. A minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions. Seven factors were required
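The product form described in this abstract can be sketched numerically as follows; the reference value, factor values, and exponents are hypothetical illustrations, not the launch-tank data.

```python
# Sketch of the Multi-Factor Interaction Model (MFIM) product form.
# Each factor contributes a term (1 - x_i/x_f) ** e_i; the modeled
# property is their product scaled by a reference value. Factor values
# and exponents below are hypothetical, not fitted launch-tank data.

def mfim(reference, factors):
    """factors: list of (x_initial, x_final, exponent) triples."""
    result = reference
    for x_i, x_f, e_i in factors:
        result *= (1.0 - x_i / x_f) ** e_i
    return result

# Example: three hypothetical factors (e.g. temperature, moisture, load).
value = mfim(100.0, [(20.0, 100.0, 0.5), (0.1, 1.0, 1.0), (5.0, 50.0, 2.0)])
```

Because each term lies between 0 and 1 for x_i below x_f, every factor can only attenuate the reference value, and each exponent controls how sharply its factor's influence grows along the monotonic path from the initial to the final condition.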
Application of surface complexation models to anion adsorption by natural materials
Technology Transfer Automated Retrieval System (TEKTRAN)
Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...
Dagne, Getachew A; Brown, C Hendricks; Howe, George W
2007-09-01
This article presents new methods for modeling the strength of association between multiple behaviors in a behavioral sequence, particularly those involving substantively important interaction patterns. Modeling and identifying such interaction patterns becomes more complex when behaviors are assigned to more than two categories, as is the case for most observational research. The authors propose multilevel empirical Bayes methods to overcome the challenges inherent in such data. Furthermore, these methods allow the study of how variation in interaction patterns can mediate the effects of antecedents or intervention on distal outcomes. New procedures are developed to compare alternative mediation models and pinpoint which random effects operate as mediators. These models are then applied to observational data taken from a study of the behavioral interactions of 254 couples. PMID:17784796
Tastan Bishop, Ozlem; Kroon, Matthys
2011-12-01
This paper develops and evaluates large-scale calculation of 3D structures of protein complexes by homology modeling as a promising new approach for protein docking. The complexes investigated were papain-like cysteine proteases and their protein inhibitors, which play numerous roles in human and parasitic metabolisms. The structural modeling was performed in two parts. For the first part (evaluation set), nine crystal structure complexes were selected, 1325 homology models of known complexes were rebuilt by various templates including hybrids, allowing an analysis of the factors influencing the accuracy of the models. The important considerations for modeling the interface were protease coverage and inhibitor sequence identity. In the second part (study set), the findings of the evaluation set were used to select appropriate templates to model novel cysteine protease-inhibitor complexes from human and malaria parasites Plasmodium falciparum and Plasmodium vivax. The energy scores, considering the evaluation set, indicate that the models are of high accuracy. PMID:21365221
A note on the Dirichlet problem for model complex partial differential equations
NASA Astrophysics Data System (ADS)
Ashyralyev, Allaberen; Karaca, Bahriye
2016-08-01
Complex model partial differential equations of arbitrary order are considered. The uniqueness of the Dirichlet problem is studied. It is proved that the Dirichlet problem for higher-order complex partial differential equations with one complex variable has infinitely many solutions.
Carotenoid binding to proteins: Modeling pigment transport to lipid membranes.
Reszczynska, Emilia; Welc, Renata; Grudzinski, Wojciech; Trebacz, Kazimierz; Gruszecki, Wieslaw I
2015-10-15
Carotenoid pigments play numerous important physiological functions in the human organism. Particularly special is the role of lutein and zeaxanthin in the retina of the eye, and in particular in its central part, the macula lutea. In the retina, carotenoids can be directly present in the lipid phase of the membranes or remain bound to protein-pigment complexes. In this work we address the problem of binding of carotenoids to proteins and the possible role of such structures in pigment transport to lipid membranes. The interaction of three carotenoids, beta-carotene, lutein and zeaxanthin, with two proteins, bovine serum albumin and glutathione S-transferase (GST), was investigated with the application of molecular spectroscopy techniques: UV-Vis absorption, circular dichroism and Fourier transform infrared spectroscopy (FTIR). The interaction of pigment-protein complexes with model lipid bilayers formed with egg yolk phosphatidylcholine was investigated with the application of FTIR, Raman imaging of liposomes and an electrophysiological technique in planar lipid bilayer models. The results show that in all cases of the proteins and pigments studied, carotenoids bind to the protein and the complexes formed can interact with membranes. This means that protein-carotenoid complexes are capable of playing a physiological role in pigment transport to biomembranes. PMID:26361975
Complex-Energy Shell-Model Description of Alpha Decay
Id Betan, R.; Nazarewicz, Witold
2011-01-01
In his pioneering work on alpha decay, Gamow assumed that the alpha particle formed inside the nucleus tunnels through the barrier of the alpha-daughter potential. The corresponding metastable state can be viewed as a complex-energy solution of the time-independent Schroedinger equation with an outgoing boundary condition. The formation of the alpha cluster, missing in the original Gamow formulation, can be described within R-matrix theory in terms of the formation amplitude. In this work, the alpha-decay process is described by computing the formation amplitude and barrier penetrability in a large complex-energy configuration space spanned by the complex-energy eigenstates of a finite Woods-Saxon (WS) potential. The proper normalization of the decay channel is essential, as it strongly modifies the alpha-decay spectroscopic factor. Test calculations are carried out for the ^{212}Po alpha decay.
Boily, Jean F
2007-01-25
The applicability of separating charges of oxyanions across inner- and outer-Helmholtz planes according to the Charge Distribution model was tested by investigating the theoretical charge distribution in a range of metal-sulphate complexes using the methods of Atoms In Molecules and of the Electron Localisation Function. Density Functional Theory gas-phase geometry optimisation calculations revealed that unbound oxygens of the sulphate molecules contracted to relatively constant S-O bond lengths of 1.432 ± 0.019 (3) Å irrespective of the bond strength with the metal ions. The populations of the valence basins of the unbound oxygens also remained relatively constant, showing that even the strongest complexation induces very little charge redistribution across the sulphate molecule. Maps of the Laplacian of the electron density and of the Electron Localisation Function revealed that although charge is relatively localised at oxygen centers there is not necessarily a clear charge separation between the inner- and outer-Helmholtz planes. The Proximity and Smit models are presented as alternative surface complexation schemes that provide a molecularly and electronically consistent depiction of the mineral/solution interface. Their capability of accounting for results from large-scale molecular models is also presented. It should nonetheless be emphasized that the Charge Distribution model remains a valuable approach and should have the best applicability at low surface loadings and with molecules with sizes similar to those of the compact layer.
NASA Astrophysics Data System (ADS)
Holzmann, Hubert; Massmann, Carolina
2015-04-01
A multitude of hydrological model types has been developed during the past decades. Most of them use a fixed design to describe the variable hydrological processes, assumed to be representative across the whole range of spatial and temporal scales. This assumption is questionable, as it is evident that the runoff formation process is driven by dominant processes which can vary among different basins. Furthermore, model application and the interpretation of results are limited by the data available to identify the particular sub-processes, since most models are calibrated and validated only with discharge data. It can therefore be hypothesized that simpler model designs, focusing only on the dominant processes, can achieve comparable results with the benefit of fewer parameters. In the current contribution a modular model concept is introduced, which allows the integration or omission of hydrological sub-processes depending on the catchment characteristics and data availability. Key elements of the process modules refer to (1) storage effects (interception, soil), (2) transfer processes (routing), (3) threshold processes (percolation, saturation overland flow) and (4) split processes (rainfall excess). Based on hydro-meteorological observations in an experimental catchment in the Slovak region of the Carpathian mountains, a comparison of several model realizations with different degrees of complexity is discussed. A special focus is given to model parameter sensitivity, estimated by a Markov chain Monte Carlo approach. Furthermore, the identification of dominant processes by means of Sobol's method is introduced. It could be shown that a flexible model design, even a simple concept, can reach performance comparable to that of the standard model type (HBV-type). The main benefit of the modular concept is the individual adaptation of the model structure with respect to data and process availability, and the option for parsimonious model design.
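The Markov chain Monte Carlo estimation of parameter sensitivity mentioned in this abstract can be illustrated with a minimal Metropolis sampler; the one-parameter Gaussian log-posterior below is an invented stand-in, not the HBV-type model.

```python
import math
import random

# Minimal Metropolis sampler sketching MCMC-based exploration of a model
# parameter's posterior (whose spread is one proxy for sensitivity). The
# Gaussian log-posterior is an invented stand-in for a hydrological model.

def metropolis(log_post, start, step, n, seed=0):
    rng = random.Random(seed)
    chain, x = [], start
    lp = log_post(x)
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lp_cand = log_post(cand)
        # Accept with probability min(1, exp(lp_cand - lp)).
        if rng.random() < math.exp(min(0.0, lp_cand - lp)):
            x, lp = cand, lp_cand
        chain.append(x)
    return chain

# Toy posterior: Gaussian centred on 2.0 with standard deviation 0.5.
chain = metropolis(lambda t: -0.5 * ((t - 2.0) / 0.5) ** 2, 0.0, 0.4, 5000)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
```

A sharply peaked chain indicates a well-identified (sensitive) parameter, while a broad or flat chain flags a parameter the discharge data cannot constrain, which is the diagnostic the modular-model comparison relies on.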
Kellogg, R M
1982-01-01
In terms of the reporting of accomplished chemistry this review can do no more than give an indication of the rapid progress in the branch of bioorganic modelling based on the use of macrocyclic compounds that (usually) act as complexing agents. What remains to be done, however, is to point out problems that have not been satisfactorily solved and to suggest other profitable areas of investigation. From the material accumulated in this review one can draw the conclusion that crown (or cryptate) systems in particular offer special advantages in bioorganic modelling because such compounds can, enzyme-like, complex a potential substrate. On the basis of quite simple binding considerations, coupled with an analysis of steric interactions, accurate predictions of the stereochemistry of the complex can be made. The inclusion of catalytic groups in the crown (or cryptate) system and of reactive functional groups in the substrate is then done in such a fashion that the stereoelectronic arrangement is compatible with the predicted geometry of the complex. However, the good complexing ability of the ligand is paradoxically often its greatest failing in terms of developing a system in which the functionalized ligand acts truly as a catalyst. As seen from much of the chemistry discussed in this review, the ligand is incapable of the double task of complexing the substrate yet releasing the product in an enzymic fashion, i.e. such that turnover occurs. How is this problem to be solved? Induced conformational changes are an obvious approach, although the design of proper systems remains a challenge for which few suggestions beyond unlimited ingenuity can be given. Much of the solution to such problems will also lie in a much better understanding than we now have of non-covalent interactions and their stereochemistry. The assembly and disassembly of large molecular aggregates by the making and dissolution of non-covalent bonds is an art at which chemists are still relative
Quental, Carlos; Folgado, João; Ambrósio, Jorge; Monteiro, Jacinto
2015-01-01
The inverse dynamics technique applied to musculoskeletal models, and supported by optimisation techniques, is used extensively to estimate muscle and joint reaction forces. However, the solutions of the redundant muscle force sharing problem are sensitive to the detail and modelling assumptions of the models used. This study presents four alternative biomechanical models of the upper limb with different levels of discretisation of muscles by bundles and muscle paths, and their consequences on the estimation of the muscle and joint reaction forces. The muscle force sharing problem is solved for the motions of abduction and anterior flexion, acquired using video imaging, through the minimisation of an objective function describing muscle metabolic energy consumption. While looking for the optimal solution, not only the equations of motion are satisfied but also the stability of the glenohumeral and scapulothoracic joints is preserved. The results show that a lower level of muscle discretisation provides worse estimations regarding the muscle forces. Moreover, the poor discretisation of muscles relevant to the joint in analysis limits the applicability of the biomechanical model. In this study, the biomechanical model of the upper limb describing the infraspinatus by a single bundle could not solve the complete motion of anterior flexion. Despite the small differences in the magnitude of the forces predicted by the biomechanical models with more complex muscular systems, in general, there are no significant variations in the muscular activity of equivalent muscles. PMID:24156405
Can Models Capture the Complexity of the Systems Engineering Process?
NASA Astrophysics Data System (ADS)
Boppana, Krishna; Chow, Sam; de Weck, Olivier L.; Lafon, Christian; Lekkakos, Spyridon D.; Lyneis, James; Rinaldi, Matthew; Wang, Zhiyong; Wheeler, Paul; Zborovskiy, Marat; Wojcik, Leonard A.
Many large-scale, complex systems engineering (SE) programs have been problematic; a few examples are listed below (Bar-Yam, 2003 and Cullen, 2004), and many others have been late, well over budget, or have failed: the Hilton/Marriott/American Airlines system for hotel reservations and flights (1988-1992; $125 million; "scrapped")
Modeling Cognitive Strategies during Complex Task Performing Process
ERIC Educational Resources Information Center
Mazman, Sacide Guzin; Altun, Arif
2012-01-01
The purpose of this study is to examine individuals' computer-based complex task performing processes and strategies in order to determine the reasons for failure, using the cognitive task analysis method and cued retrospective think-aloud with eye movement data. The study group consisted of five senior students from Computer Education and Instructional Technologies…
Modelling Second Language Performance: Integrating Complexity, Accuracy, Fluency, and Lexis
ERIC Educational Resources Information Center
Skehan, Peter
2009-01-01
Complexity, accuracy, and fluency have proved useful measures of second language performance. The present article will re-examine these measures themselves, arguing that fluency needs to be rethought if it is to be measured effectively, and that the three general measures need to be supplemented by measures of lexical use. Building upon this…
Molecular Models of Ruthenium(II) Organometallic Complexes
ERIC Educational Resources Information Center
Coleman, William F.
2007-01-01
This article presents the featured molecules for the month of March, which appear in the paper by Ozerov, Fafard, and Hoffman, and which are related to the study of the reactions of a number of "piano stool" complexes of ruthenium(II). The synthesis of compound 2a offers students an alternative to the preparation of ferrocene if they are only…
Visualizing and modelling complex rockfall slopes using game-engine hosted models
NASA Astrophysics Data System (ADS)
Ondercin, Matthew; Hutchinson, D. Jean; Harrap, Rob
2015-04-01
Innovations in computing in the past few decades have resulted in entirely new ways to collect 3D geological data and visualize it. For example, new tools and techniques relying on high-performance computing capabilities have become widely available, allowing us to model rockfalls with more attention to the complexity of the rock slope geometry and rockfall path, with significantly higher quality base data, and with more analytical options. Model results are used to design mitigation solutions, considering the potential paths of the rockfall events and the energy they impart on impacted structures. Such models are currently implemented as general-purpose GIS tools and in specialized programs. These tools are used to inspect geometrical and geomechanical data, model rockfalls, and communicate results to researchers and the larger community. The research reported here explores the notion that 3D game engines provide a high-speed, widely accessible platform on which to build rockfall modelling workflows and to provide a new and accessible outreach method. Taking advantage of the in-built physics capability of the 3D game engines, and their ability to handle large terrains, these models are rapidly deployed and generate realistic visualizations of rockfall trajectories. Their utility in this area is as yet unproven, but preliminary research shows that they are capable of producing results that are comparable to existing approaches. Furthermore, modelling of case histories shows that the output matches the behaviour that is observed in the field. The key advantage of game-engine hosted models is their accessibility to the general public and to people with little to no knowledge of rockfall hazards. With much of the younger generation being very familiar with 3D environments such as Minecraft, the idea of a game-like simulation is intuitive and thus offers new ways to communicate to the general public. We present results from using the Unity game engine to develop 3D voxel worlds
NASA Astrophysics Data System (ADS)
Courtney, Owen T.; Bianconi, Ginestra
2016-06-01
Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three, or more linked nodes. Moreover, we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing, respectively, the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff of simplicial complexes that for simplicial complexes of dimension d =1 reduces to the structural cutoff of simple networks. Finally, we provide a numerical analysis of the natural correlations emerging in the configuration model of simplicial complexes without structural cutoff.
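The generalized degree characterized in this abstract can be computed with a short sketch, here taken as the number of d-dimensional faces a node belongs to (a common convention that may differ in detail from the paper's definition); the example complex is invented.

```python
from collections import Counter
from itertools import combinations

# Generalized degrees of a simplicial complex: for each node, count the
# d-dimensional faces ((d+1)-node subsets of the given facets) containing
# it. The example complex below is invented for illustration.

def generalized_degrees(simplices, d):
    faces = set()
    for s in simplices:
        for face in combinations(sorted(s), d + 1):
            faces.add(face)
    deg = Counter()
    for face in faces:
        for node in face:
            deg[node] += 1
    return dict(deg)

# Hypothetical complex: one triangle (2-simplex) plus a pendant edge.
facets = [(0, 1, 2), (2, 3)]
k1 = generalized_degrees(facets, 1)  # degrees with respect to links
k2 = generalized_degrees(facets, 2)  # degrees with respect to triangles
```

For d = 1 this reduces to the ordinary degree of a simple network, matching the abstract's remark that the structural cutoff for d = 1 recovers the simple-network case.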
First results from the International Urban Energy Balance Model Comparison: Model Complexity
NASA Astrophysics Data System (ADS)
Blackett, M.; Grimmond, S.; Best, M.
2009-04-01
A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets but, to date, no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us of ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run
Complex Modelling Scheme Of An Additive Manufacturing Centre
NASA Astrophysics Data System (ADS)
Popescu, Liliana Georgeta
2015-09-01
This paper presents a modelling scheme sustaining the development of an additive manufacturing research centre model and its processes. This modelling is performed using IDEF0, the resulting model process representing the basic processes required in developing such a centre in any university. While the activities presented in this study are those recommended in general, changes may occur in specific existing situations in a research centre.
NASA Astrophysics Data System (ADS)
de Boer, H. J.; Dekker, S. C.; Wassen, M. J.
2009-04-01
Earth System Models of Intermediate Complexity (EMICs) are popular tools for palaeoclimate simulations. Recent studies applied these models in comparison to terrestrial proxy records and aimed to reconstruct changes in seasonal climate forced by altered ocean circulation patterns. To strengthen this powerful methodology, we argue that the magnitude of the simulated atmospheric changes should be considered in relation to the internal variability of both the climate system and the intermediate complexity model. To attribute a shift in modelled climate to reality, this 'signal' should be detectable above the 'noise' related to the internal variability of the climate system and the internal variability of the model. Both noise and climate signals vary over the globe and change with the seasons. We therefore argue that spatially explicit fields of noise should be considered in relation to the strengths of the simulated signals at a seasonal timescale. We approximated total noise on terrestrial temperature and precipitation from a 29-member simulation with the EMIC PUMA-2 and global temperature and precipitation datasets. To illustrate this approach, we calculate Signal-to-Noise-Ratios (SNRs) in terrestrial temperature and precipitation on simulations of an El Niño warm event, a phase change in Atlantic Meridional Oscillation (AMO) and a Heinrich cooling event. The results of the El Niño and AMO simulations indicate that the chance to accurately detect a climate signal increases with increasing SNRs. Considering the regions and seasons with highest SNRs, the simulated El Niño anomalies show good agreement with observations (r² = 0.8 and 0.6 for temperature and precipitation at SNRs > 4). The AMO signals rarely surpass the noise levels and remain mostly undetected. The simulation of a Heinrich event predicts highest SNRs for temperature (up to 10) over Arabia and Russia during boreal winter and spring. Highest SNRs for precipitation (up to 12) are predicted over
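The signal-to-noise reasoning in this abstract can be sketched as follows: the forced ensemble-mean anomaly is the signal, and the spread of an unforced control ensemble is the noise. All numbers below are hypothetical, for illustration only.

```python
import statistics

def signal_to_noise(forced_anomalies, control_members):
    """SNR as ensemble-mean anomaly over the control ensemble's standard
    deviation -- a simplified reading of the approach in the abstract."""
    signal = statistics.mean(forced_anomalies)
    noise = statistics.stdev(control_members)
    return abs(signal) / noise

# Hypothetical seasonal temperature anomalies (K) from forced runs, and
# the same region's unforced variability in a control ensemble.
forced = [2.1, 1.8, 2.4, 2.0]
control = [0.3, -0.2, 0.1, -0.4, 0.2, -0.1]
snr = signal_to_noise(forced, control)
print(round(snr, 2))
```

By the abstract's rule of thumb, a value above ~4 would indicate a detectable signal in that region and season.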
NASA Technical Reports Server (NTRS)
Befrui, Bizhan A.
1995-01-01
This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.
Building of a conceptual model at the UE25-c hole complex
Karasaki, K.; Landsfeld, M.; Grossenbacher, K.
1990-10-01
This paper discusses construction of a conceptual model of the UE25-c hole complex. An interdisciplinary approach is discussed where all the available data are integrated. Site geology, borehole geophysics and hydraulic test results at UE25-c hole complex suggest that groundwater flow may be controlled by fractures and faults. A model is proposed.
NASA Astrophysics Data System (ADS)
Christensen, Claire Petra
Across diverse fields ranging from physics to biology, sociology, and economics, the technological advances of the past decade have engendered an unprecedented explosion of data on highly complex systems with thousands, if not millions of interacting components. These systems exist at many scales of size and complexity, and it is becoming ever-more apparent that they are, in fact, universal, arising in every field of study. Moreover, they share fundamental properties---chief among these, that the individual interactions of their constituent parts may be well-understood, but the characteristic behaviour produced by the confluence of these interactions---by these complex networks---is unpredictable; in a nutshell, the whole is more than the sum of its parts. There is, perhaps, no better illustration of this concept than the discoveries being made regarding complex networks in the biological sciences. In particular, though the sequencing of the human genome in 2003 was a remarkable feat, scientists understand that the "cellular-level blueprints" for the human being are cellular-level parts lists, but they say nothing (explicitly) about cellular-level processes. The challenge of modern molecular biology is to understand these processes in terms of the networks of parts---in terms of the interactions among proteins, enzymes, genes, and metabolites---as it is these processes that ultimately differentiate animate from inanimate, giving rise to life! It is the goal of systems biology---an umbrella field encapsulating everything from molecular biology to epidemiology in social systems---to understand processes in terms of fundamental networks of core biological parts, be they proteins or people. By virtue of the fact that there are literally countless complex systems, not to mention tools and techniques used to infer, simulate, analyze, and model these systems, it is impossible to give a truly comprehensive account of the history and study of complex systems. The author
Trials of the beta model for complex inheritance.
Collins, A; MacLean, C J; Morton, N E
1996-01-01
Theoretical advantages of nonparametric logarithm of odds to map polygenic diseases are supported by tests of the beta model that depends on a single logistic parameter and is the only model under which paternal and maternal transmissions to sibs of specified phenotypes are independent. Although it does not precisely describe recurrence risks in monozygous twins, the beta model has greater power to detect family resemblance or linkage than the more general delta model which describes the probability of 0, 1, or 2 alleles identical by descent (ibd) with two parameters. Available data on ibd in sibs are consistent with the beta model, but not with the equally parsimonious but less powerful gamma model that assumes a fixed probability of 1/2 for 1 allele ibd. Additivity of loci on the liability scale is not disproven. A simple equivalence extends the beta model to multipoint analysis. PMID:8799174
Modelling radiation fluxes in simple and complex environments—application of the RayMan model
NASA Astrophysics Data System (ADS)
Matzarakis, Andreas; Rutz, Frank; Mayer, Helmut
2007-03-01
The most important meteorological parameter affecting the human energy balance during sunny weather conditions is the mean radiant temperature Tmrt. It is defined as the uniform temperature of a surrounding blackbody surface that would result in the same energy gain of the human body as the prevailing radiation fluxes. This energy gain usually varies considerably in open space conditions. In this paper, the model 'RayMan', used for the calculation of short- and long-wave radiation fluxes on the human body, is presented. The model, which takes complex urban structures into account, is suitable for several applications in urban areas such as urban planning and street design. The final output of the model is, however, the calculated Tmrt, which is required in the human energy balance model, and thus also for the assessment of the urban bioclimate, with the use of thermal indices such as predicted mean vote (PMV), physiologically equivalent temperature (PET) and standard effective temperature (SET*). The model has been developed based on the German VDI-Guidelines 3789, Part II (environmental meteorology, interactions between atmosphere and surfaces; calculation of short- and long-wave radiation) and VDI-3787 (environmental meteorology, methods for the human-biometeorological evaluation of climate and air quality for urban and regional planning. Part I: climate). Validation shows that results of the RayMan model agree well with those obtained from experimental studies.
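The Tmrt definition above inverts the Stefan-Boltzmann law. A minimal sketch, assuming a typical human-body emissivity of 0.97 and taking the absorbed mean radiant flux density as given (computing that flux from the directional short- and long-wave components is the hard part that RayMan handles):

```python
SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
EMISSIVITY_P = 0.97  # emissivity of the human body (typical assumed value)

def mean_radiant_temperature(s_str):
    """Convert the mean radiant flux density absorbed by the body,
    s_str (W m^-2), into the mean radiant temperature Tmrt (deg C)
    via the blackbody inversion Tmrt = (s_str / (eps * sigma))^(1/4)."""
    t_kelvin = (s_str / (EMISSIVITY_P * SIGMA)) ** 0.25
    return t_kelvin - 273.15

# A hypothetical sunny-day flux density of 500 W m^-2:
print(round(mean_radiant_temperature(500.0), 1))
```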
NASA Astrophysics Data System (ADS)
White, J.; Hughes, J. D.; Doherty, J.
2011-12-01
The appropriate level of model complexity and parameterization has been the focus of much recent research. The use of a complex, physically-based model as the receptacle for expert knowledge allows the prior distribution of parameters to be fully expressed stochastically. A physically-based model also permits measurements of system properties to be included that constrain the prior parameter distribution. However, it may be difficult to quantify and maximize the likelihood term of Bayes' equation with a complex physically-based model, because history-matching is a computationally expensive exercise, requiring many model runs and (ideally) good model numerical performance. Alternatively, history-matching can be achieved relatively easily with a simple model if the parameterization spans the solution space required for a solution of the inverse problem. But the parameters employed by a simple model may be relatively abstract, and hence not as easy to characterize through expert knowledge and measurements of system properties. Furthermore, calibration of a simple model may cause predictive bias if the simple-model parameterization is not aligned with the complex model solution space. In this case, at least some complex model null-space components are estimated during calibration of the simple model, which results in a deviation from the minimum error variance solution. This simplification bias may amplify the predictive bias that arises from other aspects of the simple model's failure to represent environmental processes with integrity. To correct for this simplification bias, it is possible to develop a function that relates predictions made by the simple model back to the complex model. This mapping function is empirically derived through a process of generating realizations of the complex model properties, and conditioning the simple model to each of the realized complex model outputs. This synthetic conditioning should use the same outputs that are actually
Stability and complexity in model meta-ecosystems
Gravel, Dominique; Massol, François; Leibold, Mathew A.
2016-01-01
The diversity of life and its organization in networks of interacting species has been a long-standing theoretical puzzle for ecologists. Ever since May's provocative paper challenging whether ‘large complex systems [are] stable' various hypotheses have been proposed to explain when stability should be the rule, not the exception. Spatial dynamics may be stabilizing and thus explain high community diversity, yet existing theory on spatial stabilization is limited, preventing comparisons of the role of dispersal relative to species interactions. Here we incorporate dispersal of organisms and material into stability–complexity theory. We find that stability criteria from classic theory are relaxed in direct proportion to the number of ecologically distinct patches in the meta-ecosystem. Further, we find the stabilizing effect of dispersal is maximal at intermediate intensity. Our results highlight how biodiversity can be vulnerable to factors, such as landscape fragmentation and habitat loss, that isolate local communities. PMID:27555100
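May's classic criterion and the patch-number relaxation described above can be put schematically in code. The meta-ecosystem scaling below is only a literal reading of "relaxed in direct proportion to the number of patches", not the paper's exact expression:

```python
import math

def may_stable(S, C, sigma, d):
    """Classic May criterion: a random community of S species with
    connectance C, interaction-strength standard deviation sigma and
    self-regulation d is almost surely stable when sigma*sqrt(S*C) < d."""
    return sigma * math.sqrt(S * C) < d

def meta_stable(S, C, sigma, d, n_patches):
    """Schematic meta-ecosystem version: the stability bound relaxed in
    direct proportion to the number of ecologically distinct patches."""
    return sigma * math.sqrt(S * C) < d * n_patches

print(may_stable(100, 0.2, 0.3, 1.0))      # isolated community in the unstable regime
print(meta_stable(100, 0.2, 0.3, 1.0, 4))  # same community, four coupled patches
```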
NREL's System Advisor Model Simplifies Complex Energy Analysis (Fact Sheet)
Not Available
2011-10-01
The energy market is diversifying. In addition to traditional power sources, decision makers can choose among solar, wind, and geothermal technologies as well. Each of these technologies has complex performance characteristics and economics that vary with location and other project specifics, making it difficult to analyze the viability of such projects. But that analysis is easier now, thanks to the National Renewable Energy Laboratory (NREL).
The Creation of Surrogate Models for Fast Estimation of Complex Model Outcomes
Pruett, W. Andrew; Hester, Robert L.
2016-01-01
A surrogate model is a black box model that reproduces the output of another more complex model at a single time point. This is to be distinguished from the method of surrogate data, used in time series. The purpose of a surrogate is to reduce the time necessary for a computation at the cost of rigor and generality. We describe a method of constructing surrogates in the form of support vector machine (SVM) regressions for the purpose of exploring the parameter space of physiological models. Our focus is on the methodology of surrogate creation and accuracy assessment in comparison to the original model. This is done in the context of a simulation of hemorrhage in one model, “Small”, and renal denervation in another, HumMod. In both cases, the surrogate predicts the drop in mean arterial pressure following the intervention. We asked three questions concerning surrogate models: (1) how many training examples are necessary to obtain an accurate surrogate, (2) is surrogate accuracy homogeneous, and (3) how much can computation time be reduced when using a surrogate. We found the minimum training set size that would guarantee maximal accuracy was widely variable, but could be algorithmically generated. The average error for the pressure response to the protocols was -0.05 ± 2.47 mmHg in Small and -0.3 ± 3.94 mmHg in HumMod. In the Small model, error grew with actual pressure drop, and in HumMod, larger pressure drops were overestimated by the surrogates. Surrogate use resulted in a six-order-of-magnitude decrease in computation time. These results suggest surrogate modeling is a valuable tool for generating predictions of an integrative model’s behavior on densely sampled subsets of its parameter space. PMID:27258010
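The surrogate idea is independent of the regression engine. A minimal sketch, substituting an ordinary least-squares line for the paper's SVM regression, with the "expensive" model and all numbers invented for illustration:

```python
def slow_model(x):
    """Stand-in for an expensive physiological model: predicts a pressure
    drop (mmHg) from one parameter. Purely illustrative -- in practice
    each evaluation might take minutes."""
    return -5.0 - 12.0 * x + 0.5 * x * x

def fit_linear_surrogate(xs, ys):
    """Least-squares line fit standing in for the paper's SVM regression
    (a simplifying assumption made here for brevity)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return lambda x: a + b * x

# Train on a few expensive runs, then predict cheaply at a new point.
train_x = [0.0, 0.5, 1.0, 1.5, 2.0]
train_y = [slow_model(x) for x in train_x]
surrogate = fit_linear_surrogate(train_x, train_y)
err = surrogate(1.2) - slow_model(1.2)
print(round(err, 3))
```

The residual `err` quantifies the rigor given up for speed, mirroring the accuracy assessment the paper performs against the full models.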
Design of Low Complexity Model Reference Adaptive Controllers
NASA Technical Reports Server (NTRS)
Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan
2012-01-01
Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, which only makes these testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
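A minimal scalar example of the "simple augmentation" idea is a textbook model reference adaptive controller with gradient update laws. The plant, reference model, and gains below are invented for illustration and are not those of the flight experiment:

```python
# Scalar MRAC sketch (textbook form, illustrative values only):
#   plant      x'  = a*x + b*u       (a, b treated as unknown, b > 0)
#   reference  xm' = am*xm + bm*r    (encodes desired handling qualities)
#   control    u   = kx*x + kr*r, with gradient adaptation of kx, kr.
a, b = 1.0, 3.0        # "unknown" plant parameters (open-loop unstable)
am, bm = -4.0, 4.0     # stable reference model
gamma = 2.0            # adaptation rate (a tuning assumption)
dt, steps = 0.001, 50000
x = xm = kx = kr = 0.0
r = 1.0                # constant step command
for _ in range(steps):
    u = kx * x + kr * r
    e = x - xm                      # tracking error drives adaptation
    x += dt * (a * x + b * u)       # Euler step of the plant
    xm += dt * (am * xm + bm * r)   # Euler step of the reference model
    kx += dt * (-gamma * e * x)     # sign(b) = +1 absorbed into gamma
    kr += dt * (-gamma * e * r)
print(round(abs(x - xm), 4))
```

The tracking error decays even though `a` and `b` are never identified, which is the property that makes such simple augmentations attractive for certification.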
Cheung, Carol C; Torlakovic, Emina E; Chow, Hung; Snover, Dale C; Asa, Sylvia L
2015-03-01
Pathologists provide diagnoses relevant to the disease state of the patient and identify specific tissue characteristics relevant to response to therapy and prognosis. As personalized medicine evolves, there is a trend for increased demand of tissue-derived parameters. Pathologists perform increasingly complex analyses on the same 'cases'. Traditional methods of workload assessment and reimbursement, based on number of cases sometimes with a modifier (eg, the relative value unit (RVU) system used in the United States), often grossly underestimate the amount of work needed for complex cases and may overvalue simple, small biopsy cases. We describe a new approach to pathologist workload measurement that aligns with this new practice paradigm. Our multisite institution with geographically diverse partner institutions has developed the Automatable Activity-Based Approach to Complexity Unit Scoring (AABACUS) model that captures pathologists' clinical activities from parameters documented in departmental laboratory information systems (LISs). The model's algorithm includes: 'capture', 'export', 'identify', 'count', 'score', 'attribute', 'filter', and 'assess filtered results'. Captured data include specimen acquisition, handling, analysis, and reporting activities. Activities were counted and complexity units (CUs) generated using a complexity factor for each activity. CUs were compared between institutions, practice groups, and practice types and evaluated over a 5-year period (2008-2012). The annual load of a clinical service pathologist, irrespective of subspecialty, was ∼40,000 CUs using relative benchmarking. The model detected changing practice patterns and was appropriate for monitoring clinical workload for anatomical pathology, neuropathology, and hematopathology in academic and community settings, and encompassing subspecialty and generalist practices. AABACUS is objective, can be integrated with an LIS and automated, is reproducible, backwards compatible
A multi-element cosmological model with a complex space-time topology
NASA Astrophysics Data System (ADS)
Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.
2015-02-01
Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.
A radio-frequency sheath model for complex waveforms
Turner, M. M.; Chabert, P.
2014-04-21
Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.
A Complex Network Approach to Distributional Semantic Models
Utsumi, Akira
2015-01-01
A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models. PMID:26295940
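The mixture of preferential and random attachment that the proposed model relies on can be sketched as a growing-network simulation. Parameters and implementation details below are illustrative; the Steyvers-Tenenbaum model and the paper's variant differ in specifics:

```python
import random

random.seed(1)

def grow_network(n, m, p_random=0.2):
    """Grow an n-node network: each new node links to m existing nodes,
    chosen uniformly at random with probability p_random and otherwise
    preferentially by degree (via a stub list, one entry per degree unit).
    Returns the degree sequence."""
    degree = [m] * (m + 1)          # start from a small complete core
    stubs = []
    for node, d in enumerate(degree):
        stubs.extend([node] * d)
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            if random.random() < p_random:
                targets.add(random.randrange(new))   # random attachment
            else:
                targets.add(random.choice(stubs))    # preferential attachment
        degree.append(m)
        for t in targets:
            degree[t] += 1
            stubs.extend([t, new])
    return degree

deg = grow_network(2000, 2)
print(len(deg), min(deg), max(deg))
```

The heavy tail shows up as a maximum degree far above the minimum of `m`, the qualitative signature of the (truncated) power law discussed in the abstract.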
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional-cantilever and joined wing configurations.
Modelling complex terrain effects for wind farm layout optimization
NASA Astrophysics Data System (ADS)
Schmidt, Jonas; Stoevesandt, Bernhard
2014-06-01
The flow over four analytical hill geometries was calculated by CFD RANS simulations. For each hill, the results were converted into numerical models that transform arbitrary undisturbed inflow profiles by rescaling the effect of the obstacle. The predictions of such models are compared to full CFD results, first for atmospheric boundary layer flow, and then for a single turbine wake in the presence of an isolated hill. The implementation of the models into the wind farm modelling software flapFOAM is reported, advancing their inclusion into a fully modular wind farm layout optimization routine.
NASA Astrophysics Data System (ADS)
Huang, X.; Bandilla, K.; Celia, M. A.; Bachu, S.
2013-12-01
Geological carbon sequestration can significantly contribute to climate-change mitigation only if it is deployed at a very large scale. This means that injection scenarios must occur, and be analyzed, at the basin scale. Various mathematical models of different complexity may be used to assess the fate of injected CO2 and/or resident brine. These models span the range from multi-dimensional, multi-phase numerical simulators to simple single-phase analytical solutions. In this study, we consider a range of models, all based on vertically-integrated governing equations, to predict the basin-scale pressure response to specific injection scenarios. The Canadian section of the Basal Aquifer is used as a test site to compare the different modeling approaches. The model domain covers an area of approximately 811,000 km², and the total injection rate is 63 Mt/yr, corresponding to 9 locations where large point sources have been identified. Predicted areas of critical pressure exceedance are used as a comparison metric among the different modeling approaches. Comparison of the results shows that single-phase numerical models may be good enough to predict the pressure response over a large aquifer; however, a simple superposition of semi-analytical or analytical solutions is not sufficiently accurate because spatial variability of formation properties plays an important role in the problem, and these variations are not captured properly with simple superposition. We consider two different injection scenarios: injection at the source locations and injection at locations with more suitable aquifer properties. Results indicate that in formations with significant spatial variability of properties, strong variations in injectivity among the different source locations can be expected, leading to the need to transport the captured CO2 to suitable injection locations, thereby necessitating development of a pipeline network. We also consider the sensitivity of porosity and
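The "simple superposition of analytical solutions" that the study tests can be illustrated with the Cooper-Jacob approximation for single-phase flow. Uniform properties are assumed per well and all values are hypothetical, which is exactly the simplification the abstract finds inadequate under spatial variability:

```python
import math

def cooper_jacob_drawdown(Q, T, S, r, t):
    """Cooper-Jacob (late-time Theis) pressure change in metres at radius
    r (m) and time t (s) for rate Q (m^3/s), transmissivity T (m^2/s) and
    storativity S; valid when r*r*S/(4*T*t) is small."""
    return Q / (4.0 * math.pi * T) * math.log(2.25 * T * t / (r * r * S))

def superpose(wells, point, t):
    """Linear superposition over point sources -- the simple approach that
    breaks down once formation properties vary spatially."""
    x, y = point
    return sum(cooper_jacob_drawdown(Q, T, S, math.hypot(x - wx, y - wy), t)
               for (wx, wy, Q, T, S) in wells)

# Two hypothetical injectors, observation point midway, after one year.
wells = [(0.0, 0.0, 0.01, 1e-3, 1e-4),
         (100.0, 0.0, 0.01, 1e-3, 1e-4)]
total = superpose(wells, (50.0, 0.0), t=365 * 86400)
print(round(total, 2))
```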
Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.
ERIC Educational Resources Information Center
Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz
An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…
Modeling Developmental Complexity in Adolescence: Hormones and Behavior in Context.
ERIC Educational Resources Information Center
Susman, Elizabeth J.
1997-01-01
The links between endocrine physiological processes and adolescent psychological processes are the focus of this article. Presents a brief history of biopsychosocial research in adolescent development. Discusses four models for conceptualizing hormone-behavior research as illustrative of biopsychosocial models. Concludes with challenges and…
Is there hope for multi-site complexation modeling?
Bickmore, Barry R.; Rosso, Kevin M.; Mitchell, S. C.
2006-06-06
It has been shown here that the standard formulation of the MUSIC model does not deliver the molecular-scale insight into oxide surface reactions that it promises. The model does not properly divide long-range electrostatic and short-range contributions to acid-base reaction energies, and it does not treat solvation in a physically realistic manner. However, even if the current MUSIC model does not succeed in its ambitions, its ambitions are still reasonable. It was a pioneering attempt in that Hiemstra and coworkers recognized that intrinsic equilibrium constants, where the effects of long-range electrostatic effects have been removed, must be theoretically constrained prior to model fitting if there is to be any hope of obtaining molecular-scale insights from SCMs. We have also shown, on the other hand, that it may be premature to dismiss all valence-based models of acidity. Not only can some such models accurately predict intrinsic acidity constants, but they can also now be linked to the results of molecular dynamics simulations of solvated systems. Significant challenges remain for those interested in creating SCMs that are accurate at the molecular scale. It will only be after all model parameters can be predicted from theory, and the models validated against titration data that we will be able to begin to have some confidence that we really are adequately describing the chemical systems in question.
NASA Astrophysics Data System (ADS)
Brodsky, Yu. I.
2015-01-01
The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. Application of Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems: model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the model-component. A model-component is endowed with a more complicated structure than, for example, an object in object-oriented analysis. This structure gives the model-component independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This fact allows one firstly to construct fractal models of any complexity, and secondly to implement the computational process of such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.
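One schematic reading of the model-component idea in code: a uniform "respond to standard requests" interface that composes fractally, so a complex of components is itself a component. Class and method names are invented for illustration and are not from the paper:

```python
class ModelComponent:
    """A unit with a uniform interface: leaves carry their own behavior,
    composites delegate uniformly to children. Because a composite exposes
    the same interface as a leaf, compositions nest to any depth."""

    def __init__(self, name, step_fn=None, children=()):
        self.name = name
        self.step_fn = step_fn        # this component's own behavior, if any
        self.children = list(children)

    def respond(self, request, state):
        """Standard response to a standard request: apply own behavior,
        then delegate to children with the same uniform call."""
        if self.step_fn is not None:
            state = self.step_fn(state)
        for child in self.children:
            state = child.respond(request, state)
        return state

double = ModelComponent("double", lambda s: s * 2)
inc = ModelComponent("inc", lambda s: s + 1)
pipeline = ModelComponent("pipeline", children=[double, inc])
print(pipeline.respond("step", 3))
```

The single `respond` method is the "universal program" of the scheme: it runs any construction, however deeply nested, without component-specific driver code.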
The effects of numerical-model complexity and observation type on estimated porosity values
NASA Astrophysics Data System (ADS)
Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-09-01
The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), is coupled with an advective transport simulation, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a "complex" highly parameterized porosity field and a "simple" parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also are discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
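The calibration loop described above can be caricatured with a one-parameter advective travel-time model: adjust effective porosity until the simulated age matches a tracer-derived apparent age. All numbers are hypothetical:

```python
def simulated_age(porosity, path_length, darcy_flux):
    """Advective travel time (years) along a flow path: seepage velocity
    is q / n_e, so age = L * n_e / q. A one-parameter caricature of the
    porosity calibration described in the abstract."""
    return path_length * porosity / darcy_flux

def calibrate_porosity(observed_age, path_length, darcy_flux,
                       lo=0.01, hi=0.5, tol=1e-6):
    """Bisection on effective porosity until the simulated age matches
    the (e.g. tritium/helium) apparent age; valid because the simulated
    age increases monotonically with porosity."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if simulated_age(mid, path_length, darcy_flux) < observed_age:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical: 1000 m flow path, Darcy flux 10 m/yr, apparent age 25 yr.
n_e = calibrate_porosity(25.0, 1000.0, 10.0)
print(round(n_e, 3))
```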
Modelling the Complex Conductivity of Charged Porous Media using The Grain Polarization Model
NASA Astrophysics Data System (ADS)
Leroy, P.; Revil, A.; Jougnot, D.; Li, S.
2015-12-01
The low-frequency complex conductivity response of charged porous media reflects a combination of three polarization processes occurring over different frequency ranges. The first is the membrane polarization phenomenon, the mechanism associated with the back-diffusion of salt ions through different pore spaces of the porous material (ion-selective zones and zones with no selectivity). This process generally occurs at the lowest frequencies, typically in the range [mHz, Hz], because it involves polarization over different pore spaces (the relaxation frequency is inversely proportional to the length scale of the polarization process). The second is the electrochemical polarization of the electrical double layer coating the surface of the grains. In the grain polarization model, the diffuse layer is assumed not to polarize because it is assumed to form a continuum in the porous medium; the compact Stern layer is assumed to polarize because it is assumed to be discontinuous from grain to grain. The electrochemical polarization of the Stern layer typically occurs in the frequency range [Hz, kHz]. The third is the Maxwell-Wagner polarization mechanism, caused by the formation of field-induced free charge distributions near the interfaces between the phases of the medium. In this presentation, the grain polarization model based on the O'Konski, Schwarz, Schurr and Sen theories, and developed later by Revil and co-workers, is described. This spectral induced polarization model has been successfully applied to describe the complex conductivity responses of glass beads, sands, clays, clay-sand mixtures and other minerals. The limits of this model and future developments will also be presented.
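As a loose numerical illustration of how several polarization mechanisms with well-separated relaxation times superpose in a complex conductivity spectrum, here is a generic Debye-superposition sketch. This is a textbook-style simplification, not the grain polarization model itself, and all parameter values (chargeabilities, relaxation times) are invented:

```python
import numpy as np

def complex_conductivity(freq_hz, sigma0, chargeabilities, taus):
    """Superposition of Debye relaxations: one dispersive term per mechanism
    (membrane, Stern layer, Maxwell-Wagner), each peaking near f = 1/(2*pi*tau)."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    sigma = np.full(omega.shape, sigma0, dtype=complex)
    for m, tau in zip(chargeabilities, taus):
        # each mechanism adds a dispersion of relative amplitude m
        sigma += sigma0 * m * (1j * omega * tau) / (1 + 1j * omega * tau)
    return sigma

# invented values: membrane (~0.1 Hz), Stern layer (~100 Hz),
# Maxwell-Wagner (~100 kHz)
freqs = np.logspace(-3, 6, 10)
sig = complex_conductivity(freqs, sigma0=0.01,
                           chargeabilities=[0.05, 0.08, 0.10],
                           taus=[1.6, 1.6e-3, 1.6e-6])
```

The real part rises from the DC value toward the high-frequency limit as each mechanism "switches on" above its relaxation frequency, while the imaginary part shows one quadrature peak per mechanism.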
The theoretical bases and computational techniques are presented for U.S. and Russian complex terrain diffusion models developed for engineering applications. While the U.S. model is based on the modified Gaussian diffusion model, the Russian model is based on the analytical appro...
Computer modeling of properties of complex molecular systems
Kulkova, E.Yu.; Khrenova, M.G.; Polyakov, I.V.
2015-03-10
Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that treat part of the system with quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting structures and spectra of such complex molecular aggregates.
Non-consensus Opinion Models on Complex Networks
NASA Astrophysics Data System (ADS)
Li, Qian; Braunstein, Lidia A.; Wang, Huijuan; Shao, Jia; Stanley, H. Eugene; Havlin, Shlomo
2013-04-01
Social dynamic opinion models have been widely studied to understand how interactions among individuals cause opinions to evolve. Most opinion models that utilize spin interaction models usually produce a consensus steady state in which only one opinion exists. Because in reality different opinions usually coexist, we focus on non-consensus opinion models in which, above a certain threshold, two opinions coexist in a stable relationship. We revisit and extend the non-consensus opinion (NCO) model introduced by Shao et al. (Phys. Rev. Lett. 103:01870, 2009). The NCO model in random networks displays a second-order phase transition that belongs to regular mean-field percolation and is characterized by the appearance (above a certain threshold) of a large spanning cluster of the minority opinion. We generalize the NCO model by adding a weight factor W to each individual's original opinion when determining their future opinion (NCOW model). We find that as W increases the minority opinion holders tend to form stable clusters with a smaller initial minority fraction than in the NCO model. We also revisit another non-consensus opinion model based on the NCO model, the inflexible contrarian opinion (ICO) model (Li et al. in Phys. Rev. E 84:066101, 2011), which introduces inflexible contrarians to model the competition between two opinions in a steady state. Inflexible contrarians are individuals who never change their original opinion but may influence the opinions of others. To place the inflexible contrarians in the ICO model we use two different strategies: random placement and one in which high-degree nodes are targeted. The inflexible contrarians effectively decrease the size of the largest rival-opinion cluster in both strategies, but the effect is more pronounced under the targeted method. All of the above models have previously been explored in terms of a single network, but human communities are usually interconnected, not isolated. Because opinions propagate not
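The NCO-style update rule can be sketched in a few lines: each node follows the majority opinion of its neighborhood including itself, with its own opinion carrying weight W (W = 1 recovering the plain NCO model). This is our simplified reading, not the authors' code; graph size, edge probability and the initial minority fraction below are invented:

```python
import random

def nco_step(opinions, neighbors, w=1.0):
    """One synchronous NCO-W update: follow the neighborhood majority, with
    the node's own current opinion weighted by w; ties keep the opinion."""
    new = {}
    for node, op in opinions.items():
        s = w * op + sum(opinions[nb] for nb in neighbors[node])
        new[node] = op if s == 0 else (1 if s > 0 else -1)
    return new

def run_dynamics(opinions, neighbors, w=1.0, max_iter=1000):
    for _ in range(max_iter):
        new = nco_step(opinions, neighbors, w)
        if new == opinions:          # stable state reached
            return new
        opinions = new
    return opinions

# toy Erdos-Renyi-like graph with a 30% initial minority (invented numbers)
random.seed(0)
n = 100
neighbors = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if random.random() < 0.06:
            neighbors[i].add(j)
            neighbors[j].add(i)
opinions = {i: (1 if random.random() < 0.7 else -1) for i in range(n)}
final = run_dynamics(opinions, neighbors)
minority_fraction = sum(1 for v in final.values() if v == -1) / n
```

Whether a finite minority fraction survives in the stable state is exactly the percolation-type question the abstract studies.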
Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.
2015-01-01
Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
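The core of such an IO model is a discrete Volterra series: the output at time n is a polynomial functional of the recent input history, with the kernels fitted to the mechanistic model. The sketch below (truncated at second order, with invented kernels; the paper goes to third order) shows how the series is evaluated:

```python
import numpy as np

def volterra_response(x, k0, k1, k2):
    """Discrete Volterra series truncated at second order:
    y[n] = k0 + sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]."""
    M = len(k1)
    y = np.zeros(len(x))
    xp = np.concatenate([np.zeros(M - 1), x])  # zero-padded past
    for n in range(len(x)):
        window = xp[n:n + M][::-1]             # x[n], x[n-1], ..., x[n-M+1]
        y[n] = k0 + k1 @ window + window @ k2 @ window
    return y

# hypothetical kernels: a decaying first-order kernel plus a small
# second-order kernel mimicking paired-pulse interaction
M = 5
k1 = np.exp(-np.arange(M) / 2.0)
k2 = 0.05 * np.outer(k1, k1)
spikes = np.zeros(50)
spikes[[5, 7, 20]] = 1.0                        # input spike train
y = volterra_response(spikes, 0.0, k1, k2)
```

The second-order kernel is what lets the model respond nonlinearly to spike pairs, which a purely linear (first-order) filter cannot do.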
NASA Astrophysics Data System (ADS)
Eby, M.; Weaver, A. J.; Alexander, K.; Zickfeld, K.; Abe-Ouchi, A.; Cimatoribus, A. A.; Crespin, E.; Drijfhout, S. S.; Edwards, N. R.; Eliseev, A. V.; Feulner, G.; Fichefet, T.; Forest, C. E.; Goosse, H.; Holden, P. B.; Joos, F.; Kawamiya, M.; Kicklighter, D.; Kienert, H.; Matsumoto, K.; Mokhov, I. I.; Monier, E.; Olsen, S. M.; Pedersen, J. O. P.; Perrette, M.; Philippon-Berthier, G.; Ridgwell, A.; Schlosser, A.; Schneider von Deimling, T.; Shaffer, G.; Smith, R. S.; Spahni, R.; Sokolov, A. P.; Steinacher, M.; Tachiiri, K.; Tokos, K.; Yoshimori, M.; Zeng, N.; Zhao, F.
2013-05-01
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate-carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate-carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the
NASA Astrophysics Data System (ADS)
Chen, F.; Barlage, M. J.; Tewari, M.; Rasmussen, R.; Bao, Y.; Jin, J.; Lettenmaier, D. P.; Livneh, B.; Lin, C.; Miguez-Macho, G.; Niu, G.; Wen, L.; Yang, Z.
2011-12-01
The timing and amount of spring snowmelt runoff in mountainous regions are critical for water resources and managements. Correctly capturing the snow-atmospheric interactions (through albedo and surface energy partitioning) is also important for weather and climate models. This study developed a unique, integrated data set including one-year (2007-2008) snow water equivalent (SWE) observations from 112 SNOTEL sites in the Colorado Headwaters region, 2004-2008 observations (surface heat fluxes, radiation budgets, soil temperature and moisture) from two AmeriFlux sites (Niwot Ridge and GLEES), MODIS snow cover, and river discharge. These observations were used to evaluate the ability of six widely-used land-surface/snow models (Noah, Noah-MP, VIC, CLM, SAST, and LEAF-2) in simulating the seasonal evolution of snowpacks in central Rockies. The overarching goals of this community undertaking are to: 1) understand key processes controlling the evolution of snowpack in this complex terrain and forested region through analyzing field data and various components of snow physics in these models, and 2) improve snowpack modeling in weather and climate models. This comprehensive data set allowed us to address issues that had not been possible in previous snow-model inter-comparison investigations (e.g., SnowMIPs). For instance, models displayed a large disparity in treating radiation and turbulence processes within vegetation canopies. Some models with an overly simplified tree-canopy treatment need to raise snow albedo helped to retain snow on the ground during melting phase. However, comparing modeled radiation and heat fluxes to long-term observations revealed that too-high albedo reduced 75% of solar energy absorbed by the forested surface and resulted in too-low surface sensible heat and longwave radiation returned to the atmosphere, which could be a crucial deficiency for coupled weather and climate models. Large differences were found in simulated SWE by the six LSMs
Complex zeros of the 2d Ising model on dynamical random lattices
NASA Astrophysics Data System (ADS)
Ambjørn, J.; Anagnostopoulos, K. N.; Magnea, U.
1998-04-01
We study the zeros in the complex plane of the partition function for the Ising model coupled to 2d quantum gravity for complex magnetic field and for complex temperature. We compute the zeros by using the exact solution coming from a two-matrix model and by Monte Carlo simulations of Ising spins on dynamical triangulations. We present evidence that the zeros form simple one-dimensional patterns in the complex plane, and that the critical behaviour of the system is governed by the scaling of the distribution of singularities near the critical point.
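For readers unfamiliar with partition function zeros, the mechanics can be shown on a much simpler system than the gravity-coupled model of the abstract: a small 1D Ising ring in a complex magnetic field. Writing Z as a polynomial in the fugacity z = exp(-2*beta*h) and finding its complex roots reproduces the Lee-Yang picture (zeros on the unit circle for ferromagnetic coupling). This is a pedagogical sketch, not the two-matrix-model computation:

```python
import numpy as np
from itertools import product

def fugacity_polynomial(N, beta_J):
    """Coefficients c_k of Z (up to a field-dependent prefactor) as a
    polynomial in the fugacity z = exp(-2*beta*h), for a 1D Ising ring of
    N spins; c_k sums Boltzmann bond weights of states with k down spins."""
    coeffs = np.zeros(N + 1)
    for spins in product([1, -1], repeat=N):
        e_bond = sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        k = spins.count(-1)                    # number of down spins
        coeffs[k] += np.exp(beta_J * e_bond)
    return coeffs

# Lee-Yang theorem: for ferromagnetic coupling the zeros of Z in the
# complex fugacity plane all lie on the unit circle
coeffs = fugacity_polynomial(N=8, beta_J=0.5)
zeros = np.roots(coeffs[::-1])   # np.roots expects highest degree first
radii = np.abs(zeros)
```

As the abstract describes for the gravity-coupled case, the critical behaviour is read off from how these zeros pinch the positive real axis as the system grows.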
Mathematical model and software complex for computer simulation of field emission electron sources
Nikiforov, Konstantin
2015-03-10
The software complex, developed in MATLAB, allows modelling of the operation of diode and triode structures based on field emission electron sources with complex sub-micron geometry, calculation of their volt-ampere characteristics, and calculation of the electric field distribution, for educational and research needs. The goal of this paper is to describe the physical-mathematical model, calculation methods and algorithms the software complex is based on, to demonstrate the principles of its operation, and to show results of its work. To introduce the complex, a demo version with a graphical user interface is presented.
Calibration of two complex ecosystem models with different likelihood functions
NASA Astrophysics Data System (ADS)
Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán
2014-05-01
The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive for the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can result if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research a developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
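The basic ingredient of such Bayesian calibration, a likelihood function driving a sampler over the unknown parameters, can be sketched for a one-parameter toy "model". This is a generic single-chain Metropolis sketch, not the authors' multi-objective two-step procedure; the toy model, noise level and tuning values are invented:

```python
import numpy as np

def gaussian_loglik(sim, obs, sigma):
    """Classic Gaussian log-likelihood of the model-observation misfit."""
    r = (sim - obs) / sigma
    return -0.5 * np.sum(r**2) - len(obs) * np.log(sigma * np.sqrt(2 * np.pi))

def metropolis_calibrate(model, obs, sigma, p0, step, n_iter=2000, seed=0):
    """Random-walk Metropolis over one unknown model parameter."""
    rng = np.random.default_rng(seed)
    p, ll = p0, gaussian_loglik(model(p0), obs, sigma)
    samples = []
    for _ in range(n_iter):
        q = p + rng.normal(0, step)
        llq = gaussian_loglik(model(q), obs, sigma)
        if np.log(rng.random()) < llq - ll:   # accept with prob min(1, ratio)
            p, ll = q, llq
        samples.append(p)
    return np.array(samples)

# toy "ecosystem model": a flux linear in one unknown rate parameter
t = np.linspace(0, 1, 20)
true_rate = 2.0
obs = true_rate * t + np.random.default_rng(1).normal(0, 0.05, t.size)
samples = metropolis_calibrate(lambda r: r * t, obs, 0.05, p0=0.5, step=0.1)
posterior_mean = samples[1000:].mean()        # discard burn-in
```

Swapping in a different `gaussian_loglik` formulation (e.g. heavier-tailed residuals) is exactly the kind of likelihood-function comparison the abstract describes.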
Railway faults spreading model based on dynamics of complex network
NASA Astrophysics Data System (ADS)
Zhou, Jin; Xu, Weixiang; Guo, Xin; Ma, Xin
2015-12-01
In this paper, we propose a railway faults spreading model which improves the SIR model and makes it suitable for analyzing the dynamic process of fault spreading. To apply our model to a real network, the accident causation network of the "7.23" China Yongwen high-speed railway accident is employed. This network is extended into a directed network, which more clearly reflects the causation relationships among the accident factors and supports our studies. Simulation results quantitatively show that the influence of failures can be diminished by choosing appropriate initial recovery factors, reducing the time to failure detection, decreasing the transmission rate of faults, and increasing the propagation rate of corrected information. The model is useful for simulating railway fault spreading and quantitatively analyzing the influence of failures.
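A minimal version of SIR-style fault spreading on a directed causation network can be written directly from the abstract's description. This is our own sketch, not the paper's model; the tiny causation chain, spreading rate `beta` and correction rate `gamma` are invented placeholders:

```python
import random

def spread_faults(out_edges, seeds, beta=0.4, gamma=0.2, steps=50, seed=0):
    """Discrete-time SIR-style fault propagation on a directed network:
    a faulty (I) factor triggers each downstream factor with probability
    beta per step, and is corrected (R) with probability gamma."""
    rng = random.Random(seed)
    state = {n: 'S' for n in out_edges}
    for s in seeds:
        state[s] = 'I'
    for _ in range(steps):
        new_state = dict(state)
        for node, st in state.items():
            if st == 'I':
                for nb in out_edges[node]:           # directed spreading
                    if state[nb] == 'S' and rng.random() < beta:
                        new_state[nb] = 'I'
                if rng.random() < gamma:             # fault corrected
                    new_state[node] = 'R'
        state = new_state
    return state

# hypothetical causation chain (invented, not the "7.23" network)
out_edges = {'lightning': ['signal_failure'],
             'signal_failure': ['dispatch_error', 'track_circuit'],
             'dispatch_error': ['collision'],
             'track_circuit': ['collision'],
             'collision': []}
final = spread_faults(out_edges, seeds=['lightning'])
affected = sum(1 for s in final.values() if s != 'S')
```

Lowering `beta` (fault transmission) or raising `gamma` (correction) shrinks the affected set, which is the quantitative effect the abstract reports.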
LISP based simulation generators for modeling complex space processes
NASA Technical Reports Server (NTRS)
Tseng, Fan T.; Schroer, Bernard J.; Dwan, Wen-Shing
1987-01-01
The development of a simulation assistant for modeling discrete event processes is presented. Included are an overview of the system, a description of the simulation generators, and a sample process generated using the simulation assistant.
NASA Astrophysics Data System (ADS)
Rohmer, J.; Foerster, E.
2012-04-01
Large-scale landslide prediction is typically based on numerical modeling, with computer codes generally involving a large number of input parameters. Addressing the influence of each of them on the final result, and providing a ranking procedure, may be useful for risk management purposes, especially to guide future laboratory or in situ characterizations and studies, but also to simplify the model by fixing the input parameters that have negligible influence. Variance-based global sensitivity analysis relying on the Sobol' indices can provide such valuable information. It presents the advantages of exploring the sensitivity to input parameters over their whole range of variation (i.e. in a global manner), of fully accounting for possible interactions between them, and of being applicable without introducing a priori assumptions on the mathematical formulation of the landslide model. Nevertheless, such analyses require a large number of computer code simulations (typically a thousand), which appears impracticable for computationally demanding simulations, with computation times ranging from several hours to several days. To overcome this difficulty, we propose a "meta-model"-based strategy consisting in replacing the complex simulator by a "costless-to-evaluate" statistical approximation (i.e. emulator) provided by a Gaussian-process (GP) model. This allows computation of sensitivity measures from a limited number of simulations. The meta-modelling strategy is demonstrated on two cases. The first application is a simple analytical model based on the infinite slope analysis, which allows comparison of the sensitivity measures computed using the "true" model with those computed using the GP meta-model. The second application aims at ranking, in terms of importance, the properties of the elasto-plastic model describing the complex behaviour of the slip surface in the "La Frasse" landslide (Switzerland). This case is more challenging, as a single simulation requires at least 4
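The strategy can be condensed to two steps: fit a GP emulator on a limited design of simulator runs, then compute Sobol' indices by Monte Carlo on the emulator. The sketch below uses a bare-bones zero-mean RBF GP and a standard pick-and-freeze estimator; the analytic three-input "model", lengthscale and sample sizes are invented, and no GP hyperparameter tuning is done:

```python
import numpy as np

def gp_fit(X, y, length=0.3, noise=1e-6):
    """Zero-mean Gaussian-process emulator with a fixed RBF kernel."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-0.5 * d2 / length ** 2) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    def predict(Xs):
        d2s = ((Xs[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2s / length ** 2) @ alpha
    return predict

def sobol_first_order(predict, dim, n=4096, seed=0):
    """First-order Sobol' indices via a pick-and-freeze Monte Carlo
    estimator, run entirely on the costless emulator."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, dim)), rng.random((n, dim))
    yA, yB = predict(A), predict(B)
    var = np.concatenate([yA, yB]).var()
    S = []
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # freeze input i from the B sample
        S.append(np.mean(yB * (predict(ABi) - yA)) / var)
    return np.array(S)

# analytic stand-in for the simulator: output depends mostly on input 0,
# weakly on input 1, and not at all on input 2
f = lambda X: X[:, 0] + 0.2 * X[:, 1]
rng = np.random.default_rng(1)
Xtrain = rng.random((200, 3))            # the "expensive" simulator runs
emulator = gp_fit(Xtrain, f(Xtrain))
S = sobol_first_order(emulator, dim=3)   # thousands of cheap evaluations
```

Only the 200 training runs would hit the expensive simulator; the thousands of evaluations needed by the Sobol' estimator all hit the emulator.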
Ensemble Learning of QTL Models Improves Prediction of Complex Traits
Bian, Yang; Holland, James B.
2015-01-01
Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more markers-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383
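The thinning-and-aggregating idea can be sketched on simulated marker data: thin the dense map at several offsets, do a crude per-map marker selection (here top-k by correlation, a stand-in for proper QTL mapping), fit a linear model per map, and average the predictions. This is our simplified reading, not the paper's pipeline; data dimensions, effect sizes and marker positions are invented:

```python
import numpy as np

def tagging_predict(Xtrain, ytrain, Xtest, thin=10, top_k=5):
    """TAGGING-style ensemble sketch: one thinned map per offset, crude
    marker selection per map, OLS per map, ensemble-averaged predictions."""
    preds = []
    for offset in range(thin):
        cols = np.arange(offset, Xtrain.shape[1], thin)   # thinned map
        Xs = Xtrain[:, cols]
        corr = np.abs((Xs - Xs.mean(0)).T @ (ytrain - ytrain.mean()))
        sel = cols[np.argsort(corr)[-top_k:]]             # "detected QTL"
        A = np.column_stack([np.ones(len(ytrain)), Xtrain[:, sel]])
        beta, *_ = np.linalg.lstsq(A, ytrain, rcond=None)
        At = np.column_stack([np.ones(len(Xtest)), Xtest[:, sel]])
        preds.append(At @ beta)
    return np.mean(preds, axis=0)                         # aggregate

# simulated biparental-style data: 260 lines x 300 markers, 3 true QTL
rng = np.random.default_rng(0)
X = rng.integers(0, 2, (260, 300)).astype(float)
y = 2 * X[:, 20] - 1.5 * X[:, 153] + X[:, 287] + rng.normal(0, 0.5, 260)
yhat = tagging_predict(X[:200], y[:200], X[200:])
r = np.corrcoef(yhat, y[200:])[0, 1]
```

Thinning keeps any one map far from collinear, while aggregating over offsets recovers information from markers that any single thinned map discards.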
NASA Technical Reports Server (NTRS)
Ancel, Ersin; Shih, Ann T.
2014-01-01
This paper highlights the development of a model focused on the safety issue of increasing complexity of, and reliance on, automation systems in transport-category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures, and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins, which can potentially lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, among others. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products on reducing the relative risk of automation-related accidents and incidents. To do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides the individual and collective risk reduction achieved by technologies and methodologies developed within AvSP.
NASA Astrophysics Data System (ADS)
Mishra, Abhudaya
2006-12-01
The current burgeoning research in high-nuclearity manganese-containing carboxylate clusters is primarily due to their relevance in areas as diverse as magnetic materials and bioinorganic chemistry. In the former, the ability of single molecules to retain, below a critical temperature (TB), their magnetization vector, resulting in the observation of bulk magnetization in the absence of a field and without long-range ordering of the spins, has led such molecules to be termed Single-Molecule Magnets (SMMs), or molecular nanomagnets. These molecules display superparamagnet-like slow magnetization relaxation arising from the combination of a large molecular spin, S, and a large and negative magnetoanisotropy, D. Traditionally, these nanomagnets have been Mn-containing species. An out-of-the-box approach towards synthesizing SMMs is engineering mixed-metal Mn-containing compounds. An attractive choice towards this end is the use of lanthanides (Ln), which possess both a high spin, S, and a large D. A family of related MnIII8CeIV SMMs has been synthesized. However, the Ce ion of these complexes is diamagnetic (CeIV). Thus, further investigation has led to the isolation of a family of MnIII11LnIII4 complexes in which all but the Ln = Eu complex function as single-molecule nanomagnets. The mixed-metal synthetic effort has been extended to include actinides with the successful isolation of a MnIV10ThIV6 complex, albeit this homovalent complex is not an SMM. In the bioinorganic research, the Water Oxidizing Complex (WOC) in Photosystem II (PS II) catalyzes the oxidation of H2O to O2 in green plants, algae and cyanobacteria. Recent crystal structures of the WOC confirm it to be a Mn4CaOx cluster with primarily carboxylate ligation. To date, various multinuclear Mn complexes have been synthesized as putative models of the WOC. In contrast, there have been no synthetic MnCa(Sr) mixed-metal complexes. Thus, in this bioinorganic modeling research of the WOC, various synthetic
Local degree blocking model for link prediction in complex networks.
Liu, Zhen; Dong, Weike; Fu, Yan
2015-01-01
Recovering and reconstructing networks by accurately identifying missing and unreliable links is a vital task in the domain of network analysis and mining. In this article, by studying a specific local structure, namely a degree block comprising a node and all its immediate neighbors, we find that it contains important statistical features of link formation in complex networks. We therefore propose a parameter-free local blocking (LB) predictor to quantitatively detect link formation in given networks via local link density calculations. The promising experimental results on six real-world networks suggest that the new index can outperform other traditional local-similarity-based methods on most of the tested networks. After further analyzing the correlations between the scores of LB and two other methods, we find that the LB index simultaneously captures the features of both the PA index and short-path-based indices, which empirically verifies that the LB index is a multiple-mechanism-driven link predictor. PMID:25637926
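A local-link-density score in the spirit of the abstract can be sketched as follows. This is our own loose reading of the idea, not the paper's exact LB index: we score a candidate pair by the density of links inside the block formed by the two endpoints and their immediate neighbors. The toy graph is invented:

```python
def lb_score(adj, x, y):
    """Simplified degree-block score (a loose reading, not the published
    index): link density inside the block formed by x, y and their
    immediate neighbors. Parameter-free by construction."""
    block = {x, y} | adj[x] | adj[y]
    nodes = list(block)
    links = sum(1 for i, u in enumerate(nodes)
                for v in nodes[i + 1:] if v in adj[u])
    pairs = len(nodes) * (len(nodes) - 1) // 2
    return links / pairs if pairs else 0.0

# toy undirected graph as an adjacency dict: a dense community {0,1,2,3}
# plus a sparsely attached pair {4,5}
adj = {0: {1, 2, 4}, 1: {0, 2, 3}, 2: {0, 1, 3},
       3: {1, 2}, 4: {0, 5}, 5: {4}}
dense_pair = lb_score(adj, 0, 3)    # candidate link inside the community
sparse_pair = lb_score(adj, 3, 5)   # candidate link across sparse regions
```

A missing link inside a dense community produces a denser block, and therefore a higher score, than a candidate pair spanning sparse regions.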
Modelling excitonic-energy transfer in light-harvesting complexes
Kramer, Tobias; Kreisbeck, Christoph
2014-01-08
The theoretical and experimental study of energy transfer in photosynthesis has revealed an interesting transport regime, which lies at the borderline between classical transport dynamics and quantum-mechanical interference effects. Dissipation is caused by the coupling of electronic degrees of freedom to vibrational modes and leads to a directional energy transfer from the antenna complex to the target reaction center. The dissipative driving is robust and does not rely on fine-tuning of specific vibrational modes. For the parameter regime encountered in biological systems, new theoretical tools are required to directly compare theoretical results with experimental spectroscopy data. The calculations require the use of massively parallel graphics processing units (GPUs) for efficient and exact computations.
Modeling plasmons and photons in complex, periodic lattices
NASA Astrophysics Data System (ADS)
McClarren, Ryan; Pletzer, Alexander
2002-11-01
We present the continued evolution of Curly3d, a finite element code for solving the vector Helmholtz equation in a periodic lattice. New developments in Curly3d which are of particular interest for analyzing optical properties in such lattices are discussed: (1) the capability to compute the curl of a vector field on the lattice and, by extension, the Poynting flux throughout; and (2) the implementation of algorithms that allow the lattice to have inhomogeneous and anisotropic dielectric and permeability properties on an arbitrarily small scale (i.e. on the order of a single element). Curly3d uses these new features, coupled with the flexibility afforded by its implementation in the Python scripting language, to analyze complex geometries. Calculations are performed on materials with local negative dielectric and permeability characteristics and are presented along with their implications.
Research Strategy for Modeling the Complexities of Turbine Heat Transfer
NASA Technical Reports Server (NTRS)
Simoneau, Robert J.
1996-01-01
The subject of this paper is a NASA research program, known as the Coolant Flow Management Program, which focuses on the interaction between the internal coolant channel and the external film cooling of a turbine blade and/or vane in an aircraft gas turbine engine. The turbine gas path is really a very complex flow field. The combination of strong pressure gradients, abrupt geometry changes and intersecting surfaces, viscous forces, rotation, and unsteady blade/vane interactions all combine to offer a formidable challenge. To this, in the high pressure turbine, we add the necessity of film cooling. The ultimate goal of the turbine designer is to maintain or increase the high level of turbine performance and at the same time reduce the amount of coolant flow needed to achieve this end. Simply stated, coolant flow is a penalty on the cycle and reduces engine thermal efficiency. Accordingly, understanding the flow field and heat transfer associated with the coolant flow is a priority goal. It is important to understand both the film cooling and the internal coolant flow, particularly their interaction. Thus, the motivation for the Coolant Flow Management Program. The paper will begin with a brief discussion of the management and research strategy, will then proceed to discuss the current attack from the internal coolant side, and will conclude by looking at the film cooling effort, at all times keeping sight of the primary goal: the interaction between the two. One of the themes of this paper is that complex heat transfer problems of this nature cannot be attacked by single researchers or even groups of researchers, each working alone. It truly needs the combined efforts of a well-coordinated team to make an impact. It is important to note that this is a government/industry/university team effort.
Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.
Burrows, Wesley; Doherty, John
2015-01-01
The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run-times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to calibration, and to the exploration of posterior predictive uncertainty, of a complex model that can overcome these problems in many modelling contexts. The methodology relies on conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. It employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used for calculation of the partial derivatives that collectively comprise the Jacobian matrix, while testing of parameter upgrades and the making of predictions is done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity. PMID:25142272
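The division of labor described in the abstract, surrogate for derivatives, complex model for residuals and upgrade testing, can be sketched with a damped Gauss-Newton loop. This is a toy illustration in the spirit of the approach, not the published methodology: the "complex" model below is an invented two-parameter curve with a small nonlinearity that its surrogate omits:

```python
import numpy as np

def surrogate_assisted_gn(complex_model, surrogate, obs, p0, n_iter=8, h=1e-6):
    """Gauss-Newton calibration sketch: the Jacobian is filled by finite
    differences on the cheap surrogate, while residuals and the testing of
    each parameter upgrade use the expensive complex model."""
    p = np.asarray(p0, dtype=float)
    phi = np.sum((complex_model(p) - obs) ** 2)
    for _ in range(n_iter):
        base = surrogate(p)
        J = np.empty((len(obs), len(p)))
        for j in range(len(p)):              # surrogate-derived Jacobian
            dp = p.copy()
            dp[j] += h
            J[:, j] = (surrogate(dp) - base) / h
        r = obs - complex_model(p)           # residual from the complex model
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        for _ in range(20):                  # upgrade test with step halving
            trial = p + step
            phi_trial = np.sum((complex_model(trial) - obs) ** 2)
            if phi_trial < phi:
                p, phi = trial, phi_trial
                break
            step /= 2
    return p, phi

# invented pairing: the "complex" model carries a nonlinearity the
# surrogate omits, mimicking a simplified surrogate of a detailed model
t = np.linspace(0.0, 1.0, 30)
complex_model = lambda p: p[0] * np.exp(-p[1] * t) + 0.02 * np.sin(5 * t)
surrogate = lambda p: p[0] * np.exp(-p[1] * t)
obs = complex_model(np.array([2.0, 1.5]))
p_est, phi = surrogate_assisted_gn(complex_model, surrogate, obs, [1.0, 0.5])
```

Each Gauss-Newton iteration here costs many cheap surrogate runs but only a couple of expensive complex-model runs, which is the point of the pairing.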
Dynamics and complexity of the Schelling segregation model
NASA Astrophysics Data System (ADS)
Domic, Nicolás Goles; Goles, Eric; Rica, Sergio
2011-05-01
In this paper we consider the Schelling social segregation model for two different populations. In Schelling’s model, segregation appears as a consequence of discrimination, measured by the local difference between the two populations. To that end, the model defines a tolerance criterion on the neighborhood of an individual, indicating whether the individual is able to move to a new place or not. Next, the model chooses which of the available unhappy individuals actually moves. In our work, we study the patterns generated by the dynamical evolution of the Schelling model in terms of various parameters and initial conditions, such as the size of the neighborhood of an inhabitant, the tolerance, and the initial number of individuals. As a general rule we observe that segregation patterns minimize the interface between zones occupied by different populations. In this context we introduce an energy functional associated with the configuration, which is a strictly decreasing function in the tolerant-people case. Moreover, as far as we know, we are the first to notice that in the case of a non-strictly-decreasing energy functional, the system may segregate very efficiently.
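A minimal version of the dynamics described above can be coded directly. Everything below (grid size, Moore neighbourhood on a periodic grid, random relocation of one unhappy agent per step) is one common variant of the Schelling model, not necessarily the exact protocol of the paper; the `interface_length` function plays the role of the energy functional that segregation tends to minimise:

```python
import random

def random_grid(size, occupancy, rng):
    """Random initial placement of two equal populations plus empty cells."""
    n_agents = int(size * size * occupancy) // 2 * 2
    cells = [0] * (n_agents // 2) + [1] * (n_agents // 2)
    cells += [None] * (size * size - n_agents)
    rng.shuffle(cells)
    return [cells[i * size:(i + 1) * size] for i in range(size)]

def unhappy_sites(grid, tolerance):
    """Agents whose fraction of unlike (Moore) neighbours exceeds the tolerance."""
    size, out = len(grid), []
    for r in range(size):
        for c in range(size):
            me = grid[r][c]
            if me is None:
                continue
            nb = [grid[(r + dr) % size][(c + dc) % size]
                  for dr in (-1, 0, 1) for dc in (-1, 0, 1) if dr or dc]
            nb = [n for n in nb if n is not None]
            if nb and sum(n != me for n in nb) / len(nb) > tolerance:
                out.append((r, c))
    return out

def step(grid, tolerance, rng):
    """Move one randomly chosen unhappy agent to a random empty cell."""
    unhappy = unhappy_sites(grid, tolerance)
    if not unhappy:
        return False
    empty = [(r, c) for r in range(len(grid)) for c in range(len(grid))
             if grid[r][c] is None]
    (r, c), (er, ec) = rng.choice(unhappy), rng.choice(empty)
    grid[er][ec], grid[r][c] = grid[r][c], None
    return True

def interface_length(grid):
    """Unlike nearest-neighbour pairs: the 'energy' that segregation minimises."""
    size, total = len(grid), 0
    for r in range(size):
        for c in range(size):
            if grid[r][c] is None:
                continue
            for dr, dc in ((0, 1), (1, 0)):
                other = grid[(r + dr) % size][(c + dc) % size]
                if other is not None and other != grid[r][c]:
                    total += 1
    return total
```

Iterating `step` until no agent is unhappy and comparing `interface_length` before and after shows the interface shrinking, mirroring the energy-minimisation observation in the abstract.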
Model Catalysts: Simulating the Complexities of Heterogeneous Catalysts
NASA Astrophysics Data System (ADS)
Gao, Feng; Goodman, D. Wayne
2012-05-01
Surface-science investigations have contributed significantly to heterogeneous catalysis in the past several decades. Fundamental studies of reactive systems on metal single crystals have aided researchers in understanding the effect of surface structure on catalyst reactivity and selectivity for a number of important reactions. Recently, model systems, consisting of metal clusters deposited on planar oxide surfaces, have facilitated the study of metal particle-size and support effects. These model systems not only are useful for carrying out kinetic investigations, but are also amenable to surface spectroscopic techniques, thus enabling investigations under realistic pressures and at working temperatures. By combining surface-science characterization methods with kinetic measurements under realistic working conditions, researchers are continuing to advance the molecular-level understanding of heterogeneous catalysis and are narrowing the pressure and material gap between model and real-world catalysts.
NASA Technical Reports Server (NTRS)
Hops, J. M.; Sherif, J. S.
1994-01-01
A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of software maintenance expected cost, long before software is delivered to users or customers. It has been estimated that, on average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point to aid in alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
NASA Astrophysics Data System (ADS)
Olmsted, Peter
2004-03-01
"Shear banding", i.e. flow-induced macroscopic "phase coexistence" or apparent "phase transitions", has been observed in many complex fluids, including wormlike micelles, lamellar systems, associating polymers, and liquid crystals. In this talk I will review this behavior and discuss a general phenomenology for understanding shear banding and flow-induced phase separation in complex fluids at a "thermodynamic" level (as opposed to a "statistical mechanics" level). An accurate theory must include the relevant microstructural order parameters and construct the fully coupled spatially-dependent hydrodynamic equations of motion. Although this has been done successfully for very few model fluids, we can nonetheless obtain general rules for the "phase behavior". Perhaps surprisingly, the interface between coexisting phases plays a crucial role in determining the steady-state behavior, and is much more important than its equilibrium counterpart. I will discuss recent work addressing the kinetics and morphology of wormlike micellar solutions, and touch on models for more complex oscillatory and possibly chaotic systems.
Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization
NASA Astrophysics Data System (ADS)
Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane
2003-01-01
The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods are used together with incomplete expressions of gradients. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet and a car engine cooling axial fan.
A Novel Approach for Identifying Causal Models of Complex Diseases from Family Data
Park, Leeyoung; Kim, Ju H.
2015-01-01
Causal models including genetic factors are important for understanding the presentation mechanisms of complex diseases. Familial aggregation and segregation analyses based on polygenic threshold models have been the primary approach to fitting genetic models to the family data of complex diseases. In the current study, an advanced approach to obtaining appropriate causal models for complex diseases, based on the sufficient component cause (SCC) model combined with traditional genetics principles, is proposed. The probabilities for the entire population, i.e., normal–normal, normal–disease, and disease–disease, were considered for each model for the appropriate handling of common complex diseases. The causal model in the current study included the genetic effects from single genes involving epistasis, complementary gene interactions, gene–environment interactions, and environmental effects. Bayesian inference using a Markov chain Monte Carlo (MCMC) algorithm was used to assess the proportions of each component for a given population lifetime incidence. This approach is flexible, allowing both common and rare variants within a gene and across multiple genes. An application to schizophrenia data confirmed the complexity of the causal factors. An analysis of diabetes data demonstrated that environmental factors and gene–environment interactions are the main causal factors for type II diabetes. The proposed method is effective and useful for identifying causal models, which can accelerate the development of efficient strategies for identifying causal factors of complex diseases. PMID:25701286
Which level of model complexity is justified by your data? A Bayesian answer
NASA Astrophysics Data System (ADS)
Schöniger, Anneli; Illman, Walter; Wöhling, Thomas; Nowak, Wolfgang
2016-04-01
When judging the plausibility and utility of a subsurface flow or transport model, the question of justifiability arises: which level of model complexity can still be justified by the available calibration data? Although it is common sense that more data are needed to reasonably constrain the parameter space of a more complex model, there is a lack of tools that can objectively quantify model justifiability as a function of the available data. We propose an approach to determine model justifiability in the context of comparing alternative conceptual models. Our approach rests on Bayesian model averaging (BMA). BMA yields posterior model probabilities that point the modeler to an optimal trade-off between model performance in reproducing a given calibration data set and model complexity. To find out which level of complexity can be justified by the available data, we disentangle the complexity component of the trade-off from its performance counterpart. Technically, we remove the performance component from the BMA analysis by replacing the actually observed data values with potential measurement values as predicted by the models. Our proposed analysis results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum level of model complexity that could possibly be justified by the available amount and type of data. As a side product, model (dis-)similarity is revealed. We have applied the model justifiability analysis to a case of aquifer characterization via hydraulic tomography. Four models of vastly different complexity have been proposed to represent the heterogeneity in hydraulic conductivity of a sandbox aquifer, ranging from a homogeneous medium to geostatistical random fields. We have used drawdown data from two to six pumping tests to condition the models and to determine model justifiability as a function of data set size. Our test case shows that a geostatistical parameterization scheme requires a substantial amount of
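The justifiability analysis replaces observed data with model-generated data and averages the resulting posterior model probabilities into a "model confusion matrix". The toy sketch below reproduces that logic with polynomial models of increasing complexity and BIC-style approximations to the BMA weights; the paper's models and weighting are far richer, and all functions, coefficients and noise levels here are illustrative assumptions:

```python
import math
import random

def fit_poly(x, y, deg):
    """Least-squares polynomial fit via normal equations (tiny degrees only)."""
    n = deg + 1
    A = [[sum(xi ** (i + j) for xi in x) for j in range(n)] for i in range(n)]
    b = [sum(yi * xi ** i for xi, yi in zip(x, y)) for i in range(n)]
    for col in range(n):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n):
                A[r][k] -= f * A[col][k]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][k] * coef[k] for k in range(r + 1, n))) / A[r][r]
    return coef

def bic_weight(x, y, deg, sigma=0.3):
    """Gaussian log-likelihood at the least-squares fit, BIC-penalised."""
    c = fit_poly(x, y, deg)
    rss = sum((yi - sum(cj * xi ** j for j, cj in enumerate(c))) ** 2
              for xi, yi in zip(x, y))
    return -rss / (2 * sigma ** 2) - 0.5 * (deg + 1) * math.log(len(x))

def posterior_probs(x, y, degs=(0, 1, 2)):
    w = [bic_weight(x, y, d) for d in degs]
    m = max(w)
    e = [math.exp(v - m) for v in w]
    return [v / sum(e) for v in e]

def confusion_matrix(x, degs=(0, 1, 2), sigma=0.3, reps=20, seed=3):
    """Row g: average posterior over candidate models when model g generated the data."""
    rng = random.Random(seed)
    truth = {0: [1.0], 1: [1.0, 0.8], 2: [1.0, 0.8, -0.5]}   # invented coefficients
    M = []
    for g in degs:
        acc = [0.0] * len(degs)
        for _ in range(reps):
            y = [sum(c * xi ** j for j, c in enumerate(truth[g]))
                 + rng.gauss(0, sigma) for xi in x]
            p = posterior_probs(x, y, degs)
            acc = [a + pi for a, pi in zip(acc, p)]
        M.append([a / reps for a in acc])
    return M
```

Diagonal dominance of a row indicates that the corresponding complexity level is identifiable from data of that size and noise level; off-diagonal mass flags models the data cannot tell apart.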
NASA Astrophysics Data System (ADS)
Rathi, Parveen; Sharma, Kavita; Singh, Dharam Pal
2014-09-01
Macrocyclic complexes of the type [MLX]X2, where L is (C30H28N4), a macrocyclic ligand, M = Cr(III) or Fe(III), and X = Cl-, CH3COO- or NO3-, have been synthesized by the template condensation reaction of 1,8-diaminonaphthalene and acetylacetone in the presence of trivalent metal salts in a methanolic medium. The complexes have been formulated as [MLX]X2 due to the 1:2 electrolytic nature of these complexes. The complexes have been characterized with the help of elemental analyses, molar conductance measurements, magnetic susceptibility measurements, electronic, infrared, far-infrared and mass spectral studies, and molecular modelling. The molecular weights of these complexes indicate their monomeric nature. On the basis of all these studies, a five-coordinate square-pyramidal geometry has been proposed for all these complexes. These metal complexes have also been screened for their in vitro antimicrobial activities.
Information and complexity measures for hydrologic model evaluation
Technology Transfer Automated Retrieval System (TEKTRAN)
Hydrological models are commonly evaluated through the residual-based performance measures such as the root-mean square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...
Measuring Learning Progressions Using Bayesian Modeling in Complex Assessments
ERIC Educational Resources Information Center
Rutstein, Daisy Wise
2012-01-01
This research examines issues regarding model estimation and robustness in the use of Bayesian Inference Networks (BINs) for measuring Learning Progressions (LPs). It provides background information on LPs and how they might be used in practice. Two simulation studies are performed, along with real data examples. The first study examines the case…
Ensemble learning of QTL models improves prediction of complex traits
Technology Transfer Automated Retrieval System (TEKTRAN)
Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability, but are less useful for genetic prediction due to difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage ...
Wind field near complex terrain using numerical weather prediction model
NASA Astrophysics Data System (ADS)
Chim, Kin-Sang
The PennState/NCAR MM5 model was modified to simulate idealized flow past a 3D obstacle on the Micro-Alpha Scale domain. The obstacles used were an idealized Gaussian obstacle and the real topography of Lantau Island, Hong Kong. The Froude numbers under study ranged from 0.22 to 1.5. Regime diagrams for both the idealized Gaussian obstacle and Lantau Island were constructed. This work is divided into five parts. The first part is the problem definition and the literature review of the related publications. The second part briefly discusses the PennState/NCAR MM5 model and includes a case study of long-range transport. The third part is devoted to the modification and verification of the PennState/NCAR MM5 model on the Micro-Alpha Scale domain. The Orlanski (1976) open boundary condition is implemented, together with single-sounding initialization of the model; an upper dissipative layer, following Klemp and Lilly (1978), is also implemented. The simulated results are verified against Automatic Weather Station (AWS) data and wind profiler data. Four different types of planetary boundary layer (PBL) parameterization schemes were investigated in order to find the most suitable one for the Micro-Alpha Scale domain in terms of both accuracy and efficiency; the bulk aerodynamic scheme was found to be the most suitable. The free-slip lower boundary condition was also investigated and the simulated result compared with that obtained with friction. The fourth part is the use of the modified PennState/NCAR MM5 model for an idealized flow simulation. The idealized uniform flow used is nonhydrostatic and has a constant Froude number. Sensitivity tests were performed by varying the Froude number, and the regime diagram was constructed. Moreover, the nondimensional drag was found to be useful for regime identification. The model result is also compared with the analytic
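The regime diagrams mentioned above are organised by the Froude number. As a reminder of the quantities involved (a generic textbook sketch, not code from the thesis; the numeric values in the usage note are invented):

```python
import math

def brunt_vaisala(theta0, dtheta_dz, g=9.81):
    """Static-stability (Brunt-Vaisala) frequency N = sqrt((g / theta0) * dtheta/dz),
    with theta0 the reference potential temperature (K) and dtheta/dz in K/m."""
    return math.sqrt(g / theta0 * dtheta_dz)

def froude_number(U, N, h):
    """Non-dimensional Froude number Fr = U / (N h) for stratified flow
    past an obstacle of height h, with upstream wind speed U."""
    return U / (N * h)

def regime(fr):
    """Crude classification: low-Froude flow is blocked and deflected around
    the obstacle; high-Froude flow passes over it."""
    return "flow around" if fr < 1.0 else "flow over"
```

For example, with theta0 = 290 K, dtheta/dz = 4 K/km, U = 5 m/s and a 900 m obstacle, Fr is roughly 0.48, i.e. in the blocked, flow-around part of the regime diagram.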
NASA Astrophysics Data System (ADS)
Sinclair, D. J.; Risk, M.
2004-12-01
A mixed kinetic/equilibrium steady state model of trace-element co-precipitation in coral skeleton is presented, and tested against high spatial resolution observations of coral trace-element composition made previously by LA-ICP-MS. The model is implemented in PHREEQC, and simulates physicochemical precipitation from a small pocket of seawater which is isolated by the coral, and modified by enzyme exchange of 2 H+ for Ca2+. The model assumes that all aqueous trace-element species in the calcifying fluid are in full equilibrium and that selected trace element species compete kinetically for precipitation with the major aqueous species (Ca2+ for cation substituents and CO32- for anion substituents). No equilibrium is assumed for the CaCO3 skeleton. Carbon is supplied to the system by diffusion of CO2 into the high-pH calcifying fluid, and trace elements are continuously replenished through the addition of fresh seawater. The rate at which the coral operates the enzyme pump, and the rate at which it replenishes the seawater component are independent variables, and the steady state trace-element composition of the skeleton/calcifying fluid are evaluated over a 2D grid of variables spanning realistic rates of pumping and seawater influx. It is assumed that variations in the trace element composition of the coral skeleton are the result of shifts in the steady-state caused by changes to these variables. The results indicate an unexpected complexity in the response of the trace elements. First order predictions suggest that increasing the rate of calcification by increasing the enzyme pumping should result in a mutual dilution of most trace element species by pumped Ca2+ and diffused CO2. However, for high rates of pumping and low seawater replenishment, the model predicts a change in the trace-element response of the system as high CO32- concentrations drive calcification and deplete Ca2+. This added complexity makes rationalizing observations with models more difficult.
The effects of numerical-model complexity and observation type on estimated porosity values
Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.
2015-01-01
The relative merits of model complexity and the types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA) is coupled with an advective transport simulation, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration also are discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by the complex and simple models are broadly similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best-quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.
Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414
Generating efficient executable models for complex virtual experimentation with the Tornado kernel.
Claeys, Filip H A; Fritzson, Peter; Vanrolleghem, Peter A
2007-01-01
Virtual experimentation is a collective term that includes various model evaluation procedures such as simulation, optimization and scenario analysis. Given the complexity of the models used in these procedures, and the number of evaluations that is required to complete them, highly efficient model implementations are desired. Although water quality management is a domain in which complex virtual experimentation is often adopted, only relatively little attention has thus far been devoted to the automated generation of efficient executable models. This article reports on a number of promising results regarding executable model generation that were obtained in the scope of the Tornado kernel, using techniques such as equiv substitution and equation lifting. PMID:17898445
Model of human collective decision-making in complex environments
NASA Astrophysics Data System (ADS)
Carbone, Giuseppe; Giannoccaro, Ilaria
2015-12-01
A continuous-time Markov process is proposed to analyze how a group of humans solves a complex task, consisting of the search for the optimal set of decisions on a fitness landscape. Individuals change their opinions driven by two different forces: (i) self-interest, which pushes them to increase their own fitness values, and (ii) social interactions, which push individuals to reduce the diversity of their opinions in order to reach consensus. Results show that the performance of the group is strongly affected by the strength of social interactions and by the level of knowledge of the individuals. Increasing the strength of social interactions improves the performance of the team; however, social interactions that are too strong slow down the search for the optimal solution and worsen the performance of the group. In particular, we find that the threshold value of the social interaction strength which leads to the emergence of a superior group intelligence is precisely the critical threshold at which consensus among the members sets in. We also prove that a moderate level of knowledge is already enough to guarantee high performance of the group in making decisions.
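The qualitative behaviour described above can be sketched with a toy agent-based version of the process: agents hold binary decision vectors on a fitness landscape and update either out of self-interest or by copying the majority. The landscape, update rule and parameter values below are illustrative assumptions (the paper uses a continuous-time Markov process; an additive landscape is used here instead of a rugged one so the sketch stays checkable):

```python
import random

def make_landscape(n_bits, rng):
    """Additive fitness landscape: each bit contributes independently.
    Rugged (NK-type) landscapes would be closer to the paper's setting."""
    w = [(rng.random(), rng.random()) for _ in range(n_bits)]
    return lambda s: sum(w[i][b] for i, b in enumerate(s)) / n_bits

def simulate_group(n_agents=10, n_bits=8, social=0.3, steps=4000, seed=5):
    """Opinion dynamics: each step one agent either copies the majority value
    of a random bit (social force, probability `social`) or keeps a random
    bit flip only if it raises its own fitness (self-interest)."""
    rng = random.Random(seed)
    fitness = make_landscape(n_bits, rng)
    agents = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_agents)]
    mean0 = sum(fitness(a) for a in agents) / n_agents
    for _ in range(steps):
        a = rng.choice(agents)
        i = rng.randrange(n_bits)
        if rng.random() < social:
            # social force: align with the current majority on bit i
            a[i] = round(sum(ag[i] for ag in agents) / n_agents)
        else:
            # self-interest: keep the flip only if own fitness improves
            old = fitness(a)
            a[i] ^= 1
            if fitness(a) < old:
                a[i] ^= 1
    mean1 = sum(fitness(a) for a in agents) / n_agents
    return mean0, mean1, agents
```

With a moderate `social` value the group improves its mean fitness while converging in opinion; pushing `social` toward 1 suppresses the self-interested search, echoing the trade-off described in the abstract.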
Reconstitution of [Fe]-hydrogenase using model complexes
NASA Astrophysics Data System (ADS)
Shima, Seigo; Chen, Dafa; Xu, Tao; Wodrich, Matthew D.; Fujishiro, Takashi; Schultz, Katherine M.; Kahnt, Jörg; Ataka, Kenichi; Hu, Xile
2015-12-01
[Fe]-Hydrogenase catalyses the reversible hydrogenation of a methenyltetrahydromethanopterin substrate, which is an intermediate step during the methanogenesis from CO2 and H2. The active site contains an iron-guanylylpyridinol cofactor, in which Fe2+ is coordinated by two CO ligands, as well as an acyl carbon atom and a pyridinyl nitrogen atom from a 3,4,5,6-substituted 2-pyridinol ligand. However, the mechanism of H2 activation by [Fe]-hydrogenase is unclear. Here we report the reconstitution of [Fe]-hydrogenase from an apoenzyme using two FeGP cofactor mimics to create semisynthetic enzymes. The small-molecule mimics reproduce the ligand environment of the active site, but are inactive towards H2 binding and activation on their own. We show that reconstituting the enzyme using a mimic that contains a 2-hydroxypyridine group restores activity, whereas an analogous enzyme with a 2-methoxypyridine complex was essentially inactive. These findings, together with density functional theory computations, support a mechanism in which the 2-hydroxy group is deprotonated before it serves as an internal base for heterolytic H2 cleavage.
Modeling platinum group metal complexes in aqueous solution.
Lienke, A; Klatt, G; Robinson, D J; Koch, K R; Naidoo, K J
2001-05-01
We construct force fields suited for the study of three platinum group metals (PGM) as chloranions in aqueous solution from quantum chemical computations and report experimental data. Density functional theory (DFT) using the local density approximation (LDA), as well as extended basis sets that incorporate relativistic corrections for the transition metal atoms, has been used to obtain equilibrium geometries, harmonic vibrational frequencies, and atomic charges for the complexes. We found that DFT calculations of [PtCl(6)](2-).3H(2)O, [PdCl(4)](2-).2H(2)O, and [RhCl(6)](3-).3H(2)O water clusters compared well with molecular mechanics (MM) calculations using the specific force field developed here. The force field performed equally well in condensed phase simulations. A 500 ps molecular dynamics (MD) simulation of [PtCl(6)](2-) in water was used to study the structure of the solvation shell around the anion. The resulting data were compared to an experimental radial distribution function derived from X-ray diffraction experiments. We found the calculated pair correlation functions (PCF) for hexachloroplatinate to be in good agreement with experiment and were able to use the simulation results to identify and resolve two water-anion peaks in the experimental spectrum. PMID:11327912
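The comparison with X-ray diffraction above rests on pair correlation functions computed from the MD trajectory. A generic g(r) estimator for particles in a cubic periodic box might look like the following (a self-contained sketch; the coordinates and box size are placeholder inputs, not data from the study):

```python
import math
import random

def pair_correlation(coords, box, dr=0.1, r_max=None):
    """Radial distribution function g(r) for 3D points in a cubic periodic box.
    Returns (bin centres, g values); valid for r < box / 2."""
    n = len(coords)
    if r_max is None:
        r_max = box / 2
    nbins = int(r_max / dr)
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for k in range(3):
                d = coords[i][k] - coords[j][k]
                d -= box * round(d / box)      # minimum-image convention
                d2 += d * d
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2          # each pair counted for both centres
    rho = n / box ** 3                          # mean number density
    centres, g = [], []
    for b in range(nbins):
        r_lo, r_hi = b * dr, (b + 1) * dr
        shell = 4.0 / 3.0 * math.pi * (r_hi ** 3 - r_lo ** 3)
        g.append(hist[b] / (n * rho * shell))   # normalise by ideal-gas count
        centres.append(r_lo + dr / 2)
    return centres, g
```

For uniformly random points the estimator returns g(r) close to 1 at all distances; structured MD coordinates would instead show solvation-shell peaks of the kind compared against the X-ray-derived distribution.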
Microstructure-based modelling of multiphase materials and complex structures
NASA Astrophysics Data System (ADS)
Werner, Ewald; Wesenjak, Robert; Fillafer, Alexander; Meier, Felix; Krempaszky, Christian
2015-10-01
Micromechanical approaches are frequently employed to monitor local and global field quantities and their evolution under varying mechanical and/or thermal loading scenarios. In this contribution, an overview is given of important methods currently used to gain insight into the deformational and failure behaviour of multiphase materials and complex structures. First, techniques to represent material microstructures are reviewed. It is common either to digitise images of real microstructures or to generate virtual 2D or 3D microstructures using automated procedures (e.g. Voronoï tessellation) for grain generation and colouring algorithms for phase assignment. While the former method captures exactly all features of the microstructure at hand with respect to morphology and topology, the latter method opens up the possibility of parametric studies of the influence of individual microstructure features on the local and global stress and strain response. Several applications of these approaches are presented, comprising the low- and high-strain behaviour of multiphase steels, the failure and fracture behaviour of multiphase materials, and the evolution of surface roughening of the aluminium top metallisation of semiconductor devices.
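The virtual-microstructure route described above (Voronoï tessellation for grain generation plus a colouring step for phase assignment) can be sketched in a few lines; the grain count, phase fractions and pixel grid below are illustrative choices, not parameters from the study:

```python
import random

def voronoi_microstructure(width, height, n_grains, phase_fractions, seed=0):
    """Virtual 2D microstructure: Voronoi tessellation of random seed points
    (grain generation) followed by random phase assignment (colouring)."""
    rng = random.Random(seed)
    seeds = [(rng.uniform(0, width), rng.uniform(0, height))
             for _ in range(n_grains)]

    def draw_phase():
        # sample a phase index according to the target volume fractions
        u, acc = rng.random(), 0.0
        for ph, frac in enumerate(phase_fractions):
            acc += frac
            if u < acc:
                return ph
        return len(phase_fractions) - 1   # guard against float round-off

    grain_phase = [draw_phase() for _ in range(n_grains)]
    # each pixel belongs to the grain of its nearest seed point
    grain = [[min(range(n_grains),
                  key=lambda s: (x - seeds[s][0]) ** 2 + (y - seeds[s][1]) ** 2)
              for x in range(width)] for y in range(height)]
    phase = [[grain_phase[g] for g in row] for row in grain]
    return grain, phase
```

The resulting `phase` map is the kind of synthetic input a micromechanical solver would mesh; varying `n_grains` or `phase_fractions` enables the parametric studies mentioned above.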
ERIC Educational Resources Information Center
Kim, Young Rae
2013-01-01
A theoretical model of metacognition in complex modeling activities has been developed based on existing frameworks, by synthesizing the re-conceptualization of metacognition at multiple levels by looking at the three sources that trigger metacognition. Using the theoretical model as a framework, this study was designed to explore how students'…
The Skilled Counselor Training Model: Skills Acquisition, Self-Assessment, and Cognitive Complexity
ERIC Educational Resources Information Center
Little, Cassandra; Packman, Jill; Smaby, Marlowe H.; Maddux, Cleborne D.
2005-01-01
The authors evaluated the effectiveness of the Skilled Counselor Training Model (SCTM; M. H. Smaby, C. D. Maddux, E. Torres-Rivera, & R. Zimmick, 1999) in teaching counseling skills and in fostering counselor cognitive complexity. Counselor trainees who completed the SCTM had better counseling skills and higher levels of cognitive complexity than…
40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 40 (Protection of Environment), Volume 16, revised as of 2010-07-01. § 80.48 Augmentation of the complex emission model by vehicle testing. Environmental Protection Agency (Continued), Air Programs (Continued), Regulation of Fuels and Fuel Additives, Reformulated Gasoline.
The zebrafish as a model for complex tissue regeneration
Gemberling, Matthew; Bailey, Travis J.; Hyde, David R.; Poss, Kenneth D.
2013-01-01
For centuries, philosophers and scientists have been fascinated by the principles and implications of regeneration in lower vertebrate species. Two features have made zebrafish an informative model system for determining mechanisms of regenerative events. First, they are highly regenerative, able to regrow amputated fins, as well as a lesioned brain, retina, spinal cord, heart, and other tissues. Second, they are amenable to both forward and reverse genetic approaches, with a research toolset regularly updated by an expanding community of zebrafish researchers. Zebrafish studies have helped identify new mechanistic underpinnings of regeneration in multiple tissues, and in some cases have served as a guide for contemplating regenerative strategies in mammals. Here, we review the recent history of zebrafish as a genetic model system for understanding how and why tissue regeneration occurs. PMID:23927865
NREL's System Advisor Model Simplifies Complex Energy Analysis (Fact Sheet)
Not Available
2015-01-01
NREL has developed a tool -- the System Advisor Model (SAM) -- that can help decision makers analyze cost, performance, and financing of any size grid-connected solar, wind, or geothermal power project. Manufacturers, engineering and consulting firms, research and development firms, utilities, developers, venture capital firms, and international organizations use SAM for end-to-end analysis that helps determine whether and how to make investments in renewable energy projects.
Data Mining Approaches for Modeling Complex Electronic Circuit Design Activities
Kwon, Yongjin; Omitaomu, Olufemi A; Wang, Gi-Nam
2008-01-01
A printed circuit board (PCB) is an essential part of modern electronic circuits. It is made of a flat panel of insulating materials with patterned copper foils that act as electric pathways for various components such as ICs, diodes, capacitors, resistors, and coils. The size of PCBs has been shrinking over the years, while the number of components mounted on these boards has increased considerably. This trend makes the design and fabrication of PCBs ever more difficult. At the beginning of a design cycle, it is important to accurately estimate the time required to complete the design steps, based on many factors such as the required parts, the approximate board size and shape, and a rough sketch of the schematics. The current approach uses a multiple linear regression (MLR) technique for time and cost estimation. However, the need for accurate predictive models continues to grow as the technology becomes more advanced. In this paper, we analyze a large volume of historical PCB design data, extract some important variables, and develop predictive models based on the extracted variables using a data mining approach. The data mining approach uses an adaptive support vector regression (ASVR) technique; the benchmark model is the MLR technique currently used in the industry. The strengths of SVR for these data include its ability to represent data in high-dimensional space through kernel functions. The computational results show that the data mining approach is the better prediction technique for these data. Our approach reduces computation time and enhances the practical applications of the SVR technique.
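The contrast drawn above between MLR and kernel-based regression can be illustrated with a minimal sketch. The data here are synthetic stand-ins for the proprietary PCB design records, and kernel ridge regression is used as a compact relative of SVR (both are RBF-kernel methods); none of the variable names correspond to the paper's actual features:

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical stand-in for PCB design records:
# x = a scaled design feature, y = design time with a nonlinear trend plus noise
x = rng.uniform(0, 1, 80)
y = np.sin(2 * np.pi * x) + 0.5 * x + rng.normal(0, 0.05, 80)

# baseline: linear regression via least squares (the MLR benchmark)
A = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
mlr_mse = np.mean((A @ coef - y) ** 2)

# kernel ridge regression with an RBF kernel (closely related to SVR)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(x)), y)
krr_mse = np.mean((K @ alpha - y) ** 2)
# the kernel model captures the nonlinearity that the linear fit misses
```

The kernel trick is what gives SVR-type methods the "high-dimensional representation" ability the abstract mentions: the Gram matrix K implicitly maps each point into a feature space where the nonlinear trend becomes linearly fittable.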
Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.
Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn
2015-10-01
Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of the seventeen objects is represented with more than 80 site-specific parameters, about 22 of which are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of
Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention
ERIC Educational Resources Information Center
Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David
2016-01-01
Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…
Using New Models to Analyze Complex Regularities of the World: Commentary on Musso et al. (2013)
ERIC Educational Resources Information Center
Nokelainen, Petri; Silander, Tomi
2014-01-01
This commentary to the recent article by Musso et al. (2013) discusses issues related to model fitting, comparison of classification accuracy of generative and discriminative models, and two (or more) cultures of data modeling. We start by questioning the extremely high classification accuracy with an empirical data from a complex domain. There is…
Assessment of higher order turbulence models for complex two- and three-dimensional flowfields
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
A numerical method is presented to solve the three-dimensional Navier-Stokes equations in combination with a full Reynolds-stress turbulence model. Computations will be shown for three complex flowfields. The results of the Reynolds-stress model will be compared with those predicted by two different versions of the k-omega model. It will be shown that an improved version of the k-omega model gives as accurate results as the Reynolds-stress model.
In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...
Modeling complex chemical effects in turbulent nonpremixed combustion
NASA Technical Reports Server (NTRS)
Smith, Nigel S. A.
1995-01-01
Virtually all of the energy derived from the consumption of combustibles occurs in systems which utilize turbulent fluid motion. Since combustion is largely related to the mixing of fluids and mixing processes are orders of magnitude more rapid when enhanced by turbulent motion, efficiency criteria dictate that chemically powered devices necessarily involve fluid turbulence. Where combustion occurs concurrently with mixing at an interface between two reactive fluid bodies, this mode of combustion is called nonpremixed combustion. This is distinct from premixed combustion, where flame-fronts propagate into a homogeneous mixture of reactants. These two modes are limiting cases in the range of temporal lag between mixing of reactants and the onset of reaction. Nonpremixed combustion occurs where this lag tends to zero, while premixed combustion occurs where this lag tends to infinity. Many combustion processes are hybrids of these two extremes with finite non-zero lag times. Turbulent nonpremixed combustion is important from a practical standpoint because it occurs in gas-fired boilers, furnaces, waste incinerators, diesel engines, gas turbine combustors, afterburners, etc. To a large extent, past development of these practical systems involved an empirical methodology. Presently, efficiency standards and emission regulations are being further tightened (Correa 1993), and empiricism has had to give way to more fundamental research in order to understand and effectively model practical combustion processes (Pope 1991). A key element in effective modeling of turbulent combustion is making use of a sufficiently detailed chemical kinetic mechanism. The prediction of pollutant emissions such as oxides of nitrogen (NO(x)) and sulphur (SO(x)), unburned hydrocarbons, and particulates demands the use of detailed chemical mechanisms. It is essential that practical models for turbulent nonpremixed combustion are capable of handling large numbers of 'stiff' chemical species
Efficient modelling of droplet dynamics on complex surfaces.
Karapetsas, George; Chamakos, Nikolaos T; Papathanasiou, Athanasios G
2016-03-01
This work investigates the dynamics of droplet interaction with smooth or structured solid surfaces using a novel sharp-interface scheme which allows the efficient modelling of multiple dynamic contact lines. The liquid-gas and liquid-solid interfaces are treated in a unified context and the dynamic contact angle emerges simply due to the combined action of the disjoining and capillary pressure, and viscous stresses, without the need of an explicit boundary condition or any requirement for the predefinition of the number and position of the contact lines. The latter, as it is shown, renders the model able to handle interfacial flows with topological changes, e.g. in the case of an impinging droplet on a structured surface. It is then possible to predict, depending on the impact velocity, whether the droplet will fully or partially impregnate the structures of the solid, or will result in a 'fakir', i.e. suspended, state. In the case of a droplet sliding on an inclined substrate, we also demonstrate the built-in capability of our model to provide a prediction for either static or dynamic contact angle hysteresis. We focus our study on hydrophobic surfaces and examine the effect of the geometrical characteristics of the solid surface. It is shown that the presence of air inclusions trapped in the micro-structure of a hydrophobic substrate (Cassie-Baxter state) results in a decrease of contact angle hysteresis and an increase of the droplet migration velocity, in agreement with experimental observations for super-hydrophobic surfaces. Moreover, we perform 3D simulations which are in line with the 2D ones regarding the droplet mobility and also indicate that the contact angle hysteresis may be significantly affected by the directionality of the structures with respect to the droplet motion. PMID:26828706
Causal Inference and Model Selection in Complex Settings
NASA Astrophysics Data System (ADS)
Zhao, Shandong
Propensity score methods have become a part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely, inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted using standard statistical methods. We review the principal stratification framework, which allows for modeling this effect as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with point null hypothesis, why a usual likelihood ratio test does not apply for a problem of this nature, and a doable fix to correctly
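The IPW idea reviewed above can be sketched for the simple binary-treatment case. The data-generating process below is an illustrative simplification (a single confounder with a known propensity score), not the article's actual setup, which concerns non-binary treatments and estimated propensities:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                        # confounder
e = 1.0 / (1.0 + np.exp(-x))                  # true propensity score, assumed known here
t = rng.binomial(1, e)                        # treatment assignment depends on x
y = 2.0 * t + x + rng.normal(size=n)          # outcome; the true ATE is 2

# naive difference in means is confounded (treated units have larger x)
naive = y[t == 1].mean() - y[t == 0].mean()

# inverse propensity weighting reweights each arm to the full population,
# recovering the average treatment effect
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
```

In practice the propensity score must itself be estimated (e.g. by logistic regression), which is where the misspecification and extrapolation risks noted in the abstract enter.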
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
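A TTD model predicts the concentration at a well by convolving the tracer input history with the travel time distribution. A minimal sketch with a hypothetical rising NO3− input history and a simple exponential TTD (one of the classical analytical forms, not the paper's SDM):

```python
import numpy as np

# hypothetical NO3- input history at the water table, 1950-2020 (mg/L)
years = np.arange(1950, 2021)
c_in = np.linspace(1.0, 10.0, years.size)

# exponential travel time distribution with mean travel time tau (years)
tau = 20.0
ages = np.arange(years.size)
g = np.exp(-ages / tau) / tau
g /= g.sum()                      # renormalize over the truncated record window

# concentration at the well in 2020: sum over ages of (input 'age' years ago) * g(age)
c_well = np.sum(c_in[::-1] * g)
# the well lags the current input because much of the water is decades old
```

The well concentration sits well below the current input value, which is exactly the age-lag effect that makes TTD model choice matter for trend predictions.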
Gaussian Process Model for Collision Dynamics of Complex Molecules
NASA Astrophysics Data System (ADS)
Cui, Jie; Krems, Roman V.
2015-08-01
We show that a Gaussian process model can be combined with a small number (of order 100) of scattering calculations to provide a multidimensional dependence of scattering observables on the experimentally controllable parameters (such as the collision energy or temperature) as well as the potential energy surface (PES) parameters. For the case of Ar-C6H6 collisions, we show that 200 classical trajectory calculations are sufficient to provide a ten-dimensional hypersurface, giving the dependence of the collision lifetimes on the collision energy, internal temperature, and eight PES parameters. This can be used for solving the inverse scattering problem, for the efficient calculation of thermally averaged observables, for reducing the error of the molecular dynamics calculations by averaging over the PES variations, and for the analysis of the sensitivity of the observables to individual parameters determining the PES. Trained by a combination of classical and quantum calculations, the model provides an accurate description of the quantum scattering cross sections, even near scattering resonances.
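The core idea, fitting a Gaussian process surrogate to a modest number of expensive calculations and then interpolating anywhere in parameter space, can be sketched with a bare-bones RBF-kernel GP regressor. The two "control parameters" and the target surface below are hypothetical stand-ins, not the Ar-C6H6 lifetime surface:

```python
import numpy as np

def rbf(X1, X2, ell=0.3):
    # squared-exponential kernel between two point sets
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(X, y, Xs, jitter=1e-6):
    # GP posterior mean with zero prior mean and (nearly) noise-free data
    K = rbf(X, X) + jitter * np.eye(len(X))
    return rbf(Xs, X) @ np.linalg.solve(K, y)

# hypothetical smooth "observable" over two scaled control parameters
f = lambda X: np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])

grid = np.linspace(0.0, 1.0, 6)
X_train = np.array([[a, b] for a in grid for b in grid])   # 36 training "calculations"
y_train = f(X_train)

# predict the observable at a point where no calculation was run
X_test = np.array([[0.35, 0.62]])
pred = gp_predict(X_train, y_train, X_test)[0]
```

With only a few dozen training points the GP mean tracks the smooth surface closely between them, which is why roughly 200 trajectory calculations can suffice for a ten-dimensional dependence in the paper's setting.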
Application of surface complexation models to anion adsorption by natural materials.
Goldberg, Sabine
2014-10-01
Various chemical models of ion adsorption are presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model, are described in the present study. Characteristics common to all the surface complexation models are equilibrium constant expressions, mass and charge balances, and surface activity coefficient electrostatic potential terms. Methods for determining parameter values for surface site density, capacitances, and surface complexation constants also are discussed. Spectroscopic experimental methods of establishing ion adsorption mechanisms include vibrational spectroscopy, nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, X-ray absorption spectroscopy, and X-ray reflectivity. Experimental determinations of point of zero charge shifts and ionic strength dependence of adsorption results and molecular modeling calculations also can be used to deduce adsorption mechanisms. Applications of the surface complexation models to heterogeneous natural materials, such as soils, using the component additivity and the generalized composite approaches are described. Emphasis is on the generalized composite approach for predicting anion adsorption by soils. Continuing research is needed to develop consistent and realistic protocols for describing ion adsorption reactions on soil minerals and soils. The availability of standardized model parameter databases for use in chemical speciation-transport models is critical. PMID:24619924
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
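Variance-based GSA of the kind cited above is commonly implemented with Sobol sensitivity indices. A minimal pick-freeze (Saltelli-type) estimator on the standard Ishigami benchmark function, used here purely as an illustration rather than any of the environmental models discussed:

```python
import numpy as np

rng = np.random.default_rng(0)

def ishigami(X, a=7.0, b=0.1):
    # standard GSA benchmark with known first-order Sobol indices
    return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))       # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
varY = np.concatenate([yA, yB]).var()

# first-order index S_i: freeze column i from B into A ("pick-freeze")
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(yB * (ishigami(ABi) - yA)) / varY)
# S ranks inputs by how much of the output variance each explains on its own
```

For Ishigami the analytical values are S1 ≈ 0.31, S2 ≈ 0.44, S3 = 0, so the estimator correctly flags x2 as dominant and x3 as influential only through interactions, which is the kind of diagnosis the trilemma framework applies to new model components.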
NASA Astrophysics Data System (ADS)
Christensen, Claire Petra
Across diverse fields ranging from physics to biology, sociology, and economics, the technological advances of the past decade have engendered an unprecedented explosion of data on highly complex systems with thousands, if not millions of interacting components. These systems exist at many scales of size and complexity, and it is becoming ever-more apparent that they are, in fact, universal, arising in every field of study. Moreover, they share fundamental properties---chief among these, that the individual interactions of their constituent parts may be well-understood, but the characteristic behaviour produced by the confluence of these interactions---by these complex networks---is unpredictable; in a nutshell, the whole is more than the sum of its parts. There is, perhaps, no better illustration of this concept than the discoveries being made regarding complex networks in the biological sciences. In particular, though the sequencing of the human genome in 2003 was a remarkable feat, scientists understand that the "cellular-level blueprints" for the human being are cellular-level parts lists, but they say nothing (explicitly) about cellular-level processes. The challenge of modern molecular biology is to understand these processes in terms of the networks of parts---in terms of the interactions among proteins, enzymes, genes, and metabolites---as it is these processes that ultimately differentiate animate from inanimate, giving rise to life! It is the goal of systems biology---an umbrella field encapsulating everything from molecular biology to epidemiology in social systems---to understand processes in terms of fundamental networks of core biological parts, be they proteins or people. By virtue of the fact that there are literally countless complex systems, not to mention tools and techniques used to infer, simulate, analyze, and model these systems, it is impossible to give a truly comprehensive account of the history and study of complex systems. The author
Surface complexation modeling of Cr(VI) adsorption at the goethite-water interface.
Xie, Jinyu; Gu, Xueyuan; Tong, Fei; Zhao, Yanping; Tan, Yinyue
2015-10-01
In this study, a charge distribution multisite surface complexation model (CD-MUSIC) for adsorption of chromate onto goethite was carefully developed. The adsorption of Cr(VI) on goethite was first investigated as a function of pH, ionic strength and Cr(VI) concentration. Results showed that an inner-sphere complexation mechanism was involved because the retention of Cr(VI) was little influenced by ionic strength. Two surface species, a bidentate complex (≡Fe2O2CrOOH) and a monodentate complex (≡FeOCrO3(-3/2)), both constrained by prior spectroscopic evidence, were then proposed to fit the macroscopic adsorption data. Modeling results showed that the bidentate complex was the dominant species at low pH, whereas, with increasing pH, the monodentate species became more pronounced. The model was then verified by prediction of competitive adsorption of chromate and phosphate at various ratios and ionic strengths. The model successfully predicted the inhibition of chromate adsorption in the presence of phosphate, suggesting that phosphate has a higher affinity for the goethite surface than Cr(VI). Results showed that the model developed in this study for Cr(VI) on goethite was applicable under various conditions. It is a useful supplement to the surface complexation model database for oxyanions on goethite surfaces. PMID:26057103
Chen, Yunfeng; Liu, Baoyu; Ju, Lining; Hong, Jinsung; Ji, Qinghua; Chen, Wei; Zhu, Cheng
2015-01-01
Membrane receptor-ligand interactions mediate many cellular functions. Binding kinetics and downstream signaling triggered by these molecular interactions are likely affected by the mechanical environment in which binding and signaling take place. A recent study demonstrated that mechanical force can regulate antigen recognition by and triggering of the T-cell receptor (TCR). This was made possible by a new technology we developed and termed fluorescence biomembrane force probe (fBFP), which combines single-molecule force spectroscopy with fluorescence microscopy. Using an ultra-soft human red blood cell as the sensitive force sensor, a high-speed camera and real-time image tracking techniques, the fBFP achieves resolutions of ~1 pN (10(-12) N) in force, ~3 nm in space and ~0.5 msec in time. With the fBFP, one can precisely measure single receptor-ligand binding kinetics under force regulation and simultaneously image binding-triggered intracellular calcium signaling on a single live cell. This new technology can be used to study other membrane receptor-ligand interactions and signaling in other cells under mechanical regulation. PMID:26274371
NASA Astrophysics Data System (ADS)
Ohki, Kazuo
2005-08-01
Using differential scanning calorimetry, preferential interaction between melittin and dimyristoylphosphatidylcholine was observed in various binary mixtures of phospholipids. It is shown that matching of the hydrophobic regions between melittin and the fatty acyl chains of phospholipids is the most important factor. Using a microscopic imaging instrument for membrane fluidity, specific interaction between cholesterol and sphingomyelin in rafts was confirmed in living CHO cells. An environment-sensitive fluorescent dye, laurdan, was used in this home-built instrument. The membrane fluidity of the cells was scarcely affected by treatment with sphingomyelinase up to 0.1 U ml(-1). On the other hand, an increase in membrane fluidity was observed in CHO cells treated with methyl-beta-cyclodextrin, which removes cholesterol molecules from the biomembranes of the cells in a concentration-dependent manner up to 10 mM. A low concentration of methyl-beta-cyclodextrin (1 mM) did not raise the membrane fluidity. However, an increase in membrane fluidity was observed in CHO cells treated with sphingomyelinase and then with 1 mM methyl-beta-cyclodextrin. These results suggest specific interaction between sphingomyelin and cholesterol in the rafts.
Complexities in Ferret Influenza Virus Pathogenesis and Transmission Models.
Belser, Jessica A; Eckert, Alissa M; Tumpey, Terrence M; Maines, Taronna R
2016-09-01
Ferrets are widely employed to study the pathogenicity, transmissibility, and tropism of influenza viruses. However, inherent variations in inoculation methods, sampling schemes, and experimental designs are often overlooked when contextualizing or aggregating data between laboratories, leading to potential confusion or misinterpretation of results. Here, we provide a comprehensive overview of parameters to consider when planning an experiment using ferrets, collecting data from the experiment, and placing results in context with previously performed studies. This review offers information that is of particular importance for researchers in the field who rely on ferret data but do not perform the experiments themselves. Furthermore, this review highlights the breadth of experimental designs and techniques currently available to study influenza viruses in this model, underscoring the wide heterogeneity of protocols currently used for ferret studies while demonstrating the wealth of information which can benefit risk assessments of emerging influenza viruses. PMID:27412880
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2008-01-01
The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The multi-factor has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points, the initial and final points. The exponent describes a monotonic path from the initial condition to the final. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used was obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.
Do climate models reproduce complexity of observed sea level changes?
NASA Astrophysics Data System (ADS)
Becker, M.; Karpytchev, M.; Marcos, M.; Jevrejeva, S.; Lennartz-Sassinek, S.
2016-05-01
The ability of Atmosphere-Ocean General Circulation Models (AOGCMs) to capture the statistical behavior of sea level (SL) fluctuations has been assessed at the local scale. To do so, we have compared scaling behavior of the SL fluctuations simulated in the historical runs of 36 CMIP5 AOGCMs to that in the longest (>100 years) SL records from 23 tide gauges around the globe. The observed SL fluctuations are known to manifest power-law scaling. We have checked if the SL changes simulated in the AOGCMs exhibit the same scaling properties and the long-term correlations as observed in the tide gauge records. We find that the majority of AOGCMs overestimates the scaling of SL fluctuations, particularly in the North Atlantic. Consequently, AOGCMs, routinely used to project regional SL rise, may underestimate the part of the externally driven SL rise, in particular the anthropogenic footprint, in the projections for the 21st century.
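Power-law scaling and long-term correlations of the kind compared above are commonly quantified with detrended fluctuation analysis (DFA). A minimal DFA-1 sketch on synthetic white noise, whose scaling exponent should come out near 0.5; this is an illustration of the method only, not actual tide-gauge or CMIP5 data:

```python
import numpy as np

def dfa(x, scales):
    """DFA-1: RMS fluctuation of the linearly detrended integrated profile per window size."""
    y = np.cumsum(x - x.mean())              # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:
            p = np.polyfit(t, seg, 1)        # remove the linear trend in each window
            f2.append(np.mean((seg - np.polyval(p, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.normal(size=20_000)                  # uncorrelated noise
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales)
# scaling exponent = log-log slope of F(s); ~0.5 for white noise,
# >0.5 signals the long-term correlations seen in tide-gauge records
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Comparing the exponent estimated from model output against that from observations is one concrete way to test whether an AOGCM "overestimates the scaling" as reported.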
Analytical model study of dendrimer/DNA complexes.
Qamhieh, Khawla; Nylander, Tommy; Ainalem, Marie-Louise
2009-07-13
The interaction between positively charged poly(amido amine) (PAMAM) dendrimers of generation 4 and DNA has been investigated for two DNA lengths: 2000 basepairs (bp; L = 680 nm) and 4331 bp (L = 1472.5 nm), using a theoretical model by Schiessel for a semiflexible polyelectrolyte and hard spheres. The model was modified to take into account that the dendrimers are to be regarded as soft spheres, that is, the radius is not constant when the DNA interacts with the dendrimer. For the shorter and longer DNA, the estimated optimal wrapping length, l(opt), is ≈15.69 and ≈12.25 nm, respectively, for dendrimers that retain their original size (R(o) = 2.25 nm) upon DNA interaction. However, the values of l(opt) for dendrimers that were considered to contract to a radius of R = 0.4R(o) = 0.9 nm were 9.3 and 9.4 nm for the short and long DNA, respectively, and the effect due to the DNA length is no longer observed. For l(opt) = 10.88 nm, which is the length needed to neutralize the 64 positive charges of the G4 dendrimer, the maximum number of dendrimers per DNA (N(max)) was ≈76 for the shorter DNA, which is larger than the corresponding experimental value of 35 for 2000 bp DNA. For the longer DNA, N(max) ≈ 160, which is close to the experimental value of 140 for the 4331 bp DNA. Charge inversion of the dendrimer is only observed when they retain their size or only slightly contract upon DNA interaction. PMID:19438230
NASA Astrophysics Data System (ADS)
Finger, David; Vis, Marc; Huss, Matthias; Seibert, Jan
2015-04-01
The assessment of snow, glacier, and rainfall runoff contributions to discharge in mountain streams is of major importance for adequate water resource management. Such contributions can be estimated via hydrological models, provided that the modeling adequately accounts for snow and glacier melt, as well as rainfall runoff. We present a multiple data set calibration approach to estimate runoff composition using hydrological models with three levels of complexity. For this purpose, the code of the conceptual runoff model HBV-light was enhanced to allow calibration and validation of simulations against glacier mass balances, satellite-derived snow cover area, and measured discharge. Three levels of complexity of the model were applied to glacierized catchments in Switzerland, ranging from 39 to 103 km². The results indicate that all three observational data sets are reproduced adequately by the model, allowing an accurate estimation of the runoff composition in the three mountain streams. However, calibration against runoff alone leads to unrealistic snow and glacier melt rates. Based on these results, we recommend using all three observational data sets in order to constrain model parameters and compute snow, glacier, and rain contributions. Finally, based on the comparison of model performance across different complexities, we postulate that the availability and use of different data sets to calibrate hydrological models might be more important than model complexity for achieving realistic estimations of runoff composition.
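A multiple-data-set calibration of the kind described is typically scored by a weighted combination of per-data-set efficiencies. The sketch below is illustrative, not the HBV-light implementation; the key names and equal weights are assumptions:

```python
def nse(obs, sim):
    """Nash-Sutcliffe-style efficiency: 1 - SSE/SST; 1.0 is a perfect fit."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / sst

def combined_objective(obs, sim, weights=(1/3, 1/3, 1/3)):
    """Weighted mean efficiency over the three observational data sets:
    discharge, snow cover area, and glacier mass balance."""
    keys = ("discharge", "snow_cover", "mass_balance")
    return sum(w * nse(obs[k], sim[k]) for w, k in zip(weights, keys))
```

A calibration routine would maximize this combined score, so a parameter set that fits discharge by compensating errors in snow or glacier melt is penalized.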
Frequency modelling and solution of fluid-structure interaction in complex pipelines
NASA Astrophysics Data System (ADS)
Xu, Yuanzhi; Johnston, D. Nigel; Jiao, Zongxia; Plummer, Andrew R.
2014-05-01
Complex pipelines may have various structural supports and boundary conditions, as well as branches. To analyse the vibrational characteristics of piping systems, frequency-domain modelling and solution methods considering complex constraints are developed here. A fourteen-equation model and the Transfer Matrix Method (TMM) are employed to describe Fluid-Structure Interaction (FSI) in liquid-filled pipes. A general solution for the multi-branch pipe is proposed in this paper, offering a methodology to predict frequency responses of complex piping systems. Several branched pipe systems were built for validation purposes, showing good agreement with the calculated results.
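The Transfer Matrix Method chains per-section matrices by multiplication. The 2x2 acoustic analogue below illustrates the mechanics only; the fourteen-equation FSI model uses 14x14 section matrices, but the chaining is identical. All parameter values are illustrative:

```python
import numpy as np

def section_matrix(length, wavespeed, impedance, omega):
    """2x2 transfer matrix of a uniform pipe section relating complex
    (pressure, flow) amplitudes at the inlet to those at the outlet."""
    k = omega / wavespeed                      # wavenumber at this frequency
    return np.array([[np.cos(k * length), -1j * impedance * np.sin(k * length)],
                     [-1j * np.sin(k * length) / impedance, np.cos(k * length)]])

def chain(sections, omega):
    """Overall transfer matrix of a series pipeline: ordered product of
    the section matrices. Branches add junction conditions on top of this."""
    M = np.eye(2, dtype=complex)
    for length, wavespeed, impedance in sections:
        M = section_matrix(length, wavespeed, impedance, omega) @ M
    return M
```

A consistency check: two identical half-length sections chained together must equal one full-length section.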
2.5D complex resistivity modeling and inversion using unstructured grids
NASA Astrophysics Data System (ADS)
Xu, Kaijun; Sun, Jie
2016-04-01
The complex resistivity characteristics of rocks and ores have long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity, and it has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from the CCM parameters: direct resistivity (ρ0), chargeability (m), time constant (τ), and frequency dependence (c). It is therefore very important to obtain the complex parameters of a geologic body. Traditional rectangular grids approximate complex structures and terrain poorly. In order to enhance the numerical accuracy and rationality of modeling and inversion, we use an adaptive finite-element algorithm for forward modeling of the frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm in the inversion of 2.5D complex resistivity. An adaptive finite element method is applied to solve the 2.5D complex resistivity forward problem for a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex resistivity electromagnetic fields. Next, a pseudo-delta function is used to distribute the electric dipole source. The electromagnetic fields are then expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by the anomalous conductivity of inhomogeneities. Finally, we calculated the electromagnetic field response of complex geoelectric structures such as anticlines, synclines, and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented with the conjugate gradient algorithm, which does not need to form the sensitivity matrix explicitly but directly computes the product of the sensitivity matrix (or its transpose) with a vector. In addition, the inversion target zones are
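The Cole-Cole model that enters the forward problem is straightforward to evaluate; a minimal sketch with illustrative parameter values:

```python
def cole_cole(omega, rho0, m, tau, c):
    """Cole-Cole complex resistivity:
    rho(omega) = rho0 * (1 - m * (1 - 1/(1 + (i*omega*tau)**c))).
    rho0: direct (DC) resistivity, m: chargeability, tau: time constant,
    c: frequency dependence exponent."""
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))
```

The limiting behaviour is a useful sanity check: at zero frequency the model returns ρ0, and at very high frequency it approaches ρ0(1 - m).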
NASA Technical Reports Server (NTRS)
Schaefer, Jacob; Hanson, Curt; Johnson, Marcus A.; Nguyen, Nhan
2011-01-01
Three model reference adaptive controllers (MRACs) with varying levels of complexity were evaluated on a high performance jet aircraft and compared along with a baseline nonlinear dynamic inversion controller. The handling qualities and performance of the controllers were examined during failure conditions that induce coupling between the pitch and roll axes. Results from flight tests showed that, with a roll-to-pitch input coupling failure, the handling qualities went from Level 2 with the baseline controller to Level 1 with the most complex MRAC tested. A failure scenario with the left stabilator frozen also showed improvement with the MRAC. Improvement in performance and handling qualities was generally seen as complexity was incrementally added; however, added complexity usually corresponds to increased verification and validation effort required for certification. The tradeoff between complexity and performance is thus important to a control system designer when implementing an adaptive controller on an aircraft. This paper investigates this relation through flight testing of several controllers of varying complexity.
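A scalar model reference adaptive controller illustrates the adaptation idea at the lowest level of complexity. This is a textbook gradient-type MRAC sketch, not any of the flight controllers tested; all plant values and gains are illustrative:

```python
def simulate_mrac(a=-1.0, b=2.0, am=-4.0, bm=4.0, gamma=5.0,
                  dt=1e-3, steps=20000):
    """Scalar MRAC: plant dx = a*x + b*u with a, b unknown (sign(b)
    assumed positive), reference model dxm = am*xm + bm*r, control
    u = kr*r + kx*x with Lyapunov-style adaptive gain updates.
    Returns the |tracking error| history under a square-wave reference."""
    x = xm = 0.0
    kx = kr = 0.0
    errs = []
    for i in range(steps):
        r = 1.0 if (i // 2000) % 2 == 0 else -1.0  # square-wave reference
        u = kr * r + kx * x
        e = x - xm                                  # tracking error
        kx -= gamma * e * x * dt                    # adaptive laws
        kr -= gamma * e * r * dt
        x += (a * x + b * u) * dt                   # plant (Euler step)
        xm += (am * xm + bm * r) * dt               # reference model
        errs.append(abs(e))
    return errs
```

The gains converge toward the matching values kx* = (am - a)/b and kr* = bm/b, so the tracking error shrinks over successive reference reversals, mirroring how adaptation recovers performance after a failure changes the effective plant.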
Modeling Damage Complexity-Dependent Non-Homologous End-Joining Repair Pathway
Li, Yongfeng; Reynolds, Pamela; O'Neill, Peter; Cucinotta, Francis A.
2014-01-01
Non-homologous end joining (NHEJ) is the dominant DNA double strand break (DSB) repair pathway and involves several repair proteins such as Ku, DNA-PKcs, and XRCC4. It has been shown experimentally that the choice of NHEJ proteins is determined by the complexity of the DSB. In this paper, we built a mathematical model, based on published data, to study how NHEJ depends on damage complexity. Under an appropriate set of parameters obtained by a minimization technique, we can simulate the kinetics of foci track formation in fluorescently tagged mammalian cells (Ku80-EGFP and DNA-PKcs-YFP for simple and complex DSB repair, respectively) in good agreement with the published experimental data, supporting the notion that simple DSBs undergo fast repair in a Ku-dependent, DNA-PKcs-independent manner, while complex DSB repair requires additional DNA-PKcs for end processing, resulting in slower repair, a slower release rate of Ku, and a slower joining rate of complex DNA ends. Guided by numerous experimental observations, we investigated several models to describe the kinetics of complex DSB repair. An important prediction of our model is that the rejoining of complex DSBs proceeds through synapsis formation, similar to a second-order reaction between ends, rather than first-order break filling/joining. The synapsis formation (SF) model allows for diffusion of ends before the synapsis is formed, which is precluded in the first-order model by the rapid coupling of ends. The SF model therefore also predicts the higher number of chromosomal aberrations observed with high linear energy transfer (LET) radiation, due to the higher proportion of complex DSBs compared to low-LET radiation and an increased probability of misrejoining following diffusion before the synapsis is formed, whereas the first-order model provides no mechanism for the increased effectiveness in producing chromosomal aberrations. PMID:24520318
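The paper's central contrast, first-order break rejoining versus second-order synapsis formation, can be illustrated with a toy Euler integration. The rate constants below are illustrative, chosen so the two processes start at the same initial rejoining rate:

```python
def rejoining_kinetics(n0, k1, k2, dt=0.01, steps=1000):
    """Compare first-order break rejoining, dN/dt = -k1*N, with the
    second-order synapsis-formation picture, dN/dt = -k2*N**2, in which
    two free ends must meet before joining. Returns both Euler time
    courses of the remaining break count N(t)."""
    n_first, n_second = n0, n0
    first, second = [n0], [n0]
    for _ in range(steps):
        n_first += -k1 * n_first * dt
        n_second += -k2 * n_second ** 2 * dt
        first.append(n_first)
        second.append(n_second)
    return first, second
```

Even with matched initial rates, the second-order process slows dramatically as ends are depleted, leaving a long-lived residual population of unjoined ends; that slow tail is what gives diffusing ends the opportunity to misrejoin.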
Complex Transition to Cooperative Behavior in a Structured Population Model
Miranda, Luciano; de Souza, Adauto J. F.; Ferreira, Fernando F.; Campos, Paulo R. A.
2012-01-01
Cooperation plays an important role in the evolution of species and human societies. Understanding the emergence and persistence of cooperation in those systems is a fascinating and fundamental question. Many mechanisms have been extensively studied and proposed as supporting cooperation. The current work addresses the role of migration in the maintenance of cooperation in structured populations. This problem is investigated from an evolutionary perspective through the prisoner's dilemma game paradigm. It is found that migration and structure play an essential role in the evolution of cooperative behavior. The possible outcomes of the model are extinction of the entire population, dominance of the cooperative strategy, and coexistence between cooperators and defectors. The coexistence phase is obtained in the range of large migration rates. We also verify the existence of a critical level of structuring beyond which cooperation is always likely. In summary, we conclude that increases in the number of demes as well as in the migration rate favor the fixation of cooperative behavior. PMID:22761736
Endophenotype Network Models: Common Core of Complex Diseases
Ghiassian, Susan Dina; Menche, Jörg; Chasman, Daniel I.; Giulianini, Franco; Wang, Ruisheng; Ricchiuto, Piero; Aikawa, Masanori; Iwata, Hiroshi; Müller, Christian; Zeller, Tania; Sharma, Amitabh; Wild, Philipp; Lackner, Karl; Singh, Sasha; Ridker, Paul M.; Blankenberg, Stefan; Barabási, Albert-László; Loscalzo, Joseph
2016-01-01
Historically, human diseases have been differentiated and categorized based on the organ system in which they primarily manifest. Recently, an alternative view is emerging that emphasizes that different diseases often have common underlying mechanisms and shared intermediate pathophenotypes, or endo(pheno)types. Within this framework, a specific disease’s expression is a consequence of the interplay between the relevant endophenotypes and their local, organ-based environment. Important examples of such endophenotypes are inflammation, fibrosis, and thrombosis and their essential roles in many developing diseases. In this study, we construct endophenotype network models and explore their relation to different diseases in general and to cardiovascular diseases in particular. We identify the local neighborhoods (module) within the interconnected map of molecular components, i.e., the subnetworks of the human interactome that represent the inflammasome, thrombosome, and fibrosome. We find that these neighborhoods are highly overlapping and significantly enriched with disease-associated genes. In particular they are also enriched with differentially expressed genes linked to cardiovascular disease (risk). Finally, using proteomic data, we explore how macrophage activation contributes to our understanding of inflammatory processes and responses. The results of our analysis show that inflammatory responses initiate from within the cross-talk of the three identified endophenotypic modules. PMID:27278246
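Module overlap of the kind reported for the inflammasome, thrombosome, and fibrosome neighborhoods is commonly quantified with a Jaccard index over gene sets. A minimal sketch; the toy gene sets below are hypothetical examples, not the actual interactome modules:

```python
def jaccard(module_a, module_b):
    """Jaccard overlap between two gene modules (sets of gene symbols):
    |A & B| / |A | B|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(module_a), set(module_b)
    return len(a & b) / len(a | b)

# hypothetical toy modules for illustration only
inflammasome = {"IL1B", "NLRP3", "CASP1", "TNF"}
thrombosome = {"F2", "F10", "TNF", "IL1B"}
```

A high pairwise Jaccard index across endophenotype modules is one concrete way the "highly overlapping neighborhoods" finding can be expressed.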
Ecological Acclimation and Hydrologic Response: Problem Complexity and Modeling Challenges
NASA Astrophysics Data System (ADS)
Kumar, P.; Srinivasan, V.; Le, P. V. V.; Drewry, D.
2012-04-01
Elevated CO2 in the atmosphere leads to a number of acclimatory responses in different vegetation types. These may be characterized as structural, such as changes in vegetation height or foliage density; ecophysiological, such as reduction in stomatal conductance; and biochemical, such as photosynthetic down-regulation. Furthermore, the allocation of assimilated carbon to different vegetation parts such as leaves, roots, stem, and seeds is also altered, such that empirical allometric relations are no longer valid. The extent and nature of these acclimatory responses vary between C3 and C4 vegetation and across species. These acclimatory responses have significant impact on hydrologic fluxes of both water and energy, with the possibility of large-scale hydrologic influence. Capturing the pathways of acclimatory response to provide accurate ecohydrologic predictions requires incorporating subtle relationships that are accentuated under elevated CO2. The talk will discuss the challenges of modeling these responses, as well as applications to soybean, maize, and bioenergy crops such as switchgrass and miscanthus.
Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology
NASA Astrophysics Data System (ADS)
Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.
2005-11-01
Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.
Modeling the production of highly-complex molecules in star-forming regions
NASA Astrophysics Data System (ADS)
Garrod, R. T.
2016-05-01
Molecules of increasing complexity are being observed toward star-forming regions, including the recently detected iso-propyl cyanide, the first interstellar branched carbon-chain molecule. Modeling the formation of new complex organics requires new grain-surface production mechanisms, as well as gas-phase and grain-surface destruction processes. The method for constructing networks for new molecules is discussed, as well as the results of recent models of branched carbon-chain molecule chemistry. The formation of both simple and complex organics in cold regions is also discussed. New, exact kinetics models indicate that complex molecules may be formed efficiently at very low temperatures, if CO is abundant on the grain surfaces.
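Rate-equation chemical networks of the kind used in these models reduce to coupled ODEs over species abundances. A generic Euler sketch, illustrative and not the authors' code; the reaction and species names are placeholders:

```python
def integrate_network(abund, reactions, dt, steps):
    """Euler integration of a rate-equation chemical network.
    abund: dict of species abundances; reactions: list of
    (reactants, products, rate_coefficient) tuples, where each reaction
    rate is the coefficient times the product of reactant abundances."""
    for _ in range(steps):
        deriv = {species: 0.0 for species in abund}
        for reactants, products, k in reactions:
            rate = k
            for r in reactants:
                rate *= abund[r]
            for r in reactants:
                deriv[r] -= rate
            for p in products:
                deriv[p] += rate
        for species in abund:
            abund[species] += deriv[species] * dt
    return abund
```

Grain-surface networks add accretion, desorption, and diffusion terms on top of this scheme, and the "exact kinetics" models mentioned above replace the deterministic rates with stochastic treatments, but the bookkeeping of reactants and products is the same.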
NASA Astrophysics Data System (ADS)
Stanculescu, Ioana; Mandravel, Cristina; Landy, David; Woisel, Patrice; Surpateanu, Gheorghe
2003-07-01
The formation of the complex between tetrandrine and the calcium ion in solution was studied using FTIR, UV-Vis, ¹H NMR, ¹³C NMR, and electrospray mass spectrometry, together with molecular modeling. The calcium salts used were Ca(ClO₄)₂·4H₂O and Ca(picrate)₂ in the solvents acetonitrile (CH₃CN), deuterated acetonitrile (CD₃CN), and tetrahydrofuran (THF). The determined complex stability constant was 20277 ± 67 dm³ mol⁻¹, with a corresponding free energy ΔG⁰ = -5.820 ± 0.002 kcal mol⁻¹. Molecular simulation of the complex formation with the MM3 Augmented force field integrated in CAChe provided useful data about its energy. Combining the experimental results and molecular modeling, we propose a structure for the tetrandrine-Ca complex with an eight-coordinate geometry.
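The reported free energy follows from the stability constant via ΔG⁰ = -RT ln K. A quick check, assuming T ≈ 298 K (the experimental temperature is not stated in the abstract, so the result lands close to, not exactly at, the reported -5.820 kcal mol⁻¹):

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal mol^-1 K^-1

def binding_free_energy(stability_constant, temperature=298.15):
    """Standard free energy of complex formation: dG0 = -R*T*ln(K)."""
    return -R_KCAL * temperature * math.log(stability_constant)
```

With K = 20277 dm³ mol⁻¹ this gives roughly -5.9 kcal mol⁻¹, consistent with the reported value to within the uncertainty in temperature and constants.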
Yun, Sung Mi; Kang, Christina S; Kim, Jonghwa; Kim, Han S
2015-04-28
The removal of heavy metals (Zn and Pb) and heavy petroleum oils (HPOs) from a soil with complex contamination was examined by soil flushing. Desorption and transport behaviors of the complex contaminants were assessed by batch and continuous flow reactor experiments and through modeling simulations. Flushing a one-dimensional flow column packed with complex contaminated soil sequentially with citric acid then a surfactant resulted in the removal of 85.6% of Zn, 62% of Pb, and 31.6% of HPO. The desorption distribution coefficients, KUbatch and KLbatch, converged to constant values as Ce increased. An equilibrium model (ADR) and nonequilibrium models (TSNE and TRNE) were used to predict the desorption and transport of complex contaminants. The nonequilibrium models demonstrated better fits with the experimental values obtained from the column test than the equilibrium model. The ranges of KUbatch and KLbatch were very close to those of KUfit and KLfit determined from model simulations. The parameters (R, β, ω, α, and f) determined from model simulations were useful for characterizing the transport of contaminants within the soil matrix. The results of this study provide useful information for the operational parameters of the flushing process for soils with complex contamination. PMID:25698434
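For the equilibrium (ADR) limit referred to above, the retardation factor R follows directly from the linear sorption distribution coefficient. A one-line sketch with illustrative soil parameters (the actual Kd, bulk density, and porosity values are not given per contaminant in the abstract):

```python
def retardation_factor(kd, bulk_density, porosity):
    """Retardation factor for linear equilibrium sorption during
    one-dimensional transport: R = 1 + (rho_b / theta) * Kd.
    kd: distribution coefficient (L/kg), bulk_density: rho_b (kg/L),
    porosity: water-filled porosity theta (dimensionless)."""
    return 1.0 + bulk_density * kd / porosity
```

Larger Kd means stronger sorption and hence a later breakthrough of the contaminant in the flushing column; the nonequilibrium (TSNE/TRNE) models add the kinetic parameters β, ω, α, and f on top of R.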
A generalised enzyme kinetic model for predicting the behaviour of complex biochemical systems
Wong, Martin Kin Lok; Krycer, James Robert; Burchfield, James Geoffrey; James, David Ernest; Kuncic, Zdenka
2015-01-01
Quasi-steady-state enzyme kinetic models are increasingly used in systems modelling. The Michaelis-Menten model is popular due to its reduced parameter dimensionality, but its low-enzyme and irreversibility assumptions may not always be valid in the in vivo context. Whilst the total quasi-steady-state assumption (tQSSA) model eliminates the reactant stationary assumptions, its mathematical complexity is increased. Here, we propose the differential quasi-steady-state approximation (dQSSA) kinetic model, which expresses the differential equations as a linear algebraic equation. It eliminates the reactant stationary assumptions of the Michaelis-Menten model without increasing model dimensionality. The dQSSA was found to be easily adaptable for reversible enzyme kinetic systems with complex topologies and to predict behaviour consistent with mass action kinetics in silico. Additionally, the dQSSA was able to predict coenzyme inhibition in the reversible lactate dehydrogenase enzyme, which the Michaelis-Menten model failed to do. Whilst the dQSSA does not account for the physical and thermodynamic interactions of all intermediate enzyme-substrate complex states, it is proposed to be suitable for modelling complex enzyme-mediated biochemical systems, due to its simpler application, reduced parameter dimensionality, and improved accuracy. PMID:25859426
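The Michaelis-Menten and tQSSA rate laws contrasted above can be compared directly: at low enzyme concentration they agree, while at high enzyme concentration the Michaelis-Menten rate overshoots. A minimal sketch (the dQSSA itself is not reproduced here):

```python
import math

def v_mm(e0, s, kcat, km):
    """Michaelis-Menten rate, valid when enzyme << substrate:
    v = kcat * E0 * S / (Km + S)."""
    return kcat * e0 * s / (km + s)

def v_tqssa(e0, s, kcat, km):
    """Total-QSSA rate: solve the quadratic for the enzyme-substrate
    complex concentration C without assuming E0 is small, v = kcat * C."""
    b = e0 + s + km
    c = (b - math.sqrt(b * b - 4.0 * e0 * s)) / 2.0
    return kcat * c
```

Note that the tQSSA rate can never exceed kcat times the smaller of E0 and S, whereas the Michaelis-Menten expression can violate that bound at high enzyme, which is the regime where its low-enzyme assumption breaks down.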
Modeling of Wall-Bounded Complex Flows and Free Shear Flows
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.
1994-01-01
Various wall-bounded flows with complex geometries and free shear flows have been studied with a newly developed realizable Reynolds stress algebraic equation model. The model development is based on invariant theory in continuum mechanics, which enables us to formulate a general constitutive relation for the Reynolds stresses. Pope was the first to introduce this kind of constitutive relation to turbulence modeling. In our study, realizability is imposed on the truncated constitutive relation to determine the coefficients so that, unlike the standard k-ε eddy-viscosity model, the present model will not produce negative normal stresses in any situation of rapid distortion. Calculations based on the present model have shown encouraging success in modeling complex turbulent flows.
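The realizability violation that motivates the model can be seen directly in the standard k-ε eddy-viscosity relation for the streamwise normal stress, which goes negative at large strain rates. A sketch with illustrative values:

```python
def normal_stress_kepsilon(k, eps, dudx, c_mu=0.09):
    """Streamwise normal stress from the standard k-eps eddy-viscosity
    relation: uu = (2/3)*k - 2*nu_t*dU/dx with nu_t = c_mu * k**2 / eps.
    Realizability requires uu >= 0, but this expression turns negative
    (unphysical) when the strain rate dU/dx is large enough."""
    nu_t = c_mu * k * k / eps
    return (2.0 / 3.0) * k - 2.0 * nu_t * dudx
```

A realizable model instead constrains the coefficients (effectively making c_mu strain-dependent) so the predicted normal stresses stay non-negative under rapid distortion.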
Rivas, Elena; Lang, Raymond; Eddy, Sean R
2012-02-01
The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic mo