Science.gov

Sample records for model biomembranes complex

  1. Modeling biomembranes.

    SciTech Connect

    Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas

    2005-11-01

    Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes, or "lipid rafts," are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications but also will impact Sandia's efforts to develop membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double-tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.
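
    A minimal sketch of the kind of solvent-free, coarse-grained lipid interaction such studies build on, patterned after the Cooke-Deserno model rather than the specific Sandia model; all parameter values here are illustrative:

      import numpy as np

      def wca(r, eps=1.0, sigma=1.0):
          # Purely repulsive core (WCA): Lennard-Jones cut and shifted
          # at its minimum, applied to all bead pairs.
          rc = 2.0**(1.0/6.0) * sigma
          u = np.zeros_like(r)
          m = r < rc
          sr6 = (sigma / r[m])**6
          u[m] = 4.0*eps*(sr6**2 - sr6) + eps
          return u

      def tail_attraction(r, eps=1.0, sigma=1.0, wc=1.6):
          # Broad cos^2-tapered attraction between tail beads; its range
          # wc controls whether the lipids self-assemble into fluid bilayers.
          rc = 2.0**(1.0/6.0) * sigma
          u = np.zeros_like(r)
          u[r < rc] = -eps
          taper = (r >= rc) & (r < rc + wc)
          u[taper] = -eps*np.cos(np.pi*(r[taper]-rc)/(2.0*wc))**2
          return u

      r = np.linspace(0.95, 3.0, 200)
      u_tail = wca(r) + tail_attraction(r)   # tail-tail pair energy profile

    In models of this family, head beads interact purely repulsively while tail beads share the broad attraction above; for attraction ranges around 1.4-1.7 sigma the lipids aggregate into fluid bilayers, qualitatively matching the spontaneous self-assembly described in the abstract.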

  2. Introductory lecture: basic quantities in model biomembranes.

    PubMed

    Nagle, John F

    2013-01-01

    One of the many aspects of membrane biophysics dealt with in this Faraday Discussion regards the material moduli that describe energies at a supramolecular level. This introductory lecture first critically reviews differences in reported numerical values of the bending modulus K_C, which is a central property for the biologically important flexibility of membranes. It is speculated that there may be a reason that the shape analysis method tends to give larger values of K_C than the micromechanical manipulation method or the more recent X-ray method, which agree very well with each other. Another theme of membrane biophysics is the use of simulations to provide exquisite detail of structures and processes. This lecture critically reviews the application of atomic-level simulations to the quantitative structure of simple single-component lipid bilayers, and diagnostics are introduced to evaluate simulations. Another theme of this Faraday Discussion is lateral heterogeneity in biomembranes with many different lipids. Coarse-grained simulations and analytical theories promise to synergistically enhance experimental studies when their interaction parameters are tuned to agree with experimental data, such as the slopes of experimental tie lines in ternary phase diagrams. Finally, attention is called to contributions that add relevant biological molecules to bilayers and to contributions that study the exciting shape changes and different non-bilayer structures with different lipids.

  3. Ferroelectric active models of ion channels in biomembranes.

    PubMed

    Bystrov, V S; Lakhno, V D; Molchanov, M

    1994-06-21

    Ferroactive models of ion channels in the theory of biological membranes are presented. The main equations are derived and their possible solutions are shown. The estimates of some experimentally measured parameters are given. Possible physical consequences of the suggested models are listed and the possibility of their experimental finding is discussed. The functioning of the biomembrane's ion channel is qualitatively described on the basis of the suggested ferroactive models. The main directions and prospects for development of the ferroactive approach to the theory of biological membranes and their structures are indicated.

  4. The specificity of frutalin lectin using biomembrane models.

    PubMed

    Nobre, Thatyane M; Pavinatto, Felippe J; Cominetti, Márcia R; Selistre de-Araújo, Heloísa S; Zaniquelli, Maria E D; Beltramini, Leila M

    2010-08-01

    Frutalin is a homotetrameric alpha-d-galactose (d-Gal)-binding lectin that activates natural killer cells in vitro and promotes leukocyte migration in vivo. Because lectins are potent lymphocyte stimulators, understanding the interactions that occur between them and cell surfaces can help elucidate the action mechanisms involved in this process. In this paper, we present a detailed investigation of the interactions of frutalin with phospho- and glycolipids using Langmuir monolayers as biomembrane models. The results confirm the specificity of frutalin for d-Gal attached to a biomembrane. Adsorption of frutalin was more efficient for lipids with galactose polar heads than for sulfated galactose, for which a lag time was observed, indicating a rearrangement of the monolayer to incorporate the protein. For ganglioside GM1 monolayers, lower quantities of the protein were adsorbed, probably because the d-galactose is positioned farther from the interface. Binary mixtures containing galactocerebroside revealed small domains formed at high lipid packing in the presence of frutalin, suggesting that the lectin induces clustering and domain formation in vitro, which may be a form of receptor internalization. This is the first experimental evidence of such a lectin effect, and it may be useful for understanding the mechanism of action of lectins at the molecular level.

  5. Interactions of PAMAM dendrimers with negatively charged model biomembranes.

    PubMed

    Yanez Arteta, Marianna; Ainalem, Marie-Louise; Porcar, Lionel; Martel, Anne; Coker, Helena; Lundberg, Dan; Chang, Debby P; Soltwedel, Olaf; Barker, Robert; Nylander, Tommy

    2014-11-13

    We have investigated the interactions between cationic poly(amidoamine) (PAMAM) dendrimers of generation 4 (G4), a potential gene transfection vector, with net-anionic model biomembranes composed of different ratios of zwitterionic phosphocholine (PC) and anionic phospho-L-serine (PS) phospholipids. Two types of model membranes were used: solid-supported bilayers, prepared with lipids carrying palmitoyl-oleoyl (PO) and diphytanoyl (DPh) acyl chains, and free-standing bilayers, formed at the interface between two aqueous droplets in oil (droplet interface bilayers, DIBs) using the DPh-based lipids. G4 dendrimers were found to translocate through POPC:POPS bilayers deposited on silica surfaces. The charge density of the bilayer affects translocation, which is reduced when the ionic strength increases. This shows that the dendrimer-bilayer interactions are largely controlled by their electrostatic attraction. The structure of the solid-supported bilayers remains intact upon translocation of the dendrimer. However, the amount of lipids in the bilayer decreases and dendrimer/lipid aggregates are formed in bulk solution, which can be deposited on the interfacial layers upon dilution of the system with dendrimer-free solvent. Electrophysiology measurements on DIBs confirm that G4 dendrimers cross the lipid membranes containing PS, which then become more permeable to ions. The obtained results have implications for PAMAM dendrimers as delivery vehicles to cells. PMID:25310456

  6. Biomembrane models and drug-biomembrane interaction studies: Involvement in drug design and development

    PubMed Central

    Pignatello, R.; Musumeci, T.; Basile, L.; Carbone, C.; Puglisi, G.

    2011-01-01

    A drug comes into contact with many different biological membranes after its systemic administration. From circulating macrophages to the vessel endothelium to more complex absorption barriers, the interaction of a biomolecule with these membranes largely determines its rate and time of biodistribution in the body and at the target sites. Therefore, investigating the phenomena occurring at cell membranes, as well as how drugs interact with them differently under physiological or pathological conditions, is important for elucidating the molecular basis of many diseases and for identifying new potential therapeutic strategies. The complexity of the structure and functions of biological and cell membranes has pushed researchers toward the proposition and validation of simpler two- and three-dimensional membrane models, whose utility and drawbacks will be discussed. This review also describes the analytical methods used to examine the interactions of bioactive compounds with biological membrane models, with particular emphasis on calorimetric techniques. These studies can be considered a powerful tool for medicinal chemistry and pharmaceutical technology in designing new drugs and optimizing the activity and safety profiles of compounds already used in therapy. PMID:21430952

  7. Absorption of nitro-polycyclic aromatic hydrocarbons by biomembrane models: effect of the medium lipophilicity.

    PubMed

    Castelli, Francesco; Micieli, Dorotea; Ottimo, Sara; Minniti, Zelica; Sarpietro, Maria Grazia; Librando, Vito

    2008-10-01

    To demonstrate the relationship between the structure of nitro-polycyclic aromatic hydrocarbons and their effect on biomembranes, we have investigated the influence of three structurally different nitro-polycyclic aromatic hydrocarbons, 2-nitrofluorene, 2,7-dinitrofluorene and 3-nitrofluoranthene, on the thermotropic behavior of dimyristoylphosphatidylcholine multilamellar vesicles, used as biomembrane models, by means of differential scanning calorimetry. The obtained results indicate that the studied nitro-polycyclic aromatic hydrocarbons affected the thermotropic behavior of multilamellar vesicles to various extents, modifying the pretransition and the main phase transition peaks and shifting them to lower temperatures. The effect of the aqueous and lipophilic medium on the absorption of these compounds by the biomembrane models has also been investigated, revealing that the process is hindered by the aqueous medium but strongly favored by the lipophilic medium. PMID:18723205

  8. Transfer kinetics from colloidal drug carriers and liposomes to biomembrane models: DSC studies

    PubMed Central

    Sarpietro, Maria Grazia; Castelli, Francesco

    2011-01-01

    The release of bioactive molecules from different delivery systems has been studied. We have proposed a protocol that employs a system able to take up a bioactive molecule as it is released over time, resembling an in vivo system; for this purpose we used biomembrane models consisting of multilamellar and unilamellar vesicles. The bioactive molecule-loaded delivery system was put in contact with the biomembrane model, and the release was evaluated by considering the effect of the bioactive molecule on the thermotropic behavior of the biomembrane model and comparing the results with those obtained when the pure drug interacts with the biomembrane model. The differential scanning calorimetry technique was employed. Depending on the delivery system used, this approach permits evaluation of the effects of different parameters on bioactive molecule release, such as pH, drug loading degree, delivery system swelling, cross-linking agent, degree of cross-linking, and delivery system side chains. PMID:21430957

  9. Interaction of antioxidant biobased epicatechin conjugates with biomembrane models.

    PubMed

    Lazaro, Elisabet; Castillo, José A; Rafols, Clara; Rosés, Martí; Clapés, Pere; Torres, Josep Lluís

    2007-04-18

    (-)-Epicatechin conjugates with sulfur-containing moieties are strong free radical scavengers with cell-protecting activities, which may be in part modulated by their capacity to bind to biological membranes. We present here a study of the interaction of these conjugates with membrane models such as multilamellar vesicles and a phospholipid-coated silica column (immobilized artificial membrane), monitored by differential scanning calorimetry and high-performance liquid chromatography, respectively. The nonpolyphenolic moiety significantly influenced the membrane behavior of the whole molecules. Bulky and hydrophobic conjugates clearly interacted with the phospholipids and may have a tendency to penetrate into the hydrophobic core of the vesicles. In contrast, the smaller cationic 4β-(2-aminoethylthio)epicatechin may be located at the outer interface of the lipid membrane. The outcomes from both experimental set-ups were in good agreement. The differences detected in the biological activities of the conjugates may be explained in part by their tendency to penetrate the cell membrane.

  10. Metal transport across biomembranes: emerging models for a distinct chemistry.

    PubMed

    Argüello, José M; Raimunda, Daniel; González-Guerrero, Manuel

    2012-04-20

    Transition metals are essential components of important biomolecules, and their homeostasis is central to many life processes. Transmembrane transporters are key elements controlling the distribution of metals in various compartments. However, due to their chemical properties, transition elements require transporters with different structural-functional characteristics from those of alkali and alkali earth ions. Emerging structural information and functional studies have revealed distinctive features of metal transport. Among these are the relevance of multifaceted events involving metal transfer among participating proteins, the importance of coordination geometry at transmembrane transport sites, and the presence of the largely irreversible steps associated with vectorial transport. Here, we discuss how these characteristics shape novel transition metal ion transport models.

  11. Comparison between cucurbiturils and β-cyclodextrin interactions with cholesterol molecules present in Langmuir monolayers used as a biomembrane model.

    PubMed

    Tovani, Camila Bussola; de Souza, João Francisco Ventrici; Cavallini, Thiago de Souza; Demets, Grégoire Jean-François; Ito, Amando; Barioni, Marina Berardi; Pazin, Wallance Moreira; Zaniquelli, Maria Elisabete Darbello

    2013-11-01

    Specific surface techniques can probe the interaction of cholesterol (Chol) with substances that are able to host and/or sequester this biomolecule, provided that the additives are properly assembled at the interface. Reports on inclusion complexes of Chol with β-cyclodextrins exist in the literature. Here we compare the interaction of β-cyclodextrin and cucurbiturils with Chol present in Langmuir phospholipid (dipalmitoylphosphatidylcholine, DPPC) monolayers, used as a biomembrane model. Cucurbiturils, CB[n], are macrocyclic host molecules consisting of n glycoluril units. Classic surface pressure curves, dilatational surface viscoelasticity measurements, and fluorescence emission spectra and images obtained by time-resolved fluorescence of the corresponding Langmuir-Blodgett films show that the homologues with 5 and 6 glycoluril units, CB[5] and CB[6], do not form inclusion complexes. Higher-order homologues such as CB[7] are likely to complex with Chol, as indicated by changes in the minimum molecular areas recorded for DPPC/Chol monolayers, the fluorescence decay lifetimes, and the dilatational surface viscosities of monolayers generated in the presence of these molecules. Moreover, we demonstrate the removal of cholesterol from the biomimetic interface in the presence of CB[7] by means of fluorescence spectra recorded from the subphase beneath monolayers containing fluorescently labeled Chol.

  12. Calorimetry and Langmuir-Blodgett studies on the interaction of a lipophilic prodrug of LHRH with biomembrane models.

    PubMed

    Sarpietro, Maria G; Accolla, Maria L; Santoro, Nancy; Mansfeld, Friederike M; Pignatello, Rosario; Toth, Istvan; Castelli, Francesco

    2014-05-01

    The interaction of an amphiphilic luteinizing hormone-releasing hormone (LHRH) prodrug incorporating a lipoamino acid moiety (C12-LAA) with biological membrane models consisting of multilamellar liposomes (MLVs) and phospholipid monolayers was studied using differential scanning calorimetry (DSC) and Langmuir-Blodgett film techniques. The effect of the prodrug C12-LHRH on the lipid layers was compared with the results obtained with the pure precursors, LHRH and C12-LAA. Conjugation of LHRH with the LAA promoiety was shown to improve the peptide's interaction with biomembrane models. Based on the calorimetric findings, the LAA moiety aids the transfer of the prodrug from an aqueous solution to the biomembrane model.

  13. Preparation, property of the complex of carboxymethyl chitosan grafted copolymer with iodine and application of it in cervical antibacterial biomembrane.

    PubMed

    Chen, Yu; Yang, Yumin; Liao, Qingping; Yang, Wei; Ma, Wanfeng; Zhao, Jian; Zheng, Xionggao; Yang, Yang; Chen, Rui

    2016-10-01

    Cervical erosion is one of the most common diseases in women. The loop electrosurgical excision procedure (LEEP) has been widely used in the treatment of cervical diseases. However, there are no effective wound dressings for postoperative care to protect the wound area from further infection, leading to increased secretion and longer healing times. Iodine is a widely used inorganic antibacterial agent with many advantages; however, carriers for stable iodine-complex antibacterial agents are lacking. In the present study, a novel iodine carrier, carboxymethyl chitosan-g-(poly(sodium acrylate)-co-polyvinylpyrrolidone) (CMCTS-g-(PAANa-co-PVP)), was prepared by graft copolymerization of sodium acrylate (AANa) and N-vinylpyrrolidone (NVP) onto a carboxymethyl chitosan (CMCTS) skeleton. The resulting structure combines the prominent properties of the poly(sodium acrylate) (PAANa) anionic polyelectrolyte segment with the good iodine-complexing properties of the polyvinylpyrrolidone (PVP) segment, while retaining the bioactivity of CMCTS. The properties of the complex, CMCTS-g-(PAANa-co-PVP)-I2, were studied. In vitro experiments show that it has broad-spectrum bactericidal effects against viruses, fungi, gram-positive bacteria, and gram-negative bacteria. A cervical antibacterial biomembrane (CABM) containing the CMCTS-g-(PAANa-co-PVP)-I2 complex was prepared. Iodine release from the CABM is pH-dependent. Clinical trial results indicate that CABM is more effective than the conventional treatment in the postoperative care of the LEEP operation. PMID:27287120

  14. Engineering and validation of a novel lipid thin film for biomembrane modeling in lipophilicity determination of drugs and xenobiotics

    PubMed Central

    Idowu, Sunday Olakunle; Adeyemo, Morenikeji Ambali; Ogbonna, Udochi Ihechiluru

    2009-01-01

    Background Determination of lipophilicity as a tool for predicting pharmacokinetic molecular behavior is limited by the predictive power of available experimental models of the biomembrane. There is current interest, therefore, in models that accurately simulate biomembrane structure and function. A novel bio-device, a lipid thin film, was engineered as an alternative to the previous use of hydrocarbon thin films in biomembrane modeling. Results The retention behavior of four structurally diverse model compounds, 4-amino-3,5-dinitrobenzoic acid (ADBA), naproxen (NPX), nabumetone (NBT) and halofantrine (HF), representing four broad classes of varying molecular polarities and aqueous solubility behavior, was investigated on the lipid film, liquid paraffin, and octadecylsilane layers. Computational, thermodynamic and image analyses confirm the peculiar amphiphilic configuration of the lipid film. The effects of solute type, layer type and their interactions on retention behavior were delineated by two-way analysis of variance (ANOVA) and quantitative structure-property relationships (QSPR). Validation of the lipid film was implemented by statistical correlation of a unique chromatographic metric with log P (octanol/water) and several calculated molecular descriptors of bulk and solubility properties. Conclusion The lipid film constitutes a biomimetic artificial biological interface capable of both hydrophobic and specific electrostatic interactions. It captures the hydrophilic-lipophilic balance (HLB) in the determination of the lipophilicity of molecules, unlike the pure hydrocarbon film of the prior art. The potential and performance of the bio-device hold promise for its utility as a predictive analytical tool for early-stage drug discovery science. PMID:19735551

  15. Disruption of Saccharomyces cerevisiae by Plantaricin 149 and investigation of its mechanism of action with biomembrane model systems.

    PubMed

    Lopes, José Luiz S; Nobre, Thatyane M; Siano, Alvaro; Humpola, Verónica; Bossolan, Nelma R S; Zaniquelli, Maria E D; Tonarelli, Georgina; Beltramini, Leila M

    2009-10-01

    The action of a synthetic antimicrobial peptide analog of Plantaricin 149 (Pln149a) against Saccharomyces cerevisiae and its interaction with biomembrane model systems were investigated. Pln149a was shown to inhibit S. cerevisiae growth by more than 80% in YPD medium, causing morphological changes in the yeast wall and remaining active and resistant to the yeast proteases even after 24 h of incubation. Different membrane model systems and carbohydrates were employed to better describe the Pln149a interaction with cellular components, using circular dichroism and fluorescence spectroscopies, adsorption kinetics, and surface elasticity in Langmuir monolayers. These assays showed that Pln149a does not interact with either mono-/polysaccharides or zwitterionic LUVs, but is strongly adsorbed onto and incorporated into negatively charged surfaces, undergoing a conformational change in its secondary structure from random coil to helix upon adsorption. From the concurrent analysis of Pln149a adsorption kinetics and dilatational surface elasticity data, we determined that 2.5 μM is the critical concentration at which Pln149a disrupts a negative DPPG monolayer. Furthermore, Pln149a exhibited a carpet-like mechanism of action, in which the peptide initially binds to the membrane, covering its surface and acquiring a helical structure that remains associated with the negatively charged phospholipids. After this electrostatic interaction, another peptide region causes a strain in the membrane, promoting its disruption.

  16. Vibrational Spectroscopy of Biomembranes

    NASA Astrophysics Data System (ADS)

    Schultz, Zachary D.; Levin, Ira W.

    2011-07-01

    Vibrational spectroscopy, commonly associated with IR absorption and Raman scattering, has provided a powerful approach for investigating interactions between biomolecules that make up cellular membranes. Because the IR and Raman signals arise from the intrinsic properties of these molecules, vibrational spectroscopy probes the delicate interactions that regulate biomembranes with minimal perturbation. Numerous innovative measurements, including nonlinear optical processes and confined bilayer assemblies, have provided new insights into membrane behavior. In this review, we highlight the use of vibrational spectroscopy to study lipid-lipid interactions. We also examine recent work in which vibrational measurements have been used to investigate the incorporation of peptides and proteins into lipid bilayers, and we discuss the interactions of small molecules and drugs with membrane structures. Emerging techniques and measurements on intact cellular membranes provide a perspective on the future of vibrational spectroscopic studies of biomembranes.

  17. Travelling lipid domains in a dynamic model for protein-induced pattern formation in biomembranes

    NASA Astrophysics Data System (ADS)

    John, Karin; Bär, Markus

    2005-06-01

    Cell membranes are composed of a mixture of lipids. Many biological processes require the formation of spatial domains in the lipid distribution of the plasma membrane. We have developed a mathematical model that describes the dynamic spatial distribution of acidic lipids in response to the presence of GMC proteins and regulating enzymes. The model encompasses diffusion of lipids and GMC proteins, electrostatic attraction between acidic lipids and GMC proteins, as well as the kinetics of membrane attachment/detachment of GMC proteins. If the lipid-protein interaction is strong enough, phase separation occurs in the membrane as a result of free energy minimization, and protein/lipid domains are formed. The picture changes if a constant enzymatic activity is included in the model. We chose the myristoyl-electrostatic switch as a regulatory module. It consists of a protein kinase C that phosphorylates and removes the GMC proteins from the membrane and a phosphatase that dephosphorylates the proteins and enables them to rebind to the membrane. For sufficiently high enzymatic activity, the phase separation is replaced by travelling domains of acidic lipids and proteins. The latter active process is typical for nonequilibrium systems. It allows for a faster restructuring and polarization of the membrane since it acts on a larger length scale than the passive phase separation. The travelling domains can be pinned by spatial gradients in the activity; thus the membrane is able to detect spatial cues and can adapt its polarity dynamically to changes in the environment.
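
    A schematic way to write such a dynamic model (my notation, not the authors' exact equations): with l the acidic-lipid density, p the membrane-bound GMC protein density, and p_cyt a cytosolic reservoir,

      \partial_t l = \nabla \cdot \left( D_l \nabla l - \chi\, l\, \nabla p \right),
      \partial_t p = D_p \nabla^2 p + k_\mathrm{on}(l)\, p_\mathrm{cyt} - k_\mathrm{off}\, p,

    where \chi > 0 stands in for the electrostatic attraction between acidic lipids and GMC proteins, and the kinase/phosphatase pair of the myristoyl-electrostatic switch enters through k_off and k_on. For strong coupling without kinetics the uniform state is unstable and stationary domains form; switching the exchange kinetics on can convert these into the travelling domains described above.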

  18. Elasticity of "Fuzzy" Biomembranes

    NASA Astrophysics Data System (ADS)

    Evans, E.; Rawicz, W.

    1997-09-01

    Sensitive micropipet methods have been used to measure the elastic stretch modulus and bending rigidity of biomembranes studded with water-soluble polymers. The fully extended lengths of the chemically grafted chains ranged from 10-50× the length of the embedding membrane lipid. Concentrations of the polymer were varied from 1-10× the surface density needed for isolated chains to touch, nominally satisfying the scaling theory requirement for semidilute brushes. Over this range, the membrane stretch modulus was unchanged by the polymer layers, but the bending rigidity increased by as much as 10 k_BT. Surprisingly, the increase in rigidity deviated significantly from scaling theory predictions, revealing a large marginal brush regime between dilute mushrooms and a semidilute brush.

  19. Quenching of fluorescein-conjugated lipids by antibodies. Quantitative recognition and binding of lipid-bound haptens in biomembrane models, formation of two-dimensional protein domains and molecular dynamics simulations.

    PubMed Central

    Ahlers, M; Grainger, D W; Herron, J N; Lim, K; Ringsdorf, H; Salesse, C

    1992-01-01

    Three model biomembrane systems, monolayers, micelles, and vesicles, have been used to study the influence of chemical and physical variables of hapten presentation at membrane interfaces on antibody binding. Hapten recognition and binding were monitored for the anti-fluorescein monoclonal antibody 4-4-20 generated against the hapten, fluorescein, in these membrane models as a function of fluorescein-conjugated lipid architecture. Specific recognition and binding in this system are conveniently monitored by quenching of fluorescein emission upon penetration of fluorescein into the antibody's active site. Lipid structure was shown to play a large role in affecting antibody quenching. Interestingly, the observed degrees of quenching were nearly independent of the lipid membrane model studied, but directly correlated with the chemical structure of the lipids. In all cases, the antibody recognized and quenched most efficiently a lipid based on dioctadecylamine where fluorescein is attached to the headgroup via a long, flexible hydrophilic spacer. Dipalmitoyl phosphatidylethanolamine containing a fluorescein headgroup demonstrated only partial binding/quenching. Egg phosphatidylethanolamine with a fluorescein headgroup showed no susceptibility to antibody recognition, binding, or quenching. Formation of two-dimensional protein domains upon antibody binding to the fluorescein-lipids in monolayers is also presented. Chemical and physical requirements for these antibody-hapten complexes at membrane surfaces have been discussed in terms of molecular dynamics simulations based on recent crystallographic models for this antibody-hapten complex (Herron et al., 1989. Proteins Struct. Funct. Genet. 5:271-280). PMID:1420916

  20. A Critical Comparison of Biomembrane Force Fields: Structure and Dynamics of Model DMPC, POPC, and POPE Bilayers.

    PubMed

    Pluhackova, Kristyna; Kirsch, Sonja A; Han, Jing; Sun, Liping; Jiang, Zhenyan; Unruh, Tobias; Böckmann, Rainer A

    2016-04-28

    Atomistic molecular dynamics simulations have become an important source of information for the structure and dynamics of biomembranes at molecular detail difficult to access in experiments. A number of force fields for lipid membrane simulations have been derived in the past; the choice of the most suitable force field is, however, frequently hampered by the availability of parameters for specific lipids. Additionally, the comparison of different quantities among force fields is often aggravated by varying simulation parameters. Here, we compare four atomistic lipid force fields, namely, the united-atom GROMOS54a7 and the all-atom force fields CHARMM36, Slipids, and Lipid14, for a broad range of structural and dynamical properties of saturated and monounsaturated phosphatidylcholine bilayers (DMPC and POPC) as well as for monounsaturated phosphatidylethanolamine bilayers (POPE). Additionally, the ability of the different force fields to describe the gel-liquid crystalline phase transition is compared and their computational efficiency estimated. Moreover, membrane properties like the water flux across the lipid bilayer and lipid acyl chain protrusion probabilities are compared.
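
    Two of the standard observables used in such force-field comparisons are easy to state precisely. A minimal sketch in plain numpy (the array layouts are my assumptions, not tied to any particular analysis package):

      import numpy as np

      def area_per_lipid(box_x, box_y, n_lipids):
          # A bilayer has two leaflets, so the lateral box area
          # corresponds to n_lipids/2 lipids per leaflet.
          return 2.0 * box_x * box_y / n_lipids

      def order_parameter_scd(ch_vectors):
          # S_CD = <(3 cos^2(theta) - 1) / 2>, theta = angle between a
          # C-H bond vector and the bilayer normal (z axis here).
          v = ch_vectors / np.linalg.norm(ch_vectors, axis=-1, keepdims=True)
          cos_t = v[..., 2]
          return 0.5 * (3.0*cos_t**2 - 1.0).mean()

      print(area_per_lipid(6.4, 6.4, 128))   # 0.64 nm^2 per lipid (box in nm)
      rng = np.random.default_rng(0)
      print(order_parameter_scd(rng.normal(size=(1000, 3))))  # ~0 (isotropic)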

  1. Lipid Biomembrane in Ionic Liquids

    NASA Astrophysics Data System (ADS)

    Yoo, Brian; Jing, Benxin; Shah, Jindal; Maginn, Ed; Zhu, Y. Elaine; Department of Chemical and Biomolecular Engineering Team

    2014-03-01

    Ionic liquids (ILs) have recently been explored as new "green" chemicals in several chemical and biomedical processes. In our pursuit of understanding their toxicities towards aquatic and terrestrial organisms, we have examined the interaction of ILs with lipid bilayers as model cell membranes. Experimentally, by fluorescence microscopy, we have directly observed the disruption of lipid bilayers by added ILs. Depending on the concentration, alkyl chain length, and anion hydrophobicity of the ILs, the interaction of ILs with lipid bilayers leads to the formation of micelles, fibrils, and multilamellar vesicles of IL-lipid complexes. By MD computer simulations, we have confirmed the insertion of ILs into lipid bilayers, modifying the spatial organization of lipids in the membrane. The combined experimental and simulation results correlate well with the bioassay results of IL-induced suppression of bacterial growth, thereby suggesting a possible mechanism behind the IL toxicity. National Science Foundation, Center for Research Computing at Notre Dame.

  2. Depolarization Laplace transform analysis of exchangeable hyperpolarized ¹²⁹Xe for detecting ordering phases and cholesterol content of biomembrane models.

    PubMed

    Schnurr, Matthias; Witte, Christopher; Schröder, Leif

    2014-03-18

    We present a highly sensitive nuclear magnetic resonance technique to study membrane dynamics that combines the temporary encapsulation of spin-hyperpolarized xenon (¹²⁹Xe) atoms in cryptophane-A-monoacid (CrAma) with their indirect detection through chemical exchange saturation transfer. Radiofrequency-labeled Xe@CrAma complexes exhibit characteristic differences in chemical exchange saturation transfer-driven depolarization when interacting with binary membrane models composed of different molecular ratios of DPPC (1,2-dipalmitoyl-sn-glycero-3-phosphocholine) and POPC (1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine). The method is also applied to mixtures of cholesterol and POPC. The existence of domains that fluctuate in cluster size in DPPC/POPC models at a high (75-98%) DPPC content induces up to a fivefold increase in the spin depolarization time τ at 297 K. In POPC/cholesterol model membranes, the parameter τ depends linearly on the cholesterol content at 310 K and allows the cholesterol content to be determined with an accuracy of at least 5%.
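
    The paper's Laplace-transform analysis resolves distributions of depolarization rates; the simplest reduction of such data, fitting a single depolarization time τ to the saturation-time dependence of the Xe@CrAma signal, might look like this (synthetic data; all numbers illustrative):

      import numpy as np
      from scipy.optimize import curve_fit

      def depol(t, s0, tau):
          # Monoexponential CEST-driven depolarization of the Xe signal.
          return s0 * np.exp(-t / tau)

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 20.0, 40)                 # saturation time, s
      sig = depol(t, 1.0, 6.0) + 0.02*rng.normal(size=t.size)

      (s0, tau), _ = curve_fit(depol, t, sig, p0=(1.0, 5.0))
      print(tau)   # recovered depolarization time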

  3. Biomembranes research using thermal and cold neutrons

    SciTech Connect

    Heberle, Frederick A.; Myles, Dean A. A.; Katsaras, John

    2015-08-01

    In 1932 James Chadwick discovered the neutron using a polonium source and a beryllium target (Chadwick, 1932). In a letter to Niels Bohr dated February 24, 1932, Chadwick wrote: “whatever the radiation from Be may be, it has most remarkable properties.” Where it concerns hydrogen-rich biological materials, the “most remarkable” property is the neutron’s differential sensitivity for hydrogen and its isotope deuterium. Such differential sensitivity is unique to neutron scattering, which unlike X-ray scattering, arises from nuclear forces. Consequently, the coherent neutron scattering length can experience a dramatic change in magnitude and phase as a result of resonance scattering, imparting sensitivity to both light and heavy atoms, and in favorable cases to their isotopic variants. This article describes recent biomembranes research using a variety of neutron scattering techniques.

  4. Biomembranes research using thermal and cold neutrons.

    PubMed

    Heberle, F A; Myles, D A A; Katsaras, J

    2015-11-01

    In 1932 James Chadwick discovered the neutron using a polonium source and a beryllium target (Chadwick, 1932). In a letter to Niels Bohr dated February 24, 1932, Chadwick wrote: "whatever the radiation from Be may be, it has most remarkable properties." Where it concerns hydrogen-rich biological materials, the "most remarkable" property is the neutron's differential sensitivity for hydrogen and its isotope deuterium. Such differential sensitivity is unique to neutron scattering, which unlike X-ray scattering, arises from nuclear forces. Consequently, the coherent neutron scattering length can experience a dramatic change in magnitude and phase as a result of resonance scattering, imparting sensitivity to both light and heavy atoms, and in favorable cases to their isotopic variants. This article describes recent biomembranes research using a variety of neutron scattering techniques. PMID:26241882

  5. The decreasing of corn root biomembrane penetration for acetochlor with vermicompost amendment

    NASA Astrophysics Data System (ADS)

    Sytnyk, Svitlana; Wiche, Oliver

    2016-04-01

    One topical environmental security issue is the management and control of the use and disposal of anthropogenic (artificially synthesized) chemical agents. The development of protection systems against the toxic effects of herbicides should be based on studies of biological indication mechanisms that identify stressor effects in organisms. Lipid degradation is a non-specific reaction to exogenous chemical agents, so it is important to study the responses of lipid components depending on the stressor type. We studied the physiological and biochemical characteristics of lipid metabolism under the action of herbicides of the chloroacetamide group. Corn at different stages of ontogenesis was used as the test object in model laboratory and microfield experiments. Cattle manure treated with the earthworm Eisenia foetida was used as a compost fertilizer added to the chernozem (black soil)-corn system. Acetochlor was found to have several actions: decreasing the content of sterols, phospholipids, phosphatidylcholines and phosphatidylethanolamines; increasing the pool of free fatty acids and phosphatidic acids, associated with intensified hydrolysis; stimulating lipase activity at low stressor concentrations and inhibiting it at high stressor levels; decreasing polyenoic free fatty acids, indicating biomembrane degradation; accumulating phospholipid degradation products (phosphatidic acids); decreasing the concentrations of high-molecular compounds (phosphatidylcholine and phosphatidylinositol); and changing the ratio of unsaturated to saturated free fatty acids in the biomembrane structure. It was established that incorporation of vermicompost into black soil at a dose of 0.4 kg/m2 led to the restoration of corn root biomembranes, and decreased root biomembrane penetration by acetochlor was observed in the vermicompost trial. A second antidote effect of the compost substances is the activation of soil microorganisms.

  6. CdSe magic-sized quantum dots incorporated in biomembrane models at the air-water interface composed of components of tumorigenic and non-tumorigenic cells.

    PubMed

    Goto, Thiago E; Lopes, Carla C; Nader, Helena B; Silva, Anielle C A; Dantas, Noelio O; Siqueira, José R; Caseli, Luciano

    2016-07-01

    Cadmium selenide (CdSe) magic-sized quantum dots (MSQDs) are semiconductor nanocrystals with stable luminescence that are feasible for biomedical applications, especially for in vivo and in vitro imaging of tumor cells. In this work, we investigated the specific interaction of CdSe MSQDs with tumorigenic and non-tumorigenic cells using Langmuir monolayers and Langmuir-Blodgett (LB) films of lipids as membrane models for diagnosis of cancerous cells. Surface pressure-area isotherms and polarization modulation reflection-absorption spectroscopy (PM-IRRAS) showed an intrinsic interaction between the quantum dots, inserted in the aqueous subphase, and Langmuir monolayers constituted either of selected lipids or of tumorigenic and non-tumorigenic cell extracts. The films were transferred to solid supports to obtain microscopic images, providing information on their morphology. Similarity between films with different compositions representing cell membranes, with or without the quantum dots, was evaluated by atomic force microscopy (AFM) and confocal microscopy. This study demonstrates that the affinity of quantum dots for models representing cancer cells permits the use of these systems as devices for cancer diagnosis. PMID:27107554

  7. Surface complexation modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...
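
    A stripped-down illustration of the mass-action core of such models: the pH-dependent speciation of a 2-pK amphoteric surface site, with the electrostatic (surface-potential) correction terms deliberately omitted (the pKa values are placeholders):

      import numpy as np

      def site_fractions(pH, pKa1=5.0, pKa2=9.0):
          # Fractions of >SOH2+, >SOH and >SO- for the equilibria
          # >SOH2+ <-> >SOH + H+ (Ka1) and >SOH <-> >SO- + H+ (Ka2).
          h = 10.0**(-np.asarray(pH, dtype=float))
          k1, k2 = 10.0**(-pKa1), 10.0**(-pKa2)
          denom = h*h + k1*h + k1*k2
          return h*h/denom, k1*h/denom, k1*k2/denom

      for pH in (4.0, 7.0, 10.0):
          print(pH, site_fractions(pH))

    A full surface complexation model would multiply each mass-action term by a Boltzmann factor for the charged surface; how that electrostatic term is treated is what distinguishes the constant-capacitance, diffuse-layer, and triple-layer variants.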

  8. Modeling Complex Calorimeters

    NASA Technical Reports Server (NTRS)

    Figueroa-Feliciano, Enectali

    2004-01-01

    We have developed a software suite that models complex calorimeters in the time and frequency domain. These models can reproduce all measurements that we currently do in a lab setting, like IV curves, impedance measurements, noise measurements, and pulse generation. Since all these measurements are modeled from one set of parameters, we can fully describe a detector and characterize its behavior. This leads to a model that can be used effectively for engineering and design of detectors for particular applications.
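
    As a toy stand-in for the pulse-generation part of such a model (not the authors' software; the two time constants are illustrative), the small-signal response of a calorimeter to a delta-function energy deposit is often approximated by a difference of exponentials:

      import numpy as np

      def pulse(t, amp=1.0, tau_rise=0.2e-3, tau_fall=2.0e-3):
          # Idealized pulse: fast rise (electrothermal response) minus
          # slow decay (heat flow to the bath); zero before the event.
          t = np.atleast_1d(np.asarray(t, dtype=float))
          p = amp * (np.exp(-t/tau_fall) - np.exp(-t/tau_rise))
          p[t < 0.0] = 0.0
          return p

      t = np.linspace(-1e-3, 10e-3, 1101)
      template = pulse(t)   # e.g., a matched-filter template

    Fitting measured pulses, IV curves, impedance, and noise to one consistent parameter set is what lets a single model fully characterize the detector, as the abstract describes.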

  9. Debating complexity in modeling

    USGS Publications Warehouse

    Hunt, Randall J.; Zheng, Chunmiao

    1999-01-01

    As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.

  10. Biomembranes in atomistic and coarse-grained simulations

    NASA Astrophysics Data System (ADS)

    Pluhackova, Kristyna; Böckmann, Rainer A.

    2015-08-01

    The architecture of biological membranes is tightly coupled to the localization, organization, and function of membrane proteins. The organelle-specific distribution of lipids allows for the formation of functional microdomains (also called rafts) that facilitate the segregation and aggregation of membrane proteins and thus shape their function. Molecular dynamics simulations provide direct access to the formation, structure, and dynamics of membrane microdomains at the molecular scale, and to the specific interactions among lipids and proteins on timescales from picoseconds to microseconds. This review focuses on the latest developments of biomembrane force fields for both atomistic and coarse-grained molecular dynamics (MD) simulations, and the different levels of coarsening of biomolecular structures. It also briefly introduces scale-bridging methods applicable to biomembrane studies, and highlights selected recent applications.

  11. Response of biomembrane domains to external stimuli

    NASA Astrophysics Data System (ADS)

    Urbancic, Iztok

    To enrich our knowledge about membrane domains, new measurement techniques with extended spatial and temporal windows are being vigorously developed by combining various approaches. Following such efforts of the scientific community, we set up fluorescence microspectroscopy (FMS), bridging two well-established methods: fluorescence microscopy, which enables imaging of samples with spatial resolution down to 200 nm, and fluorescence spectroscopy, which provides molecular information about the environment on nanometer and nanosecond scales. The combined method therefore allows us to localize this type of information with a precision suitable for studying various cellular structures. Faced with weak available fluorescence signals, we have put considerable effort into optimizing the measurement process and the data analysis. By introducing a novel acquisition scheme and by fitting the data with a mathematical model, we preserved the spectral resolution characteristic of spectroscopic measurements of bulk samples at the microscopic level as well. At the same time, we overcame the effects of photobleaching, which had previously considerably distorted the measured spectral lineshapes of photosensitive dyes and consequently hindered the reliability of FMS. Our new approach has therefore greatly extended the range of applicable environmentally sensitive probes, which can now be designed to better accommodate the needs of each particular experiment. Moreover, photobleaching of the fluorescence signal can now even be exploited to obtain valuable new information about the molecular environment of the probes, as the bleaching rates of certain probes also depend on the physical and chemical properties of their local surroundings. In this manner we increased the number of available spatially localized spectral parameters, which becomes invaluable when investigating complex biological systems that can only be adequately characterized by several independent variables. Applying the developed

  12. Dynamics of bio-membranes investigated by neutron spin echo: Effects of phospholipid conformations and presence of lidocaine

    NASA Astrophysics Data System (ADS)

    Yi, Zheng

    Bio-membranes of natural living cells are made of bilayers of phospholipid molecules embedded with other constituents, such as cholesterol and membrane proteins, which help to accomplish a broad range of functions. Vesicles made of lipid bilayers can serve as good model systems for bio-membranes; these systems have therefore been extensively characterized, and much is known about their shape, size, porosity and functionality. In this dissertation we report studies of the effects of phospholipid conformation, such as the hydrocarbon chain length and the presence of double bonds in the hydrophobic tails, on the dynamics of phospholipid bilayers studied by the neutron spin echo (NSE) technique. We have also investigated how lidocaine, the most widely used local anesthetic (LA), influences the structural and dynamical properties of model bio-membranes, using small-angle neutron scattering (SANS), NSE and differential scanning calorimetry (DSC). To investigate the influence of phospholipid conformation on bio-membranes, the bending elasticities κ_c of seven saturated and monounsaturated phospholipid bilayers were measured by NSE spectroscopy. κ_c of phosphatidylcholines (PCs) in the liquid crystalline (L_α) phase ranges from 0.38×10⁻¹⁹ J for 1,2-dimyristoyl-sn-glycero-3-phosphocholine (14:0 PC) to 0.64×10⁻¹⁹ J for 1,2-dieicosenoyl-sn-glycero-3-phosphocholine (20:1 PC). It was confirmed that when the area modulus K_A varies little with chain unsaturation or length, the elastic ratio (κ_c/K_A)^(1/2) of the bilayers varies linearly with the lipid hydrophobic thickness d. For the study of the influence of LA on bio-membranes, SANS measurements were performed on 14:0 PC bilayers with different concentrations of lidocaine to determine the bilayer thickness d_L as a function of the lidocaine concentration. NSE was used to study the influence of lidocaine on the bending elasticity of 14:0 PC bilayers in the L_α and ripple gel (P_β′) phases. Our results confirmed that the molecules of
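
    In NSE analyses of this kind, the intermediate scattering function of bending-dominated undulations is commonly fitted with the Zilman-Granek stretched exponential, from which κ_c follows. A minimal sketch, using one common prefactor convention in the κ_c >> k_BT limit, with the solvent viscosity as a stand-in for the effective viscosity; all numbers are illustrative:

      import numpy as np

      kB = 1.380649e-23   # J/K

      def zg_decay(t, gamma_q):
          # S(q,t)/S(q,0) = exp(-(Gamma_q * t)^(2/3))  (Zilman-Granek form)
          return np.exp(-(gamma_q * t)**(2.0/3.0))

      def kappa_from_gamma(gamma_q, q, T, eta):
          # Invert Gamma_q ~ 0.025 * sqrt(kB*T/kappa) * kB*T * q**3 / eta.
          return kB*T * (0.025 * kB*T * q**3 / (eta * gamma_q))**2

      # Gamma_q = 2.7e7 1/s at q = 1.0e9 1/m, T = 300 K, eta = 1e-3 Pa s:
      print(kappa_from_gamma(2.7e7, 1.0e9, 300.0, 1.0e-3))  # ~0.6e-19 J

    The example output lands in the same few-times-10⁻¹⁹ J range as the κ_c values quoted in the abstract, which is the consistency check one would expect.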

  13. ⁵⁷Fe Mössbauer probe of spin crossover thin films on a bio-membrane

    NASA Astrophysics Data System (ADS)

    Naik, Anil D.; Garcia, Yann

    2012-03-01

    The well-known complex [Fe(ptz)6](BF4)2 (ptz = 1-propyl-tetrazole) (1), which was produced in the form of submicron crystals and thin films on an Allium cepa membrane, was probed by ⁵⁷Fe Mössbauer spectroscopy in order to follow its intrinsic spin crossover. In addition to a weak signal that corresponds to the neat SCO compound, significant amounts of other iron compounds were found that could have morphed from 1 due to specific host-guest interactions on the lipid bilayer of the bio-membrane. Further complementary information about the biogenic role of the membrane was obtained from variable-temperature Mössbauer spectroscopy of a ~5% enriched [⁵⁷Fe(H2O)6](BF4)2 salt on this membrane.

  14. Modelling of Complex Plasmas

    NASA Astrophysics Data System (ADS)

    Akdim, Mohamed Reda

    2003-09-01

    Nowadays plasmas are used for various applications such as the fabrication of silicon solar cells, integrated circuits, coatings and dental cleaning. In the case of a processing plasma, e.g. for the fabrication of amorphous silicon solar cells, a mixture of silane and hydrogen gas is injected in a reactor. These gases are decomposed by making a plasma. A plasma with a low degree of ionization (typically 10⁻⁵) is usually made in a reactor containing two electrodes driven by a radio-frequency (RF) power source in the megahertz range. Under the right circumstances the radicals, neutrals and ions can react further to produce nanometer sized dust particles. The particles can stick to the surface and thereby contribute to a higher deposition rate. Another possibility is that the nanometer sized particles coagulate and form larger micron sized particles. These particles obtain a high negative charge, due to their large radius and are usually trapped in a radiofrequency plasma. The electric field present in the discharge sheaths causes the entrapment. Such plasmas are called dusty or complex plasmas. In this thesis numerical models are presented which describe dusty plasmas in reactive and nonreactive plasmas. We started first with the development of a simple one-dimensional silane fluid model where a dusty radio-frequency silane/hydrogen discharge is simulated. In the model, discharge quantities like the fluxes, densities and electric field are calculated self-consistently. A radius and an initial density profile for the spherical dust particles are given and the charge and the density of the dust are calculated with an iterative method. During the transport of the dust, its charge is kept constant in time. The dust influences the electric field distribution through its charge and the density of the plasma through recombination of positive ions and electrons at its surface. In the model this process gives an extra production of silane radicals, since the growth of dust is
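
    The dust charge in such models is commonly estimated from orbital-motion-limited (OML) theory: the grain floats at the potential where the electron and ion collection currents balance. A self-contained sketch (parameters illustrative; the common geometric and density factors cancel from the balance):

      import numpy as np
      from scipy.optimize import brentq
      from scipy.constants import e, k, m_e, epsilon_0

      def oml_floating(Te, Ti, mi, a):
          # Thermal speed of a Maxwellian species.
          vbar = lambda T, m: np.sqrt(8.0*k*T/(np.pi*m))
          def balance(phi):                             # phi < 0 expected
              Ie = vbar(Te, m_e)*np.exp(e*phi/(k*Te))   # repelled electrons
              Ii = vbar(Ti, mi)*(1.0 - e*phi/(k*Ti))    # attracted ions
              return Ie - Ii
          phi = brentq(balance, -50.0, -1e-9)
          return phi, 4.0*np.pi*epsilon_0*a*phi         # potential, charge

      # Argon-like plasma: Te = 3 eV, Ti = 0.03 eV, 100 nm grain
      phi, Q = oml_floating(Te=3*11604.5, Ti=0.03*11604.5,
                            mi=6.63e-26, a=100e-9)
      print(phi, Q/(-e))   # grain potential (V), collected electron count

    This gives the high negative charge the abstract mentions; a self-consistent discharge model then iterates this charging against the evolving plasma densities.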

  15. Biomembrane interactions reveal the mechanism of action of surface-immobilized host defense IDR-1010 peptide.

    PubMed

    Gao, Guangzheng; Cheng, John T J; Kindrachuk, Jason; Hancock, Robert E W; Straus, Suzana K; Kizhakkedathu, Jayachandran N

    2012-02-24

    Dissecting the mechanism of action of surface-tethered antimicrobial and immunomodulatory peptides is critical to the design of optimized anti-infection coatings for biomedical devices. To address this, we compared the interactions of the host defense peptide IDR-1010cys (1) in free form, (2) as a soluble polymer conjugate, and (3) with one end tethered to a solid support, with model bacterial and mammalian lipid membranes. Our results show that IDR-1010cys in all three distinct forms interacted with bacterial and mammalian lipid vesicles, but the extent of the interactions, as monitored by the induction of secondary structure, varied. The enhanced interaction of the surface-tethered peptides correlates well with their very good antimicrobial activities. Our results demonstrate that there may be a difference in the mechanism of action of surface-tethered versus free IDR-1010cys.

  16. Action of the multifunctional peptide BP100 on native biomembranes examined by solid-state NMR.

    PubMed

    Misiewicz, Julia; Afonin, Sergii; Grage, Stephan L; van den Berg, Jonas; Strandberg, Erik; Wadhwani, Parvesh; Ulrich, Anne S

    2015-04-01

    Membrane composition is a key factor that regulates the destructive activity of antimicrobial peptides and the non-leaky permeation of cell penetrating peptides in vivo. Hence, the choice of model membrane is a crucial aspect in NMR studies and should reflect the biological situation as closely as possible. Here, we explore the structure and dynamics of the short multifunctional peptide BP100 using a multinuclear solid-state NMR approach. The membrane alignment and mobility of this 11 amino acid peptide was studied in various synthetic lipid bilayers with different net charge, fluidity, and thickness, as well as in native biomembranes harvested from prokaryotic and eukaryotic cells. ¹⁹F-NMR provided the high sensitivity and lack of natural abundance background that are necessary to observe a labelled peptide even in protoplast membranes from Micrococcus luteus and in erythrocyte ghosts. Six selectively ¹⁹F-labeled BP100 analogues gave remarkably similar spectra in all of the macroscopically oriented membrane systems, which were studied under quasi-native conditions of ambient temperature and full hydration. This similarity suggests that BP100 has the same surface-bound helical structure and high mobility in the different biomembranes and model membranes alike, independent of charge, thickness or cholesterol content of the system. ³¹P-NMR spectra of the phospholipid components did not indicate any bilayer perturbation, so the formation of toroidal wormholes or micellarization can be excluded as a mechanism of its antimicrobial or cell penetrating action. However, ²H-NMR analysis of the acyl chain order parameter profiles showed that BP100 leads to considerable membrane thinning and thereby local destabilization.

  17. Modeling complexity in biology

    NASA Astrophysics Data System (ADS)

    Louzoun, Yoram; Solomon, Sorin; Atlan, Henri; Cohen, Irun. R.

    2001-08-01

    Biological systems, unlike physical or chemical systems, are characterized by the very inhomogeneous distribution of their components. The immune system, in particular, is notable for self-organizing its structure. Classically, the dynamics of natural systems have been described using differential equations. But differential equation models fail to account for the emergence of large-scale inhomogeneities and for the influence of inhomogeneity on the overall dynamics of biological systems. Here, we show that a microscopic simulation methodology enables us to model the emergence of large-scale objects and to extend the scope of mathematical modeling in biology. We take a simple example from immunology and illustrate that the methods of classical differential equations and microscopic simulation generate contradictory results. Microscopic simulations generate a more faithful approximation of the reality of the immune system.
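
    The kind of contradiction the authors point to is easy to reproduce in miniature: a deterministic rate equation and an individual-based realization of the same birth/death rates can disagree qualitatively at small population sizes. A sketch using logistic dynamics as a generic stand-in (not the immune-system model of the paper):

      import numpy as np

      rng = np.random.default_rng(1)

      def logistic_ode(n0, r, K, dt, steps):
          # Mean-field ODE: dn/dt = r*n*(1 - n/K), integrated by Euler.
          n, out = float(n0), [float(n0)]
          for _ in range(steps):
              n += dt * r * n * (1.0 - n / K)
              out.append(n)
          return np.array(out)

      def birth_death(n0, r, K, dt, steps):
          # Same rates, discrete agents: birth rate r*n, death rate r*n^2/K.
          # Demographic noise can drive small populations extinct even
          # though the ODE predicts smooth growth to K.
          n, out = int(n0), [int(n0)]
          for _ in range(steps):
              n += rng.poisson(r*n*dt) - rng.poisson(r*n*n/K*dt)
              n = max(n, 0)
              out.append(n)
          return np.array(out)

      print(logistic_ode(5, 1.0, 50, 0.01, 2000)[-1])              # -> ~50
      print([birth_death(5, 1.0, 50, 0.01, 2000)[-1] for _ in range(10)])

    Spatially resolved microscopic simulations add a further effect the ODE misses entirely: local clustering, which is the large-scale inhomogeneity emphasized in the abstract.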

  1. Tools for characterizing biomembranes: final LDRD report.

    SciTech Connect

    Alam, Todd Michael; Stevens, Mark; Holland, Gregory P.; McIntyre, Sarah K.

    2007-10-01

    A suite of experimental nuclear magnetic resonance (NMR) spectroscopy tools was developed to investigate lipid structure and dynamics in model membrane systems. By utilizing both multinuclear and multidimensional NMR experiments, a range of different intra- and inter-molecular contacts were probed within the membranes. Examples on pure single-component lipid membranes and on the canonical raft-forming mixture of DOPC/SM/Chol are presented. A unique gel-phase pretransition in SM was also identified and characterized using these NMR techniques. In addition, molecular dynamics simulations probing the hydrogen-bonding network unique to sphingomyelin-containing membranes were evaluated as a function of temperature and are discussed.

  2. The thermodynamics of simple biomembrane mimetic systems

    PubMed Central

    Raudino, Antonio; Sarpietro, Maria Grazia; Pannuzzo, Martina

    2011-01-01

    Insight into the forces governing a system is essential for understanding its behavior and function. Thermodynamic investigations provide a wealth of information that is not, or is hardly, available from other methods. This article reviews thermodynamic approaches and assays to measure collective properties such as heat absorption/emission and volume variations. These methods can be successfully applied to the study of lipid vesicles (liposomes) and biological membranes. With respect to instrumentation, differential scanning calorimetry, pressure perturbation calorimetry, isothermal titration calorimetry, dilatometry, and acoustic techniques aimed at measuring isothermal and adiabatic two- and three-dimensional compressibilities are considered. Applications of these techniques to lipid systems include the measurement of different thermodynamic parameters and a detailed characterization of thermotropic, barotropic, and lyotropic phase behavior. The membrane binding and/or partitioning of solutes (proteins, peptides, drugs, surfactants, ions, etc.) can also be quantified and modeled. Many thermodynamic assays are available for studying the effect of proteins and other additives on membranes, characterizing non-ideal mixing, domain formation, bilayer stability, curvature strain, permeability, solubilization, and fusion. Studies of membrane proteins in lipid environments elucidate lipid–protein interactions in membranes. Finally, a plethora of relaxation phenomena toward equilibrium thermodynamic structures can also be investigated. The systems are described in terms of enthalpic and entropic forces, equilibrium constants, heat capacities, partial volume changes, volume and area compressibility, and so on, also shedding light on the stability of the structures and the molecular origin and mechanism of the structural changes. PMID:21430953
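
    As a worked instance of the calorimetric quantities reviewed here: in DSC the calorimetric transition enthalpy is the area under the excess heat capacity curve, ΔH_cal = ∫ Cp,excess dT, and comparing it with the two-state van't Hoff enthalpy, ΔH_vH = 4 R Tm^2 Cp,max / ΔH_cal, gauges the cooperativity of the transition. A minimal sketch on a synthetic Gaussian trace (all numbers invented):

    ```python
    import numpy as np

    # Synthetic DSC trace: Gaussian excess heat capacity peak (illustrative)
    T = np.linspace(300.0, 330.0, 601)                 # temperature, K
    Tm, width, dH_true = 314.0, 1.5, 25.0              # K, K, kJ/mol
    cp = dH_true * np.exp(-((T - Tm) / width)**2 / 2) / (width * np.sqrt(2 * np.pi))

    dH_cal = np.trapz(cp, T)                           # calorimetric enthalpy
    R = 8.314e-3                                       # kJ / (mol K)
    dH_vH = 4 * R * Tm**2 * cp.max() / dH_cal          # two-state van't Hoff
    print(f"dH_cal = {dH_cal:.1f} kJ/mol, dH_vH = {dH_vH:.0f} kJ/mol")
    # dH_vH / dH_cal > 1 suggests cooperative melting of many lipids at once
    ```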

  3. Specific biomembrane adhesion -Indirect lateral interactions between bound receptor molecules

    NASA Astrophysics Data System (ADS)

    Maier, C. W.; Behrisch, A.; Kloboucek, A.; Simson, D. A.; Merkel, R.

    We studied biomembrane adhesion using the micropipet aspiration technique. Adhesion was caused by contact site A, a laterally mobile and highly specific cell adhesion molecule from Dictyostelium discoideum, reconstituted in lipid vesicles of DOPC (L-α-dioleoylphosphatidylcholine) with an addition of 5 mol % DOPE-PEG2000 (1,2-diacyl-sn-glycero-3-phosphatidylethanolamine-N-[poly(ethyleneglycol) 2000]). The "fuzzy" membrane mimics the cellular plasma membrane including the glycocalyx. We found adhesion and subsequent receptor migration into the contact zone. Using membrane tension jumps to probe the equation of state of the two-dimensional "gas" of bound receptor pairs within the contact zone, we found strong, attractive lateral interactions.

  4. Induction of lipid peroxidation in biomembranes by dietary oil components.

    PubMed

    Udilova, Natalia; Jurek, Daniela; Marian, Brigitte; Gille, Lars; Schulte-Hermann, Rolf; Nohl, Hans

    2003-11-01

    Prooxidant formation and the resulting lipid peroxidation are thought to be involved in the pathogenesis of various diseases including cancer. Cancer risk is possibly influenced by the composition of the diet, with high intake of fat and red meat being harmful and high consumption of fruits and vegetables being protective. Since dietary oils may contain potential prooxidants, the aim of the present study was to test (i) whether oxidative stress in biomembranes can be induced by dietary oils and, if so, (ii) what impact it has on the viability and proliferation of cultured colon (carcinoma) cells. The lipid hydroperoxide content of dietary oils increased after heating. Linoleic acid hydroperoxide (LOOH) and/or oils with different hydroperoxide contents induced lipid peroxidation in liposomes, erythrocyte ghosts and colon cells. Upon incubation with liposomes, both LOOH and heated oil induced lipid peroxidation only in the presence of iron and ascorbate. LOOH alone was sufficient to start lipid peroxidation of erythrocyte ghosts. LOOH incorporates into the lipid bilayer, decreasing membrane fluidity and initiating lipid peroxidation in the lipid phase. When cultured cells (IEC18 intestinal epithelial cells, SW480 and HT29/HI1 colon carcinoma cells) were exposed to LOOH, they responded by cell death, both via apoptosis and necrosis. Cells with a higher degree of membrane unsaturation were more susceptible, and antioxidants (vitamin E and selenite) were protective, indicating the involvement of oxidative stress. Thus, peroxidation of biomembranes can be initiated by lipid hydroperoxides from heated oils. Dietary consumption of heated oils may lead to oxidative damage and to cell death in the colon. This may contribute to the enhanced risk of colon cancer due to regenerative cell proliferation.

  5. Field theoretical approach for bio-membrane coupled with flow field

    NASA Astrophysics Data System (ADS)

    Oya, Y.; Kawakatsu, T.

    2013-02-01

    Shape deformation of bio-membranes in a flow field is a well-known phenomenon in biological systems, for example red blood cells in blood vessels. To simulate such deformation with a field-theoretical approach, we derived the dynamical equation of a phase field for the membrane shape and coupled it with the Navier-Stokes equation for the flow field. In 2-dimensional simulations, we found that a bio-membrane in a Poiseuille flow takes a parachute shape similar to that of red blood cells.
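
    The phase-field idea itself can be sketched compactly: a field psi takes distinct values inside and outside the vesicle, and the diffuse interface between them represents the membrane. The toy below relaxes a plain Allen-Cahn energy on a periodic grid; it deliberately omits the Navier-Stokes coupling and the bending-energy terms of the paper's full model:

    ```python
    import numpy as np

    # Allen-Cahn relaxation: dpsi/dt = eps^2 * Lap(psi) + psi - psi^3
    N, eps, dt, steps = 128, 0.02, 1e-4, 2000
    x = np.linspace(-1.0, 1.0, N)
    h = x[1] - x[0]
    X, Y = np.meshgrid(x, x)
    psi = np.where(X**2 + (Y / 0.6)**2 < 0.25, 1.0, -1.0)  # elliptical "vesicle"

    def laplacian(f):
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / h**2

    for _ in range(steps):
        psi += dt * (eps**2 * laplacian(psi) + psi - psi**3)

    # Rough interface measure: total gradient magnitude of the relaxed field
    gx, gy = np.gradient(psi, h)
    print("interface measure:", float(np.sqrt(gx**2 + gy**2).sum() * h**2))
    ```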

  6. BOOK REVIEW: Modeling Complex Systems

    NASA Astrophysics Data System (ADS)

    Schreckenberg, M.

    2004-10-01

    This book by Nino Boccara presents a compilation of model systems commonly termed as `complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on dynamics of complex systems. This is the first book on this `wide' topic and I have long awaited such a book (in fact I planned to write it myself but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too restricted to the author's point of view and one would have expected more about the famous Domany-Kinzel model (and more accurate citation!). In my opinion this is one of the best textbooks published during the last decade and even experts can learn a lot from it. Hopefully there will be an updated edition after, say, five years since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success!

  7. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes on a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and to unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns in the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through normal to neurotic behavior, and creativity.

  8. Modeling wildfire incident complexity dynamics.

    PubMed

    Thompson, Matthew P

    2013-01-01

    Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs, and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding factors influencing suppression decisions and in turn their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management.
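
    For an absorbing Markov chain like the one described, the expected time to containment follows from the fundamental matrix N = (I - Q)^(-1), where Q is the transient-to-transient block of the daily transition matrix. A minimal sketch; the transition probabilities below are invented for illustration, not taken from the paper:

    ```python
    import numpy as np

    # States: three incident-complexity levels (transient) + contained (absorbing)
    P = np.array([
        [0.70, 0.15, 0.05, 0.10],   # low complexity
        [0.10, 0.70, 0.15, 0.05],   # medium complexity
        [0.05, 0.15, 0.78, 0.02],   # high complexity
        [0.00, 0.00, 0.00, 1.00],   # contained (absorbing state)
    ])
    Q = P[:3, :3]                            # transient-to-transient block
    N = np.linalg.inv(np.eye(3) - Q)         # fundamental matrix
    days = N @ np.ones(3)                    # expected days until containment
    print({s: round(t, 1) for s, t in zip(["low", "med", "high"], days)})
    ```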

  9. Morphological and Physical Analysis of Natural Phospholipids-Based Biomembranes

    PubMed Central

    Jacquot, Adrien; Francius, Grégory; Razafitianamaharavo, Angelina; Dehghani, Fariba; Tamayol, Ali; Linder, Michel; Arab-Tehrany, Elmira

    2014-01-01

    Background Liposomes are currently an important part of biological, pharmaceutical, medical and nutritional research, as they are considered to be among the most effective carriers for the introduction of various types of bioactive agents into target cells. Scope of Review In this work, we study the lipid organization and mechanical properties of biomembranes made of marine and plant phospholipids. Membranes based on phospholipids extracted from rapeseed and salmon are studied in the form of liposomes and as supported lipid bilayers. Dioleylphosphatidylcholine (DOPC) and dipalmitoylphosphatidylcholine (DPPC) are used as references to determine the lipid organization of marine and plant phospholipid-based membranes. Atomic force microscopy (AFM) imaging and force spectroscopy measurements are performed to investigate the membranes' topography at the micrometer scale and to determine their mechanical properties. Major Conclusions The mechanical properties of the membranes correlate with the fatty acid composition, the morphology, the electrophoretic mobility and the membrane fluidity. Soft and homogeneous mechanical properties are evidenced for the salmon phospholipid membranes, which contain various polyunsaturated fatty acids. By contrast, the rapeseed membranes exhibit phase segregation and higher mechanical moduli than the marine phospholipid-based membranes. General Significance This paper provides new information, obtained by AFM, on the nanomechanical and morphological properties of membranes in the form of liposomes. The originality of this work is to characterize the physico-chemical properties of nanoliposomes from natural sources containing various fatty acids and polar head groups. PMID:25238543

  10. Electrochemical screening of biomembrane-active compounds in water.

    PubMed

    Mohamadi, Shahrzad; Tate, Daniel J; Vakurov, Alexander; Nelson, Andrew

    2014-02-27

    Interactions of biomembrane-active compounds with phospholipid monolayers on microfabricated Pt/Hg electrodes in an on-line high-throughput flow system are demonstrated by recording capacitance current peak changes as rapid cyclic voltammograms (RCV). Detection limits of the compounds' effects on the layer have been estimated from the data. Compounds studied include steroids, polycyclic aromatic hydrocarbons, tricyclic antidepressants and tricyclic phenothiazines. The results show that the extent and type of interaction depend on (a) the presence and number of aromatic rings and substituents, (b) the presence and composition of side chains and (c) molecular shape. Interaction is only indirectly related to compound hydrophobicity. For a selection of tricyclic antidepressants and tricyclic phenothiazines, the detection limit in water is related to their normal therapeutic threshold. The sensing assay has been tested in the presence of humic acid as a potential interferent and in a tap water matrix. The system can be applied to the screening of putative hazardous substances and pharmaceuticals, allowing for their early detection in the water supply. The measurements are made in real time, so potentially toxic compounds are detected rapidly, within <10 min per assay. This technology will contribute greatly to environmental safety and health. PMID:24528664

  12. Hepatocellular biomembrane peroxidation in copper-induced injury

    SciTech Connect

    Homer, B.L.

    1986-01-01

    The pathogenesis of Cu-induced hepatocellular biomembrane peroxidation was studied in male Fischer rats by analyzing hepatic morphologic alterations, measuring the activity of hepatic free radical scavenger enzymes, and determining the distribution of hepatic cytosolic Cu bound to high and low molecular weight proteins. Seventy-five weanling rats were divided into 3 groups of 25 each and injected once daily with either 6.25 mg/kg or 12.5 mg/kg cupric chloride, or 0.2 ml/100 g saline. Five rats from each group were killed after 3, 14, 28, 42, and 70 consecutive days of injections. The level of malondialdehyde was elevated after 3 days of Cu injections and continued to increase until it peaked in the high-dose group after 28 days and in the low-dose group after 42 days. The density of catalase-containing peroxisomes was reduced in Cu-treated rats, correlating with a reduced activity of hepatic catalase. Catalase activity in Cu-treated rats was reduced after 3 days and always remained ≤ the activity in control rats. The activity of glutathione peroxidase in high-dose rats was always ≤ that in control rats, while the activity in control rats was always ≤ that in low-dose rats. Meanwhile, the activity of superoxide dismutase increased in Cu-treated rats after 28 days. The concentration of cytosolic low molecular weight protein-bound Cu was elevated after 3 days in both Cu-treated groups and continued to increase, leveling off or peaking after 42 days. Regression analysis and in vitro studies, involving the peroxidation of erythrocyte ghost membranes, demonstrated that Cu bound to low molecular weight proteins was less likely to induce lipoperoxidation than copper bound to high molecular weight proteins.

  13. Explosion modelling for complex geometries

    NASA Astrophysics Data System (ADS)

    Nehzat, Naser

    A literature review suggested that the combined effects of fuel reactivity, obstacle density, ignition strength, and confinement result in flame acceleration and subsequent pressure build-up during a vapour cloud explosion (VCE). Models for the prediction of propagating flames in hazardous areas, such as coal mines, oil platforms, storage and process chemical areas, etc., fall into two classes. One class involves the use of Computational Fluid Dynamics (CFD), an approach that has been utilised by several researchers. The other relies upon a lumped-parameter approach as developed by Baker (1983). The former approach is restricted by the appropriateness of sub-models and by the numerical stability requirements inherent in the computational solution. The latter raises significant questions regarding the validity of the simplifications involved in representing the complexities of a propagating explosion. This study was conducted to investigate and improve the CFD code EXPLODE, which has been developed by Green et al. (1993) for use in practical gas explosion hazard assessments. The code employs a numerical method for solving partial differential equations using finite volume techniques. Verification exercises, involving comparison with analytical solutions for the classical shock tube and with experimental (small-, medium- and large-scale) results, demonstrate the accuracy of the code and the new combustion models, but also identify differences between predictions and the experimental results. The project has resulted in a developed version of the code (EXPLODE2) with new combustion models for simulating gas explosions. Additional features of this program include the physical models necessary to simulate the combustion process using alternative combustion models, improvements to the numerical accuracy and robustness of the code, and special input for the simulation of different gas explosions. The present code has the capability of

  14. Tunable adsorption of soft colloids on model biomembranes.

    PubMed

    Mihut, Adriana M; Dabkowska, Aleksandra P; Crassous, Jérôme J; Schurtenberger, Peter; Nylander, Tommy

    2013-12-23

    A simple procedure is developed to probe in situ the association between lipid bilayers and colloidal particles. Here, a one-step method is applied to generate giant unilamellar 1,2-dioleoyl-sn-glycero-3-phosphocholine (DOPC) vesicles (GUVs) by application of an alternating electric field directly in the presence of thermoresponsive poly(N-isopropylacrylamide) (PNIPAM) microgels. We demonstrate that the soft PNIPAM microgel particles act as switchable stabilizers for lipid membranes. The change of the particle conformation from the swollen to the collapsed state enables the reversible control of the microgel adsorption as a function of temperature. At 20 °C, the swollen and hydrophilic soft microgel particles adsorb evenly and densely pack in 2D hexagonal arrays at the DOPC GUV surfaces. In contrast, at 40 °C, that is, above the volume phase transition temperature (T_VPT = 32 °C) of the PNIPAM microgels, the collapsed and more hydrophobic particles partially desorb and self-organize into domains at the GUV/GUV interfaces. This study shows that thermoresponsive PNIPAM microgels can be used to increase and control the stability of lipid vesicles where the softness and deformability of these types of particles play a major role. The observed self-assembly, where the organization and position of the particles on the GUV surface can be controlled "on demand", opens new routes for the design of nanostructured materials.

  15. Alleviation of capsular formations on silicone implants in rats using biomembrane-mimicking coatings.

    PubMed

    Park, Ji Ung; Ham, Jiyeon; Kim, Sukwha; Seo, Ji-Hun; Kim, Sang-Hyon; Lee, Seonju; Min, Hye Jeong; Choi, Sunghyun; Choi, Ra Mi; Kim, Heejin; Oh, Sohee; Hur, Ji An; Choi, Tae Hyun; Lee, Yan

    2014-10-01

    Despite their popular use in breast augmentation and reconstruction surgeries, the limited biocompatibility of silicone implants can induce severe side effects, including capsular contracture - an excessive foreign body reaction that forms a tight and hard fibrous capsule around the implant. This study examines the effects of using biomembrane-mimicking surface coatings to prevent capsular formations on silicone implants. The covalently attached biomembrane-mimicking polymer, poly(2-methacryloyloxyethyl phosphorylcholine) (PMPC), prevented nonspecific protein adsorption and fibroblast adhesion on the silicone surface. More importantly, in vivo capsule formations around PMPC-grafted silicone implants in rats were significantly thinner and exhibited lower collagen densities and more regular collagen alignments than bare silicone implants. The observed decrease in α-smooth muscle actin also supported the alleviation of capsular formations by the biomembrane-mimicking coating. Decreases in inflammation-related cells, myeloperoxidase and transforming growth factor-β resulted in reduced inflammation in the capsular tissue. The biomembrane-mimicking coatings used on these silicone implants demonstrate great potential for preventing capsular contracture and developing biocompatible materials for various biomedical applications.

  16. Teacher Modeling Using Complex Informational Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think-alouds.

  17. "Computational Modeling of Actinide Complexes"

    SciTech Connect

    Balasubramanian, K

    2007-03-07

    We will present our recent studies on the computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to the environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geochemical and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, by Dr. David Clark at Los Alamos, and by Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the free energies in solution and the mechanism of deprotonation have been topics of considerable uncertainty. We have computed the deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO2(H2O)5^2+, NpO2(H2O)6^+, and PuO2(H2O)5^2+ complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution agrees for the first time with the experimental value (7.1 kcal/mol).

  18. Capturing Complexity through Maturity Modelling

    ERIC Educational Resources Information Center

    Underwood, Jean; Dillon, Gayle

    2004-01-01

    The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…

  19. Control performance and biomembrane disturbance of carbon nanotube artificial water channels by nitrogen-doping.

    PubMed

    Yang, Yuling; Li, Xiaoyi; Jiang, Jinliang; Du, Huailiang; Zhao, Lina; Zhao, Yuliang

    2010-10-26

    Establishing ways to control the performance of artificial water channels is a big challenge. Using molecular dynamics studies, we found that water flow inside the water channels of carbon nanotubes (CNTs) can be controlled by reducing or intensifying the interaction energy between water molecules and the wall of the CNT channel. One route toward this goal was demonstrated by doping nitrogen into the wall of the CNTs. Different nitrogen-doping ratios result in different, controllable water transport, governed mainly by a gradient of van der Waals forces created by the heteroatom doping in the CNT wall. Further results revealed that the nitrogen-doped CNT channels influence the integrity of biomembranes less than the pristine ones, while the nitrogen-doped double-walled carbon nanotube disturbs cellular membrane integrity less than the nitrogen-doped single-walled carbon nanotube when interacting with biomembranes.

  20. Complexity and Uncertainty in Soil Nitrogen Modeling

    NASA Astrophysics Data System (ADS)

    Ajami, N. K.; Gu, C.

    2009-12-01

    Model uncertainty is rarely considered in the field of biogeochemical modeling. The standard biogeochemical modeling approach is to proceed with one selected model of the “right” complexity level based on data availability. However, other plausible models can give dissimilar answers to the scientific question at hand using the same set of data. Relying on a single model can lead to underestimation of the uncertainty associated with the results and therefore to unreliable conclusions. A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models with multiple levels of complexity. The aim of this study is twofold: first, to explore the impact of a model’s complexity level on the accuracy of the end results and, second, to introduce a probabilistic multi-model strategy in the context of a process-based biogeochemical model. We developed three versions of the biogeochemical model TOUGHREACT-N with various complexity levels. Each of these models was calibrated against observed data from a tomato field in Western Sacramento County, California, considering two different weighting sets on the objective function. In this way we created a set of six ensemble members. The Bayesian Model Averaging (BMA) approach was then used to combine these ensemble members, each weighted by the likelihood that the individual model is correct given the observations. The results clearly indicate the need to consider a multi-model ensemble strategy over single-model selection in biogeochemical modeling.
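
    Bayesian Model Averaging in this setting amounts to weighting each calibrated ensemble member by its posterior probability and pooling the predictions. A minimal sketch; the likelihood values and per-model predictions below are invented placeholders:

    ```python
    import numpy as np

    # Log-likelihoods of the six ensemble members given the calibration data
    log_L = np.array([-120.3, -118.7, -119.5, -125.0, -118.9, -121.1])
    prior = np.full(6, 1.0 / 6.0)               # equal prior model probabilities

    w = np.exp(log_L - log_L.max()) * prior     # subtract max for stability
    w /= w.sum()                                # posterior model weights

    preds = np.array([4.2, 3.9, 4.0, 5.1, 3.8, 4.4])  # per-model predictions
    mean = w @ preds                            # BMA point prediction
    var_between = w @ (preds - mean)**2         # between-model spread
    print(w.round(3), round(mean, 2), round(var_between, 3))
    ```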

  1. Fock spaces for modeling macromolecular complexes

    NASA Astrophysics Data System (ADS)

    Kinney, Justin

    Large macromolecular complexes play a fundamental role in how cells function. Here I describe a Fock space formalism for mathematically modeling these complexes. Specifically, this formalism allows ensembles of complexes to be defined in terms of elementary molecular ``building blocks'' and ``assembly rules.'' Such definitions avoid the massive redundancy inherent in standard representations, in which all possible complexes are manually enumerated. Methods for systematically computing ensembles of complexes from a list of components and interaction rules are described. I also show how this formalism readily accommodates coarse-graining. Finally, I introduce diagrammatic techniques that greatly facilitate the application of this formalism to both equilibrium and non-equilibrium biochemical systems.

  2. Scaffolding in Complex Modelling Situations

    ERIC Educational Resources Information Center

    Stender, Peter; Kaiser, Gabriele

    2015-01-01

    The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with, so far, many unsolved problems. Among others, hardly any empirical knowledge exists about efficient ways for teachers to support students' activities, which should remain largely independent of the teacher. The research…

  3. Role models for complex networks

    NASA Astrophysics Data System (ADS)

    Reichardt, J.; White, D. R.

    2007-11-01

    We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.

  4. Concept of a Model City Complex

    ERIC Educational Resources Information Center

    Giammatteo, Michael C.

    The model cities concept calls for an educational complex which includes the nonschool educational institutions and facilities of the community as well as actual school facilities. Such an educational complex would require a wider administrative base than the school yet smaller than the municipal government. Examples of nonschool educational…

  5. Agent-based modeling of complex infrastructures

    SciTech Connect

    North, M. J.

    2001-06-01

    Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.

  6. Numerical models of complex diapirs

    NASA Astrophysics Data System (ADS)

    Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.

    1993-12-01

    Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which a diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, moderate erosion rate to "wine glass" diapirs and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat". These spread lateral sheets.

  7. Modelling Canopy Flows over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.

    2016-06-01

    Recent studies of flow over forested hills have been motivated by a number of important applications including understanding CO_2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland with complex terrain and heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.

  8. Modeling a crowdsourced definition of molecular complexity.

    PubMed

    Sheridan, Robert P; Zorn, Nicolas; Sherer, Edward C; Campeau, Louis-Charles; Chang, Charlie Zhenyu; Cumming, Jared; Maddess, Matthew L; Nantermet, Philippe G; Sinz, Christopher J; O'Shea, Paul D

    2014-06-23

    This paper brings together the concepts of molecular complexity and crowdsourcing. An exercise was done at Merck where 386 chemists voted on the molecular complexity (on a scale of 1-5) of 2681 molecules taken from various sources: public, licensed, and in-house. The meanComplexity of a molecule is the average over all votes for that molecule. As long as enough votes are cast per molecule, we find meanComplexity is quite easy to model with QSAR methods using only a handful of physical descriptors (e.g., number of chiral centers, number of unique topological torsions, a Wiener index, etc.). The high level of self-consistency of the model (cross-validated R^2 ~ 0.88) is remarkable given that our chemists do not agree with each other strongly about the complexity of any given molecule. Thus, the power of crowdsourcing is clearly demonstrated in this case. The meanComplexity appears to be correlated with at least one metric of synthetic complexity from the literature derived in a different way and is correlated with values of process mass intensity (PMI) from the literature and from in-house studies. Complexity can be used to differentiate between in-house programs and to follow a program over time.
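
    The fit described, a handful of physical descriptors predicting the crowd's meanComplexity, can be sketched with an ordinary regularized regression. The descriptor matrix below is random stand-in data with an invented relationship, not the Merck set:

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(7)
    n = 500
    X = np.column_stack([
        rng.poisson(2, n),            # stand-in: number of chiral centers
        rng.poisson(40, n),           # stand-in: unique topological torsions
        rng.normal(1500, 400, n),     # stand-in: a Wiener-index-like descriptor
    ]).astype(float)
    y = np.clip(1 + 0.4 * X[:, 0] + 0.02 * X[:, 1] + 1e-4 * X[:, 2]
                + rng.normal(0, 0.3, n), 1, 5)   # synthetic 1-5 "complexity"

    scores = cross_val_score(Ridge(alpha=1.0), X, y, cv=5)   # default: R^2
    print("cross-validated R^2:", scores.mean().round(3))
    ```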

  9. Random energy model at complex temperatures

    PubMed

    Saakian

    2000-06-01

    The complete phase diagram of the random energy model is obtained for complex temperatures using the method proposed by Derrida. We find the density of zeroes for the statistical sum. Then the method is applied to the generalized random energy model. This allowed us to propose an analytical method for investigating zeroes of the statistical sum for finite-dimensional systems. PMID:11088286
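
    For orientation, the model's ingredients are compact enough to quote. With 2^N independent Gaussian energy levels, the partition function and Derrida's real-temperature free energy are the standard results below; the paper's contribution, the phase diagram in the complex-beta plane, is not reproduced here:

    ```latex
    Z(\beta) = \sum_{i=1}^{2^N} e^{-\beta E_i},
    \qquad
    P(E) = \frac{1}{\sqrt{\pi N J^2}}\, e^{-E^2/(N J^2)}
    \\[8pt]
    -\beta f(\beta) =
    \begin{cases}
      \ln 2 + \beta^2 J^2 / 4, & \beta \le \beta_c,\\
      \beta J \sqrt{\ln 2},    & \beta > \beta_c,
    \end{cases}
    \qquad
    \beta_c = \frac{2\sqrt{\ln 2}}{J}
    ```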

  10. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
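
    The microtubule example mentioned here is classically captured by a two-state random walk ("dynamic instability"): the filament grows or shrinks at fixed speeds and switches state stochastically (catastrophe/rescue). A minimal sketch with illustrative rates, not the article's parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    v_g, v_s = 1.0, -2.5            # growth / shrinkage speeds (arbitrary units)
    f_cat, f_res = 0.05, 0.08       # catastrophe / rescue rates per unit time
    dt, steps = 0.1, 5000

    length, growing = 10.0, True
    lengths, grow_flags = [], []
    for _ in range(steps):
        length = max(0.0, length + dt * (v_g if growing else v_s))
        if rng.random() < dt * (f_cat if growing else f_res):
            growing = not growing   # stochastic switch between the two states
        lengths.append(length)
        grow_flags.append(growing)

    print(f"mean length {np.mean(lengths):.1f}, "
          f"fraction of time growing {np.mean(grow_flags):.2f}")
    ```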

  11. Updating the debate on model complexity

    USGS Publications Warehouse

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  12. Multifaceted Modelling of Complex Business Enterprises

    PubMed Central

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historical data, survey data, management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  13. Modeling the complex bromate-iodine reaction.

    PubMed

    Machado, Priscilla B; Faria, Roberto B

    2009-05-01

    In this article, it is shown that the FLEK model (ref 5) is able to reproduce the experimental results of the bromate-iodine clock reaction. Five different complex chemical systems (the bromate-iodide clock and oscillating reactions, the bromite-iodide clock and oscillating reactions, and now the bromate-iodine clock reaction) are adequately accounted for by the FLEK model. PMID:19361181

  14. Biomembrane simulations of 12 lipid types using the general amber force field in a tensionless ensemble.

    PubMed

    Coimbra, João T S; Sousa, Sérgio F; Fernandes, Pedro A; Rangel, Maria; Ramos, Maria J

    2014-01-01

    The AMBER family of force fields is one of the most commonly used alternatives to describe proteins and drug-like molecules in molecular dynamics simulations. However, the absence of a specific set of parameters for lipids has been limiting the widespread application of this force field in biomembrane simulations, including membrane protein simulations and drug-membrane simulations. Here, we report the systematic parameterization of 12 common lipid types consistent with the General Amber Force Field (GAFF), with charge parameters determined with RESP at the HF/6-31G(d) level of theory to be consistent with AMBER. The accuracy of the scheme was evaluated by comparing predicted and experimental values for structural lipid properties in MD simulations in an NPT ensemble with explicit solvent in 100:100 bilayer systems. Globally, consistent agreement with experimental reference data on membrane structures was achieved for some lipid types when using the typical MD conditions normally employed for membrane protein and drug-membrane simulations (a tensionless NPT ensemble, 310 K), without applying any of the constraints often used in other biomembrane simulations (such as a fixed surface tension or total simulation box area). The present set of parameters, the uniform approach used in the parameterization of all the lipid types described here, the consistency with the AMBER force field family, and the tensionless NPT ensemble employed open the door to systematic studies combining lipid components with small drug-like molecules or membrane proteins, and show the potential of GAFF in dealing with biomembranes.
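
    A structural property typically used to validate such tensionless NPT runs is the area per lipid, read directly off the fluctuating box dimensions. A minimal sketch, assuming the in-plane box lengths have already been extracted from the trajectory (the traces below are synthetic):

    ```python
    import numpy as np

    def area_per_lipid(box_x, box_y, lipids_per_leaflet=100):
        """Mean and standard error of A_L = Lx * Ly / N_leaflet (nm^2).
        A 100:100 bilayer has 100 lipids in each leaflet; the naive SEM
        below ignores autocorrelation between frames."""
        A = np.asarray(box_x) * np.asarray(box_y) / lipids_per_leaflet
        return A.mean(), A.std(ddof=1) / np.sqrt(len(A))

    rng = np.random.default_rng(3)
    bx = rng.normal(7.9, 0.05, 2000)        # synthetic box edge trace (nm)
    mean, sem = area_per_lipid(bx, bx)
    print(f"area per lipid: {mean:.3f} +/- {sem:.3f} nm^2")
    ```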

  15. Minimum-complexity helicopter simulation math model

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal-complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  16. Trends in modeling Biomedical Complex Systems

    PubMed Central

    Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro

    2009-01-01

    In this paper we provide an introduction to the techniques for modeling multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which are becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts on mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068

  17. Constructing minimal models for complex system dynamics

    NASA Astrophysics Data System (ADS)

    Barzel, Baruch; Liu, Yang-Yu; Barabási, Albert-László

    2015-05-01

    One of the strengths of statistical physics is the ability to reduce macroscopic observations into microscopic models, offering a mechanistic description of a system's dynamics. This paradigm, rooted in Boltzmann's gas theory, has found applications from magnetic phenomena to subcellular processes and epidemic spreading. Yet each of these advances was the result of decades of meticulous model building and validation, which are impossible to replicate in most complex biological, social or technological systems that lack accurate microscopic models. Here we develop a method to infer the microscopic dynamics of a complex system from observations of its response to external perturbations, allowing us to construct the most general class of nonlinear pairwise dynamics that are guaranteed to recover the observed behaviour. The result, which we test against both numerical and empirical data, is an effective dynamic model that can predict the system's behaviour and provide crucial insights into its inner workings.

  18. Modeling acuity for optotypes varying in complexity.

    PubMed

    Watson, Andrew B; Ahumada, Albert J

    2012-01-01

    Watson and Ahumada (2008) described a template model of visual acuity based on an ideal observer limited by optical filtering, neural filtering, and noise. They computed predictions for selected optotypes and optical aberrations. Here we compare this model's predictions to acuity data for six human observers, each viewing seven different optotype sets, consisting of one set of Sloan letters and six sets of Chinese characters, differing in complexity (Zhang, Zhang, Xue, Liu, & Yu, 2007). Since optical aberrations for the six observers were unknown, we constructed 200 model observers using aberrations collected from 200 normal human eyes (Thibos, Hong, Bradley, & Cheng, 2002). For each condition (observer, optotype set, model observer) we estimated the model noise required to match the data. Expressed as efficiency, performance for Chinese characters was 1.4 to 2.7 times lower than for Sloan letters. Efficiency was weakly and inversely related to the perimetric complexity of the optotype set. We also compared confusion matrices for human and model observers. Correlations for off-diagonal elements ranged from 0.5 to 0.8 for different sets, and the average correlation for the template model was superior to a geometrical moment model with a comparable number of parameters (Liu, Klein, Xue, Zhang, & Yu, 2009). The template model performed well overall. Estimated psychometric function slopes matched the data, and noise estimates agreed roughly with those obtained independently from contrast sensitivity to Gabor targets. For optotypes of low complexity, the model accurately predicted relative performance. This suggests the model may be used to compare acuities measured with different sets of simple optotypes. PMID:23024356

  19. The Kuramoto model in complex networks

    NASA Astrophysics Data System (ADS)

    Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

    2016-01-01

    Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
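
    The model under review is compact enough to integrate directly. A minimal sketch of Kuramoto oscillators on an Erdos-Renyi network, with the usual order parameter r measuring coherence; all parameter values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N, K, dt, steps = 200, 20.0, 0.01, 2000
    omega = rng.standard_normal(N)                    # natural frequencies
    A = (rng.random((N, N)) < 0.1).astype(float)      # ~10% link density
    A = np.triu(A, 1)
    A = A + A.T                                       # symmetric, no self-loops
    theta = rng.uniform(0.0, 2.0 * np.pi, N)

    for _ in range(steps):
        # d(theta_i)/dt = omega_i + (K/N) * sum_j A_ij sin(theta_j - theta_i)
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + (K / N) * coupling)

    r = abs(np.exp(1j * theta).mean())                # 0 (incoherent) .. 1 (sync)
    print(f"order parameter r = {r:.2f}")
    ```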

  20. Modelling biological complexity: a physical scientist's perspective.

    PubMed

    Coveney, Peter V; Fowler, Philip W

    2005-09-22

    We discuss the modern approaches of complexity and self-organization to understanding dynamical systems and how these concepts can inform current interest in systems biology. From the perspective of a physical scientist, it is especially interesting to examine how the differing weights given to philosophies of science in the physical and biological sciences impact the application of the study of complexity. We briefly describe how the dynamics of the heart and circadian rhythms, canonical examples of systems biology, are modelled by sets of nonlinear coupled differential equations, which have to be solved numerically. A major difficulty with this approach is that all the parameters within these equations are not usually known. Coupled models that include biomolecular detail could help solve this problem. Coupling models across large ranges of length- and time-scales is central to describing complex systems and therefore to biology. Such coupling may be performed in at least two different ways, which we refer to as hierarchical and hybrid multiscale modelling. While limited progress has been made in the former case, the latter is only beginning to be addressed systematically. These modelling methods are expected to bring numerous benefits to biology, for example, the properties of a system could be studied over a wider range of length- and time-scales, a key aim of systems biology. Multiscale models couple behaviour at the molecular biological level to that at the cellular level, thereby providing a route for calculating many unknown parameters as well as investigating the effects at, for example, the cellular level, of small changes at the biomolecular level, such as a genetic mutation or the presence of a drug. The modelling and simulation of biomolecular systems is itself very computationally intensive; we describe a recently developed hybrid continuum-molecular model, HybridMD, and its associated molecular insertion algorithm, which point the way towards the

  2. Computational Modeling of T Cell Receptor Complexes.

    PubMed

    Riley, Timothy P; Singh, Nishant K; Pierce, Brian G; Weng, Zhiping; Baker, Brian M

    2016-01-01

    T-cell receptor (TCR) binding to peptide/MHC determines specificity and initiates signaling in antigen-specific cellular immune responses. Structures of TCR-pMHC complexes have provided enormous insight into cellular immune functions, permitted a rational understanding of processes such as pathogen escape, and led to the development of novel approaches for the design of vaccines and other therapeutics. As production, crystallization, and structure determination of TCR-pMHC complexes can be challenging, there is considerable interest in modeling new complexes. Here we describe a rapid approach to TCR-pMHC modeling that takes advantage of structural features conserved in known complexes, such as the restricted TCR binding site and the generally conserved diagonal docking mode. The approach relies on the powerful Rosetta suite and is implemented using the PyRosetta scripting environment. We show how the approach can recapitulate changes in TCR binding angles and other structural details, and highlight areas where careful evaluation of parameters is needed and alternative choices might be made. As TCRs are highly sensitive to subtle structural perturbations, there is room for improvement. Our method nonetheless generates high-quality models that can be foundational for structure-based hypotheses regarding TCR recognition. PMID:27094300

  3. Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Reddy, C. J.

    2011-01-01

    This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.

  4. Computing the complexity for Schelling segregation models

    NASA Astrophysics Data System (ADS)

    Gerhold, Stefan; Glebsky, Lev; Schneider, Carsten; Weiss, Howard; Zimmermann, Burkhard

    2008-12-01

    The Schelling segregation models are "agent based" population models, where individual members of the population (agents) interact directly with other agents and move in space and time. In this note we study one-dimensional Schelling population models as finite dynamical systems. We define a natural notion of entropy which measures the complexity of the family of these dynamical systems. The entropy counts the asymptotic growth rate of the number of limit states. We find formulas and deduce precise asymptotics for the number of limit states, which enable us to explicitly compute the entropy.
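
    A minimal sketch of the counting idea (a toy variant of my own, not the paper's exact dynamics): on a ring of N two-type agents, call a configuration a limit state if every agent has at least one like-typed neighbour, count such states by brute force, and estimate the entropy as the growth rate log(count)/N.

      from itertools import product
      from math import log

      def is_limit_state(cfg):
          n = len(cfg)
          return all(cfg[i] == cfg[(i - 1) % n] or cfg[i] == cfg[(i + 1) % n]
                     for i in range(n))

      for n in range(4, 13):
          count = sum(1 for cfg in product((0, 1), repeat=n) if is_limit_state(cfg))
          print(n, count, round(log(count) / n, 4))   # entropy estimate log(count)/n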

  5. Seismic modeling of complex stratified reservoirs

    NASA Astrophysics Data System (ADS)

    Lai, Hung-Liang

    Turbidite reservoirs in deep-water depositional systems, such as the oil fields in the offshore Gulf of Mexico and North Sea, are becoming an important exploration target in the petroleum industry. Accurate seismic reservoir characterization, however, is complicated by the heterogeneity of the sand and shale distribution and also by the lack of resolution when imaging thin channel deposits. Amplitude variation with offset (AVO) is a very important technique that is widely applied to locate hydrocarbons. Inaccurate estimates of seismic reflection amplitudes may result in misleading interpretations because of these problems in application to turbidite reservoirs. Therefore, an efficient, accurate, and robust method of modeling seismic responses for such complex reservoirs is crucial to reduce exploration risk. A fast and accurate approach for generating synthetic seismograms for such reservoir models combines wavefront construction ray tracing with composite reflection coefficients in a hybrid modeling algorithm. The wavefront construction approach is a modern, fast implementation of ray tracing that I have extended to model quasi-shear wave propagation in anisotropic media. Composite reflection coefficients, which are computed using propagator matrix methods, provide the exact seismic reflection amplitude for a stratified reservoir model. This is a distinct improvement over conventional AVO analysis based on a model with only two homogeneous half spaces. I combine the two methods to compute synthetic seismograms for test models of turbidite reservoirs in the Ursa field, Gulf of Mexico, validating the new results against exact calculations using the discrete wavenumber method. The new method, however, can also be used to generate synthetic seismograms for laterally heterogeneous, complex stratified reservoir models. The results show important frequency dependence that may be useful for exploration. Because turbidite channel systems often display complex …
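
    For a single thin layer at normal incidence, a composite reflection coefficient reduces to a classic closed form; the sketch below (a textbook special case, not the dissertation's algorithm, with invented rock properties) shows the frequency dependence the record highlights.

      import numpy as np

      rho = np.array([2.2, 2.0, 2.4])           # densities, g/cc (invented)
      vp = np.array([2800.0, 2400.0, 3200.0])   # P velocities, m/s (invented)
      z = rho * vp                              # acoustic impedances
      r12 = (z[1] - z[0]) / (z[1] + z[0])       # top-interface reflectivity
      r23 = (z[2] - z[1]) / (z[2] + z[1])       # base-interface reflectivity
      h = 10.0                                  # layer thickness, m
      for f in (10.0, 30.0, 60.0):              # frequency, Hz
          phi = 2 * np.pi * f * h / vp[1]                 # one-way vertical phase
          e = np.exp(2j * phi)
          R = (r12 + r23 * e) / (1 + r12 * r23 * e)       # composite reflectivity
          print("%4.0f Hz: |R| = %.3f" % (f, abs(R)))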

  6. Dual-resolution molecular dynamics simulation of antimicrobials in biomembranes

    PubMed Central

    Orsi, Mario; Noro, Massimo G.; Essex, Jonathan W.

    2011-01-01

    Triclocarban and triclosan, two potent antibacterial molecules present in many consumer products, have been subject to growing debate on a number of issues, particularly in relation to their possible role in causing microbial resistance. In this computational study, we present molecular-level insights into the interaction between these antimicrobial agents and hydrated phospholipid bilayers (taken as a simple model for the cell membrane). Simulations are conducted by a novel ‘dual-resolution’ molecular dynamics approach which combines accuracy with efficiency: the antimicrobials, modelled atomistically, are mixed with simplified (coarse-grain) models of lipids and water. A first set of calculations is run to study the antimicrobials' transfer free energies and orientations as a function of depth inside the membrane. Both molecules are predicted to preferentially accumulate in the lipid headgroup–glycerol region; this finding, which reproduces corresponding experimental data, is also discussed in terms of a general relation between solute partitioning and the intramembrane distribution of pressure. A second set of runs involves membranes incorporated with different molar concentrations of antimicrobial molecules (up to one antimicrobial per two lipids). We study the effects induced on fundamental membrane properties, such as the electron density, lateral pressure and electrical potential profiles. In particular, the analysis of the spontaneous curvature indicates that increasing antimicrobial concentrations promote a ‘destabilizing’ tendency towards non-bilayer phases, as observed experimentally. The antimicrobials' influence on the self-assembly process is also investigated. The significance of our results in the context of current theories of antimicrobial action is discussed. PMID:21131331
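
    A minimal sketch of how a transfer free-energy profile versus membrane depth can be extracted from a simulated solute depth distribution, G(z) = -kT ln p(z) referenced to bulk water; the histogram here is synthetic stand-in data, not the paper's.

      import numpy as np

      kT = 2.479                                       # kJ/mol at ~298 K
      z = np.linspace(-3.0, 3.0, 61)                   # depth, nm (0 = bilayer centre)
      p = np.exp(-((np.abs(z) - 1.5) ** 2))            # fake occupancy peaked at headgroups
      p /= p.sum()
      G = -kT * np.log(p / p[0])                       # free energy relative to bulk edge
      print("minimum at z = %.2f nm" % z[np.argmin(G)])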

  7. Human driven transitions in complex model ecosystems

    NASA Astrophysics Data System (ADS)

    Harfoot, Mike; Newbold, Tim; Tittensor, Derek; Purves, Drew

    2015-04-01

    Human activities have been observed to be impacting ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying the functioning of ecosystems, represent only a small proportion of organisms and do not adequately capture complex non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate the boundaries of complex ecosystems under human perturbation. We explore several drivers including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model. PLoS Biol. 12, e1001841 (2014).

  8. A Practical Philosophy of Complex Climate Modelling

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin A.; Sherwood, Steven

    2014-01-01

    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that the adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
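
    The "skill against more naive predictions" criterion can be made concrete as a mean-squared-error skill score against climatology; a small sketch with synthetic data (my own illustration, not the authors' code):

      import numpy as np

      rng = np.random.default_rng(0)
      obs = 10.0 + rng.normal(0.0, 1.0, 500)           # observations
      model = obs + rng.normal(0.0, 0.5, 500)          # model forecast with error
      climatology = np.full_like(obs, obs.mean())      # naive baseline

      mse = lambda pred: np.mean((pred - obs) ** 2)
      skill = 1.0 - mse(model) / mse(climatology)      # 1 = perfect, <= 0 = no skill
      print("skill score: %.2f" % skill)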

  9. Intrinsic Uncertainties in Modeling Complex Systems.

    SciTech Connect

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  10. Different Epidemic Models on Complex Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Feng; Small, Michael; Fu, Xin-Chu

    2009-07-01

    Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, the susceptible individuals can be classified into different cases according to their immunity, and similarly, the infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that the relations among individuals can be viewed as a complex network. In this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks, and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
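
    For SIS-type dynamics on an uncorrelated network, degree-based mean-field theory gives the well-known epidemic threshold lambda_c = <k>/<k^2>; a short sketch for a truncated power-law degree distribution (the exponent and cutoff are illustrative assumptions, not the paper's cases):

      import numpy as np

      k = np.arange(3, 1000)                 # degree range
      pk = k ** -2.5                         # power-law degree distribution
      pk /= pk.sum()
      k_mean = (k * pk).sum()
      k2_mean = (k ** 2 * pk).sum()
      print("epidemic threshold lambda_c = %.4f" % (k_mean / k2_mean))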

  11. Noncommutative complex Grosse-Wulkenhaar model

    SciTech Connect

    Hounkonnou, Mahouton Norbert; Samary, Dine Ousmane

    2008-11-18

    This paper presents an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc 956 (2007) 55-60], to the NC complex Grosse-Wulkenhaar model. It provides an extension of a recent work [Physics Letters B 653 (2007) 343-345]. The local conservation of energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.

  12. The noisy voter model on complex networks

    NASA Astrophysics Data System (ADS)

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-04-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing the network structure to be treated as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured.
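
    A minimal simulation sketch of the noisy voter dynamics described above: with probability a an agent flips at random (noise), otherwise it copies a random neighbour. The Erdős-Rényi graph and all parameter values are stand-in assumptions for illustration.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(1)
      g = nx.erdos_renyi_graph(500, 0.02, seed=1)
      nbrs = {i: list(g.neighbors(i)) for i in g}
      state = rng.integers(0, 2, g.number_of_nodes())
      a = 0.01                                         # noise intensity

      for _ in range(200000):
          i = rng.integers(g.number_of_nodes())
          if rng.random() < a:
              state[i] = rng.integers(0, 2)            # random change of state
          elif nbrs[i]:
              state[i] = state[rng.choice(nbrs[i])]    # imitation (voter update)

      print("fraction in state 1:", state.mean())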

  13. Complexity of groundwater models in catchment hydrological models

    NASA Astrophysics Data System (ADS)

    Attinger, Sabine; Herold, Christian; Kumar, Rohini; Mai, Juliane; Ross, Katharina; Samaniego, Luis; Zink, Matthias

    2015-04-01

    In catchment hydrological models, groundwater is usually modeled very simply: it is conceptualized as a linear reservoir that receives water from the upper unsaturated zone reservoir and releases water to the river system as baseflow. The baseflow is only a minor component of the total river flow, and groundwater reservoir parameters are therefore difficult to estimate inversely from river flow data alone. In addition, the modelled values of the absolute height of the water filling the groundwater reservoir - in other words, the groundwater levels - are of limited meaning, owing to the coarse (or absent) spatial resolution of the groundwater compartment and to the fact that only river flow data are used for the calibration. The talk focuses on the question: Which complexity in terms of model complexity and model resolution is necessary to characterize groundwater processes and groundwater responses adequately in distributed catchment hydrological models? Starting from a spatially distributed catchment hydrological model with a groundwater compartment that is conceptualized as a linear reservoir, we stepwise increase the groundwater model complexity and its spatial resolution to investigate which resolution, which complexity and which data are needed to reproduce baseflow and groundwater level data adequately.
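
    The linear-reservoir conceptualization that serves as the starting point above fits in a few lines: dS/dt = R - kS with baseflow Q = kS. A sketch with explicit Euler stepping; the recharge series and constants are invented for illustration.

      import numpy as np

      k = 0.05                      # outflow constant, 1/day (invented)
      S = 100.0                     # initial storage, mm
      recharge = np.maximum(0.0, np.random.default_rng(2).normal(1.0, 2.0, 365))
      baseflow = []
      for R in recharge:            # daily time step
          S += R - k * S            # storage update
          baseflow.append(k * S)    # baseflow released to the river
      print("mean baseflow: %.2f mm/day" % np.mean(baseflow))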

  14. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot and field scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. Our approach is to use the range in local-scale model parameter results to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy driven land management practices and the value of sustainable resources to all stakeholders.

  15. Complex Constructivism: A Theoretical Model of Complexity and Cognition

    ERIC Educational Resources Information Center

    Doolittle, Peter E.

    2014-01-01

    Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

  16. Magnetic modeling of the Bushveld Igneous Complex

    NASA Astrophysics Data System (ADS)

    Webb, S. J.; Cole, J.; Letts, S. A.; Finn, C.; Torsvik, T. H.; Lee, M. D.

    2009-12-01

    Magnetic modeling of the 2.06 Ga Bushveld Complex presents special challenges due to a variety of magnetic effects. These include strong remanence in the Main Zone and extremely high magnetic susceptibilities in the Upper Zone, which exhibit self-demagnetization. Recent palaeomagnetic results have resolved a long-standing discrepancy between age data, which constrain the emplacement to within 1 million years, and older palaeomagnetic data, which suggested ~50 million years for emplacement. The new palaeomagnetic results agree with the age data and present a single consistent pole, as opposed to a long polar wander path, for the Bushveld for all of the Zones and all of the limbs. These results also pass a fold test, indicating the Bushveld Complex was emplaced horizontally, lending support to arguments for connectivity. The magnetic signature of the Bushveld Complex provides an ideal mapping tool, as the Upper Zone (UZ) has high susceptibility values and is well layered, showing up as distinct anomalies on new high-resolution magnetic data. However, this signature is similar to the highly magnetic BIFs found in the Transvaal and in the Witwatersrand Supergroups. Through careful mapping using new high-resolution aeromagnetic data, we have been able to map the Bushveld UZ in complicated geological regions and identify a characteristic signature with well defined layers. The Main Zone, which has a more subdued magnetic signature, does have a strong remanent component and exhibits several magnetic reversals. The magnetic layers of the UZ contain layers of magnetitite with as much as 80-90% pure magnetite with large crystals (1-2 cm). While these layers are not strongly remanent, they have extremely high magnetic susceptibilities, and the self-demagnetization effect must be taken into account when modeling these layers. Because the Bushveld Complex is so large, the geometry of the Earth's magnetic field relative to the layers of the UZ changes orientation, creating …
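
    The self-demagnetization correction noted above is commonly applied through the apparent susceptibility chi_a = chi / (1 + N * chi), with demagnetization factor N; a quick side calculation (N = 1/3 for a sphere is an illustrative choice; thin layered bodies have different factors):

      for chi in (0.01, 0.1, 1.0, 5.0):     # true susceptibility, SI
          N = 1.0 / 3.0                     # demagnetization factor (sphere)
          print(chi, "->", round(chi / (1 + N * chi), 3))   # apparent susceptibility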

  17. Lab on a Biomembrane: rapid prototyping and manipulation of 2D fluidic lipid bilayer circuits.

    PubMed

    Ainla, Alar; Gözen, Irep; Hakonen, Bodil; Jesorka, Aldo

    2013-09-25

    Lipid bilayer membranes are among the most ubiquitous structures in the living world, with intricate structural features and a multitude of biological functions. It is attractive to recreate these structures in the laboratory, as this allows mimicking and studying the properties of biomembranes and their constituents, and to specifically exploit the intrinsic two-dimensional fluidity. Even though diverse strategies for membrane fabrication have been reported, the development of related applications and technologies has been hindered by the unavailability of both versatile and simple methods. Here we report a rapid prototyping technology for two-dimensional fluidic devices, based on in-situ generated circuits of phospholipid films. In this "lab on a molecularly thin membrane", various chemical and physical operations, such as writing, erasing, functionalization, and molecular transport, can be applied to user-defined regions of a membrane circuit. This concept is an enabling technology for research on molecular membranes and their technological use.

  18. Theoretical and computational investigations of nanoparticle-biomembrane interactions in cellular delivery.

    PubMed

    Ding, Hong-ming; Ma, Yu-qiang

    2015-03-01

    With the rapid development of nanotechnology, nanoparticles have been widely used in many applications such as phototherapy, cell imaging, and drug/gene delivery. A better understanding of how nanoparticles interact with biological systems (especially cells) is of great importance for their potential biomedical applications. In this review, we present the current status of, and perspectives on, theoretical and computational investigations of nanoparticle-biomembrane interactions in cellular delivery. In particular, the determining parameters (including the properties of nanoparticles, cell membranes and environments) that govern the cellular uptake of nanoparticles (direct penetration and endocytosis) are discussed. Further, special attention is paid to interactions beyond the translocation of nanoparticles across membranes (e.g., nanoparticles escaping from the endosome and entering the nucleus). Finally, a summary is given, and the challenging problems of this field in the future are identified.

  1. Modeling the human prothrombinase complex components

    NASA Astrophysics Data System (ADS)

    Orban, Tivadar

    Thrombin generation is the culminating stage of the blood coagulation process. Thrombin is obtained from prothrombin (the substrate) in a reaction catalyzed by the prothrombinase complex (the enzyme). The prothrombinase complex is composed of factor Xa (the enzyme) and factor Va (the cofactor) associated in the presence of calcium ions on a negatively charged cell membrane. Factor Xa, alone, can activate prothrombin to thrombin; however, the rate of conversion is not physiologically relevant for survival. Incorporation of factor Va into prothrombinase accelerates the rate of prothrombinase activity by 300,000-fold, and provides the physiological pathway of thrombin generation. The long-term goal of the current proposal is to provide the necessary support for advancing studies to design potential drug candidates that may be used to avoid development of deep venous thrombosis in high-risk patients. The short-term goals of the present proposal are (1) to propose a model of a mixed asymmetric phospholipid bilayer, (2) to expand the incomplete model of human coagulation factor Va and study its interaction with the phospholipid bilayer, (3) to create a homology model of prothrombin, and (4) to study the dynamics of interaction between prothrombin and the phospholipid bilayer.

  2. Chitosan-collagen biomembrane embedded with calcium-aluminate enhances dentinogenic potential of pulp cells.

    PubMed

    Soares, Diana Gabriela; Rosseto, Hebert Luís; Basso, Fernanda Gonçalves; Scheffel, Débora Salles; Hebling, Josimeri; Costa, Carlos Alberto de Souza

    2016-01-01

    The development of biomaterials capable of driving dental pulp stem cell differentiation into odontoblast-like cells able to secrete reparative dentin is the goal of current conservative dentistry. In the present investigation, a biomembrane (BM) composed of a chitosan/collagen matrix embedded with calcium-aluminate microparticles was tested. The BM was produced by mixing collagen gel with a chitosan solution (2:1), and then adding bioactive calcium-aluminate cement as the mineral phase. An inert material (polystyrene) was used as the negative control. Human dental pulp cells were seeded onto the surface of the materials, and the cytocompatibility was evaluated by cell proliferation and cell morphology, assessed after 1, 7, 14 and 28 days in culture. The odontoblastic differentiation was evaluated by measuring alkaline phosphatase (ALP) activity, total protein production, gene expression of DMP-1/DSPP and mineralized nodule deposition. The pulp cells were able to attach onto the BM surface and spread, displaying a faster proliferative rate at initial periods than that of the control cells. The BM also acted on the cells to induce more intense ALP activity, protein production at 14 days, and higher gene expression of DSPP and DMP-1 at 28 days, leading to the deposition of about five times more mineralized matrix than the cells in the control group. Therefore, the experimental biomembrane induced the differentiation of pulp cells into odontoblast-like cells featuring a highly secretory phenotype. This innovative bioactive material can drive other protocols for dental pulp exposure treatment by inducing the regeneration of dentin tissue mediated by resident cells. PMID:27119587

  3. Interactions of a Tetrazine Derivative with Biomembrane Constituents: A Langmuir Monolayer Study.

    PubMed

    Nakahara, Hiromichi; Hagimori, Masayori; Mukai, Takahiro; Shibata, Osamu

    2016-07-01

    Tetrazine (Tz) is expected to be used for bioimaging and as an analytical reagent. In organic chemistry it is known to react very rapidly with trans-cyclooctene in water. Here, to understand the interaction between Tz and biomembrane constituents, we first investigated the interfacial behavior of a newly synthesized Tz derivative comprising a C18-saturated hydrocarbon chain (rTz-C18) using a Langmuir monolayer spread at the air-water interface. Surface pressure (π)-molecular area (A) and surface potential (ΔV)-A isotherms were measured for monolayers of rTz-C18 and biomembrane constituents such as dipalmitoylphosphatidylcholine (DPPC), dipalmitoylphosphatidylglycerol (DPPG), dipalmitoylphosphatidylethanolamine (DPPE), palmitoyl sphingomyelin (PSM), and cholesterol (Ch). The lateral interaction between rTz-C18 and the lipids was thermodynamically elucidated from the excess Gibbs free energy of mixing and the two-dimensional phase diagram. The binary monolayers, except for the Ch system, indicated high miscibility or affinity. In particular, rTz-C18 was found to interact more strongly with DPPE, which is a major constituent of the inner leaflet of cell membranes. The phase behavior and morphology upon monolayer compression were investigated by using Brewster angle microscopy (BAM), fluorescence microscopy (FM), and atomic force microscopy (AFM). The BAM and FM images of the DPPC/rTz-C18, DPPG/rTz-C18, and PSM/rTz-C18 systems exhibited a coexistence state of two different liquid-condensed domains derived mainly from monolayers of phospholipids and phospholipids-rTz-C18. From these morphological observations, it is worth noting that rTz-C18 appears to interact with only a limited amount of the lipids, DPPE being the exception.
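
    The excess Gibbs free energy of mixing used in such monolayer studies is G_exc = N_A * integral from 0 to pi of (A12 - x1*A1 - x2*A2) dpi; the sketch below evaluates it numerically with synthetic pi-A isotherms standing in for measured ones (all numbers invented).

      import numpy as np

      NA = 6.022e23
      pi = np.linspace(0.0, 30.0, 301)                 # surface pressure, mN/m
      A1 = 0.90 - 0.010 * pi                           # area/molecule of lipid 1, nm^2 (fake)
      A2 = 0.60 - 0.005 * pi                           # area/molecule of lipid 2, nm^2 (fake)
      x1 = 0.5                                         # mole fraction of component 1
      A12 = x1 * A1 + (1 - x1) * A2 - 0.02             # mixed monolayer, slightly condensed
      excess = A12 - (x1 * A1 + (1 - x1) * A2)         # excess area, nm^2 per molecule
      # unit conversion: mN/m * nm^2 = 1e-3 N/m * 1e-18 m^2 = 1e-21 J
      G_exc = NA * np.trapz(excess, pi) * 1e-21        # J/mol; negative => attraction
      print("G_exc = %.1f J/mol" % G_exc)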

  4. Wind modelling over complex terrain using CFD

    NASA Astrophysics Data System (ADS)

    Avila, Matias; Owen, Herbert; Folch, Arnau; Prieto, Luis; Cosculluela, Luis

    2015-04-01

    The present work deals with the numerical CFD modelling of onshore wind farms in the context of High Performance Computing (HPC). The CFD model involves the numerical solution of the Reynolds-Averaged Navier-Stokes (RANS) equations together with a κ-ɛ turbulence model and the energy equation, specially designed for Atmospheric Boundary Layer (ABL) flows. The aim is to predict the wind velocity distribution over complex terrain, using a model that includes meteorological data assimilation, thermal coupling, forested canopy and Coriolis effects. The modelling strategy involves automatic mesh generation, terrain data assimilation and generation of boundary conditions for the inflow wind flow distribution up to the geostrophic height. The CFD model has been implemented in Alya, an HPC multi-physics parallel solver able to run on thousands of processors with optimal scalability, developed at the Barcelona Supercomputing Center. The implemented thermal stability and canopy physical model was developed by Sogachev in 2012. The κ-ɛ equations are of nonlinear convection-diffusion-reaction type. The implemented numerical scheme consists of a stabilized finite element formulation based on the variational multiscale method, which is known to be stable for this kind of turbulence equations. We present a numerical formulation that stresses the robustness of the solution method, tackling common problems that produce instability. The iterative strategy and linearization scheme are discussed; they are designed to avoid negative values of diffusion during the iterative process, which may lead to divergence of the scheme. These problems are addressed by acting on the coefficients of the reaction and diffusion terms and on the turbulent variables themselves. The κ-ɛ equations are highly nonlinear. Complex terrain induces transient flow instabilities that may preclude the convergence of computer flow simulations based on a steady state formulation of the …

  5. Inexpensive Complex Hand Model Twenty Years Later.

    PubMed

    Frenger, Paul

    2015-01-01

    Twenty years ago the author unveiled his inexpensive complex hand model, which reproduced every motion of the human hand. A control system programmed in the Forth language operated its actuators and sensors. Follow-on papers for this popular project were next presented in Texas, Canada and Germany. From this hand grew the author’s meter-tall robot (nicknamed ANNIE: Android With Neural Networks, Intellect and Emotions). It received machine vision, facial expressiveness, speech synthesis and speech recognition; a simian version also received a dexterous ape foot. New artificial intelligence features included op-amp neurons for OCR and simulated emotions, hormone emulation, endocannabinoid receptors, fear-trust-love mechanisms, a Grandmother Cell recognizer and artificial consciousness. Simulated illnesses included narcotic addiction, autism, PTSD, fibromyalgia and Alzheimer’s disease. The author gave 13 robotics-AI presentations at NASA in Houston since 2006. A meter-tall simian robot was proposed with gripping hand-feet for use with space vehicles and to explore distant planets and moons. Also proposed were: intelligent motorized exoskeletons for astronaut force multiplication; a cognitive prosthesis to detect and alleviate decreased crew mental performance; and a gynoid robot medic to tend astronauts in deep space missions. What began as a complex hand model evolved into an innovative robot-AI within two decades. PMID:25996742

  6. Using Perspective to Model Complex Processes

    SciTech Connect

    Kelsey, R.L.; Bisset, K.R.

    1999-04-04

    The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.
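
    A minimal sketch of the perspective idea in plain Python: one object carrying several state-keyed views, plus an ordered list of process markers as the historic record. The class design is my own illustration, not the authors' knowledge representation.

      class Material:
          def __init__(self, name):
              self.name = name
              self.perspectives = {}      # state of matter -> attribute view
              self.history = []           # process markers, in order

          def add_perspective(self, state, view):
              self.perspectives[state] = view

          def mark(self, time, state, note):
              self.history.append((time, state, note))   # record for backtracking/restart

          def view(self, state):
              return self.perspectives[state]

      water = Material("H2O")
      water.add_perspective("solid", {"density": 0.92, "structure": "crystalline"})
      water.add_perspective("liquid", {"density": 1.00, "viscosity": 1.0e-3})
      water.mark(0.0, "solid", "start")
      water.mark(5.0, "liquid", "melted")
      print(water.view("liquid"), water.history)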

  7. Ants (Formicidae): models for social complexity.

    PubMed

    Smith, Chris R; Dolezal, Adam; Eliyahu, Dorit; Holbrook, C Tate; Gadau, Jürgen

    2009-07-01

    The family Formicidae (ants) is composed of more than 12,000 described species that vary greatly in size, morphology, behavior, life history, ecology, and social organization. Ants occur in most terrestrial habitats and are the dominant animals in many of them. They have been used as models to address fundamental questions in ecology, evolution, behavior, and development. The literature on ants is extensive, and the natural history of many species is known in detail. Phylogenetic relationships for the family, as well as within many subfamilies, are known, enabling comparative studies. Their ease of sampling and ecological variation makes them attractive for studying populations and questions relating to communities. Their sociality and variation in social organization have contributed greatly to an understanding of complex systems, division of labor, and chemical communication. Ants occur in colonies composed of tens to millions of individuals that vary greatly in morphology, physiology, and behavior; this variation has been used to address proximate and ultimate mechanisms generating phenotypic plasticity. Relatedness asymmetries within colonies have been fundamental to the formulation and empirical testing of kin and group selection theories. Genomic resources have been developed for some species, and a whole-genome sequence for several species is likely to follow in the near future; comparative genomics in ants should provide new insights into the evolution of complexity and sociogenomics. Future studies using ants should help establish a more comprehensive understanding of social life, from molecules to colonies. PMID:20147200

  8. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  9. Reducing Spatial Data Complexity for Classification Models

    SciTech Connect

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-29

    Intelligent data analytics is gradually becoming a day-to-day reality for today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. In response, we propose a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC), we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, when coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the …
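
    A greatly simplified sketch in the spirit of labelled-data condensation: repeatedly merge the closest same-class pair into a weighted prototype. This is a toy reduction of my own for illustration, not the PLDC algorithm itself.

      import numpy as np

      def condense(X, y, min_dist=0.5):
          X, y = X.astype(float), np.asarray(y)
          w = np.ones(len(X))                          # accumulated class weights
          while len(X) > 1:
              d = np.linalg.norm(X[:, None] - X[None, :], axis=2)
              d[np.arange(len(X)), np.arange(len(X))] = np.inf
              d[y[:, None] != y[None, :]] = np.inf     # only same-class merges
              i, j = np.unravel_index(np.argmin(d), d.shape)
              if not d[i, j] < min_dist:
                  break
              X[i] = (w[i] * X[i] + w[j] * X[j]) / (w[i] + w[j])   # weighted merge
              w[i] += w[j]
              keep = np.arange(len(X)) != j
              X, y, w = X[keep], y[keep], w[keep]
          return X, y, w

      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
      y = np.array([0] * 50 + [1] * 50)
      Xr, yr, wr = condense(X, y)
      print("reduced from", len(X), "to", len(Xr), "weighted prototypes")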

  10. Analytical models for complex swirling flows

    NASA Astrophysics Data System (ADS)

    Borissov, A.; Hussain, V.

    1996-11-01

    We develop a new class of analytical solutions of the Navier-Stokes equations for swirling flows, and suggest ways to predict and control such flows occurring in various technological applications. We view momentum accumulation on the axis as a key feature of swirling flows and consider vortex-sink flows on curved axisymmetric surfaces with an axial flow. We show that these solutions model swirling flows in a cylindrical can, whirlpools, tornadoes, and cosmic swirling jets. The singularity of these solutions on the flow axis is removed by matching them with near-axis Schlichting and Long's swirling jets. The matched solutions model flows with very complex patterns, consisting of up to seven separation regions with recirculatory 'bubbles' and vortex rings. We apply the matched solutions for computing flows in the Ranque-Hilsch tube, in the meniscus of electrosprays, in vortex breakdown, and in an industrial vortex burner. The simple analytical solutions allow a clear understanding of how different control parameters affect the flow and guide selection of optimal parameter values for desired flow features. These solutions permit extension to other problems (such as heat transfer and chemical reaction) and have the potential of being significantly useful for further detailed investigation by direct or large-eddy numerical simulations as well as laboratory experimentation.

  11. Advanced Combustion Modeling for Complex Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Ham, Frank Stanford

    2005-01-01

    The next generation of aircraft engines will need to pass stricter efficiency and emission tests. NASA's Ultra-Efficient Engine Technology (UEET) program has set an ambitious goal of 70% reduction of NO(x) emissions and a 15% increase in fuel efficiency of aircraft engines. We will demonstrate the state-of-the-art combustion tools developed at Stanford's Center for Turbulence Research (CTR) as part of this program. In the last decade, CTR has spearheaded a multi-physics-based combustion modeling program. Key technologies have been transferred to the aerospace industry and are currently being used for engine simulations. In this demo, we will showcase the next-generation combustion modeling tools that integrate a very high level of detailed physics into advanced flow simulation codes. Combustor flows involve multi-phase physics with liquid fuel jet breakup, evaporation, and eventual combustion. Individual components of the simulation are verified against complex test cases and show excellent agreement with experimental data.

  12. Natural lipid extracts and biomembrane-mimicking lipid compositions are disposed to form nonlamellar phases, and they release DNA from lipoplexes most efficiently

    SciTech Connect

    Koynova, Rumiana; MacDonald, Robert C.

    2010-01-18

    A viewpoint now emerging is that a critical factor in lipid-mediated transfection (lipofection) is the structural evolution of lipoplexes upon interacting and mixing with cellular lipids. Here we report our finding that lipid mixtures mimicking biomembrane lipid compositions are superior to pure anionic liposomes in their ability to release DNA from lipoplexes (cationic lipid/DNA complexes), even though they have a much lower negative charge density (and thus lower capacity to neutralize the positive charge of the lipoplex lipids). Flow fluorometry revealed that the portion of DNA released after a 30-min incubation of the cationic O-ethylphosphatidylcholine lipoplexes with the anionic phosphatidylserine or phosphatidylglycerol was 19% and 37%, respectively, whereas a mixture mimicking biomembranes (MM: phosphatidylcholine/phosphatidylethanolamine/phosphatidylserine/cholesterol 45:20:20:15 w/w) and polar lipid extract from bovine liver released 62% and 74%, respectively, of the DNA content. A possible reason for this superior power in releasing DNA by the natural lipid mixtures was suggested by structural experiments: while pure anionic lipids typically form lamellae, the natural lipid mixtures exhibited a surprising predilection to form nonlamellar phases. Thus, the MM mixture arranged into lamellar arrays at physiological temperature, but began to convert to the hexagonal phase at a slightly higher temperature, ~40-45 °C. A propensity to form nonlamellar phases (hexagonal, cubic, micellar) at close to physiological temperatures was also found with the lipid extracts from natural tissues (from bovine liver, brain, and heart). This result reveals that electrostatic interactions are only one of the factors involved in lipid-mediated DNA delivery. The tendency of lipid bilayers to form nonlamellar phases has been described in terms of bilayer 'frustration' which imposes a nonzero intrinsic curvature of the two opposing monolayers. Because the stored curvature …

  13. Modeling competitive substitution in a polyelectrolyte complex

    NASA Astrophysics Data System (ADS)

    Peng, B.; Muthukumar, M.

    2015-12-01

    We have simulated the invasion of a polyelectrolyte complex, made of a polycation chain and a polyanion chain, by another, longer polyanion chain, using a coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain must be sufficiently longer than the chain being displaced to effect the substitution. Yet making the invading chain longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
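
    A minimal sketch of the simulation machinery described: overdamped (Euler-Maruyama) Langevin dynamics for a single coarse-grained bead-spring chain. Charges, counterions, and the partner chains are omitted for brevity, and all parameters are illustrative, in reduced units.

      import numpy as np

      rng = np.random.default_rng(4)
      n, dt, gamma, kT, k_bond = 20, 1e-3, 1.0, 1.0, 100.0
      x = np.cumsum(rng.normal(0, 0.1, (n, 3)), axis=0)    # initial chain conformation

      def bond_forces(x):
          f = np.zeros_like(x)
          dr = x[1:] - x[:-1]                              # bond vectors
          fb = k_bond * dr                                 # harmonic restoring force
          f[:-1] += fb
          f[1:] -= fb
          return f

      for _ in range(10000):
          noise = rng.normal(0.0, np.sqrt(2 * kT * dt / gamma), x.shape)
          x += bond_forces(x) * dt / gamma + noise         # overdamped Langevin step

      print("end-to-end distance:", np.linalg.norm(x[-1] - x[0]))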

  14. Mimetic biomembrane-AuNPs-graphene hybrid as matrix for enzyme immobilization and bioelectrocatalysis study.

    PubMed

    Wang, Tianshu; Liu, Jiyang; Ren, Jiangtao; Wang, Jin; Wang, Erkang

    2015-10-01

    A hybrid composite constructed of phospholipid bilayer membrane, gold nanoparticles and graphene was prepared and used as a matrix for microperoxidase-11 (MP11) immobilization. The direct electrochemistry and corresponding bioelectrocatalysis of the enzyme electrode was further investigated. Phospholipid bilayer membrane protected gold nanoparticles (AuNPs) were assembled on polyelectrolyte functionalized graphene sheets through electrostatic attraction to form a hybrid bionanocomposite. Owing to the biocompatible microenvironment provided by the mimetic biomembrane, microperoxidase-11 entrapped in this matrix well retained its native structure and exhibited high bioactivity. Moreover, the AuNPs-graphene assemblies could efficiently promote the direct electron transfer between the immobilized MP11 and the substrate electrode. The as-prepared enzyme electrode presented good direct electrochemistry and electrocatalytic responses to the reduction of hydrogen peroxide (H2O2). The resulting H2O2 biosensor showed a wide linear range (2.0×10^-5 to 2.8×10^-4 M), a low detection limit (2.6×10^-6 M), good reproducibility and stability. Furthermore, this sensor was used for real-time detection of H2O2 dynamically released from the tumor cells MCF-7 in response to a pro-inflammatory stimulant. PMID:26078181

  15. Biomembrane-mimicking lipid bilayer system as a mechanically tunable cell substrate

    PubMed Central

    Lin, C. Y.; Auernheimer, V.; Naumann, C.; Goldmann, W. H.; Fabry, B.

    2014-01-01

    Cell behavior such as cell adhesion, spreading, and contraction critically depends on the elastic properties of the extracellular matrix. It is not known, however, how cells respond to viscoelastic or plastic material properties that more closely resemble the mechanical environment that cells encounter in the body. In this report, we employ viscoelastic and plastic biomembrane-mimicking cell substrates. The compliance of the substrates can be tuned by increasing the number of polymer-tethered bilayers. This leaves the density and conformation of adhesive ligands on the top bilayer unaltered. We then observe the response of fibroblasts to these property changes. For comparison, we also study the cells on soft polyacrylamide and hard glass surfaces. Cell morphology, motility, cell stiffness, contractile forces and adhesive contact size all decrease on more compliant matrices but are less sensitive to changes in matrix dissipative properties. These data suggest that cells are able to feel and respond predominantly to the effective matrix compliance, which arises as a combination of substrate and adhesive ligand mechanical properties. PMID:24439398

  16. Biomembranes from slaughterhouse blood erythrocytes as prolonged release systems for dexamethasone sodium phosphate.

    PubMed

    Drvenica, Ivana T; Bukara, Katarina M; Ilić, Vesna Lj; Mišić, Danijela M; Vasić, Borislav Z; Gajić, Radoš B; Đorđević, Verica B; Veljović, Đorđe N; Belić, Aleksandar; Bugarski, Branko M

    2016-07-01

    The present study investigated the preparation of bovine and porcine erythrocyte membranes from slaughterhouse blood as bio-derived materials for the delivery of dexamethasone sodium phosphate (DexP). The obtained biomembranes, i.e., ghosts, were characterized in vitro in terms of morphological properties, loading parameters, and release behavior. For the last two, a UHPLC-HESI-MS/MS-based analytical procedure for absolute drug identification and quantification was developed. The results revealed that loading of DexP into both types of ghosts was directly proportional to the increase of drug concentration in the incubation medium, while incubation at 37°C had a statistically significant effect on the loaded amount of DexP (P < 0.05). The encapsulation efficiency was about fivefold higher in porcine than in bovine ghosts. Insight into the ghosts' surface morphology by field emission scanning electron microscopy and atomic force microscopy confirmed that, besides the inevitable effects of osmosis, DexP inclusion itself had no observable additional effect on the morphology of the ghost carriers. DexP release profiles were dependent on erythrocyte ghost type and the amount of residual hemoglobin. However, sustained DexP release was achieved and shown over 3 days from porcine ghosts and 5 days from bovine erythrocyte ghosts. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1046-1055, 2016. PMID:27254304

  17. Clinical complexity in medicine: A measurement model of task and patient complexity

    PubMed Central

    Islam, R.; Weir, C.; Fiol, G. Del

    2016-01-01

    Background: Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective: The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods: Three clinical Infectious Disease teams were observed, audio-recorded and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial coding process and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. Results: The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 complexity-contributing factors and 7 dimensions. The second is a patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion: The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
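
    The inter-rater reliability statistic used above, Cohen's kappa, fits in a few lines; the coded labels below are invented for illustration.

      from collections import Counter

      def cohens_kappa(r1, r2):
          n = len(r1)
          po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
          c1, c2 = Counter(r1), Counter(r2)
          pe = sum(c1[k] * c2[k] for k in c1) / n ** 2          # chance agreement
          return (po - pe) / (1 - pe)

      coder1 = ["task", "patient", "task", "task", "patient", "task"]
      coder2 = ["task", "patient", "patient", "task", "patient", "task"]
      print("kappa = %.2f" % cohens_kappa(coder1, coder2))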

  1. A thermodynamic signature of lipid segregation in biomembranes induced by a short peptide derived from glycoprotein gp36 of feline immunodeficiency virus.

    PubMed

    Oliva, Rosario; Del Vecchio, Pompea; Stellato, Marco Ignazio; D'Ursi, Anna Maria; D'Errico, Gerardino; Paduano, Luigi; Petraccone, Luigi

    2015-02-01

    The interactions between proteins/peptides and lipid bilayers are fundamental to a variety of key biological processes, and among these, the membrane fusion process operated by viral glycoproteins is one of the most important, being a fundamental step of the infectious event. In the case of the feline immunodeficiency virus (FIV), a small region of the membrane proximal external region (MPER) of the glycoprotein gp36 has been demonstrated to be necessary for infection to occur, being able to destabilize the membranes to be fused. In this study, we report a physicochemical characterization of the interaction between an eight-residue peptide, named C8, modeled on that gp36 region, and biological membrane models (liposomes), using calorimetric and spectroscopic measurements. CD studies have shown that the peptide conformation changes upon binding to the liposomes. Interestingly, the peptide folds from a disordered structure (in the absence of liposomes) to a more ordered structure with a low but significant helix content. Isothermal titration calorimetry (ITC) and differential scanning calorimetry (DSC) results show that C8 binds the lipid bilayers with high affinity and induces a significant perturbation/reorganization of the lipid membrane structure. The type and extent of this membrane reorganization depend on the membrane composition. These findings provide interesting insights into the role of this short peptide fragment in the mechanism of virus-cell fusion, demonstrating its ability to induce lipid segregation in biomembranes.

  2. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power curves model power output only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm in moderately complex terrain in the Altamont Pass region in California, we trained three statistical models (a neural network, a random forest, and a Gaussian process model) to predict power output from various sets of the aforementioned predictors. Comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and through the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by the Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
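
    As a schematic of the approach (not the study's pipeline or data; the predictors and the synthetic power relation below are hypothetical), one of the three model classes, a random forest, can be compared against a hub-height-only baseline with scikit-learn:

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import mean_squared_error
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        # Columns: wind speed at hub height and two other heights [m/s],
        # air density [kg/m^3], turbulence intensity [-] (all hypothetical).
        X = rng.uniform([3, 3, 3, 1.1, 0.02], [20, 20, 20, 1.3, 0.25], (n, 5))
        power = 0.5 * X[:, 3] * X[:, 0] ** 3 * (1 - 2 * X[:, 4]) + rng.normal(0, 50, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)
        hub_only = RandomForestRegressor(random_state=0).fit(X_tr[:, :1], y_tr)
        full = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
        for name, model, cols in [("hub-height only", hub_only, slice(0, 1)),
                                  ("all predictors", full, slice(None))]:
            rmse = mean_squared_error(y_te, model.predict(X_te[:, cols])) ** 0.5
            print(name, "RMSE:", round(rmse, 1))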

  3. A Novel BA Complex Network Model on Color Template Matching

    PubMed Central

    Han, Risheng; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235
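
    The two BA rules invoked here are simple to state concretely: the network grows one node at a time, and each new node attaches to m existing nodes chosen with probability proportional to their current degree. A generic sketch of such growth, independent of the color-space application:

        import random

        def ba_network(n_nodes, m=2, seed=0):
            """Grow a BA-style scale-free network by preferential attachment."""
            random.seed(seed)
            edges = [(0, 1)]            # small connected seed network
            pool = [0, 1]               # node i appears once per unit of degree
            for new in range(2, n_nodes):
                targets = set()
                while len(targets) < min(m, new):
                    targets.add(random.choice(pool))   # degree-proportional pick
                for t in targets:
                    edges.append((new, t))
                    pool += [new, t]
            return edges

        edges = ba_network(1000)        # degree distribution is heavy-tailed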

  4. Spatiotemporal Organization of Spin-Coated Supported Model Membranes

    NASA Astrophysics Data System (ADS)

    Simonsen, Adam Cohen

    All cells of living organisms are separated from their surroundings and organized internally by means of flexible lipid membranes. In fact, there is consensus that the minimal requirements for self-replicating life processes include the following three features: (1) information carriers (DNA, RNA), (2) a metabolic system, and (3) encapsulation in a container structure [1]. Therefore, encapsulation can be regarded as an essential part of life itself. In nature, membranes are highly diverse interfacial structures that compartmentalize cells [2]. While prokaryotic cells only have an outer plasma membrane and a less-well-developed internal membrane structure, eukaryotic cells have a number of internal membranes associated with the organelles and the nucleus. Many of these membrane structures, including the plasma membrane, are complex layered systems, but with the basic structure of a lipid bilayer. Biomembranes contain hundreds of different lipid species in addition to embedded or peripherally associated membrane proteins and connections to scaffolds such as the cytoskeleton. In vitro, lipid bilayers are spontaneously self-organized structures formed by a large group of amphiphilic lipid molecules in aqueous suspensions. Bilayer formation is driven by the entropic properties of the hydrogen bond network in water in combination with the amphiphilic nature of the lipids. The molecular shapes of the lipid constituents play a crucial role in bilayer formation, and only lipids with approximately cylindrical shapes are able to form extended bilayers. The bilayer structure of biomembranes was discovered by Gorter and Grendel in 1925 [3] using monolayer studies of lipid extracts from red blood cells. Later, a number of conceptual models were developed to rationalize the organization of lipids and proteins in biological membranes. One of the most celebrated is the fluid-mosaic model by Singer and Nicolson (1972) [4]. According to this model, the lipid bilayer component of the membrane is a two-dimensional fluid matrix in which proteins are embedded and diffuse laterally.

  5. Modeling Complex Workflow in Molecular Diagnostics

    PubMed Central

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  6. Specifying and Refining a Complex Measurement Model.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…

  7. Dispersion Modeling in Complex Urban Systems

    EPA Science Inventory

    Models are used to represent real systems in an understandable way. They take many forms. A conceptual model explains the way a system works. In environmental studies, for example, a conceptual model may delineate all the factors and parameters for determining how a particle move...

  8. Epidemiological models of Mycobacterium tuberculosis complex infections.

    PubMed

    Ozcaglar, Cagri; Shabbeer, Amina; Vandenberg, Scott L; Yener, Bülent; Bennett, Kristin P

    2012-04-01

    The resurgence of tuberculosis in the 1990s and the emergence of drug-resistant tuberculosis in the first decade of the 21st century increased the importance of epidemiological models for the disease. Due to the slow progression of tuberculosis, the transmission dynamics and its long-term effects can often be better observed and predicted using simulations of epidemiological models. This study provides a review of earlier studies on modeling different aspects of tuberculosis dynamics. The models simulate tuberculosis transmission dynamics, treatment, drug resistance, control strategies for increasing compliance to treatment, HIV/TB co-infection, and patient groups. The models are based on various mathematical systems, such as systems of ordinary differential equations, simulation models, and Markov Chain Monte Carlo methods. The inferences from the models are justified by case studies and statistical analysis of TB patient datasets. PMID:22387570
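
    Most of the compartmental models surveyed reduce to systems of ODEs. A minimal sketch of the genre, a generic SEIR-type system with slow progression from latency to active disease (hypothetical rates, not a specific model from the review):

        from scipy.integrate import solve_ivp

        def tb_model(t, y, beta=8.0, k=0.005, r=2.0, mu=0.02):
            """S: susceptible, L: latent, I: infectious (rates per year).
            The small progression rate k is what makes TB dynamics slow."""
            S, L, I = y
            N = S + L + I
            dS = mu * N - beta * S * I / N - mu * S + r * I   # treated return to S
            dL = beta * S * I / N - (k + mu) * L
            dI = k * L - (r + mu) * I
            return [dS, dL, dI]

        sol = solve_ivp(tb_model, [0, 200], [0.99, 0.0, 0.01], max_step=0.5)
        # sol.y traces two centuries of dynamics; the slow latency stretches the epidemic.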

  9. Disentangling nestedness from models of ecological complexity.

    PubMed

    James, Alex; Pitchford, Jonathan W; Plank, Michael J

    2012-07-12

    Complex networks of interactions are ubiquitous and are particularly important in ecological communities, in which large numbers of species exhibit negative (for example, competition or predation) and positive (for example, mutualism) interactions with one another. Nestedness in mutualistic ecological networks is the tendency for ecological specialists to interact with a subset of species that also interact with more generalist species. Recent mathematical and computational analysis has suggested that such nestedness increases species richness. By examining previous results and applying computational approaches to 59 empirical data sets representing mutualistic plant–pollinator networks, we show that this statement is incorrect. A simpler metric, the number of mutualistic partners a species has, is a much better predictor of individual species survival and hence community persistence. Nestedness is, at best, a secondary covariate rather than a causative factor for biodiversity in mutualistic communities. Analysis of complex networks should be accompanied by analysis of simpler, underpinning mechanisms that drive multiple higher-order network properties.

  10. Studying complex chemistries using PLASIMO's global model

    NASA Astrophysics Data System (ADS)

    Koelman, PMJ; Tadayon Mousavi, S.; Perillo, R.; Graef, WAAD; Mihailova, DB; van Dijk, J.

    2016-02-01

    The PLASIMO simulation software is used to construct a global model of a CO2 plasma. A DBD plasma between two coaxial cylinders is considered, driven by a triangular input power pulse. The plasma chemistry is studied during this power pulse and in the afterglow. The model consists of 71 species that interact in 3500 reactions. Preliminary results from the model are presented. The model has been validated by comparing its results with those presented in Kozák et al. (Plasma Sources Science and Technology 23(4) p. 045004, 2014). Good qualitative agreement has been reached; potential sources of the remaining discrepancies are discussed extensively.

  11. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

  12. Information, complexity and efficiency: The automobile model

    SciTech Connect

    Allenby, B.

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

  13. Sensitivity Analysis in Complex Plasma Chemistry Models

    NASA Astrophysics Data System (ADS)

    Turner, Miles

    2015-09-01

    The purpose of a plasma chemistry model is prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large, a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure, the so-called Morris method, to identify sensitive rate constants. We investigate the topical example of helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 ``Biomedical Applications of Atmospheric Pressure Plasmas.''
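
    The Morris method referenced here scores each input by its "elementary effects", finite-difference responses to one-at-a-time parameter steps, and flags inputs with large mean effects as sensitive. A simplified same-base (radial) variant of the screening idea, applied to a toy function rather than a plasma chemistry set:

        import numpy as np

        def morris_screen(model, n_params, n_samples=50, delta=0.1, seed=0):
            """Mean |elementary effect| per parameter (one-at-a-time screening).
            Parameters are scaled to [0, 1]; `model` maps a vector to a scalar."""
            rng = np.random.default_rng(seed)
            effects = np.zeros((n_samples, n_params))
            for j in range(n_samples):
                x = rng.uniform(0, 1 - delta, n_params)
                base = model(x)
                for i in range(n_params):
                    x_step = x.copy()
                    x_step[i] += delta
                    effects[j, i] = abs(model(x_step) - base) / delta
            return effects.mean(axis=0)

        # Toy "chemistry": only 2 of 10 rate constants matter.
        f = lambda x: 5 * x[0] + 2 * x[3] ** 2 + 0.01 * x.sum()
        print(morris_screen(f, 10).round(2))   # large scores flag sensitive inputs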

  14. Modeling Power Systems as Complex Adaptive Systems

    SciTech Connect

    Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  15. Integrated Modeling of Complex Optomechanical Systems

    NASA Astrophysics Data System (ADS)

    Andersen, Torben; Enmark, Anita

    2011-09-01

    Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons; first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top ranked specialists found their way to Kiruna and we believe that these proceedings will prove valuable during much future work.

  16. Comparison of Thermal Modeling Approaches for Complex Measurement Equipment

    NASA Astrophysics Data System (ADS)

    Schalles, M.; Thewes, R.

    2014-04-01

    Thermal modeling is used for thermal investigation and optimization of sensors, instruments, and structures. Here, results depend on the chosen modeling approach, the complexity of the model, the quality of the material data, and the information about the heat transport conditions of the object under investigation. Despite their widespread application, the advantages and limits of the modeling approaches are partially unknown. For comparison of different modeling approaches, a simplified and analytically describable demonstration object is used. This object is a steel rod under well-defined heat-exchange conditions with the environment. For this, analytically describable models, equivalent electrical circuits, and simple and complex finite-element-analysis models are presented. Using the different approaches, static and dynamic simulations are performed and temperatures and temperature fields in the rod are estimated. The results of those calculations, comparisons with measurements, and identification of the sensitive points of the approaches are shown. General conclusions for thermal modeling of complex equipment are drawn.
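
    The equivalent-electrical-circuit approach mentioned here treats the rod as a chain of lumped thermal resistances and capacitances. A sketch of such a lumped model with purely hypothetical parameters (not the paper's demonstration object or its values):

        import numpy as np

        n, dt = 10, 0.1                      # nodes along the rod, time step [s]
        k_link = 5.0                         # conductance between nodes [W/K]
        h_env = 0.5                          # convective conductance to air [W/K]
        C = 50.0                             # heat capacity per node [J/K]
        T_env = 20.0                         # ambient temperature [C]
        T = np.full(n, T_env)
        T[0] = 100.0                         # one end held at 100 C

        for _ in range(20000):               # explicit stepping to steady state
            flux = h_env * (T_env - T)                 # convective losses
            flux[:-1] += k_link * (T[1:] - T[:-1])     # conduction from the right
            flux[1:] += k_link * (T[:-1] - T[1:])      # conduction from the left
            T += dt * flux / C
            T[0] = 100.0                               # fixed-temperature boundary
        print(T.round(1))                    # temperature decays along the rod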

  17. Modeling complex systems in the geosciences

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2013-03-01

    Many geophysical phenomena can be described as complex systems, involving phenomena such as extreme or "wild" events that often do not follow the Gaussian distribution that would be expected if the events were simply random and uncorrelated. For instance, some geophysical phenomena like earthquakes show a much higher occurrence of relatively large values than would a Gaussian distribution and so are examples of the "Noah effect" (named by Benoit Mandelbrot for the exceptionally heavy rain in the biblical flood). Other geophysical phenomena are examples of the "Joseph effect," in which a state is especially persistent, such as a spell of multiple consecutive hot days (heat waves) or several dry summers in a row. The Joseph effect was named after the biblical story in which Joseph's dream of seven fat cows and seven thin ones predicted 7 years of plenty followed by 7 years of drought.

  18. A simple model clarifies the complicated relationships of complex networks

    NASA Astrophysics Data System (ADS)

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-08-01

    Real-world networks such as the Internet and WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits and understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation.

  19. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  20. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  1. Blueprints for Complex Learning: The 4C/ID-Model.

    ERIC Educational Resources Information Center

    van Merrienboer, Jeroen J. G.; Clark, Richard E.; de Croock, Marcel B. M.

    2002-01-01

    Describes the four-component instructional design system (4C/ID-model) developed for the design of training programs for complex skills. Discusses the structure of training blueprints for complex learning and associated instructional methods, focusing on learning tasks, supportive information, just-in-time information, and part-task practice.…

  2. Model complexity and performance: How far can we simplify?

    NASA Astrophysics Data System (ADS)

    Raick, C.; Soetaert, K.; Grégoire, M.

    2006-07-01

    Handling model complexity and reliability is a key area of research today. While complex models containing sufficient detail have become possible due to increased computing power, they often lead to too much uncertainty. On the other hand, very simple models often crudely oversimplify the real ecosystem and cannot be used for management purposes. Starting from a complex and validated 1D pelagic ecosystem model of the Ligurian Sea (NW Mediterranean Sea), we derived simplified aggregated models in which either the unbalanced algal growth, the functional group diversity or the explicit description of the microbial loop was sacrificed. To overcome the problem of data availability with adequate spatial and temporal resolution, the outputs of the complex model are used as the baseline of perfect knowledge to calibrate the simplified models. Objective criteria of model performance were used to compare the simplified models' results to the complex model output and to the available data at the DYFAMED station in the central Ligurian Sea. We show that even the simplest (NPZD) model is able to represent the global ecosystem features described by the complex model (e.g. primary and secondary production, particulate organic matter export flux, etc.). However, a certain degree of sophistication in the formulation of some biogeochemical processes is required to produce realistic behaviors (e.g. phytoplankton competition, potential carbon or nitrogen limitation of zooplankton ingestion, the model's trophic closure, etc.). In general, a 9-state-variable model that has the functional group diversity removed, but retains the bacterial loop and the unbalanced algal growth, performs best.

  3. Prequential Analysis of Complex Data with Adaptive Model Reselection†

    PubMed Central

    Clarke, Jennifer; Clarke, Bertrand

    2010-01-01

    In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative to more traditional statistical methods that focus on the inference of parameters of the data-generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood-weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade-off between model-list bias and model-list variability in cases where the data are very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e. the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias–variance tradeoff in statistical modeling. PMID:20617104
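
    The core device, a convex combination of two predictors whose weight is updated from accumulated prequential errors, can be sketched generically. In this toy, a running mean and a last-value predictor stand in for the paper's adaptively re-chosen model averages; only the combination-and-reweighting step is illustrated:

        import numpy as np

        rng = np.random.default_rng(1)
        y = np.cumsum(rng.normal(0, 1, 300))   # a synthetic data sequence
        w = 0.5                                 # convex weight on predictor A
        err_a = err_b = 1e-9                    # accumulated squared errors
        preds = []
        for t in range(1, len(y)):
            pred_a = y[:t].mean()               # "model average" A
            pred_b = y[t - 1]                   # "model average" B
            preds.append(w * pred_a + (1 - w) * pred_b)
            err_a += (y[t] - pred_a) ** 2       # prequential: score, then update
            err_b += (y[t] - pred_b) ** 2
            w = (1 / err_a) / (1 / err_a + 1 / err_b)   # inverse-error weighting
        # For this random-walk series the weight drifts toward predictor B.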

  4. Size and complexity in model financial systems.

    PubMed

    Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M

    2012-11-01

    The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases.

  5. Size and complexity in model financial systems

    PubMed Central

    Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M.

    2012-01-01

    The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in “confidence” in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020

  6. Penetration of Milk-Derived Antimicrobial Peptides into Phospholipid Monolayers as Model Biomembranes

    PubMed Central

    Rogalska, Ewa; Więcław-Czapla, Katarzyna

    2013-01-01

    Three antimicrobial peptides derived from bovine milk proteins were examined with regard to penetration into insoluble monolayers formed with 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) or 1,2-dipalmitoyl-sn-glycero-3-phospho-rac-(1-glycerol) sodium salt (DPPG). Effects on surface pressure (Π) and electric surface potential (ΔV) were measured, Π with a platinum Wilhelmy plate and ΔV with a vibrating plate. The penetration measurements were performed under stationary diffusion conditions and upon compression of the monolayers. The two types of measurements showed greatly different effects of the peptide-lipid interactions. Results of the stationary penetration show that the peptide interactions with the DPPC monolayer are weak, repulsive, and nonspecific, while the interactions with the DPPG monolayer are significant, attractive, and specific. These results are in accord with the fact that antimicrobial peptides disrupt bacterial membranes (negative) while no significant effect on the host membranes (neutral) is observed. No such discrimination was revealed by the compression isotherms. The latter indicate that squeezing the penetrant out of the monolayer upon compression does not allow the penetration equilibrium to be established, so the monolayer remains supersaturated with the penetrant and shows an under-equilibrium orientation over practically the entire compression range. PMID:24455264

  7. Imaging coexisting fluid domains in biomembrane models coupling curvature and line tension.

    PubMed

    Baumgart, Tobias; Hess, Samuel T; Webb, Watt W

    2003-10-23

    Lipid bilayer membranes, ubiquitous in biological systems and closely associated with cell function, exhibit rich shape-transition behaviour, including bud formation and vesicle fission. Membranes formed from multiple lipid components can laterally separate into coexisting liquid phases, or domains, with distinct compositions. This process, which may resemble raft formation in cell membranes, has been directly observed in giant unilamellar vesicles. Detailed theoretical frameworks link the elasticity of domains and their boundary properties to the shape adopted by membranes and the formation of particular domain patterns, but it has been difficult to experimentally probe and validate these theories. Here we show that high-resolution fluorescence imaging using two dyes preferentially labelling different fluid phases directly provides a correlation between domain composition and local membrane curvature. Using freely suspended membranes of giant unilamellar vesicles, we are able to optically resolve curvature and line tension interactions of circular, stripe and ring domains. We observe long-range domain ordering in the form of locally parallel stripes and hexagonal arrays of circular domains, curvature-dependent domain sorting, and membrane fission into separate vesicles at domain boundaries. By analysing our observations using available membrane theory, we are able to provide experimental estimates of the boundary tension between fluid bilayer domains. PMID:14574408

  8. Interactions and dynamics of two extended conformation adapting phosphatidylcholines in model biomembranes.

    PubMed

    Amirkavei, Mooud; Kinnunen, Paavo K J

    2016-02-01

    To obtain molecular-level insight into the biophysics of the apoptosis-promoting phospholipid 1-palmitoyl-2-azelaoyl-sn-glycero-3-phosphocholine (PazePC), we studied its partitioning into different lipid phases by isothermal titration calorimetry (ITC). To aid the interpretation of these data for PazePC, we additionally characterized, by both ITC and fluorescence spectroscopy, the fluorescent phospholipid analog 1-palmitoyl-2-{6-[(7-nitro-2-1,3-benzoxadiazol-4-yl)amino]hexanoyl}-sn-glycero-3-phosphocholine (NBD-C6-PC), which, similarly to PazePC, can adopt an extended conformation in lipid bilayers. With the NBD-hexanoyl chain reversing its direction and extending out of the bilayer into the aqueous space, 7-nitro-2,1,3-benzoxadiazol-4-yl (NBD) becomes accessible to water-soluble dithionite, which reduces it to a non-fluorescent product. Our results suggest that these phospholipid derivatives first partition into and penetrate the outer bilayer leaflet of liquid-disordered-phase liposomes composed of unsaturated 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC). Once PazePC and NBD-C6-PC reach about 2 mol% of the overall lipid content, flip-flop from the outer into the inner bilayer leaflet commences. Interestingly, the presence of 40 mol% cholesterol in POPC liposomes did not abrogate the partitioning of PazePC into the liquid ordered phase. In contrast, only insignificant partitioning of PazePC and NBD-C6-PC into sphingomyelin/cholesterol liposomes was evident, highlighting a specific membrane permeability-barrier function of this particular lipid composition against oxidatively truncated PazePC and emphasizing the importance of detailed characterization of the biophysical properties of the membranes found in different cellular organelles as barriers for lipid-mediated cellular signals in processes such as apoptosis. Our data suggest that NBD-C6-PC is a useful fluorescent probe for studying the cellular dynamics of oxidized phospholipid species such as PazePC. PMID:26656184

  9. Penetration of milk-derived antimicrobial peptides into phospholipid monolayers as model biomembranes.

    PubMed

    Barzyk, Wanda; Rogalska, Ewa; Więcław-Czapla, Katarzyna

    2013-01-01

    Three antimicrobial peptides derived from bovine milk proteins were examined with regard to penetration into insoluble monolayers formed with 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) or 1,2-dipalmitoyl-sn-glycero-3-phospho-rac-(1-glycerol) sodium salt (DPPG). Effects on surface pressure (Π) and electric surface potential (ΔV) were measured, Π with a platinum Wilhelmy plate and ΔV with a vibrating plate. The penetration measurements were performed under stationary diffusion conditions and upon compression of the monolayers. The two types of measurements showed greatly different effects of the peptide-lipid interactions. Results of the stationary penetration show that the peptide interactions with the DPPC monolayer are weak, repulsive, and nonspecific, while the interactions with the DPPG monolayer are significant, attractive, and specific. These results are in accord with the fact that antimicrobial peptides disrupt bacterial membranes (negative) while no significant effect on the host membranes (neutral) is observed. No such discrimination was revealed by the compression isotherms. The latter indicate that squeezing the penetrant out of the monolayer upon compression does not allow the penetration equilibrium to be established, so the monolayer remains supersaturated with the penetrant and shows an under-equilibrium orientation over practically the entire compression range. PMID:24455264

  10. Complexity vs. Simplicity: Tradeoffs in Integrated Water Resources Models

    NASA Astrophysics Data System (ADS)

    Gonda, J.; Elshorbagy, A. A.; Wheater, H. S.; Razavi, S.

    2014-12-01

    Integrated Water Resources Management is an interdisciplinary approach to managing water. Integration often involves linking hydrologic processes with socio-economic development. When implemented through a simulation or optimization model, complexities arise due to the large data requirements, which make such models difficult for end users to implement. Not only is computational efficiency at stake; the model also becomes cumbersome for future users. To overcome this issue the model may be simplified through emulation, at the expense of information loss. Herein lies a tradeoff: the complexity of an accurate, detailed model versus the transparency and salience of a simplified model. This presentation examines the role of model emulation in simplifying a water allocation model. The case study is located in Southern Alberta, Canada, where water is allocated between agricultural, municipal, environmental and energy sectors. Currently, water allocation is modeled through a detailed optimization model, WRMM. Although WRMM can allocate water on a priority basis, it lacks the simplicity needed by the end user. The proposed System Dynamics-based model, SWAMP 2.0, emulates this optimization model at two scales of complexity: a regional scale that spatially aggregates individual components, reducing the complexity of the original model, and a local scale that retains the original detail and is contained within the regional scale. This two-tiered emulation presents relevant spatial scales to water managers, who may not be interested in all the details of WRMM. By evaluating the accuracy of SWAMP 2.0 against the original allocation model, the tradeoff of accuracy for simplicity can be assessed.

  11. Dynamic modeling of structures from measured complex modes

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. R.

    1982-01-01

    A technique is presented to use a set of identified complex modes together with an analytical mathematical model of a structure under test to compute improved mass, stiffness and damping matrices. A set of identified normal modes, computed from the measured complex modes, is used in the mass orthogonality equation to compute an improved mass matrix. This eliminates possible errors that may result from using approximated complex modes as normal modes. The improved mass matrix, the measured complex modes and the higher analytical modes are then used to compute the improved stiffness and damping matrices. The number of degrees-of-freedom of the improved model is limited to equal the number of elements in the measured modal vectors. A simulated experiment shows considerable improvements, in the system's analytical dynamic model, over the frequency range of the given measured modal information.
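
    The mass orthogonality step described here has a standard closed form (a Berman-type update): among matrices satisfying Phi^T M Phi = I for the measured normal modes Phi, it returns one that stays close to the analytical mass matrix M_a. A numpy sketch with hypothetical toy matrices standing in for test data:

        import numpy as np

        def improved_mass(M_a, Phi):
            """Berman-type update: returns M with Phi.T @ M @ Phi = I while
            staying close to the analytical mass matrix M_a."""
            m_a = Phi.T @ M_a @ Phi                    # modal mass matrix
            m_inv = np.linalg.inv(m_a)
            I = np.eye(m_a.shape[0])
            return M_a + M_a @ Phi @ m_inv @ (I - m_a) @ m_inv @ Phi.T @ M_a

        M_a = np.diag([2.0, 1.0, 1.5])                 # analytical mass (toy)
        Phi = np.array([[0.6, 0.3],                    # two "measured" modes
                        [0.5, -0.4],
                        [0.4, 0.5]])
        M = improved_mass(M_a, Phi)
        print(np.round(Phi.T @ M @ Phi, 6))            # ~ identity matrix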

  12. Bladder Cancer: A Simple Model Becomes Complex

    PubMed Central

    Pierro, Giovanni Battista Di; Gulia, Caterina; Cristini, Cristiano; Fraietta, Giorgio; Marini, Lorenzo; Grande, Pietro; Gentile, Vincenzo; Piergentili, Roberto

    2012-01-01

    Bladder cancer is one of the most frequent malignancies in developed countries and is also characterized by a high number of recurrences. Despite this, several authors in the past reported that only two altered molecular pathways may genetically explain all cases of bladder cancer: one involving the FGFR3 gene, and the other involving the TP53 gene. Mutations in either of these two genes are usually predictive of the malignancy's final outcome. This cancer may also be further classified as low-grade tumors, which are always papillary and in most cases superficial, and high-grade tumors, which are not necessarily papillary and are often invasive. This simple way of considering the pathology has changed strongly in the last few years, with the development of genome-wide studies on expression profiling and the discovery of small non-coding RNAs affecting gene expression. An easy search in the OMIM (On-line Mendelian Inheritance in Man) database using "bladder cancer" as a query reveals approximately 150 genes connected in some way to this pathology, and some authors report that altered gene expression (up- or down-regulation) in this disease may involve up to 500 coding sequences for low-grade tumors and up to 2300 for high-grade tumors. In many clinical cases, mutations inside the coding sequences of the two genes mentioned above were not found, but their expression changed; this indicates that epigenetic modifications may also play an important role in its development. Indeed, several reports have been published about genome-wide methylation in these neoplastic tissues, and an increasing number of small non-coding RNAs are either up- or down-regulated in bladder cancer, indicating that impaired gene expression may also pass through these metabolic pathways. Taken together, these data reveal that bladder cancer is far from being a simple model of malignancy. In the present review, we summarize recent progress in the genome-wide analysis of bladder cancer, and analyse non

  13. Reassessing Geophysical Models of the Bushveld Complex in 3D

    NASA Astrophysics Data System (ADS)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al., 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri et al., 2001; Webb et al., 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still ongoing debate as to which model is correct. All of the models published up to now have been constructed in 2 or 2.5 dimensions, which is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account the effects of variations in geometry and in the geophysical properties of lithologies in a full three-dimensional sense, and these variations affect the shape and amplitude of the calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential-field modelling software, including the crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less

  14. The Use of Behavior Models for Predicting Complex Operations

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2010-01-01

    Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.

  15. Modeling of Protein Binary Complexes Using Structural Mass Spectrometry Data

    SciTech Connect

    Amisha Kamal,J.; Chance, M.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl-radical-mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints (positive and/or negative) in the docking step and also to decide the type of energy filter (electrostatics or desolvation) in the successive energy-filtering step. Using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top-scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also use this approach to provide a structural model for the actin/cofilin complex, for which no crystal or NMR structure is available.

  16. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    PubMed Central

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
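
    Two of the geometric measurements described, surface area and surface-enclosed volume, have compact formulas on a triangle mesh: the area is the sum of the triangle areas, and for a closed, consistently oriented mesh the volume follows from the divergence theorem as a sum of signed tetrahedron volumes. A generic numpy sketch (not the paper's toolchain):

        import numpy as np

        def area_and_volume(vertices, faces):
            """Surface area and enclosed volume of a closed, consistently
            oriented triangle mesh (signed-tetrahedron formula)."""
            v = vertices[faces]                          # (n_faces, 3, 3)
            cross = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
            area = 0.5 * np.linalg.norm(cross, axis=1).sum()
            volume = np.einsum('ij,ij->i', v[:, 0], cross).sum() / 6.0
            return area, abs(volume)

        # Sanity check on a unit cube (12 triangles): area = 6, volume = 1.
        verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
        faces = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
                          [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
                          [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
        print(area_and_volume(verts, faces))             # -> (6.0, 1.0)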

  17. Routine Discovery of Complex Genetic Models using Genetic Algorithms

    PubMed Central

    Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.

    2010-01-01

    Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983
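
    The class of models being discovered, SNPs that influence disease risk only through their interactions, is easy to illustrate with a two-SNP XOR-style penetrance table: neither SNP shows a marginal effect, yet the genotype combination determines risk. A sketch simulating case data from such a model (hypothetical penetrance values, not one of the paper's GA-discovered models):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10000
        snp1 = rng.binomial(2, 0.5, n)     # genotypes: 0/1/2 minor alleles
        snp2 = rng.binomial(2, 0.5, n)

        # XOR-like penetrance table: risk depends only on the combination.
        penetrance = np.array([[0.0, 0.1, 0.0],
                               [0.1, 0.0, 0.1],
                               [0.0, 0.1, 0.0]])
        disease = rng.random(n) < penetrance[snp1, snp2]

        # Marginal penetrance is flat (~0.05 for every single-SNP genotype) ...
        print([round(float(disease[snp1 == g].mean()), 3) for g in (0, 1, 2)])
        # ... so the effect is detectable only by modeling the SNPs jointly.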

  18. Routine Discovery of Complex Genetic Models using Genetic Algorithms.

    PubMed

    Moore, Jason H; Hahn, Lance W; Ritchie, Marylyn D; Thornton, Tricia A; White, Bill C

    2004-02-01

    Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes.

  19. Geometric modeling of subcellular structures, organelles, and multiprotein complexes.

    PubMed

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2012-12-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multiprotein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes.

  20. Between complexity of modelling and modelling of complexity: An essay on econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, C.

    2013-09-01

    Econophysics is an emerging field dealing with complex systems and emergent properties. A deeper analysis of themes studied by econophysicists shows that research conducted in this field can be decomposed into two different computational approaches: “statistical econophysics” and “agent-based econophysics”. This methodological scission complicates the definition of the complexity used in econophysics. Therefore, this article aims to clarify what kind of emergences and complexities we can find in econophysics in order to better understand, on one hand, the current scientific modes of reasoning this new field provides; and on the other hand, the future methodological evolution of the field.

  1. Dynamic bio-adhesion of polymer nanoparticles on MDCK epithelial cells and its impact on bio-membranes, endocytosis and paracytosis

    NASA Astrophysics Data System (ADS)

    He, Bing; Yuan, Lan; Dai, Wenbing; Gao, Wei; Zhang, Hua; Wang, Xueqing; Fang, Weigang; Zhang, Qiang

    2016-03-01

    Interest in nanotechnology for biomedical applications is increasing at an unprecedented rate. Nanosystems intended for clinical use must generally cross the primary biological barrier formed by epithelial cells, yet little is currently known about how the dynamic bio-adhesion of nanosystems influences bio-membranes, endocytosis, and transcytosis. This was investigated here using polymer nanoparticles (PNs) and MDCK epithelial cells as models. Firstly, the adhesion of PNs on cell membranes was found to be time-dependent, with a shift in both location and dispersion pattern: from lateral adhesion of mainly mono-dispersed PNs initially to apical coverage by PN aggregates later. Interestingly, this dynamic bio-adhesion affected only the endocytosis of PNs, not their transcytosis. Moreover, the endocytosis of PNs was not a constant process: a GM1-dependent CDE (caveolae-dependent endocytosis) pathway dominated in the early stage, with a CME (clathrin-mediated endocytosis) pathway for PN aggregates co-existing at a later stage, in accordance with the adhesion features of the PNs, suggesting that PN adhesion patterns modulate the endocytosis pathways. Next, PN adhesion was found to alter the structure of cell junctions by changing extra- and intracellular calcium levels, enhancing the paracellular transport of small molecules but not enough to appreciably increase the passage of the PNs themselves. Finally, FRAP and other techniques demonstrated a clear impact of PN adhesion on membrane conformation, independent of adhesion location and time, which might lower the threshold for the internalization of PNs, even as aggregates. Generally, these findings confirm that the transport pathway mechanism of PNs through epithelial cells is rather

  2. Using fMRI to Test Models of Complex Cognition

    ERIC Educational Resources Information Center

    Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam

    2008-01-01

    This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…

  3. Tips on Creating Complex Geometry Using Solid Modeling Software

    ERIC Educational Resources Information Center

    Gow, George

    2008-01-01

    Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…

  4. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that can capture its characteristics is urgently needed. Such a model has been lacking, however, because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is relatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network is topologically robust to random failures of market members but fragile against deliberate attacks, and that it can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
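
    The robustness result (tolerant of random failures, fragile under degree-targeted attacks) can be reproduced generically with networkx; the Barabasi-Albert stand-in network and the 20% removal fraction below are illustrative assumptions, not the paper's BPT data.

      import random
      import networkx as nx

      def giant_component_after_removal(graph, targeted, frac=0.2, seed=0):
          # Remove a fraction of nodes at random or by descending degree,
          # then measure the surviving giant connected component.
          g = graph.copy()
          victims = sorted(g, key=g.degree, reverse=True) if targeted else list(g)
          if not targeted:
              random.Random(seed).shuffle(victims)
          g.remove_nodes_from(victims[:int(frac * g.number_of_nodes())])
          return max((len(c) for c in nx.connected_components(g)), default=0)

      g = nx.barabasi_albert_graph(1000, 2, seed=0)   # power-law degrees
      print(giant_component_after_removal(g, targeted=False),
            giant_component_after_removal(g, targeted=True))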

  5. Zebrafish as an emerging model for studying complex brain disorders

    PubMed Central

    Kalueff, Allan V.; Stewart, Adam Michael; Gerlai, Robert

    2014-01-01

    The zebrafish (Danio rerio) is rapidly becoming a popular model organism in pharmacogenetics and neuropharmacology. Both larval and adult zebrafish are currently used to increase our understanding of brain function, dysfunction, and their genetic and pharmacological modulation. Here we review the developing utility of zebrafish in the analysis of complex brain disorders (including, for example, depression, autism, psychoses, drug abuse and cognitive disorders), also covering zebrafish applications towards the goal of modeling major human neuropsychiatric and drug-induced syndromes. We argue that zebrafish models of complex brain disorders and drug-induced conditions have become a rapidly emerging critical field in translational neuropharmacology research. PMID:24412421

  6. Artificial biomembranes stabilized over spin coated hydrogel scaffolds. Crosslinking agent nature induces wrinkled or flat surfaces on the hydrogel.

    PubMed

    González-Henríquez, C M; Pizarro-Guerra, G C; Córdova-Alarcón, E N; Sarabia-Vallejos, M A; Terraza-Inostroza, C A

    2016-03-01

    Hydrogel films possess the ability to retain water and deliver it to a phospholipid bilayer composed mainly of DPPC (1,2-dipalmitoyl-sn-glycero-3-phosphocholine); moisture in the medium favors the stability of an artificial biomembrane subjected to repetitive heating cycles. This hypothesis holds when the hydrogel film used as a scaffold presents a flat surface morphology and a high capacity for water release. On the other hand, when the sample presents a wrinkled topography (periodic undulations), free lateral molecular movement of the bilayer is reduced, disfavoring the occurrence of clear phases/phase transitions as temperature is applied. Hydrogel films were prepared using HEMA (hydroxyethyl methacrylate) with different crosslinking agents and initiators. The reaction mixture was spread over hydrophilic silicon wafers by spin coating. The resultant films were then exposed to UV light to promote polymer chain crosslinking and interactions between the hydrogel and the substrate; this process is also known to generate a tensile stress mismatch between different hydrogel strata, producing an out-of-plane net force that generates ordered undulations or collapsed crystals at the surface. DPPC bilayers were then deposited over the hydrogel using the Langmuir-Blodgett technique. Surface morphology was characterized to clarify the behavior of these films. The data corroborate the stability of the DPPC membrane, making it possible to detect phases/phase transitions by ellipsometric methods and atomic force microscopy owing to the high hydration level. This system is intended for use as a biosensor through the insertion of transmembrane proteins or peptides that detect minimal variations of an analyte in the environment; the stability and behavior of the artificial biomembrane are fundamental for this purpose.

  7. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.

    PubMed

    Taha, Mohamed; Khan, Imran; Coutinho, João A P

    2016-04-01

    With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps protect human cells against the oxidative damage implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as the primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass-electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. The overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted using the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solution.

  8. Pedigree models for complex human traits involving the mitochondrial genome

    SciTech Connect

    Schork, N.J.; Guo, S.W.

    1993-12-01

    Recent biochemical and molecular-genetic discoveries concerning variations in human mtDNA have suggested a role for mtDNA mutations in a number of human traits and disorders. Although the importance of these discoveries cannot be emphasized enough, the complex natures of mitochondrial biogenesis, mutant mtDNA phenotype expression, and the maternal inheritance pattern exhibited by mtDNA transmission make it difficult to develop models that can be used routinely in pedigree analyses to quantify and test hypotheses about the role of mtDNA in the expression of a trait. In the present paper, the authors describe complexities inherent in mitochondrial biogenesis and genetic transmission and show how these complexities can be incorporated into appropriate mathematical models. The authors offer a variety of likelihood-based models which account for the complexities discussed. The derivation of the models is meant to stimulate the construction of statistical tests for putative mtDNA contribution to a trait. Results of simulation studies which make use of the proposed models are described. The results of the simulation studies suggest that, although pedigree models of mtDNA effects can be reliable, success in mapping chromosomal determinants of a trait does not preclude the possibility that mtDNA determinants exist for the trait as well. Shortcomings inherent in the proposed models are described in an effort to expose areas in need of additional research. 58 refs., 5 figs., 2 tabs.

  9. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-09-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty in the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy that alleviates this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
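
    The Lorenz-95 testbed is compact enough to reproduce; below is a standard fourth-order Runge-Kutta integration with the conventional settings (40 variables, forcing F = 8), which are textbook choices rather than values taken from the paper.

      import numpy as np

      def lorenz95_rhs(x, forcing=8.0):
          # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic indices
          return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

      def rk4_step(x, dt=0.05):
          k1 = lorenz95_rhs(x)
          k2 = lorenz95_rhs(x + 0.5 * dt * k1)
          k3 = lorenz95_rhs(x + 0.5 * dt * k2)
          k4 = lorenz95_rhs(x + dt * k3)
          return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

      x = 8.0 * np.ones(40)
      x[0] += 0.01                    # small perturbation triggers chaos
      for _ in range(1000):
          x = rk4_step(x)
      print(x[:5])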

  10. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  12. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  13. Minimal model for complex dynamics in cellular processes.

    PubMed

    Suguna, C; Chowdhury, K K; Sinha, S

    1999-11-01

    Cellular functions are controlled and coordinated by the complex circuitry of biochemical pathways regulated by genetic and metabolic feedback processes. This paper aims to show, with the help of a minimal model of a regulated biochemical pathway, that the common nonlinearities and control structures present in biomolecular interactions are capable of eliciting a variety of functional dynamics, such as homeostasis, periodic, complex, and chaotic oscillations, including transients, that are observed in various cellular processes.
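
    As a generic illustration of how a small regulated pathway yields oscillations, a Goodwin-type negative feedback loop (a hypothetical stand-in, not the paper's model) can be integrated with scipy; sustained oscillation requires a steep repression term (Hill coefficient above about 8 for equal rate constants).

      import numpy as np
      from scipy.integrate import solve_ivp

      def goodwin(t, y, a=20.0, n=10):
          # End product z represses production of x; x drives y, y drives z
          x, w, z = y
          return [a / (1.0 + z ** n) - x, x - w, w - z]

      sol = solve_ivp(goodwin, (0, 100), [0.1, 0.2, 0.3], max_step=0.05)
      tail = sol.y[2, sol.t > 50]
      print(round(tail.min(), 2), round(tail.max(), 2))  # sustained swing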

  14. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  15. On explicit algebraic stress models for complex turbulent flows

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Speziale, C. G.

    1992-01-01

    Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.

  16. Complex groundwater flow systems as traveling agent models.

    PubMed

    López Corona, Oliver; Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis

    2014-01-01

    Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is studied theoretically within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and we provide a set of new partial differential equations for groundwater flow. PMID:25337455
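
    A 1/f claim of this kind is usually checked by regressing log-power on log-frequency; a generic sketch follows, with Brownian noise (expected slope near -2) as the illustrative test series rather than the pumping-test data.

      import numpy as np

      def spectral_slope(series, dt=1.0):
          # Least-squares slope of log power vs log frequency (~ -1 for 1/f)
          x = np.asarray(series, float) - np.mean(series)
          freqs = np.fft.rfftfreq(x.size, d=dt)[1:]
          power = np.abs(np.fft.rfft(x))[1:] ** 2
          return np.polyfit(np.log(freqs), np.log(power), 1)[0]

      rng = np.random.default_rng(1)
      brownian = np.cumsum(rng.normal(size=4096))
      print(spectral_slope(brownian))   # approximately -2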

  18. A Complex Systems Model Approach to Quantified Mineral Resource Appraisal

    USGS Publications Warehouse

    Gettings, M.E.; Bultman, M.W.; Fisher, F.S.

    2004-01-01

    For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
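
    The first step described, evaluating a (fuzzy) cognitive map with (+1, 0, -1) link values under different scenarios, amounts to iterating matrix-vector updates through a squashing function; the three-concept map and the logistic squashing below are illustrative assumptions, not the authors' calibrated model.

      import numpy as np

      def run_cognitive_map(W, state, steps=25):
          # W[i, j] in {-1, 0, +1}: influence of concept i on concept j;
          # logistic squashing keeps activations in (0, 1)
          for _ in range(steps):
              state = 1.0 / (1.0 + np.exp(-(state @ W)))
          return state

      # Hypothetical map: exploration promotes discovered deposits (+1);
      # deposit-driven mining suppresses habitat quality (-1)
      W = np.array([[0,  1,  0],
                    [0,  0, -1],
                    [0,  0,  0]], float)
      print(run_cognitive_map(W, np.array([1.0, 0.5, 0.5])))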

  19. An analytical pressure-transient model for complex reservoir scenarios

    NASA Astrophysics Data System (ADS)

    Gomes, Edmond; Ambastha, Anil K.

    1994-10-01

    Reservoir deposition occurs over long periods of time; thus, most reservoirs are heterogeneous in nature. The presence of various zones and layers of different rock and fluid properties is the usual circumstance in petroleum reservoirs. A secondary recovery operation, such as steam-flooding, results in a composite reservoir situation because of the presence of zones of different fluid properties. Because of reservoir heterogeneity and gravity override effects, the fluid boundaries separating two zones may have complicated or irregular shapes. The purpose of this paper is to develop a new analytical pressure-transient model that can accommodate complex reservoir scenarios resulting from reservoir heterogeneity and from thermal recovery or other fluid-injection operations. Mathematically, our analytical model treats such complex situations as a generalized eigenvalue system resulting in a system of linear equations. The computational difficulties encountered, the approach used to validate the new model, and an application to complex reservoir scenarios are discussed.

  20. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    SciTech Connect

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.

  1. Turing instability in reaction-diffusion models on complex networks

    NASA Astrophysics Data System (ADS)

    Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya

    2016-09-01

    In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
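
    The analysis pattern described, checking which Laplacian eigenvalues of a generated network destabilize the homogeneous steady state, is easy to sketch with networkx; the two-species kinetic coefficients below are hypothetical, chosen only to satisfy the classical Turing conditions.

      import networkx as nx
      import numpy as np

      def unstable_modes(graph, du, dv, fu, fv, gu, gv):
          # For each Laplacian eigenvalue k, the linearization about the
          # homogeneous state is J = [[fu - du*k, fv], [gu, gv - dv*k]];
          # the mode is Turing-unstable iff max Re(eig(J)) > 0
          count = 0
          for k in nx.laplacian_spectrum(graph):
              J = np.array([[fu - du * k, fv], [gu, gv - dv * k]])
              count += np.real(np.linalg.eigvals(J)).max() > 0
          return count

      er = nx.erdos_renyi_graph(100, 0.05, seed=0)
      ws = nx.watts_strogatz_graph(100, 4, 0.1, seed=0)
      for g in (er, ws):    # same kinetics, different network architecture
          print(unstable_modes(g, du=0.1, dv=10.0, fu=1.0, fv=-2.0, gu=2.0, gv=-2.0))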

  2. Modelling nutrient reduction targets - model structure complexity vs. data availability

    NASA Astrophysics Data System (ADS)

    Capell, Rene; Lausten Hansen, Anne; Donnelly, Chantal; Refsgaard, Jens Christian; Arheimer, Berit

    2015-04-01

    In most parts of Europe, macronutrient concentrations and loads in surface water are currently affected by human land use and land management choices. Moreover, current macronutrient concentration and load levels often violate European Water Framework Directive (WFD) targets, and effective measures to reduce these levels are sought after by water managers. Identifying such effective measures in specific target catchments should consider the four key processes of release, transport, retention, and removal, and thus physical catchment characteristics (e.g., soils and geomorphology) as well as management data such as crop distribution and fertilizer application regimes. The BONUS-funded research project Soils2Sea evaluates new, differentiated regulation strategies to cost-efficiently reduce nutrient loads to the Baltic Sea, based on new knowledge of nutrient transport and retention processes between soils and the coast. Within the Soils2Sea framework, we here examine the capability of two integrated hydrological and nutrient transfer models, HYPE and Mike SHE, to model runoff and nitrate flux responses in the 100 km2 Norsminde catchment, Denmark, comparing different model structures and databases. We focus on comparing modelled nitrate reductions within and below the root zone, and evaluate model performance as a function of the available model structure (process representation within the model) and database (temporal forcing data and spatial information). This model evaluation is intended to aid the development of modelling tools for estimating the effect of new nutrient reduction measures at catchment to regional scales, where the available data (both climate forcing and land management) typically become more limited as the targeted spatial scale grows, and may act as a bottleneck for process conceptualization and thus for a model's value as a decision-support tool for differentiated regulation strategies.

  3. A Compact Model for the Complex Plant Circadian Clock.

    PubMed

    De Caluwé, Joëlle; Xiao, Qiying; Hermans, Christian; Verbruggen, Nathalie; Leloup, Jean-Christophe; Gonze, Didier

    2016-01-01

    The circadian clock is an endogenous timekeeper that allows organisms to anticipate and adapt to the daily variations of their environment. The plant clock is an intricate network of interlocked feedback loops, in which transcription factors regulate each other to generate oscillations with expression peaks at specific times of the day. Over the last decade, mathematical modeling approaches have been used to understand the inner workings of the clock in the model plant Arabidopsis thaliana. Those efforts have produced a number of models of ever increasing complexity. Here, we present an alternative model that combines a low number of equations and parameters, similar to the very earliest models, with the complex network structure found in more recent ones. This simple model describes the temporal evolution of the abundance of eight clock gene mRNA/protein and captures key features of the clock on a qualitative level, namely the entrained and free-running behaviors of the wild type clock, as well as the defects found in knockout mutants (such as altered free-running periods, lack of entrainment, or changes in the expression of other clock genes). Additionally, our model produces complex responses to various light cues, such as extreme photoperiods and non-24 h environmental cycles, and can describe the control of hypocotyl growth by the clock. Our model constitutes a useful tool to probe dynamical properties of the core clock as well as clock-dependent processes. PMID:26904049

  4. Deterministic ripple-spreading model for complex networks

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S.; Hines, Evor L.; di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. Differently, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
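
    A heavily simplified sketch of the ripple-spreading idea (not the authors' algorithm): every activated node emits a circular ripple at constant speed, and each node links to whichever node's ripple reaches it first, so a given spatial layout deterministically fixes the topology while randomized layouts recover the stochastic flavor.

      import heapq
      import numpy as np

      def ripple_spreading_links(points, source=0, speed=1.0):
          # Event queue of (arrival_time, emitter, target); a node joins the
          # network when the first ripple reaches it, then emits its own
          pts = np.asarray(points, float)
          n = len(pts)
          dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
          reached = {source: 0.0}
          links = []
          heap = [(dist[source, j] / speed, source, j) for j in range(n) if j != source]
          heapq.heapify(heap)
          while heap and len(reached) < n:
              t, i, j = heapq.heappop(heap)
              if j in reached:
                  continue
              reached[j] = t
              links.append((i, j))
              for k in range(n):
                  if k not in reached:
                      heapq.heappush(heap, (t + dist[j, k] / speed, j, k))
          return links

      rng = np.random.default_rng(2)
      print(ripple_spreading_links(rng.uniform(0, 1, (8, 2))))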

  6. Complex solutions for the scalar field model of the Universe

    NASA Astrophysics Data System (ADS)

    Lyons, Glenn W.

    1992-08-01

    The Hartle-Hawking proposal is implemented for Hawking's scalar field model of the Universe. For this model the complex saddle-point geometries required by the semiclassical approximation to the path integral cannot simply be deformed into real Euclidean and real Lorentzian sections. Approximate saddle points are constructed which are fully complex and have contours of real Lorentzian evolution. The semiclassical wave function is found to give rise to classical spacetimes at late times and extra terms in the Hamilton-Jacobi equation do not contribute significantly to the potential.

  7. Simple and complex models for studying muscle function in walking.

    PubMed

    Pandy, Marcus G

    2003-09-29

    While simple models can be helpful in identifying basic features of muscle function, more complex models are needed to discern the functional roles of specific muscles in movement. In this paper, two very different models of walking, one simple and one complex, are used to study how muscle forces, gravitational forces and centrifugal forces (i.e. forces arising from motion of the joints) combine to produce the pattern of force exerted on the ground. Both the simple model and the complex one predict that muscles contribute significantly to the ground force pattern generated in walking; indeed, both models show that muscle action is responsible for the appearance of the two peaks in the vertical force. The simple model, an inverted double pendulum, suggests further that the first and second peaks are due to net extensor muscle moments exerted about the knee and ankle, respectively. Analyses based on a much more complex, muscle-actuated simulation of walking are in general agreement with these results; however, the more detailed model also reveals that both the hip extensor and hip abductor muscles contribute significantly to vertical motion of the centre of mass, and therefore to the appearance of the first peak in the vertical ground force, in early single-leg stance. This discrepancy in the model predictions is most probably explained by the difference in model complexity. First, movements of the upper body in the sagittal plane are not represented properly in the double-pendulum model, which may explain the anomalous result obtained for the contribution of a hip-extensor torque to the vertical ground force. Second, the double-pendulum model incorporates only three of the six major elements of walking, whereas the complex model is fully 3D and incorporates all six gait determinants. In particular, pelvic list occurs primarily in the frontal plane, so there is the potential for this mechanism to contribute significantly to the vertical ground force, especially

  9. (Relatively) Simple Models of Flow in Complex Terrain

    NASA Astrophysics Data System (ADS)

    Taylor, Peter; Weng, Wensong; Salmon, Jim

    2013-04-01

    The term, "complex terrain" includes both topography and variations in surface roughness and thermal properties. The scales that are affected can differ and there are some advantages to modeling them separately. In studies of flow in complex terrain we have developed 2 D and 3 D models of atmospheric PBL boundary layer flow over roughness changes, appropriate for longer fetches than most existing models. These "internal boundary layers" are especially important for understanding and predicting wind speed variations with distance from shorelines, an important factor for wind farms around, and potentially in, the Great Lakes. The models can also form a base for studying the wakes behind woodlots and wind turbines. Some sample calculations of wind speed evolution over water and the reduced wind speeds behind an isolated woodlot, represented simply in terms of an increase in surface roughness, will be presented. Note that these models can also include thermal effects and non-neutral stratification. We can use the model to deal with 3-D roughness variations and will describe applications to both on-shore and off-shore situations around the Great Lakes. In particular we will show typical results for hub height winds and indicate the length of over-water fetch needed to get the full benefit of siting turbines over water. The linear Mixed Spectral Finite-Difference (MSFD) and non-linear (NLMSFD) models for surface boundary-layer flow over complex terrain have been extended to planetary boundary-layer flow over topography This allows for their use for larger scale regions and increased heights. The models have been applied to successfully simulate the Askervein hill experimental case and we will show examples of applications to more complex terrain, typical of some Canadian wind farms. Output from the model can be used as an alternative to MS-Micro, WAsP or other CFD calculations of topographic impacts for input to wind farm design software.

  10. Assessing Complexity in Learning Outcomes--A Comparison between the SOLO Taxonomy and the Model of Hierarchical Complexity

    ERIC Educational Resources Information Center

    Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka

    2016-01-01

    An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…

  11. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of a method for constructing computer models of complex multiloop branched pipeline networks, based on graph theory and Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network treated as a single hydraulic system. On the basis of multivariant calculations, the causes of existing problems can be identified, the least costly methods for their elimination can be proposed, and recommendations can be made for planning the modernization of pipeline systems and the construction of new sections. The results can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the method was verified by constructing a unified computer model of the heat network for the centralized heat supply of the city of Samara.
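
    The graph-plus-Kirchhoff idea can be illustrated by a linearized "hydraulic resistor network" in which the flow on each edge is proportional to the head difference across it; real pipe laws are nonlinear, so this is only a structural sketch with hypothetical conductances.

      import numpy as np

      def solve_heads(n_nodes, edges, cond, source, sink, inflow=1.0):
          # Kirchhoff node balance L h = q with a weighted graph Laplacian;
          # pinning the sink head to zero removes the Laplacian null space
          L = np.zeros((n_nodes, n_nodes))
          for (i, j), c in zip(edges, cond):
              L[i, i] += c; L[j, j] += c
              L[i, j] -= c; L[j, i] -= c
          q = np.zeros(n_nodes)
          q[source], q[sink] = inflow, -inflow
          keep = [k for k in range(n_nodes) if k != sink]
          h = np.zeros(n_nodes)
          h[keep] = np.linalg.solve(L[np.ix_(keep, keep)], q[keep])
          return h

      # Hypothetical 4-node looped network with one cross-connection
      edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
      print(solve_heads(4, edges, [1.0, 2.0, 1.0, 1.0, 0.5], source=0, sink=2))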

  12. Modeling the propagation of mobile malware on complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

    2016-08-01

    In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks, which follow a power-law degree distribution, to model mobile networks, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that governs the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and global stability is further proved under some sufficient conditions. The influences of the model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are conducive to the diffusion of malware, and that complex networks with lower power-law exponents benefit malware spreading.
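
    The reported effect of heterogeneity matches the standard degree-based mean-field result that the epidemic threshold scales as <k>/<k^2>, shrinking as the degree tail gets heavier; the generic comparison below is illustrative, not the paper's model.

      import networkx as nx
      import numpy as np

      def epidemic_threshold(graph):
          # Degree-based mean-field threshold <k> / <k^2>
          k = np.array([d for _, d in graph.degree()], float)
          return k.mean() / (k ** 2).mean()

      sf = nx.barabasi_albert_graph(2000, 3, seed=0)         # heavy tail
      er = nx.erdos_renyi_graph(2000, 6 / 1999, seed=0)      # same mean degree
      print(epidemic_threshold(sf), epidemic_threshold(er))  # sf is smaller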

  13. Stability of complex Langevin dynamics in effective models

    NASA Astrophysics Data System (ADS)

    Aarts, Gert; James, Frank A.; Pawlowski, Jan M.; Seiler, Erhard; Sexty, Dénes; Stamatescu, Ion-Olimpiu

    2013-03-01

    The sign problem at nonzero chemical potential prohibits the use of importance sampling in lattice simulations. Since complex Langevin dynamics does not rely on importance sampling, it provides a potential solution. Recently it was shown that complex Langevin dynamics fails in the disordered phase in the case of the three-dimensional XY model, while it appears to work in the entire phase diagram in the case of the three-dimensional SU(3) spin model. Here we analyse this difference and argue that it is due to the presence of the nontrivial Haar measure in the SU(3) case, which has a stabilizing effect on the complexified dynamics. The freedom to modify and stabilize the complex Langevin process is discussed in some detail.
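
    The mechanics of complex Langevin dynamics can be seen in the exactly solvable Gaussian toy action S(x) = a x^2 / 2 with complex a, for which <x^2> = 1/a; the Euler discretization below is a textbook sketch unrelated to the XY or SU(3) spin models studied.

      import numpy as np

      def complex_langevin_x2(a=1 + 1j, dt=1e-3, steps=200_000, seed=3):
          # Drift -a*x complexifies x while the noise stays real
          rng = np.random.default_rng(seed)
          x, total, count = 0.0 + 0.0j, 0.0 + 0.0j, 0
          for step in range(steps):
              x += -a * x * dt + np.sqrt(2 * dt) * rng.normal()
              if step > steps // 10:        # discard thermalization
                  total += x * x
                  count += 1
          return total / count

      print(complex_langevin_x2(), 1 / (1 + 1j))  # both near 0.5 - 0.5j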

  14. Catastrophe, Chaos, and Complexity Models and Psychosocial Adjustment to Disability.

    ERIC Educational Resources Information Center

    Parker, Randall M.; Schaller, James; Hansmann, Sandra

    2003-01-01

    Rehabilitation professionals may unknowingly rely on stereotypes and specious beliefs when dealing with people with disabilities, despite the formulation of theories that suggest new models of the adjustment process. Suggests that Catastrophe, Chaos, and Complexity Theories hold considerable promise in this regard. This article reviews these…

  15. 40 CFR 80.45 - Complex emissions model.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    REGULATION OF FUELS AND FUEL ADDITIVES, Reformulated Gasoline, § 80.45 Complex emissions model. (a) Definition of terms. For the purposes of this section, the following definitions shall apply: Target fuel = ...; ... content of the target fuel in terms of weight percent oxygen; ETH = Ethanol content of the target fuel in terms of weight percent; ...

  16. Fischer and Schrock Carbene Complexes: A Molecular Modeling Exercise

    ERIC Educational Resources Information Center

    Montgomery, Craig D.

    2015-01-01

    An exercise in molecular modeling that demonstrates the distinctive features of Fischer and Schrock carbene complexes is presented. Semi-empirical calculations (PM3) demonstrate the singlet ground electronic state, restricted rotation about the C-Y bond, the positive charge on the carbon atom, and hence, the electrophilic nature of the Fischer…

  17. The Complexity of Developmental Predictions from Dual Process Models

    ERIC Educational Resources Information Center

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  18. Conceptual Complexity, Teaching Style and Models of Teaching.

    ERIC Educational Resources Information Center

    Joyce, Bruce; Weil, Marsha

    The focus of this paper is on the relative roles of personality and training in enabling teachers to carry out the kinds of complex learning models which are envisioned by curriculum reformers in the social sciences. The paper surveys some of the major research done in this area and concludes that: 1) Most teachers do not manifest the complex…

  19. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  20. Model predicting impact of complexation with cyclodextrins on oral absorption.

    PubMed

    Gamsiz, Ece D; Thombre, Avinash G; Ahmed, Imran; Carrier, Rebecca L

    2013-09-01

    Significant effort and resources are dedicated to enabling oral delivery of low-solubility drugs using solubilization technologies. Cyclodextrins (CD) are cyclic oligosaccharides that form inclusion complexes with many drugs and are often used as solubilizing agents. Prior to developing a drug delivery device with CD, it is not clear what level of absorption enhancement might be achieved; modeling can provide useful guidance in formulation and minimize resource-intensive iterative formulation development. A model was developed to enable quantitative, dynamic prediction of the influence of CD on the oral absorption of a low-solubility drug administered as a pre-formed complex. The predominant effects of CD considered were enhancement of dissolution and slowing of precipitation kinetics, as well as binding of free drug in solution. Simulation results with parameter values reflective of typical drug and CD properties indicate that CD can have a positive effect (up to a fivefold increase in drug absorption), a negative effect (up to a 50% decrease in absorption), or no effect. Comparison of model predictions with in vitro and in vivo experimental results indicates that a systems-based dynamic model incorporating CD complexation and key process kinetics may enable quantitative prediction of the impact of CD delivered as a pre-formed complex on drug bioavailability.
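
    One ingredient such a model needs is the free-drug fraction under 1:1 drug-CD mass-action binding, which has a closed-form quadratic solution; the binding constant and concentrations below are hypothetical values for illustration.

      import numpy as np

      def free_drug_fraction(drug_total, cd_total, K):
          # 1:1 equilibrium D + CD <-> DCD with K = [DCD]/([D][CD]) gives
          # K*D^2 + (1 + K*(CDt - Dt))*D - Dt = 0 for the free drug D
          b = 1.0 + K * (cd_total - drug_total)
          free = (-b + np.sqrt(b * b + 4.0 * K * drug_total)) / (2.0 * K)
          return free / drug_total

      # Hypothetical: 0.1 mM drug, 5 mM CD, K = 1e4 per molar
      print(free_drug_fraction(1e-4, 5e-3, 1e4))   # about 0.02 free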

  1. Surfactant manganese complexes as models for the oxidation of water

    SciTech Connect

    Wohlgemuth, R.; Otvos, J.W.; Calvin, M.

    1984-02-01

    Surfactant manganese complexes have been studied spectroscopically and electrochemically as models for the catalysts involved in the photooxidation of water to produce oxygen. Evidence has been obtained for the participation of the suggested redox cycle Mn(II) to Mn(III) to Mn(IV) and back to Mn(II) with the evolution of oxygen.

  2. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  3. Surface complexation modeling of americium sorption onto volcanic tuff.

    PubMed

    Ding, M; Kelkar, S; Meijer, A

    2014-10-01

    Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. In applying the model to volcanic rocks from Yucca Mountain, it was assumed that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/kg) values were calculated as a function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants derived in this study allow prediction of americium sorption in a natural complex system, taking into account the inherent uncertainty associated with the geochemical conditions that occur along transport pathways. PMID:24963803

  5. A random interacting network model for complex networks.

    PubMed

    Goswami, Bedartha; Shekatkar, Snehal M; Rheinwalt, Aljoscha; Ambika, G; Kurths, Jürgen

    2015-01-01

    We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms: degree-based preferential node selection and degree-assortative link placement are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems. PMID:26657032
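
    A minimal sketch of the two steps as described (degree-preferential node selection, then rank-assortative link acceptance); the particular fitness and acceptance functions below are illustrative assumptions, not the published ones.

      import numpy as np

      rng = np.random.default_rng(4)

      def rain_links(deg_a, deg_b, n_links=50):
          # Step (i): pick one node per network, degree-proportionally;
          # step (ii): accept the link more often when degree ranks match
          pa, pb = deg_a / deg_a.sum(), deg_b / deg_b.sum()
          rank_a = np.argsort(np.argsort(deg_a)) / (len(deg_a) - 1)
          rank_b = np.argsort(np.argsort(deg_b)) / (len(deg_b) - 1)
          links = set()
          while len(links) < n_links:
              i = rng.choice(len(deg_a), p=pa)
              j = rng.choice(len(deg_b), p=pb)
              if rng.random() < 1.0 - abs(rank_a[i] - rank_b[j]):
                  links.add((i, j))
          return sorted(links)

      deg_a = rng.pareto(2.0, 200) + 1   # heavy-tailed degree sequences
      deg_b = rng.pareto(2.0, 300) + 1
      print(len(rain_links(deg_a, deg_b)))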

  7. Bayesian Case-deletion Model Complexity and Information Criterion

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Qingxia

    2015-01-01

    We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example. PMID:26180578
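
    For reference, the Deviance Information Criterion named above can be computed from posterior draws as DIC = Dbar + pD, with pD = Dbar - D(theta_bar); a minimal sketch for a toy Gaussian model (not the BCIC itself), with synthetic data and draws.

      import numpy as np

      rng = np.random.default_rng(0)
      y = rng.normal(1.0, 1.0, size=50)
      # approximate posterior draws for the mean (flat prior, unit variance)
      theta_draws = rng.normal(y.mean(), 1 / np.sqrt(len(y)), size=4000)

      def deviance(theta):              # -2 log-likelihood, unit variance
          return np.sum((y - theta) ** 2) + len(y) * np.log(2 * np.pi)

      d_bar = np.mean([deviance(t) for t in theta_draws])
      p_d = d_bar - deviance(theta_draws.mean())  # effective n. of parameters
      print('DIC =', d_bar + p_d)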

  8. A perspective on modeling and simulation of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Åström, K. J.

    2011-09-01

    There has been an amazing development of modeling and simulation from its beginning in the 1920s, when the technology was available only to a handful of university groups who had access to a mechanical differential analyzer. Today, tools for modeling and simulation are available to every student and engineer. This paper gives a perspective on the development with particular emphasis on technology and paradigm shifts. Modeling is increasingly important for design and operation of complex natural and man-made systems. Because of the increased use of model-based control such as Kalman filters and model predictive control, models are also appearing as components of feedback systems. Modeling and simulation are multidisciplinary: they are used in a wide variety of fields, and their development has been strongly influenced by mathematics, numerical analysis, computer science, and computer technology.

  9. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  10. Emergence of complexity in evolving niche-model food webs.

    PubMed

    Guill, Christian; Drossel, Barbara

    2008-03-01

    We have analysed mechanisms that promote the emergence of complex structures in evolving model food webs. The niche model is used to determine predator-prey relationships. Complexity is measured by species richness as well as trophic level structure and link density. Adaptive dynamics that allow predators to concentrate on the prey species they are best adapted to lead to a strong increase in species number but have only a small effect on the number and relative occupancy of trophic levels. The density of active links also remains small, but a high number of potential links allows the network to adjust to changes in the species composition (emergence and extinction of species). Incorporating effects of body size on individual metabolism leads to a more complex trophic level structure: both the maximum and the average trophic level increase. So does the density of active links. Taking body size effects into consideration does not have a measurable influence on species richness. If species are allowed to adjust their foraging behaviour, the complexity of the evolving networks can also be influenced by the size of the external resources. The larger the resources, the larger and more complex the food web they can sustain. Body size effects and increasing resources do not change the size or the simple structure of the evolving networks if adaptive foraging is prohibited. This leads to the conclusion that, in the framework of the niche model, adaptive foraging is a necessary but not sufficient condition for the emergence of complex networks. It is found that despite the stabilising effect of foraging adaptation the system displays elements of self-organised critical behaviour.
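
    The niche model referred to above assigns feeding links from a one-dimensional niche axis; a minimal sketch following the standard Williams-Martinez construction (S species, target connectance C), with illustrative defaults.

      import random

      def niche_model(S=30, C=0.15):
          b = 1.0 / (2.0 * C) - 1.0                 # beta parameter so E[x] = 2C
          n = sorted(random.random() for _ in range(S))   # niche values
          links = set()
          for i in range(S):
              x = 1.0 - (1.0 - random.random()) ** (1.0 / b)  # Beta(1, b) draw
              r = x * n[i]                          # feeding range width
              c = random.uniform(r / 2.0, n[i])     # feeding range centre
              for j in range(S):
                  if c - r / 2.0 <= n[j] <= c + r / 2.0:
                      links.add((i, j))             # species i eats species j
          return links

      web = niche_model()
      print(len(web), "feeding links")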

  11. Boolean modeling of collective effects in complex networks

    PubMed Central

    Norrell, Johannes; Socolar, Joshua E. S.

    2009-01-01

    Complex systems are often modeled as Boolean networks in attempts to capture their logical structure and reveal its dynamical consequences. Approximating the dynamics of continuous variables by discrete values and Boolean logic gates may, however, introduce dynamical possibilities that are not accessible to the original system. We show that large random networks of variables coupled through continuous transfer functions often fail to exhibit the complex dynamics of corresponding Boolean models in the disordered (chaotic) regime, even when each individual function appears to be a good candidate for Boolean idealization. A suitably modified Boolean theory explains the behavior of systems in which information does not propagate faithfully down certain chains of nodes. Model networks incorporating calculated or directly measured transfer functions reported in the literature on transcriptional regulation of genes are described by the modified theory. PMID:19658525
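
    A minimal sketch of the kind of random Boolean network discussed above (N nodes, K inputs each, random update tables, synchronous updates); parameter values are illustrative.

      import random

      def random_boolean_network(N=20, K=2, steps=50, seed=1):
          rng = random.Random(seed)
          inputs = [rng.sample(range(N), K) for _ in range(N)]     # wiring
          tables = [[rng.random() < 0.5 for _ in range(2 ** K)]    # truth tables
                    for _ in range(N)]
          state = [rng.random() < 0.5 for _ in range(N)]
          for _ in range(steps):                    # synchronous update
              state = [tables[i][sum(state[inp] << b
                                     for b, inp in enumerate(inputs[i]))]
                       for i in range(N)]
          return state

      print(random_boolean_network())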

  13. Entropy, complexity, and Markov diagrams for random walk cancer models.

    PubMed

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix for each cancer corresponds to a directed graph model in which nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
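
    A minimal sketch of the two quantities paired above: the steady-state distribution of a row-stochastic transition matrix and the Shannon entropy of that distribution. The 3-site matrix is a placeholder, not one of the paper's autopsy-derived matrices.

      import numpy as np

      P = np.array([[0.6, 0.3, 0.1],    # row-stochastic transition matrix
                    [0.2, 0.5, 0.3],
                    [0.1, 0.2, 0.7]])

      pi = np.ones(P.shape[0]) / P.shape[0]
      for _ in range(1000):             # power iteration: pi <- pi P
          pi = pi @ P

      entropy = -np.sum(pi * np.log2(pi))   # Shannon entropy, in bits
      print(pi, entropy)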

  15. Latent Hierarchical Model of Temporal Structure for Complex Activity Classification.

    PubMed

    Wang, Limin; Qiao, Yu; Tang, Xiaoou

    2014-02-01

    Modeling the temporal structure of sub-activities is an important yet challenging problem in complex activity classification. This paper proposes a latent hierarchical model (LHM) to describe the decomposition of complex activity into sub-activities in a hierarchical way. The LHM has a tree-structure, where each node corresponds to a video segment (sub-activity) at certain temporal scale. The starting and ending time points of each sub-activity are represented by two latent variables, which are automatically determined during the inference process. We formulate the training problem of the LHM in a latent kernelized SVM framework and develop an efficient cascade inference method to speed up classification. The advantages of our methods come from: 1) LHM models the complex activity with a deep structure, which is decomposed into sub-activities in a coarse-to-fine manner and 2) the starting and ending time points of each segment are adaptively determined to deal with the temporal displacement and duration variation of sub-activity. We conduct experiments on three datasets: 1) the KTH; 2) the Hollywood2; and 3) the Olympic Sports. The experimental results show the effectiveness of the LHM in complex activity classification. With dense features, our LHM achieves the state-of-the-art performance on the Hollywood2 dataset and the Olympic Sports dataset.

  16. Complex humanitarian emergencies: a review of epidemiological and response models.

    PubMed

    Burkle, F M

    2006-01-01

    Complex emergencies (CEs) have been the most common human-generated disaster of the past two decades. These internal conflicts and associated acts of genocide have been poorly understood and poorly managed. This article provides an epidemiological background for understanding CEs in developing, developed, and chronic or smoldering countries, and explains in detail the prevailing models of response used by the international community. Even though CEs are declining in number, they have become more complex and dangerous. The UN Charter reform is expected to address internal conflicts and genocide but may not provide a more effective and efficient means to respond.

  17. Modeling the respiratory chain complexes with biothermokinetic equations - the case of complex I.

    PubMed

    Heiske, Margit; Nazaret, Christine; Mazat, Jean-Pierre

    2014-10-01

    The mitochondrial respiratory chain plays a crucial role in energy metabolism and its dysfunction is implicated in a wide range of human diseases. In order to understand the global expression of local mutations in the rate of oxygen consumption or in the production of adenosine triphosphate (ATP), it is useful to have a mathematical model in which the changes in a given respiratory complex are properly modeled. Our aim in this paper is to provide thermodynamically consistent and structurally simple equations to represent the kinetics of each isolated complex, which can, assembled in a dynamical system, also simulate the behavior of the respiratory chain as a whole under a large set of different physiological and pathological conditions. Using the example of the reduced nicotinamide adenine dinucleotide (NADH)-ubiquinol-oxidoreductase (complex I), we analyze the suitability of different types of rate equations. Based on our kinetic experiments, we show that very simple rate laws, such as those often used in many respiratory chain models, fail to describe the kinetic behavior when applied to a wide concentration range. This led us to adapt rate equations containing the essential parameters of enzyme kinetics, maximal velocities and Henri-Michaelis-Menten-like constants (KM and KI), to satisfactorily simulate these data. PMID:25064016
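
    One common thermodynamically consistent rate law of the general type discussed above is the reversible Henri-Michaelis-Menten form, sketched here for one substrate S and one product P; the parameter values are illustrative, not the fitted complex I constants.

      def reversible_mm(S, P, Vf=1.0, Vr=0.1, KmS=0.05, KmP=0.5):
          """Rate is positive toward product and vanishes at equilibrium."""
          return (Vf * S / KmS - Vr * P / KmP) / (1.0 + S / KmS + P / KmP)

      for S in (0.01, 0.05, 0.5):       # saturation with increasing substrate
          print(S, reversible_mm(S, P=0.1))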

  18. Impedance control complements incomplete internal models under complex external dynamics.

    PubMed

    Tomi, Naoki; Gouko, Manabu; Ito, Koji

    2008-01-01

    In this paper, we investigate motor adaptation of human arm movements to external dynamics. In an experiment, we tried to determine whether humans can learn an internal model of a mixed force field (V+P) that was the sum of a velocity-dependent force field (V) and a position-dependent force field (P). The experimental results show that the subjects did not learn the internal model of V+P accurately and they compensated for the loads by using impedance control. Our results suggest that humans use impedance control when internal models become inaccurate because of the complexity of the external dynamics.

  19. Modeling of Carbohydrate Binding Modules Complexed to Cellulose

    SciTech Connect

    Nimlos, M. R.; Beckham, G. T.; Bu, L.; Himmel, M. E.; Crowley, M. F.; Bomble, Y. J.

    2012-01-01

    Modeling results are presented for the interaction of two carbohydrate binding modules (CBMs) with cellulose. The family 1 CBM from Trichoderma reesei's Cel7A cellulase was modeled using molecular dynamics to confirm that this protein selectively binds to the hydrophobic (100) surface of cellulose fibrils and to determine the energetics and mechanisms for locating this surface. Modeling was also conducted of binding of the family 4 CBM from the CbhA complex from Clostridium thermocellum. There is a cleft in this protein, which may accommodate a cellulose chain that is detached from crystalline cellulose. This possibility is explored using molecular dynamics.

  20. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    SciTech Connect

    Miller, Gregory H.; Forest, Gregory

    2014-05-01

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.

  1. Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.

    2013-12-01

    For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from use of a model with fewer simplifications of the governing equations because it can be time-consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
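
    A minimal sketch in the spirit of the two-parameter benchmark described above: rainfall is scaled and routed through a linear reservoir, and the two parameters are recovered with SciPy's BFGS optimizer. The data are synthetic; the model form is an assumed stand-in, not the authors' code.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      rain = rng.gamma(2.0, 10.0, size=104)        # two years of weekly rainfall

      def simulate(params, rain):
          scale, k = params
          q = np.zeros_like(rain)
          for t in range(1, len(rain)):            # lagged linear reservoir
              q[t] = (1 - k) * q[t - 1] + scale * rain[t]
          return q

      observed = simulate([0.3, 0.2], rain) + rng.normal(0, 0.5, rain.size)
      res = minimize(lambda p: np.sum((simulate(p, rain) - observed) ** 2),
                     x0=[0.1, 0.5], method='BFGS')
      print(res.x)    # approximately recovers (scale, k) = (0.3, 0.2)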

  2. Bridging Mechanistic and Phenomenological Models of Complex Biological Systems

    PubMed Central

    Transtrum, Mark K.; Qiu, Peng

    2016-01-01

    The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48 parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior. PMID:27187545

  3. Assessment model for perceived visual complexity of automotive instrument cluster.

    PubMed

    Yoon, Sol Hee; Lim, Jihyoun; Ji, Yong Gu

    2015-01-01

    This research proposes an assessment model for quantifying the perceived visual complexity (PVC) of an in-vehicle instrument cluster. An initial study was conducted to investigate the possibility of evaluating the PVC of an in-vehicle instrument cluster by estimating and analyzing the complexity of its individual components. However, this approach was only partially successful, because it did not take into account the combination of the different components with random levels of complexity to form one visual display. Therefore, a second study was conducted focusing on the effect of combining the different components. The results from the overall research enabled us to suggest a basis for quantifying the PVC of an in-vehicle instrument cluster based both on the PVCs of its components and on the integration effect.

  4. Complexity and robustness in hypernetwork models of metabolism.

    PubMed

    Pearcy, Nicole; Chuzhanova, Nadia; Crofts, Jonathan J

    2016-10-01

    Metabolic reaction data is commonly modelled using a complex network approach, whereby nodes represent the chemical species present within the organism of interest, and connections are formed between those nodes participating in the same chemical reaction. Unfortunately, such an approach provides an inadequate description of the metabolic process in general, as a typical chemical reaction will involve more than two nodes, thus risking oversimplification of the system of interest in a potentially significant way. In this paper, we employ a complex hypernetwork formalism to investigate the robustness of bacterial metabolic hypernetworks by extending the concept of a percolation process to hypernetworks. Importantly, this provides a novel method for determining the robustness of these systems and thus for quantifying their resilience to random attacks/errors. Moreover, we performed a site percolation analysis on a large cohort of bacterial metabolic networks and found that hypernetworks that evolved in more variable environments displayed increased levels of robustness and topological complexity. PMID:27354314
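
    The site-percolation robustness test can be sketched on an ordinary graph (the paper extends it to hypernetworks); the random graph below is a stand-in for a metabolic network.

      import random
      import networkx as nx

      def robustness(g, f, trials=20):
          """Mean giant-component fraction after removing a fraction f of nodes."""
          sizes = []
          for _ in range(trials):
              h = g.copy()
              h.remove_nodes_from(random.sample(list(h), int(f * len(h))))
              giant = max(nx.connected_components(h), key=len) if len(h) else set()
              sizes.append(len(giant) / len(g))
          return sum(sizes) / trials

      g = nx.erdos_renyi_graph(500, 0.01)
      for f in (0.2, 0.5, 0.8):
          print(f, round(robustness(g, f), 3))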

  5. An Adaptive Complex Network Model for Brain Functional Networks

    PubMed Central

    Gomez Portillo, Ignacio J.; Gleiser, Pablo M.

    2009-01-01

    Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small world topological structure, characterized by highly integrated modules connected sparsely by long range links. Recent studies showed that other topological properties such as the degree distribution and the presence (or absence) of a hierarchical structure are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network scale-free non-hierarchical networks with well defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure, with a truncated power law degree distribution. PMID:19738902

  6. Mathematical modelling of complex contagion on clustered networks

    NASA Astrophysics Data System (ADS)

    O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of the social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the "complex contagion" effects of social reinforcement are important in such diffusion, in contrast to "simple" contagion models of disease spread, which predict that epidemics would grow more efficiently on random networks than on clustered networks. Accurately modeling complex contagion on clustered networks remains a challenge because the usual assumptions (e.g., of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hébert-Dufresne et al. (Phys. Rev. E, 2010) to study disease spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe, as in Centola's experiments, faster diffusion on clustered topologies than on random networks.
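
    A minimal sketch of a complex-contagion rule (adopt once at least two neighbours have adopted) run on a clustered ring lattice versus a fully rewired network; parameters are illustrative.

      import networkx as nx

      def spread(g, seeds=5, threshold=2, rounds=200):
          adopted = set(range(seeds))
          for _ in range(rounds):
              new = {v for v in g if v not in adopted and
                     sum(1 for u in g[v] if u in adopted) >= threshold}
              if not new:
                  break
              adopted |= new
          return len(adopted) / len(g)

      clustered = nx.watts_strogatz_graph(500, 6, 0.0, seed=1)   # triangles kept
      randomized = nx.watts_strogatz_graph(500, 6, 1.0, seed=1)  # fully rewired
      print(spread(clustered), spread(randomized))   # spreads far vs. stalls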

  7. Interaction of Ionic Liquids with Lipid Biomembrane: Implication from Supramolecular Assembly to Cytotoxicity

    NASA Astrophysics Data System (ADS)

    Jing, Benxin; Lan, Nan; Zhu, Y. Elaine

    2013-03-01

    An explosion in research activities using ionic liquids (ILs) as new "green" chemicals in several chemical and biomedical processes has resulted in an urgent need to understand their impact in terms of their transport and toxicity towards aquatic organisms. Though a few experimental toxicology studies have reported that the toxicity of some ionic liquids increases with their hydrophobicity while others are not toxic, the molecular-level mechanism of IL toxicity remains poorly understood. In this talk, we will discuss our recent study of the interaction of ionic liquids with model cell membranes. We have found that ILs can induce morphological changes in lipid bilayers when a critical concentration is exceeded, leading to the swelling and tube-like formation of lipid bilayers. The critical concentration shows a strong dependence on the length of the hydrocarbon tails and on hydrophobic counterions. By SAXS, Langmuir-Blodgett (LB) and fluorescence microscopic measurements, we have confirmed that tube-like lipid complexes result from the insertion of ILs with long hydrocarbon chains to minimize the hydrophobic interaction with aqueous media. This finding could give insight into the modification and adoption of ILs for the engineering of micro-organisms.

  8. A Model Study of Complex Behavior in the Belousov-Zhabotinskii Reaction.

    NASA Astrophysics Data System (ADS)

    Lindberg, David Mark

    1988-12-01

    We have studied the complex oscillatory behavior in a model of the Belousov-Zhabotinskii (BZ) reaction in a continuously-fed stirred tank reactor (CSTR). The model consisted of a set of nonlinear ordinary differential equations derived from a reduced mechanism of the chemical system. These equations were integrated numerically on a computer, which yielded the concentrations of the constituent chemicals as functions of time. In addition, solutions were tracked as functions of a single parameter, the stability of the solutions was determined, and bifurcations of the solutions were located and studied. The intent of this study was to use this BZ model to explore further a region of complex oscillatory behavior found in experimental investigations, the most thorough of which revealed an alternating periodic-chaotic (P-C) sequence of states. A P-C sequence was discovered in the model which showed the same qualitative features as the experimental sequence. In order to better understand the P-C sequence, a detailed study was conducted in the vicinity of the P-C sequence, with two experimentally accessible parameters as control variables. This study mapped out the bifurcation sets, and included examination of the dynamics of the stable periodic, unstable periodic, and chaotic oscillatory motion. Observations made from the model results revealed a rough symmetry which suggests a new way of looking at the P-C sequence. Other nonlinear phenomena uncovered in the model were boundary and interior crises, several codimension-two bifurcations, and similarities in the shapes of areas of stability for periodic orbits in two-parameter space. Each earlier model study of this complex region involved only a limited one-parameter scan and had limited success in producing agreement with experiments. In contrast, for those regions of complex behavior that have been studied experimentally, the observations agree qualitatively with our model results. Several new predictions of the model
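
    A reduced BZ mechanism of the kind integrated above can be sketched with the scaled three-variable Oregonator; the parameter values are illustrative ones from the oscillatory regime, not those of the study.

      import numpy as np
      from scipy.integrate import solve_ivp

      eps, eps2, q, f = 4e-2, 4e-4, 8e-4, 1.0   # illustrative scaled parameters

      def oregonator(t, u):
          x, y, z = u
          return [(q * y - x * y + x * (1 - x)) / eps,
                  (-q * y - x * y + 2 * f * z) / eps2,
                  x - z]

      # stiff system: use a stiff-capable integrator
      sol = solve_ivp(oregonator, (0, 30), [0.1, 0.1, 0.1],
                      method='LSODA', rtol=1e-8, atol=1e-10)
      print(sol.y[:, -1])   # concentrations oscillate rather than settling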

  9. RHIC injector complex online model status and plans

    SciTech Connect

    Schoefer,V.; Ahrens, L.; Brown, K.; Morris, J.; Nemesure, S.

    2009-05-04

    An online modeling system is being developed for the RHIC injector complex, which consists of the Booster, the AGS and the transfer lines connecting the Booster to the AGS and the AGS to RHIC. Historically, the injectors have been operated using static values from design specifications or offline model runs, but tighter beam optics constraints required by polarized proton operations (e.g., accelerating with near-integer tunes) have necessitated a more dynamic system. An online model server for the AGS has been implemented using MAD-X [1] as the model engine, with plans to extend the system to the Booster and the injector transfer lines and to add the option of calculating optics using the Polymorphic Tracking Code (PTC [2]) as the model engine.

  10. Lateral organization of complex lipid mixtures from multiscale modeling

    PubMed Central

    Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.

    2010-01-01

    The organizational properties of complex lipid mixtures can give rise to functionally important structures in cell membranes. In model membranes, ternary lipid-cholesterol (CHOL) mixtures are often used as representative systems to investigate the formation and stabilization of localized structural domains (“rafts”). In this work, we describe a self-consistent mean-field model that builds on molecular dynamics simulations to incorporate multiple lipid components and to investigate the lateral organization of such mixtures. The model predictions reveal regions of bimodal order on ternary plots that are in good agreement with experiment. Specifically, we have applied the model to ternary mixtures composed of dioleoylphosphatidylcholine:18:0 sphingomyelin:CHOL. This work provides insight into the specific intermolecular interactions that drive the formation of localized domains in these mixtures. The model makes use of molecular dynamics simulations to extract interaction parameters and to provide chain configuration order parameter libraries. PMID:20151760

  11. Mechanistic modeling confronts the complexity of molecular cell biology.

    PubMed

    Phair, Robert D

    2014-11-01

    Mechanistic modeling has the potential to transform how cell biologists contend with the inescapable complexity of modern biology. I am a physiologist-electrical engineer-systems biologist who has been working at the level of cell biology for the past 24 years. This perspective aims 1) to convey why we build models, 2) to enumerate the major approaches to modeling and their philosophical differences, 3) to address some recurrent concerns raised by experimentalists, and then 4) to imagine a future in which teams of experimentalists and modelers build, and subject to exhaustive experimental tests, models covering the entire spectrum from molecular cell biology to human pathophysiology. There is, in my view, no technical obstacle to this future, but it will require some plasticity in the biological research mind-set.

  12. The semiotics of control and modeling relations in complex systems.

    PubMed

    Joslyn, C

    2001-01-01

    We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively. Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.

  14. Cx-02 Program, workshop on modeling complex systems

    USGS Publications Warehouse

    Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.

    2003-01-01

    This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, geology and geophysics work has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking of complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years in the past. Not only do geologists think in the abstract by virtue of the vast time spans involved, but most of the evidence is also out of sight. A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.

  15. Computational and analytical modeling of cationic lipid-DNA complexes.

    PubMed

    Farago, Oded; Grønbech-Jensen, Niels

    2007-05-01

    We present a theoretical study of the physical properties of cationic lipid-DNA (CL-DNA) complexes--a promising synthetically based nonviral carrier of DNA for gene therapy. The study is based on a coarse-grained molecular model, which is used in Monte Carlo simulations of mesoscopically large systems over timescales long enough to address experimental reality. In the present work, we focus on the statistical-mechanical behavior of lamellar complexes, which in Monte Carlo simulations self-assemble spontaneously from a disordered random initial state. We measure the DNA interaxial spacing, d_DNA, and the local cationic area charge density, σ_M, for a wide range of values of the parameter φ_c representing the fraction of cationic lipids. For weakly charged complexes (low values of φ_c), we find that d_DNA has a linear dependence on φ_c^(-1), which is in excellent agreement with x-ray diffraction experimental data. We also observe, in qualitative agreement with previous Poisson-Boltzmann calculations of the system, large fluctuations in the local area charge density with a pronounced minimum of σ_M halfway between adjacent DNA molecules. For highly charged complexes (large φ_c), we find moderate charge density fluctuations and observe deviations from the linear dependence of d_DNA on φ_c^(-1). This last result, together with other findings such as the decrease in the effective stretching modulus of the complex and the increased rate at which pores are formed in the complex membranes, is indicative of the gradual loss of mechanical stability of the complex, which occurs when φ_c becomes large. We suggest that this may be the origin of the recently observed enhanced transfection efficiency of lamellar CL-DNA complexes at high charge densities, because the completion of the transfection process requires the disassembly of the complex and the release of the DNA into the cytoplasm. Some of the structural properties of the system are also predicted by a continuum

  16. Reduced Complexity Modeling (RCM): toward more use of less

    NASA Astrophysics Data System (ADS)

    Paola, Chris; Voller, Vaughan

    2014-05-01

    Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem like a way to test systems for this property but, to our knowledge, have not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion, independence of the process in question from details of the underlying physics, underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its

  17. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking.

    PubMed

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-01-01

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582

  19. Engineering complex topological memories from simple Abelian models

    NASA Astrophysics Data System (ADS)

    Wootton, James R.; Lahtinen, Ville; Doucot, Benoit; Pachos, Jiannis K.

    2011-09-01

    In three spatial dimensions, particles are limited to either bosonic or fermionic statistics. Two-dimensional systems, on the other hand, can support anyonic quasiparticles exhibiting richer statistical behaviors. An exciting proposal for quantum computation is to employ anyonic statistics to manipulate information. Since such statistical evolutions depend only on topological characteristics, the resulting computation is intrinsically resilient to errors. The so-called non-Abelian anyons are most promising for quantum computation, but their physical realization may prove to be complex. Abelian anyons, however, are easier to understand theoretically and realize experimentally. Here we show that complex topological memories inspired by non-Abelian anyons can be engineered in Abelian models. We explicitly demonstrate the control procedures for the encoding and manipulation of quantum information in specific lattice models that can be implemented in the laboratory. This bridges the gap between requirements for anyonic quantum computation and the potential of state-of-the-art technology.

  20. Polygonal Shapes Detection in 3d Models of Complex Architectures

    NASA Astrophysics Data System (ADS)

    Benciolini, G. B.; Vitti, A.

    2015-02-01

    A sequential application of two global models defined in a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation derived by a specific processing of the auxiliary function are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step makes it possible to describe the edges of an object by a piecewise almost-linear approximation of the input edges and to detect sharp changes in the first derivative of the edges, so as to detect corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension to refine the description of the edges. Among all the boundaries detected by the Mumford-Shah model, those presenting a shape close to a polygon are selected by considering only the boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, derived from 3D geometric data, that can be considered polygons. The application of the procedure is suitable for, but not limited to, the detection of objects such as footprints of polygonal buildings, building facade boundaries, or window contours. The procedure is applied to a height model of the building of the Engineering

  1. Modeling high-resolution broadband discourse in complex adaptive systems.

    PubMed

    Dooley, Kevin J; Corman, Steven R; McPhee, Robert D; Kuhn, Timothy

    2003-01-01

    Numerous researchers and practitioners have turned to complexity science to better understand human systems. Simulation can be used to observe how the microlevel actions of many human agents create emergent structures and novel behavior in complex adaptive systems. In such simulations, communication between human agents is often modeled simply as message passing, where a message or text may transfer data, trigger action, or inform context. Human communication involves more than the transmission of texts and messages, however. Such a perspective is likely to limit the effectiveness and insight that we can gain from simulations, and complexity science itself. In this paper, we propose a model of how discursive processes between individuals (high-resolution), occurring simultaneously across a human system (broadband), dynamically evolve. We propose six different processes that describe how evolutionary variation can occur in texts: recontextualization, pruning, chunking, merging, appropriation, and mutation. These process models can facilitate the simulation of high-resolution, broadband discourse processes, and can aid in the analysis of data from such processes. Examples are used to illustrate each process. We make the tentative suggestion that discourse may evolve to the "edge of chaos." We conclude with a discussion concerning how high-resolution, broadband discourse data could actually be collected. PMID:12876447

  2. The evaluative imaging of mental models - Visual representations of complexity

    NASA Technical Reports Server (NTRS)

    Dede, Christopher

    1989-01-01

    The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can be best managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.

  3. A Simple Model for Complex Dynamical Transitions in Epidemics

    NASA Astrophysics Data System (ADS)

    Earn, David J. D.; Rohani, Pejman; Bolker, Benjamin M.; Grenfell, Bryan T.

    2000-01-01

    Dramatic changes in patterns of epidemics have been observed throughout this century. For childhood infectious diseases such as measles, the major transitions are between regular cycles and irregular, possibly chaotic epidemics, and from regionally synchronized oscillations to complex, spatially incoherent epidemics. A simple model can explain both kinds of transitions as the consequences of changes in birth and vaccination rates. Measles is a natural ecological system that exhibits different dynamical transitions at different times and places, yet all of these transitions can be predicted as bifurcations of a single nonlinear model.
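
    A sketch of a seasonally forced SIR model with birth rate mu and vaccination fraction p, the two parameters whose changes drive the transitions described above; the values below are textbook measles-like figures, not the paper's.

      import numpy as np
      from scipy.integrate import solve_ivp

      mu, gamma, beta0, beta1, p = 0.02, 365 / 13, 500.0, 0.08, 0.0  # per year

      def sir(t, u):
          s, i = u
          beta = beta0 * (1 + beta1 * np.cos(2 * np.pi * t))  # seasonal forcing
          return [mu * (1 - p) - beta * s * i - mu * s,
                  beta * s * i - (gamma + mu) * i]

      sol = solve_ivp(sir, (0, 100), [0.06, 1e-4], method='LSODA',
                      rtol=1e-9, atol=1e-12, dense_output=True)
      # yearly samples after transients: a constant sequence means annual
      # cycles; alternating values indicate a period-doubled (biennial) regime
      print(sol.sol(np.arange(90, 100))[1])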

  4. Extending a configuration model to find communities in complex networks

    NASA Astrophysics Data System (ADS)

    Jin, Di; He, Dongxiao; Hu, Qinghua; Baquero, Carlos; Yang, Bo

    2013-09-01

    Discovery of communities in complex networks is a fundamental data analysis task in various domains. Generative models are a promising class of techniques for identifying modular properties from networks, a topic that has been actively discussed recently. However, most of them cannot preserve the degree sequence of networks, which will distort the community detection results. Rather than using a blockmodel as most current works do, here we generalize a configuration model, namely, a null model of modularity, to solve this problem. By decomposing and combining subgraphs according to soft community memberships, our model gains the ability to describe community structures, something the original model does not have. Also, it has the property, as with the original model, that it fixes the expected degree sequence to be the same as that of the observed network. We combine both the community property and degree-sequence preservation into a single unified model, which gives better community detection results than other models. Thereafter, we learn the model using a technique of nonnegative matrix factorization and determine the number of communities by applying consensus clustering. We test this approach both on synthetic benchmarks and on real-world networks, and compare it with two similar methods. The experimental results demonstrate the superior performance of our method over competing methods in detecting both disjoint and overlapping communities.
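
    For orientation, the configuration-model null that the abstract generalizes fixes the expected number of edges between nodes i and j at k_i k_j / 2m, which is what modularity compares against observed edges; a minimal sketch on a standard test network.

      import networkx as nx

      g = nx.karate_club_graph()
      m = g.number_of_edges()
      deg = dict(g.degree())

      def modularity(partition):        # partition: node -> group label
          q = 0.0
          for i in g:
              for j in g:
                  if partition[i] == partition[j]:
                      a = 1.0 if g.has_edge(i, j) else 0.0
                      q += a - deg[i] * deg[j] / (2.0 * m)  # null expectation
          return q / (2.0 * m)

      clubs = {v: g.nodes[v]['club'] for v in g}   # known two-group split
      print(round(modularity(clubs), 3))           # about 0.36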

  5. Amelioration of oxidative stress in bio-membranes and macromolecules by non-toxic dye from Morinda tinctoria (Roxb.) roots.

    PubMed

    Bhakta, Dipita; Siva, Ramamoorthy

    2012-06-01

    Plant dyes have been in use for coloring and varied purposes since prehistoric times. A red dye found in the roots of plants belonging to genus Morinda is a well recognized coloring ingredient. The dye fraction obtained from the methanolic extract of the roots of Morinda tinctoria was explored for its role in attenuating damages caused by H(2)O(2)-induced oxidative stress. The antioxidant potential of the dye fraction was assessed through DPPH radical scavenging, deoxyribose degradation and inhibition of lipid peroxidation in mice liver. It was subsequently screened for its efficiency in extenuating damage incurred to biomembrane (using erythrocytes and their ghost membranes) and macromolecules (pBR322 DNA, lipids and proteins) from exposure to hydrogen peroxide. In addition, the non-toxic nature of the dye was supported by the histological evaluation conducted on the tissue sections from the major organs of Swiss Albino mice as well as effect on Hep3B cell line (human hepatic carcinoma). The LC-MS confirms the dye fraction to be morindone. Our study strongly suggests that morindone present in the root extracts of M. tinctoria, in addition to being a colorant, definitely holds promise in the pharmaceutical industry.

  6. A model for restricted diffusion in complex fluids

    NASA Astrophysics Data System (ADS)

    de Bruyn, John; Wylie, Jonathan

    2014-03-01

    We use a model originally due to Tanner to study the diffusion of tracer particles in complex fluids both analytically and through Monte-Carlo simulations. The model consists of regions through which the particles diffuse freely, separated by membranes with a specified low permeability. The mean squared displacement of the particles calculated from the model agrees well with experimental data on the diffusion of particles in a concentrated colloidal suspension when the membrane permeability is used as an adjustable parameter. Data on a micro-phase-separated polymer system can be well modeled by considering two populations of particles constrained by membranes with different permeabilities. Supported by the Hong Kong Research Grants Council and the Natural Sciences and Engineering Research Council of Canada.
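
    A Monte-Carlo sketch of the restricted-diffusion picture described above: free one-dimensional random walks, with barriers between compartments of width L that are crossed only with probability p; all values are illustrative.

      import random

      def msd(p=0.1, L=10, steps=2000, walkers=1000, seed=2):
          """Mean squared displacement of membrane-limited random walkers."""
          rng = random.Random(seed)
          total = 0.0
          for _ in range(walkers):
              x = 0
              for _ in range(steps):
                  step = rng.choice((-1, 1))
                  nxt = x + step
                  # a membrane sits between compartments; cross with prob. p
                  if nxt // L == x // L or rng.random() < p:
                      x = nxt
              total += x * x
          return total / walkers

      for p in (1.0, 0.1, 0.01):        # lower permeability suppresses the MSD
          print(p, msd(p=p))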

  7. Modeling complexity in simulating pesticide fate in a rice paddy.

    PubMed

    Luo, Yuzhou; Spurlock, Frank; Gill, Sheryl; Goh, Kean S

    2012-12-01

    Modeling approaches for pesticide regulation are required to provide generic and conservative evaluations of pesticide fate and exposure based on limited data. This study investigates the modeling approach for pesticide simulation in a rice paddy by developing a component-based modeling system and characterizing the dependence of pesticide concentrations on individual fate processes. The developed system covers the modeling complexity from a "base model" which considers only the essential processes of water management, water-sediment exchange, and aquatic dissipation, to a "full model" for all commonly simulated processes. Model capability and performance were demonstrated by case studies with 5 pesticides in 13 rice fields of California's Sacramento Valley. With registrant-submitted dissipation half-lives, the base model conservatively estimated dissolved pesticide concentrations within one order of magnitude of measured data. The full model simulations were calibrated to characterize the key model parameters and processes varying with chemical properties and field conditions. Metabolism in water was identified as an important process in predicting pesticide fate in all tested rice fields. Relative contributions of metabolism, hydrolysis, direct aquatic photolysis, and volatilization to the overall pesticide dissipation were significantly correlated to the model sensitivities to the corresponding physicochemical properties and half-lives. While modeling results were sensitive to metabolism half-lives in water for all fields, metabolism in sediment and water-sediment exchange were significant only for pesticides with pre-flooding applications or with rapid dissipation in sediment. Results suggest that, in addition to the development of regional modeling scenarios for rice production, the registrant-submitted maximum values for the aquatic dissipation half-lives could be used for evaluating pesticides for regulatory purposes.
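
    The bookkeeping behind the "relative contributions" above is parallel first-order kinetics: each process contributes a rate constant of ln 2 divided by its half-life, and the rates add; the half-life values below are placeholders, not registrant data.

      import math

      half_lives_days = {'metabolism': 15.0, 'hydrolysis': 60.0,
                         'photolysis': 40.0, 'volatilization': 200.0}
      rates = {k: math.log(2) / t for k, t in half_lives_days.items()}
      k_total = sum(rates.values())

      print('overall half-life (d):', round(math.log(2) / k_total, 1))
      for name, r in rates.items():     # fractional contribution of each path
          print(name, round(r / k_total, 2))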

  8. Hierarchical Model for the Evolution of Cloud Complexes

    NASA Astrophysics Data System (ADS)

    Sánchez D., Néstor M.; Parravano, Antonio

    1999-01-01

    The structure of cloud complexes appears to be well described by a tree structure (i.e., a simplified "stick man") representation when the image is partitioned into "clouds." In this representation, the parent-child relationships are assigned according to containment. Based on this picture, a hierarchical model for the evolution of cloud complexes, including star formation, is constructed. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange (evaporation or condensation) depends on the radiation density at the interphase. At the end of the "lineage," stars may be born or die, so that there is a nonstationary mass flow in the hierarchical structure. For a variety of parameter sets the system follows the same series of steps to transform diffuse gas into stars, and the regulation of the mass flux in the tree by previously formed stars dominates the evolution of the star formation. For the set of parameters used here as a reference model, the system tends to produce initial mass functions (IMFs) that have a maximum at a mass that is too high (~2 Msolar) and the characteristic times for evolution seem too long. We show that these undesired properties can be improved by adjusting the model parameters. The model requires further physics (e.g., allowing for multiple stellar systems and clump collisions) before a definitive comparison with observations can be made. Instead, the emphasis here is to illustrate some general properties of this kind of complex nonlinear model for the star formation process. Notwithstanding the simplifications involved, the model reveals an essential feature that will likely remain if additional physical processes are included, that is, the detailed behavior of the system is very sensitive to the variations on the initial and external conditions, suggesting that a "universal" IMF is very unlikely. When an ensemble of IMFs corresponding to a

  9. Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction.

    PubMed

    Krasnopolsky, Vladimir M; Fox-Rabinovitz, Michael S

    2006-03-01

    A new practical application of neural network (NN) techniques to environmental numerical modeling has been developed. Namely, a new type of numerical model, a complex hybrid environmental model based on a synergistic combination of deterministic and machine learning model components, has been introduced. Conceptual and practical possibilities of developing hybrid models are discussed in this paper for applications to climate modeling and weather prediction. The approach presented here uses NNs as a statistical or machine learning technique to develop highly accurate and fast emulations of time-consuming model physics components (model physics parameterizations). The NN emulations of the most time-consuming model physics components, short- and long-wave radiation parameterizations or full model radiation, presented in this paper are combined with the remaining deterministic components (like model dynamics) of the original complex environmental model--a general circulation model or global climate model (GCM)--to constitute a hybrid GCM (HGCM). The parallel GCM and HGCM simulations produce very similar results, but the HGCM is significantly faster. The speed-up of model calculations opens the opportunity for model improvement. Examples of developed HGCMs illustrate the feasibility and efficiency of the new approach for modeling complex multidimensional interdisciplinary systems.
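
    The structure of a hybrid step is easy to sketch: keep the deterministic dynamics and swap the expensive physics routine for a cheap learned surrogate. In this illustrative toy (not the authors' model), a fitted polynomial stands in for the NN emulator of radiation.

        import numpy as np

        def radiation_exact(T):
            # expensive parameterization (toy stand-in: grey-body cooling)
            return 5.67e-8 * T**4

        # "train" the emulator offline on samples of the expensive scheme
        T_train = np.linspace(200.0, 320.0, 200)
        coeffs = np.polyfit(T_train, radiation_exact(T_train), deg=4)
        radiation_emulated = lambda T: np.polyval(coeffs, T)

        def hybrid_step(T, dt, heat_in=240.0, heat_capacity=1.0e7):
            # deterministic dynamics + emulated physics, as in an HGCM time step
            return T + dt * (heat_in - radiation_emulated(T)) / heat_capacity

        T = 288.0
        for _ in range(24):
            T = hybrid_step(T, dt=3600.0)
        print(f"temperature after one day: {T:.2f} K")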

  10. Hybrid Structural Model of the Complete Human ESCRT-0 Complex

    SciTech Connect

    Ren, Xuefeng; Kloer, Daniel P.; Kim, Young C.; Ghirlando, Rodolfo; Saidi, Layla F.; Hummer, Gerhard; Hurley, James H.

    2009-03-31

    The human Hrs and STAM proteins comprise the ESCRT-0 complex, which sorts ubiquitinated cell surface receptors to lysosomes for degradation. Here we report a model for the complete ESCRT-0 complex based on the crystal structure of the Hrs-STAM core complex, previously solved domain structures, hydrodynamic measurements, and Monte Carlo simulations. ESCRT-0 expressed in insect cells has a hydrodynamic radius of R_H = 7.9 nm and is a 1:1 heterodimer. The 2.3 Å crystal structure of the ESCRT-0 core complex reveals two domain-swapped GAT domains and an antiparallel two-stranded coiled-coil, similar to yeast ESCRT-0. ESCRT-0 typifies a class of biomolecular assemblies that combine structured and unstructured elements, and have dynamic and open conformations to ensure versatility in target recognition. Coarse-grained Monte Carlo simulations constrained by experimental R_H values for ESCRT-0 reveal a dynamic ensemble of conformations well suited for diverse functions.
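
    One ingredient of such coarse-grained ensemble work is estimating the hydrodynamic radius of a bead configuration so it can be compared with the measured R_H. The sketch below (not the paper's code) applies the standard Kirkwood double-sum approximation to an invented random conformation.

        import numpy as np
        rng = np.random.default_rng(2)

        def hydrodynamic_radius(coords):
            """Kirkwood approximation: 1/R_H = (1/N^2) * sum over i != j of 1/r_ij."""
            n = len(coords)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            inv = 1.0 / d[~np.eye(n, dtype=bool)]
            return n**2 / inv.sum()

        beads = rng.normal(scale=3.0, size=(50, 3))   # random 50-bead conformation (nm)
        print(f"estimated R_H: {hydrodynamic_radius(beads):.2f} nm")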

  11. Simulation-based parameter estimation for complex models: a breast cancer natural history modelling illustration.

    PubMed

    Chia, Yen Lin; Salzman, Peter; Plevritis, Sylvia K; Glynn, Peter W

    2004-12-01

    Simulation-based parameter estimation offers a powerful means of estimating parameters in complex stochastic models. We illustrate the application of these ideas in the setting of a natural history model for breast cancer. Our model assumes that the tumor growth process follows a geometric Brownian motion; parameters are estimated from the SEER registry. Our discussion focuses on the use of simulation for computing the maximum likelihood estimator for this class of models. The analysis shows that simulation provides a straightforward means of computing such estimators for models of substantial complexity.
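
    The idea can be illustrated in a few lines: simulate the geometric Brownian motion forward under candidate parameters and score them with a smoothed simulated likelihood. Everything below (growth parameters, detection time, kernel bandwidth) is invented for illustration and is not the authors' SEER-calibrated pipeline.

        import numpy as np
        rng = np.random.default_rng(0)

        def simulate_sizes(mu, sigma, t, n, v0=1e-3):
            """Tumor volume under GBM: V_t = v0 * exp((mu - sigma^2/2) t + sigma W_t)."""
            w = rng.normal(0.0, np.sqrt(t), size=n)
            return v0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * w)

        def sim_loglik(mu, sigma, data, t=5.0, n=20000):
            sims = simulate_sizes(mu, sigma, t, n)
            h = 0.1 * sims.std()      # kernel bandwidth for the smoothed likelihood
            dens = [np.mean(np.exp(-0.5 * ((x - sims) / h) ** 2)) / (h * np.sqrt(2 * np.pi))
                    for x in data]
            return float(np.sum(np.log(np.maximum(dens, 1e-300))))

        data = simulate_sizes(0.8, 0.4, 5.0, 50)       # synthetic "observed" tumors
        grid = [(m, s) for m in (0.6, 0.8, 1.0) for s in (0.2, 0.4, 0.6)]
        print(max(grid, key=lambda p: sim_loglik(*p, data)))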

  12. Modeling the Self-assembly of the Cellulosome Enzyme Complex

    PubMed Central

    Bomble, Yannick J.; Beckham, Gregg T.; Matthews, James F.; Nimlos, Mark R.; Himmel, Michael E.; Crowley, Michael F.

    2011-01-01

    Most bacteria use free enzymes to degrade plant cell walls in nature. However, some bacteria have adopted a different strategy wherein enzymes can either be free or tethered on a protein scaffold, forming a complex called a cellulosome. The study of the structure and mechanism of these large macromolecular complexes is an active and ongoing research topic, with the goal of finding ways to improve biomass conversion using cellulosomes. Several mechanisms involved in cellulosome formation remain unknown, including how cellulosomal enzymes assemble on the scaffoldin and what governs the population of cellulosomes created during self-assembly. Here, we present a coarse-grained model to study the self-assembly of cellulosomes. The model captures most of the physical characteristics of three cellulosomal enzymes (Cel5B, CelS, and CbhA) and the scaffoldin (CipA) from Clostridium thermocellum. The protein structures are represented by beads connected by restraints to mimic the flexibility and shapes of these proteins. From a large simulation set, the assembly of cellulosomal enzyme complexes is shown to be dominated by their shape and modularity. The multimodular enzyme, CbhA, binds statistically more frequently to the scaffoldin than CelS or Cel5B. The enhanced binding is attributed to the flexible nature and multimodularity of this enzyme, providing a longer residence time around the scaffoldin. The characterization of the factors influencing the cellulosome assembly process may enable new strategies to create designer cellulosomes. PMID:21098021

  13. A Qualitative Model of Human Interaction with Complex Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in concert to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  15. Integrated Bayesian network framework for modeling complex ecological issues.

    PubMed

    Johnson, Sandra; Mengersen, Kerrie

    2012-07-01

    The management of environmental problems is multifaceted, requiring varied and sometimes conflicting objectives and perspectives to be considered. Bayesian network (BN) modeling facilitates the integration of information from diverse sources and is well suited to tackling the management challenges of complex environmental problems. However, combining several perspectives in one model can lead to large, unwieldy BNs that are difficult to maintain and understand. Conversely, an oversimplified model may lead to an unrealistic representation of the environmental problem. Environmental managers require the current research and available knowledge about an environmental problem of interest to be consolidated in a meaningful way, thereby enabling the assessment of potential impacts and different courses of action. Previous investigations of the environmental problem of interest may have already resulted in the construction of several disparate ecological models. On the other hand, the opportunity may exist to initiate this modeling. In the first instance, the challenge is to integrate existing models and to merge the information and perspectives from these models. In the second instance, the challenge is to include different aspects of the environmental problem, incorporating both the scientific and management requirements. Although the paths leading to the combined model may differ for these 2 situations, the common objective is to design an integrated model that captures the available information and research, yet is simple to maintain, expand, and refine. BN modeling is typically an iterative process, and we describe a heuristic method, the iterative Bayesian network development cycle (IBNDC), for the development of integrated BN models that are suitable for both situations outlined above. The IBNDC approach facilitates object-oriented BN (OOBN) modeling, arguably the next logical step in adaptive management modeling, and embraces iterative development.
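
    For readers unfamiliar with the formalism, a BN is just variables, arrows, and conditional probability tables, and queries reduce to sums of products. The toy below (structure and all numbers invented) computes one marginal by enumeration.

        # toy network: Rainfall -> RiverFlow -> HabitatQuality (True = favorable)
        p_rain = {True: 0.3, False: 0.7}
        p_flow_given_rain = {True: {True: 0.8, False: 0.2},
                             False: {True: 0.1, False: 0.9}}
        p_hab_given_flow = {True: {True: 0.9, False: 0.1},
                            False: {True: 0.4, False: 0.6}}

        # P(HabitatQuality = True) by enumerating the parent variables
        p_good = sum(p_rain[r] * p_flow_given_rain[r][f] * p_hab_given_flow[f][True]
                     for r in (True, False) for f in (True, False))
        print(f"P(good habitat) = {p_good:.3f}")   # 0.555 with these tables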

  16. Star formation in a hierarchical model for Cloud Complexes

    NASA Astrophysics Data System (ADS)

    Sanchez, N.; Parravano, A.

    The effects of the external and initial conditions on the star formation processes in Molecular Cloud Complexes are examined in the context of a schematic model. The model considers a hierarchical system with five predefined phases: warm gas, neutral gas, low density molecular gas, high density molecular gas, and protostars. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange depends on the radiation density at the interface, which is produced by the radiation coming from the stars that form at the end of the hierarchical structure, and by the external radiation field. The system is chaotic in the sense that its temporal evolution is very sensitive to small changes in the initial or external conditions. However, global features such as the star formation efficiency and the Initial Mass Function are less affected by those variations.

  17. Equilibrium modeling of trace metal transport from Duluth complex rockpile

    SciTech Connect

    Kelsey, P.D.; Klusman, R.W.; Lapakko, K.

    1996-12-31

    Geochemical modeling was used to predict weathering processes and the formation of trace metal-adsorbing secondary phases in a waste rock stockpile containing Cu-Ni ore mined from the Duluth Complex, MN. Amorphous ferric hydroxide was identified as a secondary phase within the pile, from observation and geochemical modeling of the weathering process. Due to the high content of cobalt, copper, nickel, and zinc in the primary minerals of the waste rock and in the effluent, it was hypothesized that the predicted and observed ferric hydroxide precipitate would adsorb small quantities of these trace metals. This was verified using sequential extractions and simulated using adsorption geochemical modeling. It was concluded that the trace metals were adsorbed in small quantities, with adsorption onto the amorphous ferric hydroxide in decreasing order of Cu > Ni > Zn > Co. The low degree of adsorption was due to low-pH water and competition for adsorption sites with other ions in solution.

  18. 3D model of amphioxus steroid receptor complexed with estradiol

    SciTech Connect

    Baker, Michael E.; Chang, David J.

    2009-08-28

    The origins of signaling by vertebrate steroids are not fully understood. An important advance was the report that an estrogen-binding steroid receptor [SR] is present in amphioxus, a basal chordate with a similar body plan to vertebrates. To investigate the evolution of estrogen-binding to steroid receptors, we constructed a 3D model of amphioxus SR complexed with estradiol. This 3D model indicates that although the SR is activated by estradiol, some interactions between estradiol and human ERα are not conserved in the SR, which can explain the low affinity of estradiol for the SR. These differences between the SR and ERα in the steroid-binding domain are sufficient to suggest that another steroid is the physiological regulator of the SR. The 3D model predicts that mutation of Glu-346 to Gln will increase the affinity of testosterone for amphioxus SR and elucidate the evolution of steroid-binding to nuclear receptors.

  19. Preconditioning the bidomain model with almost linear complexity

    NASA Astrophysics Data System (ADS)

    Pierre, Charles

    2012-01-01

    The bidomain model is widely used in electrocardiology to simulate the spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction-diffusion equations coupled with an ODE system. Its discretisation displays an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model are therefore associated with high computational costs. In this paper we propose a preconditioning for the bidomain model, either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e., of type n log^α(n) for some constant α, which is optimal complexity for such problems.
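
    The payoff of preconditioning is easy to demonstrate on a toy problem. The sketch below (all matrices invented; a Jacobi preconditioner stands in for the paper's monodomain-based block preconditioner) runs conjugate gradients on an ill-conditioned SPD system with and without preconditioning and compares iteration counts.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-8, maxit=1000):
            """Preconditioned conjugate gradients; returns (solution, iterations)."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv @ r
            p = z.copy()
            for it in range(maxit):
                Ap = A @ p
                alpha = (r @ z) / (p @ Ap)
                x += alpha * p
                r_new = r - alpha * Ap
                if np.linalg.norm(r_new) < tol:
                    return x, it + 1
                z_new = M_inv @ r_new
                beta = (r_new @ z_new) / (r @ z)
                p = z_new + beta * p
                r, z = r_new, z_new
            return x, maxit

        n = 200
        diag = np.linspace(1.0, 1e4, n)          # widely spread eigenvalues
        A = (np.diag(diag) + np.diag(np.full(n - 1, 0.4), 1)
             + np.diag(np.full(n - 1, 0.4), -1))
        b = np.ones(n)
        _, it_plain = pcg(A, b, np.eye(n))
        _, it_prec = pcg(A, b, np.diag(1.0 / diag))   # cheap approximate inverse
        print(f"CG iterations: {it_plain} plain vs {it_prec} preconditioned")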

  20. Bloch-Redfield equations for modeling light-harvesting complexes.

    PubMed

    Jeske, Jan; Ing, David J; Plenio, Martin B; Huelga, Susana F; Cole, Jared H

    2015-02-14

    We challenge the misconception that Bloch-Redfield equations are a less powerful tool than phenomenological Lindblad equations for modeling exciton transport in photosynthetic complexes. This view predominantly originates from an indiscriminate use of the secular approximation. We provide a detailed description of how to model both coherent oscillations and several types of noise, giving explicit examples. All issues with non-positivity are overcome by a consistent, straightforward physical noise model. Herein also lies the strength of the Bloch-Redfield approach, because it facilitates the analysis of noise effects by linking them back to physical parameters of the noise environment. These include temporal and spatial correlations and the strength and type of interaction between the noise and the system of interest. Finally, we analyze a prototypical dimer system as well as a 7-site Fenna-Matthews-Olson complex with regard to the spatial correlation length of the noise, the noise strength, the temperature, and their connection to the transfer time and transfer probability.

  1. NMR-derived model for a peptide-antibody complex

    SciTech Connect

    Zilber, B.; Scherf, T.; Anglister, J.; Levitt, M.

    1990-10-01

    The TE34 monoclonal antibody against cholera toxin peptide 3 (CTP3; VEVPGSQHIDSQKKA) was sequenced and investigated by two-dimensional transferred NOE difference spectroscopy and molecular modeling. The V_H sequence of TE34, which does not bind cholera toxin, shares remarkable homology to that of TE32 and TE33, which are both anti-CTP3 antibodies that bind the toxin. However, due to a shortened heavy chain CDR3, TE34 assumes a radically different combining site structure. The assignment of the combining site interactions to specific peptide residues was completed by use of AcIDSQRKA, a truncated peptide analogue in which lysine-13 was substituted by arginine, specific deuteration of individual polypeptide chains of the antibody, and a computer model for the Fv fragment of TE34. NMR-derived distance restraints were then applied to the calculated model of the Fv to generate a three-dimensional structure of the TE34/CTP3 complex. The combining site was found to be a very hydrophobic cavity composed of seven aromatic residues. Charged residues are found in the periphery of the combining site. The peptide residues HIDSQKKA form a β-turn inside the combining site. The contact area between the peptide and the TE34 antibody is 388 Å², about half of the contact area observed in protein-antibody complexes.

  2. A computational model for cancer growth by using complex networks

    NASA Astrophysics Data System (ADS)

    Galvão, Viviane; Miranda, José G. V.

    2008-09-01

    In this work we propose a computational model to investigate the proliferation of cancerous cells by using complex networks. In our model the network represents the structure of the space available for cancer propagation. The computational scheme considers a cancerous cell randomly included in the complex network. As the system evolves, the cells can assume three states: proliferative, non-proliferative, and necrotic. Our results were compared with experimental data obtained from three human lung carcinoma cell lines. The computational simulations show that the cancerous cells have a Gompertzian growth. Our model also simulates the formation of necrosis, the increase of density, and the diffusion of resources to regions of lower nutrient concentration. We find that cancer growth is very similar in random and small-world networks. On the other hand, the topological structure of the small-world network is more affected. The scale-free network has the largest rates of cancer growth due to hub formation. Finally, our results indicate that for different average degrees the rate of cancer growth is related to the available space in the network.
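
    A minimal sketch of this kind of model (network, parameters, and update rule all invented for illustration): cells occupy nodes of a random network, proliferative cells divide into empty neighboring nodes, and cells with no free neighbors stop dividing.

        import random
        random.seed(1)

        n, p = 1000, 0.008                 # Erdos-Renyi-style random network
        nbrs = {i: set() for i in range(n)}
        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p:
                    nbrs[i].add(j)
                    nbrs[j].add(i)

        state = {i: "empty" for i in range(n)}   # empty | proliferative | quiescent
        state[0] = "proliferative"               # seed cancerous cell
        for _ in range(25):
            for i in [k for k, s in state.items() if s == "proliferative"]:
                free = [j for j in nbrs[i] if state[j] == "empty"]
                if free:
                    state[random.choice(free)] = "proliferative"
                else:
                    state[i] = "quiescent"       # crowded: stops proliferating
        tumor = sum(s != "empty" for s in state.values())
        print(f"occupied nodes after 25 steps: {tumor}")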

  3. Troposphere-lower-stratosphere connection in an intermediate complexity model.

    NASA Astrophysics Data System (ADS)

    Ruggieri, Paolo; King, Martin; Kucharski, Fred; Buizza, Roberto; Visconti, Guido

    2016-04-01

    The dynamical coupling between the troposphere and the lower stratosphere has been investigated using a low-top, intermediate-complexity model provided by the Abdus Salam International Centre for Theoretical Physics (SPEEDY). The key question that we wanted to address is whether a simple model like SPEEDY can be used to understand troposphere-stratosphere interactions, e.g., those forced by changes in sea-ice concentration in polar Arctic regions. Three sets of experiments have been performed. Firstly, a potential vorticity perspective has been applied to understand the wave-like forcing of the troposphere on the stratosphere and to provide quantitative information on the subseasonal variability of the coupling. Then, the zonally asymmetric, near-surface response to a lower-stratospheric forcing has been analysed in a set of forced experiments with an artificial heating imposed in the extra-tropical lower stratosphere. Finally, the sensitivity of the lower-stratosphere response to tropospheric initial conditions has been examined. Results indicate that SPEEDY captures the physics of the troposphere-stratosphere connection but also show its lack of stratospheric variability. Results also suggest that intermediate-complexity models such as SPEEDY could be used to investigate the effects that surface forcing (e.g., due to sea-ice concentration changes) has on the troposphere and the lower stratosphere.

  4. IDMS: inert dark matter model with a complex singlet

    NASA Astrophysics Data System (ADS)

    Bonilla, Cesar; Sokolowska, Dorota; Darvishi, Neda; Diaz-Cruz, J. Lorenzo; Krawczyk, Maria

    2016-06-01

    We study an extension of the inert doublet model (IDM) that includes an extra complex singlet of the scalar fields, which we call the IDMS. In this model there are three Higgs particles, among them a SM-like Higgs particle, and the lightest neutral scalar, from the inert sector, remains a viable dark matter (DM) candidate. We assume a non-zero complex vacuum expectation value for the singlet, so that the visible sector can introduce extra sources of CP violation. We construct the scalar potential of the IDMS, assuming an exact Z_2 symmetry, with the new singlet being Z_2-even, as well as a softly broken U(1) symmetry, which allows a reduced number of free parameters in the potential. In this paper we explore the foundations of the model, in particular the masses and interactions of scalar particles for a few benchmark scenarios. Constraints from collider physics, in particular from the Higgs signal observed at the Large Hadron Collider with M_h ≈ 125 GeV, as well as constraints from DM experiments, such as relic density measurements and direct detection limits, are included in the analysis. We observe significant differences with respect to the IDM in relic density values from additional annihilation channels, interference, and resonance effects due to the extended Higgs sector.

  5. Modeling the propagation of mobile phone virus under complex network.

    PubMed

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei; Yao, Yu

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper the propagation of mobile phone viruses is modeled in order to understand how particular factors affect propagation and to design effective containment strategies. Two different propagation models of mobile phone viruses on complex networks are proposed. One is intended to describe the propagation of user-tricking viruses, and the other to describe the propagation of vulnerability-exploiting viruses. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted, from which the stable infection-free equilibrium point and the stability condition are derived. Finally, taking the network topology into account, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively.
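
    The epidemic ingredient is a susceptible-infected (SI) process on a contact graph: with no recovery, each infected phone can infect each susceptible neighbor with some probability per time step. A toy sketch (ring-lattice topology and all parameters invented):

        import random
        random.seed(42)

        n, k, beta = 500, 4, 0.05    # phones, neighbors per phone, infection prob.
        # ring lattice standing in for the contact topology (e.g., address books)
        nbrs = {i: {(i + d) % n for d in range(1, k // 2 + 1)} |
                   {(i - d) % n for d in range(1, k // 2 + 1)} for i in range(n)}

        infected = {0}               # patient zero
        for _ in range(200):
            new = {j for i in infected for j in nbrs[i]
                   if j not in infected and random.random() < beta}
            infected |= new
        print(f"infected after 200 steps: {len(infected)} of {n}")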

  6. When do evolutionary food web models generate complex networks?

    PubMed

    Allhoff, Korinna T; Drossel, Barbara

    2013-10-01

    Evolutionary food web models are used to build food webs by the repeated addition of new species. Population dynamics leads to the extinction or establishment of a newly added species, and possibly to the extinction of other species. The food web structure that emerges after some time is a highly nontrivial result of the evolutionary and dynamical rules. We investigate the evolutionary food web model introduced by Loeuille and Loreau (2005), which characterizes species by their body mass as the only evolving trait. Our goal is to find the reasons behind the model's remarkable robustness and its capability to generate diverse and stable networks. In contrast to other evolutionary food web models, this model requires neither adaptive foraging nor allometric scaling of metabolic rates with body mass in order to produce complex networks that do not eventually collapse to trivial structures. Our study shows that this is essentially due to the fact that the difference in niche value between predator and prey as well as the feeding range are constrained so that they remain within narrow limits under evolution. Furthermore, competition between similar species is sufficiently strong that a trophic level can accommodate several species. We discuss the implications of these findings and argue that the conditions that stabilize other evolutionary food web models have similar effects because they also prevent the occurrence of extreme specialists or extreme generalists, which in general have a higher fitness than species with a moderate niche width.
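
    The core of body-mass-based models of this family is a feeding kernel: species i consumes species j with an efficiency that peaks when j's log body mass sits a preferred distance below i's. A sketch with invented numbers (not the published parameterization):

        import numpy as np

        def feeding_rate(m_pred, m_prey, d=2.0, s=0.5):
            """Gaussian feeding kernel on log10 body mass, with preferred
            predator-prey distance d and feeding-range width s."""
            return np.exp(-((m_pred - d) - m_prey) ** 2 / (2 * s**2))

        masses = np.array([0.0, 2.0, 4.0])   # log10 body masses of 3 species
        gamma = np.array([[feeding_rate(mi, mj) for mj in masses] for mi in masses])
        print(np.round(gamma, 3))            # rows = predators, columns = prey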

  8. Surface complexation model of uranyl sorption on Georgia kaolinite

    USGS Publications Warehouse

    Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.

    2004-01-01

    The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 μmol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite.

  9. Model Complexity in Diffusion Modeling: Benefits of Making the Model More Parsimonious

    PubMed Central

    Lerche, Veronika; Voss, Andreas

    2016-01-01

    The diffusion model (Ratcliff, 1978) takes into account the reaction time distributions of both correct and erroneous responses from binary decision tasks. This high degree of information usage allows the estimation of different parameters mapping cognitive components such as speed of information accumulation or decision bias. For three of the four main parameters (drift rate, starting point, and non-decision time) trial-to-trial variability is allowed. We investigated the influence of these variability parameters both drawing on simulation studies and on data from an empirical test-retest study using different optimization criteria and different trial numbers. Our results suggest that less complex models (fixing intertrial variabilities of the drift rate and the starting point at zero) can improve the estimation of the psychologically most interesting parameters (drift rate, threshold separation, starting point, and non-decision time). PMID:27679585
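
    The diffusion model itself is compact enough to simulate directly: evidence x(t) drifts at rate v with unit-variance noise between boundaries 0 and a, starting at z*a, and the first-passage time plus a non-decision constant t0 gives the reaction time. An illustrative Euler-Maruyama sketch with arbitrary parameter values:

        import numpy as np
        rng = np.random.default_rng(7)

        def simulate_trial(v=1.0, a=1.0, z=0.5, t0=0.3, dt=1e-3, sigma=1.0):
            """One diffusion-model trial; returns (reaction time, upper boundary?)."""
            x, t = z * a, 0.0
            while 0.0 < x < a:
                x += v * dt + sigma * np.sqrt(dt) * rng.normal()
                t += dt
            return t + t0, x >= a

        trials = [simulate_trial() for _ in range(1000)]
        rts = np.array([rt for rt, _ in trials])
        upper = np.mean([hit for _, hit in trials])
        print(f"mean RT = {rts.mean():.3f} s, upper-boundary proportion = {upper:.2%}")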

  12. Application of Peterson's stray light model to complex optical instruments

    NASA Astrophysics Data System (ADS)

    Fray, S.; Goepel, M.; Kroneberger, M.

    2016-07-01

    Gary L. Peterson (Breault Research Organization) presented a simple analytical model for in-field stray light evaluation of axial optical systems. We exploited this idea for more complex optical instruments of the Meteosat Third Generation (MTG) mission. For the Flexible Combined Imager (FCI) we evaluated the in-field stray light of its three-mirror-anastigmat telescope, while for the Infrared Sounder (IRS) we performed an end-to-end analysis including the front telescope, the interferometer and back telescope assembly, and the cold optics. A comparison to simulations will be presented. The authors acknowledge the support of ESA and Thales Alenia Space through the MTG satellites program.

  13. Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain

    NASA Technical Reports Server (NTRS)

    Kao, David; Kramer, Marc; Chaderjian, Neal

    2005-01-01

    The release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture fluid flow-terrain interaction as well as the downstream flow paths of point-source dispersal. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities that allow for rapid emergency response as well as for developing a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massively parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of Northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.

  14. The modeling of complex continua: Fundamental obstacles and grand challenges

    SciTech Connect

    Not Available

    1993-01-01

    The research is divided into: discontinuities and adaptive computation, chaotic flows, dispersion of flow in porous media, and nonlinear waves and nonlinear materials. The research program has emphasized innovative computation and theory. The approach depends on abstracting mathematical concepts and computational methods from individual applications to a wide range of problems involving complex continua. The generic difficulties in the modeling of continua that guide this abstraction are multiple length and time scales, microstructures (bubbles, droplets, vortices, crystal defects), and chaotic or random phenomena described by a statistical formulation.

  16. Does model performance improve with complexity? A case study with three hydrological models

    NASA Astrophysics Data System (ADS)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
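
    The simplest of the three, a one-bucket water balance model, fits in a few lines; the sketch below is a generic illustration of that model class (all parameters and forcing are invented, not the calibrated SWBM).

        import numpy as np
        rng = np.random.default_rng(3)

        cap, gamma = 100.0, 2.0            # storage capacity (mm), runoff nonlinearity
        s = 50.0                           # initial soil moisture (mm)
        runoff = []
        for day in range(365):
            precip = rng.exponential(2.0)  # synthetic daily precipitation (mm)
            pet = 3.0                      # potential evapotranspiration (mm/day)
            q = precip * (s / cap) ** gamma   # wetter soil sheds more rain as runoff
            et = pet * (s / cap)              # moisture-limited evapotranspiration
            s = min(cap, max(0.0, s + precip - q - et))
            runoff.append(q)
        print(f"mean simulated runoff: {np.mean(runoff):.2f} mm/day")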

  17. The Eemian climate simulated by two models of different complexities

    NASA Astrophysics Data System (ADS)

    Nikolova, Irina; Yin, Qiuzhen; Berger, Andre; Singh, Umesh; Karami, Pasha

    2013-04-01

    The Eemian period, also known as MIS-5, experienced a warmer-than-today climate, a reduction in ice sheets, and an important sea-level rise. These features make the Eemian well suited for evaluating climate models forced with astronomical and greenhouse gas forcings different from today's. In this work, we present the Eemian climate simulated by two climate models of different complexities, LOVECLIM (LLN Earth system model of intermediate complexity) and CCSM3 (NCAR atmosphere-ocean general circulation model). Feedbacks from sea ice, vegetation, the monsoon, and ENSO phenomena are discussed to explain the regional similarities and dissimilarities between the two models with respect to the pre-industrial (PI) climate. Significant warming (cooling) over almost all the continents during boreal summer (winter) leads to a largely increased (reduced) seasonal contrast in the northern (southern) hemisphere, mainly due to the much higher (lower) insolation received by the whole Earth in boreal summer (winter). The Arctic is warmer than at PI throughout the whole year, resulting from its much higher summer insolation and its remnant effect in the following fall-winter through the interactions between atmosphere, ocean, and sea ice. Regional discrepancies exist in the sea-ice formation zones between the two models. Excessive sea-ice formation in CCSM3 results in intense regional cooling. In both models an intensified African monsoon and the vegetation feedback are responsible for the cooling during summer in North Africa and on the Arabian Peninsula. Over India the precipitation maximum is found further west, while in Africa the precipitation maximum migrates further north. Trees and grassland expand north in the Sahel/Sahara, trees being more abundant in the results from LOVECLIM than from CCSM3. A mix of forest and grassland occupies the continents and expands deep into the high northern latitudes, in line with proxy records. Desert areas are significantly reduced in the Northern Hemisphere.

  18. A Range-Based Method for Complex Facade Modeling

    NASA Astrophysics Data System (ADS)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    3D modelling of architectural heritage does not follow a single well-defined path; it goes through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation, and the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different ways to realize a digital model. In particular we can choose between two different attitudes: the mesh and the solid model. In the first case the complexity of the architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the other, opposite, case the 3D digital model can be realized with simple geometrical shapes, through sweeping algorithms and Boolean operations. Obviously these two models are not the same, and each one is characterized by peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm or the quasi-automatic modelling from known shapes) and the final results (a more detailed and complex mesh versus an approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3D digital models of the facades of complex architecture to be used, for example, in city models or in other large-scale representations. This way of modelling also guarantees small files, suitable for publishing on the web or for transmission. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets surveyed by topography, and then filtered in order to obtain a well-controlled and homogeneous point cloud.

  19. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort in data collection for models with a finer resolution of heterogeneities, there is little guidance on how much data is required to justify a certain level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with an increasing amount of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
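
    Bayesian model selection ranks candidate models by their evidence, p(data | model), which automatically penalizes unneeded complexity. A self-contained toy (synthetic data, brute-force Monte Carlo over the prior; nothing here is the study's hydraulic setup):

        import numpy as np
        rng = np.random.default_rng(11)

        data = rng.normal(1.0, 1.0, size=20)          # synthetic observations

        def log_evidence(prior_sampler, n=20000):
            """Monte Carlo estimate of log p(data | model) under a unit-variance
            Gaussian likelihood, averaging over prior draws of the mean."""
            thetas = prior_sampler(n)
            ll = np.array([np.sum(-0.5 * (data - th) ** 2 - 0.5 * np.log(2 * np.pi))
                           for th in thetas])
            m = ll.max()
            return m + np.log(np.mean(np.exp(ll - m)))    # stable log-mean-exp

        # "simple" model: mean fixed at 0; "complex" model: free mean, wide prior
        lz_simple = log_evidence(lambda n: np.zeros(n))
        lz_complex = log_evidence(lambda n: rng.normal(0.0, 10.0, n))
        print(f"log Bayes factor (complex vs simple): {lz_complex - lz_simple:.2f}")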

  20. Thermohaline feedbacks in ocean-climate models of varying complexity

    NASA Astrophysics Data System (ADS)

    den Toom, M.

    2013-03-01

    explicitly resolves eddies, and a model in which eddies are parameterized. It is found that the behavior of an eddy-resolving model is qualitatively different from that of a non-eddying model. What is clear at this point, is that the AMOC is governed by non-linear dynamics. As a result, its simulated behavior depends in a non-trivial way on how unresolved processes are represented in a model. As demonstrated in this thesis, model fidelity can be effectively assessed by examining models of varying complexity.

  1. Modeling of complex systems using nonlinear, flexible multibody dynamics

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jesus Diaz

    Finite element based multibody dynamics formulations extend the applicability of classical finite element methods to the modeling of flexible mechanisms. A general computer code will include rigid and flexible bodies, such as beams, joints, and active elements. These procedures are designed to overcome the modeling limitations of conventional multibody formulations, which are often restricted to the analysis of rigid systems or use a modal representation to model the flexibility of elastic components. As multibody formulations become more widely accepted, the need to model a wider array of phenomena increases. The goal of this work is to present a methodology for the analysis of complex systems that may require the modeling of new joints and elements, or include the effects of clearance, freeplay, or friction in the joints. Joints are essential components of multibody systems, rigid or flexible. Usually, joints are modeled as perfect components. In actual joints, clearance, freeplay, friction, lubrication, and impact forces can have a significant effect on the dynamic response of the system. Certain systems require the formulation of new joints for their analysis. Among them is the curve-sliding joint, which enforces the sliding of a body on a rigid curve connected to another body. The curve-sliding joint is especially useful for modeling a vibration-absorber device mounted on the rotor hub of a rotorcraft: the bifilar pendulum. The formulation of a new modal-based element is also presented. A modal-based element is a model of an elastic substructure that includes a modal representation of elastic effects together with large rigid body motions. The proposed approach makes use of a component mode synthesis technique that allows the analyst to choose any type of modal basis and simplifies the connection to other multibody elements. The formulation is independent of the finite element analysis package used to compute the modes of the elastic component.

  2. Modeling of the formation of complex molecules in protostellar objects

    NASA Astrophysics Data System (ADS)

    Kochina, O. V.; Wiebe, D. S.; Kalenskii, S. V.; Vasyunin, A. I.

    2013-11-01

    The results of molecular composition modeling are presented for the well-studied low-mass star-forming region TMC-1 and the massive star-forming region DR21(OH), which is poorly studied from a chemical point of view. The column densities of dozens of molecules, ranging from simple diatomic to complex organic molecules, are reproduced to within an order of magnitude using a one-dimensional model for the physical and chemical structure of these regions. The chemical ages of the regions are approximately 10^5 years in both cases. The main desorption mechanisms that are usually included in chemical models (photodesorption, thermal desorption, and cosmic-ray-induced desorption) do not provide sufficient gas-phase abundances of molecules that are synthesized in surface reactions; however, this shortcoming can be removed by introducing a small amount of reactive desorption into the model. It is possible to reproduce the properties of the TMC-1 chemical composition in a standard model, without requiring additional assumptions about an anomalous C/O ratio or the recent accretion of matter enriched with atomic carbon, as has been proposed by some researchers.
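
    Models of this type integrate rate equations for the exchange between gas-phase and grain-surface abundances. A deliberately tiny sketch of the balance the abstract describes (two reservoirs, all coefficients invented), where only a small reactive-desorption fraction returns surface products to the gas:

        k_acc = 1e-5     # accretion onto grains (1/s), illustrative
        k_surf = 1e-4    # surface reaction rate (1/s), illustrative
        f_rd = 0.01      # fraction of surface reactions that desorb the product
        k_ph = 1e-7      # photodesorption rate (1/s), illustrative

        n_gas, n_ice = 1.0, 0.0          # relative abundances
        dt, steps = 1e3, 200000          # time step (s), number of steps
        for _ in range(steps):
            freeze = k_acc * n_gas * dt
            desorb = (f_rd * k_surf + k_ph) * n_ice * dt
            n_gas += desorb - freeze
            n_ice += freeze - desorb
        print(f"gas-phase fraction after {dt * steps / 3.15e7:.1f} yr: {n_gas:.3f}")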

  3. Velocity response curves demonstrate the complexity of modeling entrainable clocks.

    PubMed

    Taylor, Stephanie R; Cheever, Allyson; Harmon, Sarah M

    2014-12-21

    Circadian clocks are biological oscillators that regulate daily behaviors in organisms across the kingdoms of life. Their rhythms are generated by complex systems, generally involving interlocked regulatory feedback loops. These rhythms are entrained by the daily light/dark cycle, ensuring that the internal clock time is coordinated with the environment. Mathematical models play an important role in understanding how the components work together to function as a clock which can be entrained by light. For a clock to entrain, it must be possible for it to be sped up or slowed down at appropriate times. To understand how biophysical processes affect the speed of the clock, one can compute velocity response curves (VRCs). Here, in a case study involving the fruit fly clock, we demonstrate that VRC analysis provides insight into a clock's response to light. We also show that biochemical mechanisms and parameters together determine a model's ability to respond realistically to light. The implication is that, if one is developing a model and its current form has an unrealistic response to light, then one must reexamine one's model structure, because searching for better parameter values is unlikely to lead to a realistic response to light. PMID:25193284

  4. A subsurface model of the beaver meadow complex

    NASA Astrophysics Data System (ADS)

    Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.

    2015-12-01

    Wet meadows are a vital component of arid and semi-arid environments. These valley spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may potentially mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed means ranging from cattle grazing to climate change, to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study done in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process - all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environments. This imaging has given us a unique view into the patterns and processes responsible for the landforms, and may shed further light on the role of beaver in shaping these landscapes.

  5. Complex fluid flow modeling with SPH on GPU

    NASA Astrophysics Data System (ADS)

    Bilotta, Giuseppe; Hérault, Alexis; Del Negro, Ciro; Russo, Giovanni; Vicari, Annamaria

    2010-05-01

    We describe an implementation of the Smoothed Particle Hydrodynamics (SPH) method for the simulation of complex fluid flows. The algorithm is entirely executed on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) developed by NVIDIA, fully exploiting their computational power. An increase of one to two orders of magnitude in simulation speed over equivalent CPU code is achieved. A complete model of the flow of a complex fluid such as lava is challenging from the modeling, numerical, and computational points of view. The natural topographic irregularities, the dynamic free boundaries, and phenomena such as solidification, the presence of floating solid bodies or other obstacles, and their eventual fragmentation make the problem difficult to solve using traditional numerical methods (finite volumes, finite elements): refining the discretization grid in correspondence with high gradients is, when possible, computationally expensive and often provides inadequate control of the error; for real-world applications, moreover, the information needed for grid refinement may not be available (e.g., because the Digital Elevation Models are too coarse); boundary tracking is also problematic with Eulerian discretizations, more so with complex fluids, due to the presence of internal boundaries given by fluid inhomogeneity and solidification fronts. An alternative approach is offered by mesh-free particle methods, which solve most of the problems connected to the dynamics of complex fluids in a natural way. Particle methods discretize the fluid using nodes which are not forced onto a given topological structure: boundary treatment is therefore implicit and automatic; the freedom of movement of the particles also permits the treatment of deformations without incurring any significant penalty; finally, the accuracy is easily controlled by the insertion of new particles where needed.
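
    At the heart of any SPH code, GPU or CPU, is the kernel-weighted neighbor sum. The sketch below (plain NumPy with an invented particle set; a real CUDA implementation would parallelize the pairwise loop) evaluates the SPH density with the standard 2D cubic spline kernel.

        import numpy as np
        rng = np.random.default_rng(5)

        def cubic_spline_2d(r, h):
            """Standard SPH cubic spline kernel in 2D (support radius 2h)."""
            q = r / h
            sigma = 10.0 / (7.0 * np.pi * h**2)       # 2D normalization constant
            return sigma * np.where(q < 1.0, 1 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2.0, 0.25 * (2 - q) ** 3, 0.0))

        pos = rng.uniform(0.0, 1.0, size=(400, 2))    # particles in a unit box
        mass, h = 1.0 / 400, 0.1                      # equal masses, smoothing length
        r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
        rho = (mass * cubic_spline_2d(r, h)).sum(axis=1)   # density summation
        print(f"mean SPH density: {rho.mean():.3f} (near 1 away from the box edges)")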

  6. Industrial processing of complex fluids: Formulation and modeling

    SciTech Connect

    Scovel, J.C.; Bleasdale, S.; Forest, G.M.; Bechtel, S.

    1997-08-01

    The production of many important commercial materials involves the evolution of a complex fluid through a cooling phase into a hardened product. Textile fibers, high-strength fibers (KEVLAR, VECTRAN), plastics, chopped-fiber compounds, and fiber optical cable are such materials. Industry desires to replace experiments with on-line, real-time models of these processes. Solutions to the problems are not just a matter of technology transfer, but require a fundamental description and simulation of the processes. The goals of the project are to develop models that can be used to optimize macroscopic properties of the solid product, to identify sources of undesirable defects, and to seek boundary-temperature and flow-and-material controls to optimize desired properties.

  7. A complex network model for seismicity based on mutual information

    NASA Astrophysics Data System (ADS)

    Jiménez, Abigail

    2013-05-01

    Seismicity is the product of the interaction between the different parts of the lithosphere. Here, we model each part of the Earth as a cell that is constantly communicating its state to its environment. Just as a neuron is stimulated and produces an output, the different parts of the lithosphere are constantly stimulated, both by other cells and by the ductile part of the lithosphere, and produce an output in the form of a stress transfer or an earthquake. This output depends on the properties of each part of the Earth's crust and the magnitude of the inputs. In this study, we propose an approach to the quantification of this communication, with the aid of information theory, and model seismicity as a complex network. We have used data from California, and this new approach gives a better understanding of the processes involved in the formation of seismic patterns in that region.
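
    The communication between two cells can be quantified by the mutual information between their discretized activity states. A minimal sketch follows; the binning into three symbolic states and the coupling are hypothetical stand-ins, not the catalogue processing used in the paper.

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Mutual information (in bits) between two discrete symbol sequences."""
    n = len(x)
    px, py, pxy = Counter(x), Counter(y), Counter(zip(x, y))
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        mi += p_ab * np.log2(p_ab / ((px[a] / n) * (py[b] / n)))
    return mi

# toy usage: discretized activity states (e.g., magnitude bins) of two cells;
# cell_b is partially coupled to cell_a
cell_a = np.random.randint(0, 3, 500)
cell_b = (cell_a + np.random.randint(0, 2, 500)) % 3
print(mutual_information(cell_a, cell_b))
```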

  8. Modeling pedestrian's conformity violation behavior: a complex network based approach.

    PubMed

    Zhou, Zhuping; Hu, Qizhou; Wang, Wei

    2014-01-01

    Pedestrian injuries and fatalities present a problem all over the world. Pedestrian conformity violation behaviors, which lead to many pedestrian crashes, are common phenomena at signalized intersections in China. The concepts and metrics of complex networks are applied to analyze the structural characteristics and evolution rules of the pedestrian network formed by conformity violation crossings. First, a network of pedestrians crossing the street is established, and the network's degree distributions are analyzed. Then, using the basic idea of the SI (susceptible-infected) model, a spreading model of pedestrian illegal crossing behavior is proposed. Finally, through simulation analysis, trends in pedestrians' illegal crossing behavior are obtained for different network structures and different spreading rates. Some conclusions are drawn: as the waiting time increases, more pedestrians will join in the violation crossing once a first pedestrian crosses on red. Pedestrians' conformity violation behavior also increases as the spreading rate increases.
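
    A minimal sketch of SI spreading on a small contact network, in the spirit of the spreading model described above; the adjacency structure, seed node and spreading rate beta are illustrative only.

```python
import random

def si_spread(adjacency, seed, beta, steps):
    """Basic SI dynamics: each infected node infects each susceptible
    neighbour with probability beta per time step."""
    infected = {seed}
    history = [len(infected)]
    for _ in range(steps):
        new = set()
        for u in infected:
            for v in adjacency[u]:
                if v not in infected and random.random() < beta:
                    new.add(v)
        infected |= new
        history.append(len(infected))
    return history

# toy usage: a small waiting-crowd graph; node 0 crosses on red first
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
print(si_spread(adj, seed=0, beta=0.4, steps=10))
```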

  9. Modeling the complex pathology of Alzheimer's disease in Drosophila.

    PubMed

    Fernandez-Funez, Pedro; de Mena, Lorena; Rincon-Limas, Diego E

    2015-12-01

    Alzheimer's disease (AD) is the leading cause of dementia and the most common neurodegenerative disorder. AD is mostly a sporadic disorder and its main risk factor is age, but mutations in three genes that promote the accumulation of the amyloid-β (Aβ42) peptide revealed the critical role of amyloid precursor protein (APP) processing in AD. Neurofibrillary tangles enriched in tau are the other pathological hallmark of AD, but the lack of causative tau mutations still puzzles researchers. Here, we describe the contribution of a powerful invertebrate model, the fruit fly Drosophila melanogaster, to uncover the function and pathogenesis of human APP, Aβ42, and tau. APP and tau participate in many complex cellular processes, although their main function is microtubule stabilization and the to-and-fro transport of axonal vesicles. Additionally, expression of secreted Aβ42 induces prominent neuronal death in Drosophila, a critical feature of AD, making this model a popular choice for identifying intrinsic and extrinsic factors mediating Aβ42 neurotoxicity. Overall, Drosophila has made significant contributions to better understand the complex pathology of AD, although additional insight can be expected from combining multiple transgenes, performing genome-wide loss-of-function screens, and testing anti-tau therapies alone or in combination with Aβ42.

  10. Engineering complex topological memories from simple Abelian models

    SciTech Connect

    Wootton, James R.; Lahtinen, Ville; Doucot, Benoit; Pachos, Jiannis K.

    2011-09-15

    In three spatial dimensions, particles are limited to either bosonic or fermionic statistics. Two-dimensional systems, on the other hand, can support anyonic quasiparticles exhibiting richer statistical behaviors. An exciting proposal for quantum computation is to employ anyonic statistics to manipulate information. Since such statistical evolutions depend only on topological characteristics, the resulting computation is intrinsically resilient to errors. The so-called non-Abelian anyons are most promising for quantum computation, but their physical realization may prove to be complex. Abelian anyons, however, are easier to understand theoretically and realize experimentally. Here we show that complex topological memories inspired by non-Abelian anyons can be engineered in Abelian models. We explicitly demonstrate the control procedures for the encoding and manipulation of quantum information in specific lattice models that can be implemented in the laboratory. This bridges the gap between requirements for anyonic quantum computation and the potential of state-of-the-art technology. Highlights: A novel quantum memory using Abelian anyons is developed. This uses an advanced encoding, inspired by non-Abelian anyons. Errors are suppressed topologically, by means of single spin interactions. An implementation with current Josephson junction technology is proposed.

  11. Alpha Decay in the Complex-Energy Shell Model

    SciTech Connect

    Betan, R. Id

    2012-01-01

    Background: Alpha emission from a nucleus is a fundamental decay process in which the alpha particle formed inside the nucleus tunnels out through the potential barrier. Purpose: We describe alpha decay of 212Po and 104Te by means of the configuration interaction approach. Method: To compute the preformation factor and penetrability, we use the complex-energy shell model with a separable T = 1 interaction. The single-particle space is expanded in a Woods-Saxon basis that consists of bound and unbound resonant states. Special attention is paid to the treatment of the norm kernel appearing in the definition of the formation amplitude that guarantees the normalization of the channel function. Results: Without explicitly considering the alpha-cluster component in the wave function of the parent nucleus, we reproduce the experimental alpha-decay width of 212Po and predict an upper limit of T1/2 = 5.5 × 10-7 sec for the half-life of 104Te. Conclusions: The complex-energy shell model in a large valence configuration space is capable of providing a microscopic description of the alpha decay of heavy nuclei having two valence protons and two valence neutrons outside the doubly magic core. The inclusion of the proton-neutron interaction between the valence nucleons is likely to shorten the predicted half-life of 104Te.

  12. Modeling the complex pathology of Alzheimer's disease in Drosophila.

    PubMed

    Fernandez-Funez, Pedro; de Mena, Lorena; Rincon-Limas, Diego E

    2015-12-01

    Alzheimer's disease (AD) is the leading cause of dementia and the most common neurodegenerative disorder. AD is mostly a sporadic disorder and its main risk factor is age, but mutations in three genes that promote the accumulation of the amyloid-β (Aβ42) peptide revealed the critical role of amyloid precursor protein (APP) processing in AD. Neurofibrillary tangles enriched in tau are the other pathological hallmark of AD, but the lack of causative tau mutations still puzzles researchers. Here, we describe the contribution of a powerful invertebrate model, the fruit fly Drosophila melanogaster, to uncover the function and pathogenesis of human APP, Aβ42, and tau. APP and tau participate in many complex cellular processes, although their main function is microtubule stabilization and the to-and-fro transport of axonal vesicles. Additionally, expression of secreted Aβ42 induces prominent neuronal death in Drosophila, a critical feature of AD, making this model a popular choice for identifying intrinsic and extrinsic factors mediating Aβ42 neurotoxicity. Overall, Drosophila has made significant contributions to better understand the complex pathology of AD, although additional insight can be expected from combining multiple transgenes, performing genome-wide loss-of-function screens, and testing anti-tau therapies alone or in combination with Aβ42. PMID:26024860

  13. Fish locomotion: insights from both simple and complex mechanical models

    NASA Astrophysics Data System (ADS)

    Lauder, George

    2015-11-01

    Fishes are well-known for their ability to swim and maneuver effectively in the water, and recent years have seen great progress in understanding the hydrodynamics of aquatic locomotion. But studying freely-swimming fishes is challenging due to difficulties in controlling fish behavior. Mechanical models of aquatic locomotion have many advantages over studying live animals, including the ability to manipulate and control individual structural or kinematic factors, easier measurement of forces and torques, and the ability to abstract complex animal designs into simpler components. Such simplifications, while not without their drawbacks, facilitate interpretation of how individual traits alter swimming performance and the discovery of underlying physical principles. In this presentation I will discuss the use of a variety of mechanical models for fish locomotion, ranging from simple flexing panels to complex biomimetic designs incorporating flexible, actively moved, fin rays on multiple fins. Mechanical devices have provided great insight into the dynamics of aquatic propulsion and, integrated with studies of locomotion in freely-swimming fishes, provide new insights into how fishes move through the water.

  14. Electrospinning deposition of hydrogel fibers used as scaffold for biomembranes. Thermal stability of DPPC corroborated by ellipsometry.

    PubMed

    González-Henríquez, C M; Sarabia-Vallejos, M A

    2015-09-01

    DPPC bilayers were deposited over thin hydrogel scaffolds using the Langmuir-Blodgett technique (with DPPC thickness ∼ 6.2 nm). Wrinkled hydrogel films were used to maintain a moist environment in order to enhance DPPC bilayer stability. Polymer mixtures were prepared using HEMA (as a base monomer) and DEGDMA, PEGDA575, PEGDA700 or AAm (as crosslinking agents); a thermal initiator was added to obtain a final pre-hydrogel (oligomer) with an adequate viscosity for thin film formation. This mixture was deposited as wrinkled film/fibers over hydrophilic silicon wafers using an electrospinning technique. Later, these samples were exposed to UV light to trigger photopolymerization, generating crosslinking bonds between hydrogel chains; this process also generated remnant surface stresses in the films that favored wrinkle formation. In the cases where DEGDMA and AAm were used as crosslinking agents, HEMA was added in higher amounts. The resultant polymer film surface showed homogeneous layering with some small isolated clusters. When PEGDA575/700 was used as the crosslinking agent, we observed the formation of wrinkled polymer thin films, composed of main and secondary chains (of different dimensions). Moreover, water absorption and release were found to be mediated through surface morphology, ordering and film thickness. The thermal behavior of the biomembranes was examined using ellipsometry techniques under controlled heating cycles, allowing phases and phase transitions to be detected through slight thickness variations with respect to temperature. Atomic force microscopy was used to determine changes in surface roughness with temperature; the temperature was varied over a range sufficient to detect and record the DPPC phase limits. Contact angle measurements corroborated and quantified system wettability, supporting the theory that wrinkled hydrogel films act to enhance DPPC bilayer stability during thermal cycles.

  15. Complex Geometry Creation and Turbulent Conjugate Heat Transfer Modeling

    SciTech Connect

    Bodey, Isaac T; Arimilli, Rao V; Freels, James D

    2011-01-01

    The multiphysics capabilities of COMSOL provide the necessary tools to simulate the turbulent thermal-fluid aspects of the High Flux Isotope Reactor (HFIR). Versions 4.1 and later of COMSOL provide three different turbulence models: the standard k-ε closure model, the low Reynolds number (LRN) k-ε model, and the Spalart-Allmaras model. The LRN model meets the needs of the nominal HFIR thermal-hydraulic requirements for 2D and 3D simulations. COMSOL also has the capability to create complex geometries. The circular involute fuel plates used in the HFIR require the use of algebraic equations to generate an accurate geometrical representation in the simulation environment. The best-estimate simulation results show that the maximum fuel plate clad surface temperatures are lower than those predicted by the legacy thermal safety code used at HFIR by approximately 17 K. The best-estimate temperature distribution determined by COMSOL was then used to determine the necessary increase in the magnitude of the power density profile (PDP) to produce a clad surface temperature similar to that of the legacy thermal safety code. It was determined and verified that a 19% power increase was sufficient to bring the two temperature profiles into relatively good agreement.

  16. 3-D Numerical Modeling of a Complex Salt Structure

    SciTech Connect

    House, L.; Larsen, S.; Bednar, J.B.

    2000-02-17

    Reliably processing, imaging, and interpreting seismic data from areas with complicated structures, such as sub-salt, requires a thorough understanding of elastic as well as acoustic wave propagation. Elastic numerical modeling is an essential tool to develop that understanding. While 2-D elastic modeling is in common use, 3-D elastic modeling has been too computationally intensive to be used routinely. Recent advances in computing hardware, including commodity-based hardware, have substantially reduced computing costs. These advances are making 3-D elastic numerical modeling more feasible. A series of example 3-D elastic calculations were performed using a complicated structure, the SEG/EAGE salt structure. The synthetic traces show that the effects of shear wave propagation can be important for imaging and interpretation of images, and also for AVO and other applications that rely on trace amplitudes. Additional calculations are needed to better identify and understand the complex wave propagation effects produced in complicated structures, such as the SEG/EAGE salt structure.

  17. Wind Power Curve Modeling in Simple and Complex Terrain

    SciTech Connect

    Bulaevskaya, V.; Wharton, S.; Irons, Z.; Qualley, G.

    2015-02-09

    Our previous work on wind power curve modeling using statistical models focused on a location with moderately complex terrain in the Altamont Pass region in northern California (CA). The work described here is the follow-up to that work, but at a location with simple terrain in northern Oklahoma (OK). The goal of the present analysis was to determine the gain in predictive ability afforded by adding information beyond the hub-height wind speed, such as wind speeds at other heights, as well as other atmospheric variables, to the power prediction model at this new location, and to compare the results to those obtained at the CA site in the previous study. While we reach some of the same conclusions at both sites, many results reported for the CA site do not hold at the OK site. In particular, using the entire vertical profile of wind speeds improves the accuracy of wind power prediction relative to using the hub-height wind speed alone at both sites. However, in contrast to the CA site, the rotor equivalent wind speed (REWS) performs almost as well as the entire profile at the OK site. Another difference is that at the CA site, adding wind veer as a predictor significantly improved the power prediction accuracy. The same was true for that site when air density was added to the model separately instead of using the standard air density adjustment. At the OK site, these additional variables provide no significant benefit for prediction accuracy.
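
    The rotor equivalent wind speed (REWS) mentioned above is conventionally a cube-weighted average of the wind speeds measured at several heights, weighted by the rotor-disk area of each segment. The sketch below implements that standard definition, which may differ in detail from the exact formulation used in the study; the measurement heights and segment areas are hypothetical.

```python
import numpy as np

def rews(speeds, segment_areas):
    """Rotor equivalent wind speed: REWS = (sum_i w_i * u_i^3)^(1/3),
    where w_i is the fraction of the rotor disk swept by segment i."""
    w = np.asarray(segment_areas, float) / np.sum(segment_areas)
    return np.sum(w * np.asarray(speeds, float) ** 3) ** (1.0 / 3.0)

# toy usage: three measurement heights spanning the rotor
print(rews([7.2, 8.0, 8.6], segment_areas=[0.25, 0.5, 0.25]))
```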

  18. Stepwise building of plankton functional type (PFT) models: A feasible route to complex models?

    NASA Astrophysics Data System (ADS)

    Frede Thingstad, T.; Strand, Espen; Larsen, Aud

    2010-01-01

    We discuss the strategy of building models of the lower part of the planktonic food web in a stepwise manner: starting with few plankton functional types (PFTs) and adding resolution and complexity while carrying along the insight and results gained from simpler models. A central requirement for PFT models is that they allow sustained coexistence of the PFTs. Here we discuss how this identifies a need to consider predation, parasitism and defence mechanisms together with nutrient acquisition and competition. Although the stepwise addition of complexity is assumed to be useful and feasible, a rapid increase in complexity strongly calls for alternative approaches able to model emergent system-level features without a need for detailed representation of all the underlying biological detail.

  19. Complexity in mathematical models of public health policies: a guide for consumers of models.

    PubMed

    Basu, Sanjay; Andrews, Jason

    2013-10-01

    Sanjay Basu and colleagues explain how models are increasingly used to inform public health policy, yet readers may struggle to evaluate the quality of models. All models require simplifying assumptions, and there are tradeoffs between creating models that are more "realistic" versus those that are grounded in more solid data. Indeed, complex models are not necessarily more accurate or reliable simply because they can more easily fit real-world data than simpler models can.

  20. Wind Tunnel Modeling Of Wind Flow Over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Banks, D.; Cochran, B.

    2010-12-01

    This presentation will describe the findings of an atmospheric boundary layer (ABL) wind tunnel study conducted as part of the Bolund Experiment. This experiment was sponsored by Risø DTU (National Laboratory for Sustainable Energy, Technical University of Denmark) during the fall of 2009 to enable a blind comparison of various air flow models in an attempt to validate their performance in predicting airflow over complex terrain. Bolund hill sits 12 m above the water level at the end of a narrow isthmus. The island features a steep escarpment on one side, over which the airflow can be expected to separate. The island was equipped with several anemometer towers, and the approach flow over the water was well characterized. This study was one of only two physical model studies included in the blind model comparison, the other being a water plume study. The remainder were computational fluid dynamics (CFD) simulations, including both RANS and LES. Physical modeling of air flow over topographical features has been used since the middle of the 20th century, and the methods required are well understood and well documented. Several books have been written describing how to properly perform ABL wind tunnel studies, including ASCE manual of engineering practice 67. Boundary layer wind tunnel tests are the only modeling method deemed acceptable in ASCE 7-10, the most recent edition of the American Society of Civil Engineers standard that provides wind loads for buildings and other structures for building codes across the US. Since the 1970s, most tall structures have undergone testing in a boundary layer wind tunnel to accurately determine the wind-induced loading. When compared to CFD, the US EPA considers a properly executed wind tunnel study to be equivalent to a CFD model with infinitesimal grid resolution and near-infinite memory. One key reason for this widespread acceptance is that properly executed ABL wind tunnel studies will accurately simulate flow separation.

  1. A model for transgenerational imprinting variation in complex traits.

    PubMed

    Wang, Chenguang; Wang, Zhong; Luo, Jiangtao; Li, Qin; Li, Yao; Ahn, Kwangmi; Prows, Daniel R; Wu, Rongling

    2010-07-14

    Despite the fact that genetic imprinting, i.e., differential expression of the same allele due to its different parental origins, plays a pivotal role in controlling complex traits or diseases, the origin, action and transmission mode of imprinted genes have still remained largely unexplored. We present a new strategy for studying these properties of genetic imprinting with a two-stage reciprocal F2 mating design, initiated with two contrasting inbred lines. This strategy maps quantitative trait loci that are imprinted (i.e., iQTLs) based on their segregation and transmission across different generations. By incorporating the allelic configuration of an iQTL genotype into a mixture model framework, this strategy provides a path to trace the parental origin of alleles from previous generations. The imprinting effects of iQTLs and their interactions with other traditionally defined genetic effects, expressed in different generations, are estimated and tested by implementing the EM algorithm. The strategy was used to map iQTLs responsible for survival time with four reciprocal F2 populations and test whether and how the detected iQTLs inherit their imprinting effects into the next generation. The new strategy will provide a tool for quantifying the role of imprinting effects in the creation and maintenance of phenotypic diversity and elucidating a comprehensive picture of the genetic architecture of complex traits and diseases.
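
    The mixture-model and EM machinery underlying this strategy can be illustrated in its simplest form, a two-component Gaussian mixture fitted by EM; this generic sketch omits the allelic-configuration bookkeeping of the actual method, and all names are illustrative.

```python
import numpy as np

def normal_pdf(x, mu, sd):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def em_two_gaussians(x, iters=100):
    """Minimal EM for a two-component Gaussian mixture."""
    mu = np.percentile(x, [25, 75]).astype(float)   # crude initialization
    sd = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        like = pi * np.column_stack([normal_pdf(x, m, s) for m, s in zip(mu, sd)])
        resp = like / like.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
        pi = nk / len(x)
    return pi, mu, sd

# toy usage: a mixture of two phenotype distributions
x = np.concatenate([np.random.normal(0, 1, 300), np.random.normal(4, 1, 200)])
print(em_two_gaussians(x))
```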

  2. Modeling Cu2+-Aβ complexes from computational approaches

    NASA Astrophysics Data System (ADS)

    Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona

    2015-09-01

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  3. Complex dynamics in the Oregonator model with linear delayed feedback

    NASA Astrophysics Data System (ADS)

    Sriram, K.; Bernard, S.

    2008-06-01

    The Belousov-Zhabotinsky (BZ) reaction can display a rich dynamics when a delayed feedback is applied. We used the Oregonator model of the oscillating BZ reaction to explore the dynamics brought about by a linear delayed feedback. The time-delayed feedback can generate a succession of complex dynamics: period-doubling bifurcation route to chaos; amplitude death; fat, wrinkled, fractal, and broken tori; and mixed-mode oscillations. We observed that this dynamics arises due to a delay-driven transition, or toggling of the system between large and small amplitude oscillations, through a canard bifurcation. We used a combination of numerical bifurcation continuation techniques and other numerical methods to explore the dynamics in the strength of feedback-delay space. We observed that the period-doubling and quasiperiodic route to chaos span a low-dimensional subspace, perhaps due to the trapping of the trajectories in the small amplitude regime near the canard; and the trapped chaotic trajectories get ejected from the small amplitude regime due to a crowding effect to generate chaotic-excitable spikes. We also qualitatively explained the observed dynamics by projecting a three-dimensional phase portrait of the delayed dynamics on the two-dimensional nullclines. This is the first instance in which it is shown that the interaction of delay and canard can bring about complex dynamics.
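
    Numerically, a delayed feedback term only requires carrying a history buffer of the state. Below is an illustrative forward-Euler sketch of the two-variable Oregonator with a linear delayed feedback added to the activator equation; the parameter values and the precise feedback form are assumptions for illustration, not necessarily those of the paper.

```python
import numpy as np

# Two-variable Oregonator: eps*du/dt = u - u^2 - f*v*(u-q)/(u+q), dv/dt = u - v,
# with an added linear delayed feedback k*(u(t - tau) - u(t)) on the activator.
eps, f, q = 0.04, 1.0, 0.0008        # classical Oregonator scaling
k, tau = 0.2, 2.0                    # feedback strength and delay (illustrative)
dt, T = 0.0005, 30.0

n = int(T / dt)
lag = int(tau / dt)
u = np.full(n, 0.1)                  # array doubles as the constant history
v = np.full(n, 0.1)

for i in range(lag, n - 1):
    fb = k * (u[i - lag] - u[i])     # linear delayed feedback
    du = (u[i] - u[i]**2 - f * v[i] * (u[i] - q) / (u[i] + q)) / eps + fb
    dv = u[i] - v[i]
    u[i + 1] = u[i] + dt * du
    v[i + 1] = v[i] + dt * dv

print(u[-5:])                        # tail of the activator time series
```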

  4. Deposition parameterizations for the Industrial Source Complex (ISC3) model

    SciTech Connect

    Wesely, Marvin L.; Doskey, Paul V.; Shannon, J. D.

    2002-06-01

    Improved algorithms have been developed to simulate the dry and wet deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex version 3 (ISC3) model system. The dry deposition velocities (downward flux divided by concentration at a specified height) of the gaseous HAPs are modeled with algorithms adapted from existing dry deposition modules. The dry deposition velocities are described in a conventional resistance scheme, in which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. The role of soil moisture variations in affecting the uptake of gases through vegetative plant leaf stomata is assessed with the relative available soil moisture, which is estimated with a rudimentary budget of soil moisture content. Some of the procedures and equations are simplified to be commensurate with the type and extent of information on atmospheric and surface conditions available to the ISC3 model system user. For example, standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed that provides a means of evaluating the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves.
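
    The conventional resistance scheme referred to above expresses the dry deposition velocity as the reciprocal of aerodynamic, quasi-laminar and surface resistances acting in series. A minimal sketch with representative, hypothetical resistance values:

```python
def deposition_velocity(r_a, r_b, r_c):
    """Dry deposition velocity (m/s) from three serial resistances (s/m):
    aerodynamic (r_a), quasi-laminar sublayer (r_b) and bulk surface (r_c)."""
    return 1.0 / (r_a + r_b + r_c)

# toy usage: plausible daytime values over a vegetated surface
print(deposition_velocity(r_a=30.0, r_b=10.0, r_c=100.0))  # ~0.007 m/s
```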

  5. Atmospheric dispersion modelling over complex terrain at small scale

    NASA Astrophysics Data System (ADS)

    Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.

    2014-03-01

    A previous study, concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and the important surrounding topography at meso-scale (1:9000), revealed an area of interest for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion, as well as surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality at the populated areas is discussed. The measured data will be used for comparison with another model that takes into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be assessed.

  6. Simulation and Processing Seismic Data in Complex Geological Models

    NASA Astrophysics Data System (ADS)

    Forestieri da Gama Rodrigues, S.; Moreira Lupinacci, W.; Martins de Assis, C. A.

    2014-12-01

    Seismic simulations in complex geological models are useful for verifying some limitations of seismic data. In this project, different geological models were designed to analyze difficulties encountered in the interpretation of seismic data. Another aim is to make these data available to LENEP/UENF students to test new tools to assist in seismic data processing. The geological models were created considering characteristics found in oil exploration. We simulated geological media with volcanic intrusions, salt domes, faults, pinch-outs and layers at greater depths (Kanao, 2012). We used the software Tesseral Pro to simulate the seismic acquisitions. The acquisition geometries simulated were of the common offset, end-on and split-spread types. Data acquired with constant offset require fewer processing routines. The processing flow, using tools available in the Seismic Unix package (for more details, see Pennington et al., 2005), consisted of geometric spreading correction, deconvolution, attenuation correction and post-stack depth migration. In processing the data acquired with end-on and split-spread geometries, we included velocity analysis and NMO correction routines. Although we analyzed synthetic data and carefully applied each processing routine, we observed some limitations of seismic reflection in imaging thin layers, layers at great depth, layers with low impedance contrast, and faults.

  7. A resistive force model for complex intrusion in granular media

    NASA Astrophysics Data System (ADS)

    Zhang, Tingnan; Li, Chen; Goldman, Daniel

    2012-11-01

    Intrusion forces in granular media (GM) are best understood for simple shapes (like disks and rods) undergoing vertical penetration and horizontal drag. Inspired by a resistive force theory for sand-swimming, we develop a new two-dimensional resistive force model for intruders of arbitrary shape and intrusion path into GM in the vertical plane. We divide an intruder of complex geometry into small segments and approximate segmental forces by measuring forces on small flat plates in experiments. Both lift and drag forces on the plates are proportional to penetration depth, and depend sensitively on the angle of attack and the direction of motion. Summation of segmental forces over the intruder predicts the net forces on a c-leg, a flat leg, and a reversed c-leg rotated into GM about a fixed axle. The stress profiles are similar for GM of different particle sizes, densities, coefficients of friction, and volume fractions. We propose a universal scaling law applicable to all tested GM. By combining the new force model with a multi-body simulator, we can also predict the locomotion dynamics of a small legged robot on GM. Our force laws can provide a strict test of hydrodynamic-like approaches to model dense granular flows.
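
    The segmental summation at the core of the model can be sketched as follows. The per-unit-depth stress coefficients, which in the actual model are measured as functions of attack angle and motion direction, are hypothetical placeholders here, as are the function and variable names.

```python
def rft_force(segments):
    """Sum per-segment resistive forces on an intruder in granular media.
    Each segment is (area, depth, alpha_z, alpha_x): area in m^2, depth in m,
    and empirical lift/drag stresses per unit depth (Pa/m), which in the
    experiments depend on attack angle and direction of motion."""
    fz = fx = 0.0
    for area, depth, alpha_z, alpha_x in segments:
        fz += alpha_z * depth * area   # lift force grows linearly with depth
        fx += alpha_x * depth * area   # drag force grows linearly with depth
    return fz, fx

# toy usage: a "c-leg" approximated by three flat-plate segments
segments = [(1e-4, 0.02, 2.0e5, 1.5e5),
            (1e-4, 0.03, 1.8e5, 1.6e5),
            (1e-4, 0.04, 1.2e5, 1.9e5)]
print(rft_force(segments))
```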

  8. Neurocomputational Model of EEG Complexity during Mind Wandering

    PubMed Central

    Ibáñez-Molina, Antonio J.; Iglesias-Parro, Sergio

    2016-01-01

    Mind wandering (MW) can be understood as a transient state in which attention drifts from an external task to internal self-generated thoughts. MW has been associated with the activation of the Default Mode Network (DMN). In addition, it has been shown that the activity of the DMN is anti-correlated with activation in brain networks related to the processing of external events (e.g., Salience network, SN). In this study, we present a mean field model based on weakly coupled Kuramoto oscillators. We simulated the oscillatory activity of the entire brain and explored the role of the interaction between the nodes from the DMN and SN in MW states. External stimulation was added to the network model in two opposite conditions. Stimuli could be presented when oscillators in the SN showed more internal coherence (synchrony) than in the DMN, or, on the contrary, when the coherence in the SN was lower than in the DMN. The resulting phases of the oscillators were analyzed and used to simulate EEG signals. Our results showed that the structural complexity from both simulated and real data was higher when the model was stimulated during periods in which DMN was more coherent than the SN. Overall, our results provided a plausible mechanistic explanation to MW as a state in which high coherence in the DMN partially suppresses the capacity of the system to process external stimuli. PMID:26973505
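
    The mechanism can be illustrated with two weakly coupled Kuramoto modules standing in for the DMN and SN, with per-module coherence measured by the usual order parameter. Module sizes, couplings and natural frequencies below are illustrative, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

def order_parameter(theta):
    """Kuramoto coherence r = |<exp(i*theta)>| of one module."""
    return np.abs(np.mean(np.exp(1j * theta)))

# two modules ("DMN" and "SN") of 50 oscillators each
n, dt, steps = 50, 0.01, 5000
k_in, k_between = 1.5, 0.3                       # intra-/inter-module coupling
omega = rng.normal(10.0, 1.0, (2, n))            # natural frequencies
theta = rng.uniform(0, 2 * np.pi, (2, n))

for _ in range(steps):
    mean_field = np.exp(1j * theta).mean(axis=1)  # complex order parameter per module
    for m in range(2):
        other = 1 - m
        # K * r * sin(psi - theta) written as the imaginary part of r*e^{i(psi-theta)}
        coupling = (k_in * np.imag(mean_field[m] * np.exp(-1j * theta[m]))
                    + k_between * np.imag(mean_field[other] * np.exp(-1j * theta[m])))
        theta[m] = theta[m] + dt * (omega[m] + coupling)

print("coherence DMN, SN:", order_parameter(theta[0]), order_parameter(theta[1]))
```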

  9. Neurocomputational Model of EEG Complexity during Mind Wandering.

    PubMed

    Ibáñez-Molina, Antonio J; Iglesias-Parro, Sergio

    2016-01-01

    Mind wandering (MW) can be understood as a transient state in which attention drifts from an external task to internal self-generated thoughts. MW has been associated with the activation of the Default Mode Network (DMN). In addition, it has been shown that the activity of the DMN is anti-correlated with activation in brain networks related to the processing of external events (e.g., Salience network, SN). In this study, we present a mean field model based on weakly coupled Kuramoto oscillators. We simulated the oscillatory activity of the entire brain and explored the role of the interaction between the nodes from the DMN and SN in MW states. External stimulation was added to the network model in two opposite conditions. Stimuli could be presented when oscillators in the SN showed more internal coherence (synchrony) than in the DMN, or, on the contrary, when the coherence in the SN was lower than in the DMN. The resulting phases of the oscillators were analyzed and used to simulate EEG signals. Our results showed that the structural complexity from both simulated and real data was higher when the model was stimulated during periods in which DMN was more coherent than the SN. Overall, our results provided a plausible mechanistic explanation to MW as a state in which high coherence in the DMN partially suppresses the capacity of the system to process external stimuli.

  10. Integrated modeling tool for performance engineering of complex computer systems

    NASA Technical Reports Server (NTRS)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  11. Methods of Information Geometry to model complex shapes

    NASA Astrophysics Data System (ADS)

    De Sanctis, A.; Gattone, S. A.

    2016-09-01

    In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From Information Geometry we know that the Fisher-Rao metric endows the statistical manifold of parameters of a family of probability distributions with a Riemannian metric. This approach thus allows one to reconstruct the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape prediction. As an application, we study the evolution of the rat skull shape. A future application in ophthalmology is introduced.
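
    For orientation, the Fisher-Rao line element in the simplest, univariate Gaussian case N(μ, σ²) is shown below; the paper itself works with bivariate Gaussians per landmark, for which the metric generalizes accordingly.

```latex
% Fisher-Rao line element on the manifold of univariate Gaussians N(mu, sigma^2),
% obtained from the Fisher information entries I_mu = 1/sigma^2 and I_sigma = 2/sigma^2:
\begin{equation*}
  ds^{2} \;=\; \frac{d\mu^{2} + 2\, d\sigma^{2}}{\sigma^{2}}.
\end{equation*}
```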

  12. Random field Ising model and community structure in complex networks

    NASA Astrophysics Data System (ADS)

    Son, S.-W.; Jeong, H.; Noh, J. D.

    2006-04-01

    We propose a method to determine the community structure of a complex network. In this method the ground state problem of a ferromagnetic random field Ising model is considered on the network with the magnetic fields Bs = +∞, Bt = -∞, and Bi = 0 for all other nodes i ≠ s, t, for a node pair s and t. The ground state problem is equivalent to the so-called maximum flow problem, which can be solved exactly by numerical means with the help of a combinatorial optimization algorithm. The community structure is then identified from the ground state Ising spin domains for all pairs of s and t. Our method provides a criterion for the existence of the community structure, and is applicable equally well to unweighted and weighted networks. We demonstrate the performance of the method by applying it to the Barabási-Albert network, the Zachary karate club network, the scientific collaboration network, and the stock price correlation network.
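
    Since the ground state problem with pinned spins at s and t reduces to an s-t minimum cut, the method can be sketched with any max-flow solver. Below is a toy illustration using networkx on a hypothetical weighted graph with two densely connected groups joined by a weak bridge.

```python
import networkx as nx

# Edge weights play the role of ferromagnetic couplings; the s-t minimum cut
# splits the network into the two ground-state spin domains (communities).
G = nx.Graph()
G.add_weighted_edges_from([
    (0, 1, 1.0), (1, 2, 1.0), (0, 2, 1.0),   # community A
    (3, 4, 1.0), (4, 5, 1.0), (3, 5, 1.0),   # community B
    (2, 3, 0.1),                             # weak bridge between them
])

cut_value, (side_s, side_t) = nx.minimum_cut(G, 0, 5, capacity="weight")
print(cut_value, sorted(side_s), sorted(side_t))  # 0.1 [0, 1, 2] [3, 4, 5]
```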

  13. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). Probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied.
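
    The hierarchical factorization described here can be written compactly as below; the symbols (lithology layer L, physical properties m, and independent surveys d_1, ..., d_K) are shorthand for the layered components discussed in the abstract, not the author's own notation.

```latex
% Posterior PDF for layered model components and independent surveys:
% the prior factorizes down the hierarchy, the likelihood across surveys.
\begin{equation*}
  p(L, m \mid d_{1}, \dots, d_{K})
  \;\propto\;
  p(L)\, p(m \mid L) \prod_{k=1}^{K} p(d_{k} \mid m).
\end{equation*}
```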

  14. Complex events in a fault model with interacting asperities

    NASA Astrophysics Data System (ADS)

    Dragoni, Michele; Tallarico, Andrea

    2016-08-01

    The dynamics of a fault with heterogeneous friction is studied by employing a discrete fault model with two asperities of different strengths. The average values of stress, friction and slip on each asperity are considered, and the state of the fault is described by the slip deficits of the asperities as functions of time. The fault has three different slipping modes, corresponding to the asperities slipping one at a time or simultaneously. Any seismic event produced by the fault is a sequence of n slipping modes. According to initial conditions, seismic events can be different sequences of slipping modes, implying different moment rates and seismic moments. Each event can be represented geometrically in the state space by an orbit that is the union of n damped Lissajous curves. We focus our interest on events that are sequences of two or more slipping modes: they show a complex stress interchange between the asperities and a complex temporal pattern of slip rate. The initial stress distribution producing these events is not uniform on the fault. We calculate the stress drop, the moment rate and the frequency spectrum of the events, showing how these quantities depend on initial conditions. These events have the greatest seismic moments that can be produced by fault slip. As an example, we model the moment rate of the 1992 Landers, California, earthquake, which can be described as the consecutive failure of two asperities, one with twice the strength of the other, and evaluate the evolution of the stress distribution on the fault during the event.

  15. Construction of Lyapunov functions for some models of infectious diseases in vivo: from simple models to complex models.

    PubMed

    Kajiwara, Tsuyoshi; Sasaki, Toru; Takeuchi, Yasuhiro

    2015-02-01

    We present a constructive method for Lyapunov functions for ordinary differential equation models of infectious diseases in vivo. We consider models derived from the Nowak-Bangham models. We construct Lyapunov functions for complex models using those of simpler models. In particular, we construct Lyapunov functions for models with an immune variable from those for models without an immune variable, and a Lyapunov function for a model with an absorption effect from that for a model without an absorption effect. We make the construction explicit for Lyapunov functions proposed previously, and present new results obtained with our method.
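
    For concreteness, a standard Volterra-type Lyapunov function for the basic three-variable in-vivo infection model (target cells x, infected cells y, free virus v) is shown below; this is a textbook building block of the kind such constructions compose, not necessarily the paper's exact function.

```latex
% Basic in-vivo infection model and a standard Volterra-type Lyapunov
% function for its infected equilibrium (x*, y*, v*):
\begin{align*}
  \dot{x} &= \lambda - d x - \beta x v, \qquad
  \dot{y} = \beta x v - a y, \qquad
  \dot{v} = k y - u v,\\
  V &= \Bigl(x - x^{*} - x^{*}\ln\tfrac{x}{x^{*}}\Bigr)
     + \Bigl(y - y^{*} - y^{*}\ln\tfrac{y}{y^{*}}\Bigr)
     + \frac{a}{k}\Bigl(v - v^{*} - v^{*}\ln\tfrac{v}{v^{*}}\Bigr).
\end{align*}
```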

  16. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools both for spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all N possible models [in the case of wind fields, N = 2^13 - 1 = 8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
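
    A GRNN prediction is essentially a Nadaraya-Watson kernel regression, and the adaptive, anisotropic variant amounts to one bandwidth per input feature, so that a large bandwidth effectively switches an irrelevant feature off. A minimal sketch, with data, bandwidths and names chosen purely for illustration:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, bandwidths):
    """Nadaraya-Watson kernel estimate with an anisotropic Gaussian kernel:
    one bandwidth per input dimension, as in adaptive GRNN."""
    h = np.asarray(bandwidths, float)
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum(((X_train - x) / h) ** 2, axis=1)  # scaled squared distances
        w = np.exp(-0.5 * d2)                          # Gaussian kernel weights
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

# toy usage: y depends on feature 0 only; a large bandwidth on feature 1
# effectively removes that feature from the model
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)
print(grnn_predict(X, y, [[0.5, 0.0]], bandwidths=[0.1, 10.0]))
```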

  17. Thermophysical Model of S-complex NEAs: 1627 Ivar

    NASA Astrophysics Data System (ADS)

    Crowell, Jenna L.; Howell, Ellen S.; Magri, Christopher; Fernandez, Yan R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J.

    2015-11-01

    We present updates to the thermophysical model of asteroid 1627 Ivar. Ivar is an Amor class near Earth asteroid (NEA) with a taxonomic type of Sqw [1] and a rotation period of 4.795162 ± 5.4 × 10-6 hours [2]. In 2013, our group observed Ivar in radar, in CCD lightcurves, and in the near-IR's reflected and thermal regimes (0.8 - 4.1 µm) using the Arecibo Observatory's 2380 MHz radar, the Palmer Divide Station's 0.35 m telescope, and the SpeX instrument at the NASA IRTF, respectively. Using these radar and lightcurve data, we generated a detailed shape model of Ivar using the software SHAPE [3,4]. Our shape model reveals more surface detail compared to earlier models [5], and we found Ivar to be an elongated asteroid with the maximum extended lengths along the three body-fixed coordinates being 12 x 11.76 x 6 km. For our thermophysical modeling, we have used SHERMAN [6,7] with input parameters such as the asteroid's IR emissivity, optical scattering law and thermal inertia, in order to complete thermal computations based on our shape model and the known spin state. We then create synthetic near-IR spectra that can be compared to our observed spectra, which cover a wide range of Ivar's rotational longitudes and viewing geometries. As has been noted [6,8], the use of an accurate shape model is often crucial for correctly interpreting multi-epoch thermal emission observations. We will present what SHERMAN has let us determine about the reflective, thermal, and surface properties for Ivar that best reproduce our spectra. From our derived best-fit thermal parameters, we will learn more about the regolith, surface properties, and heterogeneity of Ivar and how those properties compare to those of other S-complex asteroids. References: [1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Crowell, J. et al. 2015, LPSC 46 [3] Magri C. et al. 2007, Icarus 186, 152-177 [4] Crowell, J. et al. 2014, AAS/DPS 46 [5] Kaasalainen, M. et al. 2004, Icarus 167, 178-196 [6] Crowell, J. et

  18. Overproduction of a potential red pigment by a specific self-immobilization biomembrane-surface liquid culture of Penicillium novae-zeelandiae.

    PubMed

    Hailei, Wang; Ping, Li; Yufeng, Liu; Zhifang, Ren; Gang, Wang

    2012-10-01

    A specific self-immobilization biomembrane-surface liquid culture (SIBSLC) was developed to overproduce a potential Penicillium red pigment. Statistical analysis shows that both glucose concentration and membrane diameter are important factors influencing the yield of red pigment. After optimization using a central composite experimental design, the maximum yield of red pigment in shake flasks reached 4.25 g/l. The growth of strain HSD07B consists of three phases, and the pigment, secreted in the deceleration phase, originates from the interior of the biomembrane, where glucose exhaustion occurs. In addition, batch and continuous SIBSLC were conducted for production of the pigment, and the latter was more competitive given that it not only increased pigment productivity by 61.5% but also simplified the production process. Moreover, the pigment produced by SIBSLC is potentially acceptable for food applications, although it differs in composition from the co-cultured red pigment we reported previously. PMID:22476766

  19. The complexity of model checking for belief revision and update

    SciTech Connect

    Liberatore, P.; Schaerf, M.

    1996-12-31

    One of the main challenges in the formal modeling of common-sense reasoning is the ability to cope with the dynamic nature of the world. Among the approaches put forward to address this problem are belief revision and update. Given a knowledge base T, representing our knowledge of the "state of affairs" of the world of interest, it is possible that we are led to trust another piece of information P, possibly inconsistent with the old one T. The aim of revision and update operators is to characterize the revised knowledge base T' that incorporates the new formula P into the old one T while preserving consistency and, at the same time, avoiding the loss of too much information in this process. In this paper we study the computational complexity of one of the main computational problems of belief revision and update: deciding if an interpretation M is a model of the revised knowledge base.

  20. Reliable modeling of the electronic spectra of realistic uranium complexes

    NASA Astrophysics Data System (ADS)

    Tecmer, Paweł; Govind, Niranjan; Kowalski, Karol; de Jong, Wibe A.; Visscher, Lucas

    2013-07-01

    We present an EOMCCSD (equation of motion coupled cluster with singles and doubles) study of excited states of the small [UO2]2+ and [UO2]+ model systems as well as the larger UVIO2(saldien) complex. In addition, the triples contribution within the EOMCCSDT and CR-EOMCCSD(T) (completely renormalized EOMCCSD with non-iterative triples) approaches for the [UO2]2+ and [UO2]+ systems as well as the active-space variant of the CR-EOMCCSD(T) method—CR-EOMCCSd(t)—for the UVIO2(saldien) molecule are investigated. The coupled cluster data were employed as benchmark to choose the "best" appropriate exchange-correlation functional for subsequent time-dependent density functional (TD-DFT) studies on the transition energies for closed-shell species. Furthermore, the influence of the saldien ligands on the electronic structure and excitation energies of the [UO2]+ molecule is discussed. The electronic excitations as well as their oscillator dipole strengths modeled with TD-DFT approach using the CAM-B3LYP exchange-correlation functional for the [UVO2(saldien)]- with explicit inclusion of two dimethyl sulfoxide molecules are in good agreement with the experimental data of Takao et al. [Inorg. Chem. 49, 2349 (2010), 10.1021/ic902225f].

  1. Electromagnetic modelling of Ground Penetrating Radar responses to complex targets

    NASA Astrophysics Data System (ADS)

    Pajewski, Lara; Giannopoulos, Antonis

    2014-05-01

    This work deals with the electromagnetic modelling of composite structures for Ground Penetrating Radar (GPR) applications. It was developed within the Short-Term Scientific Mission ECOST-STSM-TU1208-211013-035660, funded by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The authors define a set of test concrete structures, hereinafter called cells. The size of each cell is 60 x 100 x 18 cm and the content varies with growing complexity, from a simple cell with a few rebars of different diameters embedded in concrete at increasing depths, to a final cell with a quite complicated pattern, including a layer of tendons between two overlying meshes of rebars. Other cells, of intermediate complexity, contain PVC ducts (air filled or hosting rebars), steel objects commonly used in civil engineering (such as a pipe, an angle bar, a box section and a u-channel), as well as void and honeycombing defects. One of the cells has a steel mesh embedded in it, overlying two rebars placed diagonally across the corners of the structure. Two cells include a couple of rebars bent into a right angle and placed on top of each other, with a square/round circle lying at the base of the concrete slab. Inspiration for some of these cells is taken from the very interesting experimental work presented in Ref. [1]. For each cell, a subset of models with growing complexity is defined, starting from a simple representation of the cell and ending with a more realistic one. In particular, the model's complexity increases from the geometrical point of view, as well as in terms of how the constitutive parameters of the involved media and the GPR antennas are described. Some cells can be simulated in both two and three dimensions; the concrete slab can be approximated as a finite-thickness layer of infinite extension in the transverse plane, thus neglecting how edges affect radargrams, or else its finite size can be fully taken into account. The permittivity of concrete can be

  2. Reliable Modeling of the Electronic Spectra of Realistic Uranium Complexes

    SciTech Connect

    Tecmer, Pawel; Govind, Niranjan; Kowalski, Karol; De Jong, Wibe A.; Visscher, Lucas

    2013-07-21

    We present an EOMCCSD (equation of motion coupled cluster with singles and doubles) study of excited states of the small [UO2]2+ and [UO2]+ model systems as well as the larger UVIO2(saldien) complex. In addition, the triples contribution within the EOMCCSDT and CR-EOMCCSD(T) (completely renormalized EOMCCSD with non-iterative triples) approaches for the [UO2]2+ and [UO2]+ systems, as well as the active-space variant of the CR-EOMCCSD(T) method, CR-EOMCCSd(t), for the UVIO2(saldien) molecule, are investigated. The coupled cluster data were employed as a benchmark to choose the "best" appropriate exchange-correlation functional for subsequent time-dependent density functional (TD-DFT) studies on the transition energies for closed-shell species. Furthermore, the influence of the saldien ligands on the electronic structure and excitation energies of the [UO2]+ molecule is discussed. The electronic excitations as well as their oscillator dipole strengths modeled with the TD-DFT approach using the CAM-B3LYP exchange-correlation functional for the [UVO2(saldien)]- with explicit inclusion of two DMSO molecules are in good agreement with the experimental data of Takao et al. [Inorg. Chem. 49, 2349-2359 (2010)].

  3. Chitosan and alginate types of bio-membrane in fuel cell application: An overview

    NASA Astrophysics Data System (ADS)

    Shaari, N.; Kamarudin, S. K.

    2015-09-01

    The major problems of polymer electrolyte membrane fuel cell technology that need to be highlighted are fuel crossover (e.g., methanol or hydrogen leaking across fuel cell membranes), CO poisoning, low durability, and high cost. Chitosan- and alginate-based biopolymer membranes have recently been used to address these problems with promising results. Current research in biopolymer membrane materials and systems has focused on the following: 1) the development of novel and efficient biopolymer materials; and 2) increasing the processing capacity of membrane operations. Consequently, chitosan- and alginate-based biopolymers are being used to enhance fuel cell performance by improving proton conductivity and membrane durability, and by reducing fuel crossover and electro-osmotic drag. There are four groups of chitosan-based membranes (categorized according to their reaction and preparation): self-cross-linked and salt-complexed chitosans, chitosan-based polymer blends, chitosan/inorganic filler composites, and chitosan/polymer composites. Only three alginate-based membranes have been synthesized for fuel cell application. This work aims to review the state of the art in the development of chitosan- and alginate-based biopolymer membranes for fuel cell applications.

  4. Combined effect of cortical cytoskeleton and transmembrane proteins on domain formation in biomembranes

    PubMed Central

    Sikder, Md. Kabir Uddin; Stone, Kyle A.; Kumar, P. B. Sunil; Laradji, Mohamed

    2014-01-01

    We investigate the combined effects of transmembrane proteins and the subjacent cytoskeleton on the dynamics of phase separation in multicomponent lipid bilayers, using computer simulations of a particle-based implicit solvent model for lipid membranes with soft-core interactions. We find that microphase separation can be achieved through the confinement of the transmembrane proteins by the cytoskeleton. Our results are relevant to the finite size of lipid rafts in the plasma membrane of mammalian cells. PMID:25106608

  5. Combined effect of cortical cytoskeleton and transmembrane proteins on domain formation in biomembranes

    NASA Astrophysics Data System (ADS)

    Sikder, Md. Kabir Uddin; Stone, Kyle A.; Kumar, P. B. Sunil; Laradji, Mohamed

    2014-08-01

    We investigate the combined effects of transmembrane proteins and the subjacent cytoskeleton on the dynamics of phase separation in multicomponent lipid bilayers, using computer simulations of a particle-based implicit-solvent model for lipid membranes with soft-core interactions. We find that microphase separation can be achieved through confinement of the transmembrane proteins by the cytoskeleton. Our results have relevance to the finite size of lipid rafts in the plasma membrane of mammalian cells.
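
    The "soft-core interactions" mentioned here refer to bounded, short-ranged pair potentials typical of implicit-solvent membrane models; a minimal sketch of one common functional form (a quadratic soft repulsion, chosen for illustration and not necessarily the exact potential used by the authors) follows.

        import numpy as np

        def soft_core_repulsion(r, epsilon=1.0, sigma=1.0):
            """Bounded soft-core pair potential U(r) = eps*(1 - r/sigma)^2 for r < sigma.

            Unlike Lennard-Jones, U stays finite at r = 0, which permits the
            large time steps that make particle-based implicit-solvent
            membrane simulations efficient.  Illustrative form only.
            """
            r = np.asarray(r, dtype=float)
            return np.where(r < sigma, epsilon * (1.0 - r / sigma) ** 2, 0.0)

        r = np.linspace(0.0, 1.5, 7)
        print(soft_core_repulsion(r))  # finite everywhere, zero beyond sigma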

  6. Atmospheric Modelling for Air Quality Study over the complex Himalayas

    NASA Astrophysics Data System (ADS)

    Surapipith, Vanisa; Panday, Arnico; Mukherji, Aditi; Banmali Pradhan, Bidya; Blumer, Sandro

    2014-05-01

    An atmospheric modelling system has been set up at the International Centre for Integrated Mountain Development (ICIMOD) for the assessment of air quality across the Himalayan mountain ranges. The Weather Research and Forecasting (WRF) model version 3.5 has been implemented over a regional domain stretching across 4995 x 4455 km2, centred at Ichhyakamana, ICIMOD's newly established mountain-peak station (1860 m) in central Nepal, and covering terrain from sea level to Everest (8848 m). The simulation covers the winter period December 2012 to February 2013, coinciding with the intensive SusKat field campaign, in which at least 7 supersites collected meteorological and chemical parameters at various locations. The very complex terrain requires a high horizontal resolution (1 × 1 km2), which is achieved by nesting the domain of interest, e.g. the Kathmandu Valley, into 3 coarser domains (27, 9, and 3 km resolution). Model validation is performed against the field data as well as satellite data, and the challenge of capturing the necessary atmospheric processes is discussed, before moving forward with the fully coupled chemistry module (WRF-Chem), with local and regional emission databases as input. The effort aims at a better understanding of atmospheric processes and the air quality impact on the mountain population, as well as the impact of long-range transport, particularly of black carbon aerosol deposition, on the radiative budget over the Himalayan glaciers. The higher rate of snow-cap melting and the shrinkage of permafrost noted by glaciologists are a concern. Better prediction will supply crucial information for forming proper mitigation and adaptation strategies to protect lives across the Himalayas in a changing climate.
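
    The 27/9/3/1 km telescoping setup corresponds to nested domains with a parent-to-child refinement ratio of 3 per level; a quick check of that arithmetic (an illustrative calculation, not an actual WRF configuration) is:

        # Nested-domain grid spacings with a refinement ratio of 3 per level,
        # matching the 27, 9, 3 km parents around a 1 km innermost nest.
        outer_dx_km = 27.0
        ratio = 3
        resolutions = [outer_dx_km / ratio**level for level in range(4)]
        print(resolutions)  # [27.0, 9.0, 3.0, 1.0]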

  7. Thermophysical Model of S-complex NEAs: 1627 Ivar

    NASA Astrophysics Data System (ADS)

    Crowell, Jenna; Howell, Ellen S.; Magri, Christopher; Fernandez, Yanga R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J., Jr.

    2016-01-01

    We present an updated thermophysical model of 1627 Ivar, an Amor-class near-Earth asteroid (NEA) with a taxonomic type of Sqw [1]. Ivar's large size and close approach to Earth in 2013 (minimum distance 0.32 AU) provided an opportunity to observe the asteroid over many different viewing angles for an extended period of time, which we have utilized to generate a shape and thermophysical model of Ivar, allowing us to discuss the implications that these results have for the regolith of this asteroid. Using the software SHAPE [2,3], we updated the nonconvex shape model of Ivar, which was constructed by Kaasalainen et al. [4] using photometry. We incorporated 2013 radar data and CCD lightcurves, obtained with the Arecibo Observatory's 2380 MHz radar and the 0.35 m telescope at the Palmer Divide Station respectively, to create a shape model with higher surface detail. We found Ivar to be elongated, with maximum extended lengths along the principal axes of 12 x 5 x 6 km and a rotation period of 4.795162 ± 5.4 x 10^-6 hr [5]. In addition to these radar data and lightcurves, we also observed Ivar in the near-IR using the SpeX instrument at the NASA IRTF. These data cover a wide range of Ivar's rotational longitudes and viewing geometries. We have used SHERMAN [6,7] with input parameters such as the asteroid's IR emissivity, optical scattering law, and thermal inertia to complete thermal computations based on our shape model and known spin state. Using this procedure, we find which reflective, thermal, and surface properties best reproduce the observed spectra. This allows us to characterize properties of the asteroid's regolith and study heterogeneity of the surface. We will compare these results with those of other S-complex asteroids to better understand this asteroid type and the uniqueness of 1627 Ivar. [1] DeMeo et al. 2009, Icarus 202, 160-180. [2] Magri, C. et al. 2011, Icarus 214, 210-227. [3] Crowell, J. et al. 2014, AAS/DPS 46. [4] Kaasalainen, M. et al. 2004, Icarus 167, 178
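
    Thermophysical codes of this kind balance absorbed sunlight against thermal emission. As a point of orientation, the subsolar equilibrium temperature follows from a generic standard-thermal-model estimate (the sketch below is illustrative and is not the SHERMAN calculation; all parameter values are placeholders, not fitted properties of Ivar):

        def subsolar_temperature(r_au, bond_albedo=0.1, emissivity=0.9, eta=1.0):
            """Equilibrium subsolar temperature T = [(1-A)*S / (eta*eps*sigma)]^0.25.

            Generic estimate; albedo, emissivity, and the beaming parameter
            eta here are illustrative, not values for 1627 Ivar.
            """
            S = 1361.0 / r_au**2   # solar flux at r_au (W/m^2)
            sigma = 5.670e-8       # Stefan-Boltzmann constant (W/m^2/K^4)
            return ((1.0 - bond_albedo) * S / (eta * emissivity * sigma)) ** 0.25

        print(subsolar_temperature(1.5))  # roughly 320 K for these inputs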

  8. A New Approach to Modelling Student Retention through an Application of Complexity Thinking

    ERIC Educational Resources Information Center

    Forsman, Jonas; Linder, Cedric; Moll, Rachel; Fraser, Duncan; Andersson, Staffan

    2014-01-01

    Complexity thinking is relatively new to education research and has rarely been used to examine complex issues in physics and engineering education. Issues in higher education such as student retention have been approached from a multiplicity of perspectives and are recognized as complex. The complex system of student retention modelling in higher…

  9. Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication

    NASA Astrophysics Data System (ADS)

    Thompson, Kimberly M.

    Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.
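
    The combination of a dynamic transmission model with probabilistic inputs can be illustrated in miniature by a toy SIR system whose basic reproduction number is sampled from an uncertainty distribution (a schematic sketch only, not the polio model discussed in the talk; the parameter ranges are invented):

        import random

        def sir_epidemic_size(r0, gamma=0.1, dt=0.1, steps=2000):
            """Integrate a toy SIR model and return the final attack rate."""
            s, i, r = 0.999, 0.001, 0.0
            beta = r0 * gamma
            for _ in range(steps):
                new_inf = beta * s * i * dt
                new_rec = gamma * i * dt
                s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            return r

        # Probabilistic input: uncertainty in R0 propagated by Monte Carlo.
        random.seed(1)
        sizes = [sir_epidemic_size(random.uniform(4.0, 8.0)) for _ in range(200)]
        print(min(sizes), max(sizes))  # range of plausible attack rates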

  10. Measuring fast stochastic displacements of bio-membranes with dynamic optical displacement spectroscopy.

    PubMed

    Monzel, C; Schmidt, D; Kleusch, C; Kirchenbüchler, D; Seifert, U; Smith, A-S; Sengupta, K; Merkel, R

    2015-01-01

    Stochastic displacements or fluctuations of biological membranes are increasingly recognized as an important aspect of many physiological processes, but hitherto their precise quantification in living cells was limited by a lack of tools to accurately record them. Here we introduce a novel technique, dynamic optical displacement spectroscopy (DODS), to measure stochastic displacements of membranes with an unprecedented combined spatiotemporal resolution of 20 nm and 10 μs. The technique was validated by measuring bending fluctuations of model membranes. DODS was then used to explore the fluctuations in human red blood cells, which showed an ATP-induced enhancement of non-Gaussian behaviour. Plasma membrane fluctuations of human macrophages were quantified to this accuracy for the first time. Stimulation with a cytokine enhanced non-Gaussian contributions to these fluctuations. Simplicity of implementation and high accuracy make DODS a promising tool for the comprehensive understanding of stochastic membrane processes. PMID:26437911

  11. Measuring fast stochastic displacements of bio-membranes with dynamic optical displacement spectroscopy

    PubMed Central

    Monzel, C.; Schmidt, D.; Kleusch, C.; Kirchenbüchler, D.; Seifert, U.; Smith, A-S; Sengupta, K.; Merkel, R.

    2015-01-01

    Stochastic displacements or fluctuations of biological membranes are increasingly recognized as an important aspect of many physiological processes, but hitherto their precise quantification in living cells was limited by a lack of tools to accurately record them. Here we introduce a novel technique, dynamic optical displacement spectroscopy (DODS), to measure stochastic displacements of membranes with an unprecedented combined spatiotemporal resolution of 20 nm and 10 μs. The technique was validated by measuring bending fluctuations of model membranes. DODS was then used to explore the fluctuations in human red blood cells, which showed an ATP-induced enhancement of non-Gaussian behaviour. Plasma membrane fluctuations of human macrophages were quantified to this accuracy for the first time. Stimulation with a cytokine enhanced non-Gaussian contributions to these fluctuations. Simplicity of implementation and high accuracy make DODS a promising tool for the comprehensive understanding of stochastic membrane processes. PMID:26437911
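
    The non-Gaussian behaviour reported here is commonly quantified by the excess kurtosis of the displacement distribution, which vanishes for a Gaussian process; a minimal sketch of that diagnostic (an illustrative analysis with synthetic data, not the authors' pipeline) is:

        import numpy as np

        def excess_kurtosis(x):
            """Excess kurtosis of a displacement time series (0 for a Gaussian)."""
            x = np.asarray(x, dtype=float)
            z = (x - x.mean()) / x.std()
            return np.mean(z**4) - 3.0

        rng = np.random.default_rng(0)
        gauss = rng.normal(size=100_000)    # purely thermal-like fluctuations
        heavy = rng.laplace(size=100_000)   # heavier-tailed, "active" stand-in
        print(excess_kurtosis(gauss))       # ~ 0
        print(excess_kurtosis(heavy))       # ~ 3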

  12. Resveratrol induces ordered domains formation in biomembranes: Implication for its pleiotropic action.

    PubMed

    Neves, Ana Rute; Nunes, Cláudia; Reis, Salette

    2016-01-01

    Resveratrol is a polyphenol compound of great value in cancer therapy, cardiovascular protection, and neurodegenerative disorders. The mechanism by which resveratrol exerts such pleiotropic effects is not yet clear, and there is a great need to understand the influence of this compound on the regulation of lipid domain formation in the membrane structure. The aim of the present study was to reveal potential molecular interactions between resveratrol and the lipid rafts found in cell membranes by means of Förster resonance energy transfer, DPH fluorescence quenching, and a Triton X-100 detergent resistance assay. Liposomes composed of egg phosphatidylcholine, cholesterol, and sphingomyelin were used as model membranes. The results revealed that resveratrol induces phase separation and the formation of liquid-ordered domains in bilayer structures. The formation of such tightly packed lipid rafts is important for different signal transduction pathways, through the regulation of membrane-associated proteins, which can justify several pharmacological activities of this compound. PMID:26456556
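
    The Förster resonance energy transfer readout used here rests on the steep distance dependence of the transfer efficiency; in standard textbook notation (a general relation, not specific to this study),

        E = \frac{1}{1 + (r/R_0)^6},

    where r is the donor-acceptor distance and R_0 the Förster radius (typically a few nanometres), so redistribution of probes between ordered and disordered domains changes E sharply.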

  13. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed Central

    du Plessis, Louis; Leventhal, Gabriel E.; Bonhoeffer, Sebastian

    2016-01-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations. PMID:27189564
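
    A regression model "accounting for single and pairwise mutations" expresses fitness as a sum of main effects plus epistatic pair terms fitted by least squares; a minimal sketch on binary genotypes (an illustrative setup with synthetic data, not the paper's exact pipeline) is:

        import numpy as np
        from itertools import combinations

        def design_matrix(genotypes):
            """Columns: intercept, single-site effects, pairwise (epistatic) terms."""
            g = np.asarray(genotypes, dtype=float)
            n, L = g.shape
            pairs = [g[:, i] * g[:, j] for i, j in combinations(range(L), 2)]
            return np.column_stack([np.ones(n), g] + pairs)

        rng = np.random.default_rng(0)
        G = rng.integers(0, 2, size=(500, 6))   # sparse sample of genotypes
        fitness = G.sum(axis=1) - 0.5 * G[:, 0] * G[:, 1] + rng.normal(0, 0.1, 500)

        X = design_matrix(G)
        coef, *_ = np.linalg.lstsq(X, fitness, rcond=None)
        print(coef[:7])   # intercept and the six single-site effects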

  14. Solvent-Dependent Pyranopterin Cyclization in Molybdenum Cofactor Model Complexes.

    PubMed

    Williams, Benjamin R; Gisewhite, Douglas; Kalinsky, Anna; Esmail, Alisha; Burgmayer, Sharon J Nieter

    2015-09-01

    The conserved pterin dithiolene ligand that coordinates molybdenum (Mo) in the cofactor (Moco) of mononuclear Mo enzymes can exist both in a tricyclic pyranopterin dithiolene form and in a bicyclic pterin-dithiolene form, as observed in protein crystal structures of several bacterial molybdoenzymes. Interconversion between the tricyclic and bicyclic forms via pyran scission and cyclization has been hypothesized to play a role in the catalytic mechanism of Moco. Therefore, understanding the interconversion between the tricyclic and bicyclic forms, a type of ring-chain tautomerism, is an important aspect of study to understand its role in catalysis. In this study, equilibrium constants (K(eq)) as well as enthalpy, entropy, and free energy values are obtained for the pyran ring tautomerism exhibited by two Moco model complexes, namely, (Et4N)[Tp*Mo(O)(S2BMOPP)] (1) and (Et4N)[Tp*Mo(O)(S2PEOPP)] (2), as a solvent-dependent equilibrium process. K(eq) values obtained from (1)H NMR data in seven deuterated solvents show a correlation between solvent polarity and tautomer form, where solvents with higher polarity parameters favor the pyran form.
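
    The thermodynamic quantities reported here are linked by standard relations: a van't Hoff fit of temperature-dependent equilibrium constants yields the enthalpy and entropy, and hence the free energy. A small sketch of that generic analysis (the K(eq) values below are invented placeholders, not the paper's data):

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        # Hypothetical K(eq) values at several temperatures (illustrative).
        T = np.array([278.0, 288.0, 298.0, 308.0, 318.0])
        K = np.array([2.10, 1.60, 1.25, 1.00, 0.82])

        # van't Hoff: ln K = -dH/(R T) + dS/R, i.e. linear in 1/T.
        slope, intercept = np.polyfit(1.0 / T, np.log(K), 1)
        dH = -R * slope            # enthalpy (J/mol)
        dS = R * intercept         # entropy (J/(mol K))
        dG_298 = dH - 298.0 * dS   # free energy at 298 K

        print(dH, dS, dG_298)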

  15. Complex network model of the Treatise on Cold Damage Disorders

    NASA Astrophysics Data System (ADS)

    Shao, Feng-jing; Sui, Yi; Zhou, Yong-hong; Sun, Ren-cheng

    2016-10-01

    Investigating the underlying principles of the Treatise on Cold Damage Disorders is meaningful and interesting. In this study, we investigated the symptoms, herbal formulae, herbal drugs, and their relationships in this treatise based on a multi-subnet composited complex network model (MCCN). Syndrome subnets were constructed for the symptoms and a formula subnet for the herbal drugs. By subnet compounding using the MCCN, a composited network was obtained that describes the treatment relationships between syndromes and formulae. The results obtained by topological analysis suggested some prescription laws that could be validated in clinics. After subnet reduction using the MCCN, six channel (Tai-yang, Yang-ming, Shao-yang, Tai-yin, Shao-yin, and Jue-yin) subnets were obtained. By analyzing the strengths of the relationships among these six channel subnets, we found that the Tai-yang and Yang-ming channels were most strongly related to each other, and we found symptoms that implied pathogen movements and transformations among the six channels. This study could help therapists to obtain a deeper understanding of this ancient treatise.
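
    The subnet-compounding idea, relating a symptom subnet to a formula subnet through treatment links, can be miniaturized as a bipartite graph whose weighted projection shows which formulae share symptoms; a toy sketch with invented node names (not the treatise's actual data) using networkx:

        import networkx as nx
        from networkx.algorithms import bipartite

        # Toy bipartite network: symptoms on one side, formulae on the other.
        B = nx.Graph()
        symptoms = ["fever", "chills", "sweating"]
        formulae = ["formula_A", "formula_B"]
        B.add_nodes_from(symptoms, bipartite=0)
        B.add_nodes_from(formulae, bipartite=1)
        B.add_edges_from([("fever", "formula_A"), ("chills", "formula_A"),
                          ("fever", "formula_B"), ("sweating", "formula_B")])

        # Project onto formulae: edge weight = number of shared symptoms.
        P = bipartite.weighted_projected_graph(B, formulae)
        print(P.edges(data=True))  # formula_A and formula_B share 1 symptom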

  16. How Good Are Statistical Models at Approximating Complex Fitness Landscapes?

    PubMed

    du Plessis, Louis; Leventhal, Gabriel E; Bonhoeffer, Sebastian

    2016-09-01

    Fitness landscapes determine the course of adaptation by constraining and shaping evolutionary trajectories. Knowledge of the structure of a fitness landscape can thus predict evolutionary outcomes. Empirical fitness landscapes, however, have so far only offered limited insight into real-world questions, as the high dimensionality of sequence spaces makes it impossible to exhaustively measure the fitness of all variants of biologically meaningful sequences. We must therefore revert to statistical descriptions of fitness landscapes that are based on a sparse sample of fitness measurements. It remains unclear, however, how much data are required for such statistical descriptions to be useful. Here, we assess the ability of regression models accounting for single and pairwise mutations to correctly approximate a complex quasi-empirical fitness landscape. We compare approximations based on various sampling regimes of an RNA landscape and find that the sampling regime strongly influences the quality of the regression. On the one hand it is generally impossible to generate sufficient samples to achieve a good approximation of the complete fitness landscape, and on the other hand systematic sampling schemes can only provide a good description of the immediate neighborhood of a sequence of interest. Nevertheless, we obtain a remarkably good and unbiased fit to the local landscape when using sequences from a population that has evolved under strong selection. Thus, current statistical methods can provide a good approximation to the landscape of naturally evolving populations.

  17. Effects of cyclosporine A on biomembranes. Vibrational spectroscopic, calorimetric and hemolysis studies.

    PubMed

    O'Leary, T J; Ross, P D; Lieber, M R; Levin, I W

    1986-04-01

    Cyclosporine A (CSA)-dipalmitoylphosphatidylcholine (DPPC) interactions were investigated using scanning calorimetry, infrared spectroscopy, and Raman spectroscopy. CSA reduced both the temperature and the maximum heat capacity of the lipid bilayer gel-to-liquid crystalline phase transition; the relationship between the shift in transition temperature and CSA concentration indicates that the peptide does not partition ideally between DPPC gel and liquid crystalline phases. This nonideality can be accounted for by excluded volume interactions between peptide molecules. CSA exhibited a similar but much more pronounced effect on the pretransition; at concentrations of 1 mol % CSA the amplitude of the pretransition was less than 20% of its value in the pure lipid. Raman spectroscopy confirmed that the effects of CSA on the phase transitions are not accompanied by major structural alterations in either the lipid headgroup or acyl chain regions at temperatures away from the phase changes. Both infrared and Raman spectroscopic results demonstrated that CSA in the lipid bilayer exists largely in a beta-turn conformation, as expected from single crystal x-ray data; the lipid phase transition does not induce structural alterations in CSA. Although the polypeptide significantly affects DPPC model membrane bilayers, CSA neither inhibited hypotonic hemolysis nor caused erythrocyte hemolysis, in contrast to many chemical agents that are believed to act through membrane-mediated pathways. Thus, agents, such as CSA, that perturb phospholipid phase transitions do not necessarily cause functional changes in cell membranes.

  18. Hexagonal, square, and stripe patterns of the ion channel density in biomembranes

    NASA Astrophysics Data System (ADS)

    Hilt, Markus; Zimmermann, Walter

    2007-01-01

    Transmembrane ion flow through channel proteins undergoing density fluctuations may cause lateral gradients of the electrical potential across the membrane, giving rise to electrophoresis of charged channels. A model for the dynamics of the channel density and the voltage drop across the membrane (cable equation), coupled to a binding-release reaction with the cell skeleton [P. Fromherz and W. Zimmermann, Phys. Rev. E 51, R1659 (1995)], is analyzed in one and two spatial dimensions. Due to the binding-release reaction, spatially periodic modulations of the channel density with a finite wave number are favored at the onset of pattern formation, whereby the wave number decreases with the kinetic rate of the binding-release reaction. In a two-dimensionally extended membrane, hexagonal modulations of the ion channel density are preferred over a large range of parameters. The stability diagrams of the periodic patterns near threshold are calculated, and in addition the equations of motion in the limit of slow binding-release kinetics are derived.

  19. Artificial biomembrane based on DPPC - Investigation into phase transition and thermal behavior through ellipsometric techniques.

    PubMed

    González, Carmen M; Pizarro-Guerra, Guadalupe; Droguett, Felipe; Sarabia, Mauricio

    2015-10-01

    Organic thin film deposition presents a multiplicity of challenges. Most notably, layer thickness control, homogeneity, and subsequent characterization have not yet been fully resolved. Phospholipid bilayers are frequently used to model cell membranes. Bilayers can be disrupted by changes in mechanical stress, pH, and temperature. The strategy presented in this article is based on a thermal study of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) through analysis of slight changes in material thickness. The sample was prepared by depositing X- or Y-type DPPC bilayers onto a silicon wafer using the Langmuir-Blodgett technique. The molecular inclination degree, the mobility and stability of phases, and their respective phase transitions were observed and analyzed through ellipsometric techniques during heating cycles and corroborated by Grazing Incidence X-ray Diffraction and Atomic Force Microscopy measurements. DPPC functional group vibrations were detected by Raman spectral analysis. Field Emission Scanning Electron Microscopy (FE-SEM) and conventional SEM micrographs were also used to characterize sample morphology, demonstrating that homogeneous bilayer formations coexist with some vesicles or micelles at the surface. Contact angle measurements corroborate the surface wettability of DPPC, which is mainly related to the surface treatment used to render the silicon wafer substrate either hydrophilic or hydrophobic. Also, shifts and intensity changes of certain functional groups in the Raman spectra confirm the presence of water between DPPC layers. Signal analysis detects some interdigitation between aliphatic chains. These studies form the basis of future biosensors based on proteins or antimicrobial peptides stabilized in phospholipid bilayers over thin hydrogel films as a moist scaffold. PMID:26150275

  20. Artificial biomembrane based on DPPC - Investigation into phase transition and thermal behavior through ellipsometric techniques.

    PubMed

    González, Carmen M; Pizarro-Guerra, Guadalupe; Droguett, Felipe; Sarabia, Mauricio

    2015-10-01

    Organic thin film deposition presents a multiplicity of challenges. Most notably, layer thickness control, homogeneity, and subsequent characterization have not yet been fully resolved. Phospholipid bilayers are frequently used to model cell membranes. Bilayers can be disrupted by changes in mechanical stress, pH, and temperature. The strategy presented in this article is based on a thermal study of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine (DPPC) through analysis of slight changes in material thickness. The sample was prepared by depositing X- or Y-type DPPC bilayers onto a silicon wafer using the Langmuir-Blodgett technique. The molecular inclination degree, the mobility and stability of phases, and their respective phase transitions were observed and analyzed through ellipsometric techniques during heating cycles and corroborated by Grazing Incidence X-ray Diffraction and Atomic Force Microscopy measurements. DPPC functional group vibrations were detected by Raman spectral analysis. Field Emission Scanning Electron Microscopy (FE-SEM) and conventional SEM micrographs were also used to characterize sample morphology, demonstrating that homogeneous bilayer formations coexist with some vesicles or micelles at the surface. Contact angle measurements corroborate the surface wettability of DPPC, which is mainly related to the surface treatment used to render the silicon wafer substrate either hydrophilic or hydrophobic. Also, shifts and intensity changes of certain functional groups in the Raman spectra confirm the presence of water between DPPC layers. Signal analysis detects some interdigitation between aliphatic chains. These studies form the basis of future biosensors based on proteins or antimicrobial peptides stabilized in phospholipid bilayers over thin hydrogel films as a moist scaffold.

  1. Characteristics of halorhodopsin-bacterioruberin complex from Natronomonas pharaonis membrane in the solubilized system.

    PubMed

    Sasaki, Takanori; Razak, Nur Wahida Abdul; Kato, Noritaka; Mukai, Yuri

    2012-04-01

    Halorhodopsin is a retinal protein with seven transmembrane helices that acts as an inward light-driven Cl(-) pump. In this study, the structural state in nonionic detergent of halorhodopsin (NpHR) solubilized from the biomembrane of mutant strain KM-1 of Natronomonas pharaonis was investigated. Gel filtration chromatography monitoring absorbances at 280 and 504 nm, corresponding to the protein and the lipid-soluble pigment bacterioruberin (BR) respectively, clearly detected oligomer formation of the NpHRs and complex formation between NpHR and BR in the solubilized system. The molar ratio of NpHR:BR in the solubilized complex was close to 1:1. Further SDS-PAGE analysis of the solubilized NpHR cross-linked by 1% glutaraldehyde revealed that NpHR forms a homotrimer in the detergent system. Although this trimeric structure was stable in the presence of NaCl, it dissociated into monomers upon heat treatment at 45 °C under desalted conditions. The same tendency has been reported for trimeric NpHR expressed heterologously on the E. coli membrane, leading to the conclusion that the dependence of the strength of the trimeric association on ion binding is a universal feature of NpHR. Interestingly, trimer dissociation of NpHR was accompanied by complete dissociation of the BR molecule from the protein, indicating that the cavity formed by the NpHR protomers in the trimeric conformation is important for tight binding of BR. Because the binding affinity for Cl(-) and the resistance to hydroxylamine under light illumination showed only minor differences between NpHR in the solubilized state and on the biomembrane, the influence of solubilization on the tertiary structure and function of the protein is thought to be minor. This NpHR-BR complex in the solubilized system has the potential to be a good model system for investigating the intermolecular interaction between a membrane protein and a lipid. PMID:22369627

  2. Capillary interactions between particles bound to interfaces, liquid films and biomembranes.

    PubMed

    Kralchevsky, P A; Nagayama, K

    2000-03-31

    the meniscus area, gravitational energy and/or energy of wetting. The second approach is based on calculating the net force exerted on the particle, which can originate from the hydrostatic pressure, interfacial tension and bending moment. In the case of small perturbations, the superposition approximation can be used to derive an asymptotic formula for the capillary forces, which has been found to agree well with the experiment. Capillary interactions between particles bound to spherical interfaces are also considered taking into account the special geometry and restricted area of such phase boundaries. A similar approach can be applied to quantify the forces between inclusions (transmembrane proteins) in lipid membranes. The deformations in a lipid membrane, due to the inclusions, can be described theoretically in the framework of a mechanical model of the lipid bilayer, which accounts for its 'hybrid' rheology (neither elastic body nor fluid). In all considered cases the lateral capillary interaction originates from the overlap of interfacial deformations and is subject to a unified theoretical treatment, despite the fact that the characteristic particle size can vary from 1 cm down to 1 nm.
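
    For small deformations, the superposition approximation mentioned above gives the classic asymptotic forms for the lateral capillary interaction between two particles at separation L (standard results of this literature; sign and prefactor conventions vary between treatments):

        \Delta W(L) \approx -2\pi\sigma Q_1 Q_2 K_0(qL), \qquad
        F(L) \approx -2\pi\sigma Q_1 Q_2 \, q K_1(qL),

    where \sigma is the interfacial tension, Q_i = r_i \sin\psi_i are the "capillary charges" (contact-line radius r_i, meniscus slope angle \psi_i), q^{-1} is the capillary length, and K_0, K_1 are modified Bessel functions of the second kind.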

  3. Using SysML to model complex systems for security.

    SciTech Connect

    Cano, Lester Arturo

    2010-08-01

    As security systems integrate more Information Technology, the design of these systems has tended to become more complex. Some of the most difficult issues in designing Complex Security Systems (CSS) are: capturing requirements; defining hardware interfaces; defining software interfaces; and integrating technologies such as radio systems, Voice over IP systems, and situational awareness systems.

  4. Probabilistic Multi-Factor Interaction Model for Complex Material Behavior

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2010-01-01

    Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent. For example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation represent the physics of the behavior in its entirety and accurately. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor equation has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection. It also accommodates all interactions by its product form. Each factor has an exponent that satisfies only two points, the initial and final points. The exponent describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation. A unique solution to this problem is a multi-factor equation of product form. Each factor is of the form (1 - x_i/x_f)^(e_i), where x_i is the initial value, usually at ambient conditions, x_f the final value, and e_i the exponent that shapes the monotonic curve so that it meets the initial and final values. The exponents are either evaluated from test data or by technical judgment. A minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions. Seven factors were required
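
    The product form described here multiplies one bounded factor per physical effect; a minimal sketch of evaluating such a product (the factor values and exponents below are invented placeholders, not NASA's calibrated inputs):

        def mfim(factors):
            """Multi-factor interaction model of product form.

            Each factor contributes (1 - x/x_f)^e, following the form quoted
            in the abstract; x, x_f, e here are placeholders.
            """
            result = 1.0
            for x, x_f, e in factors:
                result *= (1.0 - x / x_f) ** e
            return result

        # (current value, final value, exponent), e.g. temperature, moisture, load.
        factors = [(300.0, 1500.0, 0.5), (0.2, 1.0, 1.0), (50.0, 200.0, 2.0)]
        print(mfim(factors))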

  5. BDO-RFQ Program Complex of Modelling and Optimization of Charged Particle Dynamics

    NASA Astrophysics Data System (ADS)

    Ovsyannikov, D. A.; Ovsyannikov, A. D.; Antropov, I. V.; Kozynchenko, V. A.

    2016-09-01

    The article is dedicated to the BDO Code program complex, which is used for modelling and optimization of charged particle dynamics in RFQ accelerating structures with particle interactions taken into account. The structure of the program complex and its functionality are described; mathematical models of charged particle dynamics, interaction models, and methods of optimization are given.

  6. 40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... emission model is deemed valid. (b) To augment the complex emission model described at § 80.45, the... refueling VOC and toxics emissions) shall not be augmented by vehicle testing. (4) The Agency reserves the... petitions to augment the complex model defined at § 80.45 with a new parameter, the effect of the...

  7. Entropic force between biomembranes

    NASA Astrophysics Data System (ADS)

    Li, Long; Song, Fan

    2016-08-01

    The undulation force, an entropic force, stems from thermally excited fluctuations and plays a key role in the essential interactions between neighboring surfaces of objects. Although the characteristics of the undulation force have been widely studied theoretically and experimentally, the distance dependence of the force, which constitutes its most fundamental characteristic, remains poorly understood. In this paper, first, we obtain a novel expression for the undulation force by employing elasticity and statistical mechanics and show it to be in good agreement with existing experimental results. Second, we demonstrate that the two representative forms of the undulation force proposed by Helfrich and Freund are, respectively, the upper and lower bounds of the present expression when the separation between membranes is sufficiently small; this differs intrinsically from existing results, in which Helfrich's and Freund's forms were held to be suitable only for intermediate and small separations, respectively. The investigation shows that only at sufficiently small separations does Helfrich's result describe the undulation force at large wave numbers and Freund's result the force at small wave numbers. Finally, a critical acting distance of the undulation force, beyond which the entropic force rapidly decays toward zero, is presented.
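
    For orientation, the fluctuation-induced repulsion between two membranes at mean separation d is usually written in the Helfrich scaling form (the dimensionless prefactor c is precisely what competing derivations, including those discussed here, disagree on):

        V(d) = c \, \frac{(k_B T)^2}{\kappa d^2}, \qquad
        p(d) = -\frac{\partial V}{\partial d} = 2c \, \frac{(k_B T)^2}{\kappa d^3},

    where \kappa is the membrane bending modulus; analytic and numerical estimates of c differ by roughly a factor of two.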

  8. Fisher Information and Complexity Measure of Generalized Morse Potential Model

    NASA Astrophysics Data System (ADS)

    Onate, C. A.; Idiodi, J. O. A.

    2016-09-01

    The spreading of the quantum-mechanical probability density of a three-dimensional system is quantitatively determined by means of the local information-theoretic quantities, the Shannon information and the information energy, in both position and momentum spaces. The complexity measure, which is equivalent to the Cramér-Rao uncertainty product, is determined. We have obtained the stored information content, the concentration of the quantum system, and the complexity measure numerically for n = 0, 1, 2, and 3, respectively.

  9. Application of surface complexation models to anion adsorption by natural materials

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...

  10. Towards a complex system understanding of bipolar disorder: A map based model of a complex winnerless competition.

    PubMed

    Hadaeghi, Fatemeh; Hashemi Golpayegani, Mohammad Reza; Murray, Greg

    2015-07-01

    Bipolar disorder is characterized by repeated erratic episodes of mania and depression, which can be understood as pathological complex-system behavior involving cognitive, affective, and psychomotor disturbance. In order to illuminate dynamical aspects of the longitudinal course of the illness, we propose here a novel complex model based on the notion of competition between recurrent maps, which mathematically represent the dynamics of activation in excitatory (glutamatergic) and inhibitory (GABAergic) pathways. We assume that manic and depressive states can be considered stable sub-attractors of a dynamical system through which the mood trajectory moves. The model provides a theoretical framework that can account for a number of complex phenomena of bipolar disorder, including intermittent transitions between the two poles of the disorder, rapid and ultra-rapid cycling of episodes, and the manicogenic effects of antidepressants.

  11. Complex terrain dispersion model plus algorithms for unstable situations (CTDMPLUS) (for microcomputers) (re-announcement). Model-Simulation

    SciTech Connect

    1990-12-31

    The Complex Terrain Dispersion Model Plus (CTDMPLUS) is a refined air quality model for use in all stability conditions for complex terrain applications. It contains the technology of the original Complex Terrain Dispersion Model (CTDM) for stable and neutral conditions, but also models daytime, unstable conditions. The model makes use of considerable detail in the terrain and meteorological data (as compared to current EPA regulatory models) and requires the parameterization of individual terrain features, thus considering the three-dimensional nature of the interaction of the plume and terrain.

  12. Study of protein complexes via homology modeling, applied to cysteine proteases and their protein inhibitors.

    PubMed

    Tastan Bishop, Ozlem; Kroon, Matthys

    2011-12-01

    This paper develops and evaluates large-scale calculation of 3D structures of protein complexes by homology modeling as a promising new approach to protein docking. The complexes investigated were papain-like cysteine proteases and their protein inhibitors, which play numerous roles in human and parasitic metabolisms. The structural modeling was performed in two parts. For the first part (the evaluation set), nine crystal-structure complexes were selected, and 1325 homology models of the known complexes were rebuilt from various templates, including hybrids, allowing an analysis of the factors influencing the accuracy of the models. The important considerations for modeling the interface were protease coverage and inhibitor sequence identity. In the second part (the study set), the findings from the evaluation set were used to select appropriate templates to model novel cysteine protease-inhibitor complexes from human and the malaria parasites Plasmodium falciparum and Plasmodium vivax. The energy scores, judged against the evaluation set, indicate that the models are of high accuracy. PMID:21365221

  13. Modelling complex organic molecules in dense regions: Eley-Rideal and complex induced reaction

    NASA Astrophysics Data System (ADS)

    Ruaud, M.; Loison, J. C.; Hickson, K. M.; Gratier, P.; Hersant, F.; Wakelam, V.

    2015-03-01

    Recent observations have revealed the existence of complex organic molecules (COMs) in cold dense cores and pre-stellar cores. The presence of these molecules under such cold conditions is not well understood and remains a matter of debate, since the previously proposed 'warm-up' scenario cannot explain these observations. In this paper, we study the effect of Eley-Rideal and complex-induced reaction mechanisms of gas-phase carbon atoms with the main ice components of dust grains on the formation of COMs in cold and dense regions. Based on recent experiments, we use a low value for the chemical desorption efficiency (which was previously invoked to explain the observed COM abundances). We show that the introduced mechanisms are efficient enough to produce a large amount of COMs in the gas phase at temperatures as low as 10 K.

  14. [Model studies on the transport processes of anticancer platinum complexes].

    PubMed

    Nagy, Z; Fábián, I; Sóvágó, I

    2000-01-01

    Potentiometric, calorimetric, NMR, and stopped-flow kinetic studies were performed on palladium(II) complexes of thioether and/or nitrogen donor ligands. The ternary systems always contained a tridentate ligand (dien, dipic, terpy, or the dianions of the dipeptides GlyGly, GlyAla, and GlyMet) and a monodentate thioether (AcMet). The stability constants of the thioether complexes were obtained by indirect potentiometric measurements using uridine as a competitive ligand. The thermodynamic parameters revealed that the selectivity of palladium(II) for thioether binding can be significantly influenced by the other donor atoms around the metal ion. [Pd(terpy)]2+, [Pd(dipic)]2+ and [Pd(GlyMet)] had the lowest affinity for thioether binding, which was explained by steric and electronic effects. The ternary complexes of nitrogen donors have higher thermodynamic stability constants than the thioether complexes, but the rate constants of the substitution reactions revealed that the formation of thioether complexes is the faster reaction. As a consequence, the thermodynamic equilibrium state of a multicomponent system is characterized by the coordination of N-donors, which form via thioether-bonded intermediates. PMID:11379028

  15. A note on the Dirichlet problem for model complex partial differential equations

    NASA Astrophysics Data System (ADS)

    Ashyralyev, Allaberen; Karaca, Bahriye

    2016-08-01

    Complex model partial differential equations of arbitrary order are considered. The uniqueness of the Dirichlet problem is studied. It is proved that the Dirichlet problem for higher-order complex partial differential equations in one complex variable has infinitely many solutions.

  16. The model Lysozyme-PSSNa system for electrostatic complexation: Similarities and differences with complex coacervation.

    PubMed

    Cousin, F; Gummel, J; Combet, S; Boué, F

    2011-09-14

    We review, based on structural information, the mechanisms involved when two nano-objects of opposite electrical charge are brought into contact, in the case of a negatively charged polyion and a compact, positively charged one. The central case is mixtures of PSS, a strong flexible polyanion (the salt of a strong acid, with high linear charge density), and lysozyme, a globular protein with a global positive charge. A wide, accurate, and consistent set of information in different situations is available on the structure at local scales (5-1000 Å), owing to the possibility of matching, the reproducibility of the system, its well-defined electrostatic features, and the well-defined structures obtained. We have related these structures to the observations at macroscopic scale of the phase behavior and to the expected mechanisms of coacervation. On the one hand, PSS/lysozyme mixtures show accurately much of what is expected in PEL/protein complexation and phase separation, as reviewed by de Kruif: under certain conditions some well-defined complexes are formed before any phase separation, and they are close to neutral; even in excess of one species, the complexes are only modestly charged (surface charges in PEL excess). The neutral cores attract each other to form larger objects responsible for the large turbidity. These should lead the system to phase separation; this is observed in the more dilute samples, while in the more concentrated ones the lack of separation in turbid samples is explained by locking effects between fractal aggregates. On the other hand, although some of the features just listed are also required for coacervation, this phase transition is not really obtained. The phase separation has all the macroscopic aspects of a fluid (undifferentiated liquid/gas) - solid transition, not of a fluid-fluid (liquid-liquid) one, which would correspond to real coacervation. The origin of this can be found in the interaction potential between the primary complexes formed (globules

  17. Complex-Energy Shell-Model Description of Alpha Decay

    SciTech Connect

    Id Betan, R.; Nazarewicz, Witold

    2011-01-01

    In his pioneering work on alpha decay, Gamow assumed that the alpha particle formed inside the nucleus tunnels through the barrier of the alpha-daughter potential. The corresponding metastable state can be viewed as a complex-energy solution of the time-independent Schroedinger equation with an outgoing boundary condition. The formation of the alpha cluster, missing in the original Gamow formulation, can be described within R-matrix theory in terms of the formation amplitude. In this work, the alpha decay process is described by computing the formation amplitude and barrier penetrability in a large complex-energy configuration space spanned by the complex-energy eigenstates of a finite Woods-Saxon (WS) potential. The proper normalization of the decay channel is essential, as it strongly modifies the alpha-decay spectroscopic factor. Test calculations are carried out for the alpha decay of 212Po.

  18. Interaction of O-acylated chitosans with biomembrane models: probing the effects from hydrophobic interactions and hydrogen bonding.

    PubMed

    Pavinatto, Adriana; Souza, Adriano L; Delezuk, Jorge A M; Pavinatto, Felippe J; Campana-Filho, Sérgio P; Oliveira, Osvaldo N

    2014-02-01

    One of the major challenges in establishing the mechanisms responsible for the chitosan action in biomedical applications lies in the determination of the molecular-level interactions with the cell membrane. In this study, we probed hydrophobic interactions and H-bonding in experiments with O,O'-diacetylchitosan (DACT) and O,O'-dipropionylchitosan (DPPCT) incorporated into monolayers of distinct phospholipids, the zwitterionic dipalmitoyl phosphatidyl choline (DPPC), and the negatively charged dipalmitoyl phosphatidyl glycerol (DPPG) and dimyristoyl phosphatidic acid (DMPA). The importance of hydrophobic interactions was confirmed with the larger effects observed for DACT and DPPCT than for parent chitosan (Chi), particularly for the more hydrophobic DPPCT. Such larger effects were noted in surface pressure isotherms and elasticity of the monolayers. Since H-bonding is hampered for the chitosan derivatives, which have part of their hydroxyl groups shielded by O-acylation, these effects indicate that H-bonding does not play an important role in the chitosan-membrane interactions. Using polarization-modulated infrared reflection absorption (PM-IRRAS) spectroscopy, we found that the chitosan derivatives were incorporated into the hydrophobic chain of the phospholipids, even at high surface pressures comparable to those in a real cell membrane. Taken together, these results indicate that the chitosan derivatives containing hydrophobic moieties would probably be more efficient than parent chitosan as antimicrobial agents, where interaction with the cell membrane is crucial.

  19. Comparison of complex and parsimonious model structures by means of a modular hydrological model concept

    NASA Astrophysics Data System (ADS)

    Holzmann, Hubert; Massmann, Carolina

    2015-04-01

    A multitude of hydrological model types has been developed during the past decades. Most of them use a fixed design to describe the variable hydrological processes, assumed to be representative over the whole range of spatial and temporal scales. This assumption is questionable, as it is evident that the runoff formation process is driven by dominant processes which can vary among different basins. Furthermore, model application and the interpretation of results are limited by the data available to identify the particular sub-processes, since most models are calibrated and validated only with discharge data. It can therefore be hypothesized that simpler model designs, focusing only on the dominant processes, can achieve comparable results with the benefit of fewer parameters. In the current contribution a modular model concept is introduced which allows hydrological sub-processes to be included or neglected depending on the catchment characteristics and data availability. Key elements of the process modules refer to (1) storage effects (interception, soil), (2) transfer processes (routing), (3) threshold processes (percolation, saturation overland flow), and (4) split processes (rainfall excess). Based on hydro-meteorological observations in an experimental catchment in the Slovak region of the Carpathian mountains, a comparison of several model realizations with different degrees of complexity is discussed. A special focus is given to model parameter sensitivity, estimated by a Markov chain Monte Carlo approach. Furthermore, the identification of dominant processes by means of Sobol's method is introduced. It could be shown that a flexible model design, and even the simple concept, can reach performance comparable and equivalent to the standard model type (HBV-type). The main benefit of the modular concept is the individual adaptation of the model structure with respect to data and process availability and the option of parsimonious model design.
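
    Sobol's method, used here to identify dominant processes, decomposes output variance into contributions from each parameter; a minimal sketch with the SALib package on a stand-in response function (the parameter names and toy model are placeholders for the hydrological modules):

        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Placeholder 3-parameter problem; a real study would run the
        # modular hydrological model for each sampled parameter set.
        problem = {
            "num_vars": 3,
            "names": ["storage", "routing", "threshold"],
            "bounds": [[0.0, 1.0]] * 3,
        }

        X = saltelli.sample(problem, 1024)                # Sobol sample
        Y = X[:, 0] + 2.0 * X[:, 1] + X[:, 0] * X[:, 2]   # toy response
        Si = sobol.analyze(problem, Y)
        print(Si["S1"])  # first-order indices: variance share per parameter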

  20. Molecular Models of Ruthenium(II) Organometallic Complexes

    ERIC Educational Resources Information Center

    Coleman, William F.

    2007-01-01

    This article presents the featured molecules for the month of March, which appear in the paper by Ozerov, Fafard, and Hoffman, and which are related to the study of the reactions of a number of "piano stool" complexes of ruthenium(II). The synthesis of compound 2a offers students an alternative to the preparation of ferrocene if they are only…

  1. Can Models Capture the Complexity of the Systems Engineering Process?

    NASA Astrophysics Data System (ADS)

    Boppana, Krishna; Chow, Sam; de Weck, Olivier L.; Lafon, Christian; Lekkakos, Spyridon D.; Lyneis, James; Rinaldi, Matthew; Wang, Zhiyong; Wheeler, Paul; Zborovskiy, Marat; Wojcik, Leonard A.

    Many large-scale, complex systems engineering (SE) programs have been problematic; a few examples are listed below (Bar-Yam, 2003 and Cullen, 2004), and many others have been late, well over budget, or have failed: the Hilton/Marriott/American Airlines system for hotel reservations and flights, 1988-1992, $125 million, "scrapped"

  2. Modelling Second Language Performance: Integrating Complexity, Accuracy, Fluency, and Lexis

    ERIC Educational Resources Information Center

    Skehan, Peter

    2009-01-01

    Complexity, accuracy, and fluency have proved useful measures of second language performance. The present article will re-examine these measures themselves, arguing that fluency needs to be rethought if it is to be measured effectively, and that the three general measures need to be supplemented by measures of lexical use. Building upon this…

  3. Modeling Cognitive Strategies during Complex Task Performing Process

    ERIC Educational Resources Information Center

    Mazman, Sacide Guzin; Altun, Arif

    2012-01-01

    The purpose of this study is to examine individuals' computer-based complex task performance processes and strategies in order to determine the reasons for failure, using the cognitive task analysis method and cued retrospective think-aloud with eye movement data. The study group was five senior students from Computer Education and Instructional Technologies…

  4. Visualizing and modelling complex rockfall slopes using game-engine hosted models

    NASA Astrophysics Data System (ADS)

    Ondercin, Matthew; Hutchinson, D. Jean; Harrap, Rob

    2015-04-01

    Innovations in computing over the past few decades have resulted in entirely new ways to collect and visualize 3D geological data. For example, new tools and techniques relying on high-performance computing capabilities have become widely available, allowing us to model rockfalls with more attention to the complexity of the rock slope geometry and rockfall path, with significantly higher quality base data, and with more analytical options. Model results are used to design mitigation solutions, considering the potential paths of rockfall events and the energy they impart on impacted structures. Such models are currently implemented as general-purpose GIS tools and as specialized programs. These tools are used to inspect geometrical and geomechanical data, model rockfalls, and communicate results to researchers and the larger community. The research reported here explores the notion that 3D game engines provide a high-speed, widely accessible platform on which to build rockfall modelling workflows and a new and accessible outreach method. Taking advantage of the built-in physics capability of 3D game codes and their ability to handle large terrains, these models are rapidly deployed and generate realistic visualizations of rockfall trajectories. Their utility in this area is as yet unproven, but preliminary research shows that they are capable of producing results comparable to existing approaches. Furthermore, modelling of case histories shows that the output matches the behaviour observed in the field. The key advantage of game-engine hosted models is their accessibility to the general public and to people with little to no knowledge of rockfall hazards. With much of the younger generation being very familiar with 3D environments such as Minecraft, the idea of a game-like simulation is intuitive and thus offers new ways to communicate with the general public. We present results from using the Unity game engine to develop 3D voxel worlds

  5. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    NASA Astrophysics Data System (ADS)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e., road, walls, and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets, but to date no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability, and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us of ways in which we can improve the modelling of air quality within, and the climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run

  6. Generalized network structures: The configuration model and the canonical ensemble of simplicial complexes

    NASA Astrophysics Data System (ADS)

    Courtney, Owen T.; Bianconi, Ginestra

    2016-06-01

    Simplicial complexes are generalized network structures able to encode interactions occurring between more than two nodes. Simplicial complexes describe a large variety of complex interacting systems ranging from brain networks to social and collaboration networks. Here we characterize the structure of simplicial complexes using their generalized degrees that capture fundamental properties of one, two, three, or more linked nodes. Moreover, we introduce the configuration model and the canonical ensemble of simplicial complexes, enforcing, respectively, the sequence of generalized degrees of the nodes and the sequence of the expected generalized degrees of the nodes. We evaluate the entropy of these ensembles, finding the asymptotic expression for the number of simplicial complexes in the configuration model. We provide the algorithms for the construction of simplicial complexes belonging to the configuration model and the canonical ensemble of simplicial complexes. We give an expression for the structural cutoff of simplicial complexes that for simplicial complexes of dimension d =1 reduces to the structural cutoff of simple networks. Finally, we provide a numerical analysis of the natural correlations emerging in the configuration model of simplicial complexes without structural cutoff.
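
    The configuration model for simplicial complexes generalizes stub matching: each node receives as many "triangle stubs" as its generalized degree, and stubs are matched in triples. A toy sketch for dimension d = 2 (a simplified illustration; the published algorithm also handles rejection and sampling subtleties) follows.

        import random

        def simplicial_configuration_model(gen_degree, seed=0):
            """Match triangle stubs in triples to build 2-simplices.

            gen_degree[i] = generalized degree of node i (number of triangles
            it belongs to).  Toy version: reshuffle until no triangle repeats
            a node and no triangle appears twice.
            """
            rng = random.Random(seed)
            stubs = [i for i, g in enumerate(gen_degree) for _ in range(g)]
            assert len(stubs) % 3 == 0, "total generalized degree must divide by 3"
            while True:
                rng.shuffle(stubs)
                tris = {tuple(sorted(stubs[k:k + 3]))
                        for k in range(0, len(stubs), 3)}
                if (all(len(set(t)) == 3 for t in tris)
                        and len(tris) == len(stubs) // 3):
                    return sorted(tris)

        print(simplicial_configuration_model([2, 2, 2, 1, 1, 1]))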

  7. Combining intermediate complexity models and seasonal palaeo records: how to deal with model and climate variability?

    NASA Astrophysics Data System (ADS)

    de Boer, H. J.; Dekker, S. C.; Wassen, M. J.

    2009-04-01

    Earth System Models of Intermediate Complexity (EMICs) are popular tools for palaeoclimate simulations. Recent studies have applied these models in comparison with terrestrial proxy records, aiming to reconstruct changes in seasonal climate forced by altered ocean circulation patterns. To strengthen this powerful methodology, we argue that the magnitude of the simulated atmospheric changes should be considered in relation to the internal variability of both the climate system and the intermediate complexity model. To attribute a shift in modelled climate to reality, this 'signal' should be detectable above the 'noise' related to the internal variability of the climate system and the internal variability of the model. Both noise and climate signals vary over the globe and change with the seasons. We therefore argue that spatially explicit fields of noise should be considered in relation to the strengths of the simulated signals at a seasonal timescale. We approximated total noise in terrestrial temperature and precipitation from a 29-member simulation with the EMIC PUMA-2 and from global temperature and precipitation datasets. To illustrate this approach, we calculate signal-to-noise ratios (SNRs) in terrestrial temperature and precipitation for simulations of an El Niño warm event, a phase change in the Atlantic Meridional Oscillation (AMO), and a Heinrich cooling event. The results of the El Niño and AMO simulations indicate that the chance of accurately detecting a climate signal increases with increasing SNRs. Considering the regions and seasons with the highest SNRs, the simulated El Niño anomalies show good agreement with observations (r² = 0.8 and 0.6 for temperature and precipitation at SNRs > 4). The AMO signals rarely surpass the noise levels and remain mostly undetected. The simulation of a Heinrich event predicts the highest SNRs for temperature (up to 10) over Arabia and Russia during boreal winter and spring. The highest SNRs for precipitation (up to 12) are predicted over
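
    The signal-to-noise ratio used here compares a simulated anomaly with the spread of a control ensemble, gridpoint by gridpoint; a compact sketch with random stand-in fields (the array shapes and the detection threshold are illustrative, with the threshold echoing the SNR > 4 value quoted above):

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in data: a 29-member control ensemble and one perturbed run
        # on a (lat, lon) grid; real input would be EMIC output fields.
        ensemble = rng.normal(0.0, 1.0, size=(29, 36, 72))
        perturbed = rng.normal(0.0, 1.0, size=(36, 72)) + 2.5  # imposed signal

        signal = perturbed - ensemble.mean(axis=0)   # anomaly field
        noise = ensemble.std(axis=0, ddof=1)         # internal variability
        snr = np.abs(signal) / noise

        detectable = snr > 4.0
        print(f"fraction of grid detectable: {detectable.mean():.2f}")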

  8. 40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.48... a fuel claims emission reduction benefits from fuel parameters that are not included in the complex emission model or complex emission model database, or if the values of fuel parameters included in...

  9. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Technical Reports Server (NTRS)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  10. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Astrophysics Data System (ADS)

    Befrui, Bizhan A.

    1995-03-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  11. Inference, simulation, modeling, and analysis of complex networks, with special emphasis on complex networks in systems biology

    NASA Astrophysics Data System (ADS)

    Christensen, Claire Petra

Across diverse fields ranging from physics to biology, sociology, and economics, the technological advances of the past decade have engendered an unprecedented explosion of data on highly complex systems with thousands, if not millions, of interacting components. These systems exist at many scales of size and complexity, and it is becoming ever more apparent that they are, in fact, universal, arising in every field of study. Moreover, they share fundamental properties---chief among these, that the individual interactions of their constituent parts may be well-understood, but the characteristic behaviour produced by the confluence of these interactions---by these complex networks---is unpredictable; in a nutshell, the whole is more than the sum of its parts. There is, perhaps, no better illustration of this concept than the discoveries being made regarding complex networks in the biological sciences. In particular, though the sequencing of the human genome in 2003 was a remarkable feat, scientists understand that the "cellular-level blueprints" for the human being are cellular-level parts lists, but they say nothing (explicitly) about cellular-level processes. The challenge of modern molecular biology is to understand these processes in terms of the networks of parts---in terms of the interactions among proteins, enzymes, genes, and metabolites---as it is these processes that ultimately differentiate animate from inanimate, giving rise to life! It is the goal of systems biology---an umbrella field encapsulating everything from molecular biology to epidemiology in social systems---to understand processes in terms of fundamental networks of core biological parts, be they proteins or people. By virtue of the fact that there are literally countless complex systems, not to mention tools and techniques used to infer, simulate, analyze, and model these systems, it is impossible to give a truly comprehensive account of the history and study of complex systems. The author

  12. A reduced-complexity model for river delta formation - Part 1: Modeling deltas with channel dynamics

    NASA Astrophysics Data System (ADS)

    Liang, M.; Voller, V. R.; Paola, C.

    2014-07-01

We develop a reduced-complexity model (RCM) of river delta formation, in contrast to reductionist models based on high-resolution computational fluid dynamics. The basic framework of this model (referred to in this paper as "DeltaRCM") consists of stochastic parcel-based cellular routing schemes for water and sediment and a set of phenomenological rules for sediment deposition and erosion. The outputs of the model include the flow field, water surface topography and bed topography, all of which evolve in time. Results show that DeltaRCM is able to: (1) resolve a wide range of channel dynamics, including elongation, bifurcation, avulsion and migration; and (2) produce different types of deltas, such as alluvial fan deltas at experimental scale, in response to changes in input parameters. We also identify three key areas of particular model sensitivity, even at the RCM level: (1) avulsion dynamics are sensitive to dynamic free-surface topography; (2) channel network structure is sensitive to instability at channel mouths which creates bars; and (3) out-of-channel sedimentation is sensitive to water surface slope along channel margins. We also demonstrate a simple stratigraphy-tracking component which can display the structure of the deposit in terms of the distribution of coarse and fine materials, along with the age of the deposit. DeltaRCM is a useful tool for understanding the dynamics of river deltas within a relatively simple cellular representation of water and sediment transport.
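
    A toy sketch of the kind of stochastic parcel-based routing that DeltaRCM builds on (not the published scheme itself): a water parcel performs a random walk whose step weights are proportional to the local water-surface drop. The grid, weighting rule and function name are our assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def route_parcel(stage, start, steps=100):
        """Walk one water parcel across a gridded water-surface elevation
        ('stage'), choosing each move at random with weights proportional
        to the local drop (a toy version of weighted cellular routing)."""
        ny, nx = stage.shape
        i, j = start
        path = [(i, j)]
        for _ in range(steps):
            nbrs = [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)
                    and 0 <= i + di < ny and 0 <= j + dj < nx]
            drops = np.array([max(stage[i, j] - stage[n], 0.0) for n in nbrs])
            if drops.sum() == 0.0:          # local minimum: parcel stops
                break
            i, j = nbrs[rng.choice(len(nbrs), p=drops / drops.sum())]
            path.append((i, j))
        return path

    stage = np.add.outer(np.linspace(1.0, 0.0, 20), np.zeros(20))  # planar slope
    print(route_parcel(stage, start=(0, 10))[:5])
    ```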

  13. A hybridization model for the plasmon response of complex nanostructures

    NASA Astrophysics Data System (ADS)

    Prodan, Emil; Radloff, Corey; Halas, Naomi; Nordlander, Peter

    2004-03-01

We discuss a simple and intuitive method, an electromagnetic analog of molecular orbital theory, to describe the plasmon response of complex nanostructures of arbitrary shape (Science 302 (2003) 419-422). The method expresses the plasmon response of complex or composite nanoparticles as resulting from the interaction or "hybridization" of elementary plasmons supported by nanostructures of elementary geometries. As an example, the approach is applied to the important cases of metallic nanoshells and concentric multishell structures, nanomatryushkas. For the nanoshell, the plasmons can be described as resulting from the interaction between the cavity plasmon localized around the inner surface of the shell and a solid-sphere plasmon localized around the outer surface of the shell. For the multishell structure, the plasmons can be viewed as resulting from the hybridization of the individual nanoshell plasmons on the different metallic shells. Work supported by ARO, TATP and the Robert A. Welch Foundation.
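
    For the nanoshell, the hybridization picture gives a closed-form result that is standard in this literature (assuming a Drude shell in vacuum, with bulk plasmon energy omega_B, inner radius a and outer radius b); a quick numeric check with illustrative parameter values:

    ```python
    import numpy as np

    def nanoshell_modes(omega_b, l, aspect):
        """Hybridized nanoshell plasmon energies from cavity + sphere plasmon
        mixing; aspect = a/b is the inner/outer radius ratio."""
        x = aspect ** (2 * l + 1)
        root = np.sqrt(1.0 + 4.0 * l * (l + 1) * x) / (2 * l + 1)
        omega_minus = omega_b * np.sqrt(0.5 * (1.0 - root))  # bonding mode
        omega_plus = omega_b * np.sqrt(0.5 * (1.0 + root))   # antibonding mode
        return omega_minus, omega_plus

    # Dipolar (l = 1) modes for a shell with a/b = 0.8 and omega_B = 9.0 eV:
    print(nanoshell_modes(9.0, l=1, aspect=0.8))
    ```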

  14. Stability and complexity in model meta-ecosystems

    PubMed Central

    Gravel, Dominique; Massol, François; Leibold, Mathew A.

    2016-01-01

The diversity of life and its organization in networks of interacting species has been a long-standing theoretical puzzle for ecologists. Ever since May's provocative paper challenging whether 'large complex systems [are] stable' various hypotheses have been proposed to explain when stability should be the rule, not the exception. Spatial dynamics may be stabilizing and thus explain high community diversity, yet existing theory on spatial stabilization is limited, preventing comparisons of the role of dispersal relative to species interactions. Here we incorporate dispersal of organisms and material into stability–complexity theory. We find that stability criteria from classic theory are relaxed in direct proportion to the number of ecologically distinct patches in the meta-ecosystem. Further, we find the stabilizing effect of dispersal is maximal at intermediate intensity. Our results highlight how biodiversity can be vulnerable to factors, such as landscape fragmentation and habitat loss, that isolate local communities. PMID:27555100
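
    May's criterion, the starting point of this work, is easy to probe numerically: a random community (Jacobian) matrix with S species, connectance C, interaction strength sigma and self-regulation d is almost surely stable when sigma*sqrt(S*C) < d. A minimal sketch of that classic check (without the paper's dispersal and meta-ecosystem terms):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def is_stable(S=100, C=0.1, sigma=0.1, d=1.0):
        """Draw a random community matrix in May's sense and test local
        stability: all eigenvalues must have negative real part."""
        A = rng.normal(0.0, sigma, size=(S, S))
        A *= rng.random((S, S)) < C          # keep a fraction C of the links
        np.fill_diagonal(A, -d)              # self-regulation on the diagonal
        return np.linalg.eigvals(A).real.max() < 0.0

    for sigma in (0.05, 0.1, 0.2, 0.4):
        trials = [is_stable(sigma=sigma) for _ in range(20)]
        print(f"sigma={sigma}: stable in {sum(trials)}/20 trials "
              f"(May parameter sigma*sqrt(SC) = {sigma * np.sqrt(100 * 0.1):.2f})")
    ```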

  15. Stability and complexity in model meta-ecosystems.

    PubMed

    Gravel, Dominique; Massol, François; Leibold, Mathew A

    2016-01-01

    The diversity of life and its organization in networks of interacting species has been a long-standing theoretical puzzle for ecologists. Ever since May's provocative paper challenging whether 'large complex systems [are] stable' various hypotheses have been proposed to explain when stability should be the rule, not the exception. Spatial dynamics may be stabilizing and thus explain high community diversity, yet existing theory on spatial stabilization is limited, preventing comparisons of the role of dispersal relative to species interactions. Here we incorporate dispersal of organisms and material into stability-complexity theory. We find that stability criteria from classic theory are relaxed in direct proportion to the number of ecologically distinct patches in the meta-ecosystem. Further, we find the stabilizing effect of dispersal is maximal at intermediate intensity. Our results highlight how biodiversity can be vulnerable to factors, such as landscape fragmentation and habitat loss, that isolate local communities. PMID:27555100

  16. The Creation of Surrogate Models for Fast Estimation of Complex Model Outcomes

    PubMed Central

    Pruett, W. Andrew; Hester, Robert L.

    2016-01-01

A surrogate model is a black box model that reproduces the output of another more complex model at a single time point. This is to be distinguished from the method of surrogate data, used in time series. The purpose of a surrogate is to reduce the time necessary for a computation at the cost of rigor and generality. We describe a method of constructing surrogates in the form of support vector machine (SVM) regressions for the purpose of exploring the parameter space of physiological models. Our focus is on the methodology of surrogate creation and accuracy assessment in comparison to the original model. This is done in the context of a simulation of hemorrhage in one model, “Small”, and renal denervation in another, HumMod. In both cases, the surrogate predicts the drop in mean arterial pressure following the intervention. We asked three questions concerning surrogate models: (1) how many training examples are necessary to obtain an accurate surrogate, (2) is surrogate accuracy homogeneous, and (3) how much can computation time be reduced when using a surrogate. We found the minimum training set size that would guarantee maximal accuracy was widely variable, but could be algorithmically generated. The average error for the pressure response to the protocols was -0.05 ± 2.47 mmHg in Small and -0.3 ± 3.94 mmHg in HumMod. In the Small model, error grew with actual pressure drop, and in HumMod, larger pressure drops were overestimated by the surrogates. Surrogate use resulted in a six-order-of-magnitude decrease in computation time. These results suggest surrogate modeling is a valuable tool for generating predictions of an integrative model’s behavior on densely sampled subsets of its parameter space. PMID:27258010
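
    A minimal sketch of the surrogate idea, assuming scikit-learn's SVR as the regression engine and a cheap stand-in function in place of HumMod or Small: evaluate the expensive model once on sampled parameters, fit the surrogate, then predict cheaply everywhere else.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(3)

    # Stand-in for an expensive physiological model: parameters -> pressure drop.
    def slow_model(params):
        return 40.0 * params[:, 0] * np.exp(-params[:, 1]) + rng.normal(0, 1, len(params))

    X_train = rng.random((200, 2))      # sampled points in parameter space
    y_train = slow_model(X_train)       # expensive evaluations, done once

    surrogate = SVR(kernel="rbf", C=100.0, epsilon=0.5).fit(X_train, y_train)

    X_test = rng.random((1000, 2))
    pred = surrogate.predict(X_test)    # near-instant predictions
    print("mean abs error:", np.abs(pred - slow_model(X_test)).mean())
    ```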

  17. NREL's System Advisor Model Simplifies Complex Energy Analysis (Fact Sheet)

    SciTech Connect

    Not Available

    2011-10-01

    The energy market is diversifying. In addition to traditional power sources, decision makers can choose among solar, wind, and geothermal technologies as well. Each of these technologies has complex performance characteristics and economics that vary with location and other project specifics, making it difficult to analyze the viability of such projects. But that analysis is easier now, thanks to the National Renewable Energy Laboratory (NREL).

  18. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, which only makes the testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
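
    A minimal sketch of the simplest member of this family, a scalar model-reference adaptive law (standard Lyapunov-based MRAC, not the specific NASA designs): the adaptive gains are driven by the tracking error against a reference model. All plant and gain values below are illustrative.

    ```python
    import numpy as np

    # Scalar plant  x' = a*x + b*u  with a, b unknown to the controller
    # (only sign(b) known); reference model  x_m' = am*x_m + bm*r.
    a, b = 1.0, 3.0            # true plant (unstable open loop)
    am, bm = -4.0, 4.0         # desired closed-loop dynamics
    gamma = 2.0                # adaptation rate
    dt, T = 1e-3, 10.0

    x = xm = 0.0
    kx = kr = 0.0              # adaptive gains, start from zero
    for k in range(int(T / dt)):
        r = 1.0 if (k * dt) % 4.0 < 2.0 else -1.0   # square-wave command
        u = kx * x + kr * r
        e = x - xm                                   # tracking error
        kx += -gamma * e * x * np.sign(b) * dt       # Lyapunov-based update laws
        kr += -gamma * e * r * np.sign(b) * dt
        x += (a * x + b * u) * dt
        xm += (am * xm + bm * r) * dt
    print(f"final error {e:+.4f}, kx={kx:.2f} (ideal {(am - a) / b:.2f}), "
          f"kr={kr:.2f} (ideal {bm / b:.2f})")
    ```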

  19. A radio-frequency sheath model for complex waveforms

    SciTech Connect

    Turner, M. M.; Chabert, P.

    2014-04-21

    Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.

  20. A multi-element cosmological model with a complex space-time topology

    NASA Astrophysics Data System (ADS)

    Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.

    2015-02-01

Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.

  1. A Complex Network Approach to Distributional Semantic Models

    PubMed Central

    Utsumi, Akira

    2015-01-01

A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in the network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models. PMID:26295940
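
    A toy version of the proposed growth mechanism (our own sketch with illustrative parameters, not the paper's calibrated model): each new node splits its edges between degree-preferential attachment and uniformly random attachment.

    ```python
    import random
    from collections import defaultdict

    random.seed(4)

    def grow_network(n=5000, m=3, p_random=0.2):
        """Growing network: each new node attaches m edges, each going to a
        uniformly random node with probability p_random and otherwise to a
        node chosen preferentially by degree."""
        adj = defaultdict(set)
        for u, v in [(0, 1), (1, 2), (0, 2)]:      # seed triangle
            adj[u].add(v); adj[v].add(u)
        degree_list = [0, 1, 1, 2, 0, 2]           # node repeated once per edge end
        for new in range(3, n):
            chosen = set()
            while len(chosen) < m:
                if random.random() < p_random:
                    chosen.add(random.randrange(new))      # random attachment
                else:
                    chosen.add(random.choice(degree_list)) # preferential attachment
            for v in chosen:
                adj[new].add(v); adj[v].add(new)
                degree_list += [new, v]
        return adj

    adj = grow_network()
    print("max degree:", max(len(nb) for nb in adj.values()))
    ```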

  2. Digging through model complexity: using hierarchical models to uncover evolutionary processes in the wild.

    PubMed

    Buoro, M; Prévost, E; Gimenez, O

    2012-10-01

The growing interest in studying questions in the wild requires acknowledging that eco-evolutionary processes are complex, hierarchically structured and often partially observed or subject to measurement error. These issues have long been ignored in evolutionary biology, which might have led to flawed inference when addressing evolutionary questions. Hierarchical modelling (HM) has been proposed as a generic statistical framework to deal with complexity in ecological data and account for uncertainty. However, to date, HM has seldom been used to investigate evolutionary mechanisms possibly underlying observed patterns. Here, we contend that HM offers a suitable framework for the study of eco-evolutionary processes in the wild by confronting formal theories with empirical data through proper statistical inference. Studying eco-evolutionary processes requires considering the complete and often complex life histories of organisms. We show how this can be achieved by combining sequentially all life-history components and all available sources of information through HM. We demonstrate how eco-evolutionary processes may be poorly inferred or even missed without using the full potential of HM. As a case study, we use the Atlantic salmon and data on wild marked juveniles. We assess a reaction norm for migration and two potential trade-offs for survival. Overall, HM has a great potential to address evolutionary questions and investigate important processes that could not previously be assessed in laboratory or short time-scale studies.

  3. Reduced complexity structural modeling for automated airframe synthesis

    NASA Technical Reports Server (NTRS)

    Hajela, Prabhat

    1987-01-01

A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment which dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional cantilever and joined-wing configurations.
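
    A minimal sketch of the fully stressed design (FSD) iteration such an approach applies to the equivalent beam: each member's cross-section is rescaled by the ratio of its working stress to the allowable stress. The toy truss values below are hypothetical.

    ```python
    # Fully stressed design (FSD) resizing: scale each member's area by the
    # ratio of its working stress to the allowable stress, and iterate.
    def fsd(areas, loads, sigma_allow=250.0, iters=20):
        for _ in range(iters):
            stresses = [P / A for P, A in zip(loads, areas)]        # sigma = P/A
            areas = [A * abs(s) / sigma_allow for A, s in zip(areas, stresses)]
        return areas

    # Toy statically determinate truss: member forces independent of sizing.
    loads = [1000.0, -750.0, 500.0]     # axial forces (N)
    areas = [10.0, 10.0, 10.0]          # initial cross-sections (mm^2)
    print([round(A, 2) for A in fsd(areas, loads)])   # -> [4.0, 3.0, 2.0]
    ```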

  4. Adaptive tracking for complex systems using reduced-order models

    NASA Technical Reports Server (NTRS)

    Carignan, Craig R.

    1990-01-01

    Reduced-order models are considered in the context of parameter adaptive controllers for tracking workspace trajectories. A dual-arm manipulation task is used to illustrate the methodology and provide simulation results. A parameter adaptive controller is designed to track the desired position trajectory of a payload using a four-parameter model instead of a full-order, nine-parameter model. Several simulations with different payload-to-arm mass ratios are used to illustrate the capabilities of the reduced-order model in tracking the desired trajectory.

  5. Is there hope for multi-site complexation modeling?

    SciTech Connect

    Bickmore, Barry R.; Rosso, Kevin M.; Mitchell, S. C.

    2006-06-06

It has been shown here that the standard formulation of the MUSIC model does not deliver the molecular-scale insight into oxide surface reactions that it promises. The model does not properly divide long-range electrostatic and short-range contributions to acid-base reaction energies, and it does not treat solvation in a physically realistic manner. However, even if the current MUSIC model does not succeed in its ambitions, its ambitions are still reasonable. It was a pioneering attempt in that Hiemstra and coworkers recognized that intrinsic equilibrium constants, where long-range electrostatic effects have been removed, must be theoretically constrained prior to model fitting if there is to be any hope of obtaining molecular-scale insights from SCMs. We have also shown, on the other hand, that it may be premature to dismiss all valence-based models of acidity. Not only can some such models accurately predict intrinsic acidity constants, but they can also now be linked to the results of molecular dynamics simulations of solvated systems. Significant challenges remain for those interested in creating SCMs that are accurate at the molecular scale. Only after all model parameters can be predicted from theory, and the models validated against titration data, will we begin to have some confidence that we really are adequately describing the chemical systems in question.

  6. Modeling Developmental Complexity in Adolescence: Hormones and Behavior in Context.

    ERIC Educational Resources Information Center

    Susman, Elizabeth J.

    1997-01-01

    The links between endocrine physiological processes and adolescent psychological processes are the focus of this article. Presents a brief history of biopsychosocial research in adolescent development. Discusses four models for conceptualizing hormone-behavior research as illustrative of biopsychosocial models. Concludes with challenges and…

  7. Modeling Conditional Probabilities in Complex Educational Assessments. CSE Technical Report.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.; Almond, Russell; Dibello, Lou; Jenkins, Frank; Steinberg, Linda; Yan, Duanli; Senturk, Deniz

    An active area in psychometric research is coordinated task design and statistical analysis built around cognitive models. Compared with classical test theory and item response theory, there is often less information from observed data about the measurement-model parameters. On the other hand, there is more information from the grounding…

  8. Bourbaki's structure theory in the problem of complex systems simulation models synthesis and model-oriented programming

    NASA Astrophysics Data System (ADS)

    Brodsky, Yu. I.

    2015-01-01

The work is devoted to the application of Bourbaki's structure theory to substantiate the synthesis of simulation models of complex multicomponent systems, where every component may be a complex system itself. An application of Bourbaki's structure theory offers a new approach to the design and computer implementation of simulation models of complex multicomponent systems: model synthesis and model-oriented programming. It differs from the traditional object-oriented approach. The central concept of this new approach, and at the same time the basic building block for the construction of more complex structures, is the concept of the model-component. A model-component is endowed with a more complicated structure than, for example, an object in object-oriented analysis. This structure provides the model-component with independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant under the integration of model-components into complexes. This fact allows one, first, to construct fractal models of any complexity and, second, to implement the computational process for such constructions uniformly, by a single universal program. In addition, the proposed paradigm allows one to exclude imperative programming and to generate computer code with a high degree of parallelism.
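
    A minimal sketch of the model-component idea in Python (ours, not the paper's formalism): simple and composite components expose the same standard-response interface, so constructions nest fractally and a single universal driver runs them all.

    ```python
    # Every component, simple or composite, answers the same standard request.
    class ModelComponent:
        def respond(self, request):
            """Standard response to a standard request from the environment."""
            raise NotImplementedError

    class Leaf(ModelComponent):
        def __init__(self, name):
            self.name = name
        def respond(self, request):
            return f"{self.name} handled '{request}'"

    class Complex(ModelComponent):
        """A composite is itself a model-component: same interface, so the
        construction can be nested to any depth (fractal models)."""
        def __init__(self, parts):
            self.parts = parts
        def respond(self, request):
            return [p.respond(request) for p in self.parts]

    # One universal driver runs any construction, however deeply nested.
    system = Complex([Leaf("pump"), Complex([Leaf("valve"), Leaf("sensor")])])
    print(system.respond("step"))
    ```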

  9. Modelling the Complex Conductivity of Charged Porous Media using The Grain Polarization Model

    NASA Astrophysics Data System (ADS)

    Leroy, P.; Revil, A.; Jougnot, D.; Li, S.

    2015-12-01

The low-frequency complex conductivity response of charged porous media reflects a combination of three polarization processes occurring in different frequency ranges. One polarization process corresponds to the membrane polarization phenomenon, which is the polarization mechanism associated with the back-diffusion of salt ions through different pore spaces of the porous material (ion-selective zones and zones with no selectivity). This polarization process generally occurs in the lowest frequency range, typically mHz to Hz, because it involves a polarization mechanism operating over different pore spaces (the relaxation frequency is inversely proportional to the length scale of the polarization process). Another polarization process corresponds to the electrochemical polarization of the electrical double layer coating the surface of the grains. In the grain polarization model, the diffuse layer is assumed not to polarize because it is assumed to form a continuum in the porous medium. The compact Stern layer is assumed to polarize because it is assumed to be discontinuous over multiple grains. The electrochemical polarization of the Stern layer typically occurs in the frequency range Hz to kHz. The last polarization process corresponds to the Maxwell-Wagner polarization mechanism, which is caused by the formation of field-induced free charge distributions near the interfaces between the phases of the medium. In this presentation, the grain polarization model based on the O'Konski, Schwarz, Schurr and Sen theories and developed later by Revil and co-workers is presented. This spectral induced polarization model was successfully applied to describe the complex conductivity responses of glass beads, sands, clays, clay-sand mixtures and other minerals. The limits of this model and future developments will also be presented.
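
    A hedged numeric illustration (not the grain polarization model presented here): spectral induced polarization responses are often summarized by superposed Pelton-type Cole-Cole relaxations, one per mechanism. All parameter values below are illustrative only, and the parallel superposition is a crude assumption.

    ```python
    import numpy as np

    def cole_cole_sigma(omega, sigma0, m, tau, c):
        """Pelton-type Cole-Cole complex conductivity for a single relaxation
        with chargeability m, time constant tau and frequency exponent c."""
        rho = (1.0 / sigma0) * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))
        return 1.0 / rho

    f = np.logspace(-3, 5, 9)          # mHz to 100 kHz
    omega = 2.0 * np.pi * f
    # Crude parallel superposition: a slow 'membrane-like' relaxation plus a
    # faster 'Stern-layer-like' one.
    sigma = (cole_cole_sigma(omega, sigma0=0.005, m=0.05, tau=10.0, c=0.5)
             + cole_cole_sigma(omega, sigma0=0.005, m=0.08, tau=1e-3, c=0.5))
    for fi, s in zip(f, sigma):
        print(f"f = {fi:10.3f} Hz: sigma' = {s.real:.5f} S/m, sigma'' = {s.imag:.2e} S/m")
    ```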

  10. The effects of numerical-model complexity and observation type on estimated porosity values

    NASA Astrophysics Data System (ADS)

    Starn, J. Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.

    2015-09-01

The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), is coupled with an advective transport simulation, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a "complex" highly parameterized porosity field and a "simple" parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and apparent ages (from tritium/helium) in model calibration are also discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by the complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best-quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.

  11. Non-consensus Opinion Models on Complex Networks

    NASA Astrophysics Data System (ADS)

    Li, Qian; Braunstein, Lidia A.; Wang, Huijuan; Shao, Jia; Stanley, H. Eugene; Havlin, Shlomo

    2013-04-01

Social dynamic opinion models have been widely studied to understand how interactions among individuals cause opinions to evolve. Most opinion models that utilize spin interaction models usually produce a consensus steady state in which only one opinion exists. Because in reality different opinions usually coexist, we focus on non-consensus opinion models in which, above a certain threshold, two opinions coexist in a stable relationship. We revisit and extend the non-consensus opinion (NCO) model introduced by Shao et al. (Phys. Rev. Lett. 103:018701, 2009). The NCO model in random networks displays a second-order phase transition that belongs to regular mean-field percolation and is characterized by the appearance (above a certain threshold) of a large spanning cluster of the minority opinion. We generalize the NCO model by adding a weight factor W to each individual's original opinion when determining their future opinion (NCO W model). We find that as W increases the minority opinion holders tend to form stable clusters with a smaller initial minority fraction than in the NCO model. We also revisit another non-consensus opinion model based on the NCO model, the inflexible contrarian opinion (ICO) model (Li et al., Phys. Rev. E 84:066101, 2011), which introduces inflexible contrarians to model the competition between two opinions in a steady state. Inflexible contrarians are individuals who never change their original opinion but may influence the opinions of others. To place the inflexible contrarians in the ICO model we use two different strategies: random placement and one in which high-degree nodes are targeted. The inflexible contrarians effectively decrease the size of the largest rival-opinion cluster in both strategies, but the effect is more pronounced under the targeted method. All of the above models have previously been explored in terms of a single network, but human communities are usually interconnected, not isolated. Because opinions propagate not
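
    A minimal sketch of the NCO update rule with the weight factor W (our implementation, on a toy random graph, not the paper's code): each node adopts the weighted majority opinion of itself and its neighbors, and a minority fraction can survive in stable clusters.

    ```python
    import random

    random.seed(5)

    def nco_step(opinions, adj, W=1.0):
        """One synchronous NCO(W) update: each node adopts the opinion of the
        weighted majority of itself and its neighbors; its own opinion carries
        weight W, each neighbor weight 1; ties leave it unchanged."""
        new = opinions.copy()
        for node, nbrs in enumerate(adj):
            score = W * opinions[node] + sum(opinions[v] for v in nbrs)
            if score != 0:
                new[node] = 1 if score > 0 else -1
        return new

    # Erdos-Renyi-like random graph with 200 nodes, mean degree ~4.
    n, p = 200, 4 / 200
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j); adj[j].append(i)

    opinions = [1 if random.random() < 0.3 else -1 for _ in range(n)]  # 30% minority
    for _ in range(50):                       # iterate toward a steady state
        opinions = nco_step(opinions, adj)
    print("surviving minority fraction:", opinions.count(1) / n)
    ```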

  12. Computer modeling of properties of complex molecular systems

    SciTech Connect

    Kulkova, E.Yu.; Khrenova, M.G.; Polyakov, I.V.

    2015-03-10

Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that assume treatment of a part of the system by quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting the structures and spectra of such complex molecular aggregates.

  13. Improving ranking of models for protein complexes with side chain modeling and atomic potentials.

    PubMed

    Viswanath, Shruthi; Ravikant, D V S; Elber, Ron

    2013-04-01

An atomically detailed potential for docking pairs of proteins is derived using mathematical programming. A refinement algorithm that builds atomically detailed models of the complex and combines coarse-grained and atomic scoring is introduced. The refinement step consists of remodeling the interface side chains of the top-scoring decoys from rigid docking, followed by a short energy minimization. The refined models are then re-ranked using a combination of coarse-grained and atomic potentials. The docking algorithm, including the refinement and re-ranking, compares favorably to other leading docking packages such as ZDOCK, Cluspro, and PATCHDOCK on the ZLAB 3.0 Benchmark and a test set of 30 novel complexes. A detailed analysis shows that coarse-grained potentials perform better than atomic potentials for realistic unbound docking (where the exact structures of the individual bound proteins are unknown), probably because atomic potentials are more sensitive to local errors. Nevertheless, the atomic potential captures a different signal from the residue potential, and as a result a combination of the two scores provides a significantly better prediction than either approach alone.

  14. On the Minimum Description Length Complexity of Multinomial Processing Tree Models

    PubMed Central

    Wu, Hao; Myung, Jay I.; Batchelder, William H.

    2010-01-01

Multinomial processing tree (MPT) modeling is a statistical methodology that has been widely and successfully applied for measuring hypothesized latent cognitive processes in selected experimental paradigms. This paper concerns the complexity of MPT models. Complexity is a key and necessary concept to consider in the evaluation and selection of quantitative models. A complex model with many parameters often overfits data beyond and above the underlying regularities, and therefore should be appropriately penalized. It has been well established and demonstrated in multiple studies that in addition to the number of parameters, a model’s functional form, which refers to the way in which parameters are combined in the model equation, can also have significant effects on complexity. Given that MPT models vary greatly in their functional forms (tree structures and parameter/category assignments), it would be of interest to evaluate their effects on complexity. Addressing this issue from the minimum description length (MDL) viewpoint, we prove a series of propositions concerning various ways in which functional form contributes to the complexity of MPT models. Computational issues of complexity are also discussed. PMID:20514139
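
    For reference, the MDL complexity these propositions concern is, in the standard Fisher-information (normalized maximum likelihood) approximation, the sum of a parameter-count term and a functional-form term:

    ```latex
    % MDL complexity of a model with k parameters, sample size n, and
    % per-observation Fisher information matrix I(theta):
    C(M) \;=\; \frac{k}{2}\,\ln\frac{n}{2\pi}
             \;+\; \ln \int_{\Theta} \sqrt{\det I(\theta)}\,\mathrm{d}\theta
    ```

    The integral term is where functional form enters: two MPT models with the same number of parameters k but different tree structures or parameter/category assignments generally have different Fisher information, and hence different complexity.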

  15. Uncertainty and error in complex plasma chemistry models

    NASA Astrophysics Data System (ADS)

    Turner, Miles M.

    2015-06-01

    Chemistry models that include dozens of species and hundreds to thousands of reactions are common in low-temperature plasma physics. The rate constants used in such models are uncertain, because they are obtained from some combination of experiments and approximate theories. Since the predictions of these models are a function of the rate constants, these predictions must also be uncertain. However, systematic investigations of the influence of uncertain rate constants on model predictions are rare to non-existent. In this work we examine a particular chemistry model, for helium-oxygen plasmas. This chemistry is of topical interest because of its relevance to biomedical applications of atmospheric pressure plasmas. We trace the primary sources for every rate constant in the model, and hence associate an error bar (or equivalently, an uncertainty) with each. We then use a Monte Carlo procedure to quantify the uncertainty in predicted plasma species densities caused by the uncertainty in the rate constants. Under the conditions investigated, the range of uncertainty in most species densities is a factor of two to five. However, the uncertainty can vary strongly for different species, over time, and with other plasma conditions. There are extreme (pathological) cases where the uncertainty is more than a factor of ten. One should therefore be cautious in drawing any conclusion from plasma chemistry modelling, without first ensuring that the conclusion in question survives an examination of the related uncertainty.
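
    A minimal sketch of the Monte Carlo procedure on a toy two-reaction chemistry (the helium-oxygen mechanism itself is far larger): each rate constant is sampled with a multiplicative "factor of f" uncertainty and the resulting spread of a predicted density is collected. All values below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Toy chemistry: species n produced at rate k1*ne, destroyed at rate k2*n.
    # Steady state: n* = k1*ne/k2.
    def steady_state(k1, k2, ne=1e16):
        return k1 * ne / k2

    k1_nom, f1 = 1e-16, 2.0      # nominal rate constant, factor-of-2 uncertainty
    k2_nom, f2 = 1e-9, 3.0       # nominal rate constant, factor-of-3 uncertainty
    samples = steady_state(
        k1_nom * f1 ** rng.normal(size=100_000),   # lognormal sampling:
        k2_nom * f2 ** rng.normal(size=100_000),   # nominal * factor**z, z~N(0,1)
    )
    lo, med, hi = np.percentile(samples, [2.5, 50, 97.5])
    print(f"n*: median {med:.2e}, 95% interval [{lo:.2e}, {hi:.2e}]")
    ```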

  16. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
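
    A minimal sketch of the kind of discrete second-order Volterra expansion an IO model evaluates (the kernels and spike-train input below are synthetic, not the identified synapse kernels):

    ```python
    import numpy as np

    def volterra_2nd(x, k0, k1, k2):
        """Discrete second-order Volterra series: at each time the output is a
        constant, plus a linear convolution with kernel k1, plus a bilinear
        term with kernel k2 over pairs of past inputs."""
        M = len(k1)                                  # memory length
        y = np.full(len(x), k0, dtype=float)
        xp = np.concatenate([np.zeros(M - 1), x])    # zero-padded past
        for t in range(len(x)):
            past = xp[t:t + M][::-1]                 # x[t], x[t-1], ..., x[t-M+1]
            y[t] += k1 @ past + past @ k2 @ past
        return y

    M = 20
    tau = np.arange(M)
    k1 = np.exp(-tau / 5.0)                # first-order kernel (decay)
    k2 = -0.05 * np.outer(k1, k1)          # second-order kernel (depression)
    spikes = (np.random.default_rng(7).random(200) < 0.1).astype(float)
    y = volterra_2nd(spikes, k0=0.0, k1=k1, k2=k2)
    print("peak response:", y.max())
    ```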

  17. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate change induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model

  18. Modelling polymersomes: a prototype for complex cellular structures

    NASA Astrophysics Data System (ADS)

    Sevink, G. J. A.; Fraaije, J. G. E. M.

    2004-04-01

Self-organisation of small amphiphilic molecules is a key technique in many applications of modern nanotechnology. Thin polymer films have been extensively studied theoretically and experimentally because of their rich phase behaviour and use as templates in lithographic processes, optical devices and surfaces with molecular recognition capabilities. Encapsulating polymeric vesicles, or polymersomes, can be applied in very diverse applications ranging from drug delivery and templates for heterogeneous catalysts to aerosols and personal care products. Moreover, there is some understanding that polymersomes with internal structures can serve as a scaffold for the understanding of complex biological structures, such as the mitochondrion and other (sub-)cellular structures. The experimental technique of making polymersomes is relatively new, and the kinetics of their formation are delicate and often not well understood. As a result, the internal and external structures of experimental polymersomes are very diverse, and often depend strongly on the method of preparation. Here we report the results of field-theoretic computer simulations of remarkable structures in dispersed droplets of a polymer surfactant. The preparation method is that of quenching a homogeneous droplet of polymer surfactant in an aqueous bath. In the discussion part we briefly discuss the road ahead: the use of our method as a tool for understanding complex biological systems.

  19. Complexities and contingencies conceptualised: towards a model of reproductive navigation.

    PubMed

    van der Sijpt, Erica

    2014-02-01

    Current international attention to reproductive health behaviour is inspired by a western celebration of individual rights, autonomous action and rational choice. A predominant idea is that individuals should be free to act in accordance with their reproductive intentions and that, in doing so, they will attain their desired (and quantifiable) fertility outcomes. Yet such a framework leads to a misrepresentation of the reproductive dynamics on the ground, because individual fertility intentions are often not a priori defined, decisions are often not the result of rational calculation and reproductive happenings do not exist in a social vacuum. This article provides sociocultural evidence for a different conceptualisation of reproductive health behaviour. On the basis of long-term anthropological fieldwork in the East Province of Cameroon, I will analyse the complexities of fertility-related decision-making. Two case studies from the field will show that reproductive happenings are often characterised by indeterminacy and contingency. In order to understand the complex ways in which women give direction to these uncertainties, I propose an encompassing framework of reproductive navigation that explicitly acknowledges the influence of sociality and corporeality on fertility aspirations and actions. PMID:24111549

  20. Complexities and contingencies conceptualised: towards a model of reproductive navigation.

    PubMed

    van der Sijpt, Erica

    2014-02-01

    Current international attention to reproductive health behaviour is inspired by a western celebration of individual rights, autonomous action and rational choice. A predominant idea is that individuals should be free to act in accordance with their reproductive intentions and that, in doing so, they will attain their desired (and quantifiable) fertility outcomes. Yet such a framework leads to a misrepresentation of the reproductive dynamics on the ground, because individual fertility intentions are often not a priori defined, decisions are often not the result of rational calculation and reproductive happenings do not exist in a social vacuum. This article provides sociocultural evidence for a different conceptualisation of reproductive health behaviour. On the basis of long-term anthropological fieldwork in the East Province of Cameroon, I will analyse the complexities of fertility-related decision-making. Two case studies from the field will show that reproductive happenings are often characterised by indeterminacy and contingency. In order to understand the complex ways in which women give direction to these uncertainties, I propose an encompassing framework of reproductive navigation that explicitly acknowledges the influence of sociality and corporeality on fertility aspirations and actions.

  1. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission...

  2. Mathematical model and software complex for computer simulation of field emission electron sources

    SciTech Connect

    Nikiforov, Konstantin

    2015-03-10

The software complex, developed in MATLAB, allows modelling of the operation of diode and triode structures based on field-emission electron sources with complex sub-micron geometry, computation of their current-voltage characteristics, and calculation of the electric field distribution, for educational and research needs. The goal of this paper is to describe the physical-mathematical model, calculation methods and algorithms the software complex is based on, to demonstrate the principles of its operation, and to show results of its work. For getting acquainted with the complex, a demo version with a graphical user interface is presented.
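
    The paper's own model is not reproduced here; as a generic illustration of the quantity such a tool computes, a standard elementary Fowler-Nordheim estimate of the field-emission current density (the work function, field-enhancement factor and field values below are hypothetical):

    ```python
    import numpy as np

    # Elementary Fowler-Nordheim form:  J = (a * F^2 / phi) * exp(-b * phi^1.5 / F)
    A_FN = 1.541434e-6      # A eV V^-2
    B_FN = 6.830890e9       # V m^-1 eV^-1.5

    def fn_current_density(F, phi=4.5, beta=50.0):
        """Current density (A/m^2) at macroscopic field F (V/m), work function
        phi (eV), and local field-enhancement factor beta (tip geometry)."""
        F_local = beta * F
        return A_FN * F_local**2 / phi * np.exp(-B_FN * phi**1.5 / F_local)

    for F in (1e7, 2e7, 5e7):   # sweep the applied field to sketch an I-V curve
        print(f"F = {F:.0e} V/m -> J = {fn_current_density(F):.3e} A/m^2")
    ```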

  3. Complex dynamics of a nonlinear voter model with contrarian agents

    SciTech Connect

    Tanabe, Shoma; Masuda, Naoki

    2013-12-15

    We investigate mean-field dynamics of a nonlinear opinion formation model with congregator and contrarian agents. Each agent assumes one of the two possible states. Congregators imitate the state of other agents with a rate that increases with the number of other agents in the opposite state, as in the linear voter model and nonlinear majority voting models. Contrarians flip the state with a rate that increases with the number of other agents in the same state. The nonlinearity controls the strength of the majority voting and is used as a main bifurcation parameter. We show that the model undergoes a rich bifurcation scenario comprising the egalitarian equilibrium, two symmetric lopsided equilibria, limit cycle, and coexistence of different types of stable equilibria with intertwining attractive basins.

  4. Railway faults spreading model based on dynamics of complex network

    NASA Astrophysics Data System (ADS)

    Zhou, Jin; Xu, Weixiang; Guo, Xin; Ma, Xin

    2015-12-01

In this paper, we propose a railway fault spreading model which improves on the SIR model and makes it suitable for analyzing the dynamic process of fault spreading. To apply our model to a real network, the accident causation network of the "7.23" China Yongwen high-speed railway accident is employed. This network is recast as a directed network, which more clearly reflects the causation relationships among the accident factors and supports our study. Simulation results quantitatively show that the influence of failures can be diminished by choosing appropriate initial recovery factors, reducing the time until a failure is detected, decreasing the transmission rate of faults and increasing the propagation rate of corrected information. The model is useful for simulating railway fault spreading and quantitatively analyzing the influence of failures.

  5. Modelling hierarchical and modular complex networks: division and independence

    NASA Astrophysics Data System (ADS)

    Kim, D.-H.; Rodgers, G. J.; Kahng, B.; Kim, D.

    2005-06-01

We introduce a growing network model which generates both modular and hierarchical structure in a self-organized way. To this end, we modify the Barabási-Albert model into one evolving under the principles of division and independence as well as growth and preferential attachment (PA). A newly added vertex chooses one of the modules composed of existing vertices and attaches edges to vertices belonging to that module following the PA rule. When the module reaches a prescribed size, it is divided into two, and a new module is created. The karate club network studied by Zachary is a simple version of the current model. We find that the model can reproduce both modular and hierarchical properties, with the hierarchical clustering function of a vertex with degree k, C(k), being in good agreement with empirical measurements for real-world networks.

  6. LISP based simulation generators for modeling complex space processes

    NASA Technical Reports Server (NTRS)

    Tseng, Fan T.; Schroer, Bernard J.; Dwan, Wen-Shing

    1987-01-01

    The development of a simulation assistant for modeling discrete event processes is presented. Included are an overview of the system, a description of the simulation generators, and a sample process generated using the simulation assistant.

  7. Analytical solution of a model for complex food webs

    NASA Astrophysics Data System (ADS)

    Camacho, Juan; Guimerà, Roger; Amaral, Luís A.

    2002-03-01

    We investigate numerically and analytically a recently proposed model for food webs [Nature 404, 180 (2000)] in the limit of large web sizes and sparse interaction matrices. We obtain analytical expressions for several quantities with ecological interest, in particular, the probability distributions for the number of prey and the number of predators. We find that these distributions have fast-decaying exponential and Gaussian tails, respectively. We also find that our analytical expressions are robust to changes in the details of the model.

  8. Ensemble Learning of QTL Models Improves Prediction of Complex Traits

    PubMed Central

    Bian, Yang; Holland, James B.

    2015-01-01

Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383

  9. Ensemble Learning of QTL Models Improves Prediction of Complex Traits.

    PubMed

    Bian, Yang; Holland, James B

    2015-10-01

Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability but are less useful for genetic prediction because of the difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage between markers introduces near collinearity among marker genotypes, complicating the detection of QTL and estimation of QTL effects in linkage mapping, and this problem is exacerbated by very high density linkage maps. Here we developed a thinning and aggregating (TAGGING) method as a new ensemble learning approach to QTL mapping. TAGGING reduces collinearity problems by thinning dense linkage maps, maintains aspects of marker selection that characterize standard QTL mapping, and by ensembling, incorporates information from many more marker-trait associations than traditional QTL mapping. The objective of TAGGING was to improve prediction power compared with QTL mapping while also providing more specific insights into genetic architecture than genome-wide prediction models. TAGGING was compared with standard QTL mapping using cross validation of empirical data from the maize (Zea mays L.) nested association mapping population. TAGGING-assisted QTL mapping substantially improved prediction ability for both biparental and multifamily populations by reducing both the variance and bias in prediction. Furthermore, an ensemble model combining predictions from TAGGING-assisted QTL and infinitesimal models improved prediction abilities over the component models, indicating some complementarity between model assumptions and suggesting that some trait genetic architectures involve a mixture of a few major QTL and polygenic effects. PMID:26276383

  10. Renormalizing Sznajd model on complex networks taking into account the effects of growth mechanisms

    NASA Astrophysics Data System (ADS)

    González, M. C.; Sousa, A. O.; Herrmann, H. J.

    2006-01-01

We present a renormalization approach to solve the Sznajd opinion formation model on complex networks. For the case of two opinions, we present an expression for the probability of reaching consensus for a given opinion as a function of the initial fraction of agents with that opinion. The calculations reproduce the sharp transition of the model on a fixed network, as well as the recently observed smooth function for the model when simulated on growing complex networks.

  11. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.

  12. A Systematic Assessment of the Relationship Between the Complexity and Fidelity of Hydrological Models

    NASA Astrophysics Data System (ADS)

    Addor, N.; Clark, M. P.; Nijssen, B.

    2015-12-01

    The relationship between the complexity and fidelity of hydrological models is challenging to investigate in a systematic way using current modeling frameworks. Its characterization has so far principally relied on the comparison of different models or of different modules within the same model. Shortcomings of these approaches include the difficulty to pinpoint model features that contribute to good simulations, given the small number of models or modeling hypotheses that are usually evaluated. We use the newly-developed Structure for Unifying Multiple Modeling Alternatives (SUMMA) to comprehensively and systematically explore modeling alternatives across the continuum of model complexity. We use SUMMA's flexibility to evaluate the impacts of explicitly representing or lumping physical processes and hydrological landscapes. Starting from conceptual models based on the Framework for Understanding Structural Errors (FUSE), we progressively increase model complexity and assess corresponding model fidelity. We scrutinize models' ability to reproduce observed events and the stability of their performance under changing climatic conditions (robustness). We will show preliminary results for catchments in different hydroclimatic regimes simulated using models of varying complexity. As a first step, model complexity will be quantified using computing time and the number of state variables; model robustness will be quantified using differential split-sample tests; and model performance will be quantified using a suite of multivariate and multi-scale diagnostic metrics. With this modeling approach we seek to uncover trade-offs between realism and practicality. A particular aim is to explore to which extent the replacement of conceptual formulations by physically explicit ones improves model performance, and whether this may lead to a reduction of uncertainty in hydrological simulations.

  13. Understanding the implementation of complex interventions in health care: the normalization process model

    PubMed Central

    May, Carl; Finch, Tracy; Mair, Frances; Ballini, Luciana; Dowrick, Christopher; Eccles, Martin; Gask, Linda; MacFarlane, Anne; Murray, Elizabeth; Rapley, Tim; Rogers, Anne; Treweek, Shaun; Wallace, Paul; Anderson, George; Burns, Jo; Heaven, Ben

    2007-01-01

    Background: The Normalization Process Model is a theoretical model that assists in explaining the processes by which complex interventions become routinely embedded in health care practice. It offers a framework for process evaluation and also for comparative studies of complex interventions. It focuses on the factors that promote or inhibit the routine embedding of complex interventions in health care practice. Methods: A formal theory structure is used to define the model, and its internal causal relations and mechanisms. The model is broken down to show that it is consistent and adequate in generating accurate description, systematic explanation, and the production of rational knowledge claims about the workability and integration of complex interventions. Results: The model explains the normalization of complex interventions by reference to four factors demonstrated to promote or inhibit the operationalization and embedding of complex interventions (interactional workability, relational integration, skill-set workability, and contextual integration). Conclusion: The model is consistent and adequate. Repeated calls for theoretically sound process evaluations in randomized controlled trials of complex interventions, and policy-makers who call for a proper understanding of implementation processes, emphasize the value of conceptual tools like the Normalization Process Model. PMID:17880693

  14. Modelling excitonic-energy transfer in light-harvesting complexes

    SciTech Connect

    Kramer, Tobias; Kreisbeck, Christoph

    2014-01-08

    The theoretical and experimental study of energy transfer in photosynthesis has revealed an interesting transport regime, which lies at the borderline between classical transport dynamics and quantum-mechanical interference effects. Dissipation is caused by the coupling of electronic degrees of freedom to vibrational modes and leads to a directional energy transfer from the antenna complex to the target reaction center. The dissipative driving is robust and does not rely on fine-tuning of specific vibrational modes. For the parameter regime encountered in biological systems, new theoretical tools are required to directly compare theoretical results with experimental spectroscopy data. The calculations require massively parallel graphics processing units (GPUs) for efficient and exact computations.

  15. Research Strategy for Modeling the Complexities of Turbine Heat Transfer

    NASA Technical Reports Server (NTRS)

    Simoneau, Robert J.

    1996-01-01

    The subject of this paper is a NASA research program, known as the Coolant Flow Management Program, which focuses on the interaction between the internal coolant channel and the external film cooling of a turbine blade and/or vane in an aircraft gas turbine engine. The turbine gas path is a very complex flow field. The combination of strong pressure gradients, abrupt geometry changes and intersecting surfaces, viscous forces, rotation, and unsteady blade/vane interactions all combine to offer a formidable challenge. To this, in the high-pressure turbine, we add the necessity of film cooling. The ultimate goal of the turbine designer is to maintain or increase the high level of turbine performance and at the same time reduce the amount of coolant flow needed to achieve this end. Simply stated, coolant flow is a penalty on the cycle and reduces engine thermal efficiency. Accordingly, understanding the flow field and heat transfer associated with the coolant flow is a priority goal. It is important to understand both the film cooling and the internal coolant flow, and particularly their interaction; hence the motivation for the Coolant Flow Management Program. The paper will begin with a brief discussion of the management and research strategy, will then proceed to discuss the current attack from the internal coolant side, and will conclude by looking at the film cooling effort, at all times keeping sight of the primary goal: the interaction between the two. One of the themes of this paper is that complex heat transfer problems of this nature cannot be attacked by single researchers or even groups of researchers, each working alone. It truly needs the combined efforts of a well-coordinated team to make an impact. It is important to note that this is a government/industry/university team effort.

  16. Dynamics and complexity of the Schelling segregation model

    NASA Astrophysics Data System (ADS)

    Domic, Nicolás Goles; Goles, Eric; Rica, Sergio

    2011-05-01

    In this paper we consider the Schelling social segregation model for two different populations. In Schelling’s model, segregation appears as a consequence of discrimination, measured by the local difference between the two populations. To that end, the model defines a tolerance criterion on the neighborhood of an individual, indicating whether or not the individual is able to move to a new place. Next, the model chooses which of the available unhappy individuals actually moves. In our work, we study the patterns generated by the dynamical evolution of the Schelling model in terms of various parameters and the initial condition, such as the size of the neighborhood of an inhabitant, the tolerance, and the initial number of individuals. As a general rule we observe that segregation patterns minimize the interface between zones of different people. In this context we introduce an energy functional associated with the configuration, which is a strictly decreasing function in the tolerant-people case. Moreover, as far as we know, we are the first to notice that in the case of a non-strictly-decreasing energy functional, the system may segregate very efficiently.

  17. Dynamics and complexity of the Schelling segregation model.

    PubMed

    Domic, Nicolás Goles; Goles, Eric; Rica, Sergio

    2011-05-01

    In this paper we consider the Schelling social segregation model for two different populations. In Schelling's model, segregation appears as a consequence of discrimination, measured by the local difference between the two populations. To that end, the model defines a tolerance criterion on the neighborhood of an individual, indicating whether or not the individual is able to move to a new place. Next, the model chooses which of the available unhappy individuals actually moves. In our work, we study the patterns generated by the dynamical evolution of the Schelling model in terms of various parameters and the initial condition, such as the size of the neighborhood of an inhabitant, the tolerance, and the initial number of individuals. As a general rule we observe that segregation patterns minimize the interface between zones of different people. In this context we introduce an energy functional associated with the configuration, which is a strictly decreasing function in the tolerant-people case. Moreover, as far as we know, we are the first to notice that in the case of a non-strictly-decreasing energy functional, the system may segregate very efficiently.
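
    A minimal Python sketch of the dynamics just described is given below. Grid size, population fractions, tolerance, and the move rule (relocation to a random empty site rather than to a site chosen by the mover's own criterion) are illustrative simplifications, not the paper's exact protocol.

      import numpy as np

      rng = np.random.default_rng(0)

      # 0 = empty site, 1 and 2 = the two populations (parameters illustrative).
      SIZE, TOLERANCE, STEPS = 50, 0.5, 20_000
      grid = rng.choice([0, 1, 2], size=(SIZE, SIZE), p=[0.10, 0.45, 0.45])

      def unhappy(grid, i, j):
          """Tolerance criterion: an individual is unhappy when the fraction
          of like individuals in its Moore neighborhood is below TOLERANCE."""
          me = grid[i, j]
          if me == 0:
              return False
          neigh = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
          occupied = neigh[neigh != 0]
          like, total = np.count_nonzero(occupied == me) - 1, occupied.size - 1
          return total > 0 and like / total < TOLERANCE

      for _ in range(STEPS):
          i, j = rng.integers(SIZE, size=2)
          if unhappy(grid, i, j):
              empties = np.argwhere(grid == 0)
              k, l = empties[rng.integers(len(empties))]
              grid[k, l], grid[i, j] = grid[i, j], 0  # relocate the individual

    An energy along the lines of the paper's functional can be monitored by counting unlike adjacent pairs across the grid after each sweep; for tolerant populations it should decrease monotonically.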

  18. Efficient Calibration/Uncertainty Analysis Using Paired Complex/Surrogate Models.

    PubMed

    Burrows, Wesley; Doherty, John

    2015-01-01

    The use of detailed groundwater models to simulate complex environmental processes can be hampered by (1) long run times and (2) a penchant for solution convergence problems. Collectively, these can undermine the ability of a modeler to reduce and quantify predictive uncertainty, and therefore limit the use of such detailed models in the decision-making context. We explain and demonstrate a novel approach to the calibration, and the exploration of posterior predictive uncertainty, of a complex model that can overcome these problems in many modelling contexts. The methodology relies on the conjunctive use of a simplified surrogate version of the complex model in combination with the complex model itself. The methodology employs gradient-based subspace analysis and is thus readily adapted for use in highly parameterized contexts. In its most basic form, one or more surrogate models are used to calculate the partial derivatives that collectively comprise the Jacobian matrix, while testing of parameter upgrades and the making of predictions are done by the original complex model. The methodology is demonstrated using a density-dependent seawater intrusion model in which the model domain is characterized by a heterogeneous distribution of hydraulic conductivity.
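
    The division of labour described in the abstract, derivatives from the cheap surrogate and upgrade testing by the expensive model, can be sketched in a few lines. The following Python fragment is a generic illustration under assumed interfaces (complex_model and surrogate are user-supplied callables returning simulated equivalents of the observations); it is not the authors' implementation, which operates within a highly parameterized, subspace-based framework.

      import numpy as np

      def surrogate_jacobian(surrogate, p, eps=1e-6):
          """Finite-difference Jacobian computed with the cheap surrogate,
          replacing the many expensive complex-model runs."""
          r0 = surrogate(p)
          J = np.empty((r0.size, p.size))
          for i in range(p.size):
              dp = p.copy()
              dp[i] += eps
              J[:, i] = (surrogate(dp) - r0) / eps
          return J

      def paired_calibration(complex_model, surrogate, obs, p, lam=1e-2, iters=20):
          """Levenberg-Marquardt-style loop: the surrogate supplies the Jacobian,
          the complex model tests each proposed parameter upgrade."""
          for _ in range(iters):
              r = obs - complex_model(p)
              J = surrogate_jacobian(surrogate, p)
              step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), J.T @ r)
              if np.sum((obs - complex_model(p + step)) ** 2) < np.sum(r ** 2):
                  p, lam = p + step, lam * 0.5   # accept upgrade, trust model more
              else:
                  lam *= 2.0                     # reject, dampen the next step
          return p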

  19. Low Complexity Models to improve Incomplete Sensitivities for Shape Optimization

    NASA Astrophysics Data System (ADS)

    Stanciu, Mugurel; Mohammadi, Bijan; Moreau, Stéphane

    2003-01-01

    The present global platform for simulation and design of multi-model configurations treats shape optimization problems in aerodynamics. Flow solvers are coupled with optimization algorithms based on CAD-free and CAD-connected frameworks. Newton methods are used together with incomplete expressions of the gradients. Such incomplete sensitivities are improved using reduced models based on physical assumptions. The validity and the application of this approach to real-life problems are presented. The numerical examples concern shape optimization for an airfoil, a business jet, and a car engine cooling axial fan.

  20. The development and application of composite complexity models and a relative complexity metric in a software maintenance environment

    NASA Technical Reports Server (NTRS)

    Hops, J. M.; Sherif, J. S.

    1994-01-01

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It has been estimated that, on average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point for alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software, which were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
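
    A relative complexity metric of the kind described is typically built by standardizing several raw complexity measures across modules and combining them into a single score used for ranking. The sketch below illustrates that idea in Python with invented metric names and values; the paper's actual metric and weighting scheme may differ.

      import numpy as np

      def relative_complexity(metrics):
          """metrics: (n_modules, n_metrics) array of raw measures, e.g. lines
          of code, cyclomatic complexity, fan-out. Each column is standardized
          across modules (z-scores) and the scores are averaged, yielding a
          unitless relative complexity per module."""
          z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
          return z.mean(axis=1)

      raw = np.array([[120,  9,  4],    # module A
                      [450, 25, 11],    # module B
                      [ 80,  4,  2]])   # module C
      scores = relative_complexity(raw)
      ranking = np.argsort(scores)[::-1]  # most maintenance-prone modules first
      print(scores, ranking)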

  1. Ensemble learning of QTL models improves prediction of complex traits

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Quantitative trait locus (QTL) models can provide useful insights into trait genetic architecture because of their straightforward interpretability, but are less useful for genetic prediction due to difficulty in including the effects of numerous small effect loci without overfitting. Tight linkage ...

  2. Information and complexity measures for hydrologic model evaluation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Hydrological models are commonly evaluated through residual-based performance measures such as the root-mean-square error or efficiency criteria. Such measures, however, do not evaluate the degree of similarity of patterns in simulated and measured time series. The objective of this study was to...

  3. Wind field near complex terrain using numerical weather prediction model

    NASA Astrophysics Data System (ADS)

    Chim, Kin-Sang

    The PennState/NCAR MM5 model was modified to simulate idealized flow past a 3D obstacle in the Micro-Alpha Scale domain. The obstacles used were an idealized Gaussian obstacle and the real topography of Lantau Island, Hong Kong. The Froude numbers under study ranged from 0.22 to 1.5. Regime diagrams for both the idealized Gaussian obstacle and Lantau Island were constructed. This work is divided into five parts. The first part is the problem definition and a literature review of the related publications. The second part briefly discusses the PennState/NCAR MM5 model and includes a case study of long-range transport. The third part is devoted to the modification and verification of the PennState/NCAR MM5 model on the Micro-Alpha Scale domain. The implementation of the Orlanski (1976) open boundary condition is included, with the method of single-sounding initialization of the model. Moreover, an upper dissipative layer, following Klemp and Lilly (1978), is implemented in the model. The simulated result is verified against Automatic Weather Station (AWS) data and Wind Profiler data. Four different types of Planetary Boundary Layer (PBL) parameterization schemes have been investigated in order to find the most suitable one for the Micro-Alpha Scale domain in terms of both accuracy and efficiency. The bulk aerodynamic type of PBL parameterization scheme is found to be the most suitable. An investigation of the free-slip lower boundary condition is performed and the simulated result is compared with the frictional case. The fourth part is the use of the modified PennState/NCAR MM5 model for an idealized flow simulation. The idealized uniform flow used is nonhydrostatic and has a constant Froude number. Sensitivity tests are performed by varying the Froude number, and the regime diagram is constructed. Moreover, the nondimensional drag is found to be useful for regime identification. The model result is also compared with the analytic
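
    The regime diagrams mentioned above are organized by the Froude number for stably stratified flow past an obstacle, Fr = U/(Nh), with U the upstream wind speed, N the Brunt-Vaisala frequency, and h the obstacle height. A minimal calculation, with illustrative profile values (only the 934 m height of Lantau Peak is a real figure):

      import numpy as np

      def brunt_vaisala(theta, dtheta_dz, g=9.81):
          """N = sqrt((g / theta) * dtheta/dz) from the potential-temperature
          profile (theta in K, gradient in K/m)."""
          return np.sqrt(g / theta * dtheta_dz)

      def froude_number(U, N, h):
          """Fr = U / (N h) for stratified flow past an obstacle of height h."""
          return U / (N * h)

      N = brunt_vaisala(theta=300.0, dtheta_dz=0.003)   # ~0.01 s^-1
      print(froude_number(U=10.0, N=N, h=934.0))        # ~1.1: near flow-over regime

    Low Fr (here, toward 0.22) favors flow splitting around the obstacle, while Fr approaching 1.5 favors flow over it, which is the qualitative axis of the regime diagrams.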

  4. Which level of model complexity is justified by your data? A Bayesian answer

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Illman, Walter; Wöhling, Thomas; Nowak, Wolfgang

    2016-04-01

    When judging the plausibility and utility of a subsurface flow or transport model, the question of justifiability arises: which level of model complexity can still be justified by the available calibration data? Although it is common sense that more data are needed to reasonably constrain the parameter space of a more complex model, there is a lack of tools that can objectively quantify model justifiability as a function of the available data. We propose an approach to determine model justifiability in the context of comparing alternative conceptual models. Our approach rests on Bayesian model averaging (BMA). BMA yields posterior model probabilities that point the modeler to an optimal trade-off between model performance in reproducing a given calibration data set and model complexity. To find out which level of complexity can be justified by the available data, we disentangle the complexity component of the trade-off from its performance counterpart. Technically, we remove the performance component from the BMA analysis by replacing the actually observed data values with potential measurement values as predicted by the models. Our proposed analysis results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum level of model complexity that could possibly be justified by the available amount and type of data. As a side product, model (dis-)similarity is revealed. We have applied the model justifiability analysis to a case of aquifer characterization via hydraulic tomography. Four models of vastly different complexity have been proposed to represent the heterogeneity in hydraulic conductivity of a sandbox aquifer, ranging from a homogeneous medium to geostatistical random fields. We have used drawdown data from two to six pumping tests to condition the models and to determine model justifiability as a function of data set size. Our test case shows that a geostatistical parameterization scheme requires a substantial amount of
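
    The core computation behind the proposed analysis can be sketched compactly. In the fragment below, log_evidence(candidate, generating) is an assumed user-supplied function returning the log Bayesian model evidence of a candidate model against the data predicted by a generating model; everything else follows from Bayes' theorem. The result is a justifiability ("model confusion") matrix of the kind the abstract describes: a dominant diagonal means the data suffice to tell the models apart.

      import numpy as np

      def posterior_weights(log_evidences, prior=None):
          """Posterior model probabilities p(Mk|D) from log evidences,
          p(Mk|D) being proportional to p(D|Mk) p(Mk)."""
          log_ev = np.asarray(log_evidences, dtype=float)
          if prior is None:
              prior = np.full(log_ev.size, 1.0 / log_ev.size)
          w = np.exp(log_ev - log_ev.max()) * prior   # shift for numerical stability
          return w / w.sum()

      def confusion_matrix(log_evidence, models):
          """Row k: BMA weights of all candidate models when the 'observations'
          are the data predicted by model k itself."""
          return np.array([posterior_weights([log_evidence(m, gen) for m in models])
                           for gen in models])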

  5. A novel approach for identifying causal models of complex diseases from family data.

    PubMed

    Park, Leeyoung; Kim, Ju H

    2015-04-01

    Causal models including genetic factors are important for understanding the presentation mechanisms of complex diseases. Familial aggregation and segregation analyses based on polygenic threshold models have been the primary approach to fitting genetic models to the family data of complex diseases. In the current study, an advanced approach to obtaining appropriate causal models for complex diseases, based on the sufficient component cause (SCC) model involving combinations of traditional genetics principles, was proposed. The probabilities for the entire population, i.e., normal-normal, normal-disease, and disease-disease, were considered for each model for the appropriate handling of common complex diseases. The causal model in the current study included the genetic effects from single genes involving epistasis, complementary gene interactions, gene-environment interactions, and environmental effects. Bayesian inference using a Markov chain Monte Carlo (MCMC) algorithm was used to assess the proportions of each component for a given population lifetime incidence. This approach is flexible, allowing both common and rare variants within a gene and across multiple genes. An application to schizophrenia data confirmed the complexity of the causal factors. An analysis of diabetes data demonstrated that environmental factors and gene-environment interactions are the main causal factors for type II diabetes. The proposed method is effective and useful for identifying causal models, which can accelerate the development of efficient strategies for identifying causal factors of complex diseases. PMID:25701286

  6. Chemical evolution of peroxidase--amino acid pentacyanoferrate (II) complexes as model.

    PubMed

    Kamaluddin; Nath, M; Deopujari, S W

    1988-01-01

    Complexes of the type [Fe(II)(CN)5(L)]n- (where n = 3 or 4; L = glycine, histidine, imidazole, or triglycine) are proposed as an evolutionary model of peroxidases. A detailed kinetic investigation of the disproportionation of hydrogen peroxide catalysed by the [Fe(II)(CN)5(L)]n- complexes at 40 degrees C and pH 9.18 is discussed. The decomposition of hydrogen peroxide catalysed by the above complexes conforms to Michaelis-Menten type kinetics.
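
    Michaelis-Menten kinetics, the rate law the catalytic data are reported to follow, is v = Vmax[S]/(Km + [S]). A small Python sketch, with illustrative parameter values rather than the constants determined in the study:

      def michaelis_menten(S, Vmax, Km):
          """Initial rate v = Vmax*S / (Km + S): first order in substrate at
          low [S], saturating at Vmax when the catalytic sites are occupied."""
          return Vmax * S / (Km + S)

      # Conformance is often checked via Lineweaver-Burk: 1/v linear in 1/[S].
      for S in (0.5, 1.0, 2.0, 4.0, 8.0):   # illustrative H2O2 concentrations
          print(S, michaelis_menten(S, Vmax=1.2, Km=2.0))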

  7. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    PubMed

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  8. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    PubMed

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  9. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    PubMed Central

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  10. The effects of numerical-model complexity and observation type on estimated porosity values

    USGS Publications Warehouse

    Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.

    2015-01-01

    The relative merits of model complexity and of the types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), is adapted for advective transport, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and of apparent ages (from tritium/helium) in model calibration is also discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although the tritium breakthrough curves simulated by the complex and simple models are generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best-quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.

  11. Microstructure-based modelling of multiphase materials and complex structures

    NASA Astrophysics Data System (ADS)

    Werner, Ewald; Wesenjak, Robert; Fillafer, Alexander; Meier, Felix; Krempaszky, Christian

    2016-09-01

    Micromechanical approaches are frequently employed to monitor local and global field quantities and their evolution under varying mechanical and/or thermal loading scenarios. In this contribution, an overview is given of important methods that are currently used to gain insight into the deformation and failure behaviour of multiphase materials and complex structures. First, techniques to represent material microstructures are reviewed. It is common either to digitise images of real microstructures or to generate virtual 2D or 3D microstructures using automated procedures (e.g. Voronoï tessellation) for grain generation and colouring algorithms for phase assignment. While the former method allows all morphological and topological features of the microstructure at hand to be captured exactly, the latter method opens up the possibility of parametric studies on the influence of individual microstructure features on the local and global stress and strain response. Several applications of these approaches are presented, comprising the low- and high-strain behaviour of multiphase steels, the failure and fracture behaviour of multiphase materials, and the evolution of surface roughening of the aluminium top metallisation of semiconductor devices.
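
    The virtual-microstructure route (Voronoï grains plus phase colouring) is easy to sketch. The Python fragment below rasterizes a Voronoi tessellation by nearest-seed lookup and assigns each grain a phase at random; the grain count, resolution, and phase fractions are illustrative, not taken from the paper.

      import numpy as np
      from scipy.spatial import cKDTree

      rng = np.random.default_rng(1)

      N_GRAINS, RES = 40, 256
      seeds = rng.random((N_GRAINS, 2))           # random grain nuclei in [0,1]^2

      # "Colouring": e.g. a two-phase material with 70/30 phase fractions.
      phase_of_grain = rng.choice([0, 1], size=N_GRAINS, p=[0.7, 0.3])

      # Each pixel belongs to its nearest seed, which is exactly the Voronoi
      # tessellation of the seed points.
      x, y = np.meshgrid(np.linspace(0, 1, RES), np.linspace(0, 1, RES))
      _, grain_id = cKDTree(seeds).query(np.column_stack([x.ravel(), y.ravel()]))
      phase_map = phase_of_grain[grain_id].reshape(RES, RES)

    The phase_map array can then serve as the geometry input for a micromechanical (e.g. finite-element) analysis, and regenerating it with different seeds or fractions supports exactly the kind of parametric study the abstract mentions.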

  12. Reconstitution of [Fe]-hydrogenase using model complexes

    NASA Astrophysics Data System (ADS)

    Shima, Seigo; Chen, Dafa; Xu, Tao; Wodrich, Matthew D.; Fujishiro, Takashi; Schultz, Katherine M.; Kahnt, Jörg; Ataka, Kenichi; Hu, Xile

    2015-12-01

    [Fe]-Hydrogenase catalyses the reversible hydrogenation of a methenyltetrahydromethanopterin substrate, an intermediate step during methanogenesis from CO2 and H2. The active site contains an iron-guanylylpyridinol (FeGP) cofactor, in which Fe2+ is coordinated by two CO ligands, as well as by an acyl carbon atom and a pyridinyl nitrogen atom from a 3,4,5,6-substituted 2-pyridinol ligand. However, the mechanism of H2 activation by [Fe]-hydrogenase is unclear. Here we report the reconstitution of [Fe]-hydrogenase from an apoenzyme using two FeGP cofactor mimics to create semisynthetic enzymes. The small-molecule mimics reproduce the ligand environment of the active site, but are inactive towards H2 binding and activation on their own. We show that reconstituting the enzyme using a mimic that contains a 2-hydroxypyridine group restores activity, whereas an analogous enzyme with a 2-methoxypyridine complex was essentially inactive. These findings, together with density functional theory computations, support a mechanism in which the 2-hydroxy group is deprotonated before it serves as an internal base for heterolytic H2 cleavage.

  13. Model of human collective decision-making in complex environments

    NASA Astrophysics Data System (ADS)

    Carbone, Giuseppe; Giannoccaro, Ilaria

    2015-12-01

    A continuous-time Markov process is proposed to analyze how a group of humans solves a complex task, consisting of the search for the optimal set of decisions on a fitness landscape. Individuals change their opinions driven by two different forces: (i) self-interest, which pushes them to increase their own fitness values, and (ii) social interactions, which push individuals to reduce the diversity of their opinions in order to reach consensus. Results show that the performance of the group is strongly affected by the strength of social interactions and by the level of knowledge of the individuals. Increasing the strength of social interactions improves the performance of the team; however, excessively strong social interactions slow down the search for the optimal solution and worsen the performance of the group. In particular, we find that the threshold value of the social interaction strength which leads to the emergence of a superior intelligence of the group is precisely the critical threshold at which consensus among the members sets in. We also prove that a moderate level of knowledge is already enough to guarantee high performance of the group in making decisions.

  14. The zebrafish as a model for complex tissue regeneration

    PubMed Central

    Gemberling, Matthew; Bailey, Travis J.; Hyde, David R.; Poss, Kenneth D.

    2013-01-01

    For centuries, philosophers and scientists have been fascinated by the principles and implications of regeneration in lower vertebrate species. Two features have made zebrafish an informative model system for determining mechanisms of regenerative events. First, they are highly regenerative, able to regrow amputated fins, as well as a lesioned brain, retina, spinal cord, heart, and other tissues. Second, they are amenable to both forward and reverse genetic approaches, with a research toolset regularly updated by an expanding community of zebrafish researchers. Zebrafish studies have helped identify new mechanistic underpinnings of regeneration in multiple tissues, and in some cases have served as a guide for contemplating regenerative strategies in mammals. Here, we review the recent history of zebrafish as a genetic model system for understanding how and why tissue regeneration occurs. PMID:23927865

  15. Complex nuclear spectra in a large scale shell model approach

    NASA Astrophysics Data System (ADS)

    Bianco, D.; Andreozzi, F.; Lo Iudice, N.; Porrino, A.; Knapp, F.

    2012-05-01

    We report on a shell model implementation of an iterative matrix diagonalization algorithm in the spin uncoupled scheme. A new importance sampling is adopted which brings the eigenvalues to convergence with about 10% of the basis states. The method is shown to be able to provide an exhaustive description of the low-energy spectroscopic properties of 132-134Xe isotopes and of the spectrum of 130Xe.

  16. NREL's System Advisor Model Simplifies Complex Energy Analysis (Fact Sheet)

    SciTech Connect

    Not Available

    2015-01-01

    NREL has developed a tool -- the System Advisor Model (SAM) -- that can help decision makers analyze cost, performance, and financing of any size grid-connected solar, wind, or geothermal power project. Manufacturers, engineering and consulting firms, research and development firms, utilities, developers, venture capital firms, and international organizations use SAM for end-to-end analysis that helps determine whether and how to make investments in renewable energy projects.

  17. Data Mining Approaches for Modeling Complex Electronic Circuit Design Activities

    SciTech Connect

    Kwon, Yongjin; Omitaomu, Olufemi A; Wang, Gi-Nam

    2008-01-01

    A printed circuit board (PCB) is an essential part of modern electronic circuits. It is made of a flat panel of insulating material with patterned copper foils that act as electric pathways for various components such as ICs, diodes, capacitors, resistors, and coils. The size of PCBs has been shrinking over the years, while the number of components mounted on these boards has increased considerably. This trend makes the design and fabrication of PCBs ever more difficult. At the beginning of design cycles, it is important to accurately estimate the time required to complete the necessary steps, based on many factors such as the required parts, approximate board size and shape, and a rough sketch of the schematics. The current approach uses the multiple linear regression (MLR) technique for time and cost estimation. However, the need for accurate predictive models continues to grow as the technology becomes more advanced. In this paper, we analyze a large volume of historical PCB design data, extract important variables, and develop predictive models based on the extracted variables using a data mining approach. The data mining approach uses an adaptive support vector regression (ASVR) technique; the benchmark model is the MLR technique currently used in the industry. The strengths of SVR for these data include its ability to represent data in high-dimensional space through kernel functions. The computational results show that the data mining approach is the better prediction technique for these data. Our approach reduces computation time and enhances the practical applications of the SVR technique.
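
    The MLR-versus-SVR benchmark is straightforward to reproduce in outline with scikit-learn. The sketch below uses plain RBF-kernel SVR rather than the adaptive variant of the paper, and random data in place of the proprietary PCB design history, so only the comparison pattern carries over.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.svm import SVR
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(2)
      # Stand-ins for design attributes (part count, board area, schematic size)
      # and design completion time; the true relationship is made nonlinear.
      X = rng.random((200, 3))
      y = 5 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.standard_normal(200)

      models = {
          "MLR": LinearRegression(),
          "SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05)),
      }
      for name, model in models.items():
          r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
          print(name, round(r2, 3))   # the kernel model captures the nonlinearity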

  18. Building a Theoretical Model of Metacognitive Processes in Complex Modeling Activities: A Window into the Development of Students' Metacognitive Abilities

    ERIC Educational Resources Information Center

    Kim, Young Rae

    2013-01-01

    A theoretical model of metacognition in complex modeling activities has been developed from existing frameworks by synthesizing re-conceptualizations of metacognition at multiple levels and examining the three sources that trigger metacognition. Using the theoretical model as a framework, this study was designed to explore how students'…

  19. The Skilled Counselor Training Model: Skills Acquisition, Self-Assessment, and Cognitive Complexity

    ERIC Educational Resources Information Center

    Little, Cassandra; Packman, Jill; Smaby, Marlowe H.; Maddux, Cleborne D.

    2005-01-01

    The authors evaluated the effectiveness of the Skilled Counselor Training Model (SCTM; M. H. Smaby, C. D. Maddux, E. Torres-Rivera, & R. Zimmick, 1999) in teaching counseling skills and in fostering counselor cognitive complexity. Counselor trainees who completed the SCTM had better counseling skills and higher levels of cognitive complexity than…

  20. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    PubMed

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of the seventeen objects is represented with more than 80 site-specific parameters, about 22 of which are time-dependent, resulting in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of the regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding of the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions, and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown to be capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches, as well as simplifying and conservative assumptions. In light of

  1. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    ERIC Educational Resources Information Center

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…

  2. Using New Models to Analyze Complex Regularities of the World: Commentary on Musso et al. (2013)

    ERIC Educational Resources Information Center

    Nokelainen, Petri; Silander, Tomi

    2014-01-01

    This commentary on the recent article by Musso et al. (2013) discusses issues related to model fitting, comparison of the classification accuracy of generative and discriminative models, and the two (or more) cultures of data modeling. We start by questioning the extremely high classification accuracy obtained with empirical data from a complex domain. There is…

  3. Modeling complex chemical effects in turbulent nonpremixed combustion

    NASA Technical Reports Server (NTRS)

    Smith, Nigel S. A.

    1995-01-01

    Virtually all of the energy derived from the consumption of combustibles occurs in systems that utilize turbulent fluid motion. Since combustion is largely governed by the mixing of fluids, and mixing processes are orders of magnitude more rapid when enhanced by turbulent motion, efficiency criteria dictate that chemically powered devices necessarily involve fluid turbulence. Where combustion occurs concurrently with mixing at an interface between two reactive fluid bodies, this mode of combustion is called nonpremixed combustion. This is distinct from premixed combustion, where flame fronts propagate into a homogeneous mixture of reactants. These two modes are limiting cases in the range of temporal lag between the mixing of reactants and the onset of reaction: nonpremixed combustion occurs where this lag tends to zero, while premixed combustion occurs where this lag tends to infinity. Many combustion processes are hybrids of these two extremes, with finite non-zero lag times. Turbulent nonpremixed combustion is important from a practical standpoint because it occurs in gas-fired boilers, furnaces, waste incinerators, diesel engines, gas turbine combustors, afterburners, etc. To a large extent, past development of these practical systems involved an empirical methodology. Presently, efficiency standards and emission regulations are being further tightened (Correa 1993), and empiricism has had to give way to more fundamental research in order to understand and effectively model practical combustion processes (Pope 1991). A key element in effective modeling of turbulent combustion is making use of a sufficiently detailed chemical kinetic mechanism. The prediction of pollutant emissions such as oxides of nitrogen (NO(x)) and sulphur (SO(x)), unburned hydrocarbons, and particulates demands the use of detailed chemical mechanisms. It is essential that practical models for turbulent nonpremixed combustion are capable of handling large numbers of 'stiff' chemical species
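
    'Stiff' here means reaction rate constants spanning many orders of magnitude, which forces the use of implicit integrators. A standard stand-in for such chemistry (not a combustion mechanism from this work) is Robertson's problem, integrated below with SciPy's BDF method:

      import numpy as np
      from scipy.integrate import solve_ivp

      def robertson(t, c):
          """Robertson's classic stiff kinetics test problem: three species,
          rate constants 0.04, 1e4 and 3e7 spanning nine orders of magnitude."""
          c1, c2, c3 = c
          return [-0.04 * c1 + 1.0e4 * c2 * c3,
                  0.04 * c1 - 1.0e4 * c2 * c3 - 3.0e7 * c2 ** 2,
                  3.0e7 * c2 ** 2]

      # An implicit (BDF) method takes large steps where an explicit scheme
      # would be forced to resolve the fastest timescale everywhere.
      sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                      method="BDF", rtol=1e-6, atol=1e-10)
      print(sol.y[:, -1])   # near-equilibrium composition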

  4. Efficient modelling of droplet dynamics on complex surfaces.

    PubMed

    Karapetsas, George; Chamakos, Nikolaos T; Papathanasiou, Athanasios G

    2016-03-01

    This work investigates the dynamics of droplet interaction with smooth or structured solid surfaces using a novel sharp-interface scheme which allows the efficient modelling of multiple dynamic contact lines. The liquid-gas and liquid-solid interfaces are treated in a unified context, and the dynamic contact angle emerges simply from the combined action of the disjoining and capillary pressures and the viscous stresses, without the need for an explicit boundary condition or any requirement to predefine the number and position of the contact lines. The latter, as is shown, renders the model able to handle interfacial flows with topological changes, e.g. in the case of a droplet impinging on a structured surface. It is then possible to predict, depending on the impact velocity, whether the droplet will fully or partially impregnate the structures of the solid, or will end up in a 'fakir', i.e. suspended, state. In the case of a droplet sliding on an inclined substrate, we also demonstrate the built-in capability of our model to provide a prediction for either static or dynamic contact angle hysteresis. We focus our study on hydrophobic surfaces and examine the effect of the geometrical characteristics of the solid surface. It is shown that the presence of air inclusions trapped in the micro-structure of a hydrophobic substrate (Cassie-Baxter state) results in a decrease of the contact angle hysteresis and an increase of the droplet migration velocity, in agreement with experimental observations for super-hydrophobic surfaces. Moreover, we perform 3D simulations which are in line with the 2D ones regarding droplet mobility and also indicate that the contact angle hysteresis may be significantly affected by the directionality of the structures with respect to the droplet motion. PMID:26828706

  5. Causal Inference and Model Selection in Complex Settings

    NASA Astrophysics Data System (ADS)

    Zhao, Shandong

    Propensity score methods have become part of the standard toolkit for applied researchers who wish to ascertain causal effects from observational data. While they were originally developed for binary treatments, several researchers have proposed generalizations of the propensity score methodology for non-binary treatment regimes. In this article, we first review three main methods that generalize propensity scores in this direction, namely inverse propensity weighting (IPW), the propensity function (P-FUNCTION), and the generalized propensity score (GPS), along with recent extensions of the GPS that aim to improve its robustness. We compare the assumptions, theoretical properties, and empirical performance of these methods. We propose three new methods that provide robust causal estimation based on the P-FUNCTION and GPS. While our proposed P-FUNCTION-based estimator performs well, we generally advise caution, in that all available methods can be biased by model misspecification and extrapolation. In a related line of research, we consider adjustment for posttreatment covariates in causal inference. Even in a randomized experiment, observations might have different compliance performance under treatment and control assignment. This posttreatment covariate cannot be adjusted for using standard statistical methods. We review the principal stratification framework, which allows this effect to be modeled as part of its Bayesian hierarchical models. We generalize the current model to add the possibility of adjusting for pretreatment covariates. We also propose a new estimator of the average treatment effect over the entire population. In a third line of research, we discuss the spectral line detection problem in high-energy astrophysics. We carefully review how this problem can be statistically formulated as a precise hypothesis test with a point null hypothesis, why a usual likelihood ratio test does not apply to problems of this nature, and a doable fix to correctly
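
    For orientation, the binary-treatment special case of IPW, the first of the three reviewed methods, fits in a few lines. The sketch below is a generic Hajek-style IPW estimator of the average treatment effect using a logistic-regression propensity model; it is illustrative only and is not the dissertation's non-binary machinery.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def ipw_ate(X, t, y):
          """Inverse propensity weighting for binary treatment t in {0, 1}:
          weight each unit by 1 / Pr(received its own treatment | X) and
          compare weighted outcome means."""
          e = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
          e = np.clip(e, 0.01, 0.99)   # guard against extreme weights
          treated_mean = np.average(y, weights=t / e)
          control_mean = np.average(y, weights=(1 - t) / (1 - e))
          return treated_mean - control_mean

    The clipping step is one crude answer to the extrapolation caveat raised above: units with propensities near 0 or 1 would otherwise dominate the estimate.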

  6. Positive complexity-stability relations in food web models without foraging adaptation.

    PubMed

    Kartascheff, Boris; Guill, Christian; Drossel, Barbara

    2009-07-01

    May's [1972. Will a large complex system be stable? Nature 238, 413-414] local stability analysis of random food web models showed that increasing network complexity leads to decreasing stability, a result that contradicted earlier empirical findings. Since this seminal work, research on complexity-stability relations has become one of the most challenging issues in theoretical ecology. We investigate conditions for positive complexity-stability relations in the niche, cascade, nested hierarchy, and random models by evaluating network robustness, i.e., the fraction of surviving species after population dynamics. We find that positive relations between robustness and complexity can be obtained when resources are large, a Holling II functional response is used, and interaction strengths are weighted with the number of prey species in order to take foraging efforts into account. To obtain these results, no foraging dynamics needs to be included. However, the niche model does not show positive complexity-stability relations under these conditions. By comparing to empirical food web data, we show that the niche model has unrealistic distributions of predator numbers. When this distribution is randomized, positive complexity-stability relations can be found in the niche model as well.
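
    The two ingredients singled out above, a Holling type II functional response and attack rates weighted by the number of prey, take a compact form. The sketch below writes the multi-prey Holling II consumption rate in Python with invented parameter values; the study's full population-dynamics model contains more terms.

      import numpy as np

      def holling_II(B, a, h):
          """Per-capita rate F[i, j] at which predator i consumes prey j:
          F_ij = a_ij * B_j / (1 + h * sum_k a_ik * B_k), saturating as
          total available prey biomass grows."""
          denom = 1.0 + h * (a @ B)
          return a * B[None, :] / denom[:, None]

      def prey_weighted(a):
          """Divide each predator's attack rates by its number of prey species,
          the foraging-effort weighting associated with positive
          complexity-stability relations in the study."""
          n_prey = np.count_nonzero(a, axis=1).clip(min=1)
          return a / n_prey[:, None]

      B = np.array([1.0, 0.5, 0.2])                 # prey biomasses (illustrative)
      a = np.array([[0.8, 0.4, 0.0],                # attack rates, 2 predators
                    [0.0, 0.6, 0.6]])
      print(holling_II(B, prey_weighted(a), h=0.5))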

  7. CALIBRATION OF SUBSURFACE BATCH AND REACTIVE-TRANSPORT MODELS INVOLVING COMPLEX BIOGEOCHEMICAL PROCESSES

    EPA Science Inventory

    In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...

  8. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to the synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. The analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected the errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
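
    The basic use of a TTD model, predicting well concentrations by convolving a source-area input history with the distribution, can be sketched with the simplest (exponential) TTD. The input ramp and mean age below are invented; the study's SDM and the other three calibrated models are more elaborate.

      import numpy as np

      def exponential_ttd(tau, mean_age):
          """g(tau) = exp(-tau / T) / T, the simplest analytical TTD."""
          return np.exp(-tau / mean_age) / mean_age

      def well_concentration(c_in, mean_age, dt=1.0):
          """c(t) = integral of g(tau) * c_in(t - tau) dtau, evaluated as a
          discrete convolution of the input history with the TTD."""
          tau = np.arange(c_in.size) * dt
          g = exponential_ttd(tau, mean_age)
          return np.convolve(c_in, g * dt)[: c_in.size]

      years = np.arange(1950, 2021)
      nitrate_input = np.clip(0.05 * (years - 1950), 0.0, 2.0)  # illustrative ramp
      print(well_concentration(nitrate_input, mean_age=25.0)[-1])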

  9. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    SciTech Connect

    Ferguson, T.J.; Long, K.S.; Sayre, J.A.; Hull, A.L.; Carey, D.A.; Sim, J.R.; Smith, M.G.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  10. Application of surface complexation models to anion adsorption by natural materials.

    PubMed

    Goldberg, Sabine

    2014-10-01

    Various chemical models of ion adsorption are presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model, are described in the present study. Characteristics common to all the surface complexation models are equilibrium constant expressions, mass and charge balances, and surface activity coefficient electrostatic potential terms. Methods for determining parameter values for surface site density, capacitances, and surface complexation constants also are discussed. Spectroscopic experimental methods of establishing ion adsorption mechanisms include vibrational spectroscopy, nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, X-ray absorption spectroscopy, and X-ray reflectivity. Experimental determinations of point of zero charge shifts and ionic strength dependence of adsorption results and molecular modeling calculations also can be used to deduce adsorption mechanisms. Applications of the surface complexation models to heterogeneous natural materials, such as soils, using the component additivity and the generalized composite approaches are described. Emphasis is on the generalized composite approach for predicting anion adsorption by soils. Continuing research is needed to develop consistent and realistic protocols for describing ion adsorption reactions on soil minerals and soils. The availability of standardized model parameter databases for use in chemical speciation-transport models is critical. PMID:24619924

  11. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value of and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable, because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
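
    Variance-based GSA of the kind invoked here is commonly run with the SALib package; the sketch below assumes SALib is available and substitutes a toy analytic function for an integrated environmental model. First-order indices (S1) attribute output variance to single factors; the gap between total-order (ST) and S1 flags the interactions and non-linear responses the abstract emphasizes.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["k_growth", "k_decay", "coupling"],   # hypothetical factors
          "bounds": [[0.0, 1.0]] * 3,
      }

      X = saltelli.sample(problem, 1024)    # N * (2D + 2) model runs

      def model(p):
          k1, k2, c = p
          return k1 / (k2 + 0.1) + c * k1   # toy stand-in for the coupled model

      Y = np.array([model(p) for p in X])
      Si = sobol.analyze(problem, Y)
      print(Si["S1"], Si["ST"])             # first-order vs total-order indices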

  12. Probabilistic Multi-Factor Interaction Model for Complex Material Behavior

    NASA Technical Reports Server (NTRS)

    Chamis, Christos C.; Abumeri, Galib H.

    2008-01-01

    The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the launch external tanks. The model has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection, and it accommodates all interactions through its product form. Each factor has an exponent that satisfies only two points: the initial and final points. The exponent describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens under launch conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated.
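
    The product form referred to above is usually written, in NASA reports on the MFIM, as a product of factor ratios raised to exponents, with each ratio anchored at the factor's initial and final values. A minimal sketch, assuming that published form and purely invented numbers:

      import numpy as np

      def mfim(A, A0, Af, e):
          """M/M0 = prod_i [(Af_i - A_i) / (Af_i - A0_i)] ** e_i.
          Each factor A_i moves monotonically from its initial value A0_i
          toward its final value Af_i; the exponent e_i shapes the path
          between those two anchor points."""
          A, A0, Af, e = map(np.asarray, (A, A0, Af, e))
          return np.prod(((Af - A) / (Af - A0)) ** e)

      # Three illustrative factors at mid-range conditions:
      print(mfim(A=[0.5, 0.2, 0.8], A0=[0.0, 0.0, 0.0],
                 Af=[1.0, 1.0, 1.0], e=[0.5, 1.0, 2.0]))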

  13. Do climate models reproduce complexity of observed sea level changes?

    NASA Astrophysics Data System (ADS)

    Becker, M.; Karpytchev, M.; Marcos, M.; Jevrejeva, S.; Lennartz-Sassinek, S.

    2016-05-01

    The ability of Atmosphere-Ocean General Circulation Models (AOGCMs) to capture the statistical behavior of sea level (SL) fluctuations has been assessed at the local scale. To do so, we have compared the scaling behavior of the SL fluctuations simulated in the historical runs of 36 CMIP5 AOGCMs to that in the longest (>100 years) SL records from 23 tide gauges around the globe. The observed SL fluctuations are known to manifest power-law scaling. We have checked whether the SL changes simulated in the AOGCMs exhibit the same scaling properties and long-term correlations as observed in the tide gauge records. We find that the majority of AOGCMs overestimate the scaling of SL fluctuations, particularly in the North Atlantic. Consequently, AOGCMs, routinely used to project regional SL rise, may underestimate the part of the externally driven SL rise, in particular the anthropogenic footprint, in the projections for the 21st century.

  14. Preferential survival in models of complex ad hoc networks

    NASA Astrophysics Data System (ADS)

    Kong, Joseph S.; Roychowdhury, Vwani P.

    2008-05-01

    There has been a rich interplay in recent years between (i) empirical investigations of real-world dynamic networks, (ii) analytical modeling of the microscopic mechanisms that drive the emergence of such networks, and (iii) harnessing of these mechanisms to either manipulate existing networks or engineer new networks for specific tasks. We continue in this vein, and study the deletion phenomenon in the web by following two different sets of websites (each comprising more than 150,000 pages) over a one-year period. Empirical data show that there is a significant deletion component in the underlying web networks, but the deletion process is not uniform. This motivates us to introduce a new mechanism of preferential survival (PS), where nodes are removed according to a degree-dependent deletion kernel, D(k) ∝ k^(-α), with α ≥ 0. We use the mean-field rate equation approach to study a general dynamic model driven by preferential attachment (PA), double PA (DPA), and a tunable PS (i.e., with any α > 0), where c nodes (c < 1) are deleted per node added to the network, and verify our predictions via large-scale simulations. One of our results shows that, unlike in the case of uniform deletion (i.e., where α = 0), the PS kernel, when coupled with the standard PA mechanism, can lead to heavy-tailed power-law networks even in the presence of extreme turnover in the network. Moreover, a weak DPA mechanism, coupled with PS, can help to make the network even more heavy-tailed, especially in the limit when deletion and insertion rates are almost equal and the overall network growth is minimal. The dynamics reported in this work can be used to design and engineer stable ad hoc networks and explain the stability of the power-law exponents observed in real-world networks.
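
    A bare-bones Python simulation of growth with PA and turnover with a PS deletion kernel is sketched below; all parameter values are illustrative, and the update scheme is simplified relative to the paper's rate-equation model (e.g. no DPA term).

      import numpy as np

      rng = np.random.default_rng(3)

      ALPHA, C, STEPS, M = 1.0, 0.5, 3000, 2   # PS exponent, deletion rate, growth
      degree = {0: 1, 1: 1}
      edges = [(0, 1)]

      for new in range(2, STEPS):
          # PA growth: the newcomer attaches M edges with probability ~ degree.
          nodes = np.array(list(degree))
          k = np.array([degree[n] for n in nodes], dtype=float)
          targets = rng.choice(nodes, size=M, replace=False, p=k / k.sum())
          degree[new] = 0
          for t in targets:
              edges.append((new, int(t)))
              degree[new] += 1
              degree[int(t)] += 1
          # PS turnover: with rate C, delete one node, kernel D(k) ~ k**(-ALPHA),
          # so low-degree nodes are the likeliest casualties.
          if rng.random() < C and len(degree) > M + 1:
              nodes = np.array(list(degree))
              w = np.array([degree[n] for n in nodes], dtype=float) ** -ALPHA
              victim = int(rng.choice(nodes, p=w / w.sum()))
              edges = [(u, v) for (u, v) in edges if victim not in (u, v)]
              degree = {n: 0 for n in degree if n != victim}
              for u, v in edges:
                  degree[u] += 1
                  degree[v] += 1
              degree = {n: d for n, d in degree.items() if d > 0}  # drop isolates

    Tallying the degree distribution at the end and checking for a power-law tail reproduces, in miniature, the heavy-tail-despite-turnover result highlighted above.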

  15. Inference, simulation, modeling, and analysis of complex networks, with special emphasis on complex networks in systems biology

    NASA Astrophysics Data System (ADS)

    Christensen, Claire Petra

    Across diverse fields ranging from physics to biology, sociology, and economics, the technological advances of the past decade have engendered an unprecedented explosion of data on highly complex systems with thousands, if not millions of interacting components. These systems exist at many scales of size and complexity, and it is becoming ever-more apparent that they are, in fact, universal, arising in every field of study. Moreover, they share fundamental properties---chief among these, that the individual interactions of their constituent parts may be well-understood, but the characteristic behaviour produced by the confluence of these interactions---by these complex networks---is unpredictable; in a nutshell, the whole is more than the sum of its parts. There is, perhaps, no better illustration of this concept than the discoveries being made regarding complex networks in the biological sciences. In particular, though the sequencing of the human genome in 2003 was a remarkable feat, scientists understand that the "cellular-level blueprints" for the human being are cellular-level parts lists, but they say nothing (explicitly) about cellular-level processes. The challenge of modern molecular biology is to understand these processes in terms of the networks of parts---in terms of the interactions among proteins, enzymes, genes, and metabolites---as it is these processes that ultimately differentiate animate from inanimate, giving rise to life! It is the goal of systems biology---an umbrella field encapsulating everything from molecular biology to epidemiology in social systems---to understand processes in terms of fundamental networks of core biological parts, be they proteins or people. By virtue of the fact that there are literally countless complex systems, not to mention tools and techniques used to infer, simulate, analyze, and model these systems, it is impossible to give a truly comprehensive account of the history and study of complex systems. The author

  16. The value of multiple data set calibration versus model complexity for improving the performance of hydrological models in mountain catchments

    NASA Astrophysics Data System (ADS)

    Finger, David; Vis, Marc; Huss, Matthias; Seibert, Jan

    2015-04-01

    The assessment of the contributions of snow, glacier, and rainfall runoff to discharge in mountain streams is of major importance for adequate water resource management. Such contributions can be estimated via hydrological models, provided that the modeling adequately accounts for snow and glacier melt as well as rainfall runoff. We present a multiple-data-set calibration approach to estimate runoff composition using hydrological models with three levels of complexity. For this purpose, the code of the conceptual runoff model HBV-light was enhanced to allow calibration and validation of simulations against glacier mass balances, satellite-derived snow cover area and measured discharge. The three levels of model complexity were applied to glacierized catchments in Switzerland ranging from 39 to 103 km². The results indicate that all three observational data sets are reproduced adequately by the model, allowing an accurate estimation of the runoff composition in the three mountain streams. However, calibration against runoff alone leads to unrealistic snow and glacier melt rates. Based on these results, we recommend using all three observational data sets in order to constrain model parameters and compute snow, glacier, and rain contributions. Finally, based on the comparison of model performance across complexities, we postulate that the availability and use of different data sets to calibrate hydrological models might be more important than model complexity for achieving realistic estimates of runoff composition.
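
    The abstract does not specify how the three observation types are combined during calibration; one common approach is a weighted sum of efficiency scores. The sketch below assumes equal weights and the Nash-Sutcliffe efficiency as the metric, neither of which is claimed to match the HBV-light implementation.

    ```python
    # Sketch of multiple-data-set calibration scoring: rate a parameter set against
    # discharge, snow cover, and glacier mass balance simultaneously. Weights and
    # the efficiency metric are assumptions, not the HBV-light implementation.
    def combined_objective(sim, obs, weights=(1/3, 1/3, 1/3)):
        def nse(s, o):
            """Nash-Sutcliffe efficiency (1 = perfect agreement)."""
            mean_o = sum(o) / len(o)
            num = sum((si - oi) ** 2 for si, oi in zip(s, o))
            den = sum((oi - mean_o) ** 2 for oi in o)
            return 1 - num / den
        scores = [nse(sim[k], obs[k]) for k in ("discharge", "snow_cover", "mass_balance")]
        return sum(w * s for w, s in zip(weights, scores))

    # Dummy placeholder series, not study data:
    sim = {"discharge": [2.0, 3.1, 4.2], "snow_cover": [0.8, 0.6, 0.4], "mass_balance": [-0.5, -0.7, -0.3]}
    obs = {"discharge": [2.1, 3.0, 4.5], "snow_cover": [0.9, 0.6, 0.3], "mass_balance": [-0.4, -0.8, -0.2]}
    print(f"combined score: {combined_objective(sim, obs):.3f}")
    ```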

  17. Preparation, spectroscopy and molecular modelling studies of the inclusion complex of cordycepin with cyclodextrins.

    PubMed

    Zhang, Jian-Qiang; Wu, Di; Jiang, Kun-Ming; Zhang, Da; Zheng, Xi; Wan, Chun-Ping; Zhu, Hong-You; Xie, Xiao-Guang; Jin, Yi; Lin, Jun

    2015-04-10

    The inclusion complexes of cordycepin with cyclodextrins (CDs) were prepared, and the resultant complexes were characterised by UV-vis, FTIR, DSC, SEM, XRD, ESI-MS and proton nuclear magnetic resonance ((1)H NMR) spectroscopy. The stoichiometry was established using a Job plot, and the inclusion mechanism was clarified using molecular dynamics simulations. Molecular modelling calculations were carried out to rationalise the experimental findings and predict the stable molecular structure of the inclusion complex. The stability of the inclusion complexes was confirmed by energetic and thermodynamic properties (ΔE, ΔH, ΔG and ΔS) and by the HOMO and LUMO orbitals. The 1:1 binding model of the complexes was further supported by the ESI-MS experiment. Our results showed that the purine group of the cordycepin molecule was deeply inserted into the cavity of the CDs.
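
    The continuous-variation (Job plot) reasoning used to assign the 1:1 stoichiometry can be illustrated numerically: at fixed total concentration, the complex concentration peaks at guest mole fraction m/(n+m), i.e. at 0.5 for 1:1 binding. The binding constant and concentration below are arbitrary illustrative values, not the cordycepin-CD data.

    ```python
    import numpy as np

    def job_signal(x, c_total=1e-3, K=2e4):
        """Equilibrium 1:1 complex concentration [HG] at guest mole fraction x.
        Solves K = [HG] / ((h0 - [HG]) * (g0 - [HG])) as a quadratic."""
        g0, h0 = x * c_total, (1 - x) * c_total
        b = h0 + g0 + 1 / K
        return (b - np.sqrt(b ** 2 - 4 * h0 * g0)) / 2

    xs = np.linspace(0.05, 0.95, 19)
    hg = job_signal(xs)
    print("Job plot maximum at x =", xs[np.argmax(hg)])   # ~0.5 -> 1:1 complex
    ```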

  18. Co-allocation model for complex equipment project risk based on gray information

    NASA Astrophysics Data System (ADS)

    Zhi-geng, Fang; Jin-yu, Sun

    2013-10-01

    Because a complex equipment project is a multi-level co-development network system in which milestones connect with each other according to the logical relationships between different levels, such a project can be decomposed into several multi-level milestones. This paper designs several connecting nodes for collaborative milestones and establishes a new co-allocation model for complex equipment project risk based on gray information. The comprehensive trial phase of a large aircraft development project is taken as an example to demonstrate the effectiveness and feasibility of the above models and algorithms, which provide new analysis methods and research ideas.

  19. 2.5D complex resistivity modeling and inversion using unstructured grids

    NASA Astrophysics Data System (ADS)

    Xu, Kaijun; Sun, Jie

    2016-04-01

    The complex resistivity of rocks and ores has long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity, and it has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from CCM parameters such as direct resistivity (ρ0), chargeability (m), time constant (τ) and frequency dependence (c). It is therefore important to obtain the complex parameters of a geologic body. Because traditional rectangular grids approximate complex structures and terrain poorly, we use an adaptive finite-element algorithm for forward modeling of frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm for its inversion, enhancing the numerical accuracy and rationality of both modeling and inversion. An adaptive finite-element method is applied to solve the 2.5D complex resistivity forward problem for a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex-resistivity electromagnetic fields. Next, a pseudo-delta function is used to distribute the electric dipole source. The electromagnetic fields are then expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by the anomalous conductivity of inhomogeneities. Finally, we calculate the electromagnetic field response of complex geoelectric structures such as anticlines, synclines and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented with the conjugate gradient algorithm, which does not require forming the sensitivity matrix explicitly but only the product of the sensitivity matrix (or its transpose) with a vector. In addition, the inversion target zones are
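
    For reference, the Cole-Cole model named above is commonly written (in the Pelton form used in induced-polarization work) as ρ(ω) = ρ0 * (1 - m * (1 - 1/(1 + (iωτ)^c))). A minimal sketch with illustrative parameter values:

    ```python
    import numpy as np

    # Cole-Cole (Pelton form) complex resistivity; parameter values are illustrative.
    def cole_cole(freq, rho0=100.0, m=0.3, tau=0.01, c=0.5):
        """rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + (1j*omega*tau)**c)))."""
        omega = 2 * np.pi * np.asarray(freq)
        return rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))

    freqs = np.logspace(-2, 4, 7)            # 0.01 Hz .. 10 kHz
    for f, r in zip(freqs, cole_cole(freqs)):
        print(f"{f:10.2f} Hz  |rho| = {abs(r):7.2f}  phase = {np.angle(r, deg=True):6.2f} deg")
    ```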

  20. Frequency modelling and solution of fluid-structure interaction in complex pipelines

    NASA Astrophysics Data System (ADS)

    Xu, Yuanzhi; Johnston, D. Nigel; Jiao, Zongxia; Plummer, Andrew R.

    2014-05-01

    Complex pipelines may have various structural supports and boundary conditions, as well as branches. To analyse the vibrational characteristics of piping systems, frequency-domain modelling and solution methods considering complex constraints are developed here. A fourteen-equation model and the Transfer Matrix Method (TMM) are employed to describe Fluid-Structure Interaction (FSI) in liquid-filled pipes. A general solution for multi-branch pipes is proposed, offering a methodology for predicting the frequency response of complex piping systems. Several branched pipe systems are built for validation purposes, showing good agreement with the calculated results.
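
    The core idea of the Transfer Matrix Method is that the frequency-domain relation between the two ends of a line is the ordered product of per-segment matrices. The sketch below uses the classical 2x2 fluid-line matrix (pressure and flow) rather than the paper's fourteen-equation FSI model; the wave speed, impedance, and segment lengths are illustrative, and sign conventions vary between references.

    ```python
    import numpy as np

    def segment_matrix(omega, length, wave_speed=1200.0, impedance=1e6):
        """2x2 transfer matrix of a lossless fluid line segment for state [P, Q]."""
        beta = omega * length / wave_speed
        return np.array([
            [np.cos(beta),                   -1j * impedance * np.sin(beta)],
            [-1j * np.sin(beta) / impedance,  np.cos(beta)],
        ])

    def chain_matrix(omega, lengths):
        """Overall transfer matrix of segments in series (ordered product)."""
        T = np.eye(2, dtype=complex)
        for L in lengths:
            T = segment_matrix(omega, L) @ T
        return T

    omega = 2 * np.pi * 50.0                      # 50 Hz excitation
    print(chain_matrix(omega, lengths=[2.0, 1.5, 3.0]))
    ```

    Branches would enter through continuity and compatibility conditions at junction nodes, which the paper's general solution handles for the full fourteen-variable state vector.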

  1. Handling Qualities of Model Reference Adaptive Controllers with Varying Complexity for Pitch-Roll Coupled Failures

    NASA Technical Reports Server (NTRS)

    Schaefer, Jacob; Hanson, Curt; Johnson, Marcus A.; Nguyen, Nhan

    2011-01-01

    Three model reference adaptive controllers (MRAC) with varying levels of complexity were evaluated on a high performance jet aircraft and compared along with a baseline nonlinear dynamic inversion controller. The handling qualities and performance of the controllers were examined during failure conditions that induce coupling between the pitch and roll axes. Results from flight tests showed that, with a roll-to-pitch input coupling failure, the handling qualities went from Level 2 with the baseline controller to Level 1 with the most complex MRAC tested. A failure scenario with the left stabilator frozen also showed improvement with the MRAC. Improvement in performance and handling qualities was generally seen as complexity was incrementally added; however, added complexity usually corresponds to increased verification and validation effort required for certification. The tradeoff between complexity and performance is thus important to a control system designer when implementing an adaptive controller on an aircraft. This paper investigates this relation through flight testing of several controllers of varying complexity.
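
    A first-order textbook MRAC illustrates the adaptation mechanism underlying such controllers, though it is far simpler than the flight control laws evaluated in the paper. All numerical values below are arbitrary illustrations.

    ```python
    # First-order MRAC sketch (Lyapunov-rule adaptation): adapt feedback and
    # feedforward gains so an uncertain plant tracks a reference model.
    a, b = 1.0, 3.0            # "unknown" plant: x' = a*x + b*u (only sign of b assumed known)
    am, bm = -4.0, 4.0         # reference model: xm' = am*xm + bm*r
    gamma, dt = 2.0, 1e-3      # adaptation gain, integration step
    x = xm = th_x = th_r = e = 0.0
    for k in range(20000):
        t = k * dt
        r = 1.0 if (t % 4.0) < 2.0 else -1.0     # square-wave reference command
        u = th_x * x + th_r * r                  # adaptive control law
        e = x - xm                               # tracking error
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)
        th_x -= dt * gamma * e * x               # adaptive laws (sign(b) = +1 here)
        th_r -= dt * gamma * e * r
    # Ideal gains would be th_x* = (am - a)/b = -5/3 and th_r* = bm/b = 4/3.
    print(f"final tracking error {e:+.4f}; gains th_x={th_x:.3f}, th_r={th_r:.3f}")
    ```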

  2. Ecological Acclimation and Hydrologic Response: Problem Complexity and Modeling Challenges

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Srinivasan, V.; Le, P. V. V.; Drewry, D.

    2012-04-01

    Elevated CO2 in the atmosphere leads to a number of acclimatory responses in different vegetation types. These may be characterized as structural, such as vegetation height or foliage density; ecophysiological, such as reduction in stomatal conductance; and biochemical, such as photosynthetic down-regulation. Furthermore, the allocation of assimilated carbon to different vegetation parts, such as leaves, roots, stem and seeds, is also altered, such that empirical allometric relations are no longer valid. The extent and nature of these acclimatory responses vary between C3 and C4 vegetation and across species. These acclimatory responses have significant impacts on both water and energy fluxes, with the possibility of large-scale hydrologic influence. Capturing the pathways of acclimatory response to provide accurate ecohydrologic predictions requires incorporating subtle relationships that are accentuated under elevated CO2. The talk will discuss the challenges of modeling these responses, as well as applications to soybean, maize and bioenergy crops such as switchgrass and miscanthus.

  3. Complex transition to cooperative behavior in a structured population model.

    PubMed

    Miranda, Luciano; de Souza, Adauto J F; Ferreira, Fernando F; Campos, Paulo R A

    2012-01-01

    Cooperation plays an important role in the evolution of species and human societies. Understanding the emergence and persistence of cooperation in those systems is a fascinating and fundamental question. Many mechanisms have been extensively studied and proposed as supporting cooperation. The current work addresses the role of migration in the maintenance of cooperation in structured populations. This problem is investigated from an evolutionary perspective through the prisoner's dilemma game paradigm. It is found that migration and structure play essential roles in the evolution of cooperative behavior. The possible outcomes of the model are extinction of the entire population, dominance of the cooperative strategy, and coexistence between cooperators and defectors. The coexistence phase is obtained in the range of large migration rates. The existence of a critical level of structuring, beyond which cooperation is always likely, is also verified. In summary, we conclude that increases in the number of demes as well as in the migration rate favor the fixation of cooperative behavior.

  5. Endophenotype Network Models: Common Core of Complex Diseases.

    PubMed

    Ghiassian, Susan Dina; Menche, Jörg; Chasman, Daniel I; Giulianini, Franco; Wang, Ruisheng; Ricchiuto, Piero; Aikawa, Masanori; Iwata, Hiroshi; Müller, Christian; Zeller, Tania; Sharma, Amitabh; Wild, Philipp; Lackner, Karl; Singh, Sasha; Ridker, Paul M; Blankenberg, Stefan; Barabási, Albert-László; Loscalzo, Joseph

    2016-01-01

    Historically, human diseases have been differentiated and categorized based on the organ system in which they primarily manifest. Recently, an alternative view is emerging that emphasizes that different diseases often have common underlying mechanisms and shared intermediate pathophenotypes, or endo(pheno)types. Within this framework, a specific disease's expression is a consequence of the interplay between the relevant endophenotypes and their local, organ-based environment. Important examples of such endophenotypes are inflammation, fibrosis, and thrombosis, and their essential roles in many developing diseases. In this study, we construct endophenotype network models and explore their relation to different diseases in general and to cardiovascular diseases in particular. We identify the local neighborhoods (modules) within the interconnected map of molecular components, i.e., the subnetworks of the human interactome that represent the inflammasome, thrombosome, and fibrosome. We find that these neighborhoods are highly overlapping and significantly enriched with disease-associated genes. In particular, they are also enriched with differentially expressed genes linked to cardiovascular disease (risk). Finally, using proteomic data, we explore how macrophage activation contributes to our understanding of inflammatory processes and responses. The results of our analysis show that inflammatory responses initiate from within the cross-talk of the three identified endophenotypic modules. PMID:27278246

  8. Climate and landscape controls on water balance model complexity over changing timescales

    NASA Astrophysics Data System (ADS)

    Atkinson, S. E.; Woods, R. A.; Sivapalan, M.

    2002-12-01

    A systematic approach is described for determining the minimum level of model complexity required to predict runoff in New Zealand catchments, with minimal calibration, at decreasing timescales. Starting with a lumped conceptual model representing the most basic hydrological processes needed to capture water balance, model complexity is systematically increased in response to demonstrated deficiencies in model predictions until acceptable accuracy is achieved. Sensitivity and error analyses are performed to determine the dominant physical controls on streamflow variability. It is found that dry catchments are sensitive to a threshold storage parameter, producing inaccurate results with little confidence, while wet catchments are relatively insensitive, producing more accurate results with greater confidence. Sensitivity to the threshold parameter is well correlated with climate and timescale, and in combination with the results of two previous studies, this allowed the postulation of a qualitative relationship between model complexity, timescale, and the climatic dryness index (DI). This relationship can provide an a priori understanding of the model complexity required to predict streamflow accurately and with confidence in small catchments under a given climate and timescale, as well as a conceptual framework for model selection. The objective of the paper is therefore not to present a perfect model for any of the catchments studied, but rather to present a systematic approach to modeling, based on making inferences from data, that can be applied across different model designs, catchments and timescales.

  9. Benzenecarboxylate surface complexation at the goethite (α-FeOOH)/water interface: II. Linking IR spectroscopic observations to mechanistic surface complexation models for phthalate, trimellitate, and pyromellitate

    NASA Astrophysics Data System (ADS)

    Boily, Jean-François; Persson, Per; Sjöberg, Staffan

    2000-10-01

    A study combining information from infrared spectroscopy and adsorption experiments was carried out to investigate phthalate, trimellitate, and pyromellitate complexes at the goethite (α-FeOOH)/water interface. Infrared spectra showed evidence for inner-sphere complexes below pH 6 and outer-sphere complexes in the pH range 3 to 9. Normalized infrared peak areas were used as a semi-quantitative tool to devise diagrams showing the molecular-level surface speciation as a function of pH. Surface complexation models that simultaneously predict these diagrams, the proton balance data, and the ligand adsorption data were developed with surface complexation theory. Surface complexation modeling was carried out with a Charge Distribution Multisite Complexation Model (CD-MUSIC), assuming goethite particles with surfaces represented by the {110} plane (90% of total particle surface area) and by the {001} plane (10% of total particle surface area). Inner-sphere complexes were described as mononuclear chelates at the {001} plane, whereas outer-sphere complexes were described as binuclear complexes with singly coordinated sites on the {110} plane. The Three-Plane Model (TPM) was used to describe surface electrostatics and to distribute the charges of the inner- and outer-sphere complexes on different planes of adsorption.

  10. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology

    USGS Publications Warehouse

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-01-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  11. Modeling the production of highly-complex molecules in star-forming regions

    NASA Astrophysics Data System (ADS)

    Garrod, R. T.

    2016-05-01

    Molecules of increasing complexity are being observed toward star-forming regions, including the recently detected iso-propyl cyanide, the first interstellar branched carbon-chain molecule. Modeling the formation of new complex organics requires new grain-surface production mechanisms, as well as gas-phase and grain-surface destruction processes. The method for constructing networks for new molecules is discussed, as well as the results of recent models of branched carbon-chain molecule chemistry. The formation of both simple and complex organics in cold regions is also discussed. New, exact kinetics models indicate that complex molecules may be formed efficiently at very low temperatures, if CO is abundant on the grain surfaces.

  12. Complexation of tetrandrine with calcium ion probed by various spectroscopic methods and molecular modeling

    NASA Astrophysics Data System (ADS)

    Stanculescu, Ioana; Mandravel, Cristina; Landy, David; Woisel, Patrice; Surpateanu, Gheorghe

    2003-07-01

    The formation of the complex between tetrandrine and the calcium ion in solution was studied using FTIR, UV-Vis, ¹H NMR, ¹³C NMR and electrospray mass spectrometry, together with molecular modelling. The calcium salts used were Ca(ClO₄)₂·4H₂O and Ca(picrate)₂, in the solvents acetonitrile (CH₃CN), deuterated acetonitrile (CD₃CN) and tetrahydrofuran (THF). The determined complex stability constant was 20277 ± 67 dm³ mol⁻¹, with a corresponding free energy ΔG⁰ = -5.820 ± 0.002 kcal mol⁻¹. Molecular simulation of complex formation with the MM3 Augmented force field integrated in CAChe provided useful data about its energy. Combining the experimental results and molecular modelling, we propose a model for the structure of the tetrandrine-Ca complex with an eight-coordinate geometry.
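
    The reported stability constant and free energy can be cross-checked through ΔG = -RT ln K; the temperature is not stated in the abstract, so it is assumed in the sketch below.

    ```python
    import math

    # Consistency check of the reported stability constant against Delta G via
    # Delta G = -R*T*ln(K); the temperature is assumed, not given in the abstract.
    R = 1.987e-3            # kcal mol^-1 K^-1
    K = 20277               # dm^3 mol^-1, reported stability constant
    for T in (295.0, 298.15):
        dG = -R * T * math.log(K)
        print(f"T = {T:6.2f} K -> Delta G = {dG:.3f} kcal/mol")
    # ~ -5.8 kcal/mol, in line with the reported -5.820 kcal/mol
    ```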

  13. A generalised enzyme kinetic model for predicting the behaviour of complex biochemical systems.

    PubMed

    Wong, Martin Kin Lok; Krycer, James Robert; Burchfield, James Geoffrey; James, David Ernest; Kuncic, Zdenka

    2015-01-01

    Quasi-steady-state enzyme kinetic models are increasingly used in systems modelling. The Michaelis-Menten model is popular due to its reduced parameter dimensionality, but its low-enzyme and irreversibility assumptions may not always be valid in the in vivo context. Whilst the total quasi-steady-state assumption (tQSSA) model eliminates the reactant stationary assumptions, its mathematical complexity is increased. Here, we propose the differential quasi-steady-state approximation (dQSSA) kinetic model, which expresses the differential equations as a linear algebraic equation. It eliminates the reactant stationary assumptions of the Michaelis-Menten model without increasing model dimensionality. The dQSSA was found to be easily adaptable for reversible enzyme kinetic systems with complex topologies and to predict behaviour consistent with mass action kinetics in silico. Additionally, the dQSSA was able to predict coenzyme inhibition in the reversible lactate dehydrogenase enzyme, which the Michaelis-Menten model failed to do. Whilst the dQSSA does not account for the physical and thermodynamic interactions of all intermediate enzyme-substrate complex states, it is proposed to be suitable for modelling complex enzyme-mediated biochemical systems. This is due to its simpler application, reduced parameter dimensionality and improved accuracy.
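
    The low-enzyme assumption the paper relaxes can be seen by simulating the full mass-action system alongside the Michaelis-Menten reduction when enzyme and substrate concentrations are comparable. Rate constants and concentrations below are illustrative; this sketch is not the dQSSA itself.

    ```python
    # Contrast full mass-action kinetics for E + S <-> ES -> E + P with the
    # Michaelis-Menten (MM) reduction when enzyme and substrate are comparable,
    # the regime where the MM low-enzyme assumption degrades.
    k1, km1, k2 = 1.0, 1.0, 0.5          # illustrative rate constants
    E0, S0, dt, T = 0.5, 1.0, 1e-3, 20.0  # comparable enzyme and substrate

    def mass_action():
        S, ES, P = S0, 0.0, 0.0
        for _ in range(int(T / dt)):
            E = E0 - ES                               # free enzyme (conservation)
            v_f, v_r, v_cat = k1 * E * S, km1 * ES, k2 * ES
            S += dt * (v_r - v_f)
            ES += dt * (v_f - v_r - v_cat)
            P += dt * v_cat
        return P

    def michaelis_menten():
        Km, S, P = (km1 + k2) / k1, S0, 0.0
        for _ in range(int(T / dt)):
            v = k2 * E0 * S / (Km + S)                # MM rate law
            S -= dt * v
            P += dt * v
        return P

    print(f"product after {T:.0f} s: mass action {mass_action():.3f}, MM {michaelis_menten():.3f}")
    ```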

  14. Uranium(VI) adsorption to ferrihydrite: Application of a surface complexation model

    USGS Publications Warehouse

    Waite, T.D.; Davis, J.A.; Payne, T.E.; Waychunas, G.A.; Xu, N.

    1994-01-01

    A study of U(VI) adsorption by ferrihydrite was conducted over a wide range of U(VI) concentrations, pH, and at two partial pressures of carbon dioxide. A two-site (strong- and weak-affinity sites, Fe(s)OH and Fe(w)OH, respectively) surface complexation model was able to describe the experimental data well over a wide range of conditions, with only one species formed with each site type: an inner-sphere, mononuclear, bidentate complex of the type (FeO₂)UO₂. The existence of such a surface species was supported by results of uranium EXAFS spectroscopy performed on two samples with U(VI) adsorption density in the upper range observed in this study (10 and 18% occupancy of total surface sites). Adsorption data in the alkaline pH range suggested the existence of a second surface species, modeled as a ternary surface complex with UO₂CO₃⁰ binding to a bidentate surface site. Previous surface complexation models for U(VI) adsorption have proposed surface species that are identical to the predominant aqueous species, e.g., multinuclear hydrolysis complexes or several U(VI)-carbonate complexes. The results demonstrate that the speciation of adsorbed U(VI) may be constrained by the coordination environment at the surface, giving rise to surface speciation for U(VI) that is significantly less complex than aqueous speciation. © 1994.

  15. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308

  17. Modeling of Wall-Bounded Complex Flows and Free Shear Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.

    1994-01-01

    Various wall-bounded flows with complex geometries and free shear flows have been studied with a newly developed realizable Reynolds stress algebraic equation model. The model development is based on invariant theory in continuum mechanics, which enables the formulation of a general constitutive relation for the Reynolds stresses. Pope was the first to introduce this kind of constitutive relation to turbulence modeling. In our study, realizability is imposed on the truncated constitutive relation to determine the coefficients so that, unlike the standard k-ε eddy viscosity model, the present model will not produce negative normal stresses in any situation of rapid distortion. Calculations based on the present model have shown encouraging success in modeling complex turbulent flows.

  18. Estimating the complexity of 3D structural models using machine learning methods

    NASA Astrophysics Data System (ADS)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resource exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index that can help define the degree of effort necessary to build a 3D model for a given degree of confidence, and identify locations where additional effort is required to meet a given acceptable level of uncertainty. In this work, the structural complexity index is estimated using machine learning methods on raw geo-data. More precisely, complexity is approximated as the degree of difficulty of predicting the distribution of geological objects from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during building is assessed using various parameters (such as the number of faults, the number of parts in a surface object, and the number of borders), the rank of geological elements contained in each model, and their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary for machine learning algorithms to reproduce the actual 3D model at a given precision.

  19. A rational model for assessing and evaluating complex interventions in health care

    PubMed Central

    May, Carl

    2006-01-01

    Background Understanding how new clinical techniques, technologies and other complex interventions become normalized in practice is important to researchers, clinicians, health service managers and policy-makers. This paper presents a model of the normalization of complex interventions. Methods Between 1995 and 2005 multiple qualitative studies were undertaken. These examined: professional-patient relationships; changing patterns of care; the development, evaluation and implementation of telemedicine and related informatics systems; and the production and utilization of evidence for practice. Data from these studies were subjected to (i) formative re-analysis, leading to sets of analytic propositions; and (ii) a summative analysis that aimed to build a robust conceptual model of the normalization of complex interventions in health care. Results A normalization process model that enables analysis of the conditions necessary to support the introduction of complex interventions is presented. The model is defined by four constructs: interactional workability; relational integration; skill-set workability; and contextual integration. This model can be used to understand the normalization potential of new techniques and technologies in healthcare settings. Conclusion The normalization process model has face validity in (i) assessing the potential for complex interventions to become routinely embedded in everyday clinical work, and (ii) evaluating the factors that promote or inhibit their success and failure in practice. PMID:16827928

  20. PRI-Modeler: extracting RNA structural elements from PDB files of protein-RNA complexes.

    PubMed

    Han, Kyungsook; Nepal, Chirag

    2007-05-01

    A complete understanding of protein and RNA structures and their interactions is important for determining the binding sites in protein-RNA complexes. Computational approaches exist for identifying secondary structural elements in proteins from atomic coordinates. However, similar methods have not been developed for RNA, due in part to the very limited structural data available so far. We have developed a set of algorithms for extracting and visualizing secondary and tertiary structures of RNA and for analyzing protein-RNA complexes. These algorithms have been implemented in a web-based program called PRI-Modeler (protein-RNA interaction modeler). Given one or more Protein Data Bank (PDB) files of protein-RNA complexes, PRI-Modeler analyzes the conformation of the RNA, calculates the hydrogen bond (H-bond) and van der Waals interactions between amino acids and nucleotides, extracts secondary and tertiary RNA structure elements, and identifies the patterns of interactions between the proteins and RNAs. This paper presents PRI-Modeler and its application to the hydrogen bond and van der Waals interactions in the most representative set of protein-RNA complexes. The analysis reveals several interesting interaction patterns at various levels. The information provided by PRI-Modeler should prove useful for determining the binding sites in protein-RNA complexes. PRI-Modeler is accessible at http://wilab.inha.ac.kr/primodeler/, and supplementary materials are available in the analysis results section at http://wilab.inha.ac.kr/primodeler/.

  1. A diagnostic evaluation model for complex research partnerships with community engagement: the partnership for Native American Cancer Prevention (NACP) model.

    PubMed

    Trotter, Robert T; Laurila, Kelly; Alberts, David; Huenneke, Laura F

    2015-02-01

    Complex community-oriented health care prevention and intervention partnerships fail or only partially succeed at alarming rates. In light of the current rapid expansion of critically needed programs targeted at health disparities in minority populations, we have designed and are testing a "logic model plus" evaluation model that combines classic logic model and query-based evaluation designs (CDC, NIH, Kellogg Foundation) with advances in community-engaged designs derived from industry-university partnership models. These approaches support the application of a "near real time" feedback system (diagnosis and intervention) based on organizational theory, social network theory, and logic model metrics directed at partnership dynamics.

  2. When the Optimal Is Not the Best: Parameter Estimation in Complex Biological Models

    PubMed Central

    Fernández Slezak, Diego; Suárez, Cecilia; Cecchi, Guillermo A.; Marshall, Guillermo; Stolovitzky, Gustavo

    2010-01-01

    Background The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under data-poor conditions may result in biologically implausible values. Results We discuss different methodological approaches to estimating parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit. Conclusions The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid force-fitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system, and point to the need for a theory that addresses this problem more generally. PMID:21049094

  3. Evaluation of 2D shallow-water model for spillway flow with a complex geometry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Although the two-dimensional (2D) shallow-water model is formulated on several assumptions, such as a hydrostatic pressure distribution and negligible vertical velocity, it has been used as a simple alternative to complex 3D models to compute water flows in which these assumptions may be ...

  4. Calibration of a complex watershed model using high resolution remotely sensed evapotranspiration retrievals

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Process-based watershed models typically require a large number of parameters to describe complex hydrologic and biogeochemical processes in highly variable environments. Most such parameters are not directly measured in the field and require calibration, in most cases through matching modeled fluxes...

  5. Complexity and Modularity of Intracellular Networks - A Systematic Approach for Modeling and Simulation

    PubMed Central

    Blinov, Michael L.; Ruebenacker, Oliver; Moraru, Ion I.

    2009-01-01

    Assembly of quantitative models of large complex networks brings about several challenges. One of them is combinatorial complexity, where relatively few signaling molecules can combine to form thousands or millions of distinct chemical species. A receptor that has several separate phosphorylation sites can exist in hundreds of different states, many of which must be accounted for individually when simulating the time course of signaling. When assembly of protein complexes is being included, the number of distinct molecular species can easily increase by a few orders of magnitude. Validation, visualization, and understanding the network can become intractable. Another challenge appears when the modeler needs to recast or grow a model. Keeping track of changes and adding new elements present a significant difficulty. We describe an approach to solve these challenges within the Virtual Cell (VCell). Using (i) automatic extraction from pathway databases of model components, and (ii) rules of interactions that serve as reaction network generators, we provide a way for semi-automatic generation of quantitative mathematical models that also facilitates the reuse of model elements. In this approach, kinetic models of large, complex networks can be assembled from separately constructed modules, either directly or via rules. To implement this approach, we have combined the strength of several related technologies: the BioPAX ontology, the BioNetGen rule-based description of molecular interactions, and the VCell modeling and simulation framework. PMID:19045831
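
    A back-of-envelope count shows why rule-based generation matters; the site counts below are illustrative.

    ```python
    # Illustration of the combinatorial complexity the paper addresses: a receptor
    # with n independent phosphorylation sites has 2**n states, and complex
    # formation multiplies the count further. Site counts are illustrative.
    n_sites = 9
    monomer_states = 2 ** n_sites                                # 512 states for one receptor
    dimer_states = monomer_states * (monomer_states + 1) // 2    # unordered pairs
    print(f"{monomer_states} monomer states, {dimer_states} distinct dimer species")
    # Rule-based descriptions (as in BioNetGen) specify interactions per site,
    # so such networks are generated automatically rather than enumerated by hand.
    ```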

  6. The Relationship between Students' Performance on Conventional Standardized Mathematics Assessments and Complex Mathematical Modeling Problems

    ERIC Educational Resources Information Center

    Kartal, Ozgul; Dunya, Beyza Aksu; Diefes-Dux, Heidi A.; Zawojewski, Judith S.

    2016-01-01

    Critical to many science, technology, engineering, and mathematics (STEM) career paths is mathematical modeling--specifically, the creation and adaptation of mathematical models to solve problems in complex settings. Conventional standardized measures of mathematics achievement are not structured to directly assess this type of mathematical…

  7. STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python.

    PubMed

    Wils, Stefan; De Schutter, Erik

    2009-01-01

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245

  8. Modeling Complex Cross-Systems Software Interfaces Using SysML

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin

    2013-01-01

    The complex flight and ground systems for NASA human space exploration are designed, built, operated and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the others. To determine whether a model-based systems engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the Systems Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).

  9. Surface complexation model for strontium sorption to amorphous silica and goethite

    PubMed Central

    Carroll, Susan A; Roberts, Sarah K; Criscenti, Louise J; O'Day, Peggy A

    2008-01-01

    Strontium sorption to amorphous silica and goethite was measured as a function of pH and dissolved strontium and carbonate concentrations at 25°C. Strontium sorption gradually increases from 0 to 100% from pH 6 to 10 for both phases and requires multiple outer-sphere surface complexes to fit the data. All data are modeled using the triple layer model and the site-occupancy standard state; unless stated otherwise all strontium complexes are mononuclear. Strontium sorption to amorphous silica in the presence and absence of dissolved carbonate can be fit with tetradentate Sr²⁺ and SrOH⁺ complexes on the β-plane and a monodentate Sr²⁺ complex on the diffuse plane to account for strontium sorption at low ionic strength. Strontium sorption to goethite in the absence of dissolved carbonate can be fit with monodentate and tetradentate SrOH⁺ complexes and a tetradentate binuclear Sr²⁺ species on the β-plane. The binuclear complex is needed to account for enhanced sorption at high strontium surface loadings. In the presence of dissolved carbonate, additional monodentate Sr²⁺ and SrOH⁺ carbonate surface complexes on the β-plane are needed to fit strontium sorption to goethite. Modeling strontium sorption as outer-sphere complexes is consistent with quantitative analysis of extended X-ray absorption fine structure (EXAFS) spectra on selected sorption samples, which show a single first shell of oxygen atoms around strontium, indicating hydrated surface complexes at the amorphous silica and goethite surfaces. Strontium surface complexation equilibrium constants determined in this study, combined with other alkaline earth surface complexation constants, are used to recalibrate a predictive model based on Born solvation and crystal-chemistry theory. The model is accurate to about 0.7 log K units. More studies are needed to determine the dependence of alkaline earth sorption on ionic strength and dissolved carbonate and sulfate concentrations for the development of a robust surface

  12. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) in which different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions, as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  13. Evaluating the Novel Methods on Species Distribution Modeling in Complex Forest

    NASA Astrophysics Data System (ADS)

    Tu, C. H.; Lo, N. J.; Chang, W. I.; Huang, K. Y.

    2012-07-01

    The prediction of species distributions has become a focus in ecology. To produce predictions more effectively and accurately, novel methods such as the support vector machine (SVM) and maximum entropy (MAXENT) have been proposed in recent years. However, the high complexity of forests such as those in Taiwan makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), which grows widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLC. We evaluated these models with two sets of independent samples from different sites and examined the effect of forest complexity by varying the background sample size (BSZ). In forest of low complexity (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they limit the predicted habitat to areas close to the samples. The MAXENT model was therefore more applicable for predicting the species' potential habitat in complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLC.
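    Since all three models are compared on Cohen's kappa, a minimal sketch of that statistic may be useful; the confusion-matrix values below are invented for illustration and are not taken from the study:

        def cohens_kappa(cm):
            # cm: square confusion matrix; cm[i][j] = cases of class i predicted as j.
            n = float(sum(sum(row) for row in cm))
            p_obs = sum(cm[i][i] for i in range(len(cm))) / n     # observed agreement
            p_exp = sum(sum(cm[i]) * sum(r[i] for r in cm)
                        for i in range(len(cm))) / n**2           # chance agreement
            return (p_obs - p_exp) / (1.0 - p_exp)                # chance-corrected

        print(cohens_kappa([[45, 5], [8, 42]]))   # ~0.74 on this invented matrix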

  14. Inclusion complexes of pyrimethamine in 2-hydroxypropyl-beta-cyclodextrin: characterization, phase solubility and molecular modelling.

    PubMed

    de Araújo, Márcia Valéria Gaspar; Vieira, Elze Kelly Barbosa; Lázaro, Gilderman Silva; de Souza Conegero, Leila; Ferreira, Odair Pastor; Almeida, Luís Eduardo; Barreto, Ledjane Silva; da Costa, Nivan Bezerra; Gimenez, Iara F

    2007-09-01

    The inclusion complexation of pyrimethamine in 2-hydroxypropyl-beta-cyclodextrin has been investigated by 2D (1)H NMR, FTIR and UV/visible spectroscopy, and also by molecular modelling methods (AM1, PM3, MM3). From the phase-solubility diagram, a linear increase in the aqueous solubility of pyrimethamine was observed in the presence of 2-hydroxypropyl-beta-cyclodextrin, evidencing the formation of a soluble inclusion complex. According to the continuous variation method (Job's plot) applied to fluorescence measurements, a 1:1 stoichiometry has been proposed for the complex. Concerning the structure of the complex, a Cl-in orientation of pyrimethamine in the 2-hydroxypropyl-beta-cyclodextrin cavity has been proposed from the theoretical calculations and confirmed by two-dimensional (1)H NMR spectroscopy (ROESY). The thermal behaviour has also been studied, providing complementary evidence of complex formation.
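    For a linear (A_L-type) phase-solubility diagram with a 1:1 complex, the apparent stability constant is conventionally obtained from the slope of the diagram and the intrinsic solubility S₀ of the guest via the Higuchi–Connors relation; the relation itself is standard, while the numerical slope and S₀ values belong to the study:

        \[
        K_{1:1} \;=\; \frac{\text{slope}}{S_{0}\,(1 - \text{slope})}
        \]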

  15. Introduction to a special section on ecohydrology of semiarid environments: Confronting mathematical models with ecosystem complexity

    NASA Astrophysics Data System (ADS)

    Svoray, Tal; Assouline, Shmuel; Katul, Gabriel

    2015-11-01

    The current literature provides a large number of publications on ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands, and that the models represent the actual processes with acceptable certainty. However, a question that arises is to what extent these mathematical models remain valid when confronted with observed ecosystem complexity. This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to greater understanding of ecosystem complexity through characterization of the space-time dynamics of biomass and water storage, as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when the particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.

  16. Integrated optimal allocation model for complex adaptive system of water resources management (I): Methodologies

    NASA Astrophysics Data System (ADS)

    Zhou, Yanlai; Guo, Shenglian; Xu, Chong-Yu; Liu, Dedi; Chen, Lu; Ye, Yushi

    2015-12-01

    Due to the adaptive, dynamic and multi-objective characteristics of complex water resources systems, it is a considerable challenge to manage water resources in an efficient, equitable and sustainable way. An integrated optimal allocation model is proposed for a complex adaptive system of water resources management. The model consists of three modules: (1) an agent-based module for revealing the evolution mechanism of the complex adaptive system using agent-based, system dynamics and non-dominated sorting genetic algorithm II (NSGA-II) methods; (2) an optimization module for deriving the decision set of water resources allocation using a multi-objective genetic algorithm; and (3) a multi-objective evaluation module for evaluating the efficiency of the optimization module and selecting the optimal water resources allocation scheme using the projection pursuit method. This study provides a theoretical framework for adaptive allocation, dynamic allocation and multi-objective optimization for a complex adaptive system of water resources management.
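    The non-dominated sorting at the heart of NSGA-II-style methods reduces to a Pareto dominance test over the objective vectors of candidate allocation schemes. The sketch below is a naive O(n²) stand-in with invented objective values (water shortage, cost), not the model's actual implementation:

        def dominates(a, b):
            # True if objective vector a Pareto-dominates b (minimization).
            return all(x <= y for x, y in zip(a, b)) and \
                   any(x < y for x, y in zip(a, b))

        def non_dominated(solutions):
            # First Pareto front: the non-dominated allocation schemes.
            return [s for s in solutions
                    if not any(dominates(o, s) for o in solutions if o is not s)]

        # Toy objectives (water shortage, cost) for candidate allocation schemes.
        schemes = [(3.0, 10.0), (2.0, 12.0), (4.0, 9.0), (3.5, 11.0)]
        print(non_dominated(schemes))   # (3.5, 11.0) is dominated by (3.0, 10.0)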

  17. Process evaluation for complex interventions in primary care: understanding trials using the normalization process model

    PubMed Central

    May, Carl R; Mair, Frances S; Dowrick, Christopher F; Finch, Tracy L

    2007-01-01

    Background The Normalization Process Model is a conceptual tool intended to assist in understanding the factors that affect implementation processes in clinical trials and other evaluations of complex interventions. It focuses on the ways that the implementation of complex interventions is shaped by problems of workability and integration. Method In this paper the model is applied to two different complex trials: (i) the delivery of problem-solving therapies for psychosocial distress, and (ii) the delivery of nurse-led clinics for heart failure treatment in primary care. Results Application of the model shows how process evaluations need to focus on more than the immediate contexts in which trial outcomes are generated. Problems relating to intervention workability and integration also need to be understood. The model may be used effectively to explain the implementation process in trials of complex interventions. Conclusion The model invites evaluators to attend equally to how a complex intervention interacts with existing patterns of service organization, professional practice, and professional-patient interaction. The justification for this may be found in the abundance of reports of clinical effectiveness for interventions that have little hope of being implemented in real healthcare settings. PMID:17650326

  18. A Comparison of Geographic Information Systems, Complex Networks, and Other Models for Analyzing Transportation Network Topologies

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia (Technical Monitor); Kuby, Michael; Tierney, Sean; Roberts, Tyler; Upchurch, Christopher

    2005-01-01

    This report reviews six classes of models used for studying transportation network topologies. The report is motivated by two main questions. First, what can the "new science" of complex networks (scale-free, small-world networks) contribute to our understanding of transport network structure, compared to more traditional methods? Second, how can geographic information systems (GIS) contribute to studying transport networks? The report defines terms that can be used to classify different kinds of models by their function, composition, mechanism, spatial and temporal dimensions, certainty, linearity, and resolution. Six broad classes of models for analyzing transport network topologies are then explored: GIS; static graph theory; complex networks; mathematical programming; simulation; and agent-based modeling. Each class of models is defined and classified according to the attributes introduced earlier. The report identifies some typical research questions about network structure that have been addressed by each class of model in the literature.
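    As an illustration of the "complex networks" class of model, the snippet below computes two standard small-world diagnostics, average clustering and average shortest path length, for a toy transport graph; the network and the library choice are illustrative, not drawn from the report:

        import networkx as nx

        # Toy transport network: nodes are terminals, edges are direct links.
        G = nx.Graph()
        G.add_edges_from([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (2, 4), (4, 5)])

        # High clustering with short path lengths is the small-world signature;
        # a heavy-tailed degree histogram would suggest scale-free structure.
        print("average clustering   :", nx.average_clustering(G))
        print("avg shortest path len:", nx.average_shortest_path_length(G))
        print("degree histogram     :", nx.degree_histogram(G))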

  19. A Tightly Coupled Particle-Fluid Model for DNA-Laden Flows in Complex Microscale Geometries

    SciTech Connect

    Trebotich, D; Miller, G H; Colella, P; Graves, D T; Martin, D F; Schwartz, P O

    2004-11-18

    We present a stable and convergent method for the computation of flows of DNA-laden fluids in microchannels with complex geometry. The numerical strategy combines a ball-rod model representation for polymers tightly coupled with a projection method for incompressible viscous flow. We use Cartesian grid embedded boundary methods to discretize the fluid equations in the presence of complex domain boundaries. A sample calculation is presented showing flow through a packed array microchannel in 2D.
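    The projection method referred to above follows the classical Chorin-type splitting: advance an intermediate velocity ignoring the incompressibility constraint, then project it onto the space of divergence-free fields. Schematically (the paper's actual discretization adds the polymer forcing and Cartesian grid embedded boundaries):

        \[
        \mathbf{u}^{*} = \mathbf{u}^{n} + \Delta t\left[-(\mathbf{u}^{n}\!\cdot\!\nabla)\mathbf{u}^{n} + \nu\nabla^{2}\mathbf{u}^{n} + \mathbf{f}^{n}\right],
        \qquad
        \nabla^{2}\phi = \frac{\nabla\!\cdot\!\mathbf{u}^{*}}{\Delta t},
        \qquad
        \mathbf{u}^{n+1} = \mathbf{u}^{*} - \Delta t\,\nabla\phi,
        \]

    so that the corrected velocity satisfies the divergence-free condition discretely.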

  20. Chayanov revisited: A model for the economics of complex kin units

    PubMed Central

    Hammel, E. A.

    2005-01-01

    Chayanov's model of the peasant economy is based on autarkic nuclear family households. Expansion to the more complex households and kin groups common in peasant societies shows that the sharp changes Chayanov observed in the consumer/producer ratio over the domestic cycle are smoothed by the intergenerational structure of complex households and extended kin groups. This amelioration may be retarded by competition between constituent units. Understanding the dynamics of the developmental cycle and micropolitics of domestic groups is a useful correction to Chayanov's widely used formulation, especially in developing countries where complex kin structures are common. PMID:15867158
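    A toy calculation illustrates the smoothing effect described above. The consumer and producer weights here are hypothetical (adults 1.0 consumer/1.0 producer, children 0.7 consumer/0 producer) and are not taken from Chayanov or from the paper:

        def cp_ratio(units):
            # units: list of (adults, children) tuples forming one household.
            consumers = sum(a + 0.7 * c for a, c in units)
            producers = sum(a for a, c in units)
            return consumers / producers

        print(cp_ratio([(2, 3)]))           # ~2.05: nuclear unit at peak burden
        print(cp_ratio([(4, 0)]))           # 1.00: unit whose children have matured
        print(cp_ratio([(2, 3), (4, 0)]))   # ~1.35: joint household smooths the cycle

    Pooling units at offset stages of the domestic cycle lowers the peak consumer/producer ratio, which is the amelioration the abstract attributes to complex kin structures.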