Sample records for model biomembranes complex

  1. Absorption of nitro-polycyclic aromatic hydrocarbons by biomembrane models: effect of the medium lipophilicity.

    PubMed

    Castelli, Francesco; Micieli, Dorotea; Ottimo, Sara; Minniti, Zelica; Sarpietro, Maria Grazia; Librando, Vito

    2008-10-01

    To demonstrate the relationship between the structure of nitro-polycyclic aromatic hydrocarbons and their effect on biomembranes, we investigated the influence of three structurally different nitro-polycyclic aromatic hydrocarbons, 2-nitrofluorene, 2,7-dinitrofluorene and 3-nitrofluoranthene, on the thermotropic behavior of dimyristoylphosphatidylcholine multilamellar vesicles, used as biomembrane models, by means of differential scanning calorimetry. The results indicate that the studied nitro-polycyclic aromatic hydrocarbons affected the thermotropic behavior of the multilamellar vesicles to various extents, modifying the pretransition and main phase transition peaks and shifting them to lower temperatures. The effect of aqueous and lipophilic media on the absorption of these compounds by the biomembrane models was also investigated, revealing that absorption is hindered by an aqueous medium but strongly favored by a lipophilic medium.

  2. Environmental conditions that influence the ability of humic acids to induce permeability in model biomembranes.

    PubMed

    Ojwang', Loice M; Cook, Robert L

    2013-08-06

    The interaction of humic acids (HAs) with a 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC) large unilamellar vesicle (LUV) model biomembrane system was studied by fluorescence spectroscopy. HAs from aquatic and terrestrial (including coal) sources were studied. The effects of HA concentration and temperature over environmentally relevant ranges of 0 to 20 mg C/L and 10 to 30 °C, respectively, were investigated. The dosage studies revealed that the aquatic Suwannee River humic acid (SRHA) causes increased biomembrane perturbation (percent leakage of the fluorescent dye sulforhodamine B) over the entire studied concentration range. The two terrestrial HAs, Leonardite humic acid (LAHA) and Florida peat humic acid (FPHA), show a decrease or a plateau effect at concentrations above 5 mg C/L, attributable to competition within the HA mixture and/or the formation of "partial aggregates". The temperature studies revealed that biomembrane perturbation increases with decreasing temperature for all three HAs. Kinetic studies showed that the membrane perturbation process is complex, with both fast and slow absorption (sorption into the bilayer) components, and that the slow component could be fitted by first-order kinetics. A mechanism based on "lattice errors" within the POPC LUVs is put forward to explain the fast and slow components. A rationale behind the concentration and temperature findings is provided, and the environmental implications are discussed.
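    The slow leakage component above is described as following first-order kinetics; the sketch below shows how such dye-leakage data could be fitted with a fast plus a slow first-order term. It is a minimal illustration with synthetic numbers, not the authors' data or code.

    ```python
    # Hypothetical sketch: fit dye-leakage kinetics with a fast plus a slow
    # first-order (exponential) component. All values are illustrative.
    import numpy as np
    from scipy.optimize import curve_fit

    def leakage(t, A_fast, k_fast, A_slow, k_slow):
        """Percent dye leakage as the sum of two first-order sorption components."""
        return A_fast * (1 - np.exp(-k_fast * t)) + A_slow * (1 - np.exp(-k_slow * t))

    t = np.linspace(0, 120, 60)                          # minutes (illustrative)
    obs = leakage(t, 30, 0.8, 25, 0.02)                  # synthetic "observed" leakage, %
    obs += np.random.default_rng(0).normal(0, 0.5, t.size)

    popt, _ = curve_fit(leakage, t, obs, p0=[20, 0.5, 20, 0.01])
    print("fast: A=%.1f%% k=%.2f/min; slow: A=%.1f%% k=%.3f/min" % tuple(popt))
    ```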

  3. Flexoelectricity in PZT Nanoribbons and Biomembranes

    DTIC Science & Technology

    2015-01-09

    The objective of this grant was to study flexoelectric phenomena in solids and in biomembranes ... producing PZT nanoribbons for energy harvesters.

  4. Parametric FEM for geometric biomembranes

    NASA Astrophysics Data System (ADS)

    Bonito, Andrea; Nochetto, Ricardo H.; Sebastian Pauletti, M.

    2010-05-01

    We consider geometric biomembranes governed by an L2-gradient flow for bending energy subject to area and volume constraints (Helfrich model). We give a concise derivation of a novel vector formulation, based on shape differential calculus, and corresponding discretization via parametric FEM using quadratic isoparametric elements and a semi-implicit Euler method. We document the performance of the new parametric FEM with a number of simulations leading to dumbbell, red blood cell and toroidal equilibrium shapes while exhibiting large deformations.
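    For reference, the Helfrich-type energy and constrained L2-gradient flow referred to above can be written in generic form (sign and curvature conventions vary across the literature; this is a sketch, not the paper's exact formulation):

    ```latex
    % Generic Helfrich bending energy with area and volume constraints
    E(\Gamma) = \frac{\kappa}{2}\int_{\Gamma} H^{2}\,\mathrm{d}A ,
    \qquad |\Gamma| = A_0 , \qquad |\Omega(\Gamma)| = V_0 ,

    % L^2-gradient flow with Lagrange multipliers \lambda_A, \lambda_V
    \bigl\langle \partial_t X , \varphi \bigr\rangle_{L^2(\Gamma)}
      = -\,\delta E(\Gamma)[\varphi]
        + \lambda_A\,\delta A(\Gamma)[\varphi]
        + \lambda_V\,\delta V(\Gamma)[\varphi] .
    ```

    Here H is the mean curvature, κ the bending rigidity, and λ_A, λ_V enforce the area and volume constraints; a semi-implicit Euler step advances the parametrization X with the geometric quantities frozen at the previous time step.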

  5. Bicontinuous microemulsions as a biomembrane mimetic system for melittin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes, Douglas G.; Ye, Ran; Dunlap, Rachel N.

    Antimicrobial peptides effectively kill antibiotic-resistant bacteria by forming pores in prokaryotes' biomembranes via penetration into the biomembranes' interior. Bicontinuous microemulsions, consisting of interdispersed oil and water nanodomains separated by flexible surfactant monolayers, are potentially valuable for hosting membrane-associated peptides and proteins due to their thermodynamic stability, optical transparency, low viscosity, and high interfacial area. Here, we show that bicontinuous microemulsions formed by negatively charged surfactants are a robust biomembrane mimetic system for the antimicrobial peptide melittin. When encapsulated in bicontinuous microemulsions formed using three-phase (Winsor-III) systems, melittin's helicity increases greatly due to penetration into the surfactant monolayers, mimicking its behavior in biomembranes. However, the threshold melittin concentration required to achieve these trends is lower for the microemulsions. The extent of penetration was decreased when the interfacial fluidity of the microemulsions was increased. In conclusion, these results suggest the utility of bicontinuous microemulsions as isolation, purification, delivery, and host systems for antimicrobial peptides.

  6. Bicontinuous microemulsions as a biomembrane mimetic system for melittin

    DOE PAGES

    Hayes, Douglas G.; Ye, Ran; Dunlap, Rachel N.; ...

    2017-11-12

    Antimicrobial peptides effectively kill antibiotic-resistant bacteria by forming pores in prokaryotes' biomembranes via penetration into the biomembranes' interior. Bicontinuous microemulsions, consisting of interdispersed oil and water nanodomains separated by flexible surfactant monolayers, are potentially valuable for hosting membrane-associated peptides and proteins due to their thermodynamic stability, optical transparency, low viscosity, and high interfacial area. Here, we show that bicontinuous microemulsions formed by negatively charged surfactants are a robust biomembrane mimetic system for the antimicrobial peptide melittin. When encapsulated in bicontinuous microemulsions formed using three-phase (Winsor-III) systems, melittin's helicity increases greatly due to penetration into the surfactant monolayers, mimicking its behavior in biomembranes. However, the threshold melittin concentration required to achieve these trends is lower for the microemulsions. The extent of penetration was decreased when the interfacial fluidity of the microemulsions was increased. In conclusion, these results suggest the utility of bicontinuous microemulsions as isolation, purification, delivery, and host systems for antimicrobial peptides.

  7. Engineering and validation of a novel lipid thin film for biomembrane modeling in lipophilicity determination of drugs and xenobiotics

    PubMed Central

    Idowu, Sunday Olakunle; Adeyemo, Morenikeji Ambali; Ogbonna, Udochi Ihechiluru

    2009-01-01

    Background Determination of lipophilicity as a tool for predicting pharmacokinetic molecular behavior is limited by the predictive power of available experimental models of the biomembrane. There is current interest, therefore, in models that accurately simulate biomembrane structure and function. A novel bio-device, a lipid thin film, was engineered as an alternative to the previous use of hydrocarbon thin films in biomembrane modeling. Results The retention behavior of four structurally diverse model compounds, 4-amino-3,5-dinitrobenzoic acid (ADBA), naproxen (NPX), nabumetone (NBT) and halofantrine (HF), representing four broad classes of varying molecular polarity and aqueous solubility behavior, was investigated on the lipid film, liquid paraffin, and octadecylsilane layers. Computational, thermodynamic and image analyses confirm the peculiar amphiphilic configuration of the lipid film. The effects of solute type, layer type and variable interactions on retention behavior were delineated by two-way analysis of variance (ANOVA) and quantitative structure-property relationships (QSPR). Validation of the lipid film was implemented by statistical correlation of a unique chromatographic metric with log P (octanol/water) and several calculated molecular descriptors of bulk and solubility properties. Conclusion The lipid film constitutes a biomimetic artificial biological interface capable of both hydrophobic and specific electrostatic interactions. It captures the hydrophilic-lipophilic balance (HLB) in the determination of the lipophilicity of molecules, unlike the pure hydrocarbon films of the prior art. The potential and performance of the bio-device hold promise for its utility as a predictive analytical tool for early-stage drug discovery. PMID:19735551
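    A minimal sketch of the validation step described above: correlating a chromatographic lipophilicity metric with log P (octanol/water) by linear regression. The compound abbreviations come from the abstract; all numeric values are placeholders, not measured data.

    ```python
    # Hedged sketch: linear correlation of a hypothetical retention-derived metric
    # with log P for the four model compounds named in the abstract.
    from scipy.stats import linregress

    compounds = ["ADBA", "NPX", "NBT", "HF"]
    log_p     = [1.1, 3.2, 3.1, 8.5]     # illustrative log P values (placeholders)
    rm_metric = [0.2, 0.9, 0.8, 2.4]     # hypothetical chromatographic metric

    fit = linregress(log_p, rm_metric)
    print(f"slope={fit.slope:.2f}, r={fit.rvalue:.2f}, p={fit.pvalue:.3g}")
    ```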

  8. Development of a Nonionic Azobenzene Amphiphile for Remote Photocontrol of a Model Biomembrane.

    PubMed

    Benedini, Luciano A; Sequeira, M Alejandra; Fanani, Maria Laura; Maggio, Bruno; Dodero, Verónica I

    2016-05-05

    We report the synthesis and characterization of a simple nonionic azoamphiphile, C12OazoE3OH, which behaves as an optically controlled molecule both alone and in a biomembrane environment. First, Langmuir monolayer and Brewster angle microscopy (BAM) experiments showed that pure C12OazoE3OH enriched in the (E) isomer was able to form a solidlike mesophase even at low surface pressure, associated with supramolecular organization of the azobenzene derivative at the interface. By contrast, pure C12OazoE3OH enriched in the (Z) isomer formed a less solidlike monolayer due to the bent geometry around the azobenzene moiety. Second, C12OazoE3OH mixes well in a biological membrane model, Lipoid S75 (up to 20 mol%), and photoisomerization among the lipids proceeded smoothly depending on light conditions. It is proposed that the cross-sectional area of the hydroxyl triethyleneglycol head of C12OazoE3OH inhibits H-aggregation of the azobenzenes in the model membrane; thus, the conformational change of the tails upon photoisomerization is transferred efficiently to the lipid membrane. We showed that the lipid membrane effectively senses the geometrical change of the azobenzene, photomodulating properties such as the compressibility modulus, transition temperature, and morphology. In addition, photomodulation proceeds with a color change from yellow to orange, providing the possibility of monitoring the system externally. Finally, Gibbs monolayers showed that C12OazoE3OH is able to penetrate a highly packed biomembrane model; thus, C12OazoE3OH might be used as a photoswitchable molecular probe in real systems.

  9. On the ability of PAMAM dendrimers and dendrimer/DNA aggregates to penetrate POPC model biomembranes.

    PubMed

    Ainalem, Marie-Louise; Campbell, Richard A; Khalid, Syma; Gillams, Richard J; Rennie, Adrian R; Nylander, Tommy

    2010-06-03

    Poly(amido amine) (PAMAM) dendrimers have previously been shown, as cationic condensing agents of DNA, to have high potential for nonviral gene delivery. This study addresses two key issues for gene delivery: the interaction of the biomembrane with (i) the condensing agent (the cationic PAMAM dendrimer) and (ii) the corresponding dendrimer/DNA aggregate. Using in situ null ellipsometry and neutron reflection, parallel experiments were carried out involving dendrimers of generations 2 (G2), 4 (G4), and 6 (G6). The study demonstrates that free dendrimers of all three generations were able to traverse supported palmitoyloleoylphosphatidylcholine (POPC) bilayers deposited on silica surfaces. The model biomembranes were elevated from the solid surfaces upon dendrimer penetration, which offers a promising new way to generate more realistic model biomembranes where the contact with the supporting surface is reduced and where aqueous cavities are present beneath the bilayer. The largest dendrimer (G6) induced partial bilayer destruction directly upon penetration, whereas the smaller dendrimers (G2 and G4) leave the bilayer intact, so we propose that lower generation dendrimers have greater potential as transfection mediators. In addition to the experimental observations, coarse-grained simulations on the interaction between generation 3 (G3) dendrimers and POPC bilayers were performed in the absence and presence of a bilayer-supporting negatively charged surface that emulates the support. The simulations demonstrate that G3 is transported across free-standing POPC bilayers by direct penetration and not by endocytosis. The penetrability was, however, reduced in the presence of a surface, indicating that the membrane transport observed experimentally was not driven solely by the surface. The experimental reflection techniques were also applied to dendrimer/DNA aggregates of charge ratio = 0.5, and while G2/DNA and G4/DNA aggregates interact with POPC bilayers, G6/DNA

  10. The decreasing of corn root biomembrane penetration for acetochlor with vermicompost amendment

    NASA Astrophysics Data System (ADS)

    Sytnyk, Svitlana; Wiche, Oliver

    2016-04-01

    One of the topical environmental security issues is the management and control of the use and disposal of anthropogenic (artificially synthesized) chemical agents. The development of protection systems against the toxic effects of herbicides should be based on studies of biological indication mechanisms that identify stressor effects in organisms. Lipid degradation is a non-specific reaction to exogenous chemical agents, so it is important to study the responses of lipid components depending on the stressor type. We studied the physiological and biochemical characteristics of lipid metabolism under the action of herbicides of the chloroacetamide group. Corn at different stages of ontogenesis was used as the test object in model laboratory and microfield experiments. Cattle manure treated with the earthworm Eisenia fetida was used as compost fertilizer added to the chernozem (black soil)-corn system. Several effects of acetochlor were found: a decrease in the content of sterols, phospholipids, phosphatidylcholines and phosphatidylethanolamines; an increased pool of available fatty acids and phosphatidic acids associated with intensified hydrolysis; stimulation of lipase activity at low stressor concentrations; inhibition of lipase activity at high stressor levels; a decrease in polyenoic free fatty acids, indicating biomembrane degradation; accumulation of phospholipid degradation products (phosphatidic acids); a decrease in the concentrations of high-molecular-weight compounds (phosphatidylcholine and phosphatidylinositol); and a change in the ratio of unsaturated to saturated free fatty acids in the biomembrane structure. It was established that incorporation of vermicompost at a dose of 0.4 kg/m2 into black soil led to restoration of the corn root biomembrane, and a decrease in root biomembrane penetration by acetochlor was observed in the vermicompost trial. A second antidote effect of the compost substances is the activation of soil microorganisms.

  11. In silico study of liposome transport across biomembranes

    NASA Astrophysics Data System (ADS)

    Glukhova, O. E.; Zyktin, A. A.; Slepchenkov, M. M.

    2018-02-01

    At present, liposomes are widely used as drug carriers in different areas of clinical medicine. One of these is transport across the blood-brain barrier (BBB) into the brain. This work is devoted to computational modeling of liposome transport across a biomembrane. For this, we applied the MARTINI coarse-grained model. The liposome model is constructed from lipid (DPPC) and cholesterol (CHOL) molecules in a percentage ratio of 60/40. The diameter of the liposome is 28 nm. The equilibrium configuration of the liposome is achieved by minimizing its total energy. A series of numerical experiments was conducted in order to study the transport of the drug contained in the liposome across the cell membrane. All computations were carried out using the software packages GROMACS and Kvazar at a temperature of 305-310 K. All processes were simulated for 10-20 ns. The speed of the liposome ranged from 0.89 to 1.07 m/s; this range corresponds to the rate of human blood flow. Various angles of incidence of the liposome on the membrane surface were also considered. Since the contact of the liposome with the membrane can be characterized as rolling in most cases, angles in the interval from 0 to 20 degrees were considered. Based on the simulation results, we determined the energetically optimal pathways for liposome penetration across the biomembrane.
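    As a rough illustration of how such a coarse-grained trajectory could be post-processed, the sketch below tracks the separation between the liposome and the membrane along the bilayer normal with MDAnalysis. The file names, residue-ID ranges, and selections are assumptions for illustration, not the authors' setup.

    ```python
    # Illustrative post-processing sketch (not the authors' code): monitor the
    # z-distance between the liposome and planar-membrane centers of mass in a
    # MARTINI coarse-grained GROMACS trajectory, as a proxy for penetration depth.
    import MDAnalysis as mda

    u = mda.Universe("system.tpr", "traj.xtc")      # hypothetical GROMACS outputs

    liposome = u.select_atoms("resid 1:6000")       # hypothetical resid range: vesicle
    membrane = u.select_atoms("resid 6001:12000")   # hypothetical resid range: bilayer

    for ts in u.trajectory:
        dz = liposome.center_of_mass()[2] - membrane.center_of_mass()[2]
        print(f"{ts.time:8.1f} ps   dz = {dz / 10:.2f} nm")  # MDAnalysis uses Angstrom
    ```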

  12. Biomembrane disruption by silica-core nanoparticles: effect of surface functional group measured using a tethered bilayer lipid membrane

    PubMed Central

    Liu, Ying; Zhang, Zhen; Zhang, Quanxuan; Baker, Gregory L.; Worden, R. Mark

    2013-01-01

    Engineered nanomaterials (ENM) have desirable properties that make them well suited for many commercial applications. However, a limited understanding of how ENM’s properties influence their molecular interactions with biomembranes hampers efforts to design ENM that are both safe and effective. This paper describes the use of a tethered bilayer lipid membrane (tBLM) to characterize biomembrane disruption by functionalized silica-core nanoparticles. Electrochemical impedance spectroscopy was used to measure the time trajectory of tBLM resistance following nanoparticle exposure. Statistical analysis of parameters from an exponential resistance decay model was then used to quantify and analyze differences between the impedance profiles of nanoparticles that were unfunctionalized, amine-functionalized, or carboxyl-functionalized. All of the nanoparticles triggered a decrease in membrane resistance, indicating nanoparticle-induced disruption of the tBLM. Hierarchical clustering allowed the potency of nanoparticles for reducing tBLM resistance to be ranked in the order amine > carboxyl ~ bare silica. Dynamic light scattering analysis revealed that tBLM exposure triggered minor coalescence for bare and amine-functionalized silica nanoparticles but not for carboxyl-functionalized silica nanoparticles. These results indicate that the tBLM method can reproducibly characterize ENM-induced biomembrane disruption and can distinguish the BLM-disruption patterns of nanoparticles that are identical except for their surface functional groups. The method provides insight into mechanisms of molecular interaction involving biomembranes and is suitable for miniaturization and automation for high-throughput applications to help assess the health risk of nanomaterial exposure or identify ENM having a desired mode of interaction with biomembranes. PMID:24060565
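    A minimal sketch of the exponential resistance-decay analysis mentioned above, fitted to synthetic data. The single-exponential form is the generic model implied by the abstract; all parameter names and values are illustrative.

    ```python
    # Hedged sketch: fit tBLM resistance decay after nanoparticle exposure with a
    # single-exponential model R(t) = R_inf + dR * exp(-t / tau). Synthetic data.
    import numpy as np
    from scipy.optimize import curve_fit

    def r_decay(t, R_inf, dR, tau):
        """tBLM resistance vs. time following nanoparticle exposure."""
        return R_inf + dR * np.exp(-t / tau)

    t = np.linspace(0, 60, 40)                                       # minutes
    R = r_decay(t, 0.2, 1.8, 12)                                     # MOhm*cm^2
    R += np.random.default_rng(1).normal(0, 0.02, t.size)            # synthetic noise

    popt, _ = curve_fit(r_decay, t, R, p0=[0.5, 1.0, 10])
    print("R_inf=%.2f MOhm*cm^2, dR=%.2f, tau=%.1f min" % tuple(popt))
    ```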

  13. Room-Temperature Ionic Liquids and Biomembranes: Setting the Stage for Applications in Pharmacology, Biomedicine, and Bionanotechnology.

    PubMed

    Benedetto, Antonio; Ballone, Pietro

    2018-03-21

    Empirical evidence and conceptual elaboration reveal and rationalize the remarkable affinity of organic ionic liquids for biomembranes. Cations of the so-called room-temperature ionic liquids (RTILs), in particular, are readily absorbed into the lipid fraction of biomembranes, causing a variety of observable biological effects, including generic cytotoxicity, broad antibacterial potential, and anticancer activity. Chemical physics analyses of model systems made of phospholipid bilayers, RTIL ions, and water confirm and partially explain this evidence, quantifying the mild destabilizing effect of RTILs on the structural, dynamic, and thermodynamic properties of lipids in biomembranes. Our Feature Article presents a brief introduction to these systems and to their roles in biophysics and biotechnology, summarizing recent experimental and computational results on their properties. More importantly, it highlights the many developments in pharmacology, biomedicine, and bionanotechnology expected from the current research effort on this topic. To anticipate future developments, we speculate on (i) potential applications of (magnetic) RTILs to affect and control the rheology of cells and biological tissues, of great relevance for diagnostics, and (ii) the use of RTILs to improve the durability, reliability, and output of biomimetic photovoltaic devices.

  14. Exploring Molecular-Biomembrane Interactions with Surface Plasmon Resonance and Dual Polarization Interferometry Technology: Expanding the Spotlight onto Biomembrane Structure.

    PubMed

    Lee, Tzong-Hsien; Hirst, Daniel J; Kulkarni, Ketav; Del Borgo, Mark P; Aguilar, Marie-Isabel

    2018-06-13

    The molecular analysis of biomolecular-membrane interactions is central to understanding most cellular systems but has emerged as a complex technical challenge given the complexities of membrane structure and composition across all living cells. We present a review of the application of surface plasmon resonance (SPR) and dual polarization interferometry (DPI) based biosensors to the study of biomembrane-based systems using planar monolayers, bilayers, or liposomes. We first describe the optical principles and instrumentation of surface plasmon resonance, including both linear and extraordinary transmission modes, and of dual polarization interferometry. We then describe the wide range of model membrane systems that have been developed for deposition on the chip surfaces, including planar, polymer-cushioned, and tethered bilayers, and liposomes. This is followed by a description of the different chemical immobilization or physisorption techniques. The application of this broad range of engineered membrane surfaces to biomolecular-membrane interactions is then overviewed, along with how the information obtained using these techniques enhances our molecular understanding of membrane-mediated peptide and protein function. We first discuss experiments where SPR alone has been used to characterize membrane binding and describe how these studies yielded novel insight into the molecular events associated with membrane interactions and provided a significant impetus to more recent studies that focus on coincident membrane structure changes during binding of peptides and proteins. We then discuss the emerging limitations of not monitoring the effects on membrane structure and how SPR data can be combined with DPI to provide significant new information on how a membrane responds to the binding of peptides and proteins.

  15. Memristive Ion Channel-Doped Biomembranes as Synaptic Mimics.

    PubMed

    Najem, Joseph S; Taylor, Graham J; Weiss, Ryan J; Hasan, Md Sakib; Rose, Garrett; Schuman, Catherine D; Belianinov, Alex; Collier, C Patrick; Sarles, Stephen A

    2018-05-22

    Solid-state neuromorphic systems based on transistors or memristors have yet to achieve the interconnectivity, performance, and energy efficiency of the brain due to excessive noise, undesirable material properties, and nonbiological switching mechanisms. Here we demonstrate that an alamethicin-doped, synthetic biomembrane exhibits memristive behavior, emulates key synaptic functions including paired-pulse facilitation and depression, and enables learning and computing. Unlike state-of-the-art devices, our two-terminal, biomolecular memristor features similar structure (biomembrane), switching mechanism (ion channels), and ionic transport modality as biological synapses while operating at considerably lower power. The reversible and volatile voltage-driven insertion of alamethicin peptides into an insulating lipid bilayer creates conductive pathways that exhibit pinched current-voltage hysteresis at potentials above their insertion threshold. Moreover, the synapse-like dynamic properties of the biomolecular memristor allow for simplified learning circuit implementations. Low-power memristive devices based on stimuli-responsive biomolecules represent a major advance toward implementation of full synaptic functionality in neuromorphic hardware.
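    As a rough illustration of the volatile, threshold-gated switching described above, the sketch below integrates a generic memristor model in which conductance relaxes toward a voltage-dependent steady state that grows steeply above an insertion threshold. The functional forms and constants are assumptions for illustration, not the device model from the paper.

    ```python
    # Generic volatile-memristor sketch: conductance G relaxes toward G_ss(V),
    # which is negligible below a threshold and grows exponentially above it.
    import numpy as np

    def g_ss(v, g0=1e-9, v_th=0.1, v_e=0.015):
        """Steady-state conductance (S): exponential above the insertion threshold."""
        return g0 * np.exp(np.maximum(v - v_th, 0.0) / v_e)

    def simulate(v_of_t, dt=1e-4, tau=5e-3, g_init=1e-9):
        g = g_init
        for v in v_of_t:
            g += dt * (g_ss(v) - g) / tau    # first-order relaxation => volatility
            yield g * v                      # current I = G * V

    t = np.arange(0, 0.2, 1e-4)              # s
    v = 0.15 * np.sin(2 * np.pi * 10 * t)    # 10 Hz sinusoidal drive (illustrative)
    i = np.fromiter(simulate(v), float)
    # Plotting i vs. v would show a pinched hysteresis loop above the threshold.
    ```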

  16. Preparation, property of the complex of carboxymethyl chitosan grafted copolymer with iodine and application of it in cervical antibacterial biomembrane.

    PubMed

    Chen, Yu; Yang, Yumin; Liao, Qingping; Yang, Wei; Ma, Wanfeng; Zhao, Jian; Zheng, Xionggao; Yang, Yang; Chen, Rui

    2016-10-01

    Cervical erosion is one of the common diseases of women. The loop electrosurgical excision procedure (LEEP) has been widely used in the treatment of cervical diseases. However, there are no effective wound dressings for postoperative care to protect the wound area from further infection, leading to increased secretion and longer healing times. Iodine is a widely used inorganic antibacterial agent with many advantages, but carriers for stable iodine-complex antibacterial agents are lacking. In the present study, a novel iodine carrier, carboxymethyl chitosan-g-(poly(sodium acrylate)-co-polyvinylpyrrolidone) (CMCTS-g-(PAANa-co-PVP)), was prepared by graft copolymerization of sodium acrylate (AANa) and N-vinylpyrrolidone (NVP) onto a carboxymethyl chitosan (CMCTS) skeleton. The resulting structure combines the prominent properties of the poly(sodium acrylate) (PAANa) anionic polyelectrolyte segment with the good iodine-complexing properties of the polyvinylpyrrolidone (PVP) segment, while retaining the bioactivity of CMCTS. The properties of the complex, CMCTS-g-(PAANa-co-PVP)-I2, were studied. In vitro experiments show that it has broad-spectrum bactericidal effects against viruses, fungi, and gram-positive and gram-negative bacteria. A cervical antibacterial biomembrane (CABM) containing the CMCTS-g-(PAANa-co-PVP)-I2 complex was prepared. Iodine release from the CABM is pH-dependent. Clinical trial results indicate that CABM is more effective than conventional treatment in the postoperative care of the LEEP operation.

  17. Evaluation of the interaction and drug release from alpha,beta-polyaspartamide derivatives to a biomembrane model.

    PubMed

    Castelli, F; Messina, C; Craparo, E F; Mandracchia, D; Pitarresi, G

    2005-01-01

    This article reports a comparative study of the ability of various polymers containing hydrophilic and/or hydrophobic groups to interact with a biomembrane model, using the differential scanning calorimetry (DSC) technique. Multilamellar vesicles of mixed dimyristoylphosphatidylcholine (DMPC) and dimyristoylphosphatidic acid (DMPA) were chosen as a model of cell membranes. The investigated samples were a water-soluble polymer, alpha,beta-poly(N-2-hydroxyethyl)-DL-aspartamide (PHEA), and its derivatives partially functionalized with polyethylene glycol (PEG2000) to obtain PHEA-PEG2000, with hexadecylamine (C16) to obtain PHEA-C16, and with both compounds to obtain PHEA-PEG2000-C16. These polymers are potential candidates for preparing drug delivery systems. In particular, some samples give rise to polymeric micelles able to entrap hydrophobic drugs in an aqueous medium. The migration of drug molecules from these micelles to DMPC/DMPA vesicles has also been evaluated by DSC analysis, using ketoprofen as a model drug.

  18. Evaluation of the interaction of coumarins with biomembrane models studied by differential scanning calorimetry and Langmuir-Blodgett techniques.

    PubMed

    Sarpietro, Maria Grazia; Giuffrida, Maria Chiara; Ottimo, Sara; Micieli, Dorotea; Castelli, Francesco

    2011-04-25

    Three coumarins, scopoletin (1), esculetin (2), and esculin (3), were investigated by differential scanning calorimetry and Langmuir-Blodgett techniques to gain information about the interaction of these compounds with cellular membranes. Phospholipids assembled as multilamellar vesicles or monolayers (at the air-water interface) were used as biomembrane models. Differential scanning calorimetry was employed to study the interaction of these coumarins with multilamellar vesicles and to evaluate their absorption by multilamellar vesicles. These experiments indicated that 1-3 interact in this manner to different extents. The Langmuir-Blodgett technique was used to study the effect of these coumarins on the organization of phospholipids assembled as a monolayer. The data obtained were in agreement with those obtained in the calorimetric experiments.

  19. Elasticity of biomembranes studied by dynamic light scattering

    NASA Astrophysics Data System (ADS)

    Fujime, Satoru; Miyamoto, Shigeaki

    1991-05-01

    Combination of osmotic swelling and dynamic light scattering makes it possible to measure the elastic modulus of biomembranes. By this technique we have observed a drastic increase in membrane flexibility on activation of Na/glucose cotransporters in membrane vesicles prepared from brush borders of rat small intestine, and on activation by micromolar [Ca2+] of exocytosis in secretory granules isolated from rat pancreatic acinar cells and bovine adrenal chromaffin cells.

  20. [Submental island pedicled flap combination with bio-membrane for reconstructing the piercing palate defects].

    PubMed

    Liu, Hanqian; Yu, Huiming; Liu, Jiawu; Fang, Jin; Mao, Chi

    2015-05-01

    To evaluate the clinical outcomes of the submental island pedicled flap (SIPF) in combination with bio-membrane for reconstructing palate defects after maxillofacial or palatal neoplasm resection. There were 12 patients with squamous cell carcinoma and one patient with adenoid cystic carcinoma. The clinical stages of the tumours were II in two patients, III in four patients, and IV in six patients (UICC 2002); the patient with adenoid cystic carcinoma was not staged. SIPFs were designed and created, and the tissue sides of the SIPFs were covered with bio-membrane to reconstruct the oral and nasal sides of the defects, respectively. Speech, swallowing function and mouth opening were evaluated 6 months postoperatively. All flaps survived and no serious complications occurred. Ten patients achieved normal speech, two had intelligible speech, and one had slurred speech. Nine patients resumed a solid diet, three a soft diet, and one a liquid diet. Eight patients recovered normal mouth opening, four had minor limitation of mouth opening, and one had serious limitation of mouth opening. SIPF combined with bio-membrane is a safe, simple, and reliable method for reconstruction of piercing palate defects following neoplasm ablation, with satisfactory oral function.

  1. Photoionization of oxidized coenzyme Q in microemulsion: laser flash photolysis study in biomembrane-like system.

    PubMed

    Li, Kun; Wang, Mei; Wang, Jin; Zhu, Rongrong; Sun, Dongmei; Sun, Xiaoyu; Wang, Shi-Long

    2013-01-01

    Photoexcitation to generate the triplet state has been proved to be the main photoreaction in homogeneous systems for many benzoquinone derivatives, including oxidized coenzyme Q (CoQ) and its analogs. In the present study, a microemulsion of CoQ, a heterogeneous system, is employed to mimic the distribution of CoQ in the biomembrane. The photochemistry of CoQ(10) in microemulsion and in cyclohexane is investigated and compared using laser flash photolysis; the results show that CoQ(10) undergoes photoionization via a monophotonic process to generate the radical cation of CoQ(10) in microemulsion, and photoexcitation to generate the excited triplet state in cyclohexane. Meanwhile, the photoreactions of duroquinone (DQ) and CoQ(0) in microemulsion are also investigated to analyze the influence of molecular structure on the photochemistry of benzoquinone derivatives in microemulsion. The results suggest that photoexcitation, followed by an excited-state hydrogen-abstraction reaction, is the main photoreaction for DQ and CoQ(0) in microemulsion; however, photoexcited CoQ(0) also leads to the formation of hydrated electrons. The high resonance stabilization involving the isoprenoid side chain is proposed to explain the difference in the photoreactions of CoQ(0) and CoQ(10) in microemulsion. Considering that a microemulsion is close to a biomembrane system, the photoionization of CoQ(10) in microemulsion may be helpful for understanding the real photochemistry of biological quinones in biomembrane systems.

  2. Evaluation of peptides release using a natural rubber latex biomembrane as a carrier.

    PubMed

    Miranda, M C R; Borges, F A; Barros, N R; Santos Filho, N A; Mendonça, R J; Herculano, R D; Cilli, E M

    2018-05-01

    The natural rubber latex (NRL) biomembrane, manufactured from latex obtained from the rubber tree Hevea brasiliensis, has shown great potential for application in biomedicine and biomaterials. Reflecting the biocompatibility and low bounce rate of this material, NRL has been used as a physical barrier against infectious agents and for the controlled release of drugs and extracts. The aim of the present study was to evaluate the incorporation and release of peptides using a latex biomembrane carrier. After incorporation, the release of material from the membrane was observed using spectrophotometry. Analyses using HPLC and mass spectroscopy did not confirm the release of the antimicrobial peptide [W6]Hylin a1 after 24 h. Moreover, analysis of the release solution showed new compounds, indicating degradation of the peptide by enzymes contained in the latex. Additionally, the release of a peptide with a shorter sequence (Ac-WAAAA) was evaluated, and no degradation was observed. These results show that the use of NRL solid matrices as peptide delivery systems is sequence dependent and should be evaluated for each sequence.

  3. Fractal analysis of lateral movement in biomembranes.

    PubMed

    Gmachowski, Lech

    2018-04-01

    Lateral movement of a molecule in a biomembrane containing small compartments (0.23-μm diameter) and large ones (0.75 μm) is analyzed using a fractal description of its walk. The early time dependence of the mean square displacement deviates from linear due to the contribution of ballistic motion. In small compartments, walking molecules do not have sufficient time or space to develop the asymptotic relation, and the diffusion coefficient deduced from the experimental records is lower than that measured without restrictions. The model makes it possible to deduce the molecule step parameters, namely the step length and step time, from data on confined and unrestricted diffusion coefficients. This is also possible using experimental results for sub-diffusive transport. The transition from normal to anomalous diffusion does not affect the molecule step parameters. The experimental literature data on molecular trajectories recorded at high time resolution appear to confirm the modeled value of the mean free path length of DOPE for Brownian and anomalous diffusion. Although the step length and time give the proper value of the diffusion coefficient, the DOPE speed calculated as their quotient is several orders of magnitude lower than the thermal speed. This is interpreted as a result of intermolecular interactions, as confirmed by the lateral diffusion of other molecules in different membranes. The molecule step parameters are then utilized to analyze the problem of multiple visits in small compartments. Modeling of the diffusion exponent results in a smooth transition to normal diffusion on entering a large compartment, as observed in experiments.
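    A minimal sketch of the kind of analysis described above: computing a time-averaged mean square displacement (MSD) from a trajectory and fitting MSD(t) = 4·D·t^α to distinguish normal (α ≈ 1) from anomalous (α < 1) lateral diffusion. The trajectory here is synthetic; the frame time and all values are illustrative assumptions.

    ```python
    # Hedged sketch: time-averaged MSD of a 2-D trajectory and a power-law fit.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(2)
    xy = np.cumsum(rng.normal(0, 0.02, (10_000, 2)), axis=0)   # synthetic walk, um

    def msd(traj, max_lag):
        """Time-averaged MSD for lags 1 .. max_lag-1 frames."""
        return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag]) ** 2, axis=1))
                         for lag in range(1, max_lag)])

    lags = np.arange(1, 200) * 1e-3            # s, assuming 1 ms per frame
    m = msd(xy, 200)

    (D, alpha), _ = curve_fit(lambda t, D, a: 4 * D * t ** a, lags, m, p0=[0.1, 1.0])
    print(f"D = {D:.3f} um^2/s, alpha = {alpha:.2f}")
    ```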

  4. The study of the use of a latex biomembrane and conjunctival autograft in rabbit conjunctiva wound healing.

    PubMed

    Pinho, Erika Christina Canarim Martha de; Chahud, Fernando; Lachat, João-José; Coutinho-Netto, Joaquim José; Sousa, Sidney Julio Faria E

    2018-04-01

    Purpose: To study the use of a latex biomembrane and conjunctival autograft in conjunctival wound healing in rabbits. Methods: In nine male New Zealand albino rabbits, identical rectangular areas were removed from the superonasal quadrant adjacent to the limbus of both eyes. The bare scleral areas of the right eyes were covered with latex biomembrane, and those of the left eyes with conjunctival autograft. The animals were sacrificed in groups of three at 7, 14 and 21 days after surgery. Tissues from the surgical site, including the cornea, were fixed in formaldehyde before being embedded in paraffin and stained with hematoxylin and eosin. The nature and intensity of the inflammatory response and the epithelialization pattern of the conjunctival surface were evaluated under light microscopy in longitudinal histological sections passing through the center of the anatomical specimens. Results: Up to the fourteenth postoperative day, the biomembrane group showed a more intense inflammatory reaction than the conjunctival autograft group. At 14 days, the eyes with biomembrane were less inflamed and had a thicker stroma than at 7 days. At 21 days, conjunctival repair in both groups showed similar characteristics. Conclusion: Despite slower healing, the latex biomembrane proved as effective as the conjunctival autograft in reconstruction of the ocular surface after three weeks of postoperative healing. Given its low toxicity and allergenicity, this material appears to be a promising therapeutic option for conjunctival reconstruction.

  5. Biomembranes research using thermal and cold neutrons

    DOE PAGES

    Heberle, Frederick A.; Myles, Dean A. A.; Katsaras, John

    2015-08-01

    In 1932 James Chadwick discovered the neutron using a polonium source and a beryllium target (Chadwick, 1932). In a letter to Niels Bohr dated February 24, 1932, Chadwick wrote: "whatever the radiation from Be may be, it has most remarkable properties." Where it concerns hydrogen-rich biological materials, the "most remarkable" property is the neutron's differential sensitivity for hydrogen and its isotope deuterium. Such differential sensitivity is unique to neutron scattering, which, unlike X-ray scattering, arises from nuclear forces. Consequently, the coherent neutron scattering length can experience a dramatic change in magnitude and phase as a result of resonance scattering, imparting sensitivity to both light and heavy atoms, and in favorable cases to their isotopic variants. Furthermore, this article describes recent biomembrane research using a variety of neutron scattering techniques.

  6. Phospholipid bilayer-perturbing properties underlying lysis induced by pH-sensitive cationic lysine-based surfactants in biomembranes.

    PubMed

    Nogueira, Daniele Rubert; Mitjans, Montserrat; Busquets, M Antonia; Pérez, Lourdes; Vinardell, M Pilar

    2012-08-14

    Amino acid-based surfactants constitute an important class of natural surface-active biomolecules with an unpredictable number of industrial applications. To gain a better mechanistic understanding of surfactant-induced membrane destabilization, we assessed the phospholipid bilayer-perturbing properties of new cationic lysine-based surfactants. We used erythrocytes as biomembrane models to study the hemolytic activity of surfactants and their effects on cells' osmotic resistance and morphology, as well as on membrane fluidity and membrane protein profile with varying pH. The antihemolytic capacity of amphiphiles correlated negatively with the length of the alkyl chain. Anisotropy measurements showed that the pH-sensitive surfactants, with the positive charge on the α-amino group of lysine, significantly increased membrane fluidity at acidic conditions. SDS-PAGE analysis revealed that surfactants induced significant degradation of membrane proteins in hypo-osmotic medium and at pH 5.4. By scanning electron microscopy examinations, we corroborated the interaction of surfactants with lipid bilayer. We found that varying the surfactant chemical structure is a way to modulate the positioning of the molecule inside bilayer and, thus, the overall effect on the membrane. Our work showed that pH-sensitive lysine-based surfactants significantly disturb the lipid bilayer of biomembranes especially at acidic conditions, which suggests that these compounds are promising as a new class of multifunctional bioactive excipients for active intracellular drug delivery.

  7. Complex biomembrane mimetics on the sub-nanometer scale

    DOE PAGES

    Heberle, Frederick A.; Pabst, Georg

    2017-07-17

    Biomimetic lipid vesicles are indispensable tools for gaining insight into the biophysics of cell physiology on the molecular level. The level of complexity of these model systems has steadily increased, and now spans from domain forming lipid mixtures to asymmetric lipid bilayers. We review recent progress in the development and application of elastic neutron and X-ray scattering techniques for studying these systems in situ and under physiologically relevant conditions on the nanometer to sub-nanometer length scales. Particularly we focus on: (i) structural details of coexisting liquid-ordered and liquid-disordered domains, including their thickness and lipid packing mismatch as a function of a size transition from nanoscopic to macroscopic domains; (ii) membrane-mediated protein partitioning into lipid domains; (iii) the role of the aqueous medium in tuning interactions between membranes and domains; and (iv) leaflet specific structure in asymmetric bilayers and passive lipid flip-flop.

  8. Complex biomembrane mimetics on the sub-nanometer scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heberle, Frederick A.; Pabst, Georg

    Biomimetic lipid vesicles are indispensable tools for gaining insight into the biophysics of cell physiology on the molecular level. The level of complexity of these model systems has steadily increased, and now spans from domain forming lipid mixtures to asymmetric lipid bilayers. We review recent progress in the development and application of elastic neutron and X-ray scattering techniques for studying these systems in situ and under physiologically relevant conditions on the nanometer to sub-nanometer length scales. Particularly we focus on: (i) structural details of coexisting liquid-ordered and liquid-disordered domains, including their thickness and lipid packing mismatch as a function of a size transition from nanoscopic to macroscopic domains; (ii) membrane-mediated protein partitioning into lipid domains; (iii) the role of the aqueous medium in tuning interactions between membranes and domains; and (iv) leaflet specific structure in asymmetric bilayers and passive lipid flip-flop.

  9. Disruption of Saccharomyces cerevisiae by Plantaricin 149 and investigation of its mechanism of action with biomembrane model systems.

    PubMed

    Lopes, José Luiz S; Nobre, Thatyane M; Siano, Alvaro; Humpola, Verónica; Bossolan, Nelma R S; Zaniquelli, Maria E D; Tonarelli, Georgina; Beltramini, Leila M

    2009-10-01

    The action of a synthetic antimicrobial peptide analog of Plantaricin 149 (Pln149a) against Saccharomyces cerevisiae and its interaction with biomembrane model systems were investigated. Pln149a was shown to inhibit S. cerevisiae growth by more than 80% in YPD medium, causing morphological changes in the yeast wall and remaining active and resistant to the yeast proteases even after 24 h of incubation. Different membrane model systems and carbohydrates were employed to better describe the Pln149a interaction with cellular components using circular dichroism and fluorescence spectroscopies, adsorption kinetics and surface elasticity in Langmuir monolayers. These assays showed that Pln149a does not interact with either mono/polysaccharides or zwitterionic LUVs, but is strongly adsorbed to and incorporated into negatively charged surfaces, causing a conformational change in its secondary structure from random-coil to helix upon adsorption. From the concurrent analysis of Pln149a adsorption kinetics and dilatational surface elasticity data, we determined that 2.5 μM is the critical concentration at which Pln149a will disrupt a negative DPPG monolayer. Furthermore, Pln149a exhibited a carpet-like mechanism of action, in which the peptide initially binds to the membrane, covering its surface and acquiring a helical structure that remains associated to the negatively charged phospholipids. After this electrostatic interaction, another peptide region causes a strain in the membrane, promoting its disruption.

  10. [Application of in vitro bionic digestion and biomembrane extraction for metal speciation analysis, bioavailability and risk assessment in lianhua qingwen capsule].

    PubMed

    Lin, Lu-Xiu; Li, Shun-Xing; Zheng, Feng-Ying

    2014-06-01

    One of the causes of the high cost of pharmaceuticals, and a major obstacle to rapidly assessing the bioavailability and risk of a chemical, is the lack of experimental model systems. A new pre-treatment technology, in vitro bionic digestion, was designed for metal analysis in Lianhua Qingwen capsule. The capsule was digested at 37 °C under the acidity of the stomach or intestine, and with the inorganic and organic compounds (including digestive enzymes) found in the stomach or intestine, and the chyme was obtained. Being similar to the biomembrane between the gastrointestinal tract and blood vessels, monolayer liposome was used as the biomembrane model. Affinity-monolayer liposome metals (AMLMs) and water-soluble metals were used for metal speciation analysis in the capsule. Based on the concentration of AMLMs, the main absorption site of trace metals was proposed. The total metal contents and the concentrations of AMLMs in the capsule were compared with the nutritional requirements, daily permissible doses, and the heavy-metal limits from the "import and export of medicinal plants and preparation of green industry state standards". The metal concentrations in the capsule were within the safety baseline levels for human consumption. After in vitro bionic digestion, most trace metals were absorbed mainly in the intestine. The concentrations of As, Cd and Pb were 0.38, 0.07 and 1.60 mg kg(-1), respectively, far below the permissible doses from the "import and export of medicinal plants and preparation of green industry state standards".

  11. Structure and dynamics of biomembranes in room-temperature ionic liquid water solutions studied by neutron scattering and by molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Benedetto, Antonio; Ballone, Pietro

    2018-05-01

    Increasing attention is being devoted to the interaction of a new class of organic ionic liquids known as room-temperature ionic liquids (RTILs) with biomolecules, partly because of health and environment concerns, and, even more, for the prospect of exciting new applications in biomedicine, sensing and energy technologies. Here we focus on the interaction between RTILs and phospholipid bilayers that are well-accepted models for bio-membranes. We discuss how neutron scattering has been used to probe both the structure and the dynamics of these systems, and how its integration with molecular dynamics simulation has allowed the determination of the microscopic details of their interaction.

  12. Interaction of cholesterol-conjugated ionizable amino lipids with biomembranes: lipid polymorphism, structure-activity relationship, and implications for siRNA delivery.

    PubMed

    Zhang, Jingtao; Fan, Haihong; Levorse, Dorothy A; Crocker, Louis S

    2011-08-02

    Delivery of siRNA is a major obstacle to the advancement of RNAi as a novel therapeutic modality. Lipid nanoparticles (LNP) consisting of ionizable amino lipids are being developed as an important delivery platform for siRNAs, and significant efforts are being made to understand the structure-activity relationship (SAR) of the lipids. This article uses a combination of small-angle X-ray scattering (SAXS) and differential scanning calorimetry (DSC) to evaluate the interaction between cholesterol-conjugated ionizable amino lipids and biomembranes, focusing on an important area of lipid SAR--the ability of lipids to destabilize membrane bilayer structures and facilitate endosomal escape. In this study, cholesterol-conjugated amino lipids were found to be effective in increasing the order of biomembranes and also highly effective in inducing phase changes in biological membranes in vitro (i.e., the lamellar to inverted hexagonal phase transition). The phase transition temperatures, determined using SAXS and DSC, serve as an indicator for ranking the potency of lipids to destabilize endosomal membranes. It was found that the bilayer-disruption ability of amino lipids depends strongly on the amino lipid concentration in membranes. Amino lipids with systematic variations in headgroup, extent of ionization, tail length, degree of unsaturation, and tail asymmetry were evaluated for their bilayer-disruption ability to establish the SAR. Overall, it was found that the impact of these lipid structure changes on bilayer-disruption ability agrees well with the results of a conceptual molecular "shape" analysis. Implications of the findings for siRNA delivery are discussed. The methods reported here can be used to support SAR screening of cationic lipids for siRNA delivery, and the information revealed through the study of the interaction between cationic lipids and biomembranes will contribute significantly to the design of more efficient siRNA delivery systems.

  13. Kinetics of Hole Nucleation in Biomembrane Rupture

    PubMed Central

    Evans, Evan; Smith, Benjamin A

    2011-01-01

    The core component of a biological membrane is a fluid-lipid bilayer held together by interfacial-hydrophobic and van der Waals interactions, which are balanced for the most part by acyl chain entropy confinement. If biomembranes are subjected to persistent tensions, an unstable (nanoscale) hole will emerge at some time to cause rupture. Because of the large energy required to create a hole, thermal activation appears to be requisite for initiating a hole, and the activation energy is expected to depend significantly on mechanical tension. Although models exist for the kinetic process of hole nucleation in tense membranes, studies of membrane survival have failed to cover the ranges of tension and lifetime needed to critically examine nucleation theory. Hence, rupturing giant (~20 μm) membrane vesicles ultra-slowly to ultra-quickly with slow to fast ramps of tension, we demonstrate a method to directly quantify the kinetic rates at which unstable holes form in fluid membranes, providing a range of kinetic rates from < 0.01 s−1 to > 100 s−1. Measuring the lifetimes of many hundreds of vesicles, each tensed by precision control of micropipet suction, we have determined the rates of failure for vesicles made from several synthetic phospholipids plus 1:1 mixtures of phospho- and sphingo-lipids with cholesterol, all of which represent prominent constituents of eukaryotic cell membranes. Plotted on a logarithmic scale, the failure rates for vesicles rise dramatically with increasing tension. Converting the experimental profiles of kinetic rates into changes of activation energy versus tension, we show that the results closely match expressions for thermal activation derived from a combination of meso-scale theory and molecular-scale simulations of hole formation. Moreover, we demonstrate a generic approach to transform analytical fits of activation energies obtained from rupture experiments into energy landscapes characterizing the process of hole nucleation.
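    The kinetic framework above can be summarized in generic form (a sketch of the standard thermally activated nucleation picture, not the paper's exact expressions): membranes under tension σ fail at a rate k(σ), and for a hole of edge energy ε the classical meso-scale barrier falls with tension.

    ```latex
    % Thermally activated hole nucleation under tension \sigma (generic form):
    k(\sigma) = k_0 \exp\!\left[ -\frac{E_a(\sigma)}{k_B T} \right],
    \qquad
    E_a(\sigma) \approx \frac{\pi \varepsilon^{2}}{\sigma}
    \quad \text{(classical barrier, edge energy } \varepsilon\text{)},

    % Survival fraction S(t) of vesicles under a tension ramp \sigma(t) = r_\sigma t:
    \frac{\mathrm{d}S}{\mathrm{d}t} = -\,k\bigl(\sigma(t)\bigr)\,S(t),
    \qquad
    E_a(\sigma) = k_B T \,\ln\!\frac{k_0}{k(\sigma)} .
    ```

    The last relation is how measured rate profiles k(σ) convert to activation energies versus tension, up to the choice of attempt frequency k_0.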

  14. BFPTool: a software tool for analysis of Biomembrane Force Probe experiments.

    PubMed

    Šmít, Daniel; Fouquet, Coralie; Doulazmi, Mohamed; Pincet, Frédéric; Trembleau, Alain; Zapotocky, Martin

    2017-01-01

    The Biomembrane Force Probe is an approachable experimental technique commonly used for single-molecule force spectroscopy and experiments on biological interfaces. The technique operates in the range of forces from 0.1 pN to 1000 pN. Experiments are typically repeated many times and conditions are often not optimal; the captured video can be unstable and lose focus, which makes efficient analysis challenging, while out-of-the-box non-proprietary solutions are not freely available. This dedicated tool was developed to integrate and simplify the image processing and analysis of videomicroscopy recordings from BFP experiments. A novel processing feature allowing tracking of the pipette was incorporated to address a limitation of preceding methods. Emphasis was placed on versatility and a comprehensible user interface implemented in graphical form. An integrated analytical tool was implemented to provide a faster, simpler and more convenient way to process and analyse BFP experiments.

  15. Fabrication of biomembrane-like films on carbon electrodes using alkanethiol and diazonium salt and their application for direct electrochemistry of myoglobin.

    PubMed

    Anjum, Saima; Qi, Wenjing; Gao, Wenyue; Zhao, Jianming; Hanif, Saima; Aziz-Ur-Rehman; Xu, Guobao

    2015-03-15

    Alkanethiols generally form self-assembled monolayers on gold electrodes, and the electrochemical reduction of aromatic diazonium salts is a popular method for the covalent modification of carbon. Based on the reaction of alkanethiols with aldehyde groups covalently bound to the carbon surface by the electrochemical reduction of aromatic diazonium salts, a new strategy for the modification of carbon electrodes with alkanethiols has been developed. The modification of the carbon surface with aldehyde groups is achieved by the electrochemical reduction of aromatic diazonium salts electrogenerated in situ from a nitro precursor, p-nitrophenylaldehyde, in the presence of nitrous acid. In this way, p-aminophenyl aldehyde electrogenerated in situ from p-nitrophenylaldehyde immediately reacts with nitrous acid, effectively minimizing the side reaction between amine groups and aldehyde groups. The as-prepared alkanethiol-modified glassy carbon electrode was further used to make biomembrane-like films by casting didodecyldimethylammonium bromide on its surface. The biomembrane-like films enable the direct electrochemistry of immobilized myoglobin for the detection of hydrogen peroxide. The response is linear over the range of 1-600 μM with a detection limit of 0.3 μM.

  16. Natural lipid extracts and biomembrane-mimicking lipid compositions are disposed to form nonlamellar phases, and they release DNA from lipoplexes most efficiently

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koynova, Rumiana; MacDonald, Robert C.

    2010-01-18

    A viewpoint now emerging is that a critical factor in lipid-mediated transfection (lipofection) is the structural evolution of lipoplexes upon interacting and mixing with cellular lipids. Here we report our finding that lipid mixtures mimicking biomembrane lipid compositions are superior to pure anionic liposomes in their ability to release DNA from lipoplexes (cationic lipid/DNA complexes), even though they have a much lower negative charge density (and thus lower capacity to neutralize the positive charge of the lipoplex lipids). Flow fluorometry revealed that the portion of DNA released after a 30-min incubation of the cationic O-ethylphosphatidylcholine lipoplexes with the anionic phosphatidylserine or phosphatidylglycerol was 19% and 37%, respectively, whereas a mixture mimicking biomembranes (MM: phosphatidylcholine/phosphatidylethanolamine/phosphatidylserine/cholesterol 45:20:20:15 w/w) and polar lipid extract from bovine liver released 62% and 74%, respectively, of the DNA content. A possible reason for this superior power in releasing DNA by the natural lipid mixtures was suggested by structural experiments: while pure anionic lipids typically form lamellae, the natural lipid mixtures exhibited a surprising predilection to form nonlamellar phases. Thus, the MM mixture arranged into lamellar arrays at physiological temperature, but began to convert to the hexagonal phase at a slightly higher temperature, ~40-45 °C. A propensity to form nonlamellar phases (hexagonal, cubic, micellar) at close to physiological temperatures was also found with the lipid extracts from natural tissues (from bovine liver, brain, and heart). This result reveals that electrostatic interactions are only one of the factors involved in lipid-mediated DNA delivery. The tendency of lipid bilayers to form nonlamellar phases has been described in terms of bilayer 'frustration' which imposes a nonzero intrinsic curvature of the two opposing monolayers. Because the stored

  17. Natural lipid extracts and biomembrane-mimicking lipid compositions are disposed to form nonlamellar phases, and they release DNA from lipoplexes most efficiently

    PubMed Central

    Koynova, Rumiana; MacDonald, Robert C.

    2007-01-01

    A viewpoint now emerging is that a critical factor in lipid-mediated transfection (lipofection) is the structural evolution of lipoplexes upon interacting and mixing with cellular lipids. Here we report our finding that lipid mixtures mimicking biomembrane lipid compositions are superior to pure anionic liposomes in their ability to release DNA from lipoplexes (cationic lipid/DNA complexes), even though they have a much lower negative charge density (and thus lower capacity to neutralize the positive charge of the lipoplex lipids). Flow fluorometry revealed that the portion of DNA released after a 30 min incubation of the cationic O-ethylphosphatidylcholine lipoplexes with the anionic phosphatidylserine or phosphatidylglycerol was 19% and 37%, respectively, whereas a mixture mimicking biomembranes (MM: phosphatidylcholine/phosphatidylethanolamine/phosphatidylserine/cholesterol 45:20:20:15 w/w) and polar lipid extract from bovine liver released 62% and 74%, respectively, of the DNA content. A possible reason for this superior power in releasing DNA by the natural lipid mixtures was suggested by structural experiments: while pure anionic lipids typically form lamellae, the natural lipid mixtures exhibited a surprising predilection to form nonlamellar phases. Thus, the MM mixture arranged into lamellar arrays at physiological temperature, but began to convert to the hexagonal phase at a slightly higher temperature, ∼40-45 °C. A propensity to form nonlamellar phases (hexagonal, cubic, micellar) at close to physiological temperatures was also found with the lipid extracts from natural tissues (from bovine liver, brain, and heart). This result reveals that electrostatic interactions are only one of the factors involved in lipid-mediated DNA delivery. The tendency of lipid bilayers to form nonlamellar phases has been described in terms of bilayer “frustration” which imposes a nonzero intrinsic curvature of the two opposing monolayers. Because the stored curvature

  18. Flexoelectricity of model and living membranes.

    PubMed

    Petrov, Alexander G

    2002-03-19

    The theory and experiments on model and biomembrane flexoelectricity are reviewed. Biological implications of flexoelectricity are underlined. Molecular machinery and molecular electronics applications are pointed out.

  19. Molecular modeling of biomembranes and their complexes with protein transmembrane α-helices

    NASA Astrophysics Data System (ADS)

    Kuznetsov, Andrey S.; Smirnov, Kirill V.; Antonov, Mikhail Yu.; Nikolaev, Ivan N.; Efremov, Roman G.

    2017-11-01

    Helical segments are common structural elements of membrane proteins. Dimerization and oligomerization of transmembrane (TM) α-helices provide the framework for spatial structure formation and protein-protein interactions. The membrane itself also takes part in protein functioning, and there are examples of mutual influence between lipid bilayer properties and embedded membrane proteins. This work aims at a detailed investigation of protein-lipid interactions using model systems: TM peptides corresponding to native protein segments. Three peptides were considered, corresponding to the TM domains of human glycophorin A (GpA), epidermal growth factor receptor (EGFR), and the proposed TM segment of human neuraminidase-1 (Neu1). A computational analysis of structural and dynamic properties was performed using the molecular dynamics method, with peptide monomers incorporated into hydrated lipid bilayers. It was confirmed that all these TM peptides retain a stable helical conformation in a lipid environment, and mutual adaptation of the peptides and the membrane was observed. Incorporation of a peptide into the membrane modulates both local and mean structural properties of the bilayer. Each peptide interacts with lipid acyl chains at specific binding sites on the surface of the central part of the α-helix; these sites persist for at least 200 ns, although individual acyl chains exchange more rapidly while occupying the same site. The formation of a specific pattern of protein-lipid interactions may modulate the association of TM domains of membrane proteins, so the membrane environment should be considered when proposing new substances targeting cell receptors.

  20. Artificial biomembrane morphology: a dissipative particle dynamics study.

    PubMed

    Becton, Matthew; Averett, Rodney; Wang, Xianqiao

    2017-09-18

    Artificial membranes mimicking biological structures are rapidly breaking new ground in medicine and soft-matter physics. Here we use dissipative particle dynamics simulation to investigate the morphology and behavior of lipid-based biomembranes under conditions of varied lipid density and self-interaction. Our results show that a lower-than-normal initial lipid density does not create the traditional membrane but instead results in the formation of a 'net' or, at very low densities, a series of disparate 'clumps' similar to the micelles formed by lipids in nature. When the initial lipid density is high, a membrane forms, but because of the large number of lipids the naturally formed membrane would be larger than the simulation box, leading to 'rippling' behavior as the excess repulsive force of the membrane interior overcomes the bending energy of the membrane. Once the density reaches a certain point, however, 'bubbles' appear inside the membrane, reducing the rippling behavior and eventually generating a relatively flat but thick structure with micelles of water inside the membrane itself. Our simulations also demonstrate that the interaction parameter between individual lipids plays a significant role in the formation and behavior of lipid membrane assemblies, creating structures similar to those produced by varying the initial lipid density. This work provides a comprehensive approach to the intricacies of lipid membranes and offers a guideline for designing biological or polymeric membranes through self-assembly as well as for developing novel cellular manipulation and destruction techniques.

  1. Uptake and localization mechanisms of fluorescent and colored lipid probes. Part 2. QSAR models that predict localization of fluorescent probes used to identify ("specifically stain") various biomembranes and membranous organelles.

    PubMed

    Horobin, R W; Stockert, J C; Rashid-Doubell, F

    2015-05-01

    We discuss a variety of biological targets, including generic biomembranes and the membranes of the endoplasmic reticulum, endosomes/lysosomes, the Golgi body, mitochondria (outer and inner membranes), and the plasma membrane of usual fluidity. For each target, we discuss the access of probes to the target membrane, probe uptake into the membrane, and the mechanism of selectivity of probe uptake. A statement of the QSAR decision rule that describes the physicochemical features required for probes to stain selectively is also provided, followed by comments on exceptions and limits. Examples of probes typically used to demonstrate each target structure are noted, and decision-rule tabulations are provided for probes that localize in particular targets; these tabulations show the distribution of probes in the conceptual space defined by the relevant structure parameters ("parameter space"). Some general implications and limitations of the QSAR models for probe targeting are discussed, including certain cell and protocol factors that play significant roles in lipid staining. A case example illustrates the predictive ability of the QSAR models. Key limiting values of the head group hydrophilicity parameter associated with membrane-probe interactions are discussed in an appendix.
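
    To make the flavor of such decision rules concrete, the sketch below encodes a toy rule set over typical Horobin-style structure parameters (logP for lipophilicity, conjugated bond number CBN, electric charge Z). The thresholds and target assignments are hypothetical placeholders for illustration, not the rules published in this paper.

```python
# Toy QSAR-style decision rules for probe localization (all thresholds invented).
def predict_target(logP: float, cbn: int, z: int) -> str:
    if cbn > 40:
        return "aggregates / non-selective"    # assumed: very large conjugated systems stack
    if z > 0 and 0 < logP < 5:
        return "mitochondria"                  # assumed lipophilic-cation window
    if z == 0 and logP > 5:
        return "generic biomembranes / ER"     # assumed highly lipophilic neutral probes
    if z < 0:
        return "plasma membrane (poor entry)"  # assumed: anions enter cells poorly
    return "cytosol / no selective staining"

print(predict_target(logP=3.2, cbn=12, z=+1))  # -> "mitochondria"
```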

  2. Novel NMR tools to study structure and dynamics of biomembranes.

    PubMed

    Gawrisch, Klaus; Eldho, Nadukkudy V; Polozov, Ivan V

    2002-06-01

    Nuclear magnetic resonance (NMR) studies of biomembranes have benefited greatly from the introduction of magic angle spinning (MAS) NMR techniques. Improvements in MAS probe technology, combined with the higher magnetic field strength of modern instruments, enable almost liquid-like resolution of lipid resonances. The cross-relaxation rates measured by nuclear Overhauser enhancement spectroscopy (NOESY) provide new insights into the conformation and dynamics of lipids with atomic-scale resolution. The data reflect the tremendous motional disorder in the lipid matrix. Transfer of magnetization by spin diffusion along the proton network of lipids is of secondary relevance, even at a long NOESY mixing time of 300 ms. MAS experiments with re-coupling of anisotropic interactions, like the 13C-1H dipolar couplings, benefit from the excellent resolution of 13C shifts, which enables assignment of the couplings to specific carbon atoms. The traditional 2H NMR experiments on deuterated lipids have higher sensitivity when conducted on oriented samples at higher magnetic field strength. A very large number of NMR parameters from lipid bilayers is now accessible, providing information about conformation and dynamics for every lipid segment. The NMR methods have the sensitivity and resolution to study lipid-protein interaction, lateral lipid organization, and the location of solvents and drugs in the lipid matrix.

  3. Ionization behavior of amino lipids for siRNA delivery: determination of ionization constants, SAR, and the impact of lipid pKa on cationic lipid-biomembrane interactions.

    PubMed

    Zhang, Jingtao; Fan, Haihong; Levorse, Dorothy A; Crocker, Louis S

    2011-03-01

    Ionizable amino lipids are being pursued as an important class of materials for delivering small interfering RNA (siRNA) therapeutics, and research is being conducted to elucidate the structure-activity relationships (SAR) of these lipids. The pKa of cationic lipid headgroups is one of the critical physicochemical properties of interest because of the strong impact of lipid ionization on the assembly and performance of these lipids. This research focused on developing approaches that permit rapid determination of the relevant pKa of ionizable amino lipids. Two distinct approaches were investigated: (1) potentiometric titration of amino lipids dissolved in neutral surfactant micelles; and (2) pH-dependent partitioning of a fluorescent dye to cationic liposomes formulated from amino lipids. Using the approaches developed here, the pKa values of cationic lipids with distinct headgroups were measured and found to be significantly lower than calculated values. It was also found that lipid-lipid interaction has a strong impact on the pKa values of lipids. Lysis of model biomembranes by cationic lipids was used to evaluate the impact of lipid pKa on the interaction between cationic lipids and cell membranes. It was found that the cationic lipid-biomembrane interaction depends strongly on lipid pKa and solution pH, and that this interaction is much stronger when the amino lipids are highly charged. The presence of an optimal pKa range of ionizable amino lipids for siRNA delivery was suggested based on these results. The pKa methods reported here can be used to support SAR screens of cationic lipids for siRNA delivery, and the information revealed by studying the impact of pKa on the interaction between cationic lipids and cell membranes will contribute significantly to the design of more efficient siRNA delivery vehicles.
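
    A minimal sketch of approach (2) follows: the pH-dependent partitioning signal of a fluorescent dye is fitted to a Henderson-Hasselbalch sigmoid to extract an apparent pKa. All numbers are mock values for illustration; the dye, buffer series, and exact fitting model used in the paper may differ.

```python
# Sketch: apparent lipid pKa from a pH-dependent dye-partitioning titration.
import numpy as np
from scipy.optimize import curve_fit

def hh(pH, pKa, f_max):
    # Henderson-Hasselbalch sigmoid: signal tracks the protonated (cationic) fraction
    return f_max / (1.0 + 10.0 ** (pH - pKa))

pH = np.linspace(3.5, 9.5, 13)                            # buffer series
rng = np.random.default_rng(0)
signal = hh(pH, 6.4, 1.0) + rng.normal(0, 0.02, pH.size)  # mock data, true pKa = 6.4

(pKa_fit, f_fit), _ = curve_fit(hh, pH, signal, p0=[7.0, 1.0])
print(f"apparent pKa = {pKa_fit:.2f}")                    # ~6.4
```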

  4. Electrochemical modelling of QD-phospholipid interactions.

    PubMed

    Zhang, Shengwen; Chen, Rongjun; Malhotra, Girish; Critchley, Kevin; Vakurov, Alexander; Nelson, Andrew

    2014-04-15

    The aggregation of quantum dots (QDs) and the capping of individual QDs affect their activity towards biomembrane models. Electrochemical methods using a phospholipid monolayer on mercury (Hg) as a membrane model have been used to determine the phospholipid monolayer activity of thioglycollic acid (TGA)-coated quantum dots (QDs) as an indicator of biomembrane activity. The particles were characterised for size and charge. The activity of the QDs towards dioleoyl phosphatidylcholine (DOPC) monolayers is pH dependent and is greatest at pH 8.2 within the pH range 8.2-6.5 examined in this work. This pH-dependent activity is the result of increased particle aggregation coupled with decreasing surface charge emanating from the TGA carboxylic groups employed to stabilize the QD dispersion in aqueous media. Capping the QDs with CdS/ZnS lowers the particles' activity towards phospholipid monolayers. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Effect of mucoadhesive polymers on the in vitro performance of insulin-loaded silica nanoparticles: Interactions with mucin and biomembrane models.

    PubMed

    Andreani, Tatiana; Miziara, Leonardo; Lorenzón, Esteban N; de Souza, Ana Luiza R; Kiill, Charlene P; Fangueiro, Joana F; Garcia, Maria L; Gremião, Palmira D; Silva, Amélia M; Souto, Eliana B

    2015-06-01

    The present paper focuses on the development and characterization of silica nanoparticles (SiNP) coated with hydrophilic polymers as mucoadhesive carriers for oral administration of insulin. SiNP were prepared by sol-gel technology under mild conditions and coated with different hydrophilic polymers, namely, chitosan, sodium alginate or poly(ethylene glycol) (PEG) with low and high molecular weight (PEG 6000 and PEG 20000) to increase the residence time at intestinal mucosa. The mean size and size distribution, association efficiency, insulin structure and insulin thermal denaturation have been determined. The mean nanoparticle diameter ranged from 289 nm to 625 nm with a PI between 0.251 and 0.580. The insulin association efficiency in SiNP was recorded above 70%. After coating, the association efficiency of insulin increased up to 90%, showing the high affinity of the protein to the hydrophilic polymer chains. Circular dichroism (CD) indicated that no conformation changes of insulin structure occurred after loading the peptide into SiNP. Nano-differential scanning calorimetry (nDSC) showed that SiNP shifted the insulin endothermic peak to higher temperatures. The influence of coating on the interaction of nanoparticles with dipalmitoylphosphatidylcholine (DPPC) biomembrane models was also evaluated by nDSC. The increase of ΔH values suggested a strong association of non-coated SiNP and those PEGylated nanoparticles coated with DPPC polar heads by forming hydrogen bonds and/or by electrostatic interaction. The mucoadhesive properties of nanoparticles were examined by studying the interaction with mucin in aqueous solution. SiNP coated with alginate or chitosan showed high contact with mucin. On the other hand, non-coated SiNP and PEGylated SiNP showed lower interaction with mucin, indicating that these nanoparticles can interdiffuse across mucus network. The results of the present work provide valuable data in assessing the in vitro performance of insulin

  6. Lab on a Biomembrane: Rapid prototyping and manipulation of 2D fluidic lipid bilayers circuits

    PubMed Central

    Ainla, Alar; Gözen, Irep; Hakonen, Bodil; Jesorka, Aldo

    2013-01-01

    Lipid bilayer membranes are among the most ubiquitous structures in the living world, with intricate structural features and a multitude of biological functions. It is attractive to recreate these structures in the laboratory, as this allows mimicking and studying the properties of biomembranes and their constituents, and to specifically exploit the intrinsic two-dimensional fluidity. Even though diverse strategies for membrane fabrication have been reported, the development of related applications and technologies has been hindered by the unavailability of both versatile and simple methods. Here we report a rapid prototyping technology for two-dimensional fluidic devices, based on in-situ generated circuits of phospholipid films. In this “lab on a molecularly thin membrane”, various chemical and physical operations, such as writing, erasing, functionalization, and molecular transport, can be applied to user-defined regions of a membrane circuit. This concept is an enabling technology for research on molecular membranes and their technological use. PMID:24067786

  7. Surfactant-free purification of membrane protein complexes from bacteria: application to the staphylococcal penicillin-binding protein complex PBP2/PBP2a

    NASA Astrophysics Data System (ADS)

    Paulin, Sarah; Jamshad, Mohammed; Dafforn, Timothy R.; Garcia-Lara, Jorge; Foster, Simon J.; Galley, Nicola F.; Roper, David I.; Rosado, Helena; Taylor, Peter W.

    2014-07-01

    Surfactant-mediated removal of proteins from biomembranes invariably results in partial or complete loss of function and disassembly of multi-protein complexes. We determined the capacity of styrene-co-maleic acid (SMA) co-polymer to remove components of the cell division machinery from the membrane of drug-resistant staphylococcal cells. SMA-lipid nanoparticles solubilized FtsZ-PBP2-PBP2a complexes from intact cells, demonstrating the close physical proximity of these proteins within the lipid bilayer. Exposure of bacteria to (-)-epicatechin gallate, a polyphenolic agent that abolishes β-lactam resistance in staphylococci, disrupted the association between PBP2 and PBP2a. Thus, SMA purification provides a means to remove native integral membrane protein assemblages with minimal physical disruption and shows promise as a tool for the interrogation of molecular aspects of bacterial membrane protein structure and function.

  8. Importance of Force Decomposition for Local Stress Calculations in Biomembrane Molecular Simulations.

    PubMed

    Vanegas, Juan M; Torres-Sánchez, Alejandro; Arroyo, Marino

    2014-02-11

    Local stress fields are routinely computed from molecular dynamics trajectories to understand the structure and mechanical properties of lipid bilayers. These calculations can be systematically understood with the Irving-Kirkwood-Noll theory. In identifying the stress tensor, a crucial step is the decomposition of the forces on the particles into pairwise contributions. However, such a decomposition is not unique in general, leading to an ambiguity in the definition of the stress tensor, particularly for multibody potentials. Furthermore, a theoretical treatment of constraints in local stress calculations has been lacking. Here, we present a new implementation of local stress calculations that systematically treats constraints and considers a privileged decomposition, the central force decomposition, that leads to a symmetric stress tensor by construction. We focus on biomembranes, although the methodology presented here is widely applicable. Our results show that some unphysical behavior obtained with previous implementations (e.g. nonconstant normal stress profiles along an isotropic bilayer in equilibrium) is a consequence of an improper treatment of constraints. Furthermore, other valid force decompositions produce significantly different stress profiles, particularly in the presence of dihedral potentials. Our methodology reveals the striking effect of unsaturations on the bilayer mechanics, missed by previous stress calculation implementations.
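
    As a worked sketch of why the central force decomposition (CFD) gives a symmetric stress by construction, the key identities can be written as below. Notation and sign conventions are assumed here (a common statement of the Irving-Kirkwood-Noll potential term), not necessarily the authors' exact formulation.

```latex
% Pairwise decomposition of particle forces; the CFD additionally requires each
% pair term to act along the interparticle axis (phi_ij a scalar):
\[
  \mathbf{F}_i = \sum_{j \neq i} \mathbf{f}_{ij}, \qquad
  \mathbf{f}_{ij} = -\mathbf{f}_{ji}, \qquad
  \text{CFD:}\quad \mathbf{f}_{ij} = \varphi_{ij}\,
    \frac{\mathbf{r}_j - \mathbf{r}_i}{\lVert \mathbf{r}_j - \mathbf{r}_i \rVert}.
\]
% Potential (virial) part of the local stress, with each pair contribution spread
% along the straight segment joining the particles (sign convention assumed):
\[
  \boldsymbol{\sigma}^{\mathrm{pot}}(\mathbf{x}) =
  \frac{1}{2} \sum_{i \neq j} \mathbf{f}_{ij} \otimes (\mathbf{r}_j - \mathbf{r}_i)
  \int_0^1 \delta\big(\mathbf{x} - \mathbf{r}_i - s(\mathbf{r}_j - \mathbf{r}_i)\big)\,\mathrm{d}s.
\]
% Under the CFD, f_ij is parallel to (r_j - r_i), so each dyad above is a
% symmetric tensor and sigma^pot is symmetric by construction.
```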

  9. Amphiphilic naproxen prodrugs: differential scanning calorimetry study on their interaction with phospholipid bilayers.

    PubMed

    Giuffrida, Maria Chiara; Pignatello, Rosario; Castelli, Francesco; Sarpietro, Maria Grazia

    2017-09-01

    Naproxen, a nonsteroidal anti-inflammatory drug studied for Alzheimer's disease, was conjugated with lipoamino acids (LAA), either directly or through a diethylamine (EDA) spacer, to improve the drug's lipophilicity and its interaction with phospholipid bilayers. The interaction of naproxen and its prodrugs with biomembrane models consisting of dimyristoylphosphatidylcholine multilamellar vesicles was studied by differential scanning calorimetry. The transfer of the prodrugs from a lipophilic carrier to a biomembrane model was also studied. Naproxen conjugation to lipoamino acids improves its interaction with biomembrane models and affects the transfer from a lipophilic carrier to the biomembrane model. The LAA portion may localize between the phospholipid chains; the extent of the interaction depends not only on the presence of the spacer but also on the LAA chain length. Variation of the LAA portion can modulate the affinity of naproxen prodrugs towards the biological membrane as well as towards the lipophilic carrier. © 2017 Royal Pharmaceutical Society.

  10. On the Way to Appropriate Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, M.

    2016-12-01

    When statistical models are used to represent natural phenomena they are often too simple or too complex - this is well known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: a unique definition of model complexity is missing. Assuming a definition is accepted, how can model complexity be quantified? And how can we use a quantified complexity to the benefit of modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: with more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) an explicit representation of model complexity as the "degrees of freedom" of a model, e.g., the effective number of parameters; (2) model complexity as code length, a.k.a. "Kolmogorov complexity": the longer the shortest model code, the higher its complexity (e.g., in bits); (3) complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes' theorem allows for incorporating all parts of the non-static concept of model complexity, such as data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
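
    The three categories above can be made concrete with toy calculations; the sketch below contrasts category (1), degrees of freedom via an AIC-style parameter penalty, with category (3), Shannon entropy of a discretized predictive distribution. All numbers are illustrative, not from the abstract.

```python
# Two toy complexity measures: parameter-count penalty vs predictive entropy.
import numpy as np

def aic(log_likelihood: float, k_params: int) -> float:
    # (1) "degrees of freedom" view: AIC = -2 ln L + 2k
    return -2.0 * log_likelihood + 2.0 * k_params

def predictive_entropy(prob: np.ndarray) -> float:
    # (3) entropy of predictive uncertainty, in bits
    p = prob / prob.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(aic(log_likelihood=-120.3, k_params=5))                    # illustrative
print(predictive_entropy(np.array([0.1, 0.2, 0.4, 0.2, 0.1])))   # ~2.12 bits
```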

  11. Lipid Biomembrane in Ionic Liquids

    NASA Astrophysics Data System (ADS)

    Yoo, Brian; Jing, Benxin; Shah, Jindal; Maginn, Ed; Zhu, Y. Elaine; Department of Chemical and Biomolecular Engineering Team

    2014-03-01

    Ionic liquids (ILs) have recently been explored as new "green" chemicals in several chemical and biomedical processes. In our pursuit of understanding their toxicities towards aquatic and terrestrial organisms, we have examined the interaction of ILs with lipid bilayers as model cell membranes. Experimentally, by fluorescence microscopy, we have directly observed the disruption of lipid bilayers by added ILs. Depending on the concentration, alkyl chain length, and anion hydrophobicity of the ILs, the interaction of ILs with lipid bilayers leads to the formation of micelles, fibrils, and multi-lamellar vesicles of IL-lipid complexes. By MD computer simulations, we have confirmed the insertion of ILs into lipid bilayers to modify the spatial organization of lipids in the membrane. The combined experimental and simulation results correlate well with bioassay results of IL-induced suppression of bacterial growth, thereby suggesting a possible mechanism behind IL toxicity. National Science Foundation, Center for Research Computing at Notre Dame.

  12. Surface complexation modeling

    USDA-ARS?s Scientific Manuscript database

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  13. Nanomaterial interactions with biomembranes: Bridging the gap between soft matter models and biological context.

    PubMed

    Werner, Marco; Auth, Thorsten; Beales, Paul A; Fleury, Jean Baptiste; Höök, Fredrik; Kress, Holger; Van Lehn, Reid C; Müller, Marcus; Petrov, Eugene P; Sarkisov, Lev; Sommer, Jens-Uwe; Baulin, Vladimir A

    2018-04-03

    Synthetic polymers, nanoparticles, and carbon-based materials have great potential in applications including drug delivery, gene transfection, in vitro and in vivo imaging, and the alteration of biological function. Nature and humans use different design strategies to create nanomaterials: biological objects have emerged from billions of years of evolution and from adaptation to their environment resulting in high levels of structural complexity; in contrast, synthetic nanomaterials result from minimalistic but controlled design options limited by the authors' current understanding of the biological world. This conceptual mismatch makes it challenging to create synthetic nanomaterials that possess desired functions in biological media. In many biologically relevant applications, nanomaterials must enter the cell interior to perform their functions. An essential transport barrier is the cell-protecting plasma membrane and hence the understanding of its interaction with nanomaterials is a fundamental task in biotechnology. The authors present open questions in the field of nanomaterial interactions with biological membranes, including: how physical mechanisms and molecular forces acting at the nanoscale restrict or inspire design options; which levels of complexity to include next in computational and experimental models to describe how nanomaterials cross barriers via passive or active processes; and how the biological media and protein corona interfere with nanomaterial functionality. In this Perspective, the authors address these questions with the aim of offering guidelines for the development of next-generation nanomaterials that function in biological media.

  14. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    PubMed

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded, and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant, and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds in the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components: a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions, and a patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity, encompassing both task and patient complexity, will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
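
    For readers unfamiliar with the reliability statistic, a minimal Cohen's kappa for two raters coding the same attributes looks like the sketch below; the labels and codes are made up, and the paper's actual coding scheme has many more categories.

```python
# Minimal two-rater Cohen's kappa: observed agreement corrected for chance.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[lbl] * cb[lbl] for lbl in set(ca) | set(cb)) / (n * n)
    return (p_obs - p_exp) / (1.0 - p_exp)

a = ["task", "patient", "task", "task", "patient", "task"]
b = ["task", "patient", "patient", "task", "patient", "task"]
print(f"kappa = {cohens_kappa(a, b):.2f}")   # 0.67 for these mock codes
```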

  15. Benzocaine complexation with p-sulfonic acid calix[n]arene: experimental (1H NMR) and theoretical approaches.

    PubMed

    Arantes, Lucas M; Varejão, Eduardo V V; Pelizzaro-Rocha, Karin J; Cereda, Cíntia M S; de Paula, Eneida; Lourenço, Maicon P; Duarte, Hélio A; Fernandes, Sergio A

    2014-05-01

    The aim of this work was to study the interaction between the local anesthetic benzocaine and p-sulfonic acid calix[n]arenes using NMR and theoretical calculations, and to assess the effects of complexation on the cytotoxicity of benzocaine. The architectures of the complexes were proposed according to 1H NMR data (Job plot, binding constants, and ROESY), indicating details of the insertion of benzocaine into the cavity of the calix[n]arenes. The proposed inclusion compounds were optimized using the PM3 semiempirical method, and the electronic plus nuclear repulsion energy contributions were computed at the DFT level using the PBE exchange/correlation functional and the 6-311G(d) basis set. The remarkable agreement between the experimental and theoretical approaches supports their use in the structural characterization of the inclusion complexes. In vitro cytotoxicity tests showed that complexation intensifies the intrinsic toxicity of benzocaine, possibly by increasing the water solubility of the anesthetic and favoring its partitioning into biomembranes. © 2013 John Wiley & Sons A/S.
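
    For context, a 1:1 host-guest binding constant of the kind reported from such titrations can be extracted by fitting the observed shift change to the exact 1:1 binding isotherm, as sketched below with invented numbers (the real analysis may use a different concentration regime and fitting package).

```python
# Sketch: Ka and limiting shift for a 1:1 complex from NMR titration data.
import numpy as np
from scipy.optimize import curve_fit

G0 = 1.0e-3  # fixed guest (benzocaine) concentration, mol/L (assumed)

def delta_obs(H0, Ka, d_max):
    # exact 1:1 isotherm: complex concentration from the quadratic binding equation
    b = G0 + H0 + 1.0 / Ka
    HG = 0.5 * (b - np.sqrt(b * b - 4.0 * G0 * H0))
    return d_max * HG / G0

H0 = np.linspace(0.2e-3, 8e-3, 10)       # host (calixarene) series, mol/L
dd = delta_obs(H0, 2.5e3, 0.35)          # mock titration curve, Ka = 2500 1/M

(Ka_fit, dmax_fit), _ = curve_fit(delta_obs, H0, dd, p0=[1.0e3, 0.3])
print(f"Ka = {Ka_fit:.0f} M^-1, d_max = {dmax_fit:.2f} ppm")
```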

  16. Modeling complexes of modeled proteins.

    PubMed

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å Cα RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and that template-based docking is much less sensitive to inaccuracies of protein models than free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.

  17. Simulations of simple Bovine and Homo sapiens outer cortex ocular lens membrane models with a majority concentration of cholesterol.

    PubMed

    Adams, Mark; Wang, Eric; Zhuang, Xiaohong; Klauda, Jeffery B

    2017-11-21

    The lipid composition of bovine and human ocular lens membranes has been probed, and a variety of lipids have been found, including phosphatidylcholine (PC), phosphatidylethanolamine (PE), sphingomyelin (SM), and cholesterol (CHOL), with cholesterol present in particularly high concentrations. In this study, we use the all-atom CHARMM36 force field to simulate binary, ternary, and quaternary mixtures as models of the ocular lens. The high concentration of cholesterol, in combination with a varying diversity of phospholipids (PL) and sphingolipids (SL), affects the structure of the ocular lens lipid bilayer. The following analyses were done for each simulation: surface area per lipid, component surface area per lipid, deuterium order parameters (S_CD), electron density profiles (EDP), membrane thickness, hydrogen bonding, radial distribution functions, clustering, and sterol tilt angle distribution. The S_CD profiles show significant bilayer alignment and packing in cholesterol-rich bilayers. The EDP show the transition from liquid crystalline to liquid ordered with the addition of cholesterol. Hydrogen bonds in our systems show the tendency for intramolecular interactions between cholesterol and fully saturated lipid tails in less complex bilayers; with an increased number of components in the bilayer, the acyl chain of the lipids becomes a less important characteristic and the headgroup of the lipid becomes more significant. Overall, cholesterol is the driving force of the membrane structure of the ocular lens, where interactions between cholesterol, PL, and SL determine the structure and function of the biomembrane. The goal of this work is to develop a baseline for further study of more physiologically realistic ocular lens lipid membranes. This article is part of a Special Issue entitled: Emergence of Complex Behavior in Biomembranes edited by Marjorie Longo. Copyright © 2017 Elsevier B.V. All rights reserved.
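
    As a pointer to how one of these observables is computed, the deuterium order parameter is S_CD = <(3 cos^2 θ - 1)/2>, with θ the angle between a C-H (or C-D) bond vector and the bilayer normal. The sketch below evaluates it for mock bond vectors; a real analysis would read the vectors from the trajectory frame by frame.

```python
# Sketch: deuterium order parameter from C-H bond vectors (z = bilayer normal).
import numpy as np

def s_cd(ch_vectors: np.ndarray) -> float:
    v = ch_vectors / np.linalg.norm(ch_vectors, axis=1, keepdims=True)
    cos_theta = v[:, 2]                      # component along the bilayer normal
    return float(np.mean(1.5 * cos_theta**2 - 0.5))

rng = np.random.default_rng(0)
vecs = rng.normal(size=(10_000, 3))          # mock vectors; trajectory data in practice
print(f"S_CD = {s_cd(vecs):+.3f}")           # ~0 for isotropic orientations
```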

  18. Anti-inflammatory and Antibacterial Effects of Covalently Attached Biomembrane-Mimic Polymer Grafts on Gore-Tex Implants.

    PubMed

    Jin, Young Ju; Kang, Sunah; Park, Pona; Choi, Dongkil; Kim, Dae Woo; Jung, Dongwook; Koh, Jaemoon; Jeon, Joohee; Lee, Myoungjin; Ham, Jiyeon; Seo, Ji-Hun; Jin, Hong-Ryul; Lee, Yan

    2017-06-07

    Expanded polytetrafluoroethylene (ePTFE), also known as Gore-Tex, is widely used as an implantable biomaterial in biomedical applications because of its favorable mechanical properties and biochemical inertness. However, infection and inflammation are two major complications with ePTFE implantations, because pathogenic bacteria can inhabit the microsized pores, without clearance by host immune cells, and the limited biocompatibility can induce foreign body reactions. To minimize these complications, we covalently grafted a biomembrane-mimic polymer, poly(2-methacryloyloxylethyl phosphorylcholine) (PMPC), by partial defluorination followed by UV-induced polymerization with cross-linkers on the ePTFE surface. PMPC grafting greatly reduced serum protein adsorption as well as fibroblast adhesion on the ePTFE surface. Moreover, the PMPC-grafted ePTFE surface exhibited a dramatic inhibition of the adhesion and growth of Staphylococcus aureus, a typical pathogenic bacterium in ePTFE implants, in the porous network. On the basis of an analysis of immune cells and inflammation-related factors, i.e., transforming growth factor-β (TGF-β) and myeloperoxidase (MPO), we confirmed that inflammation was efficiently alleviated in tissues around PMPC-grafted ePTFE plates implanted in the backs of rats. Covalent PMPC may be an effective strategy for promoting anti-inflammatory and antibacterial functions in ePTFE implants and to reduce side effects in biomedical applications of ePTFE.

  19. Teacher Modeling Using Complex Informational Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word-solving and comprehension strategies. Included is a planning guide for think-alouds.

  20. Multifaceted Modelling of Complex Business Enterprises

    PubMed Central

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  1. Multifaceted Modelling of Complex Business Enterprises.

    PubMed

    Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.

  2. Amelioration of oxidative stress in bio-membranes and macromolecules by non-toxic dye from Morinda tinctoria (Roxb.) roots.

    PubMed

    Bhakta, Dipita; Siva, Ramamoorthy

    2012-06-01

    Plant dyes have been in use for coloring and varied purposes since prehistoric times. A red dye found in the roots of plants belonging to the genus Morinda is a well recognized coloring ingredient. The dye fraction obtained from the methanolic extract of the roots of Morinda tinctoria was explored for its role in attenuating damage caused by H2O2-induced oxidative stress. The antioxidant potential of the dye fraction was assessed through DPPH radical scavenging, deoxyribose degradation, and inhibition of lipid peroxidation in mice liver. It was subsequently screened for its efficiency in attenuating damage incurred by biomembranes (using erythrocytes and their ghost membranes) and macromolecules (pBR322 DNA, lipids, and proteins) on exposure to hydrogen peroxide. In addition, the non-toxic nature of the dye was supported by the histological evaluation conducted on tissue sections from the major organs of Swiss Albino mice as well as by its effect on the Hep3B cell line (human hepatic carcinoma). LC-MS confirmed the dye fraction to be morindone. Our study strongly suggests that morindone, present in the root extracts of M. tinctoria, in addition to being a colorant, holds promise in the pharmaceutical industry. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Updating the debate on model complexity

    USGS Publications Warehouse

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  4. Statistical Determinants of Selective Ionic Complexation: Ions in Solvent, Transport Proteins, and Other “Hosts”

    PubMed Central

    Bostick, David L.; Brooks, Charles L.

    2009-01-01

    To provide utility in understanding the molecular evolution of ion-selective biomembrane channels/transporters, globular proteins, and ionophoric compounds, as well as in guiding their modification and design, we present a statistical mechanical basis for deconstructing the impact of the coordination structure and chemistry of selective multidentate ionic complexes. The deconstruction augments familiar ideas in liquid structure theory to realize the ionic complex as an open ion-ligated system acting under the influence of an “external field” provided by the host (or surrounding medium). Using considerations derived from this basis, we show that selective complexation arises from exploitation of a particular ion's coordination preferences. These preferences derive from a balance of interactions much like that which dictates the Hofmeister effect. By analyzing the coordination-state space of small family IA and VIIA ions in simulated fluid media, we derive domains of coordinated states that confer selectivity for a given ion upon isolating and constraining particular attributes (order parameters) of a complex comprised of a given type of ligand. We demonstrate that such domains may be used to rationalize the ion-coordinated environments provided by selective ionophores and biological ion channels/transporters of known structure, and that they can serve as a means toward deriving rational design principles for ion-selective hosts. PMID:19486671

  5. Bacillus subtilis Lipid Extract, A Branched-Chain Fatty Acid Model Membrane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nickels, Jonathan D.; Chatterjee, Sneha; Mostofian, Barmak

    Lipid extracts are an excellent choice of model biomembrane; however, at present, there are no commercially available lipid extracts or computational models that mimic microbial membranes containing the branched-chain fatty acids found in many pathogenic and industrially relevant bacteria. Here, we advance the extract of Bacillus subtilis as a standard model for these diverse systems, providing a detailed experimental description and an equilibrated atomistic bilayer model included as Supporting Information to this Letter and at (http://cmb.ornl.gov/members/cheng). The development and validation of this model represents an advance that enables more realistic simulations and experiments on bacterial membranes and reconstituted bacterial membrane proteins.

  6. Predictive Surface Complexation Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sverjensky, Dimitri A.

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  7. Are Model Transferability And Complexity Antithetical? Insights From Validation of a Variable-Complexity Empirical Snow Model in Space and Time

    NASA Astrophysics Data System (ADS)

    Lute, A. C.; Luce, Charles H.

    2017-11-01

    The related challenges of predictions in ungauged basins and predictions in ungauged climates point to the need to develop environmental models that are transferable across both space and time. Hydrologic modeling has historically focused on modeling one or only a few basins using highly parameterized conceptual or physically based models. However, model parameters and structures have been shown to change significantly when calibrated to new basins or time periods, suggesting that model complexity and model transferability may be antithetical. Empirical space-for-time models provide a framework within which to assess model transferability and any tradeoff with model complexity. Using 497 SNOTEL sites in the western U.S., we develop space-for-time models of April 1 SWE and Snow Residence Time based on mean winter temperature and cumulative winter precipitation. The transferability of the models to new conditions (in both space and time) is assessed using non-random cross-validation tests, with consideration of the influence of model complexity on transferability. As others have noted, the algorithmic empirical models transfer best when minimal extrapolation in input variables is required. Temporal split-sample validations use pseudoreplicated samples, resulting in the selection of overly complex models, which has implications for the design of hydrologic model validation tests. Finally, we show that low to moderate complexity models transfer most successfully to new conditions in space and time, providing empirical confirmation of the parsimony principle.
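
    The spirit of the non-random validation can be sketched as follows: fit a simple space-for-time regression on one climatic subset of stations and score it on a climatically distinct held-out subset, rather than on a random split. Everything below is synthetic; the paper's models and predictors are richer.

```python
# Sketch: space-for-time SWE model with a non-random (climate-based) holdout.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
temp = rng.uniform(-10.0, 4.0, 500)        # mean winter temperature per station (C)
precip = rng.uniform(200.0, 1500.0, 500)   # cumulative winter precipitation (mm)
swe = 0.6 * precip * (temp < 0) * (-temp / 10.0) + rng.normal(0, 30, 500)  # toy SWE

warm = temp > 0                            # hold out the warmest climates
X = np.c_[temp, precip]
model = LinearRegression().fit(X[~warm], swe[~warm])
print("R^2 on unseen warm climates:", model.score(X[warm], swe[warm]))
```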

  8. Balancing model complexity and measurements in hydrology

    NASA Astrophysics Data System (ADS)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

    The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model

  9. A Primer for Model Selection: The Decisive Role of Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, Marvin; Wöhling, Thomas; Nowak, Wolfgang

    2018-03-01

    Selecting a "best" model among several competing candidate models poses an often encountered problem in water resources modeling (and other disciplines which employ models). For a modeler, the best model fulfills a certain purpose best (e.g., flood prediction), which is typically assessed by comparing model simulations to data (e.g., stream flow). Model selection methods find the "best" trade-off between good fit with data and model complexity. In this context, the interpretations of model complexity implied by different model selection methods are crucial, because they represent different underlying goals of modeling. Over the last decades, numerous model selection criteria have been proposed, but modelers who primarily want to apply a model selection criterion often face a lack of guidance for choosing the right criterion that matches their goal. We propose a classification scheme for model selection criteria that helps to find the right criterion for a specific goal, i.e., which employs the correct complexity interpretation. We identify four model selection classes which seek to achieve high predictive density, low predictive error, high model probability, or shortest compression of data. These goals can be achieved by following either nonconsistent or consistent model selection and by either incorporating a Bayesian parameter prior or not. We allocate commonly used criteria to these four classes, analyze how they represent model complexity and what this means for the model selection task. Finally, we provide guidance on choosing the right type of criteria for specific model selection tasks. (A quick guide through all key points is given at the end of the introduction.)
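
    A toy comparison of two of the four classes makes the complexity interpretations tangible: AIC (targets low predictive error, nonconsistent) versus BIC (targets high model probability, consistent). The log-likelihoods and parameter counts below are invented for illustration.

```python
# Sketch: AIC vs BIC can rank the same candidate models differently.
import numpy as np

def aic(lnL, k):
    return -2.0 * lnL + 2.0 * k

def bic(lnL, k, n):
    return -2.0 * lnL + k * np.log(n)

candidates = {"simple": (-310.2, 3), "medium": (-301.5, 6), "complex": (-295.0, 12)}
n_obs = 250
for name, (lnL, k) in candidates.items():
    print(f"{name:8s} AIC={aic(lnL, k):7.1f}  BIC={bic(lnL, k, n_obs):7.1f}")
# Here AIC favors "complex" while BIC favors "medium": BIC penalizes parameters
# more strongly for large n, reflecting its different complexity interpretation.
```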

  10. Challenges in Developing Models Describing Complex Soil Systems

    NASA Astrophysics Data System (ADS)

    Simunek, J.; Jacques, D.

    2014-12-01

    Quantitative mechanistic models that consider basic physical, mechanical, chemical, and biological processes have the potential to be powerful tools for integrating our understanding of complex soil systems, and the soil science community has often called for models that would include a large number of these diverse processes. However, when attempts have been made to develop such models, the response from the community has not always been enthusiastic, especially once it was discovered that such models are necessarily highly complex. They require a large number of parameters, not all of which can be easily (or at all) measured and/or identified and which are often associated with large uncertainties, and they demand from their users deep knowledge of most of the implemented physical, mechanical, chemical, and biological processes. The real, or perceived, complexity of these models then discourages users from applying them even to relatively simple problems for which they would be perfectly adequate. Because of the nonlinear nature and chemical/biological complexity of soil systems, it is also virtually impossible to verify these types of models analytically, raising doubts about their applicability. Code intercomparison, which is then likely the most suitable method to assess code capabilities and model performance, requires the existence of multiple models with similar or overlapping capabilities, which may not always exist. It is thus a challenge not only to develop models describing complex soil systems, but also to persuade the soil science community to use them. As a result, complex quantitative mechanistic models remain an underutilized tool in soil science research. We will demonstrate some of the challenges discussed above using our own efforts in developing quantitative mechanistic models (such as HP1/2) for complex soil systems.

  11. Epidemic modeling in complex realities.

    PubMed

    Colizza, Vittoria; Barthélemy, Marc; Barrat, Alain; Vespignani, Alessandro

    2007-04-01

    In our global world, the increasing complexity of social relations and transport infrastructures is a key factor in the spread of epidemics. In recent years, the increasing availability of computer power has enabled researchers both to obtain reliable data quantifying the complexity of the networks on which epidemics may propagate and to envision computational tools able to tackle the analysis of such propagation phenomena. These advances have highlighted the limits of homogeneous assumptions and simple spatial diffusion approaches, and have stimulated the inclusion of complex features and heterogeneities relevant to the description of epidemic diffusion. In this paper, we review recent progress that integrates complex systems and network analysis with epidemic modeling, and we focus on the impact of the various complex features of real systems on the dynamics of epidemic spreading.
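
    A minimal network-epidemic sketch below shows the kind of computational tool meant here: a discrete-time SIR process on a heavy-tailed contact network, where hubs accelerate spread in ways homogeneous mixing misses. The parameters and the network itself are arbitrary illustrations.

```python
# Toy SIR on a scale-free contact network (homogeneous mixing would miss hubs).
import random
import networkx as nx

random.seed(7)
g = nx.barabasi_albert_graph(2000, 3, seed=42)  # heavy-tailed degree distribution
beta, gamma = 0.08, 0.05                         # infection / recovery probabilities
state = {n: "S" for n in g}
for seed_node in random.sample(list(g), 5):      # five initial infecteds
    state[seed_node] = "I"

for _ in range(200):                             # synchronous time steps
    nxt = dict(state)
    for node, s in state.items():
        if s == "I":
            for nb in g[node]:
                if state[nb] == "S" and random.random() < beta:
                    nxt[nb] = "I"
            if random.random() < gamma:
                nxt[node] = "R"
    state = nxt

print("final recovered fraction:",
      sum(s == "R" for s in state.values()) / len(state))
```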

  12. Interaction of Local Anesthetics with Biomembranes Consisting of Phospholipids and Cholesterol: Mechanistic and Clinical Implications for Anesthetic and Cardiotoxic Effects

    PubMed Central

    2013-01-01

    Despite a long history of medical and dental application, the molecular mechanism and precise site of action of local anesthetics are still debated. Their effects are considered to be induced by acting on functional proteins, on membrane lipids, or on both. Local anesthetics primarily interact with sodium channels embedded in cell membranes to reduce the excitability of nerve cells and cardiomyocytes or to produce a malfunction of the cardiovascular system. However, the membrane protein-interacting theory cannot explain all of the pharmacological and toxicological features of local anesthetics. The administered drug molecules must diffuse through the lipid barriers of nerve sheaths and penetrate into or across the lipid bilayers of cell membranes to reach the site of action on transmembrane proteins. Amphiphilic local anesthetics interact hydrophobically and electrostatically with lipid bilayers and modify their physicochemical properties, directly inhibiting membrane functions, altering the membrane lipid environment surrounding transmembrane proteins, and inducing the protein conformational changes that lead to the inhibition of channel functions. We review recent studies on the interaction of local anesthetics with biomembranes consisting of phospholipids and cholesterol. Understanding the membrane interactivity of local anesthetics would provide novel insights into their anesthetic and cardiotoxic effects. PMID:24174934

  13. Effect of head group orientation on phospholipid assembly

    NASA Astrophysics Data System (ADS)

    Paul, Tanay; Saha, Jayashree

    2017-06-01

    The relationship between bilayer stability and lipid head group orientation is reported. In this work, molecular-dynamics simulations are performed to analyze the structure-property relationship of lipid biomembranes, taking into account coarse-grained model lipid interactions. The work explains the molecular scale mechanism of the phase behavior of lipid systems due to ion-lipid or anesthetic-lipid interactions, where reorientations of dipoles play a key role in modifying lipid phases and thereby alter biomembrane function. Our study demonstrates that simple dipolar reorientation is indeed sufficient in tuning a bilayer to a randomly flipped nonbilayer lamellar phase. This study may be used to assess the impact of changes in lipid phase characteristics on biomembrane structure due to the presence of anesthetics and ions.

  14. Development and application of coarse-grained models for lipids

    NASA Astrophysics Data System (ADS)

    Cui, Qiang

    2013-03-01

    I'll discuss a number of topics that represent our efforts in developing reliable molecular models for describing chemical and physical processes involving biomembranes. This is an exciting yet challenging research area because of the multiple length and time scales that are present in the relevant problems. Accordingly, we attempt to (1) understand the value and limitations of popular coarse-grained (CG) models for lipid membranes with either a particle or continuum representation; and (2) develop new CG models that are appropriate for the particular problem of interest. As specific examples, I'll discuss (1) a comparison of atomistic, MARTINI (a particle-based CG model) and continuum descriptions of a membrane fusion pore; and (2) the development of a modified MARTINI model (BMW-MARTINI) that features a reliable description of membrane/water interfacial electrostatics, and its application to cell-penetrating peptides and membrane-bending proteins. Motivated specifically by the recent studies of Wong and co-workers, we compare the self-assembly behaviors of lipids with cationic peptides that include either Arg residues or a combination of Lys and hydrophobic residues; in particular, we attempt to reveal factors that stabilize the cubic "double diamond" Pn3m phase over the inverted hexagonal HII phase. For example, to explicitly test the importance of the bidentate hydrogen-bonding capability of Arg to the stabilization of negative Gaussian curvature, we also compare results using variants of the BMW-MARTINI model that treat the side chain of Arg with different levels of detail. Collectively, the results suggest that both the bidentate feature of Arg and the overall electrostatic properties of cationic peptides are important to the self-assembly behavior of these peptides with lipids. The results are expected to have general implications for the mechanism of peptides and proteins that stimulate pore formation in biomembranes. Work in collaboration with Zhe Wu and Leili Zhang.

  15. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
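
    The central data structure of structure-oriented lumping lends itself to a compact illustration. The sketch below is a toy rendition under stated assumptions: the group names and molecular-weight increments are invented for illustration and are not the published SOL group set.

```python
# Toy rendition of the structure-oriented lumping idea: a component is a
# vector of counts of repeating structural groups, and bulk properties
# follow from group-contribution sums. Group names and increments here
# are illustrative, not the published SOL set.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    counts: dict = field(default_factory=dict)   # structural group -> count

    def estimate(self, increments):
        """Group-contribution estimate: sum over groups of count * increment."""
        return sum(n * increments.get(g, 0.0) for g, n in self.counts.items())

# Hypothetical molecular-weight increments per group (g/mol).
MW = {"A6": 78.0, "R": 14.0, "me": 15.0}         # aromatic ring, chain CH2, methyl

toluene = Component("toluene", {"A6": 1, "me": 1})
print(toluene.estimate(MW))                      # ~93 under these toy increments
```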

  16. Refiners Switch to RFG Complex Model

    EIA Publications

    1998-01-01

    On January 1, 1998, domestic and foreign refineries and importers must stop using the "simple" model and begin using the "complex" model to calculate emissions of volatile organic compounds (VOC), toxic air pollutants (TAP), and nitrogen oxides (NOx) from motor gasoline. The primary differences in the application of the two models are that some refineries may have to meet stricter standards for the sulfur and olefin content of the reformulated gasoline (RFG) they produce and that all refineries will now be held accountable for NOx emissions. Requirements for calculating emissions from conventional gasoline under the anti-dumping rule change similarly for exhaust TAP and NOx. However, the change to the complex model is not expected to result in an increase in the price premium for RFG or to constrain supplies.

  17. Complexity-aware simple modeling.

    PubMed

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models, as well as their assumption of modularity and insulation, makes them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches.

  18. Trends in modeling Biomedical Complex Systems

    PubMed Central

    Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Pietro

    2009-01-01

    In this paper we provide an introduction to techniques for modeling multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools, and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional, and information-driven. The most important concepts in mathematical modeling methodologies and statistical inference, bioinformatics, and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068

  19. Elements of complexity in subsurface modeling, exemplified with three case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Freedman, Vicky L.; Truex, Michael J.; Rockhold, Mark

    2017-04-03

    There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this paper, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: 1) modeling approach, 2) description of process, and 3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil vapor extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.

  20. Elements of complexity in subsurface modeling, exemplified with three case studies

    NASA Astrophysics Data System (ADS)

    Freedman, Vicky L.; Truex, Michael J.; Rockhold, Mark L.; Bacon, Diana H.; Freshley, Mark D.; Wellman, Dawn M.

    2017-09-01

    There are complexity elements to consider when applying subsurface flow and transport models to support environmental analyses. Modelers balance the benefits and costs of modeling along the spectrum of complexity, taking into account the attributes of more simple models (e.g., lower cost, faster execution, easier to explain, less mechanistic) and the attributes of more complex models (higher cost, slower execution, harder to explain, more mechanistic and technically defensible). In this report, modeling complexity is examined with respect to considering this balance. The discussion of modeling complexity is organized into three primary elements: (1) modeling approach, (2) description of process, and (3) description of heterogeneity. Three examples are used to examine these complexity elements. Two of the examples use simulations generated from a complex model to develop simpler models for efficient use in model applications. The first example is designed to support performance evaluation of soil-vapor-extraction remediation in terms of groundwater protection. The second example investigates the importance of simulating different categories of geochemical reactions for carbon sequestration and selecting appropriate simplifications for use in evaluating sequestration scenarios. In the third example, the modeling history for a uranium-contaminated site demonstrates that conservative parameter estimates were inadequate surrogates for complex, critical processes and there is discussion on the selection of more appropriate model complexity for this application. All three examples highlight how complexity considerations are essential to create scientifically defensible models that achieve a balance between model simplification and complexity.

  1. Genotypic Complexity of Fisher’s Geometric Model

    PubMed Central

    Hwang, Sungmin; Park, Su-Chan; Krug, Joachim

    2017-01-01

    Fisher’s geometric model was originally introduced to argue that complex adaptations must occur in small steps because of pleiotropic constraints. When supplemented with the assumption of additivity of mutational effects on phenotypic traits, it provides a simple mechanism for the emergence of genotypic epistasis from the nonlinear mapping of phenotypes to fitness. Of particular interest is the occurrence of reciprocal sign epistasis, which is a necessary condition for multipeaked genotypic fitness landscapes. Here we compute the probability that a pair of randomly chosen mutations interacts sign epistatically, which is found to decrease with increasing phenotypic dimension n, and varies nonmonotonically with the distance from the phenotypic optimum. We then derive expressions for the mean number of fitness maxima in genotypic landscapes comprised of all combinations of L random mutations. This number increases exponentially with L, and the corresponding growth rate is used as a measure of the complexity of the landscape. The dependence of the complexity on the model parameters is found to be surprisingly rich, and three distinct phases characterized by different landscape structures are identified. Our analysis shows that the phenotypic dimension, which is often referred to as phenotypic complexity, does not generally correlate with the complexity of fitness landscapes and that even organisms with a single phenotypic trait can have complex landscapes. Our results further inform the interpretation of experiments where the parameters of Fisher’s model have been inferred from data, and help to elucidate which features of empirical fitness landscapes can be described by this model. PMID:28450460
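
    The pairwise sign-epistasis probability discussed above can be estimated directly by simulation. The sketch below is a minimal Monte Carlo version under common FGM assumptions (Gaussian fitness, i.i.d. Gaussian mutation vectors, wild type at distance d from the optimum); the paper's exact parameterization may differ, and the values of n, d, and sigma are illustrative.

```python
# Monte Carlo estimate of reciprocal sign epistasis between two random
# mutations in Fisher's geometric model. Assumptions (illustrative, not the
# paper's exact setup): phenotypes in R^n, Gaussian fitness w(z) = exp(-|z|^2),
# i.i.d. Gaussian mutation vectors, wild type at distance d from the optimum.
import numpy as np

rng = np.random.default_rng(0)

def fitness(z):
    return np.exp(-np.dot(z, z))

def reciprocal_sign_epistasis(n=3, d=1.0, sigma=0.5):
    z = np.zeros(n)
    z[0] = d                                     # wild type; optimum at the origin
    a, b = sigma * rng.standard_normal((2, n))
    da_wt = fitness(z + a) - fitness(z)          # effect of a on the wild type
    da_b = fitness(z + a + b) - fitness(z + b)   # effect of a on the b background
    db_wt = fitness(z + b) - fitness(z)
    db_a = fitness(z + a + b) - fitness(z + a)
    return (da_wt * da_b < 0) and (db_wt * db_a < 0)   # both effects flip sign

p = np.mean([reciprocal_sign_epistasis() for _ in range(20000)])
print(f"P(reciprocal sign epistasis) ~ {p:.3f}")
```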

  2. Cumulative complexity: a functional, patient-centered model of patient complexity can improve research and practice.

    PubMed

    Shippee, Nathan D; Shah, Nilay D; May, Carl R; Mair, Frances S; Montori, Victor M

    2012-10-01

    To design a functional, patient-centered model of patient complexity with practical applicability to analytic design and clinical practice. Existing literature on patient complexity has mainly identified its components descriptively and in isolation, lacking clarity as to their combined functions in disrupting care or to how complexity changes over time. The authors developed a cumulative complexity model, which integrates existing literature and emphasizes how clinical and social factors accumulate and interact to complicate patient care. A narrative literature review is used to explicate the model. The model emphasizes a core, patient-level mechanism whereby complicating factors impact care and outcomes: the balance between patient workload of demands and patient capacity to address demands. Workload encompasses the demands on the patient's time and energy, including demands of treatment, self-care, and life in general. Capacity concerns ability to handle work (e.g., functional morbidity, financial/social resources, literacy). Workload-capacity imbalances comprise the mechanism driving patient complexity. Treatment and illness burdens serve as feedback loops, linking negative outcomes to further imbalances, such that complexity may accumulate over time. With its components largely supported by existing literature, the model has implications for analytic design, clinical epidemiology, and clinical practice.

  3. Metal Transport across Biomembranes: Emerging Models for a Distinct Chemistry*

    PubMed Central

    Argüello, José M.; Raimunda, Daniel; González-Guerrero, Manuel

    2012-01-01

    Transition metals are essential components of important biomolecules, and their homeostasis is central to many life processes. Transmembrane transporters are key elements controlling the distribution of metals in various compartments. However, due to their chemical properties, transition elements require transporters with different structural-functional characteristics from those of alkali and alkaline-earth ions. Emerging structural information and functional studies have revealed distinctive features of metal transport. Among these are the relevance of multifaceted events involving metal transfer among participating proteins, the importance of coordination geometry at transmembrane transport sites, and the presence of the largely irreversible steps associated with vectorial transport. Here, we discuss how these characteristics shape novel transition metal ion transport models. PMID:22389499

  4. Metal transport across biomembranes: emerging models for a distinct chemistry.

    PubMed

    Argüello, José M; Raimunda, Daniel; González-Guerrero, Manuel

    2012-04-20

    Transition metals are essential components of important biomolecules, and their homeostasis is central to many life processes. Transmembrane transporters are key elements controlling the distribution of metals in various compartments. However, due to their chemical properties, transition elements require transporters with different structural-functional characteristics from those of alkali and alkaline-earth ions. Emerging structural information and functional studies have revealed distinctive features of metal transport. Among these are the relevance of multifaceted events involving metal transfer among participating proteins, the importance of coordination geometry at transmembrane transport sites, and the presence of the largely irreversible steps associated with vectorial transport. Here, we discuss how these characteristics shape novel transition metal ion transport models.

  5. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory-efficient compared with a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications.
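
    The deterministic flavor of the model is easy to convey in miniature. The sketch below is a toy reduction under stated assumptions (nodes in the unit square, a single initial ripple, unit ripple speed, per-generation range decay); the published model's rules are richer, and this simplification yields a tree rather than a general topology.

```python
# Toy ripple-spreading generator: random spatial input deterministically
# fixes the topology. A ripple starting at node 0 spreads at unit speed;
# the first ripple to reach a node links it to that ripple's source, and
# the node emits its own, shorter-ranged ripple. These rules are a
# simplification of the published model, which supports richer link rules.
import heapq, math, random

def ripple_network(n=30, r0=0.5, decay=0.8, seed=1):
    random.seed(seed)
    pts = [(random.random(), random.random()) for _ in range(n)]
    d = lambda i, j: math.dist(pts[i], pts[j])
    reached, edges = {0}, []
    heap = [(d(0, j), 0, j, r0 * decay) for j in range(1, n) if d(0, j) <= r0]
    heapq.heapify(heap)
    while heap:
        t, src, tgt, r = heapq.heappop(heap)      # earliest ripple arrival first
        if tgt in reached:
            continue
        reached.add(tgt)
        edges.append((src, tgt))
        for j in range(n):                        # newly activated node ripples out
            if j not in reached and d(tgt, j) <= r:
                heapq.heappush(heap, (t + d(tgt, j), tgt, j, r * decay))
    return pts, edges

pts, edges = ripple_network()
print(f"{len(edges)} links among {len(pts)} nodes")
```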

  6. Modeling OPC complexity for design for manufacturability

    NASA Astrophysics Data System (ADS)

    Gupta, Puneet; Kahng, Andrew B.; Muddu, Swamy; Nakagawa, Sam; Park, Chul-Hong

    2005-11-01

    Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help preserve feature fidelity in silicon but increase mask complexity and cost. The increase in data volume with rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified as a solution for reducing mask complexity and cost in several recent works. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost, without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires. Design optimization should be aware of the impact of OPC correction levels on mask cost and performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present an MCC methodology that provides models of the fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers, as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count. Based on the fracture count (FC) data

  7. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-09-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty of the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy to alleviate this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
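
    The Lorenz-95 (often written Lorenz-96) system used as the test bed above is compact enough to reproduce in a few lines. The sketch below integrates the standard formulation with RK4; the state size (40), forcing (F = 8), and step size are the conventional choices, assumed here rather than taken from the paper.

```python
# Minimal Lorenz-96 ("Lorenz-95") test bed:
#   dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, indices cyclic,
# integrated with RK4. N = 40, F = 8, dt = 0.05 are conventional choices.
import numpy as np

def rhs(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4(x, dt=0.05, F=8.0):
    k1 = rhs(x, F)
    k2 = rhs(x + 0.5 * dt * k1, F)
    k3 = rhs(x + 0.5 * dt * k2, F)
    k4 = rhs(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

x = 8.0 * np.ones(40)
x[19] += 0.01                 # perturb the unstable fixed point x_i = F
for _ in range(1000):         # spin up onto the chaotic attractor
    x = rk4(x)
print(x[:5])                  # an emulator would be trained on runs like this
```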

  8. Coupling of lipid membrane elasticity and in-plane dynamics

    NASA Astrophysics Data System (ADS)

    Tsang, Kuan-Yu; Lai, Yei-Chen; Chiang, Yun-Wei; Chen, Yi-Fan

    2017-07-01

    Biomembranes exhibit liquid and solid features concomitantly with their in-plane fluidity and elasticity tightly regulated by cells. Here, we present experimental evidence supporting the existence of the dynamics-elasticity correlations for lipid membranes and propose a mechanism involving molecular packing densities to explain them. This paper thereby unifies, at the molecular level, the aspects of the continuum mechanics long used to model the two membrane features. This ultimately may elucidate the universal physical principles governing the cellular phenomena involving biomembranes.

  9. A novel BA complex network model on color template matching.

    PubMed

    Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply the complex network model to template matching.
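
    For reference, the two BA rules the abstract builds on, growth and preferential attachment, fit in a short sketch. This is generic BA growth (degree-proportional sampling via a repeated-endpoint list), not the paper's color-space construction; the node count and m are arbitrary.

```python
# Generic Barabasi-Albert growth: each new node attaches to m existing nodes
# with probability proportional to degree. The repeated-endpoint list gives
# degree-proportional sampling. (Illustrates the two BA rules only; mapping
# nodes to color values, as in the paper, is not reproduced here.)
import random

def ba_edges(n=100, m=2, seed=0):
    random.seed(seed)
    edges = [(i, j) for i in range(m + 1) for j in range(i)]   # initial clique
    endpoints = [v for e in edges for v in e]                  # degree multiset
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:                                # preferential attachment
            targets.add(random.choice(endpoints))
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]                              # growth updates degrees
    return edges

print(len(ba_edges()), "edges")   # 3 + 97*2 = 197 for n=100, m=2
```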

  10. Watershed Complexity Impacts on Rainfall-Runoff Modeling

    NASA Astrophysics Data System (ADS)

    Goodrich, D. C.; Grayson, R.; Willgoose, G.; Palacios-Velez, O.; Bloeschl, G.

    2002-12-01

    Application of distributed hydrologic watershed models fundamentally requires watershed partitioning or discretization. In addition to partitioning the watershed into modeling elements, these elements typically represent a further abstraction of the actual watershed surface and its relevant hydrologic properties. A critical issue that must be addressed by any user of these models prior to their application is the definition of an acceptable level of watershed discretization or geometric model complexity. A quantitative methodology to define a level of geometric model complexity commensurate with a specified level of model performance is developed for watershed rainfall-runoff modeling. In the case where watershed contributing areas are represented by overland flow planes, equilibrium discharge storage was used to define the transition from overland- to channel-dominated flow response. The methodology is tested on four subcatchments which cover a range of watershed scales of over three orders of magnitude in the USDA-ARS Walnut Gulch Experimental Watershed in Southeastern Arizona. It was found that distortion of the hydraulic roughness can compensate for a lower level of discretization (fewer channels), but only to a point. Beyond this point, hydraulic roughness distortion cannot compensate for the topographic distortion of representing the watershed by fewer elements (e.g., a less complex channel network). Similarly, differences in the representation of topography by different model or digital elevation model (DEM) types (e.g., Triangular Irregular Networks - TINs; contour lines; and regular grid DEMs) also result in differences in runoff routing responses that can be largely compensated for by a distortion in hydraulic roughness.

  11. A Novel BA Complex Network Model on Color Template Matching

    PubMed Central

    Han, Risheng; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of the template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235

  12. Complex networks under dynamic repair model

    NASA Astrophysics Data System (ADS)

    Chaoqi, Fu; Ying, Wang; Kun, Zhao; Yangjun, Gao

    2018-01-01

    Invulnerability is not the only factor of importance when considering complex networks' security. It is also critical to have an effective and reasonable repair strategy. Existing research on network repair is confined to the static model. The dynamic model makes better use of the redundant capacity of repaired nodes and repairs the damaged network more efficiently than the static model; however, the dynamic repair model is complex and polytropic. In this paper, we construct a dynamic repair model and systematically describe the energy-transfer relationships between nodes in the repair process of the failure network. Nodes are divided into three types, corresponding to three structures. We find that the strong coupling structure is responsible for secondary failure of the repaired nodes and propose an algorithm that can select the most suitable targets (nodes or links) to repair the failure network with minimal cost. Two types of repair strategies are identified, with different effects under the two energy-transfer rules. The research results enable a more flexible approach to network repair.

  13. Seismic modeling of complex stratified reservoirs

    NASA Astrophysics Data System (ADS)

    Lai, Hung-Liang

    Turbidite reservoirs in deep-water depositional systems, such as the oil fields in the offshore Gulf of Mexico and North Sea, are becoming an important exploration target in the petroleum industry. Accurate seismic reservoir characterization, however, is complicated by the heterogeneity of the sand and shale distribution and also by the lack of resolution when imaging thin channel deposits. Amplitude variation with offset (AVO) is a very important technique that is widely applied to locate hydrocarbons. Inaccurate estimates of seismic reflection amplitudes may result in misleading interpretations because of these problems in application to turbidite reservoirs. Therefore, an efficient, accurate, and robust method of modeling seismic responses for such complex reservoirs is crucial and necessary to reduce exploration risk. A fast and accurate approach to generating synthetic seismograms for such reservoir models combines wavefront-construction ray tracing with composite reflection coefficients in a hybrid modeling algorithm. The wavefront-construction approach is a modern, fast implementation of ray tracing that I have extended to model quasi-shear wave propagation in anisotropic media. Composite reflection coefficients, which are computed using propagator matrix methods, provide the exact seismic reflection amplitude for a stratified reservoir model. This is a distinct improvement over conventional AVO analysis based on a model with only two homogeneous half-spaces. I combine the two methods to compute synthetic seismograms for test models of turbidite reservoirs in the Ursa field, Gulf of Mexico, validating the new results against exact calculations using the discrete wavenumber method. The new method, however, can also be used to generate synthetic seismograms for laterally heterogeneous, complex stratified reservoir models. The results show important frequency dependence that may be useful for exploration. Because turbidite channel systems often display complex

  14. On the dangers of model complexity without ecological justification in species distribution modeling

    Treesearch

    David M. Bell; Daniel R. Schlaepfer

    2016-01-01

    Although biogeographic patterns are the product of complex ecological processes, the increasing complexity of correlative species distribution models (SDMs) is not always motivated by ecological theory, but by model fit. The validity of model projections, such as shifts in a species' climatic niche, becomes questionable particularly during extrapolations, such as for...

  15. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  16. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  17. Membrane lipids and the origin of life

    NASA Technical Reports Server (NTRS)

    Oro, J.; Holzer, G.; Rao, M.; Tornabene, T. G.

    1981-01-01

    The current state of knowledge regarding the development of biological systems is briefly reviewed. At a crucial stage concerning the evolution of such systems, the mechanisms leading to more complex structures must have evolved within the confines of a protected microenvironment, similar to those provided by the contemporary cell membranes. The major components found normally in biomembranes are phospholipids. The structure of the biomembrane is examined, and attention is given to questions concerning the availability of the structural components which are necessary in the formation of primitive lipid membranes. Two approaches regarding the study of protomembranes are discussed. The probability of obtaining ether lipids under prebiotic conditions is considered, taking into account the formation of cyclic and acyclic isoprenoids by the irradiation of isoprene with UV.

  18. Managing Complex Interoperability Solutions using Model-Driven Architecture

    DTIC Science & Technology

    2011-06-01

    such as Oracle or MySQL. Each data model for a specific RDBMS is a distinct PSM. Or the system may want to exchange information with other C2... reduced number of transformations, e.g., from an RDBMS physical schema to the corresponding SQL script needed to instantiate the tables in a relational database... importance of models. In engineering, a model serves several purposes: 1. It presents an abstract view of a complex system or of a complex information

  19. Foundations for Streaming Model Transformations by Complex Event Processing.

    PubMed

    Dávid, István; Ráth, István; Varró, Dániel

    2018-01-01

    Streaming model transformations represent a novel class of transformations to manipulate models whose elements are continuously produced or modified in high volume and with a rapid rate of change. Executing streaming transformations requires efficient techniques to recognize activated transformation rules over a live model and a potentially infinite stream of events. In this paper, we propose foundations of streaming model transformations by innovatively integrating incremental model query, complex event processing (CEP), and reactive (event-driven) transformation techniques. Complex event processing makes it possible to identify relevant patterns and sequences of events over an event stream. Our approach enables event streams to include model change events, which are automatically and continuously populated by incremental model queries. Furthermore, a reactive rule engine carries out transformations on identified complex event patterns. We provide an integrated domain-specific language with precise semantics for capturing complex event patterns and streaming transformations, together with an execution engine, all of which is now part of the Viatra reactive transformation framework. We demonstrate the feasibility of our approach with two case studies: one in an advanced model engineering workflow, and one in the context of on-the-fly gesture recognition.
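
    To give the CEP ingredient some concreteness, the sketch below matches a minimal complex event pattern over a stream of model change events. It is a toy stand-in, not the Viatra DSL or engine; the event vocabulary and the time window are invented.

```python
# Toy complex-event-processing rule over a stream of model change events:
# fire when an element is created and then connected within a time window.
# Event names and window are invented; Viatra's actual DSL and engine work
# on richer patterns backed by live, incremental model queries.
WINDOW = 5.0          # seconds (illustrative)
created = {}          # element id -> creation time

def on_event(kind, elem, t):
    if kind == "create":
        created[elem] = t
    elif kind == "connect" and elem in created and t - created[elem] <= WINDOW:
        print(f"rule fired: {elem} created and connected within {WINDOW:.0f}s")

stream = [("create", "n1", 0.0), ("create", "n2", 1.0),
          ("connect", "n1", 2.5), ("connect", "n2", 9.0)]
for kind, elem, t in stream:
    on_event(kind, elem, t)    # fires for n1 only; n2 falls outside the window
```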

  20. Modeling wildfire incident complexity dynamics.

    PubMed

    Thompson, Matthew P

    2013-01-01

    Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding the factors influencing suppression decisions and, in turn, their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management.
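
    The absorbing Markov chain framework described here admits a compact worked example. In the sketch below, the daily transition probabilities are invented for illustration (the paper estimates them from Forest Service incident records); expected days to containment follow from the fundamental matrix N = (I - Q)^(-1).

```python
# Worked example: daily incident complexity as a first-order Markov chain
# with containment as an absorbing state. Transition probabilities are
# invented for illustration. Expected days to containment solve
# (I - Q) t = 1, where Q is the transient-to-transient block.
import numpy as np

states = ["Type 3", "Type 2", "Type 1"]        # transient complexity levels
P = np.array([                                  # rows sum to 1; last column = contained
    [0.70, 0.10, 0.02, 0.18],
    [0.15, 0.65, 0.10, 0.10],
    [0.05, 0.20, 0.70, 0.05],
])
Q = P[:, :3]                                    # transient-to-transient block
expected_days = np.linalg.solve(np.eye(3) - Q, np.ones(3))
for s, d in zip(states, expected_days):
    print(f"E[days to containment | {s}] = {d:.1f}")
```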

  1. Response of biomembrane domains to external stimuli

    NASA Astrophysics Data System (ADS)

    Urbancic, Iztok

    To enrich our knowledge about membrane domains, new measurement techniques with extended spatial and temporal windows are being vigorously developed by combining various approaches. Following such efforts of the scientific community, we set up fluorescence microspectroscopy (FMS), bridging two well-established methods: fluorescence microscopy, which enables imaging of samples with spatial resolution down to 200 nm, and fluorescence spectroscopy, which provides molecular information about the environment at nanometer and nanosecond scales. The combined method therefore allows us to localize this type of information with a precision suitable for studying various cellular structures. Faced with weak available fluorescence signals, we have put considerable effort into optimizing the measurement process and the analysis of the data. By introducing a novel acquisition scheme and by fitting the data with a mathematical model, we preserved the spectral resolution characteristic of spectroscopic measurements of bulk samples at the microscopic level as well. We have at the same time overcome the effects of photobleaching, which had previously considerably distorted the measured spectral lineshape of photosensitive dyes and consequently hindered the reliability of FMS. Our new approach has therefore greatly extended the range of applicable environmentally sensitive probes, which can now be designed to better accommodate the needs of each particular experiment. Moreover, photobleaching of the fluorescence signal can now even be exploited to obtain valuable new information about the molecular environment of the probes, as the bleaching rates of certain probes also depend on the physical and chemical properties of the local surroundings. In this manner we increased the number of available spatially localized spectral parameters, which becomes invaluable when investigating complex biological systems that can only be adequately characterized by several independent variables. Applying the developed

  2. Model Systems of Precursor Cellular Membranes: Long-Chain Alcohols Stabilize Spontaneously Formed Oleic Acid Vesicles

    PubMed Central

    Rendón, Adela; Carton, David Gil; Sot, Jesús; García-Pacios, Marcos; Montes, Ruth; Valle, Mikel; Arrondo, José-Luis R.; Goñi, Felix M.; Ruiz-Mirazo, Kepa

    2012-01-01

    Oleic acid vesicles have been used as model systems to study the properties of membranes that could be the evolutionary precursors of more complex, stable, and impermeable phospholipid biomembranes. Pure fatty acid vesicles in general show high sensitivity to ionic strength and pH variation, but there is growing evidence that this lack of stability can be counterbalanced through mixtures with other amphiphilic or surfactant compounds. Here, we present a systematic experimental analysis of the oleic acid system and explore the spontaneous formation of vesicles under different conditions, as well as the effects that alcohols and alkanes may have in the process. Our results support the hypothesis that alcohols (in particular 10- to 14-C-atom alcohols) contribute to the stability of oleic acid vesicles under a wider range of experimental conditions. Moreover, studies of mixed oleic-acid-alkane and oleic-acid-alcohol systems using infrared spectroscopy and Langmuir trough measurements indicate that precisely those alcohols that increased vesicle stability also decreased the mobility of oleic acid polar headgroups, as well as the area/molecule of lipid. PMID:22339864

  3. Some Approaches to Modeling Complex Information Systems.

    ERIC Educational Resources Information Center

    Rao, V. Venkata; Zunde, Pranas

    1982-01-01

    Brief discussion of state-of-the-art of modeling complex information systems distinguishes between macrolevel and microlevel modeling of such systems. Network layout and hierarchical system models, simulation, information acquisition and dissemination, databases and information storage, and operating systems are described and assessed. Thirty-four…

  4. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  5. A Practical Philosophy of Complex Climate Modelling

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin A.; Sherwood, Steven

    2014-01-01

    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.

  6. Theoretical Modeling and Electromagnetic Response of Complex Metamaterials

    DTIC Science & Technology

    2017-03-06

    Final report AFRL-AFOSR-VA-TR-2017-0042, "Theoretical Modeling and Electromagnetic Response of Complex Metamaterials" (Andrea Alu, University of Texas at Austin, Nov 2016). ...based on parity-time symmetric metasurfaces, and various advances in electromagnetic and acoustic theory and applications. Our findings have opened

  7. Reassessing Geophysical Models of the Bushveld Complex in 3D

    NASA Astrophysics Data System (ADS)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) Separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) Separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) A single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still ongoing debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions, which is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account the effects of variations in geometry and the geophysical properties of lithologies in a full three-dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential field modelling software, including the crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less

  8. Routine Discovery of Complex Genetic Models using Genetic Algorithms

    PubMed Central

    Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.

    2010-01-01

    Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983

  9. Crystal Structure of the Potassium Importing KdpFABC Membrane Complex

    PubMed Central

    Huang, Ching-Shin; Pedersen, Bjørn Panyella; Stokes, David Lloyd

    2017-01-01

    Cellular potassium import systems play a fundamental role in osmoregulation, pH homeostasis and membrane potential in all domains of life. In bacteria, the kdp operon encodes a four-subunit potassium pump that maintains intracellular homeostasis as well as cell shape and turgor under conditions where potassium is limiting. This membrane complex, called KdpFABC, has one channel-like subunit (KdpA) belonging to the Superfamily of Potassium Transporters and another pump-like subunit (KdpB) belonging to the Superfamily of P-type ATPases. Although there is considerable structural and functional information about members from both superfamilies, the mechanism by which uphill potassium transport through KdpA is coupled with ATP hydrolysis by KdpB remains poorly understood. Here we report the 2.9 Å X-ray structure of the complete Escherichia coli KdpFABC complex with a potassium ion within the selectivity filter of KdpA as well as a water molecule at a canonical cation site in the transmembrane domain of KdpB. The structure also reveals two structural elements that appear to mediate the coupling between these two subunits. Specifically, a protein-embedded tunnel runs between these potassium and water sites, and a helix controlling the cytoplasmic gate of KdpA is linked to the phosphorylation domain of KdpB. Based on these observations, we propose an unprecedented mechanism that repurposes protein channel architecture for active transport across biomembranes. PMID:28636601

  10. Bim Automation: Advanced Modeling Generative Process for Complex Structures

    NASA Astrophysics Data System (ADS)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) and multiple levels of detail (Mixed and Reverse LoD), based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure for the courtyard of West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  11. Watching individual molecules flex within lipid membranes using SERS

    NASA Astrophysics Data System (ADS)

    Taylor, Richard W.; Benz, Felix; Sigle, Daniel O.; Bowman, Richard W.; Bao, Peng; Roth, Johannes S.; Heath, George R.; Evans, Stephen D.; Baumberg, Jeremy J.

    2014-08-01

    Interrogating individual molecules within bio-membranes is key to deepening our understanding of biological processes essential for life. Using Raman spectroscopy to map molecular vibrations is ideal to non-destructively `fingerprint' biomolecules for dynamic information on their molecular structure, composition and conformation. Such tag-free tracking of molecules within lipid bio-membranes can directly connect structure and function. In this paper, stable co-assembly with gold nano-components in a `nanoparticle-on-mirror' geometry strongly enhances the local optical field and reduces the volume probed to a few nm3, enabling repeated measurements for many tens of minutes on the same molecules. The intense gap plasmons are assembled around model bio-membranes providing molecular identification of the diffusing lipids. Our experiments clearly evidence measurement of individual lipids flexing through telltale rapid correlated vibrational shifts and intensity fluctuations in the Raman spectrum. These track molecules that undergo bending and conformational changes within the probe volume, through their interactions with the environment. This technique allows for in situ high-speed single-molecule investigations of the molecules embedded within lipid bio-membranes. It thus offers a new way to investigate the hidden dynamics of cell membranes important to a myriad of life processes.

  12. Modeling of protein binary complexes using structural mass spectrometry data

    PubMed Central

    Kamal, J.K. Amisha; Chance, Mark R.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684

  13. Mathematic modeling of complex aquifer: Evian Natural Mineral Water case study considering lumped and distributed models.

    NASA Astrophysics Data System (ADS)

    Henriot, Abel; Blavoux, Bernard; Travi, Yves; Lachassagne, Patrick; Beon, Olivier; Dewandel, Benoit; Ladouche, Bernard

    2013-04-01

    The Evian Natural Mineral Water (NMW) aquifer is a highly heterogeneous complex of Quaternary glacial deposits composed of three main units, from bottom to top: - The "Inferior Complex", mainly composed of basal and impermeable till lying on the Alpine rocks. It outcrops only at the higher altitudes but is known at depth through drilled holes. - The "Gavot Plateau Complex", an interstratified complex of mainly basal and lateral till up to 400 m thick. It outcrops at heights above approximately 850 m a.m.s.l. and up to 1200 m a.m.s.l. over a 30 km² area. It is the main known recharge area for the hydromineral system. - The "Terminal Complex", from which the Evian NMW emerges at 410 m a.m.s.l. It is composed of sand and gravel Kame terraces that allow water to flow from the deep permeable layers of the "Gavot Plateau Complex" to the "Terminal Complex". A thick and impermeable terminal till caps and seals the system. The aquifer is then confined in its downstream area. Because of the heterogeneity and complexity of this hydrosystem, distributed modeling tools are difficult to implement at the whole-system scale: important hypotheses would have to be made about geometry, hydraulic properties, and boundary conditions, for example, and extrapolation would no doubt lead to unacceptable errors. Consequently, a modeling strategy is being developed that also improves the conceptual model of the hydrosystem. Lumped models, mainly based on tritium time series, allow the whole hydrosystem to be modeled by combining in series an exponential model (superficial aquifers of the "Gavot Plateau Complex"), a dispersive model (the Gavot Plateau interstratified complex), and a piston flow model (sand and gravel from the Kame terraces), with mean transit times of 8, 60, and 2.5 years, respectively. These models provide insight into the governing parameters of the whole mineral aquifer. They help improve the current conceptual model and are to be refined with other environmental tracers such as CFC and SF6. A
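
    The series combination of lumped models has a direct numerical reading: in the time domain, reservoirs in series convolve their transit-time distributions, and piston flow is a pure delay. The sketch below works through this with the mean transit times quoted above; the dispersion parameter PD and the discretization are assumptions, and the dispersion-model formula is one common lumped-parameter form rather than anything taken from the paper.

```python
# Transit-time distribution (TTD) of three lumped models in series:
# exponential (8 y), dispersive (60 y), piston flow (2.5 y pure delay).
# Convolution composes the reservoirs, so mean transit times add (~70.5 y).
# PD and the time grid are illustrative assumptions.
import numpy as np

dt = 0.1                              # years
t = np.arange(dt, 600.0, dt)

def exponential(tau):
    return np.exp(-t / tau) / tau

def dispersive(tau, PD=0.1):          # one common lumped "dispersion model" form
    u = 4.0 * PD * t / tau
    return np.exp(-(1.0 - t / tau) ** 2 / u) / (t * np.sqrt(np.pi * u))

g = np.convolve(exponential(8.0), dispersive(60.0))[: t.size] * dt
shift = int(round(2.5 / dt))          # piston flow = pure 2.5-year delay
g = np.concatenate([np.zeros(shift), g[: t.size - shift]])
print("total mass ~", g.sum() * dt)                         # ~1 on this grid
print("mean transit time ~", (t * g).sum() * dt, "years")   # ~ 8 + 60 + 2.5
```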

  14. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    PubMed Central

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian-triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
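
    Two of the geometric measurements mentioned, surface area and surface-enclosed volume, reduce to short computations on a triangle mesh; the volume follows from the divergence theorem applied to signed tetrahedra. The sketch below assumes a closed, consistently outward-oriented mesh and is a generic implementation, not the paper's code.

```python
# Surface area and enclosed volume of a closed, outward-oriented triangle
# mesh: area from triangle cross products, volume from the divergence
# theorem (sum of signed tetrahedra a . (b x c) / 6 against the origin).
import numpy as np

def area_and_volume(verts, faces):
    v = np.asarray(verts, dtype=float)
    a, b, c = v[faces[:, 0]], v[faces[:, 1]], v[faces[:, 2]]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.einsum("ij,ij->i", a, np.cross(b, c)).sum() / 6.0
    return area, abs(volume)

# Unit cube with outward-oriented triangles as a sanity check.
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = np.array([
    [0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5], [0, 4, 5], [0, 5, 1],
    [2, 3, 7], [2, 7, 6], [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3],
])
print(area_and_volume(verts, faces))   # ~ (6.0, 1.0)
```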

  15. Acquisition of Complex Systemic Thinking: Mental Models of Evolution

    ERIC Educational Resources Information Center

    d'Apollonia, Sylvia T.; Charles, Elizabeth S.; Boyd, Gary M.

    2004-01-01

    We investigated the impact of introducing college students to complex adaptive systems on their subsequent mental models of evolution compared to those of students taught in the same manner but with no reference to complex systems. The students' mental models (derived from similarity ratings of 12 evolutionary terms using the pathfinder algorithm)…

  16. Interactive Visualizations of Complex Seismic Data and Models

    NASA Astrophysics Data System (ADS)

    Chai, C.; Ammon, C. J.; Maceira, M.; Herrmann, R. B.

    2016-12-01

    The volume and complexity of seismic data and models have increased dramatically thanks to dense seismic station deployments and advances in data modeling and processing. Seismic observations such as receiver functions and surface-wave dispersion are multidimensional: latitude, longitude, time, amplitude and latitude, longitude, period, and velocity. Three-dimensional seismic velocity models are characterized by three spatial dimensions and one additional dimension for the speed. In these circumstances, exploring the data and models and assessing the data fits is a challenge. A few professional packages are available to visualize these complex data and models. However, most of these packages rely on expensive commercial software or require a substantial time investment to master, and even when that effort is complete, communicating the results to others remains a problem. A traditional approach during the model interpretation stage is to examine data fits and model features using a large number of static displays. Publications include a few key slices or cross-sections of these high-dimensional data, but this prevents others from directly exploring the model and corresponding data fits. In this presentation, we share interactive visualization examples of complex seismic data and models that are based on open-source tools and are easy to implement. Model and data are linked in an intuitive and informative web-browser-based display that can be used to explore the model and the features in the data that influence various aspects of the model. We encode the model and data into HTML files and present high-dimensional information using two approaches. The first uses a Python package to pack both data and interactive plots in a single file. The second approach uses JavaScript, CSS, and HTML to build a dynamic webpage for seismic data visualization. The tools have proven useful and led to deeper insight into 3D seismic models and the data that were used to construct them.
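
    A minimal sketch of the single-file approach, here using the open-source Bokeh library (one plausible choice of Python package; the dispersion values are invented for illustration):

        from bokeh.io import output_file, save
        from bokeh.plotting import figure

        periods = [10, 20, 40, 60, 80]              # s (hypothetical dispersion curve)
        velocities = [3.1, 3.4, 3.8, 4.0, 4.1]      # km/s

        p = figure(title="Surface-wave dispersion (illustrative)",
                   x_axis_label="Period (s)", y_axis_label="Group velocity (km/s)",
                   tools="pan,wheel_zoom,box_zoom,hover,reset")
        p.line(periods, velocities, line_width=2)
        p.scatter(periods, velocities, size=8)

        output_file("dispersion.html")              # one self-contained HTML file
        save(p)

    The second approach would instead emit the data as JSON and render it with hand-written JavaScript/CSS in a dynamic webpage.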

  17. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.

    PubMed

    Taha, Mohamed; Khan, Imran; Coutinho, João A P

    2016-04-01

    With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against oxidative damage, which is implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as the primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass-electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. Their overall stability constants were evaluated, and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted by the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). UV-visible and photoluminescence spectroscopic measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and the WWW share many common traits. Until now, hundreds of models have been proposed to characterize these traits and thereby understand the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, delta-distribution, compact, fractal, regular, and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modeling complex networks from the viewpoint of optimisation. PMID:25160506

  19. Debating complexity in modeling

    USGS Publications Warehouse

    Hunt, Randall J.; Zheng, Chunmiao

    1999-01-01

    As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.

  20. Modeling protein complexes with BiGGER.

    PubMed

    Krippahl, Ludwig; Moura, José J; Palma, P Nuno

    2003-07-01

    This article describes the method and results of our participation in the Critical Assessment of PRediction of Interactions (CAPRI) experiment, using the protein docking program BiGGER (Bimolecular complex Generation with Global Evaluation and Ranking) (Palma et al., Proteins 2000;39:372-384). Of five target complexes (CAPRI targets 2, 4, 5, 6, and 7), only one was successfully predicted (target 6), but BiGGER generated reasonable models for targets 4, 5, and 7, which could have been identified if additional biochemical information had been available. Copyright 2003 Wiley-Liss, Inc.

  1. Modeling Complex Cross-Systems Software Interfaces Using SysML

    NASA Technical Reports Server (NTRS)

    Mandutianu, Sanda; Morillo, Ron; Simpson, Kim; Liepack, Otfrid; Bonanne, Kevin

    2013-01-01

    The complex flight and ground systems for NASA human space exploration are designed, built, operated, and managed as separate programs and projects. However, each system relies on one or more of the other systems in order to accomplish specific mission objectives, creating a complex, tightly coupled architecture. Thus, there is a fundamental need to understand how each system interacts with the others. To determine if a model-based systems engineering approach could be utilized to assist with understanding the complex system interactions, the NASA Engineering and Safety Center (NESC) sponsored a task to develop an approach for performing cross-system behavior modeling. This paper presents the results of applying Model Based Systems Engineering (MBSE) principles using the Systems Modeling Language (SysML) to define cross-system behaviors and how they map to cross-system software interfaces documented in system-level Interface Control Documents (ICDs).

  2. What do we gain from simplicity versus complexity in species distribution models?

    USGS Publications Warehouse

    Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane

    2014-01-01

    Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species

  3. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  4. Research on complex 3D tree modeling based on L-system

    NASA Astrophysics Data System (ADS)

    Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li

    2018-03-01

    L-systems, as fractal iterative systems, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using self-developed L-system modeling software, the L-system rule set was parsed to generate complex 3D tree models. The results showed that this geometric modeling method based on L-systems can describe the morphological structure of complex trees and generate 3D tree models.
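
    To make the rewriting step concrete, here is a minimal Python sketch of L-system expansion. The bracketed rule set is a textbook plant example, not the rule set the authors extracted from forestry data:

        def expand(axiom, rules, depth):
            """Iteratively rewrite the axiom using L-system production rules."""
            s = axiom
            for _ in range(depth):
                s = "".join(rules.get(ch, ch) for ch in s)
            return s

        # Classic bracketed L-system for a plant (hypothetical rule set)
        rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}
        print(expand("X", rules, 2))
        # F draws a segment, +/- rotate, [ pushes and ] pops the turtle state;
        # a geometry engine interprets the string to build the 3D tree model.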

  5. A Complex Systems Model Approach to Quantified Mineral Resource Appraisal

    USGS Publications Warehouse

    Gettings, M.E.; Bultman, M.W.; Fisher, F.S.

    2004-01-01

    For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
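
    A minimal Python sketch of the first step, evaluating a (fuzzy) cognitive map with (+1, 0, -1) link values under a scenario; the four concepts and the weight matrix below are hypothetical placeholders, not taken from an actual appraisal:

        import numpy as np

        # W[i, j] in {+1, 0, -1}: signed influence of concept j on concept i.
        # Hypothetical concepts: 0 host rock, 1 alteration, 2 anomaly, 3 favorability.
        W = np.array([[ 0,  1,  0,  0],
                      [ 0,  0,  1,  0],
                      [ 1,  0,  0, -1],
                      [ 1,  1,  1,  0]], float)

        def run_fcm(W, state, steps=20):
            """Iterate state <- squash(W @ state) until the map settles."""
            for _ in range(steps):
                state = np.tanh(W @ state)      # tanh as the squashing function
            return state

        scenario = np.array([1.0, 0.0, 1.0, 0.0])   # "anomaly present" scenario
        print(run_fcm(W, scenario))

    Ranking link importance then amounts to re-running such scenarios with individual links zeroed out and comparing the outcomes.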

  6. Intrinsic Uncertainties in Modeling Complex Systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model, either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that, without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and with representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  7. Surface complexation modeling of americium sorption onto volcanic tuff.

    PubMed

    Ding, M; Kelkar, S; Meijer, A

    2014-10-01

    Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and is based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. In applying the model to volcanic rocks from Yucca Mountain, it was assumed that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/kg) values were calculated as a function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH over the pH range from 7 to 9. The surface complexation constants derived in this study allow prediction of americium sorption in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways. Published by Elsevier Ltd.

  8. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    With the restructuring of the electric power industry, the bilateral power transaction (BPT) mode has become a typical market organization, and a proper model that captures its characteristics is urgently needed. Such a model has been lacking, however, because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is somewhat long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random market members' failures while it is fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
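
    The statistics reported above (degree distribution, clustering coefficient, average path length) are straightforward to compute with the open-source networkx package. The sketch below uses a Barabási-Albert graph as a stand-in, since the paper's BPT network is built from simulated transactions:

        import networkx as nx

        G = nx.barabasi_albert_graph(n=500, m=2, seed=1)   # stand-in network

        degrees = [d for _, d in G.degree()]
        print("mean degree:", sum(degrees) / len(degrees))
        print("clustering coefficient:", nx.average_clustering(G))
        print("average path length:", nx.average_shortest_path_length(G))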

  9. Minimum-complexity helicopter simulation math model

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  10. Elastic Network Model of a Nuclear Transport Complex

    NASA Astrophysics Data System (ADS)

    Ryan, Patrick; Liu, Wing K.; Lee, Dockjin; Seo, Sangjae; Kim, Young-Jin; Kim, Moon K.

    2010-05-01

    RanGTP plays an important role in both nuclear protein import and export cycles. In the nucleus, RanGTP releases macromolecular cargoes from importins and conversely facilitates cargo binding to exportins. Although the crystal structure of the nuclear import complex formed by importin Kap95p and RanGTP was recently identified, its molecular mechanism still remains unclear. To understand the relationship between structure and function of a nuclear transport complex, a structure-based mechanical model of the Kap95p:RanGTP complex is introduced; the structure of Kap95p was obtained from the Protein Data Bank (www.pdb.org) and analyzed. In this model, a protein structure is modeled simply as an elastic network in which a set of coarse-grained point masses is connected by linear springs representing biochemical interactions at the atomic level. Harmonic normal mode analysis (NMA) and anharmonic elastic network interpolation (ENI) are performed to predict the modes of vibration and a feasible pathway between the locked and unlocked conformations of Kap95p, respectively. Simulation results imply that the binding of RanGTP to Kap95p induces the release of the cargo in the nucleus and prevents any new cargo from attaching to the Kap95p:RanGTP complex.
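
    A minimal sketch of the elastic-network idea, using a Gaussian-network-model Kirchhoff matrix and NumPy for the normal-mode analysis. The coordinates below are synthetic (in practice one would parse C-alpha positions from the PDB structure), and the 7 Å cutoff is a conventional, assumed value:

        import numpy as np

        def kirchhoff(coords, cutoff=7.0):
            """Gaussian-network-model connectivity (Kirchhoff) matrix."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            g = -(d < cutoff).astype(float)     # spring between nearby beads
            np.fill_diagonal(g, 0.0)
            np.fill_diagonal(g, -g.sum(axis=1))
            return g

        rng = np.random.default_rng(0)          # synthetic bead chain, ~3.8 A steps
        coords = np.cumsum(rng.normal(scale=3.8, size=(100, 3)), axis=0)

        evals, evecs = np.linalg.eigh(kirchhoff(coords))
        # The zero eigenvalue is the rigid-body mode; the slowest nonzero modes
        # are the candidates for functional motions.
        print("slowest nonzero eigenvalues:", evals[1:4])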

  11. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  12. Assessing Complexity in Learning Outcomes--A Comparison between the SOLO Taxonomy and the Model of Hierarchical Complexity

    ERIC Educational Resources Information Center

    Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka

    2016-01-01

    An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…

  13. Building a pseudo-atomic model of the anaphase-promoting complex.

    PubMed

    Kulkarni, Kiran; Zhang, Ziguo; Chang, Leifu; Yang, Jing; da Fonseca, Paula C A; Barford, David

    2013-11-01

    The anaphase-promoting complex (APC/C) is a large E3 ubiquitin ligase that regulates progression through specific stages of the cell cycle by coordinating the ubiquitin-dependent degradation of cell-cycle regulatory proteins. Depending on the species, the active form of the APC/C consists of 14-15 different proteins that assemble into a 20-subunit complex with a mass of approximately 1.3 MDa. A hybrid approach of single-particle electron microscopy and protein crystallography of individual APC/C subunits has been applied to generate pseudo-atomic models of various functional states of the complex. Three approaches for assigning regions of the EM-derived APC/C density map to specific APC/C subunits are described. This information was used to dock atomic models of APC/C subunits, determined either by protein crystallography or homology modelling, to specific regions of the APC/C EM map, allowing the generation of a pseudo-atomic model corresponding to 80% of the entire complex.

  14. The noisy voter model on complex networks.

    PubMed

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-04-20

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing us to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model that includes random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity (the variance of the underlying degree distribution) has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population-level variables can be measured.
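
    A minimal agent-based sketch of the noisy voter dynamics described above (with probability a the node changes state at random; otherwise it copies a randomly chosen neighbor). The network, size, and noise level are arbitrary illustrative choices:

        import random
        import networkx as nx

        def noisy_voter(G, a=0.01, steps=200_000, seed=7):
            rng = random.Random(seed)
            state = {v: rng.choice((0, 1)) for v in G}
            nodes = list(G)
            for _ in range(steps):
                v = rng.choice(nodes)
                if rng.random() < a:
                    state[v] = rng.choice((0, 1))              # noise
                else:
                    state[v] = state[rng.choice(list(G[v]))]   # imitation
            return sum(state.values()) / len(state)

        G = nx.barabasi_albert_graph(400, 3, seed=1)   # heterogeneous degrees
        print("final density of state 1:", noisy_voter(G))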

  15. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  16. Mathematical Models to Determine Stable Behavior of Complex Systems

    NASA Astrophysics Data System (ADS)

    Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.

    2018-05-01

    The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors impacting its functioning. The functioning of the complex dynamic system is described in terms of chaotic states, self-organized criticality, and bifurcations. This problem may be resolved by modeling such systems as dynamic ones, without applying stochastic models, and by taking strange attractors into account.

  17. Mathematical and Computational Modeling in Complex Biological Systems

    PubMed Central

    Li, Wenyang; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology. PMID:28386558

  18. Mathematical and Computational Modeling in Complex Biological Systems.

    PubMed

    Ji, Zhiwei; Yan, Ke; Li, Wenyang; Hu, Haigen; Zhu, Xiaoliang

    2017-01-01

    The biological processes and molecular functions involved in cancer progression remain difficult for biologists and clinical doctors to understand. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the development of high-throughput technologies and the systemic modeling of biological processes in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology.

  19. Pattern-oriented modeling of agent-based complex systems: Lessons from ecology

    USGS Publications Warehouse

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-01-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  20. Pattern-Oriented Modeling of Agent-Based Complex Systems: Lessons from Ecology

    NASA Astrophysics Data System (ADS)

    Grimm, Volker; Revilla, Eloy; Berger, Uta; Jeltsch, Florian; Mooij, Wolf M.; Railsback, Steven F.; Thulke, Hans-Hermann; Weiner, Jacob; Wiegand, Thorsten; DeAngelis, Donald L.

    2005-11-01

    Agent-based complex systems are dynamic networks of many interacting agents; examples include ecosystems, financial markets, and cities. The search for general principles underlying the internal organization of such systems often uses bottom-up simulation models such as cellular automata and agent-based models. No general framework for designing, testing, and analyzing bottom-up models has yet been established, but recent advances in ecological modeling have come together in a general strategy we call pattern-oriented modeling. This strategy provides a unifying framework for decoding the internal organization of agent-based complex systems and may lead toward unifying algorithmic theories of the relation between adaptive behavior and system complexity.

  1. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  2. Improving a regional model using reduced complexity and parameter estimation.

    PubMed

    Kelson, Victor A; Hunt, Randall J; Haitjema, Henk M

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model
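
    As a toy illustration of coupling a simple model to a nonlinear parameter-estimation code, the sketch below calibrates a two-parameter curve by least squares with SciPy. It mimics the GFLOW/UCODE workflow only in spirit; the model form and all numbers are invented:

        import numpy as np
        from scipy.optimize import least_squares

        def drawdown(params, r):
            a, b = params                        # lumped amplitude, decay length
            return a * np.exp(-r / b)

        r_obs = np.linspace(10.0, 500.0, 12)     # observation distances (m)
        rng = np.random.default_rng(1)
        s_obs = drawdown((2.0, 150.0), r_obs) + rng.normal(0.0, 0.02, r_obs.size)

        fit = least_squares(lambda p: drawdown(p, r_obs) - s_obs, x0=[1.0, 100.0])
        print("estimated parameters:", fit.x)
        # Confidence intervals (as UCODE reports) would follow from fit.jac.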

  3. Musculoskeletal modelling of human ankle complex: Estimation of ankle joint moments.

    PubMed

    Jamwal, Prashant K; Hussain, Shahid; Tsoi, Yun Ho; Ghayesh, Mergen H; Xie, Sheng Quan

    2017-05-01

    A musculoskeletal model of the ankle complex is vital to enhance the understanding of neuro-mechanical control of ankle motions, diagnose ankle disorders, and assess subsequent treatments. Motions at the human ankle and foot, however, are complex due to simultaneous movements at two joints, namely the ankle joint and the subtalar joint. The musculoskeletal elements of the ankle complex, such as ligaments, muscles, and tendons, have intricate arrangements and exhibit transient and nonlinear behaviour. This paper develops a musculoskeletal model of the ankle complex that considers the biaxial ankle structure. The model provides estimates of the overall mechanical characteristics (motion and moments) of the ankle complex through consideration of forces applied along ligaments and muscle-tendon units. The dynamics of the ankle complex and its surrounding ligaments and muscle-tendon units is modelled and formulated into a state-space model to facilitate simulations. A graphical user interface was also developed during this research in order to include visual anatomical information by converting it to quantitative information on coordinates. Validation of the ankle model was carried out by comparing its outputs with those published in the literature as well as with experimental data obtained from an existing parallel ankle rehabilitation robot. Qualitative agreement was observed between the model and measured data for both passive and active ankle motions during trials, in terms of displacements and moments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  5. 2.5D complex resistivity modeling and inversion using unstructured grids

    NASA Astrophysics Data System (ADS)

    Xu, Kaijun; Sun, Jie

    2016-04-01

    The complex resistivity characteristics of rocks and ores have long been recognized. The Cole-Cole model (CCM) is generally used to describe complex resistivity, and it has been shown that the electrical anomaly of a geologic body can be quantitatively estimated from CCM parameters such as direct resistivity (ρ0), chargeability (m), time constant (τ), and frequency dependence (c). It is therefore important to obtain the complex parameters of a geologic body. Because complex structures and terrain are difficult to approximate with traditional rectangular grids, we use an adaptive finite-element algorithm for forward modeling of frequency-domain 2.5D complex resistivity and implement the conjugate gradient algorithm for its inversion, enhancing the numerical accuracy and rationality of modeling and inversion. An adaptive finite-element method is applied to solve the 2.5D complex resistivity forward problem for a horizontal electric dipole source. First, the CCM is introduced into Maxwell's equations to calculate the complex-resistivity electromagnetic fields. Next, a pseudo-delta function is used to distribute the electric dipole source. The electromagnetic fields are then expressed in terms of the primary fields caused by the layered structure and the secondary fields caused by inhomogeneities of anomalous conductivity. Finally, we calculate the electromagnetic field response of complex geoelectric structures such as anticlines, synclines, and faults. The modeling results show that adaptive finite-element methods can automatically improve mesh generation and simulate complex geoelectric models using unstructured grids. The 2.5D complex resistivity inversion is implemented with the conjugate gradient algorithm, which does not need to form the sensitivity matrix explicitly but directly computes the product of the sensitivity matrix (or its transpose) with a vector. In addition, the inversion target zones are
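
    The Cole-Cole model referred to above has a simple closed form, so a spectrum is easy to sketch in Python (the parameter values below are illustrative, not from the paper):

        import numpy as np

        def cole_cole(omega, rho0, m, tau, c):
            """rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i w tau)**c)))"""
            return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))

        freqs = np.logspace(-2, 4, 7)                       # Hz
        rho = cole_cole(2 * np.pi * freqs, rho0=100.0, m=0.3, tau=0.1, c=0.5)
        for f, r in zip(freqs, rho):
            print(f"{f:10.2f} Hz   |rho| {abs(r):7.2f} ohm-m   "
                  f"phase {1000 * np.angle(r):8.2f} mrad")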

  6. Toward Modeling the Intrinsic Complexity of Test Problems

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi

    2017-01-01

    The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…

  7. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart to ion-channel-related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set for each classification scheme was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest data-set. These results highlight the importance of benchmarking complex versus simple models and also encourage the development of simple models.
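
    A minimal sketch of the leave-one-out assessment, using scikit-learn. The logistic model over channel-block features stands in for a Bnet-style simple linear model, and the data are synthetic placeholders, not the literature data-sets:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=(40, 3))      # [hERG, Na, Ca] block (synthetic)
        y = (X[:, 0] - 0.5 * X[:, 1] - 0.3 * X[:, 2] > 0.2).astype(int)  # risk label

        model = LogisticRegression()                 # simple linear classifier
        scores = cross_val_score(model, X, y, cv=LeaveOneOut())
        print("leave-one-out accuracy:", scores.mean())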

  8. Reduced complexity modeling of Arctic delta dynamics

    NASA Astrophysics Data System (ADS)

    Piliouras, A.; Lauzon, R.; Rowland, J. C.

    2017-12-01

    How water and sediment are routed through deltas has important implications for our understanding of nutrient and sediment fluxes to the coastal ocean. These fluxes may be especially important in Arctic environments, because the Arctic Ocean receives a disproportionately large amount of river discharge and high-latitude regions are expected to be particularly vulnerable to climate change. The Arctic has some of the world's largest but least studied deltas. This lack of data is due to remote and hazardous conditions, sparse human populations, and limited remote sensing resources. In the absence of data, complex models may be of limited scientific utility in understanding Arctic delta dynamics. To overcome this challenge, we adapt the reduced-complexity delta-building model DeltaRCM for Arctic environments to explore the influence of sea ice and permafrost on delta morphology and dynamics. We represent permafrost by increasing the threshold for sediment erosion, as permafrost has been found to increase cohesion and reduce channel migration rates. The presence of permafrost in the model results in the creation of more elongate channels, fewer active channels, and a rougher shoreline. We consider several effects of sea ice, including introducing friction which increases flow resistance, constriction of flow by landfast ice, and changes in effective water surface elevation. Flow constriction and increased friction from ice result in a rougher shoreline, more frequent channel switching, decreased channel migration rates, and enhanced deposition offshore of channel mouths. The reduced-complexity nature of the model is ideal for generating a basic understanding of which processes unique to Arctic environments may have important effects on delta evolution, and it allows us to explore a variety of rules for incorporating those processes into the model to inform future Arctic delta modelling efforts. Finally, we plan to use the modeling results to determine how the presence

  9. Turing instability in reaction-diffusion models on complex networks

    NASA Astrophysics Data System (ADS)

    Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya

    2016-09-01

    In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
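
    The analysis described above reduces to checking, for each Laplacian eigenvalue, whether diffusion destabilizes an otherwise stable reaction Jacobian. A minimal sketch with NumPy and networkx, using an illustrative activator-inhibitor Jacobian (not the paper's kinetics):

        import numpy as np
        import networkx as nx

        J = np.array([[0.5, -1.0],          # stable without diffusion:
                      [1.0, -1.5]])         # trace < 0, det > 0
        Du, Dv = 0.05, 1.0                  # unequal diffusion coefficients

        G = nx.watts_strogatz_graph(200, 6, 0.1, seed=2)
        lams = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))

        def unstable(lam):
            """Turing-unstable if J - diag(Du, Dv)*lam has an eigenvalue with Re > 0."""
            return np.real(np.linalg.eigvals(J - np.diag([Du, Dv]) * lam)).max() > 0

        print("unstable modes:", sum(unstable(l) for l in lams if l > 1e-9))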

  10. On the Complexity of Item Response Theory Models.

    PubMed

    Bonifay, Wes; Cai, Li

    2017-01-01

    Complexity in item response theory (IRT) has traditionally been quantified by simply counting the number of freely estimated parameters in the model. However, complexity is also contingent upon the functional form of the model. We examined four popular IRT models (exploratory factor analytic, bifactor, DINA, and DINO) with different functional forms but the same number of free parameters. In comparison, a simpler (unidimensional 3PL) model was specified such that it had one more parameter than the previous models. All models were then evaluated according to the minimum description length principle. Specifically, each model was fit to 1,000 data sets that were randomly and uniformly sampled from the complete data space and then assessed using global and item-level fit and diagnostic measures. The findings revealed that the factor analytic and bifactor models possess a strong tendency to fit any possible data. The unidimensional 3PL model displayed minimal fitting propensity, despite the fact that it included an additional free parameter. The DINA and DINO models did not demonstrate a proclivity to fit any possible data, but they did fit well to distinct data patterns. Applied researchers and psychometricians should therefore consider functional form, and not goodness-of-fit alone, when selecting an IRT model.

  11. Epidemic threshold of the susceptible-infected-susceptible model on complex networks

    NASA Astrophysics Data System (ADS)

    Lee, Hyun Keun; Shim, Pyoung-Seop; Noh, Jae Dong

    2013-06-01

    We demonstrate that the susceptible-infected-susceptible (SIS) model on complex networks can have an inactive Griffiths phase characterized by a slow relaxation dynamics. It contrasts with the mean-field theoretical prediction that the SIS model on complex networks is active at any nonzero infection rate. The dynamic fluctuation of infected nodes, ignored in the mean field approach, is responsible for the inactive phase. It is proposed that the question whether the epidemic threshold of the SIS model on complex networks is zero or not can be resolved by the percolation threshold in a model where nodes are occupied in degree-descending order. Our arguments are supported by the numerical studies on scale-free network models.

  12. A Systematic Review of Conceptual Frameworks of Medical Complexity and New Model Development.

    PubMed

    Zullig, Leah L; Whitson, Heather E; Hastings, Susan N; Beadles, Chris; Kravchenko, Julia; Akushevich, Igor; Maciejewski, Matthew L

    2016-03-01

    Patient complexity is often operationalized by counting multiple chronic conditions (MCC) without considering contextual factors that can affect patient risk for adverse outcomes. Our objective was to develop a conceptual model of complexity addressing gaps identified in a review of published conceptual models. We searched for English-language MEDLINE papers published between 1 January 2004 and 16 January 2014. Two reviewers independently evaluated abstracts and all authors contributed to the development of the conceptual model in an iterative process. From 1606 identified abstracts, six conceptual models were selected. One additional model was identified through reference review. Each model had strengths, but several constructs were not fully considered: 1) contextual factors; 2) dynamics of complexity; 3) patients' preferences; 4) acute health shocks; and 5) resilience. Our Cycle of Complexity model illustrates relationships between acute shocks and medical events, healthcare access and utilization, workload and capacity, and patient preferences in the context of interpersonal, organizational, and community factors. This model may inform studies on the etiology of and changes in complexity, the relationship between complexity and patient outcomes, and intervention development to improve modifiable elements of complex patients.

  13. Lipids in host-pathogen interactions: pathogens exploit the complexity of the host cell lipidome.

    PubMed

    van der Meer-Janssen, Ynske P M; van Galen, Josse; Batenburg, Joseph J; Helms, J Bernd

    2010-01-01

    Lipids were long believed to have a structural role in biomembranes and a role in energy storage utilizing cellular lipid droplets and plasma lipoproteins. Research over the last decades has identified an additional role of lipids in cellular signaling, membrane microdomain organization and dynamics, and membrane trafficking. These properties make lipids an attractive target for pathogens to modulate host cell processes in order to allow their survival and replication. In this review we will summarize the often ingenious strategies of pathogens to modify the lipid homeostasis of host cells, allowing them to divert cellular processes. To this end pathogens take full advantage of the complexity of the lipidome. The examples are categorized in generalized and emerging principles describing the involvement of lipids in host-pathogen interactions. Several pathogens are described that simultaneously induce multiple changes in the host cell signaling and trafficking mechanisms. Elucidation of these pathogen-induced changes may have important implications for drug development. The emergence of high-throughput lipidomic techniques will allow the description of changes of the host cell lipidome at the level of individual molecular lipid species and the identification of lipid biomarkers.

  14. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations.

    PubMed

    Hu, Eric Y; Bouteiller, Jean-Marie C; Song, Dong; Baudry, Michel; Berger, Theodore W

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations.

  15. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses are comprised of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compared its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
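
    To make the Volterra representation concrete, here is a minimal discrete-time sketch of a second-order Volterra model in Python. The kernels are illustrative decaying exponentials, not kernels fitted to the mechanistic synapse model:

        import numpy as np

        def volterra_response(x, k1, k2):
            """y[n] = sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]"""
            M = len(k1)
            xp = np.concatenate([np.zeros(M - 1), x])     # zero padding
            y = np.zeros(len(x))
            for n in range(len(x)):
                w = xp[n:n + M][::-1]                     # x[n], x[n-1], ...
                y[n] = k1 @ w + w @ k2 @ w
            return y

        M = 20
        lags = np.arange(M)
        k1 = np.exp(-lags / 5.0)                                        # 1st-order kernel
        k2 = 0.05 * np.outer(np.exp(-lags / 8.0), np.exp(-lags / 8.0))  # 2nd-order kernel

        spikes = (np.random.default_rng(0).random(200) < 0.1).astype(float)
        print(volterra_response(spikes, k1, k2)[:10])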

  16. Modeling of Complex Mixtures: JP-8 Toxicokinetics

    DTIC Science & Technology

    2008-10-01

    generic tissue compartments in which we have combined diffusion limitation and deep tissue (global tissue model). We also applied a QSAR approach for ..., where necessary, to the interaction of specific compounds with specific tissues. We have also applied a QSAR approach for estimating blood and tissue ... Subject terms: jet fuel, JP-8, PBPK modeling, complex mixtures, nonane, decane, naphthalene, QSAR, alternative fuels.
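
    A minimal flow-limited PBPK sketch in Python, showing the compartmental structure such models use; all flows, volumes, partition coefficients, and the clearance value are placeholders, not measured JP-8 parameters:

        import numpy as np
        from scipy.integrate import solve_ivp

        Q = {"liver": 90.0, "fat": 20.0, "rest": 240.0}   # blood flows (L/h)
        V = {"liver": 1.8, "fat": 10.0, "rest": 50.0}     # tissue volumes (L)
        P = {"liver": 4.0, "fat": 60.0, "rest": 2.0}      # tissue:blood partition
        V_blood, CL_h = 5.0, 30.0                         # blood volume (L), clearance (L/h)

        def pbpk(t, y):
            cb, ct = y[0], dict(zip(Q, y[1:]))            # blood, tissue concentrations
            dct = [Q[k] * (cb - ct[k] / P[k]) / V[k] for k in Q]
            venous = sum(Q[k] * ct[k] / P[k] for k in Q)
            dcb = (venous - sum(Q.values()) * cb - CL_h * cb) / V_blood
            return [dcb] + dct

        sol = solve_ivp(pbpk, (0.0, 24.0), [1.0, 0.0, 0.0, 0.0])
        print("blood concentration at 24 h:", sol.y[0, -1])

    Diffusion limitation and a deep-tissue store would add, per tissue, a second state variable with its own exchange rate.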

  17. Hierarchical Model for the Evolution of Cloud Complexes

    NASA Astrophysics Data System (ADS)

    Sánchez D., Néstor M.; Parravano, Antonio

    1999-01-01

    The structure of cloud complexes appears to be well described by a tree structure (i.e., a simplified "stick man") representation when the image is partitioned into "clouds." In this representation, the parent-child relationships are assigned according to containment. Based on this picture, a hierarchical model for the evolution of cloud complexes, including star formation, is constructed. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange (evaporation or condensation) depends on the radiation density at the interphase. At the end of the "lineage," stars may be born or die, so that there is a nonstationary mass flow in the hierarchical structure. For a variety of parameter sets the system follows the same series of steps to transform diffuse gas into stars, and the regulation of the mass flux in the tree by previously formed stars dominates the evolution of the star formation. For the set of parameters used here as a reference model, the system tends to produce initial mass functions (IMFs) that have a maximum at a mass that is too high (~2 Msolar) and the characteristic times for evolution seem too long. We show that these undesired properties can be improved by adjusting the model parameters. The model requires further physics (e.g., allowing for multiple stellar systems and clump collisions) before a definitive comparison with observations can be made. Instead, the emphasis here is to illustrate some general properties of this kind of complex nonlinear model for the star formation process. Notwithstanding the simplifications involved, the model reveals an essential feature that will likely remain if additional physical processes are included, that is, the detailed behavior of the system is very sensitive to the variations on the initial and external conditions, suggesting that a "universal" IMF is very unlikely. When an ensemble of IMFs corresponding to a

  18. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of a pipeline network when the latter is considered as a single hydraulic system. On the basis of multivariant calculations the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the approach has been verified by constructing a unified computer model of the heat network for centralized heat supply of the city of Samara.
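
    The circuit analogy is compact enough to sketch. Assuming a linearized (laminar) pressure-drop law dp = R*Q on each pipe, the node mass balances take exactly the form of Kirchhoff's current law, and the network reduces to a sparse linear system. The four-node toy network below, with invented resistances, is our own illustration rather than the authors' code; practical networks use nonlinear head-loss laws and solve the resulting system iteratively.

        import numpy as np

        # Hypothetical 4-node network; each edge is (node_i, node_j, resistance R).
        edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 4.0), (2, 3, 1.5)]
        n = 4

        # Build the weighted Laplacian (conductance) matrix, as for a circuit.
        L = np.zeros((n, n))
        for i, j, R in edges:
            g = 1.0 / R
            L[i, i] += g; L[j, j] += g
            L[i, j] -= g; L[j, i] -= g

        # External inflow at node 0 and outflow at node 3; they must balance.
        q = np.array([1.0, 0.0, 0.0, -1.0])

        # Ground the pressure at node 3 and solve the reduced system L p = q.
        p = np.zeros(n)
        p[:3] = np.linalg.solve(L[:3, :3], q[:3])

        for i, j, R in edges:
            print(f"edge {i}->{j}: Q = {(p[i] - p[j]) / R:+.3f}")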

  19. Modeling the propagation of mobile malware on complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

    2016-08-01

    In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks, which follow a power-law degree distribution, to model mobile networks, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that governs the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are conducive to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
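
    The role of the power-law exponent can be illustrated with the standard heterogeneous mean-field threshold for SIS-type spreading, lambda_c = <k>/<k^2> (the paper's exact threshold expression may differ in detail). The sketch below, with an invented helper spreading_threshold, evaluates this quantity for truncated power-law degree distributions: lower exponents inflate <k^2> and shrink the threshold, in line with the conclusion above.

        import numpy as np

        def spreading_threshold(gamma, k_min=2, k_max=1000):
            """Heterogeneous mean-field threshold lambda_c = <k> / <k^2>."""
            k = np.arange(k_min, k_max + 1, dtype=float)
            pk = k ** (-gamma)
            pk /= pk.sum()                      # normalized degree distribution
            return (k * pk).sum() / (k ** 2 * pk).sum()

        for gamma in (2.2, 2.5, 3.0, 3.5):
            print(f"gamma = {gamma}: lambda_c = {spreading_threshold(gamma):.4f}")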

  20. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation.

  1. Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.

    PubMed

    Haimes, Yacov Y

    2018-01-01

    The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decisionmakers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.

  2. Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Reddy, C. J.

    2011-01-01

    This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.

  3. Modelling the evolution of complex conductivity during calcite precipitation on glass beads

    NASA Astrophysics Data System (ADS)

    Leroy, Philippe; Li, Shuai; Jougnot, Damien; Revil, André; Wu, Yuxin

    2017-04-01

    When pH and alkalinity increase, calcite frequently precipitates and hence modifies the petrophysical properties of porous media. The complex conductivity method can be used to directly monitor calcite precipitation in porous media because it is sensitive to the evolution of the mineralogy, pore structure and its connectivity. We have developed a mechanistic grain polarization model considering the electrochemical polarization of the Stern and diffuse layers surrounding calcite particles. Our complex conductivity model depends on the surface charge density of the Stern layer and on the electrical potential at the onset of the diffuse layer, which are computed using a basic Stern model of the calcite/water interface. The complex conductivity measurements of Wu et al. on a column packed with glass beads where calcite precipitation occurs are reproduced by our surface complexation and complex conductivity models. The evolution of the size and shape of calcite particles during the calcite precipitation experiment is estimated by our complex conductivity model. At the early stage of the calcite precipitation experiment, modelled particle sizes increase and calcite particles flatten with time because calcite crystals nucleate at the surface of glass beads and grow into larger calcite grains. At the later stage of the calcite precipitation experiment, modelled sizes and cementation exponents of calcite particles decrease with time because large calcite grains aggregate over multiple glass beads and only small calcite crystals polarize.

  4. Modeling of Wall-Bounded Complex Flows and Free Shear Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Zhu, Jiang; Lumley, John L.

    1994-01-01

    Various wall-bounded flows with complex geometries and free shear flows have been studied with a newly developed realizable Reynolds stress algebraic equation model. The model development is based on the invariant theory in continuum mechanics. This theory enables us to formulate a general constitutive relation for the Reynolds stresses. Pope was the first to introduce this kind of constitutive relation to turbulence modeling. In our study, realizability is imposed on the truncated constitutive relation to determine the coefficients so that, unlike the standard k-ε eddy viscosity model, the present model will not produce negative normal stresses in any situations of rapid distortion. The calculations based on the present model have shown an encouraging success in modeling complex turbulent flows.

  5. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking.

    PubMed

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-08-18

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.

  6. Application of surface complexation models to anion adsorption by natural materials.

    PubMed

    Goldberg, Sabine

    2014-10-01

    Various chemical models of ion adsorption are presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model, are described in the present study. Characteristics common to all the surface complexation models are equilibrium constant expressions, mass and charge balances, and surface activity coefficient electrostatic potential terms. Methods for determining parameter values for surface site density, capacitances, and surface complexation constants also are discussed. Spectroscopic experimental methods of establishing ion adsorption mechanisms include vibrational spectroscopy, nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, X-ray absorption spectroscopy, and X-ray reflectivity. Experimental determinations of point of zero charge shifts and ionic strength dependence of adsorption results and molecular modeling calculations also can be used to deduce adsorption mechanisms. Applications of the surface complexation models to heterogeneous natural materials, such as soils, using the component additivity and the generalized composite approaches are described. Emphasis is on the generalized composite approach for predicting anion adsorption by soils. Continuing research is needed to develop consistent and realistic protocols for describing ion adsorption reactions on soil minerals and soils. The availability of standardized model parameter databases for use in chemical speciation-transport models is critical. Published 2014 Wiley Periodicals Inc. on behalf of SETAC. This article is a US Government work and as such, is in the public domain in the United States of America.

  7. Evidence for complex contagion models of social contagion from observational data

    PubMed Central

    Sprague, Daniel A.

    2017-01-01

    Social influence can lead to behavioural ‘fads’ that are briefly popular and quickly die out. Various models have been proposed for these phenomena, but empirical evidence of their accuracy as real-world predictive tools has so far been absent. Here we find that a ‘complex contagion’ model accurately describes the spread of behaviours driven by online sharing. We found that standard, ‘simple’, contagion often fails to capture both the rapid spread and the long tails of popularity seen in real fads, where our complex contagion model succeeds. Complex contagion also has predictive power: it successfully predicted the peak time and duration of the ALS Icebucket Challenge. The fast spread and longer duration of fads driven by complex contagion has important implications for activities such as publicity campaigns and charity drives. PMID:28686719

  8. A multi-element cosmological model with a complex space-time topology

    NASA Astrophysics Data System (ADS)

    Kardashev, N. S.; Lipatova, L. N.; Novikov, I. D.; Shatskiy, A. A.

    2015-02-01

    Wormhole models with a complex topology having one entrance and two exits into the same space-time of another universe are considered, as well as models with two entrances from the same space-time and one exit to another universe. These models are used to build a model of a multi-sheeted universe (a multi-element model of the "Multiverse") with a complex topology. Spherical symmetry is assumed in all the models. A Reissner-Nordström black-hole model having no singularity beyond the horizon is constructed. The strength of the central singularity of the black hole is analyzed.

  9. Modeling the assembly order of multimeric heteroprotein complexes

    PubMed Central

    Esquivel-Rodriguez, Juan; Terashi, Genki; Christoffer, Charles; Shin, Woong-Hee

    2018-01-01

    Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified. As the first method of its kind, Path-LZerD opens a new area of computational protein structure modeling and will be

  10. Modeling the assembly order of multimeric heteroprotein complexes.

    PubMed

    Peterson, Lenna X; Togawa, Yoichiro; Esquivel-Rodriguez, Juan; Terashi, Genki; Christoffer, Charles; Roy, Amitava; Shin, Woong-Hee; Kihara, Daisuke

    2018-01-01

    Protein-protein interactions are the cornerstone of numerous biological processes. Although an increasing number of protein complex structures have been determined using experimental methods, relatively fewer studies have been performed to determine the assembly order of complexes. In addition to the insights into the molecular mechanisms of biological function provided by the structure of a complex, knowing the assembly order is important for understanding the process of complex formation. Assembly order is also practically useful for constructing subcomplexes as a step toward solving the entire complex experimentally, designing artificial protein complexes, and developing drugs that interrupt a critical step in the complex assembly. There are several experimental methods for determining the assembly order of complexes; however, these techniques are resource-intensive. Here, we present a computational method that predicts the assembly order of protein complexes by building the complex structure. The method, named Path-LZerD, uses a multimeric protein docking algorithm that assembles a protein complex structure from individual subunit structures and predicts assembly order by observing the simulated assembly process of the complex. Benchmarked on a dataset of complexes with experimental evidence of assembly order, Path-LZerD was successful in predicting the assembly pathway for the majority of the cases. Moreover, when compared with a simple approach that infers the assembly path from the buried surface area of subunits in the native complex, Path-LZerD has the strong advantage that it can be used for cases where the complex structure is not known. The path prediction accuracy decreased when starting from unbound monomers, particularly for larger complexes of five or more subunits, for which only a part of the assembly path was correctly identified. As the first method of its kind, Path-LZerD opens a new area of computational protein structure modeling and will be

  11. Local synchronization of a complex network model.

    PubMed

    Yu, Wenwu; Cao, Jinde; Chen, Guanrong; Lü, Jinhu; Han, Jian; Wei, Wei

    2009-02-01

    This paper introduces a novel complex network model to evaluate the reputation of virtual organizations. By using the Lyapunov function and linear matrix inequality approaches, the local synchronization of the proposed model is further investigated. Here, local synchronization is defined as inner synchronization within a group; it does not imply synchronization between different groups. Moreover, several sufficient conditions are derived to ensure the local synchronization of the proposed network model. Finally, several representative examples are given to show the effectiveness of the proposed methods and theories.

  12. The Use of Behavior Models for Predicting Complex Operations

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2010-01-01

    Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.

  13. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  14. Post-closure biosphere assessment modelling: comparison of complex and more stylised approaches.

    PubMed

    Walke, Russell C; Kirchner, Gerald; Xu, Shulan; Dverstorp, Björn

    2015-10-01

    Geological disposal facilities are the preferred option for high-level radioactive waste, due to their potential to provide isolation from the surface environment (biosphere) on very long timescales. Assessments need to strike a balance between stylised models and more complex approaches that draw more extensively on site-specific information. This paper explores the relative merits of complex versus more stylised biosphere models in the context of a site-specific assessment. The more complex biosphere modelling approach was developed by the Swedish Nuclear Fuel and Waste Management Co (SKB) for the Forsmark candidate site for a spent nuclear fuel repository in Sweden. SKB's approach is built on a landscape development model, whereby radionuclide releases to distinct hydrological basins/sub-catchments (termed 'objects') are represented as they evolve through land rise and climate change. Each of seventeen of these objects is represented with more than 80 site-specific parameters, with about 22 that are time-dependent and result in over 5000 input values per object. The more stylised biosphere models developed for this study represent releases to individual ecosystems without environmental change and include the most plausible transport processes. In the context of regulatory review of the landscape modelling approach adopted in the SR-Site assessment in Sweden, the more stylised representation has helped to build understanding in the more complex modelling approaches by providing bounding results, checking the reasonableness of the more complex modelling, highlighting uncertainties introduced through conceptual assumptions and helping to quantify the conservatisms involved. The more stylised biosphere models are also shown capable of reproducing the results of more complex approaches. A major recommendation is that biosphere assessments need to justify the degree of complexity in modelling approaches as well as simplifying and conservative assumptions. In light of

  15. Molecular modeling of the neurophysin I/oxytocin complex

    NASA Astrophysics Data System (ADS)

    Kazmierkiewicz, R.; Czaplewski, C.; Lammek, B.; Ciarkowski, J.

    1997-01-01

    Neurophysins I and II (NPI and NPII) act in the neurosecretory granules as carrier proteins for the neurophyseal hormones oxytocin (OT) and vasopressin (VP), respectively. The NPI/OT functional unit, believed to be an (NPI/OT)2 heterotetramer, was modeled using low-resolution structure information, viz. the Cα carbon atom coordinates of the homologous NPII/dipeptide complex (file 1BN2 in the Brookhaven Protein Databank) as a template. Its all-atom representation was obtained using standard modeling tools available within the INSIGHT/Biopolymer modules supplied by Biosym Technologies Inc. A conformation of the NPI-bound OT, similar to that recently proposed in a transfer NOE experiment, was docked into the ligand-binding site by a superposition of its Cys1-Tyr2 fragment onto the equivalent portion of the dipeptide in the template. The starting complex for the initial refinements was prepared by two alternative strategies, termed Model I and Model II, each ending with a ~100 ps molecular dynamics (MD) simulation in water using the AMBER 4.1 force field. The free homodimer NPI2 was obtained by removal of the two OT subunits from their sites, followed by a similar structure refinement. The use of Model I, consisting of a constrained simulated annealing, resulted in a structure remarkably similar to both the NPII/dipeptide complex and a recently published solid-state structure of the NPII/OT complex. Thus, Model I is recommended as the method of choice for the preparation of the starting all-atom data for MD. The MD simulations indicate that, both in the homodimer and in the heterotetramer, the 310-helices demonstrate an increased mobility relative to the remaining body of the protein. Also, the C-terminal domains in the NPI2 homodimer are more mobile than the N-terminal ones. Finally, a distinct intermonomer interaction is identified, concentrated around its most prominent, although not unique, contribution provided by an H-bond from Ser25 Oγ in one NPI unit to Glu81 Oɛ in the other.

  16. New approaches in agent-based modeling of complex financial systems

    NASA Astrophysics Data System (ADS)

    Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei

    2017-12-01

    Agent-based modeling is a powerful simulation technique for understanding the collective behavior and microscopic interactions in complex financial systems. Recently, it has been suggested that the key parameters of agent-based models be determined from empirical data rather than set artificially. We first review several agent-based models and the new approaches to determining the key model parameters from historical market data. Based on the agents' behaviors with heterogeneous personal preferences and interactions, these models are successful in explaining the microscopic origin of the temporal and spatial correlations of financial markets. We then present a novel paradigm combining big-data analysis with agent-based modeling. Specifically, from internet query and stock market data, we extract the information driving forces and develop an agent-based model to simulate the dynamic behaviors of complex financial systems.

  17. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot and field scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters, which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. Our approach is to use the range of local-scale model parameter results to define the sensitivity and uncertainty of the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all shareholders.

  18. Mathematical modelling of complex contagion on clustered networks

    NASA Astrophysics Data System (ADS)

    O'Sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hébert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
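
    The reinforcement rule described above (adopt once two or more neighbours have adopted) is straightforward to simulate directly. The sketch below (Python with networkx; an illustrative threshold model of our own, not the paper's analytical machinery) seeds a node together with its neighbourhood, then spreads a threshold-2 contagion on a clustered ring lattice and on a random graph of equal density, reproducing the qualitative clustered-versus-random contrast.

        import random
        import networkx as nx

        def complex_contagion(G, seeds, threshold=2, steps=100):
            """Adopt once at least `threshold` neighbours have adopted."""
            adopted = set(seeds)
            while steps:
                new = {v for v in G if v not in adopted
                       and sum(u in adopted for u in G[v]) >= threshold}
                if not new:
                    break
                adopted |= new
                steps -= 1
            return adopted

        random.seed(2)
        n, k = 1000, 6
        for name, G in [("clustered", nx.watts_strogatz_graph(n, k, 0.0)),
                        ("random   ", nx.gnm_random_graph(n, n * k // 2))]:
            # seed a node and its whole neighbourhood so reinforcement can start
            v = random.choice(list(G))
            seeds = {v} | set(G[v])
            print(f"{name}: {len(complex_contagion(G, seeds))} adopters")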

  19. Complex groundwater flow systems as traveling agent models

    PubMed Central

    Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis

    2014-01-01

    Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is studied theoretically from an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow. PMID:25337455
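
    A 1/f claim of this kind is typically checked by fitting the slope of the power spectral density on log-log axes. The sketch below (plain numpy; spectral_slope is our own helper, not the authors' analysis code) estimates the spectral exponent beta and verifies it on synthetic noise shaped to have an approximately 1/f spectrum.

        import numpy as np

        def spectral_slope(x, dt=1.0):
            """Estimate beta for a 1/f^beta power spectrum via log-log fit."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            f = np.fft.rfftfreq(len(x), d=dt)[1:]       # drop the zero frequency
            psd = np.abs(np.fft.rfft(x))[1:] ** 2
            beta, _ = np.polyfit(np.log(f), np.log(psd), 1)
            return -beta

        # Build pink-ish noise by shaping white noise in the frequency domain.
        rng = np.random.default_rng(0)
        F = np.fft.rfft(rng.normal(size=4096))
        f = np.fft.rfftfreq(4096)
        F[1:] /= np.sqrt(f[1:])                          # impose power ~ 1/f
        pink = np.fft.irfft(F)
        print(f"estimated beta = {spectral_slope(pink):.2f}  (expect ~1)")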

  20. The Complex Action Recognition via the Correlated Topic Model

    PubMed Central

    Tu, Hong-bin; Xia, Li-min; Wang, Zheng-wu

    2014-01-01

    Human complex action recognition is an important research area within action recognition. Among the various obstacles to human complex action recognition, one of the most challenging is dealing with self-occlusion, where one body part occludes another. This paper presents a new method of human complex action recognition based on optical flow and the correlated topic model (CTM). First, a Markov random field is used to represent the occlusion relationship between human body parts in terms of an occlusion state variable. Second, structure from motion (SFM) is used to reconstruct the missing data of point trajectories. Then, key frames are extracted based on motion features from optical flow, and width-to-height ratios are extracted from the human silhouette. Finally, the correlated topic model (CTM) is used to classify actions. Experiments were performed on the KTH, Weizmann, and UIUC action datasets to test and evaluate the proposed method. The comparative experimental results showed that the proposed method was more effective than the compared methods. PMID:24574920

  1. a Range Based Method for Complex Facade Modeling

    NASA Astrophysics Data System (ADS)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    3d modelling of Architectural Heritage does not follow a single well-defined path; it proceeds through different algorithms and digital forms according to the shape complexity of the object, to the main goal of the representation and to the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different possibilities to realize a digital model. In particular we can choose between two different attitudes: the mesh and the solid model. In the first case the complexity of architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the other, opposite, case the 3d digital model can be realized by the use of simple geometrical shapes, by sweeping algorithms and by Boolean operations. Obviously these two models are not the same, and each one is characterized by some peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm or the quasi-automatic modelling by known shapes) and the final results (a more detailed and complex mesh versus an approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we want to suggest a semiautomatic process to build 3d digital models of the facades of complex architecture to be used, for example, in city models or in other large scale representations. This way of modelling also guarantees small files to be published on the web or to be transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed by topography, and then filtered in order to obtain a well-controlled and homogeneous point cloud of

  2. Research Area 3: Mathematics (3.1 Modeling of Complex Systems)

    DTIC Science & Technology

    2017-10-31

    RESEARCH AREA 3: MATHEMATICS (3.1 Modeling of Complex Systems). Proposals should be directed to Dr. John Lavery, U.S. Army Research Office, P.O. Box 12211.

  3. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    PubMed

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  4. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    PubMed Central

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model) that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  5. Postprocessing of docked protein-ligand complexes using implicit solvation models.

    PubMed

    Lindström, Anton; Edvinsson, Lotta; Johansson, Andreas; Andersson, C David; Andersson, Ida E; Raubacher, Florian; Linusson, Anna

    2011-02-28

    Molecular docking plays an important role in drug discovery as a tool for the structure-based design of small organic ligands for macromolecules. Possible applications of docking are identification of the bioactive conformation of a protein-ligand complex and the ranking of different ligands with respect to their strength of binding to a particular target. We have investigated the effect of implicit water on the postprocessing of binding poses generated by molecular docking using MM-PB/GB-SA (molecular mechanics Poisson-Boltzmann and generalized Born surface area) methodology. The investigation was divided into three parts: geometry optimization, pose selection, and estimation of the relative binding energies of docked protein-ligand complexes. Appropriate geometry optimization afforded more accurate binding poses for 20% of the complexes investigated. The time required for this step was greatly reduced by minimizing the energy of the binding site using GB solvation models rather than minimizing the entire complex using the PB model. By optimizing the geometries of docking poses using the GB(HCT+SA) model then calculating their free energies of binding using the PB implicit solvent model, binding poses similar to those observed in crystal structures were obtained. Rescoring of these poses according to their calculated binding energies resulted in improved correlations with experimental binding data. These correlations could be further improved by applying the postprocessing to several of the most highly ranked poses rather than focusing exclusively on the top-scored pose. The postprocessing protocol was successfully applied to the analysis of a set of Factor Xa inhibitors and a set of glycopeptide ligands for the class II major histocompatibility complex (MHC) A(q) protein. These results indicate that the protocol for the postprocessing of docked protein-ligand complexes developed in this paper may be generally useful for structure-based design in drug discovery.

  6. Intelligent Decisions Need Intelligent Choice of Models and Data - a Bayesian Justifiability Analysis for Models with Vastly Different Complexity

    NASA Astrophysics Data System (ADS)

    Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.

    2016-12-01

    Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias, achievable with rather complex models) and predictive precision (small predictive uncertainties, achievable with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If such data are not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off as a function of data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data with realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with vastly different numbers of parameters (from a homogeneous model to geostatistical random fields). Using subsets of hydraulic tomography data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
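
    The "model confusion matrix" idea can be miniaturized. In the toy below (our own sketch, assuming independent Gaussian errors and equal model priors; bma_weights and confusion_matrix are invented helpers), each "model" is reduced to a vector of predictions, synthetic observations are generated from each model in turn, and the matrix records the average BMA weight each candidate then receives.

        import numpy as np

        def bma_weights(predictions, observed, sigma=1.0):
            """BMA weights under iid Gaussian errors and equal priors."""
            resid = predictions - observed            # shape (n_models, n_data)
            loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
            w = np.exp(loglik - loglik.max())         # stabilized exponentials
            return w / w.sum()

        def confusion_matrix(predictions, sigma=1.0, n_rep=200, seed=0):
            """Row m: average weights when model m generated the (noisy) data."""
            rng = np.random.default_rng(seed)
            M, N = predictions.shape
            C = np.zeros((M, M))
            for m in range(M):
                for _ in range(n_rep):
                    synth = predictions[m] + rng.normal(0, sigma, N)
                    C[m] += bma_weights(predictions, synth, sigma)
            return C / n_rep

        preds = np.array([[1.0, 1.0, 1.0, 1.0],      # "homogeneous" model
                          [0.8, 1.1, 0.9, 1.2],      # mildly heterogeneous
                          [0.2, 1.9, 0.4, 1.8]])     # strongly heterogeneous
        print(confusion_matrix(preds, sigma=0.3).round(2))

    A strongly diagonal matrix means the data suffice to tell the models apart; off-diagonal mass flags models that the available data cannot distinguish.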

  7. Uranium(VI) adsorption to ferrihydrite: Application of a surface complexation model

    USGS Publications Warehouse

    Waite, T.D.; Davis, J.A.; Payne, T.E.; Waychunas, G.A.; Xu, N.

    1994-01-01

    A study of U(VI) adsorption by ferrihydrite was conducted over a wide range of U(VI) concentrations, pH, and at two partial pressures of carbon dioxide. A two-site (strong- and weak-affinity sites, Fe^sOH and Fe^wOH, respectively) surface complexation model was able to describe the experimental data well over a wide range of conditions, with only one species formed with each site type: an inner-sphere, mononuclear, bidentate complex of the type (FeO2)UO2. The existence of such a surface species was supported by results of uranium EXAFS spectroscopy performed on two samples with U(VI) adsorption density in the upper range observed in this study (10 and 18% occupancy of total surface sites). Adsorption data in the alkaline pH range suggested the existence of a second surface species, modeled as a ternary surface complex with UO2CO3^0 binding to a bidentate surface site. Previous surface complexation models for U(VI) adsorption have proposed surface species that are identical to the predominant aqueous species, e.g., multinuclear hydrolysis complexes or several U(VI)-carbonate complexes. The results demonstrate that the speciation of adsorbed U(VI) may be constrained by the coordination environment at the surface, giving rise to surface speciation for U(VI) that is significantly less complex than aqueous speciation.

  8. Modeling Structure and Dynamics of Protein Complexes with SAXS Profiles

    PubMed Central

    Schneidman-Duhovny, Dina; Hammel, Michal

    2018-01-01

    Small-angle X-ray scattering (SAXS) is an increasingly common and useful technique for structural characterization of molecules in solution. A SAXS experiment determines the scattering intensity of a molecule as a function of spatial frequency, termed SAXS profile. SAXS profiles can be utilized in a variety of molecular modeling applications, such as comparing solution and crystal structures, structural characterization of flexible proteins, assembly of multi-protein complexes, and modeling of missing regions in the high-resolution structure. Here, we describe protocols for modeling atomic structures based on SAXS profiles. The first protocol is for comparing solution and crystal structures including modeling of missing regions and determination of the oligomeric state. The second protocol performs multi-state modeling by finding a set of conformations and their weights that fit the SAXS profile starting from a single-input structure. The third protocol is for protein-protein docking based on the SAXS profile of the complex. We describe the underlying software, followed by demonstrating their application on interleukin 33 (IL33) with its primary receptor ST2 and DNA ligase IV-XRCC4 complex. PMID:29605933

  9. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  10. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
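
    As a minimal illustration of how an analytical TTD is used in practice, the sketch below convolves an input concentration history with an exponential TTD, one classical choice (the scale-dependent dispersivity model introduced in the paper is more elaborate). The helper name exponential_ttd_response is ours.

        import numpy as np

        def exponential_ttd_response(c_in, tau, dt=1.0):
            """Outlet concentration for an exponential TTD with mean age tau.

            c_in : input (recharge) concentration history, oldest first
            tau  : mean travel time, in the same units as dt
            """
            t = np.arange(len(c_in)) * dt
            g = np.exp(-t / tau)
            g /= g.sum()                          # discrete TTD weights, sum to 1
            return np.convolve(c_in, g)[:len(c_in)]

        # Step input: NO3- rises from 0 to 1 at t = 20; the TTD smears the step.
        c_in = np.where(np.arange(100) >= 20, 1.0, 0.0)
        c_out = exponential_ttd_response(c_in, tau=15.0)
        print(c_out[[20, 40, 60, 80]].round(3))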

  11. Hierarchical Modeling of Sequential Behavioral Data: Examining Complex Association Patterns in Mediation Models

    ERIC Educational Resources Information Center

    Dagne, Getachew A.; Brown, C. Hendricks; Howe, George W.

    2007-01-01

    This article presents new methods for modeling the strength of association between multiple behaviors in a behavioral sequence, particularly those involving substantively important interaction patterns. Modeling and identifying such interaction patterns becomes more complex when behaviors are assigned to more than two categories, as is the case…

  12. Reduced Complexity Modelling of Urban Floodplain Inundation

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Brasington, J.; Mihir, M.

    2004-12-01

    Significant recent advances in floodplain inundation modelling have been achieved by directly coupling 1d channel hydraulic models with a raster storage cell approximation for floodplain flows. The strengths of this reduced-complexity model structure derive from its explicit dependence on a digital elevation model (DEM) to parameterize flows through riparian areas, providing a computationally efficient algorithm to model heterogeneous floodplains. Previous applications of this framework have generally used mid-range grid scales (10^1-10^2 m), showing the capacity of the models to simulate long reaches (10^3-10^4 m). However, the increasing availability of precision DEMs derived from airborne laser altimetry (LIDAR) enables their use at very high spatial resolutions (10^0-10^1 m). This spatial scale offers the opportunity to incorporate the complexity of the built environment directly within the floodplain DEM and simulate urban flooding. This poster describes a series of experiments designed to explore model functionality at these reduced scales. Important questions are considered, raised by this new approach, about the reliability and representation of the floodplain topography and built environment, and the resultant sensitivity of inundation forecasts. The experiments apply a raster floodplain model to reconstruct a 1:100 year flood event on the River Granta in eastern England, which flooded 72 properties in the town of Linton in October 2001. The simulations use a nested-scale model to maintain efficiency. A 2km by 4km urban zone is represented by a high-resolution DEM derived from single-pulse LIDAR data supplied by the UK Environment Agency, together with surveyed data and aerial photography. Novel methods of processing the raw data to provide the individual structure detail required are investigated and compared. This is then embedded within a lower-resolution model application at the reach scale which provides boundary conditions based on recorded flood stage
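
    The raster storage-cell idea reduces floodplain hydraulics to bookkeeping of water volumes on a DEM grid. The fragment below is a deliberately crude, mass-conserving caricature of one such update (our own sketch, not the model used in the study): water moves between adjacent cells in proportion to differences in water surface elevation, and buildings enter simply as raised DEM cells.

        import numpy as np

        def storage_cell_step(dem, depth, k=0.2):
            """One explicit storage-cell update on a raster DEM (toy version)."""
            wse = dem + depth                          # water surface elevation
            new = depth.copy()
            # exchange with right-hand neighbours, capped by available depth
            q = k * (wse[:, :-1] - wse[:, 1:])
            q = np.clip(q, -0.25 * depth[:, 1:], 0.25 * depth[:, :-1])
            new[:, :-1] -= q
            new[:, 1:] += q
            # exchange with downward neighbours
            q = k * (wse[:-1, :] - wse[1:, :])
            q = np.clip(q, -0.25 * depth[1:, :], 0.25 * depth[:-1, :])
            new[:-1, :] -= q
            new[1:, :] += q
            return new

        # Flat 20 x 20 floodplain, a raised block of "buildings", water released
        # in one corner; total volume is conserved by construction.
        dem = np.zeros((20, 20)); dem[8:12, 8:12] = 5.0
        depth = np.zeros((20, 20)); depth[0, 0] = 10.0
        for _ in range(500):
            depth = storage_cell_step(dem, depth)
        print(f"total water: {depth.sum():.2f} (released: 10.00)")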

  13. Modeling and complexity of stochastic interacting Lévy type financial price dynamics

    NASA Astrophysics Data System (ADS)

    Wang, Yiduan; Zheng, Shenzhou; Zhang, Wei; Wang, Jun; Wang, Guochao

    2018-06-01

    In an attempt to reproduce and investigate the nonlinear dynamics of security markets, a novel nonlinear random interacting price dynamics, considered as a Lévy type process, is developed and investigated by the combination of lattice-oriented percolation and Potts dynamics, which account for the intrinsic random fluctuation and the fluctuation caused by the spread of the investors' trading attitudes, respectively. To better understand the fluctuation complexity properties of the proposed model, complexity analyses of the random logarithmic price return and corresponding volatility series are performed, including power-law distribution, Lempel-Ziv complexity and fractional sample entropy. In order to verify the rationality of the proposed model, corresponding studies of actual security market datasets are also implemented for comparison. The empirical results reveal that this financial price model can reproduce some important complexity features of actual security markets to some extent. The complexity of returns decreases with increasing values of the parameters γ1 and β, respectively; furthermore, the volatility series exhibit lower complexity than the return series.
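
    Of the complexity measures listed, Lempel-Ziv complexity is the simplest to reproduce. The sketch below uses LZ78-style incremental phrase counting on sign-binarized returns (one common variant; the paper may use the LZ76 definition) and normalizes so that an i.i.d. sequence scores near 1. The helper names are ours.

        import numpy as np

        def lz_complexity(bits):
            """Count distinct phrases under LZ78-style incremental parsing."""
            phrases, phrase, c = set(), "", 0
            for b in bits:
                phrase += b
                if phrase not in phrases:
                    phrases.add(phrase)
                    c += 1
                    phrase = ""
            return c

        def normalized_lz(returns):
            bits = "".join("1" if r > 0 else "0" for r in returns)
            n = len(bits)
            return lz_complexity(bits) * np.log2(n) / n   # ~1 for random bits

        rng = np.random.default_rng(0)
        print(f"iid noise:       {normalized_lz(rng.normal(size=5000)):.3f}")
        print(f"slow oscillation: {normalized_lz(np.sin(np.arange(5000) / 50.0)):.3f}")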

  14. Surface complexation modeling of zinc sorption onto ferrihydrite.

    PubMed

    Dyer, James A; Trivedi, Paras; Scrivner, Noel C; Sparks, Donald L

    2004-02-01

    A previous study involving lead(II) [Pb(II)] sorption onto ferrihydrite over a wide range of conditions highlighted the advantages of combining molecular- and macroscopic-scale investigations with surface complexation modeling to predict Pb(II) speciation and partitioning in aqueous systems. In this work, an extensive collection of new macroscopic and spectroscopic data was used to assess the ability of the modified triple-layer model (TLM) to predict single-solute zinc(II) [Zn(II)] sorption onto 2-line ferrihydrite in NaNO3 solutions as a function of pH, ionic strength, and concentration. Regression of constant-pH isotherm data, together with potentiometric titration and pH edge data, was a much more rigorous test of the modified TLM than fitting pH edge data alone. When coupled with valuable input from spectroscopic analyses, good fits of the isotherm data were obtained with a one-species, one-Zn-sorption-site model using the bidentate-mononuclear surface complex (≡FeO)2Zn; however, surprisingly, both the density of Zn(II) sorption sites and the value of the best-fit equilibrium "constant" for the bidentate-mononuclear complex had to be adjusted with pH to adequately fit the isotherm data. Although spectroscopy provided some evidence for multinuclear surface complex formation at surface loadings approaching site saturation at pH ≥ 6.5, the assumption of a bidentate-mononuclear surface complex provided acceptable fits of the sorption data over the entire range of conditions studied. Regressing edge data in the absence of isotherm and spectroscopic data resulted in a fair number of surface-species/site-type combinations that provided acceptable fits of the edge data, but unacceptable fits of the isotherm data. A linear relationship between log K for (≡FeO)2Zn and pH was found, given by log K(≡FeO)2Zn (at 1 g/L) = 2.058(pH) - 6.131. In addition, a surface activity coefficient term was introduced to the model to reduce the ionic strength
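
    A minimal sketch of how the reported regression can be applied, evaluating the pH-dependent equilibrium constant for the bidentate-mononuclear complex (the function name is ours; the coefficients are those quoted above and hold for 1 g/L ferrihydrite):

      def log_k_bidentate_zn(pH):
          # Reported linear fit for the (=FeO)2Zn complex at 1 g/L ferrihydrite
          return 2.058 * pH - 6.131

      for pH in (5.0, 6.5, 8.0):
          print(f"pH {pH}: log K = {log_k_bidentate_zn(pH):.3f}")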

  15. An Adaptive Complex Network Model for Brain Functional Networks

    PubMed Central

    Gomez Portillo, Ignacio J.; Gleiser, Pablo M.

    2009-01-01

    Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small-world topological structure, characterized by highly integrated modules connected sparsely by long-range links. Recent studies showed that other topological properties, such as the degree distribution and the presence (or absence) of a hierarchical structure, are not robust and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure with a truncated power-law degree distribution. PMID:19738902
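
    A toy Python sketch of the adaptive wiring rule described above, in which links are repeatedly established between dynamical elements with similar internal states; the state variable, its relaxation dynamics and all parameter values are invented for illustration and are far simpler than the model in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      N, STEPS, K = 200, 5000, 3          # nodes, wiring steps, links added per step

      state = rng.uniform(0.0, 1.0, N)    # internal state of each active region
      adj = np.zeros((N, N), dtype=bool)  # functional connectivity

      for _ in range(STEPS):
          i = rng.integers(N)
          d = np.abs(state - state[i])    # similarity of internal states
          d[i] = np.inf
          for j in np.argsort(d)[:K]:     # wire to the K most similar nodes
              adj[i, j] = adj[j, i] = True
          nbrs = np.flatnonzero(adj[i])
          if nbrs.size:                   # toy dynamics: relax toward neighbours' mean
              state[i] += 0.1 * (state[nbrs].mean() - state[i])

      print(adj.sum() // 2, "links")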

  16. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Treesearch

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models vary from simple physiological, complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years, however, it is largely untested as to whether complex models outperform the other two types of models...

  17. Application of surface complexation models to anion adsorption by natural materials

    USDA-ARS?s Scientific Manuscript database

    Various chemical models of ion adsorption will be presented and discussed. Chemical models, such as surface complexation models, provide a molecular description of anion adsorption reactions using an equilibrium approach. Two such models, the constant capacitance model and the triple layer model w...

  18. Understanding large multiprotein complexes: applying a multiple allosteric networks model to explain the function of the Mediator transcription complex.

    PubMed

    Lewis, Brian A

    2010-01-15

    The regulation of transcription and of many other cellular processes involves large multi-subunit protein complexes. In the context of transcription, it is known that these complexes serve as regulatory platforms that connect activator DNA-binding proteins to a target promoter. However, there is still a lack of understanding regarding the function of these complexes. Why do multi-subunit complexes exist? What is the molecular basis of the function of their constituent subunits, and how are these subunits organized within a complex? What is the reason for physical connections between certain subunits and not others? In this article, I address these issues through a model of network allostery and its application to the eukaryotic RNA polymerase II Mediator transcription complex. The multiple allosteric networks model (MANM) suggests that protein complexes such as Mediator exist not only as physical but also as functional networks of interconnected proteins through which information is transferred from subunit to subunit by the propagation of an allosteric state known as conformational spread. Additionally, there are multiple distinct sub-networks within the Mediator complex that can be defined by their connections to different subunits; these sub-networks have discrete functions that are activated when specific subunits interact with other activator proteins.

  19. Modeling complexity in pathologist workload measurement: the Automatable Activity-Based Approach to Complexity Unit Scoring (AABACUS).

    PubMed

    Cheung, Carol C; Torlakovic, Emina E; Chow, Hung; Snover, Dale C; Asa, Sylvia L

    2015-03-01

    Pathologists provide diagnoses relevant to the disease state of the patient and identify specific tissue characteristics relevant to response to therapy and prognosis. As personalized medicine evolves, there is a trend for increased demand of tissue-derived parameters. Pathologists perform increasingly complex analyses on the same 'cases'. Traditional methods of workload assessment and reimbursement, based on number of cases sometimes with a modifier (eg, the relative value unit (RVU) system used in the United States), often grossly underestimate the amount of work needed for complex cases and may overvalue simple, small biopsy cases. We describe a new approach to pathologist workload measurement that aligns with this new practice paradigm. Our multisite institution with geographically diverse partner institutions has developed the Automatable Activity-Based Approach to Complexity Unit Scoring (AABACUS) model that captures pathologists' clinical activities from parameters documented in departmental laboratory information systems (LISs). The model's algorithm includes: 'capture', 'export', 'identify', 'count', 'score', 'attribute', 'filter', and 'assess filtered results'. Captured data include specimen acquisition, handling, analysis, and reporting activities. Activities were counted and complexity units (CUs) generated using a complexity factor for each activity. CUs were compared between institutions, practice groups, and practice types and evaluated over a 5-year period (2008-2012). The annual load of a clinical service pathologist, irrespective of subspecialty, was ∼40,000 CUs using relative benchmarking. The model detected changing practice patterns and was appropriate for monitoring clinical workload for anatomical pathology, neuropathology, and hematopathology in academic and community settings, and encompassing subspecialty and generalist practices. AABACUS is objective, can be integrated with an LIS and automated, is reproducible, backwards compatible
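
    A minimal sketch of the counting-and-scoring core of such a scheme: each activity captured from the LIS is multiplied by a complexity factor and summed into complexity units. The activity names and factor values below are hypothetical placeholders, not the calibrated AABACUS factors.

      # Hypothetical complexity factors per LIS activity type (illustrative only)
      COMPLEXITY_FACTORS = {"accession": 1.0, "block": 0.5, "slide": 0.25,
                            "special_stain": 1.5, "ihc": 2.0, "report": 3.0}

      def complexity_units(activity_counts):
          """Score one case: sum of (count x complexity factor) over activities."""
          return sum(COMPLEXITY_FACTORS.get(act, 0.0) * n
                     for act, n in activity_counts.items())

      case = {"accession": 1, "block": 4, "slide": 12, "ihc": 6, "report": 1}
      print(complexity_units(case))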

  20. Measuring ECS Interaction with Biomembranes.

    PubMed

    Angelucci, Clotilde B; Sabatucci, Annalaura; Dainese, Enrico

    2016-01-01

    Understanding the correct interaction among the different components of the endocannabinoid system (ECS) is fundamental for a proper assessment of the function of endocannabinoids (eCBs) as signaling molecules. Knowledge of how the membrane environment is able to modulate intracellular trafficking of eCBs and their interacting proteins holds huge potential for unraveling new mechanisms of ECS modulation. Here, the fluorescence resonance energy transfer (FRET) technique is applied to measure the binding affinity of ECS proteins to model membranes (i.e., large unilamellar vesicles, LUVs). In particular, we describe in detail the paradigmatic example of the interaction of recombinant rat FAAH-ΔTM with LUVs constituted by 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine (POPC).
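
    Binding data from such FRET titrations are commonly analyzed by fitting a saturation isotherm to the signal change as a function of lipid concentration. The sketch below shows that generic analysis step with synthetic numbers; it is not the protocol or data of this chapter.

      import numpy as np
      from scipy.optimize import curve_fit

      def fret_binding(lipid, f_max, kd):
          # Hyperbolic binding isotherm: fractional FRET change vs accessible lipid
          return f_max * lipid / (kd + lipid)

      lipid = np.array([5, 10, 25, 50, 100, 250, 500.0])              # uM (synthetic)
      signal = np.array([0.08, 0.15, 0.30, 0.45, 0.60, 0.74, 0.80])   # synthetic FRET change

      (f_max, kd), _ = curve_fit(fret_binding, lipid, signal, p0=(1.0, 50.0))
      print(f"apparent Kd ~ {kd:.1f} uM of accessible lipid")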

  1. Complex Instruction: A Model for Reaching Up--and Out

    ERIC Educational Resources Information Center

    Tomlinson, Carol Ann

    2018-01-01

    Complex Instruction is a multifaceted instructional model designed to provide highly challenging learning opportunities for students in heterogeneous classrooms. The model provides a rationale for and philosophy of creating equity of access to excellent curriculum and instruction for a broad range of learners, guidance for preparing students for…

  2. Entropy, complexity, and Markov diagrams for random walk cancer models

    PubMed Central

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-01-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
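
    The core computation, the steady-state distribution of a Markov transition matrix and its Shannon entropy, can be sketched in a few lines of Python; the 4-site matrix below is invented for illustration, whereas the paper's matrices are built from autopsy data over many anatomical sites.

      import numpy as np

      # Toy 4-site transition matrix (rows sum to 1); values are invented
      P = np.array([[0.10, 0.50, 0.30, 0.10],
                    [0.20, 0.40, 0.20, 0.20],
                    [0.05, 0.25, 0.60, 0.10],
                    [0.10, 0.30, 0.30, 0.30]])

      w, v = np.linalg.eig(P.T)                  # steady state = left Perron eigenvector
      pi = np.real(v[:, np.argmax(np.real(w))])
      pi /= pi.sum()

      H = -(pi * np.log2(pi)).sum()              # Shannon entropy of the distribution
      print(pi, H)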

  3. Entropy, complexity, and Markov diagrams for random walk cancer models.

    PubMed

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  4. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  5. Sparkle model for AM1 calculation of lanthanide complexes: improved parameters for europium.

    PubMed

    Rocha, Gerd B; Freire, Ricardo O; Da Costa, Nivan B; De Sá, Gilberto F; Simas, Alfredo M

    2004-04-05

    In the present work, we sought to improve our sparkle model for the calculation of lanthanide complexes, SMLC, in various ways: (i) inclusion of the europium atomic mass, (ii) reparametrization of the model within AM1 from a new response function including all distances of the coordination polyhedron for tris(acetylacetonate)(1,10-phenanthroline)europium(III), (iii) implementation of the model in the software package MOPAC93r2, and (iv) inclusion of spherical Gaussian functions in the expression which computes the core-core repulsion energy. The parametrization results indicate that SMLC II is superior to the previous version of the model because the Gaussian functions proved essential for a better description of the geometries of the complexes. In order to validate our parametrization, we carried out calculations on 96 europium(III) complexes selected from the Cambridge Structural Database 2003 and compared our predicted ground-state geometries with the experimental ones. Our results show that this new parametrization of the SMLC model, with the inclusion of spherical Gaussian functions in the core-core repulsion energy, is better at predicting the Eu-ligand distances than the previous version. The unsigned mean error for all Eu-L interatomic distances, over all 96 complexes, is lowered from 0.3564 Å for the original SMLC to 0.1993 Å when the model is parametrized with the inclusion of two Gaussian functions. Our results also indicate that this model is more applicable to europium complexes with beta-diketone ligands. As such, we conclude that this improved model can be considered a powerful tool for the study of lanthanide complexes and their applications, such as the modeling of light conversion molecular devices.

  6. Assessment of wear dependence parameters in complex model of cutting tool wear

    NASA Astrophysics Data System (ADS)

    Antsev, A. V.; Pasko, N. I.; Antseva, N. V.

    2018-03-01

    This paper addresses the wear dependence of the effective life period of cutting tools, treated as an aggregate of the law of tool wear rate distribution and the dependence of this law's parameters on the cutting mode, factoring in randomness as exemplified by the complex model of wear. The complex model of wear takes into account the variance of cutting properties within one batch of tools, the variance in machinability within one batch of workpieces, and the stochastic nature of the wear process itself. A technique for assessing the wear dependence parameters in a complex model of cutting tool wear is provided. The technique is supported by a numerical example.

  7. STEPS: Modeling and Simulating Complex Reaction-Diffusion Systems with Python

    PubMed Central

    Wils, Stefan; Schutter, Erik De

    2008-01-01

    We describe how the use of the Python language improved the user interface of the program STEPS. STEPS is a simulation platform for modeling and stochastic simulation of coupled reaction-diffusion systems with complex 3-dimensional boundary conditions. Setting up such models is a complicated process that consists of many phases. Initial versions of STEPS relied on a static input format that did not cleanly separate these phases, limiting modelers in how they could control the simulation and becoming increasingly complex as new features and new simulation algorithms were added. We solved all of these problems by tightly integrating STEPS with Python, using SWIG to expose our existing simulation code. PMID:19623245
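
    For readers unfamiliar with the class of problems STEPS addresses, the sketch below runs a plain Gillespie stochastic simulation of a reversible reaction in a well-mixed volume, i.e., the kinetics STEPS solves with diffusion and complex 3-dimensional geometry stripped away. It deliberately does not use the STEPS Python API; all names and rate values are our own.

      import numpy as np

      rng = np.random.default_rng(42)

      # Gillespie SSA for A + B <-> C in a well-mixed volume
      x = np.array([100, 80, 0])                  # molecule counts of A, B, C
      stoich = np.array([[-1, -1, +1],            # forward:  A + B -> C
                         [+1, +1, -1]])           # backward: C -> A + B
      kf, kb, t, t_end = 0.005, 0.1, 0.0, 50.0    # illustrative rate constants

      while t < t_end:
          rates = np.array([kf * x[0] * x[1], kb * x[2]])
          total = rates.sum()
          if total == 0:
              break
          t += rng.exponential(1.0 / total)       # time to next reaction event
          r = rng.choice(2, p=rates / total)      # which reaction fires
          x += stoich[r]

      print(t, x)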

  8. Tips on Creating Complex Geometry Using Solid Modeling Software

    ERIC Educational Resources Information Center

    Gow, George

    2008-01-01

    Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…

  9. The effects of numerical-model complexity and observation type on estimated porosity values

    USGS Publications Warehouse

    Starn, Jeffrey; Bagtzoglou, Amvrossios C.; Green, Christopher T.

    2015-01-01

    The relative merits of model complexity and types of observations employed in model calibration are compared. An existing groundwater flow model of the Salt Lake Valley, Utah (USA), is adapted for advective transport simulation, and effective porosity is adjusted until simulated tritium concentrations match concentrations in samples from wells. Two calibration approaches are used: a “complex” highly parameterized porosity field and a “simple” parsimonious model of porosity distribution. The use of an atmospheric tracer (tritium in this case) and of apparent ages (from tritium/helium) in model calibration is also discussed. Of the models tested, the complex model (with tritium concentrations and tritium/helium apparent ages) performs best. Although tritium breakthrough curves simulated by complex and simple models are very generally similar, and there is value in the simple model, the complex model is supported by a more realistic porosity distribution and a greater number of estimable parameters. Culling the best quality data did not lead to better calibration, possibly because of processes and aquifer characteristics that are not simulated. Despite many factors that contribute to shortcomings of both the models and the data, useful information is obtained from all the models evaluated. Although any particular prediction of tritium breakthrough may have large errors, overall, the models mimic observed trends.

  10. Mechanics of the Cell

    NASA Astrophysics Data System (ADS)

    Boal, David

    2012-01-01

    Preface; List of symbols; 1. Introduction to the cell; 2. Soft materials and fluids; Part I. Rods and Ropes: 3. Polymers; 4. Complex filaments; 5. Two-dimensional networks; 6. Three-dimensional networks; Part II. Membranes: 7. Biomembranes; 8. Membrane undulations; 9. Intermembrane and electrostatic forces; Part III. The Whole Cell: 10. Structure of the simplest cells; 11. Dynamic filaments; 12. Growth and division; 13. Signals and switches; Appendixes; Glossary; References; Index.

  11. Complexity, accuracy and practical applicability of different biogeochemical model versions

    NASA Astrophysics Data System (ADS)

    Los, F. J.; Blaas, M.

    2010-04-01

    The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices, particularly with respect to the level of detail of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might enhance veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total, 11 biogeochemical model variants have been considered, based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15-year-old applications with relatively simple physics up to present state-of-the-art 3D models. With all applications the same year, 2003, has been simulated. During the model intercomparison it was noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination between different models. This results in models obtaining similar scores although closer inspection of the results reveals large differences. In this paper, therefore, we have adopted the target diagram of Jolliff et al. (2008), which provides a concise and more contrasting picture of model skill over the entire model domain and for the entire period of the simulations. Correctness in predicting the mean and the variability is assessed separately, thus enhancing insight into model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations. Nevertheless, whether the improvements are all genuine for the particular
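
    The two axes of the target diagram adopted here are the normalized bias and the signed, normalized unbiased (centred) RMSD. A short sketch of those statistics, with synthetic series standing in for model output and observations:

      import numpy as np

      def target_stats(model, obs):
          """The two axes of a Jolliff et al. (2008) target diagram: normalized
          bias (y) and signed, normalized unbiased RMSD (x)."""
          bias = (model.mean() - obs.mean()) / obs.std()
          crmsd = np.sqrt(np.mean(((model - model.mean()) - (obs - obs.mean())) ** 2))
          sign = 1.0 if model.std() >= obs.std() else -1.0   # over- vs under-dispersive
          return sign * crmsd / obs.std(), bias

      obs = np.sin(np.linspace(0, 6, 120)) + 0.1         # synthetic observations
      mod = 0.8 * np.sin(np.linspace(0, 6, 120)) + 0.3   # a biased, damped model
      print(target_stats(mod, obs))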

  12. Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication

    NASA Astrophysics Data System (ADS)

    Thompson, Kimberly M.

    Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.

  13. Complex Road Intersection Modelling Based on Low-Frequency GPS Track Data

    NASA Astrophysics Data System (ADS)

    Huang, J.; Deng, M.; Zhang, Y.; Liu, H.

    2017-09-01

    It is widely accepted that digital maps have become an indispensable guide for human daily traveling. Traditional road network maps are produced in time-consuming and labour-intensive ways, such as digitizing printed maps and extraction from remote sensing images. At present, the large volume of GPS trajectory data collected by floating vehicles makes it a reality to extract high-detail and up-to-date road network information. Road intersections are often accident-prone areas and are critical to route planning, and the connectivity of road networks is mainly determined by the topological geometry of road intersections. A few studies have paid attention to detecting complex road intersections and mining the attached traffic information (e.g., connectivity, topology and turning restrictions) from massive GPS traces. To the authors' knowledge, recent studies mainly used high-frequency (1 s sampling rate) trajectory data to detect crossroads regions or extract rough intersection models. It is still difficult to make use of low-frequency (20-100 s) and easily available trajectory data to model complex road intersections geometrically and semantically. The paper thus attempts to construct precise models of complex road intersections by using low-frequency GPS traces. We propose to first extract the complex road intersections by an LCSS-based (Longest Common Subsequence) trajectory clustering method, then delineate the geometry shapes of complex road intersections by a K-segment principal curve algorithm, and finally infer the traffic constraint rules inside the complex intersections.
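
    The LCSS measure underlying the clustering step can be computed with standard dynamic programming: two trajectory points match when they fall within a distance threshold, and the similarity is the matched length normalized by the shorter trajectory. The sketch below is a generic implementation with invented coordinates and threshold, not the paper's parameterization.

      import numpy as np

      def lcss(traj_a, traj_b, eps=10.0):
          """Longest Common Subsequence length between two 2D trajectories:
          points match when they lie within eps metres of each other."""
          n, m = len(traj_a), len(traj_b)
          dp = np.zeros((n + 1, m + 1), dtype=int)
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  if np.hypot(*(traj_a[i - 1] - traj_b[j - 1])) <= eps:
                      dp[i, j] = dp[i - 1, j - 1] + 1
                  else:
                      dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
          return dp[n, m]

      def lcss_similarity(a, b, eps=10.0):
          return lcss(a, b, eps) / min(len(a), len(b))   # in [0, 1], for clustering

      a = np.array([[0, 0], [5, 4], [12, 9], [20, 15.0]])
      b = np.array([[1, 1], [6, 5], [13, 8], [25, 30.0]])
      print(lcss_similarity(a, b, eps=5.0))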

  14. Structured analysis and modeling of complex systems

    NASA Technical Reports Server (NTRS)

    Strome, David R.; Dalrymple, Mathieu A.

    1992-01-01

    The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

  15. A model of clutter for complex, multivariate geospatial displays.

    PubMed

    Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L

    2009-02-01

    A novel model of measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
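
    One plausible reading of the colour-clustering idea, sketched below, is to cluster pixels in colour space and let the score grow with the number of active colour clusters and their spread; this toy stand-in is our own construction and not the published C3 formulation.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      def c3_like_clutter(image_rgb, k=16):
          """Toy colour-clustering clutter score: more active colour clusters
          and more within-cluster spread give a higher score."""
          pixels = image_rgb.reshape(-1, 3).astype(float)
          _, labels = kmeans2(pixels, k, minit='++', seed=0)
          used = np.unique(labels)
          density = used.size / k                         # fraction of clusters in use
          spread = np.mean([pixels[labels == c].std() for c in used])
          return density * (1.0 + spread / 255.0)

      img = np.random.default_rng(0).integers(0, 256, (64, 64, 3))  # synthetic image
      print(c3_like_clutter(img))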

  16. Modeling complexity in engineered infrastructure system: Water distribution network as an example

    NASA Astrophysics Data System (ADS)

    Zeng, Fang; Li, Xiang; Li, Ke

    2017-02-01

    The complex topology and adaptive behavior of infrastructure systems are driven both by self-organization of demand and by rigid engineering solutions. Therefore, engineering complex systems requires a method balancing holism and reductionism. To model the growth of water distribution networks, a complex network model was developed combining local optimization rules and engineering considerations. The demand node generation is dynamic and follows the scaling law of urban growth. The proposed model can generate a water distribution network (WDN) similar to reported real-world WDNs in some structural properties. Comparison with different modeling approaches indicates that a realistic demand node distribution and the co-evolution of demand nodes and network are important for the simulation of real complex networks. The simulation results indicate that the efficiency of water distribution networks is exponentially affected by the urban growth pattern. By contrast, the improvement of efficiency by engineering optimization is limited and relatively insignificant. The redundancy and robustness, on the other hand, can be significantly improved through engineering methods.
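
    A toy sketch of such a growth model: demand nodes appear near existing ones, each new node is piped to its nearest neighbour (local optimization), and occasional loop-closing pipes stand in for engineering redundancy. All distributions and probabilities are invented for illustration.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      G = nx.Graph()
      G.add_node(0, pos=np.zeros(2))               # seed demand node

      for i in range(1, 200):
          # new demand appears near an existing node (toy stand-in for urban growth)
          anchor = G.nodes[int(rng.integers(len(G)))]['pos']
          pos = anchor + rng.normal(0.0, 0.5, 2)
          G.add_node(i, pos=pos)
          # local optimization: pipe to the nearest node; occasionally add a
          # second pipe to the next-nearest node as a redundancy loop
          dists = sorted((float(np.linalg.norm(pos - G.nodes[j]['pos'])), j)
                         for j in G.nodes if j != i)
          G.add_edge(i, dists[0][1])
          if len(dists) > 1 and rng.random() < 0.1:
              G.add_edge(i, dists[1][1])

      print(G.number_of_nodes(), G.number_of_edges())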

  17. Modeling complex tone perception: grouping harmonics with combination-sensitive neurons.

    PubMed

    Medvedev, Andrei V; Chiao, Faye; Kanwal, Jagmeet S

    2002-06-01

    Perception of complex communication sounds is a major function of the auditory system. To create a coherent percept of these sounds the auditory system may instantaneously group or bind multiple harmonics within complex sounds. This perception strategy simplifies further processing of complex sounds and facilitates their meaningful integration with other sensory inputs. Based on experimental data and a realistic model, we propose that associative learning of combinations of harmonic frequencies and nonlinear facilitation of responses to those combinations, also referred to as "combination-sensitivity," are important for spectral grouping. For our model, we simulated combination sensitivity using Hebbian and associative types of synaptic plasticity in auditory neurons. We also provided a parallel tonotopic input that converges and diverges within the network. Neurons in higher-order layers of the network exhibited an emergent property of multifrequency tuning that is consistent with experimental findings. Furthermore, this network had the capacity to "recognize" the pitch or fundamental frequency of a harmonic tone complex even when the fundamental frequency itself was missing.
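
    The emergence of combination sensitivity from Hebbian plasticity can be caricatured in a few lines: a single higher-order unit receiving tonotopic inputs strengthens exactly those synapses that are co-activated by harmonically related channels. The parameters, the co-activation pattern and the learning-rule details below are illustrative assumptions, not the paper's network.

      import numpy as np

      rng = np.random.default_rng(0)
      n_inputs = 20                      # tonotopic input channels
      w = np.full(n_inputs, 0.05)        # synaptic weights onto one higher-order unit
      eta, decay = 0.01, 0.001

      for _ in range(2000):
          x = np.zeros(n_inputs)
          if rng.random() < 0.5:
              x[[3, 6, 9]] = 1.0         # harmonically related channels co-activated
          else:
              x[rng.integers(n_inputs)] = 1.0   # single random tone
          y = max(w @ x - 0.05, 0.0)            # thresholded response
          w += eta * y * x - decay * w          # Hebbian growth with passive decay
          w = np.clip(w, 0.0, 1.0)

      print(np.argsort(w)[-3:])          # the combination-tuned channels dominate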

  18. A computational framework for modeling targets as complex adaptive systems

    NASA Astrophysics Data System (ADS)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

  19. Contrasting model complexity under a changing climate in a headwaters catchment.

    NASA Astrophysics Data System (ADS)

    Foster, L.; Williams, K. H.; Maxwell, R. M.

    2017-12-01

    Alpine, snowmelt-dominated catchments are the source of water for more than 1/6th of the world's population. These catchments are topographically complex, leading to steep weather gradients and nonlinear relationships between water and energy fluxes. Recent evidence suggests that alpine systems are more sensitive to climate warming, but these regions are vastly simplified in climate models and operational water management tools due to computational limitations. Simultaneously, point-scale observations are often extrapolated to larger regions where feedbacks can both exacerbate or mitigate locally observed changes. It is critical to determine whether projected climate impacts are robust to different methodologies, including model complexity. Using high performance computing and an integrated model of a representative headwater catchment we determined the hydrologic response from 30 projected climate changes to precipitation, temperature and vegetation for the Rocky Mountains. Simulations were run with 100m and 1km resolution, and with and without lateral subsurface flow in order to vary model complexity. We found that model complexity alters nonlinear relationships between water and energy fluxes. Higher-resolution models predicted larger changes per degree of temperature increase than lower resolution models, suggesting that reductions to snowpack, surface water, and groundwater due to warming may be underestimated in simple models. Increases in temperature were found to have a larger impact on water fluxes and stores than changes in precipitation, corroborating previous research showing that mountain systems are significantly more sensitive to temperature changes than to precipitation changes and that increases in winter precipitation are unlikely to compensate for increased evapotranspiration in a higher energy environment. These numerical experiments help to (1) bracket the range of uncertainty in published literature of climate change impacts on headwater

  20. Schizophrenia: an integrative approach to modelling a complex disorder

    PubMed Central

    Robertson, George S.; Hori, Sarah E.; Powell, Kelly J.

    2006-01-01

    The discovery of candidate susceptibility genes for schizophrenia and the generation of mice lacking proteins that reproduce biochemical processes that are disrupted in this mental illness offer unprecedented opportunities for improved modelling of this complex disorder. Several lines of evidence indicate that obstetrical complications, as well as fetal or neonatal exposure to viral infection, are predisposing events for some forms of schizophrenia. These environmental events can be modelled in animals, resulting in some of the characteristic features of schizophrenia; however, animal models have yet to be developed that encompass both environmental and genetic aspects of this mental illness. A large number of candidate schizophrenia susceptibility genes have been identified that encode proteins implicated in the regulation of synaptic plasticity, neurotransmission, neuronal migration, cell adherence, signal transduction, energy metabolism and neurite outgrowth. In support of the importance of these processes in schizophrenia, mice that have reduced levels or completely lack proteins that control glutamatergic neurotransmission, neuronal migration, cell adherence, signal transduction, neurite outgrowth and synaptic plasticity display many features reminiscent of schizophrenia. In the present review, we discuss strategies for modelling schizophrenia that involve treating mice that bear these mutations in a variety of ways to better model both environmental and genetic factors responsible for this complex mental illness according to a “two-hit hypothesis.” Because rodents are able to perform complex cognitive tasks using odour but not visual or auditory cues, we hypothesize that olfactory-based tests of cognitive performance should be used to search for novel therapeutics that ameliorate the cognitive deficits that are a feature of this devastating mental disorder. PMID:16699601

  1. Modeling and simulation for fewer-axis grinding of complex surface

    NASA Astrophysics Data System (ADS)

    Li, Zhengjian; Peng, Xiaoqiang; Song, Ci

    2017-10-01

    As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vectors of the wheel profile can be calculated. Through normal vector matching at the cutter contact point and a coordinate system transformation, the grinding mathematical model was established to work out the coordinates of the cutter location point. Based on the model, an interference analysis was simulated to find the right position and posture of the workpiece for grinding. Then the positioning errors of the workpiece, including the translation positioning error and the rotation positioning error, were analyzed respectively, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.
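
    The normal-vector matching step amounts to finding a rotation that carries the surface normal at the cutter contact point onto the wheel normal. A generic sketch using the Rodrigues formula, with invented normal vectors, is given below; the full model also involves the wheel-profile geometry and translation terms omitted here.

      import numpy as np

      def rotation_aligning(a, b):
          """Proper rotation matrix taking unit vector a onto unit vector b."""
          a = a / np.linalg.norm(a)
          b = b / np.linalg.norm(b)
          v = np.cross(a, b)
          c = float(a @ b)
          if np.isclose(c, -1.0):                   # antiparallel: rotate pi about
              p = np.eye(3)[np.argmin(np.abs(a))]   # an axis perpendicular to a
              p -= (p @ a) * a
              p /= np.linalg.norm(p)
              return 2.0 * np.outer(p, p) - np.eye(3)
          K = np.array([[0, -v[2], v[1]],
                        [v[2], 0, -v[0]],
                        [-v[1], v[0], 0]])
          return np.eye(3) + K + K @ K / (1.0 + c)  # Rodrigues formula

      # match the surface normal at the cutter contact point to the wheel normal
      n_surface = np.array([0.2, -0.1, 0.97])       # illustrative values
      n_wheel = np.array([0.0, 0.0, 1.0])
      R = rotation_aligning(n_surface, n_wheel)
      print(np.round(R @ (n_surface / np.linalg.norm(n_surface)), 6))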

  2. A modeling framework for exposing risks in complex systems.

    PubMed

    Sharit, J

    2000-08-01

    This article introduces and develops a modeling framework for exposing risks in the form of human errors and adverse consequences in high-risk systems. The modeling framework is based on two components: a two-dimensional theory of accidents in systems developed by Perrow in 1984, and the concept of multiple system perspectives. The theory of accidents differentiates systems on the basis of two sets of attributes. One set characterizes the degree to which systems are interactively complex; the other emphasizes the extent to which systems are tightly coupled. The concept of multiple perspectives provides alternative descriptions of the entire system that serve to enhance insight into system processes. The usefulness of these two model components derives from a modeling framework that cross-links them, enabling a variety of work contexts to be exposed and understood that would otherwise be very difficult or impossible to identify. The model components and the modeling framework are illustrated in the case of a large and comprehensive trauma care system. In addition to its general utility in the area of risk analysis, this methodology may be valuable in applications of current methods of human and system reliability analysis in complex and continually evolving high-risk systems.

  3. Nonlinear model of epidemic spreading in a complex social network.

    PubMed

    Kosiński, Robert A; Grabowski, A

    2007-10-01

    The epidemic spreading in a human society is a complex process, which can be described on the basis of a nonlinear mathematical model. In such an approach the complex and hierarchical structure of the social network (which has implications for the spreading of pathogens and can be treated as a complex network) can be taken into account. In our model each individual is in one of five permitted states: susceptible, infected, infective, unsusceptible, or dead. This refers to the SEIR model used in epidemiology. The state of an individual changes in time, depending on the previous state and the interactions with other individuals. The description of the interpersonal contacts is based on experimental observations of the social relations in the community. It includes spatial localization of the individuals and the hierarchical structure of interpersonal interactions. Numerical simulations were performed for different types of epidemics, giving the progress of the spreading process and typical relationships (e.g. range of epidemic in time, the epidemic curve). The spreading process has a complex and spatially chaotic character. The time dependence of the number of infective individuals shows the nonlinear character of the spreading process. We investigate the influence of preventive vaccinations on the spreading process. In particular, for a critical value of preventively vaccinated individuals the percolation threshold is observed and the epidemic is suppressed.
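
    A minimal network-epidemic sketch with the five states listed above is given below; the contact network, transition probabilities and update order are invented stand-ins for the spatially localized, hierarchical contact structure used in the paper.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      G = nx.watts_strogatz_graph(2000, k=8, p=0.1, seed=0)  # stand-in contact network
      S, E, I, R, D = range(5)   # susceptible, infected, infective, unsusceptible, dead
      state = np.full(G.number_of_nodes(), S)
      state[rng.choice(len(state), 5, replace=False)] = I    # index cases

      beta, lat, rec, mu = 0.08, 0.3, 0.15, 0.01   # per-step transition probabilities
      for _ in range(200):
          new = state.copy()
          for u in np.flatnonzero(state == I):               # infective individuals
              for v in G.neighbors(u):
                  if state[v] == S and rng.random() < beta:
                      new[v] = E                             # contact infection
          r = rng.random(len(state))
          new[(state == E) & (r < lat)] = I                  # latent become infective
          new[(state == I) & (r < mu)] = D                   # some infectives die...
          new[(state == I) & (r >= mu) & (r < mu + rec)] = R # ...others recover
          state = new

      print(np.bincount(state, minlength=5))                 # final S, E, I, R, D counts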

  4. Deciphering the complexity of acute inflammation using mathematical models.

    PubMed

    Vodovotz, Yoram

    2006-01-01

    Various stresses elicit an acute, complex inflammatory response, leading to healing but sometimes also to organ dysfunction and death. We constructed both equation-based models (EBM) and agent-based models (ABM) of various degrees of granularity--which encompass the dynamics of relevant cells, cytokines, and the resulting global tissue dysfunction--in order to begin to unravel these inflammatory interactions. The EBMs describe and predict various features of septic shock and trauma/hemorrhage (including the response to anthrax, preconditioning phenomena, and irreversible hemorrhage) and were used to simulate anti-inflammatory strategies in clinical trials. The ABMs that describe the interrelationship between inflammation and wound healing yielded insights into intestinal healing in necrotizing enterocolitis, vocal fold healing during phonotrauma, and skin healing in the setting of diabetic foot ulcers. Modeling may help in understanding the complex interactions among the components of inflammation and response to stress, and therefore aid in the development of novel therapies and diagnostics.

  5. Low frequency complex dielectric (conductivity) response of dilute clay suspensions: Modeling and experiments.

    PubMed

    Hou, Chang-Yu; Feng, Ling; Seleznev, Nikita; Freed, Denise E

    2018-09-01

    In this work, we establish an effective medium model to describe the low-frequency complex dielectric (conductivity) dispersion of dilute clay suspensions. We use previously obtained low-frequency polarization coefficients for a charged oblate spheroidal particle immersed in an electrolyte as the building block for the Maxwell Garnett mixing formula to model the dilute clay suspension. The complex conductivity phase dispersion exhibits a near-resonance peak when the clay grains have a narrow size distribution. The peak frequency is associated with the size distribution as well as the shape of clay grains and is often referred to as the characteristic frequency. In contrast, if the size of the clay grains has a broad distribution, the phase peak is broadened and can disappear into the background of the canonical phase response of the brine. To benchmark our model, the low-frequency dispersion of the complex conductivity of dilute clay suspensions is measured using a four-point impedance measurement, which can be reliably calibrated in the frequency range between 0.1 Hz and 10 kHz. By using a minimal number of fitting parameters when reliable information is available as input for the model and carefully examining the issue of potential over-fitting, we found that our model can be used to fit the measured dispersion of the complex conductivity with reasonable parameters. The good match between the modeled and experimental complex conductivity dispersion allows us to argue that our simplified model captures the essential physics for describing the low-frequency dispersion of the complex conductivity of dilute clay suspensions.
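
    For orientation, the classical Maxwell Garnett mixing rule for spherical inclusions of volume fraction f can be written as follows; the model described above replaces the spherical factor β(ω) with low-frequency polarization coefficients for charged oblate spheroids, so this is the structural skeleton rather than the full form used in the paper.

      \sigma^{*}_{\mathrm{eff}}(\omega)
          = \sigma^{*}_{m}\,\frac{1 + 2 f \beta(\omega)}{1 - f \beta(\omega)},
      \qquad
      \beta(\omega)
          = \frac{\sigma^{*}_{i}(\omega) - \sigma^{*}_{m}}
                 {\sigma^{*}_{i}(\omega) + 2\,\sigma^{*}_{m}}

    where σ*_m and σ*_i are the complex conductivities of the electrolyte and of the suspended grains, respectively.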

  6. Cx-02 Program, workshop on modeling complex systems

    USGS Publications Warehouse

    Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.

    2003-01-01

    This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, geology and geophysics work has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking of complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years into the past. Not only do geologists think in the abstract by virtue of the vast time spans; most of the evidence is out of sight. A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.

  7. An Ontology for Modeling Complex Inter-relational Organizations

    NASA Astrophysics Data System (ADS)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

    This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro-analytic level of transactions; this is supplemented here with a micro-analytic study of the actors' rationale. First, the paper reviews the enterprise ontology literature to position our proposal and exposes its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram provides an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  8. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...

  9. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...

  10. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...

  11. 40 CFR 80.49 - Fuels to be used in augmenting the complex emission model through vehicle testing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... complex emission model through vehicle testing. 80.49 Section 80.49 Protection of Environment... Reformulated Gasoline § 80.49 Fuels to be used in augmenting the complex emission model through vehicle testing... augmenting the complex emission model with a parameter not currently included in the complex emission model...

  12. A complex fermionic tensor model in d dimensions

    NASA Astrophysics Data System (ADS)

    Prakash, Shiroman; Sinha, Ritam

    2018-02-01

    In this note, we study a melonic tensor model in d dimensions based on three-index Dirac fermions with a four-fermion interaction. Summing the melonic diagrams at strong coupling allows one to define a formal large-N saddle point in arbitrary d and calculate the spectrum of scalar bilinear singlet operators. For d = 2 - ɛ the theory is an infrared fixed point, which we find has a purely real spectrum that we determine numerically for arbitrary d < 2, and analytically as a power series in ɛ. The theory appears to be weakly interacting when ɛ is small, suggesting that fermionic tensor models in 1 dimension can be studied in an ɛ expansion. For d > 2, the spectrum can still be calculated using the saddle point equations, which may define a formal large-N ultraviolet fixed point analogous to the Gross-Neveu model in d > 2. For 2 < d < 6, we find that the spectrum contains at least one complex scalar eigenvalue (similar to the complex eigenvalue present in the bosonic tensor model recently studied by Giombi, Klebanov and Tarnopolsky) which indicates that the theory is unstable. We also find that the fixed point is weakly interacting when d = 6 (or more generally d = 4n + 2) and has a real spectrum for 6 < d < 6.14, which we present as a power series in ɛ in 6 + ɛ dimensions.

  13. High-resolution dust modelling over complex terrains in West Asia

    NASA Astrophysics Data System (ADS)

    Basart, S.; Vendrell, L.; Baldasano, J. M.

    2016-12-01

    The present work demonstrates the impact of model resolution on dust propagation in a complex-terrain region such as West Asia. For this purpose, two simulations using the NMMB/BSC-Dust model are performed and analysed, one with a high horizontal resolution (at 0.03° × 0.03°) and one with a lower horizontal resolution (at 0.33° × 0.33°). Both model experiments cover two intense dust storms that occurred on 17-20 March 2012 as a consequence of strong northwesterly Shamal winds that spanned thousands of kilometres in West Asia. The comparison with ground-based (surface weather stations and sunphotometers) and satellite aerosol observations (Aqua/MODIS and MSG/SEVIRI) shows that, despite differences in the magnitude of the simulated dust concentrations, the model is able to reproduce these two dust outbreaks. Differences between the two simulations in the simulated dust spread arise over regional dust transport areas in south-western Saudi Arabia, Yemen and Oman. The complex orography in south-western Saudi Arabia, Yemen and Oman (with peaks higher than 3000 m) has an impact on the transported dust concentration fields over mountain regions. Differences between the two model configurations are mainly associated with the channelization of the dust flow through valleys and with differences in the modelled altitude of the mountains, which alter the meteorology and block the dust fronts, limiting dust transport. These results demonstrate how dust prediction in the vicinity of complex terrain improves with high-horizontal-resolution simulations.

  14. Capturing complexity in work disability research: application of system dynamics modeling methodology.

    PubMed

    Jetha, Arif; Pransky, Glenn; Hettinger, Lawrence J

    2016-01-01

    Work disability (WD) is characterized by variable and occasionally undesirable outcomes. The underlying determinants of WD outcomes include patterns of dynamic relationships among health, personal, organizational and regulatory factors that have been challenging to characterize, and inadequately represented by contemporary WD models. System dynamics modeling (SDM) methodology applies a sociotechnical systems thinking lens to view WD systems as comprising a range of influential factors linked by feedback relationships. SDM can potentially overcome limitations in contemporary WD models by uncovering causal feedback relationships, and conceptualizing dynamic system behaviors. It employs a collaborative and stakeholder-based model building methodology to create a visual depiction of the system as a whole. SDM can also enable researchers to run dynamic simulations to provide evidence of anticipated or unanticipated outcomes that could result from policy and programmatic intervention. SDM may advance rehabilitation research by providing greater insights into the structure and dynamics of WD systems while helping to understand inherent complexity. Challenges related to data availability, determining validity, and the extensive time and technical skill requirements for model building may limit SDM's use in the field and should be considered. Contemporary work disability (WD) models provide limited insight into complexity associated with WD processes. System dynamics modeling (SDM) has the potential to capture complexity through a stakeholder-based approach that generates a simulation model consisting of multiple feedback loops. SDM may enable WD researchers and practitioners to understand the structure and behavior of the WD system as a whole, and inform development of improved strategies to manage straightforward and complex WD cases.

  15. Structural model of control system for hydraulic stepper motor complex

    NASA Astrophysics Data System (ADS)

    Obukhov, A. D.; Dedov, D. L.; Kolodin, A. N.

    2018-03-01

    The article considers the problem of developing a structural model of the control system for a hydraulic stepper drive complex. A comparative analysis of stepper drives and an assessment of the applicability of hydraulic stepper motors (HSM) to problems requiring accurate displacement in space with subsequent positioning of the object are carried out. The presented structural model of the automated control system of the multi-spindle complex of hydraulic stepper drives reflects the main components of the system, as well as the process of its control based on the transfer of control signals to the solenoid valves by the controller. The models and methods described in the article can be used to formalize the control process in technical systems based on the application of hydraulic stepper drives, and allow switching from mechanical control to automated control.

  16. Bayesian Mixed-Membership Models of Complex and Evolving Networks

    DTIC Science & Technology

    2006-12-01

    R. Hughes, J. Parkinson, M. Gerstein, S. J. Wodak, A. Emili, and J. F. Greenblatt. Global landscape of protein complexes in the yeast Saccharomyces...

  17. Reducing the Complexity of an Agent-Based Local Heroin Market Model

    PubMed Central

    Heard, Daniel; Bobashev, Georgiy V.; Morris, Robert J.

    2014-01-01

    This project explores techniques for reducing the complexity of an agent-based model (ABM). The analysis involved a model developed from the ethnographic research of Dr. Lee Hoffer in the Larimer area heroin market, which involved drug users, drug sellers, homeless individuals and police. The authors used statistical techniques to create a reduced version of the original model which maintained simulation fidelity while reducing computational complexity. This involved identifying key summary quantities of individual customer behavior as well as overall market activity and replacing some agents with probability distributions and regressions. The model was then extended to allow external market interventions in the form of police busts. Extensions of this research perspective, as well as its strengths and limitations, are discussed. PMID:25025132
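
    A minimal sketch of the reduction idea described above, replacing a population of simulated customer agents with draws from a moment-matched distribution; the data and parameters are synthetic, not from the Hoffer market model.

```python
# Sketch: replace individually simulated customer agents with draws from a
# distribution fitted to their summary behavior (a gamma-Poisson mixture,
# moment-matched to the "full" model). All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# "Full" ABM stand-in: weekly purchase counts from heterogeneous agents
full_counts = rng.poisson(lam=rng.gamma(2.0, 1.5, size=1000))

# Reduced model: moment-match a gamma-Poisson mixture to the observed counts
mu, var = full_counts.mean(), full_counts.var()
shape = mu**2 / max(var - mu, 1e-9)
scale = max(var - mu, 1e-9) / mu

reduced_counts = rng.poisson(rng.gamma(shape, scale, size=1000))
print(mu, reduced_counts.mean())   # market-level activity is preserved
```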

  18. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    The complexity of models resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but the effect that these new components have on existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
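
    The variance-based GSA step described above can be illustrated with a first-order Sobol index estimate; the sketch below uses the standard Ishigami test function and the Saltelli (2010) estimator, not the authors' integrated environmental model.

```python
# First-order Sobol sensitivity indices via the Saltelli (2010) estimator,
# demonstrated on the Ishigami test function (a stand-in model).
import numpy as np

def model(x):
    # Ishigami function: a classic GSA benchmark with interacting inputs
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 \
        + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
N, d = 4096, 3
A = rng.uniform(-np.pi, np.pi, (N, d))
B = rng.uniform(-np.pi, np.pi, (N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # swap in column i from B
    S1 = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli 2010 estimator
    print(f"first-order Sobol index S{i + 1} ~ {S1:.2f}")
```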

  19. Border Security: A Conceptual Model of Complexity

    DTIC Science & Technology

    2013-12-01

    This research applies complexity and system dynamics theory to the idea of border security, culminating in the development of... alternative policy options. The research explores whether border security is a living system, in other words, whether... border inspections. Washington State, for example, experienced a 50% drop in tourism and lost over $100 million in local revenue because of the...

  20. Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Ilbeigi, Shahab; Chelidze, David

    2017-11-01

    Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
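
    A compact sketch of how POD and SOD subspaces differ, assuming a synthetic snapshot matrix with one slow, high-energy mode and one fast, low-energy mode; this illustrates the decompositions themselves, not the authors' reduction framework.

```python
# POD vs SOD on a snapshot matrix X (rows: time samples, columns: DOFs).
# POD ranks directions by variance; SOD solves a generalized eigenproblem
# Sigma_x phi = lambda Sigma_v phi and separates slow from fast dynamics.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
X = (np.outer(np.sin(1.0 * t), [1.0, 0.5, 0.2])            # slow, large mode
     + 0.1 * np.outer(np.sin(25.0 * t), [0.2, -1.0, 0.5])  # fast, small mode
     + 0.01 * rng.standard_normal((len(t), 3)))            # keep ranks full
V = np.gradient(X, t, axis=0)                              # time derivatives

pod_vals, pod_modes = np.linalg.eigh(X.T @ X / len(t))
sod_vals, sod_modes = eigh(X.T @ X / len(t), V.T @ V / len(t))

print(pod_vals[::-1])   # POD favors high-variance directions
print(sod_vals[::-1])   # SOD eigenvalues rank modes by slowness
```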

  1. Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer

    NASA Astrophysics Data System (ADS)

    Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.

    2016-12-01

    Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed based on relatively simple conceptual representations in favor of model calibratibility. As more complexities are modeled, e.g., by adding more layers and/or zones, or introducing transient processes, more parameters have to be estimated and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore the use of an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics on high resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., `AQ' (aquifer material), `MAQ' (marginal aquifer material'), `PCM' (partially confining material), and `CM' (confining material), are simulated, with the hydraulic properties of each material type as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration/simulation than that based on conventional deterministic layer/zone based conceptual representations.
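
    The transition-probability idea can be sketched as estimating a vertical facies transition matrix from a borehole class log; the facies codes follow the abstract, while the example log itself is invented.

```python
# Toy illustration of transition-probability (TP) geostatistics: estimate a
# vertical facies transition matrix from a borehole class log. Facies codes
# (AQ, MAQ, PCM, CM) follow the abstract; the log is made up.
import numpy as np

classes = ["AQ", "MAQ", "PCM", "CM"]
log = ["AQ", "AQ", "MAQ", "PCM", "CM", "CM", "CM", "PCM", "AQ", "AQ", "MAQ"]

idx = {c: i for i, c in enumerate(classes)}
counts = np.zeros((4, 4))
for a, b in zip(log[:-1], log[1:]):      # successive intervals downhole
    counts[idx[a], idx[b]] += 1

# row-normalize: t_ij = Pr(next interval = j | current interval = i)
tp = counts / counts.sum(axis=1, keepdims=True)
print(np.round(tp, 2))
```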

  2. Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    PubMed

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear, correlation-based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities. These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  3. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    NASA Astrophysics Data System (ADS)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. Faye

    2013-11-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as heterogeneous reactivity, ice nucleation, and cloud droplet formation. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two semi-empirical surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling of aerosol systems because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling results and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.
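
    A hedged sketch of a composition-weighted Szyszkowski-Langmuir calculation in the spirit of the Henning et al. (2005) model described above; all coefficients below are placeholders, not fitted values from the paper.

```python
# Composition-weighted Szyszkowski-Langmuir surface tension sketch: each
# organic contributes an S-L depression term a_i * ln(1 + b_i * C), weighted
# by its carbon mole fraction x_i. Coefficients are illustrative placeholders.
import math

SIGMA_WATER = 72.0   # mN/m, pure water near 298 K

def weighted_sl(sigma_w, organics):
    """organics: list of (x_i, a_i, b_i, C) with mole fractions x_i summing
    to 1 and C the organic concentration (mol/L)."""
    depression = 0.0
    for x, a, b, C in organics:
        depression += x * a * math.log(1.0 + b * C)   # per-species S-L term
    return sigma_w - depression

# two hypothetical organics at 0.5 mol/L
mix = [(0.6, 10.0, 2.0, 0.5), (0.4, 4.0, 8.0, 0.5)]
print(weighted_sl(SIGMA_WATER, mix))   # predicted surface tension, mN/m
```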

  4. Modeling the surface tension of complex, reactive organic-inorganic mixtures

    NASA Astrophysics Data System (ADS)

    Schwier, A. N.; Viglione, G. A.; Li, Z.; McNeill, V. F.

    2013-01-01

    Atmospheric aerosols can contain thousands of organic compounds which impact aerosol surface tension, affecting aerosol properties such as cloud condensation nuclei (CCN) ability. We present new experimental data for the surface tension of complex, reactive organic-inorganic aqueous mixtures mimicking tropospheric aerosols. Each solution contained 2-6 organic compounds, including methylglyoxal, glyoxal, formaldehyde, acetaldehyde, oxalic acid, succinic acid, leucine, alanine, glycine, and serine, with and without ammonium sulfate. We test two surface tension models and find that most reactive, complex, aqueous organic mixtures which do not contain salt are well described by a weighted Szyszkowski-Langmuir (S-L) model which was first presented by Henning et al. (2005). Two approaches for modeling the effects of salt were tested: (1) the Tuckermann approach (an extension of the Henning model with an additional explicit salt term), and (2) a new implicit method proposed here which employs experimental surface tension data obtained for each organic species in the presence of salt used with the Henning model. We recommend the use of method (2) for surface tension modeling because the Henning model (using data obtained from organic-inorganic systems) and the Tuckermann approach provide similar modeling fits and goodness-of-fit (χ2) values, yet the Henning model is a simpler and more physical approach to modeling the effects of salt, requiring fewer empirically determined parameters.

  5. Evaluating models of remember-know judgments: complexity, mimicry, and discriminability.

    PubMed

    Cohen, Andrew L; Rotello, Caren M; Macmillan, Neil A

    2008-10-01

    Remember-know judgments provide additional information in recognition memory tests, but the nature of this information and the attendant decision process are in dispute. Competing models have proposed that remember judgments reflect a sum of familiarity and recollective information (the one-dimensional model), are based on a difference between these strengths (STREAK), or are purely recollective (the dual-process model). A choice among these accounts is sometimes made by comparing the precision of their fits to data, but this strategy may be muddied by differences in model complexity: Some models that appear to provide good fits may simply be better able to mimic the data produced by other models. To evaluate this possibility, we simulated data with each of the models in each of three popular remember-know paradigms, then fit those data to each of the models. We found that the one-dimensional model is generally less complex than the others, but despite this handicap, it dominates the others as the best-fitting model. For both reasons, the one-dimensional model should be preferred. In addition, we found that some empirical paradigms are ill-suited for distinguishing among models. For example, data collected by soliciting remember/know/new judgments--that is, the trinary task--provide a particularly weak ground for distinguishing models. Additional tables and figures may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, at www.psychonomic.org/archive.

  6. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  7. Socio-Environmental Resilience and Complex Urban Systems Modeling

    NASA Astrophysics Data System (ADS)

    Deal, Brian; Petri, Aaron; Pan, Haozhi; Goldenberg, Romain; Kalantari, Zahra; Cvetkovic, Vladimir

    2017-04-01

    The increasing pressure of climate change has inspired two normative agendas, socio-technical transitions and socio-ecological resilience, both sharing a complex-systems epistemology (Gillard et al. 2016). Socio-technical solutions include a continuous, massive data-gathering exercise now underway in urban places under the guise of developing a 'smart'(er) city. This has led to the creation of data-rich environments where large data sets have become central to monitoring and forming a response to anomalies. Some have argued that these kinds of data sets can help in planning for resilient cities (Norberg and Cumming 2008; Batty 2013). In this paper, we focus on a more nuanced, ecologically based, socio-environmental perspective of resilience planning that is often given less consideration. Here, we broadly discuss (and model) the tightly linked, mutually influenced social and biophysical subsystems that are critical for understanding urban resilience. We argue for the need to incorporate these subsystem linkages into the resilience planning lexicon through the integration of systems models and planning support systems. We make our case by first providing a context for urban resilience from a socio-ecological and planning perspective. We highlight the data needs for this type of resilience planning and compare them to currently collected data streams in various smart city efforts. This helps to define an approach for operationalizing socio-environmental resilience planning using robust systems models and planning support systems. For this, we draw from our experiences in coupling a spatio-temporal land use model (the Landuse Evolution and impact Assessment Model (LEAM)) with water quality and quantity models in Stockholm, Sweden. We describe the coupling of these systems models using a robust Planning Support System (PSS) structural framework. We use the coupled model simulations and PSS to analyze the connection between urban land use transformation (social) and water

  8. Mathematical concepts for modeling human behavior in complex man-machine systems

    NASA Technical Reports Server (NTRS)

    Johannsen, G.; Rouse, W. B.

    1979-01-01

    Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.

  9. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more.

    PubMed

    Rivas, Elena; Lang, Raymond; Eddy, Sean R

    2012-02-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases.
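
    At the simplest end of the grammar spectrum discussed above sits plain base-pair maximization; a minimal Nussinov-style dynamic program (an illustration only, not TORNADO's implementation) is sketched below.

```python
# Minimal Nussinov-style dynamic program: maximize canonical base pairs,
# the simplest RNA folding "grammar" (not TORNADO's actual code).
def nussinov(seq, min_loop=3):
    pair = {("A", "U"), ("U", "A"), ("G", "C"),
            ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                   # base j left unpaired
            for k in range(i, j - min_loop):      # base j pairs with base k
                if (seq[k], seq[j]) in pair:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # -> 3 pairs for this toy hairpin
```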

  10. Evaluating and Mitigating the Impact of Complexity in Software Models

    DTIC Science & Technology

    2015-12-01

    ... introduction) provides our motivation to study complexity and the essential research questions that we address in this effort. Some background information... provides the reader with a basis for the work and related areas explored. Section 2 (The Impact of Complexity) discusses the impact of model-based

  11. Developing and Modeling Complex Social Interventions: Introducing the Connecting People Intervention

    ERIC Educational Resources Information Center

    Webber, Martin; Reidy, Hannah; Ansari, David; Stevens, Martin; Morris, David

    2016-01-01

    Objectives: Modeling the processes involved in complex social interventions is important in social work practice, as it facilitates their implementation and translation into different contexts. This article reports the process of developing and modeling the connecting people intervention (CPI), a model of practice that supports people with mental…

  12. Are our dynamic water quality models too complex? A comparison of a new parsimonious phosphorus model, SimplyP, and INCA-P

    NASA Astrophysics Data System (ADS)

    Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.

    2017-07-01

    Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to that of one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, fewer than six SimplyP parameters must be determined through calibration and the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. The results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.
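
    A parsimony-minded sketch in the spirit of SimplyP's structure, coupling a one-parameter linear-reservoir runoff model to a linear phosphorus export term; parameter values are invented and the real model is considerably more detailed.

```python
# Parsimonious rainfall-runoff + phosphorus sketch: one linear soil store
# per land-use class driving a linear P wash-off term. Parameters invented.
def simple_runoff_p(precip, tc=5.0, ep_coef=0.1, dt=1.0):
    """precip: daily rainfall (mm); tc: soil-water time constant (days);
    ep_coef: P export coefficient (mg P per mm of flow)."""
    store, q_series, p_series = 0.0, [], []
    for p_mm in precip:
        store += p_mm * dt
        q = store / tc                 # linear-reservoir outflow (mm/day)
        store -= q * dt
        q_series.append(q)
        p_series.append(ep_coef * q)   # phosphorus export proxy
    return q_series, p_series

q, p = simple_runoff_p([10, 0, 0, 20, 5, 0, 0])
print([round(v, 2) for v in q])
```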

  13. Computational modeling of carbohydrate recognition in protein complex

    NASA Astrophysics Data System (ADS)

    Ishida, Toyokazu

    2017-11-01

    To understand the mechanistic principles of carbohydrate recognition by proteins, we propose a systematic computational modeling strategy that maps the conformations of a complex carbohydrate chain onto a reduced 2D free energy surface (2D-FES), determined by MD sampling combined with QM/MM energy corrections. In this article, we first report a detailed atomistic simulation study of norovirus capsid proteins with carbohydrate antigens based on ab initio QM/MM combined with MD-FEP simulations. The present results clearly show that the binding geometries of a complex carbohydrate antigen are determined not by one single, rigid carbohydrate structure, but rather by the sum of averaged conformations mapped onto the minimum free energy region of the QM/MM 2D-FES.

  14. On Convergence of Development Costs and Cost Models for Complex Spaceflight Instrument Electronics

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Patel, Umeshkumar D.; Kasa, Robert L.; Hestnes, Phyllis; Brown, Tammy; Vootukuru, Madhavi

    2008-01-01

    Development costs of a few recent spaceflight instrument electrical and electronics subsystems have diverged from respective heritage cost model predictions. The cost models used are Grass Roots, Price-H and Parametric Model. These cost models originated in the military and industry around 1970 and were successfully adopted and patched by NASA on a mission-by-mission basis for years. However, the complexity of new instruments has recently grown by orders of magnitude. This is most obvious in the complexity of a representative spaceflight instrument electronics data system. It is now required to perform intermediate processing of digitized data apart from conventional processing of science phenomenon signals from multiple detectors. This involves on-board instrument formatting of computational operands from raw data (for example, images), multi-million operations per second on large volumes of data in reconfigurable hardware (in addition to processing on a general-purpose embedded or standalone instrument flight computer), as well as making decisions for on-board system adaptation and resource reconfiguration. The instrument data system is now tasked to perform more functions, such as forming packets and instrument-level data compression of more than one data stream, which are traditionally performed by the spacecraft command and data handling system. It is furthermore required that the electronics box for new complex instruments be developed for single-digit-watt power consumption, small size and low weight, while delivering super-computing capabilities. The conflict between the actual development cost of newer complex instruments and the predictions of their electronics components' heritage cost models seems to be irreconcilable. This conflict and an approach to its resolution are addressed in this paper by determining complexity parameters and a complexity index, and their use in an enhanced cost model.

  15. A framework for modelling the complexities of food and water security under globalisation

    NASA Astrophysics Data System (ADS)

    Dermody, Brian J.; Sivapalan, Murugesu; Stehfest, Elke; van Vuuren, Detlef P.; Wassen, Martin J.; Bierkens, Marc F. P.; Dekker, Stefan C.

    2018-01-01

    We present a new framework for modelling the complexities of food and water security under globalisation. The framework sets out a method to capture regional and sectoral interdependencies and cross-scale feedbacks within the global food system that contribute to emergent water use patterns. The framework integrates aspects of existing models and approaches in the fields of hydrology and integrated assessment modelling. The core of the framework is a multi-agent network of city agents connected by infrastructural trade networks. Agents receive socio-economic and environmental constraint information from integrated assessment models and hydrological models respectively and simulate complex, socio-environmental dynamics that operate within those constraints. The emergent changes in food and water resources are aggregated and fed back to the original models with minimal modification of the structure of those models. It is our conviction that the framework presented can form the basis for a new wave of decision tools that capture complex socio-environmental change within our globalised world. In doing so they will contribute to illuminating pathways towards a sustainable future for humans, ecosystems and the water they share.

  16. Low-complexity stochastic modeling of wall-bounded shear flows

    NASA Astrophysics Data System (ADS)

    Zare, Armin

    Turbulent flows are ubiquitous in nature and they appear in many engineering applications. Transition to turbulence, in general, increases skin-friction drag in air/water vehicles compromising their fuel-efficiency and reduces the efficiency and longevity of wind turbines. While traditional flow control techniques combine physical intuition with costly experiments, their effectiveness can be significantly enhanced by control design based on low-complexity models and optimization. In this dissertation, we develop a theoretical and computational framework for the low-complexity stochastic modeling of wall-bounded shear flows. Part I of the dissertation is devoted to the development of a modeling framework which incorporates data-driven techniques to refine physics-based models. We consider the problem of completing partially known sample statistics in a way that is consistent with underlying stochastically driven linear dynamics. Neither the statistics nor the dynamics are precisely known. Thus, our objective is to reconcile the two in a parsimonious manner. To this end, we formulate optimization problems to identify the dynamics and directionality of input excitation in order to explain and complete available covariance data. For problem sizes that general-purpose solvers cannot handle, we develop customized optimization algorithms based on alternating direction methods. The solution to the optimization problem provides information about critical directions that have maximal effect in bringing model and statistics in agreement. In Part II, we employ our modeling framework to account for statistical signatures of turbulent channel flow using low-complexity stochastic dynamical models. We demonstrate that white-in-time stochastic forcing is not sufficient to explain turbulent flow statistics and develop models for colored-in-time forcing of the linearized Navier-Stokes equations. We also examine the efficacy of stochastically forced linearized NS equations and their
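
    The consistency condition underlying the covariance-completion problem described above can be sketched with the steady-state Lyapunov equation; the matrices below are random placeholders, not linearized Navier-Stokes operators.

```python
# For stable linear dynamics dx = A x dt + noise, a steady-state covariance X
# must satisfy A X + X A' + Q = 0 for some input covariance Q. Given A and a
# sampled X, the implied Q is recovered below; an indefinite Q would signal
# that white-in-time forcing cannot explain the statistics.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n))
A -= (np.linalg.eigvals(A).real.max() + 1.0) * np.eye(n)   # shift to stability

# forward problem: covariance generated by white noise with known Q_true
Q_true = np.eye(n)
X = solve_continuous_lyapunov(A, -Q_true)    # solves A X + X A' = -Q_true

# inverse problem: input statistics implied by the observed covariance
Q_implied = -(A @ X + X @ A.T)
print(np.allclose(Q_implied, Q_true))        # True: statistics consistent with A
```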

  17. Molecular modelling, spectroscopic characterization and biological studies of tetraazamacrocyclic metal complexes

    NASA Astrophysics Data System (ADS)

    Rathi, Parveen; Sharma, Kavita; Singh, Dharam Pal

    2014-09-01

    Macrocyclic complexes of the type [MLX]X2; where L is (C30H28N4), a macrocyclic ligand, M = Cr(III) and Fe(III) and X = Cl-, CH3COO- or NO3-, have been synthesized by template condensation reaction of 1,8-diaminonaphthalene and acetylacetone in the presence of trivalent metal salts in a methanolic medium. The complexes have been formulated as [MLX]X2 due to 1:2 electrolytic nature of these complexes. The complexes have been characterized with the help of elemental analyses, molar conductance measurements, magnetic susceptibility measurements, electronic, infrared, far infrared, Mass spectral studies and molecular modelling. Molecular weight of these complexes indicates their monomeric nature. On the basis of all these studies, a five coordinated square pyramidal geometry has been proposed for all these complexes. These metal complexes have also been screened for their in vitro antimicrobial activities.

  18. Comparing flood loss models of different complexity

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both in the data basis and in the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty, and their temporal and spatial transferability is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explanatory variables, are learned from a set of damage records obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006, for which damage records are available from post-event surveys. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches derived using the data-mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explaining variables.
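
    One of the data-mining approaches mentioned, regression trees, can be sketched on synthetic stand-ins for post-flood survey records as follows.

```python
# Tree-based flood loss sketch: predict relative building damage from a few
# candidate predictors. The training data are synthetic stand-ins for the
# kind of survey records described in the abstract.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n = 500
water_depth = rng.uniform(0, 3, n)          # m above ground floor
duration = rng.uniform(1, 14, n)            # days of inundation
precaution = rng.integers(0, 2, n)          # private mitigation yes/no

rel_loss = np.clip(0.2 * water_depth + 0.01 * duration
                   - 0.1 * precaution + rng.normal(0, 0.05, n), 0, 1)

X = np.column_stack([water_depth, duration, precaution])
model = DecisionTreeRegressor(max_depth=4).fit(X, rel_loss)
print(model.predict([[1.5, 7, 0]]))         # predicted loss ratio
```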

  19. PeTTSy: a computational tool for perturbation analysis of complex systems biology models.

    PubMed

    Domijan, Mirela; Brown, Paul E; Shulgin, Boris V; Rand, David A

    2016-03-10

    Over the last decade sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional Systems Biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and fully addresses the analysis of oscillatory systems. It examines the sensitivity of models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting: the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and
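
    PeTTSy itself is a MATLAB package; the Python sketch below only illustrates the elementary ingredient it generalizes, namely the response of a model output to a temporary parameter perturbation, using a van der Pol oscillator as a stand-in.

```python
# Sensitivity of an oscillator's peak timing to a temporary parameter bump:
# a crude stand-in for the perturbation experiments PeTTSy automates.
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, y, mu):
    return [y[1], mu * (1 - y[0]**2) * y[1] - y[0]]

def peak_time(mu, pulse=0.0):
    # "temporary perturbation": bump mu while t is in [5, 10]
    rhs = lambda t, y: vdp(t, y, mu + (pulse if 5 <= t <= 10 else 0.0))
    sol = solve_ivp(rhs, [0, 40], [2.0, 0.0], max_step=0.01)
    half = len(sol.t) // 2
    return sol.t[half + np.argmax(sol.y[0][half:])]   # time of a late peak

base, perturbed = peak_time(1.0), peak_time(1.0, pulse=0.2)
print(perturbed - base)    # phase response to the transient perturbation
```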

  20. On explicit algebraic stress models for complex turbulent flows

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Speziale, C. G.

    1992-01-01

    Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.

  1. QMU as an approach to strengthening the predictive capabilities of complex models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gray, Genetha Anne.; Boggs, Paul T.; Grace, Matthew D.

    2010-09-01

    Complex systems are made up of multiple interdependent parts, and the behavior of the entire system cannot always be directly inferred from the behavior of the individual parts. They are nonlinear and system responses are not necessarily additive. Examples of complex systems include energy, cyber and telecommunication infrastructures, human and animal social structures, and biological structures such as cells. To meet the goals of infrastructure development, maintenance, and protection for cyber-related complex systems, novel modeling and simulation technology is needed. Sandia has shown success using M&S in the nuclear weapons (NW) program. However, complex systems represent a significant challenge and a relative departure from the classical M&S exercises, and many of the scientific and mathematical M&S processes must be re-envisioned. Specifically, in the NW program, requirements and acceptable margins for performance, resilience, and security are well-defined and given quantitatively from the start. The Quantification of Margins and Uncertainties (QMU) process helps to assess whether or not these safety, reliability and performance requirements have been met after a system has been developed. In this sense, QMU is used as a sort of check that requirements have been met once the development process is completed. In contrast, performance requirements and margins may not have been defined a priori for many complex systems, (i.e. the Internet, electrical distribution grids, etc.), particularly not in quantitative terms. This project addresses this fundamental difference by investigating the use of QMU at the start of the design process for complex systems. Three major tasks were completed. First, the characteristics of the cyber infrastructure problem were collected and considered in the context of QMU-based tools. Second, UQ methodologies for the quantification of model discrepancies were considered in the context of statistical models of cyber activity

  2. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    NASA Astrophysics Data System (ADS)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets, but to date no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run

  3. Effect of shoulder model complexity in upper-body kinematics analysis of the golf swing.

    PubMed

    Bourgain, M; Hybois, S; Thoreux, P; Rouillon, O; Rouch, P; Sauret, C

    2018-06-25

    The golf swing is a complex full-body movement during which the spine and shoulders are highly involved. In order to determine shoulder kinematics during this movement, multibody kinematics optimization (MKO) can be recommended to limit the effect of soft tissue artifact and to avoid joint dislocations or bone penetration in reconstructed kinematics. Classically, in golf biomechanics research, the shoulder is represented by a 3-degrees-of-freedom model of the glenohumeral joint. More complex and physiological models are already available in the scientific literature. In particular, the model used in this study was a full-body model that also described the motions of the clavicles and scapulae. This study aimed at quantifying the effect of using a more complex and physiological shoulder model when studying the golf swing. Results obtained on 20 golfers showed that a more complex and physiologically accurate model can more efficiently track experimental markers, which resulted in differences in joint kinematics. Hence, the model with 3 degrees of freedom between the humerus and the thorax may be inadequate when combined with MKO, and a more physiological model would be beneficial. Finally, results would also be improved through a subject-specific approach for the determination of segment lengths.

  4. Dynamical Behaviors in Complex-Valued Love Model With or Without Time Delays

    NASA Astrophysics Data System (ADS)

    Deng, Wei; Liao, Xiaofeng; Dong, Tao

    2017-12-01

    In this paper, a novel nonlinear model is proposed: a complex-valued love model with two time delays between two individuals in a love affair. A notable feature of this model is that the emotion of one individual is separated into real and imaginary parts to represent the variation and complexity of psychophysiological emotion in a romantic relationship, instead of being restricted to the real domain, which brings the model much closer to reality. This is because love is a complicated cognitive and social phenomenon, full of complexity, diversity and unpredictability, referring to the coexistence of different aspects of feelings, states and attitudes ranging from joy and trust to sadness and disgust. By analyzing the associated characteristic equation of the linearized equations for the model, it is found that a Hopf bifurcation occurs when the sum of the time delays passes through a sequence of critical values. The stability of the bifurcating cyclic love dynamics is also derived by applying normal form theory and the center manifold theorem. In addition, it is shown that, for some appropriately chosen parameters, chaotic behaviors can appear even without time delay.
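
    A hedged Euler-stepping sketch of a two-delay, complex-valued model of this general kind; the coefficients, delays and history functions below are arbitrary illustrations, not the paper's parameterization.

```python
# Euler integration of a two-delay, complex-valued "love" model of the form
# dR/dt = a R(t) + b J(t - tau1), dJ/dt = c J(t) + d R(t - tau2),
# with complex R, J splitting emotion into real and imaginary parts.
import numpy as np

dt, T = 0.01, 60.0
tau1, tau2 = 1.0, 2.0
n1, n2 = int(tau1 / dt), int(tau2 / dt)
a, b, c, d = -0.2 + 0.1j, 0.8, -0.1 + 0.2j, -0.9

steps = int(T / dt)
R = np.zeros(steps, dtype=complex)
J = np.zeros(steps, dtype=complex)
R[: max(n1, n2)] = 0.5 + 0.1j        # constant history functions
J[: max(n1, n2)] = 0.2 - 0.3j

for k in range(max(n1, n2), steps - 1):
    R[k + 1] = R[k] + dt * (a * R[k] + b * J[k - n1])
    J[k + 1] = J[k] + dt * (c * J[k] + d * R[k - n2])

print(abs(R[-1]), abs(J[-1]))        # amplitudes may settle or oscillate
```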

  5. A range of complex probabilistic models for RNA secondary structure prediction that includes the nearest-neighbor model and more

    PubMed Central

    Rivas, Elena; Lang, Raymond; Eddy, Sean R.

    2012-01-01

    The standard approach for single-sequence RNA secondary structure prediction uses a nearest-neighbor thermodynamic model with several thousand experimentally determined energy parameters. An attractive alternative is to use statistical approaches with parameters estimated from growing databases of structural RNAs. Good results have been reported for discriminative statistical methods using complex nearest-neighbor models, including CONTRAfold, Simfold, and ContextFold. Little work has been reported on generative probabilistic models (stochastic context-free grammars [SCFGs]) of comparable complexity, although probabilistic models are generally easier to train and to use. To explore a range of probabilistic models of increasing complexity, and to directly compare probabilistic, thermodynamic, and discriminative approaches, we created TORNADO, a computational tool that can parse a wide spectrum of RNA grammar architectures (including the standard nearest-neighbor model and more) using a generalized super-grammar that can be parameterized with probabilities, energies, or arbitrary scores. By using TORNADO, we find that probabilistic nearest-neighbor models perform comparably to (but not significantly better than) discriminative methods. We find that complex statistical models are prone to overfitting RNA structure and that evaluations should use structurally nonhomologous training and test data sets. Overfitting has affected at least one published method (ContextFold). The most important barrier to improving statistical approaches for RNA secondary structure prediction is the lack of diversity of well-curated single-sequence RNA secondary structures in current RNA databases. PMID:22194308

  6. Visualizing and modelling complex rockfall slopes using game-engine hosted models

    NASA Astrophysics Data System (ADS)

    Ondercin, Matthew; Hutchinson, D. Jean; Harrap, Rob

    2015-04-01

    Innovations in computing in the past few decades have resulted in entirely new ways to collect and visualize 3D geological data. For example, new tools and techniques relying on high-performance computing capabilities have become widely available, allowing us to model rockfalls with more attention to the complexity of the rock slope geometry and rockfall path, with significantly higher-quality base data, and with more analytical options. Model results are used to design mitigation solutions, considering the potential paths of the rockfall events and the energy they impart on impacted structures. Such models are currently implemented as general-purpose GIS tools and in specialized programs. These tools are used to inspect geometrical and geomechanical data, model rockfalls, and communicate results to researchers and the larger community. The research reported here explores the notion that 3D game engines provide a high-speed, widely accessible platform on which to build rockfall modelling workflows and to provide a new and accessible outreach method. Taking advantage of the built-in physics capability of the 3D game codes, and their ability to handle large terrains, these models are rapidly deployed and generate realistic visualizations of rockfall trajectories. Their utility in this area is as yet unproven, but preliminary research shows that they are capable of producing results that are comparable to existing approaches. Furthermore, modelling of case histories shows that the output matches the behaviour that is observed in the field. The key advantage of game-engine hosted models is their accessibility to the general public and to people with little to no knowledge of rockfall hazards. With much of the younger generation being very familiar with 3D environments such as Minecraft, the idea of a game-like simulation is intuitive and thus offers new ways to communicate with the general public. We present results from using the Unity game engine to develop 3D voxel worlds

  7. Per Aspera ad Astra: Through Complex Population Modeling to Predictive Theory.

    PubMed

    Topping, Christopher J; Alrøe, Hugo Fjelsted; Farrell, Katharine N; Grimm, Volker

    2015-11-01

    Population models in ecology are often not good at predictions, even if they are complex and seem to be realistic enough. The reason for this might be that Occam's razor, which is key for minimal models exploring ideas and concepts, has been too uncritically adopted for more realistic models of systems. This can tie models too closely to certain situations, thereby preventing them from predicting the response to new conditions. We therefore advocate a new kind of parsimony to improve the application of Occam's razor. This new parsimony balances two contrasting strategies for avoiding errors in modeling: avoiding inclusion of nonessential factors (false inclusions) and avoiding exclusion of sometimes-important factors (false exclusions). It involves a synthesis of traditional modeling and analysis, used to describe the essentials of mechanistic relationships, with elements that are included in a model because they have been reported to be or can arguably be assumed to be important under certain conditions. The resulting models should be able to reflect how the internal organization of populations change and thereby generate representations of the novel behavior necessary for complex predictions, including regime shifts.

  8. Model development and validation of geometrically complex eddy current coils using finite element methods

    NASA Astrophysics Data System (ADS)

    Brown, Alexander; Eviston, Connor

    2017-02-01

    Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations including varying probe dimensions and orientations along with complex probe geometries. This will also enable the creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models correctly captured the probe-conductor interactions and accurately calculated the change in impedance in several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library, giving experimenters easy access to powerful parameter-based eddy current models for alternate project applications.

  9. Knowledge-based grouping of modeled HLA peptide complexes.

    PubMed

    Kangueane, P; Sakharkar, M K; Lim, K S; Hao, H; Lin, K; Chee, R E; Kolatkar, P R

    2000-05-01

    Human leukocyte antigens are the most polymorphic of human genes and multiple sequence alignment shows that such polymorphisms are clustered in the functional peptide binding domains. Because of such polymorphism among the peptide binding residues, the prediction of peptides that bind to specific HLA molecules is very difficult. In recent years two different types of computer based prediction methods have been developed and both the methods have their own advantages and disadvantages. The nonavailability of allele specific binding data restricts the use of knowledge-based prediction methods for a wide range of HLA alleles. Alternatively, the modeling scheme appears to be a promising predictive tool for the selection of peptides that bind to specific HLA molecules. The scoring of the modeled HLA-peptide complexes is a major concern. The use of knowledge based rules (van der Waals clashes and solvent exposed hydrophobic residues) to distinguish binders from nonbinders is applied in the present study. The rules based on (1) number of observed atomic clashes between the modeled peptide and the HLA structure, and (2) number of solvent exposed hydrophobic residues on the modeled peptide effectively discriminate experimentally known binders from poor/nonbinders. Solved crystal complexes show no vdW Clash (vdWC) in 95% cases and no solvent exposed hydrophobic peptide residues (SEHPR) were seen in 86% cases. In our attempt to compare experimental binding data with the predicted scores by this scoring scheme, 77% of the peptides are correctly grouped as good binders with a sensitivity of 71%.
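
    The two knowledge-based rules translate directly into code; the thresholds below follow the abstract, while the record format is invented for illustration.

```python
# Direct encoding of the two knowledge-based rules described above: zero
# van der Waals clashes and zero solvent-exposed hydrophobic peptide residues
# group a modeled complex as a likely binder. Record format is hypothetical.
def classify_binder(vdw_clashes, exposed_hydrophobics):
    """Group a modeled HLA-peptide complex as a likely binder or not."""
    if vdw_clashes == 0 and exposed_hydrophobics == 0:
        return "good binder"
    return "poor/non-binder"

complexes = [("pep1", 0, 0), ("pep2", 2, 0), ("pep3", 0, 1)]
for name, clashes, sehpr in complexes:
    print(name, classify_binder(clashes, sehpr))
```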

  10. Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models

    NASA Technical Reports Server (NTRS)

    Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.

    1996-01-01

    An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.

  11. The Model of Complex Structure of Quark

    NASA Astrophysics Data System (ADS)

    Liu, Rongwu

    2017-09-01

    In Quantum Chromodynamics, the quark is known as a kind of point-like fundamental particle which carries mass, charge, color and flavor; strong interaction takes place between quarks by means of exchanging intermediate particles (gluons). An important consequence of this theory is that strong interaction is a short-range force with the features of ``asymptotic freedom'' and ``quark confinement''. In order to reveal the nature of strong interaction, the ``bag'' model of vacuum and the ``string'' model of string theory were proposed in the context of quantum mechanics, but neither can provide a clear interaction mechanism. This article formulates a new mechanism by proposing a model of the complex structure of the quark, outlined as follows: (1) The quark (as well as the electron, etc.) is a kind of complex structure composed of a fundamental particle (fundamental matter: mass and electricity) and a fundamental volume field (fundamental matter: flavor and color) which exists in the form of a limited volume; the fundamental particle lies in the center of the fundamental volume field and forms the ``nucleus'' of the quark. (2) Like the static electric force, the color field force between quarks has a classical form: it is proportional to the square of the color quantity carried by each color field, and inversely proportional to the area of the cross section of the overlapping color fields along the force direction; it has the properties of overlap, saturation, non-centrality and constancy. (3) Any volume field undergoes deformation when interacting with another volume field; the deformation force follows Hooke's law. (4) The phenomena of ``asymptotic freedom'' and ``quark confinement'' are the result of the color field force and the deformation force.

  12. Lattice Boltzmann Modeling of Complex Flows for Engineering Applications

    NASA Astrophysics Data System (ADS)

    Montessori, Andrea; Falcucci, Giacomo

    2018-01-01

    Nature continuously presents a huge number of complex and multiscale phenomena, which in many cases, involve the presence of one or more fluids flowing, merging and evolving around us. Since the very first years of the third millennium, the Lattice Boltzmann method (LB) has seen an exponential growth of applications, especially in the fields connected with the simulation of complex and soft matter flows. LB, in fact, has shown a remarkable versatility in different fields of applications from nanoactive materials, free surface flows, and multiphase and reactive flows to the simulation of the processes inside engines and fluid machinery. In this book, the authors present the most recent advances of the application of the LB to complex flow phenomena of scientific and technical interest with focus on the multiscale modeling of heterogeneous catalysis within nano-porous media and multiphase, multicomponent flows.
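
    As a generic illustration of the method (far simpler than the multiphase and reactive applications covered in the book), a bare-bones D2Q9 lattice Boltzmann step with BGK collision and periodic streaming might look as follows.

```python
# Bare-bones D2Q9 lattice Boltzmann update (BGK collision + streaming) on a
# periodic grid, relaxing a density bump: a generic LB illustration only.
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
nx, ny, tau = 64, 64, 0.8

# initialize at rest with a density bump in the center
rho = np.ones((nx, ny))
rho[24:40, 24:40] = 1.05
f = w[:, None, None] * rho[None, :, :]       # equilibrium at zero velocity

for _ in range(200):
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    usq = ux**2 + uy**2
    for i in range(9):
        cu = c[i, 0] * ux + c[i, 1] * uy
        feq = w[i] * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)
        f[i] += -(f[i] - feq) / tau                               # collision
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=0), c[i, 1], axis=1)  # stream

print(rho.sum())   # total mass is conserved
```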

  13. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    EPA Science Inventory

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  14. Modeling complex aquifer systems: a case study in Baton Rouge, Louisiana (USA)

    NASA Astrophysics Data System (ADS)

    Pham, Hai V.; Tsai, Frank T.-C.

    2017-05-01

    This study targets two challenges in groundwater model development: grid generation and model calibration for aquifer systems that are fluvial in origin. Realistic hydrostratigraphy can be developed using a large quantity of well log data to capture the complexity of an aquifer system. However, generating valid groundwater model grids consistent with the complex hydrostratigraphy is non-trivial, and model calibration can become intractable for groundwater models that intend to match the complex hydrostratigraphy. This study uses the Baton Rouge aquifer system, Louisiana (USA), to illustrate a technical need to cope with grid generation and model calibration issues. A grid generation technique is introduced based on indicator kriging to interpolate 583 wireline well logs in the Baton Rouge area and derive a hydrostratigraphic architecture with fine vertical discretization. Then, an upscaling procedure is developed to determine a groundwater model structure with 162 layers that captures facies geometry in the hydrostratigraphic architecture. To handle model calibration for such a large model, this study utilizes a derivative-free optimization method in parallel computing to complete parameter estimation in a few months. The constructed hydrostratigraphy indicates the Baton Rouge aquifer system is fluvial in origin. The calibration result indicates that hydraulic conductivity for the Miocene sands is higher than that for the Pliocene to Holocene sands, and that the Baton Rouge fault and the Denham Springs-Scotlandville fault act as low-permeability, leaky barriers to flow. The modeling result shows significantly lowered groundwater levels in the "2,000-foot" sand due to heavy pumping, indicating potential upward groundwater flow from the "2,400-foot" sand.
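
    The calibration strategy described here, a derivative-free search whose objective evaluations run in parallel, can be sketched generically with SciPy's differential evolution. The stand-in run_model function and its three parameters are hypothetical placeholders, not the 162-layer Baton Rouge model.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
X = rng.random((50, 3))        # synthetic per-observation predictors

def run_model(params, X=X):
    """Hypothetical stand-in for a groundwater model run: maps a
    parameter vector (e.g., zone conductivities) to simulated heads."""
    k_miocene, k_pliocene, leakance = params
    return k_miocene*X[:, 0] + k_pliocene*X[:, 1] + leakance*X[:, 2]

heads_obs = run_model([3.0, 1.2, 0.1]) + rng.normal(0, 0.05, 50)

def rmse(params):
    return np.sqrt(np.mean((run_model(params) - heads_obs)**2))

# Derivative-free, population-based search; workers=-1 spreads the
# objective evaluations over all cores, mirroring the parallel strategy.
result = differential_evolution(rmse, bounds=[(0, 10), (0, 10), (0, 1)],
                                workers=-1, seed=1, polish=False)
print(result.x, result.fun)
```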

  15. Modeling complex treatment strategies: construction and validation of a discrete event simulation model for glaucoma.

    PubMed

    van Gestel, Aukje; Severens, Johan L; Webers, Carroll A B; Beckers, Henny J M; Jansonius, Nomdo M; Schouten, Jan S A G

    2010-01-01

    Discrete event simulation (DES) modeling has several advantages over simpler modeling techniques in health economics, such as increased flexibility and the ability to model complex systems. Nevertheless, these benefits may come at the cost of reduced transparency, which may compromise the model's face validity and credibility. We aimed to produce a transparent report on the construction and validation of a DES model using a recently developed model of ocular hypertension and glaucoma. Current evidence of associations between prognostic factors and disease progression in ocular hypertension and glaucoma was translated into DES model elements. The model was extended to simulate treatment decisions and effects. Utility and costs were linked to disease status and treatment, and clinical and health economic outcomes were defined. The model was validated at several levels. The soundness of design and the plausibility of input estimates were evaluated in interdisciplinary meetings (face validity). Individual patients were traced throughout the simulation under a multitude of model settings to debug the model, and the model was run with a variety of extreme scenarios to compare the outcomes with prior expectations (internal validity). Finally, several intermediate (clinical) outcomes of the model were compared with those observed in experimental or observational studies (external validity), and the feasibility of evaluating hypothetical treatment strategies was tested. The model performed well in all validity tests. Analyses of hypothetical treatment strategies took about 30 minutes per cohort and led to plausible health-economic outcomes. DES models add value in the evaluation of complex treatment strategies such as those for glaucoma. Achieving transparency in model structure and outcomes may require some effort in reporting and validating the model, but it is feasible.
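
    For readers unfamiliar with the technique, the event-queue skeleton that underlies DES models fits in a few lines. The progression rates and treatment effect below are invented for illustration and bear no relation to the glaucoma model's actual inputs.

```python
import heapq
import random

random.seed(42)

def toy_des(n_patients=1000, horizon_years=10.0, base_rate=0.2):
    """Each simulated patient experiences exponentially distributed
    progression events; treatment halves the event rate after each
    event. Returns the mean number of progression steps per patient."""
    total_steps = 0
    for _ in range(n_patients):
        events = []                                # (time, current rate)
        heapq.heappush(events, (random.expovariate(base_rate), base_rate))
        while events:
            t, rate = heapq.heappop(events)
            if t > horizon_years:
                break
            total_steps += 1
            rate *= 0.5                            # treatment effect
            heapq.heappush(events, (t + random.expovariate(rate), rate))
    return total_steps / n_patients

print(toy_des())
```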

  16. A density-based clustering model for community detection in complex networks

    NASA Astrophysics Data System (ADS)

    Zhao, Xiang; Li, Yantao; Qu, Zehui

    2018-04-01

    Network clustering (or graph partitioning) is an important technique for uncovering the underlying community structures in complex networks, which has been widely applied in various fields including astronomy, bioinformatics, sociology, and bibliometrics. In this paper, we propose a density-based clustering model for community detection in complex networks (DCCN). The key idea is to find group centers with a higher density than their neighbors and a relatively large integrated-distance from nodes with higher density. The experimental results indicate that our approach is efficient and effective for community detection of complex networks.
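
    The key idea, scoring nodes by local density and by the distance to the nearest denser node, can be sketched as follows. Using degree as the density proxy and shortest-path length as the distance is an assumption for illustration, not necessarily the DCCN definition of integrated-distance.

```python
import numpy as np
import networkx as nx

def density_peak_centers(G, n_centers=2):
    """Rank nodes by rho * delta, where rho is local density (here,
    degree) and delta is the shortest-path distance to the nearest
    node of higher density; group centers score high on both."""
    nodes = list(G.nodes())
    rho = np.array([G.degree(v) for v in nodes], dtype=float)
    dist = dict(nx.all_pairs_shortest_path_length(G))
    delta = np.empty(len(nodes))
    for i, v in enumerate(nodes):
        higher = [dist[v][u] for j, u in enumerate(nodes)
                  if rho[j] > rho[i] and u in dist[v]]
        delta[i] = min(higher) if higher else max(dist[v].values())
    score = rho * delta
    return [nodes[i] for i in np.argsort(-score)[:n_centers]]

print(density_peak_centers(nx.karate_club_graph()))   # the two hubs
```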

  17. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification and an appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
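
    As a baseline for the local analysis that the authors argue is insufficient on its own, the sketch below builds a numerical Fisher information matrix and reads locally unidentifiable directions from its near-zero eigenvalues. The toy model, in which only the product of the first two parameters is identifiable, is illustrative.

```python
import numpy as np

def fisher_information(model, theta, t, eps=1e-6):
    """J^T J for unit measurement noise, with J the numerical
    sensitivity of the model output to each parameter."""
    y0 = model(theta, t)
    J = np.empty((len(y0), len(theta)))
    for k in range(len(theta)):
        d = np.zeros_like(theta)
        d[k] = eps
        J[:, k] = (model(theta + d, t) - y0) / eps
    return J.T @ J

# Only the product theta[0]*theta[1] is identifiable in this model
model = lambda th, t: th[0] * th[1] * np.exp(-th[2] * t)
t = np.linspace(0, 5, 50)
fim = fisher_information(model, np.array([2.0, 3.0, 0.5]), t)
evals, evecs = np.linalg.eigh(fim)
print(evals)         # one eigenvalue is (numerically) zero
print(evecs[:, 0])   # its eigenvector spans the unidentifiable direction
```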

  18. Inference, simulation, modeling, and analysis of complex networks, with special emphasis on complex networks in systems biology

    NASA Astrophysics Data System (ADS)

    Christensen, Claire Petra

    Across diverse fields ranging from physics to biology, sociology, and economics, the technological advances of the past decade have engendered an unprecedented explosion of data on highly complex systems with thousands, if not millions of interacting components. These systems exist at many scales of size and complexity, and it is becoming ever-more apparent that they are, in fact, universal, arising in every field of study. Moreover, they share fundamental properties---chief among these, that the individual interactions of their constituent parts may be well-understood, but the characteristic behaviour produced by the confluence of these interactions---by these complex networks---is unpredictable; in a nutshell, the whole is more than the sum of its parts. There is, perhaps, no better illustration of this concept than the discoveries being made regarding complex networks in the biological sciences. In particular, though the sequencing of the human genome in 2003 was a remarkable feat, scientists understand that the "cellular-level blueprints" for the human being are cellular-level parts lists, but they say nothing (explicitly) about cellular-level processes. The challenge of modern molecular biology is to understand these processes in terms of the networks of parts---in terms of the interactions among proteins, enzymes, genes, and metabolites---as it is these processes that ultimately differentiate animate from inanimate, giving rise to life! It is the goal of systems biology---an umbrella field encapsulating everything from molecular biology to epidemiology in social systems---to understand processes in terms of fundamental networks of core biological parts, be they proteins or people. By virtue of the fact that there are literally countless complex systems, not to mention tools and techniques used to infer, simulate, analyze, and model these systems, it is impossible to give a truly comprehensive account of the history and study of complex systems. The author

  19. Complex Constructivism: A Theoretical Model of Complexity and Cognition

    ERIC Educational Resources Information Center

    Doolittle, Peter E.

    2014-01-01

    Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

  20. Formative feedback and scaffolding for developing complex problem solving and modelling outcomes

    NASA Astrophysics Data System (ADS)

    Frank, Brian; Simper, Natalie; Kaupp, James

    2018-07-01

    This paper discusses the use and impact of formative feedback and scaffolding to develop outcomes for complex problem solving in a required first-year course in engineering design and practice at a medium-sized research-intensive Canadian university. In 2010, the course began to use team-based, complex, open-ended contextualised problems to develop problem solving, communications, teamwork, modelling, and professional skills. Since then, formative feedback has been incorporated into: task and process-level feedback on scaffolded tasks in-class, formative assignments, and post-assignment review. Development in complex problem solving and modelling has been assessed through analysis of responses from student surveys, direct criterion-referenced assessment of course outcomes from 2013 to 2015, and an external longitudinal study. The findings suggest that students are improving in outcomes related to complex problem solving over the duration of the course. Most notably, the addition of new feedback and scaffolding coincided with improved student performance.

  1. Development of structural model of adaptive training complex in ergatic systems for professional use

    NASA Astrophysics Data System (ADS)

    Obukhov, A. D.; Dedov, D. L.; Arkhipov, A. E.

    2018-03-01

    The article considers the structural model of an adaptive training complex (ATC), which reflects the interrelations between the hardware, the software, and the mathematical model of the ATC and describes the processes in this subject area. A description of the main components of the software and hardware complex, their interaction, and their functioning within the common system is given. The article also briefly describes the mathematical models of personnel activity, the technical system, and external influences, whose interactions formalize the regularities of ATC functioning. Studying the main objects of training complexes and the connections between them will make practical implementation of ATCs in ergatic systems for professional use possible.

  2. Contingency Detection in a Complex World: A Developmental Model and Implications for Atypical Development

    ERIC Educational Resources Information Center

    Northrup, Jessie Bolz

    2017-01-01

    The present article proposes a new developmental model of how young infants adapt and respond to complex contingencies in their environment, and how this influences development. The model proposes that typically developing infants adjust to an increasingly complex environment in ways that make it easier for them to allocate limited attentional…

  3. Process Consistency in Models: the Importance of System Signatures, Expert Knowledge and Process Complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Gascuel-Odoux, Chantal; Savenije, Hubert

    2014-05-01

    Hydrological models are frequently characterized by what is often considered to be adequate calibration performance. In many cases, however, these models experience a substantial uncertainty and performance decrease in validation periods, resulting in poor predictive power. Besides the likely presence of data errors, this observation can point towards wrong or insufficient representations of the underlying processes and their heterogeneity. In other words, the right results are generated for the wrong reasons. Ways are therefore sought to increase model consistency and to thereby satisfy the contrasting priorities of (a) increasing model complexity and (b) limiting model equifinality. In this study a stepwise model development approach is chosen to test the value of an exhaustive and systematic combined use of hydrological signatures, expert knowledge and readily available, yet anecdotal and rarely exploited, hydrological information for increasing model consistency towards generating the right answer for the right reasons. A simple 3-box, 7-parameter, conceptual HBV-type model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph with comparatively high values for the 4 objective functions in the 5-year calibration period. However, closer inspection of the results showed a dramatic decrease of model performance in the 5-year validation period. In addition, assessing the model's skill at reproducing a range of 20 hydrological signatures, including, amongst others, the flow duration curve, the autocorrelation function and the rising limb density, showed that it could not adequately reproduce the vast majority of these signatures, indicating a lack of model consistency. Subsequently, model complexity was increased in a stepwise way to allow for more process heterogeneity. To limit model equifinality, the increase in complexity was counter-balanced by a stepwise application of "realism constraints", inferred from expert
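
    Two of the signatures named above have simple, standard computations; a sketch using Weibull plotting positions for the flow duration curve and the sample autocorrelation, applied to synthetic flows:

```python
import numpy as np

def flow_duration_curve(q):
    """Flow sorted in decreasing order against its exceedance
    probability (Weibull plotting positions)."""
    q_sorted = np.sort(q)[::-1]
    exceedance = np.arange(1, len(q) + 1) / (len(q) + 1)
    return exceedance, q_sorted

def autocorrelation(q, lag=1):
    """Lag-k sample autocorrelation of the flow series."""
    q = q - q.mean()
    return np.sum(q[:-lag] * q[lag:]) / np.sum(q * q)

rng = np.random.default_rng(0)
q = np.exp(rng.normal(0, 1, 365))        # synthetic daily flows
p, fdc = flow_duration_curve(q)
print(autocorrelation(q, lag=1))
```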

  4. Surface complexation model of uranyl sorption on Georgia kaolinite

    USGS Publications Warehouse

    Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.

    2004-01-01

    The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 ??mol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite. ?? 2004 Elsevier B.V. All rights reserved.

  5. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
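
    The core operation, filtering parameterizations down to the non-dominated (Pareto-optimal) set over several criteria, can be sketched in a few lines; this is a generic O(n^2) filter, not the authors' code.

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated rows, with every criterion minimized
    (e.g., per-output error of a simulation run against data)."""
    n = len(scores)
    front = []
    for i in range(n):
        dominated = any(np.all(scores[j] <= scores[i]) and
                        np.any(scores[j] < scores[i])
                        for j in range(n) if j != i)
        if not dominated:
            front.append(i)
    return front

rng = np.random.default_rng(0)
errors = rng.random((50, 3))      # 50 parameterizations, 3 criteria
print(pareto_front(errors))       # candidate trade-off solutions
```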

  6. Modelling Complex Fenestration Systems using physical and virtual models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanachareonkit, Anothai; Scartezzini, Jean-Louis

    2010-04-15

    Physical or virtual models are commonly used to visualize the conceptual ideas of architects, lighting designers and researchers; they are also employed to assess the daylighting performance of buildings, particularly in cases where Complex Fenestration Systems (CFS) are considered. Recent studies have however revealed a general tendency of physical models to over-estimate this performance, compared to those of real buildings; these discrepancies can be attributed to several reasons. In order to identify the main error sources, a series of comparisons in-between a real building (a single office room within a test module) and the corresponding physical and virtual models was undertaken. The physical model was placed in outdoor conditions, which were strictly identical to those of the real building, as well as underneath a scanning sky simulator. The virtual model simulations were carried out by way of the Radiance program using the GenSky function; an alternative evaluation method, named Partial Daylight Factor method (PDF method), was also employed with the physical model together with sky luminance distributions acquired by a digital sky scanner during the monitoring of the real building. The overall daylighting performance of physical and virtual models were assessed and compared. The causes of discrepancies between the daylighting performance of the real building and the models were analysed. The main identified sources of errors are the reproduction of building details, the CFS modelling and the mocking-up of the geometrical and photometrical properties. To study the impact of these errors on daylighting performance assessment, computer simulation models created using the Radiance program were also used to carry out a sensitivity analysis of modelling errors. The study of the models showed that large discrepancies can occur in daylighting performance assessment. In case of improper mocking-up of the glazing for instance, relative divergences of 25

  7. Modeling ultrasound propagation through material of increasing geometrical complexity.

    PubMed

    Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen

    2018-06-01

    Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
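
    Wiener deconvolution itself is standard: in the frequency domain the estimate is X = Y conj(H) / (|H|^2 + 1/SNR). A sketch with a toy pulse and two reflectors follows; the pulse shape and SNR are arbitrary stand-ins, not the NDT transducer characteristics.

```python
import numpy as np

def wiener_deconvolve(y, h, snr=100.0):
    """Recover x from y = h * x + noise; the 1/snr term regularizes
    the division where |H| is small."""
    n = len(y)
    H = np.fft.fft(h, n)
    Y = np.fft.fft(y, n)
    X = Y * np.conj(H) / (np.abs(H)**2 + 1.0/snr)
    return np.real(np.fft.ifft(X))

rng = np.random.default_rng(0)
x = np.zeros(256)
x[40], x[90] = 1.0, 0.6                             # two reflectors
h = np.exp(-0.5 * ((np.arange(64) - 32) / 4.0)**2)  # toy emitted pulse
y = np.convolve(x, h)[:256] + rng.normal(0, 0.01, 256)
x_hat = wiener_deconvolve(y, h)                     # peaks near 40 and 90
```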

  8. Predictive model of complexity in early palliative care: a cohort of advanced cancer patients (PALCOM study).

    PubMed

    Tuca, Albert; Gómez-Martínez, Mónica; Prat, Aleix

    2018-01-01

    The model of early palliative care (PC) integrated in oncology is based on shared care from diagnosis to the end of life and is mainly focused on patients with greater complexity. However, there is no definition of, or tools to evaluate, PC complexity. The objectives of the study were to identify the factors influencing the determination of the level of complexity, propose predictive models, and build a complexity scale for PC. We performed a prospective, observational, multicenter study in a cohort of advanced cancer patients with an estimated prognosis ≤ 6 months. An ad hoc structured evaluation including socio-demographic and clinical data, symptom burden, functional and cognitive status, psychosocial problems, and existential-ethical dilemmas was recorded systematically. According to this multidimensional evaluation, the investigators classified patients as having high, medium, or low palliative complexity, associated with the need for basic or specialized PC. Logistic regression was used to identify the variables influencing the determination of the level of PC complexity and to explore predictive models. We included 324 patients; 41% were classified as having high PC complexity and 42.9% as medium, both levels being associated with specialized PC. The variables influencing the determination of PC complexity were as follows: high symptom burden (OR 3.19, 95% CI 1.72-6.17), difficult pain (OR 2.81, 95% CI 1.64-4.9), functional status (OR 0.99, 95% CI 0.98-0.99), and social-ethical-existential risk factors (OR 3.11, 95% CI 1.73-5.77). Logistic analysis of these variables allowed the construction of a complexity model and structured scales (PALCOM 1 and 2) with high predictive value (AUC ROC 76%). This study provides a new model and tools to assess complexity in palliative care, which may be very useful for managing referral to specialized PC services and for agreeing on the intensity of their intervention in a model of early shared care integrated in oncology.
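
    The modeling step, a multivariable logistic regression whose exponentiated coefficients are the reported odds ratios, looks as follows in outline; the data are simulated here and the coefficients are hypothetical, chosen only to echo the direction of the reported ORs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 324
# Hypothetical stand-ins for the reported predictors
X = np.column_stack([
    rng.integers(0, 2, n),        # high symptom burden (0/1)
    rng.integers(0, 2, n),        # difficult pain (0/1)
    rng.normal(60, 20, n),        # functional status score
    rng.integers(0, 2, n),        # social-ethical-existential risk (0/1)
])
logit = 1.2*X[:, 0] + 1.0*X[:, 1] - 0.02*X[:, 2] + 1.1*X[:, 3] - 0.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 1 = high PC complexity

clf = LogisticRegression().fit(X, y)
print(np.exp(clf.coef_))   # odds ratios, analogous to the reported ORs
```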

  9. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Thus, linking laboratory experiments, which practical considerations limit to a few stressors and a few levels of those stressors, to real-world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
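
    The two-step recipe, randomly sampling the stressor hyperspace and then extracting a model of the interactions with an artificial neural network, can be sketched generically; the three "toxins" and their interactive response surface below are invented stand-ins for assay data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Step 1: random sample of the stressor hyperspace (3 toxins, 200 runs)
doses = rng.uniform(0, 1, size=(200, 3))
response = (doses[:, 0] * doses[:, 1]        # synergy between toxins 0 and 1
            + np.sin(3 * doses[:, 2])        # nonlinear single-toxin effect
            + rng.normal(0, 0.05, 200))      # assay noise

# Step 2: fit an ANN to capture the interaction structure
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(doses, response)
print(net.score(doses, response))            # R^2 of the fitted surface
```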

  10. Experimental determination and modeling of arsenic complexation with humic and fulvic acids.

    PubMed

    Fakour, Hoda; Lin, Tsair-Fuh

    2014-08-30

    The complexation of humic acid (HA) and fulvic acid (FA) with arsenic (As) in water was studied. Experimental results indicate that arsenic may form complexes with HA and FA, with a higher affinity for arsenate than for arsenite. In the presence of iron oxide based adsorbents, binding of arsenic to HA/FA in water was significantly suppressed, probably due to adsorption of As and HA/FA. A two-site ligand binding model, considering only strong and weak site types of binding affinity, was successfully developed to describe the complexation of arsenic on the two natural organic fractions. The model showed that the numbers of weak sites were more than 10 times those of strong sites on both HA and FA for both arsenic species studied. The numbers of both types of binding sites were found to be proportional to the HA concentrations, while the apparent stability constants, defined to describe the binding affinity between arsenic and the sites, are independent of the HA concentrations. To the best of our knowledge, this is the first study to characterize the impact of HA concentrations on the applicability of the ligand binding model, and to extrapolate the model to FA. The obtained results may give insight into the complexation of arsenic in HA/FA-laden groundwater and into the selection of more effective adsorption-based treatment methods for natural waters. Copyright © 2014 Elsevier B.V. All rights reserved.
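
    A two-site ligand binding model of the kind described is commonly written as a sum of strong-site and weak-site Langmuir terms, bound = Bs*Ks*C/(1+Ks*C) + Bw*Kw*C/(1+Kw*C). A fitting sketch on synthetic data follows; all numbers are illustrative, with the weak-site capacity set roughly ten times the strong-site capacity, echoing the reported ratio.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_site(conc, b_s, k_s, b_w, k_w):
    """Strong-site plus weak-site Langmuir binding isotherm."""
    return (b_s * k_s * conc / (1 + k_s * conc)
            + b_w * k_w * conc / (1 + k_w * conc))

conc = np.logspace(-2, 2, 20)                   # free As, arbitrary units
rng = np.random.default_rng(0)
bound = two_site(conc, 1.0, 5.0, 12.0, 0.05) * (1 + rng.normal(0, 0.03, 20))

popt, _ = curve_fit(two_site, conc, bound,
                    p0=[1, 1, 10, 0.01], bounds=(0, np.inf))
print(popt)    # recovered capacities and apparent stability constants
```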

  11. The Complexity of Developmental Predictions from Dual Process Models

    ERIC Educational Resources Information Center

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  12. Fischer and Schrock Carbene Complexes: A Molecular Modeling Exercise

    ERIC Educational Resources Information Center

    Montgomery, Craig D.

    2015-01-01

    An exercise in molecular modeling that demonstrates the distinctive features of Fischer and Schrock carbene complexes is presented. Semi-empirical calculations (PM3) demonstrate the singlet ground electronic state, restricted rotation about the C-Y bond, the positive charge on the carbon atom, and hence, the electrophilic nature of the Fischer…

  13. Development and evaluation of a musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Hutchins, E. L.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes the development and evaluation of a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. The length, velocity, and moment arm for each of the eight musculotendon actuators were based on skeletal anatomy and position. Musculotendon parameters were determined for each actuator and verified by comparing analytical torque-angle curves with experimental joint torque data. The parameters and skeletal geometry were also utilized in the musculoskeletal model for the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by parameterized optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing ballistic elbow joint complex movements.

  14. The Skilled Counselor Training Model: Skills Acquisition, Self-Assessment, and Cognitive Complexity

    ERIC Educational Resources Information Center

    Little, Cassandra; Packman, Jill; Smaby, Marlowe H.; Maddux, Cleborne D.

    2005-01-01

    The authors evaluated the effectiveness of the Skilled Counselor Training Model (SCTM; M. H. Smaby, C. D. Maddux, E. Torres-Rivera, & R. Zimmick, 1999) in teaching counseling skills and in fostering counselor cognitive complexity. Counselor trainees who completed the SCTM had better counseling skills and higher levels of cognitive complexity than…

  15. Simulating Complex, Cold-region Process Interactions Using a Multi-scale, Variable-complexity Hydrological Model

    NASA Astrophysics Data System (ADS)

    Marsh, C.; Pomeroy, J. W.; Wheater, H. S.

    2017-12-01

    Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated due to frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally-increased soil moisture, thus increasing plant growth that in turn impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long term process studies and the current cold regions literature allows for comparison of process representations and, importantly, of their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner should yield important insights and aid in the development of improved process representations.

  16. Computer-aided molecular modeling techniques for predicting the stability of drug cyclodextrin inclusion complexes in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Faucci, Maria Teresa; Melani, Fabrizio; Mura, Paola

    2002-06-01

    Molecular modeling was used to investigate factors influencing complex formation between cyclodextrins and guest molecules and to predict complex stability through a theoretical model based on the search for a correlation between experimental stability constants (Ks) and some theoretical parameters describing complexation (docking energy, host-guest contact surfaces, intermolecular interaction fields) calculated from complex structures at a conformational energy minimum, obtained through stochastic methods based on molecular dynamics simulations. Naproxen, ibuprofen, ketoprofen and ibuproxam were used as model drug molecules. Multiple regression analysis allowed identification of the significant factors for complex stability. A mathematical model (r = 0.897) related log Ks to the complex docking energy and the lipophilic molecular fields of cyclodextrin and drug.
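
    The statistical core, an ordinary multiple regression of log Ks on the computed descriptors, can be sketched as follows; the descriptor values and coefficients are synthetic, and only the correlation check mirrors the reported r = 0.897.

```python
import numpy as np

rng = np.random.default_rng(0)
e_dock = rng.uniform(-40, -10, 12)       # docking energies (synthetic)
f_lipo = rng.uniform(0, 30, 12)          # lipophilic field descriptor
log_ks = 2.0 - 0.08*e_dock + 0.05*f_lipo + rng.normal(0, 0.1, 12)

# Least-squares fit of log Ks on the two descriptors plus an intercept
A = np.column_stack([np.ones_like(e_dock), e_dock, f_lipo])
coef, *_ = np.linalg.lstsq(A, log_ks, rcond=None)
pred = A @ coef
print(coef, np.corrcoef(pred, log_ks)[0, 1])   # fitted model and its r
```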

  17. Comparison of new generation low-complexity flood inundation mapping tools with a hydrodynamic model

    NASA Astrophysics Data System (ADS)

    Afshari, Shahab; Tavakoly, Ahmad A.; Rajib, Mohammad Adnan; Zheng, Xing; Follum, Michael L.; Omranian, Ehsan; Fekete, Balázs M.

    2018-01-01

    The objective of this study is to compare two new generation low-complexity tools, AutoRoute and Height Above the Nearest Drainage (HAND), with a two-dimensional hydrodynamic model (Hydrologic Engineering Center-River Analysis System, HEC-RAS 2D). The assessment was conducted on two hydrologically different and geographically distant test-cases in the United States, including the 16,900 km2 Cedar River (CR) watershed in Iowa and a 62 km2 domain along the Black Warrior River (BWR) in Alabama. For BWR, twelve different configurations were set up for each of the models, including four different terrain setups (e.g. with and without channel bathymetry and a levee), and three flooding conditions representing moderate to extreme hazards at 10-, 100-, and 500-year return periods. For the CR watershed, models were compared with a simplistic terrain setup (without bathymetry and any form of hydraulic controls) and one flooding condition (100-year return period). Input streamflow forcing data representing these hypothetical events were constructed by applying a new fusion approach on National Water Model outputs. Simulated inundation extent and depth from AutoRoute, HAND, and HEC-RAS 2D were compared with one another and with the corresponding FEMA reference estimates. Irrespective of the configurations, the low-complexity models were able to produce inundation extents similar to HEC-RAS 2D, with AutoRoute showing slightly higher accuracy than the HAND model. Among four terrain setups, the one including both levee and channel bathymetry showed lowest fitness score on the spatial agreement of inundation extent, due to the weak physical representation of low-complexity models compared to a hydrodynamic model. For inundation depth, the low-complexity models showed an overestimating tendency, especially in the deeper segments of the channel. Based on such reasonably good prediction skills, low-complexity flood models can be considered as a suitable alternative for fast
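
    A common fitness score for the spatial agreement of two inundation extents is the intersection-over-union of the wet cells; whether this exact statistic was used here is an assumption, but the comparison logic is the same.

```python
import numpy as np

def fitness_score(model_wet, benchmark_wet):
    """F = |A intersect B| / |A union B| for boolean wet-cell rasters;
    1 means perfect overlap of the two extents."""
    a, b = model_wet.astype(bool), benchmark_wet.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

rng = np.random.default_rng(0)
ref = rng.random((100, 100)) < 0.3                       # reference extent
sim = np.logical_or(ref, rng.random((100, 100)) < 0.05)  # overpredicts a bit
print(fitness_score(sim, ref))
```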

  18. Comparing GWAS Results of Complex Traits Using Full Genetic Model and Additive Models for Revealing Genetic Architecture

    PubMed Central

    Monir, Md. Mamun; Zhu, Jun

    2017-01-01

    Most of the genome-wide association studies (GWASs) for human complex diseases have ignored dominance, epistasis and ethnic interactions. We conducted comparative GWASs for total cholesterol using a full genetic model and additive models, which illustrates the impact of ignoring these genetic variants on analysis results and demonstrates how the genetic effects of multiple loci can differ across ethnic groups. There were 15 quantitative trait loci, comprising 13 individual loci and 3 pairs of epistatic loci, identified by the full model, whereas only 14 loci (9 common loci and 5 different loci) were identified by the multi-locus additive model. Moreover, 4 loci detected by the full model were not detected by the multi-locus additive model. PLINK analysis identified two loci, and GCTA analysis detected only one locus, with genome-wide significance. The full model identified three previously reported genes as well as several new genes. Bioinformatics analysis showed that some of the new genes are related to cholesterol-related chemicals and/or diseases. Analyses of the cholesterol data and simulation studies revealed that the full model performed better than the additive models in terms of detection power and unbiased estimation of the genetic variants of complex traits. PMID:28079101
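
    The difference between the additive and full models comes down to the design matrix: additive coding alone versus additive plus dominance plus epistatic interaction columns. A synthetic sketch (effect sizes invented) in which the full model captures variance the additive model misses:

```python
import numpy as np

rng = np.random.default_rng(0)
g1, g2 = rng.integers(0, 3, (2, 500))      # two loci, genotypes 0/1/2

add1, add2 = g1 - 1, g2 - 1                # additive coding (-1, 0, 1)
dom1, dom2 = (g1 == 1)*1, (g2 == 1)*1      # dominance coding (0, 1, 0)
epi = add1 * add2                          # additive-by-additive epistasis
y = 0.5*add1 + 0.4*dom2 + 0.6*epi + rng.normal(0, 1, 500)

X_full = np.column_stack([np.ones(500), add1, dom1, add2, dom2, epi])
X_add = X_full[:, [0, 1, 3]]               # intercept + additive terms only
for X in (X_full, X_add):
    beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(1 - rss[0] / np.sum((y - y.mean())**2))   # R^2: full > additive
```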

  19. High-resolution Structures of Protein-Membrane Complexes by Neutron Reflection and MD Simulation: Membrane Association of the PTEN Tumor Suppressor

    NASA Astrophysics Data System (ADS)

    Lösche, Matthias

    2012-02-01

    The lipid matrix of biomembranes is an in-plane fluid, thermally and compositionally disordered leaflet of 5 nm thickness and notoriously difficult to characterize in structural terms. Yet, biomembranes are ubiquitous in the cell, and membrane-bound proteins are implicated in a variety of signaling pathways and intra-cellular transport. We developed methodology to study proteins associated with model membranes using neutron reflection measurements and showed recently that this approach can resolve the penetration depth and orientation of membrane proteins with ångstrom resolution if their crystal or NMR structure is known. Here we apply this technology to determine the membrane binding and unravel functional details of the PTEN phosphatase, a key player in the PI3K apoptosis pathway. PTEN is an important regulatory protein and tumor suppressor that performs its phosphatase activity as an interfacial enzyme at the plasma membrane-cytoplasm boundary. Acting as an antagonist to phosphoinositide-3-kinase (PI3K) in cell signaling, it is deleted in many human cancers. Despite its importance in regulating the levels of the phosphoinositide PI(3,4,5)P3, there is little understanding of how PTEN binds to membranes, is activated and then acts as a phosphatase. We investigated the structure and function of PTEN by studying its membrane affinity and localization on in-plane fluid, thermally disordered synthetic membrane models. The membrane association of the protein depends strongly on membrane composition, where phosphatidylserine (PS) and phosphatidylinositol diphosphate (PI(4,5)P2) act synergetically in attracting the enzyme to the membrane surface. Membrane affinities depend strongly on membrane fluidity, which suggests multiple binding sites on the protein for PI(4,5)P2. Neutron reflection measurements show that the PTEN phosphatase ``scoots'' along the membrane surface (penetration < 5 Å) but binds the membrane tightly with its two major domains, the C2 and

  20. KRISSY: user's guide to modeling three-dimensional wind flow in complex terrain

    Treesearch

    Michael A. Fosberg; Michael L. Sestak

    1986-01-01

    KRISSY is a computer model for generating three-dimensional wind flows in complex terrain from data that were not, or perhaps cannot be, collected. The model is written in FORTRAN IV. This guide describes data requirements, modeling, and output from an applications viewpoint rather than that of programming or theoretical modeling. KRISSY is designed to minimize...

  1. An Associational Model for the Diffusion of Complex Innovations.

    ERIC Educational Resources Information Center

    Barnett, George A.

    A paradigm for the study of the diffusion of complex innovations through a society is presented in this paper; the paradigm is useful for studying sociocultural change as innovations diffuse. The model is designed to account for change within social systems rather than in individuals, although it would also be consistent with information…

  2. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  3. Multiagent model and mean field theory of complex auction dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng

    2015-09-01

    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, a recently emerged class of online auction games. Through analyzing large, empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of the winner's attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
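
    A stylized simulation of the auction mechanism itself is straightforward; the exponentially decaying bidding propensity below is an assumption standing in for the paper's attractiveness-field rule.

```python
import numpy as np

def luba_round(n_agents=200, max_bid=100, beta=0.05, rng=None):
    """One lowest-unique-bid auction round: agents draw bids from a
    decaying propensity over the price grid; the lowest bid placed by
    exactly one agent wins."""
    rng = rng or np.random.default_rng()
    prices = np.arange(1, max_bid + 1)
    p = np.exp(-beta * prices)
    p /= p.sum()
    bids = rng.choice(prices, size=n_agents, p=p)
    counts = np.bincount(bids, minlength=max_bid + 1)
    unique = np.where(counts == 1)[0]
    return unique.min() if unique.size else None

rng = np.random.default_rng(0)
wins = [w for w in (luba_round(rng=rng) for _ in range(2000)) if w is not None]
print(np.mean(wins))      # typical winning bid under this propensity
```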

  4. Student Cognitive Difficulties and Mental Model Development of Complex Earth and Environmental Systems

    NASA Astrophysics Data System (ADS)

    Sell, K.; Herbert, B.; Schielack, J.

    2004-05-01

    Students organize scientific knowledge and reason about environmental issues through manipulation of mental models. The nature of the environmental sciences, which are focused on the study of complex, dynamic systems, may present cognitive difficulties to students in their development of authentic, accurate mental models of environmental systems. The inquiry project seeks to develop and assess the coupling of information technology (IT)-based learning with physical models in order to foster rich mental model development of environmental systems in geoscience undergraduate students. The manipulation of multiple representations, the development and testing of conceptual models based on available evidence, and exposure to authentic, complex and ill-constrained problems were the components of investigation utilized to reach the learning goals. Upper-level undergraduate students enrolled in an environmental geology course at Texas A&M University participated in this research which served as a pilot study. Data based on rubric evaluations interpreted by principal component analyses suggest students' understanding of the nature of scientific inquiry is limited and the ability to cross scales and link systems proved problematic. Results categorized into content knowledge and cognition processes where reasoning, critical thinking and cognitive load were driving factors behind difficulties in student learning. Student mental model development revealed multiple misconceptions and lacked complexity and completeness to represent the studied systems. Further, the positive learning impacts of the implemented modules favored the physical model over the IT-based learning projects, likely due to cognitive load issues. This study illustrates the need to better understand student difficulties in solving complex problems when using IT, where the appropriate scaffolding can then be implemented to enhance student learning of the earth system sciences.

  5. Applicability study of classical and contemporary models for effective complex permittivity of metal powders.

    PubMed

    Kiley, Erin M; Yakovlev, Vadim V; Ishizaki, Kotaro; Vaucher, Sebastien

    2012-01-01

    Microwave thermal processing of metal powders has recently been a topic of substantial interest; however, experimental data on the physical properties of mixtures involving metal particles are often unavailable. In this paper, we perform a systematic analysis of classical and contemporary models of complex permittivity of mixtures and discuss the use of these models for determining the effective permittivity of dielectric matrices with metal inclusions. Results from various mixture and core-shell mixture models are compared to experimental data for a titanium/stearic acid mixture and a boron nitride/graphite mixture (both obtained through original measurements), and for a tungsten/Teflon mixture (from the literature). We find that for certain experiments, the average error in determining the effective complex permittivity using Lichtenecker's, Maxwell Garnett's, Bruggeman's, Buchelnikov's, and Ignatenko's models is about 10%. This suggests that, for multiphysics computer models describing the processing of metal powder in the full temperature range, input data on effective complex permittivity obtained from direct measurement has, up to now, no substitute.
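
    Three of the named mixing rules have compact closed or implicit forms. A real-valued sketch (metal powders in practice require complex permittivity, and the example values for a titanium-like filler in stearic acid are purely illustrative):

```python
import numpy as np
from scipy.optimize import brentq

def lichtenecker(eps_i, eps_m, f):
    """Logarithmic rule: ln eps_eff = f ln eps_i + (1-f) ln eps_m."""
    return np.exp(f*np.log(eps_i) + (1 - f)*np.log(eps_m))

def maxwell_garnett(eps_i, eps_m, f):
    """Spherical inclusions eps_i at volume fraction f in host eps_m."""
    num = eps_i + 2*eps_m + 2*f*(eps_i - eps_m)
    den = eps_i + 2*eps_m - f*(eps_i - eps_m)
    return eps_m * num / den

def bruggeman(eps_i, eps_m, f):
    """Symmetric effective-medium rule, solved numerically."""
    g = lambda e: (f*(eps_i - e)/(eps_i + 2*e)
                   + (1 - f)*(eps_m - e)/(eps_m + 2*e))
    return brentq(g, min(eps_i, eps_m), max(eps_i, eps_m))

for rule in (lichtenecker, maxwell_garnett, bruggeman):
    print(rule.__name__, rule(50.0, 2.5, 0.3))
```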

  6. [Analysis of a three-dimensional finite element model of atlas and axis complex fracture].

    PubMed

    Tang, X M; Liu, C; Huang, K; Zhu, G T; Sun, H L; Dai, J; Tian, J W

    2018-05-22

    Objective: To explore the clinical application of a three-dimensional finite element model of atlantoaxial complex fracture. Methods: A three-dimensional finite element model of the cervical spine (FEM/intact) was established with the Abaqus 6.12 software. On the basis of this model, three-dimensional finite element models of four types of atlantoaxial complex fracture were established: C(1) fracture (Jefferson) + type Ⅱ odontoid fracture, Jefferson + type Ⅲ odontoid fracture, Jefferson + Hangman fracture, and Jefferson + stable C(2) fracture (FEM/fracture). The range of motion under flexion, extension, lateral bending and axial rotation was measured and compared with the intact cervical spine model. Results: The four three-dimensional finite element models of atlantoaxial complex fracture had similar profiles, and the range of motion (ROM) of the different segments changed to different degrees. Compared with the normal model, the ROM of C(0/1) and C(1/2) in the C(1) combined type Ⅱ odontoid fracture model in flexion/extension, lateral bending and rotation increased by 57.45%, 29.34%, 48.09% and 95.49%, 88.52%, 36.71%, respectively. The ROM of C(0/1) and C(1/2) in the C(1) combined type Ⅲ odontoid fracture model in flexion/extension, lateral bending and rotation increased by 47.01%, 27.30%, 45.31% and 90.38%, 27.30%, 30.0%, respectively. The ROM of C(0/1) and C(1/2) in the C(1) combined Hangman fracture model in flexion/extension, lateral bending and rotation increased by 32.68%, 79.34%, 77.62% and 60.53%, 81.20%, 21.48%, respectively. The ROM of C(0/1) and C(1/2) in the C(1) combined axis fracture model in flexion/extension, lateral bending and rotation increased by 15.00%, 29.30%, 8.47% and 37.87%, 75.57%, 8.30%, respectively. Conclusions: The three-dimensional finite element model can be used to simulate the biomechanics of atlantoaxial complex fracture. The ROM of the atlantoaxial complex fracture models is larger than that of the normal model, which indicates that surgical treatment should be performed.

  7. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
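
    The token-game semantics that such an analysis rests on fits in a few lines; the three-transition net below is a toy fork-join data-flow fragment, not the paper's architecture specification.

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume input tokens, produce output tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Two concurrent steps (t1, t2) feeding a synchronizing join
transitions = {
    "t1":   ({"in": 1}, {"a": 1}),
    "t2":   ({"in": 1}, {"b": 1}),
    "join": ({"a": 1, "b": 1}, {"out": 1}),
}
marking = {"in": 2, "a": 0, "b": 0, "out": 0}
fired = True
while fired:
    fired = False
    for name, (pre, post) in transitions.items():
        if enabled(marking, pre):
            marking = fire(marking, pre, post)
            fired = True
print(marking)   # {'in': 0, 'a': 0, 'b': 0, 'out': 1}
```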

  8. The development and application of composite complexity models and a relative complexity metric in a software maintenance environment

    NASA Technical Reports Server (NTRS)

    Hops, J. M.; Sherif, J. S.

    1994-01-01

    A great deal of effort is now being devoted to the study, analysis, prediction, and minimization of expected software maintenance cost, long before software is delivered to users or customers. It has been estimated that, on average, the effort spent on software maintenance is as costly as the effort spent on all other software costs. Software design methods should be the starting point for alleviating the problems of software maintenance complexity and high costs. Two aspects of maintenance deserve attention: (1) protocols for locating and rectifying defects, and for ensuring that no new defects are introduced in the development phase of the software process; and (2) protocols for modification, enhancement, and upgrading. This article focuses primarily on the second aspect, the development of protocols to help increase the quality and reduce the costs associated with modifications, enhancements, and upgrades of existing software. This study developed parsimonious models and a relative complexity metric for complexity measurement of software that were used to rank the modules in the system relative to one another. Some success was achieved in using the models and the relative metric to identify maintenance-prone modules.
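
    One simple way to build a relative complexity metric that ranks modules against one another is to standardize several raw measures and sum them; the per-module inputs below are hypothetical, not the paper's chosen model variables.

```python
import numpy as np

def relative_complexity(metrics):
    """Standardize each raw metric across modules (z-scores) and sum,
    yielding a score that is only meaningful relative to the system."""
    z = (metrics - metrics.mean(axis=0)) / metrics.std(axis=0)
    return z.sum(axis=1)

# Columns: lines of code, cyclomatic complexity, fan-out (illustrative)
metrics = np.array([[120.0,  8, 3],
                    [450.0, 25, 9],
                    [ 80.0,  4, 2],
                    [300.0, 18, 6]])
scores = relative_complexity(metrics)
print(np.argsort(-scores))   # modules ranked most maintenance-prone first
```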

  9. Modeling relations in nature and eco-informatics: a practical application of rosennean complexity.

    PubMed

    Kineman, John J

    2007-10-01

    The purpose of eco-informatics is to communicate critical information about organisms and ecosystems. To accomplish this, it must reflect the complexity of natural systems. Present information systems are designed around mechanistic concepts that do not capture complexity. Robert Rosen's relational theory offers a way of representing complexity in terms of information entailments that are part of an ontologically implicit 'modeling relation'. This relation has corresponding epistemological components that can be captured empirically, the components being structure (associated with model encoding) and function (associated with model decoding). Relational complexity, thus, provides a long-awaited theoretical underpinning for these concepts that ecology has found indispensable. Structural information pertains to the material organization of a system, which can be represented by data. Functional information specifies potential change, which can be inferred from experiment and represented as models or descriptions of state transformations. Contextual dependency (of structure or function) implies meaning. Biological functions imply internalized or system-dependent laws. Complexity can be represented epistemologically by relating structure and function in two different ways. One expresses the phenomenal relation that exists in any present or past instance, and the other draws the ontology of a system into the empirical world in terms of multiple potentials subject to natural forms of selection and optimality. These act as system attractors. Implementing these components and their theoretical relations in an informatics system will provide more-complete ecological informatics than is possible from a strictly mechanistic point of view. This approach will enable many new possibilities for supporting science and decision making.

  10. A mouse model of mitochondrial complex III dysfunction induced by myxothiazol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davoudi, Mina; Kallijärvi, Jukka; Marjavaara, Sanna

    2014-04-18

    Highlights: • Reversible chemical inhibition of complex III in wild type mouse. • Myxothiazol causes decreased complex III activity in mouse liver. • The model is useful for therapeutic trials to improve mitochondrial function. - Abstract: Myxothiazol is a respiratory chain complex III (CIII) inhibitor that binds to the ubiquinol oxidation site Qo of CIII. It blocks electron transfer from ubiquinol to cytochrome b and thus inhibits CIII activity. It has been utilized as a tool in studies of respiratory chain function in in vitro and cell culture models. We developed a mouse model of biochemically induced and reversible CIII inhibition using myxothiazol. We administered myxothiazol intraperitoneally at a dose of 0.56 mg/kg to C57Bl/J6 mice every 24 h and assessed CIII activity, histology, lipid content, supercomplex formation, and gene expression in the livers of the mice. A reversible CIII activity decrease to 50% of control value occurred at 2 h post-injection. At 74 h only minor histological changes in the liver were found, supercomplex formation was preserved and no significant changes in the expression of genes indicating hepatotoxicity or inflammation were found. Thus, myxothiazol-induced CIII inhibition can be induced in mice for four days in a row without overt hepatotoxicity or lethality. This model could be utilized in further studies of respiratory chain function and pharmacological approaches to mitochondrial hepatopathies.

  11. Atomic Resolution Modeling of the Ferredoxin:[FeFe] Hydrogenase Complex from Chlamydomonas reinhardtii

    PubMed Central

    Chang, Christopher H.; King, Paul W.; Ghirardi, Maria L.; Kim, Kwiseon

    2007-01-01

    The [FeFe] hydrogenases HydA1 and HydA2 in the green alga Chlamydomonas reinhardtii catalyze the final reaction in a remarkable metabolic pathway allowing this photosynthetic organism to produce H2 from water in the chloroplast. A [2Fe-2S] ferredoxin is a critical branch point in electron flow from Photosystem I toward a variety of metabolic fates, including proton reduction by hydrogenases. To better understand the binding determinants involved in ferredoxin:hydrogenase interactions, we have modeled Chlamydomonas PetF1 and HydA2 based on amino-acid sequence homology, and produced two promising electron-transfer model complexes by computational docking. To characterize these models, quantitative free energy calculations at atomic resolution were carried out, and detailed analysis of the interprotein interactions undertaken. The protein complex model we propose for ferredoxin:HydA2 interaction is energetically favored over the alternative candidate by 20 kcal/mol. This proposed model of the electron-transfer complex between PetF1 and HydA2 permits a more detailed view of the molecular events leading up to H2 evolution, and suggests potential mutagenic strategies to modulate electron flow to HydA2. PMID:17660315

  12. Atomic resolution modeling of the ferredoxin:[FeFe] hydrogenase complex from Chlamydomonas reinhardtii.

    PubMed

    Chang, Christopher H; King, Paul W; Ghirardi, Maria L; Kim, Kwiseon

    2007-11-01

    The [FeFe] hydrogenases HydA1 and HydA2 in the green alga Chlamydomonas reinhardtii catalyze the final reaction in a remarkable metabolic pathway allowing this photosynthetic organism to produce H(2) from water in the chloroplast. A [2Fe-2S] ferredoxin is a critical branch point in electron flow from Photosystem I toward a variety of metabolic fates, including proton reduction by hydrogenases. To better understand the binding determinants involved in ferredoxin:hydrogenase interactions, we have modeled Chlamydomonas PetF1 and HydA2 based on amino-acid sequence homology, and produced two promising electron-transfer model complexes by computational docking. To characterize these models, quantitative free energy calculations at atomic resolution were carried out, and detailed analysis of the interprotein interactions undertaken. The protein complex model we propose for ferredoxin:HydA2 interaction is energetically favored over the alternative candidate by 20 kcal/mol. This proposed model of the electron-transfer complex between PetF1 and HydA2 permits a more detailed view of the molecular events leading up to H(2) evolution, and suggests potential mutagenic strategies to modulate electron flow to HydA2.

  13. Rethinking the Psychogenic Model of Complex Regional Pain Syndrome: Somatoform Disorders and Complex Regional Pain Syndrome

    PubMed Central

    Hill, Renee J.; Chopra, Pradeep; Richardi, Toni

    2012-01-01

    Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338

  14. Modeling of the Human-Operator in a Complex System Functioning Under Extreme Conditions

    NASA Astrophysics Data System (ADS)

    Getzov, Peter; Hubenova, Zoia; Yordanov, Dimitar; Popov, Wiliam

    2013-12-01

    Problems related to the operation of sophisticated control systems under extreme conditions are examined, along with the impact of the operator's effectiveness on the system as a whole. The necessity of creating complex simulation models that reflect the operator's activity is discussed. The organizational and technical system of an unmanned aviation complex is described as a sophisticated ergatic system. A computer realization of the main subsystems of the algorithmic model of the human as a controlling system is implemented, and specialized software for data processing and analysis is developed. An original computer model of the human as a tracking system has been implemented. A model of an unmanned complex for operator training and for the formation of a mental model in emergency situations, implemented in the Matlab-Simulink environment, has been synthesized. As a unit of the control loop, the pilot (operator) is viewed in simplified form as an automatic control system consisting of three interconnected subsystems: sensory organs (perception sensors), the central nervous system, and executive organs (muscles of the arms, legs, and back). A theoretical data-driven model for predicting the operator's information load in ergatic systems is proposed; it allows assessment and prediction of the effectiveness of a real working operator. A simulation model of the operator's activity during takeoff, based on Petri nets, has been synthesized.

  15. A Simple Model for Complex Dynamical Transitions in Epidemics

    NASA Astrophysics Data System (ADS)

    Earn, David J. D.; Rohani, Pejman; Bolker, Benjamin M.; Grenfell, Bryan T.

    2000-01-01

    Dramatic changes in patterns of epidemics have been observed throughout this century. For childhood infectious diseases such as measles, the major transitions are between regular cycles and irregular, possibly chaotic epidemics, and from regionally synchronized oscillations to complex, spatially incoherent epidemics. A simple model can explain both kinds of transitions as the consequences of changes in birth and vaccination rates. Measles is a natural ecological system that exhibits different dynamical transitions at different times and places, yet all of these transitions can be predicted as bifurcations of a single nonlinear model.
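
    A minimal sketch of the mechanism invoked here, under the standard assumption (not the authors' exact formulation) that births and vaccination act through an effective susceptible recruitment rate B(1 - p) in a seasonally forced SIR model; changing either rate moves the system along the same bifurcation axis. All parameter values are illustrative.

    ```python
    import numpy as np

    def beta_t(t, beta0=500.0, amp=0.08):
        """Seasonally forced transmission rate (surrogate for school-term forcing)."""
        return beta0 * (1 + amp * np.cos(2 * np.pi * t))

    def run(p, B=0.02, mu=0.02, gamma=365 / 13, years=30, dt=1 / 3650):
        """SIR with per-capita birth rate B and vaccination coverage p; both act
        only through the effective recruitment B * (1 - p)."""
        S, I = 0.06, 1e-4
        for k in range(int(years / dt)):
            t = k * dt
            dS = B * (1 - p) - beta_t(t) * S * I - mu * S
            dI = beta_t(t) * S * I - (gamma + mu) * I
            S, I = S + dt * dS, I + dt * dI
        return I

    for p in (0.0, 0.3, 0.6):
        print(f"vaccination coverage {p:.1f}: infective fraction ~ {run(p):.2e}")
    ```

    Scanning p (or B) in such a model is how the transitions between regular and irregular epidemic regimes described above appear as bifurcations of a single nonlinear model.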

  16. Discovering Link Communities in Complex Networks by an Integer Programming Model and a Genetic Algorithm

    PubMed Central

    Li, Zhenping; Zhang, Xiang-Sun; Wang, Rui-Sheng; Liu, Hongwei; Zhang, Shihua

    2013-01-01

    Identification of communities in complex networks is an important topic and issue in many fields such as sociology, biology, and computer science. Communities are often defined as groups of related nodes or links that correspond to functional subunits in the corresponding complex systems. While most conventional approaches have focused on discovering communities of nodes, some recent studies have turned to partitioning links to find overlapping communities directly. In this paper, we propose a new quantity function for link community identification in complex networks. Based on this quantity function we formulate the link community partition problem as an integer programming model which allows us to partition a complex network into overlapping communities. We further propose a genetic algorithm for link community detection which can partition a network into overlapping communities without knowing the number of communities. We test our model and algorithm on both artificial networks and real-world networks. The results demonstrate that the model and algorithm are efficient in detecting overlapping community structure in complex networks. PMID:24386268
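
    The paper's own quantity function and integer program are not reproduced in the abstract; the sketch below substitutes partition density (Ahn et al.'s link-community quality measure) as a stand-in objective and uses a bare-bones genetic algorithm over link-to-community assignments with a fixed maximum number of labels k, unlike the authors' algorithm, which does not need the number of communities in advance.

    ```python
    import random
    from collections import defaultdict

    def partition_density(edges, assign):
        """Partition density of a link partition (Ahn et al. 2010);
        assign[i] is the community label of edges[i]."""
        groups = defaultdict(list)
        for e, c in zip(edges, assign):
            groups[c].append(e)
        M, D = len(edges), 0.0
        for links in groups.values():
            m = len(links)
            n = len({u for e in links for u in e})
            if n > 2:
                D += m * (m - n + 1) / ((n - 1) * (n - 2))
        return 2.0 * D / M

    def evolve(edges, k=4, pop_size=60, gens=200, pmut=0.05):
        """Tiny GA: individuals are link -> community assignments with <= k labels."""
        M = len(edges)
        pop = [[random.randrange(k) for _ in range(M)] for _ in range(pop_size)]
        for _ in range(gens):
            scored = sorted(pop, key=lambda a: partition_density(edges, a), reverse=True)
            elite = scored[: pop_size // 2]          # truncation selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = random.sample(elite, 2)
                child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
                for i in range(M):                   # point mutations
                    if random.random() < pmut:
                        child[i] = random.randrange(k)
                children.append(child)
            pop = elite + children
        best = max(pop, key=lambda a: partition_density(edges, a))
        return best, partition_density(edges, best)

    # Two triangles sharing node 2: their links should split into two communities.
    edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]
    best, score = evolve(edges)
    print(best, round(score, 3))
    ```

    On this toy graph the maximum partition density of 1.0 is reached when the links of each triangle form their own community; the shared node then belongs to both communities, which is exactly the overlap that link partitioning captures.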

  17. Effect of unsaturations on the physical properties of a model membrane with the highly polyunsaturated docosahexaenoic fatty acid

    NASA Astrophysics Data System (ADS)

    Saiz, Leonor; Klein, Michael L.

    2001-03-01

    Polyunsaturated fatty acids are an essential component of biomembranes. The docosahexaenoic fatty acid (DHA), in particular, is found in high concentrations in retinal and neuronal tissue and in the olfactory bulb. Furthermore, DHA-rich membranes are well known to modulate membrane protein function, in some situations by modifying the physical properties of the membrane. A particularly well studied case is the DHA effect on the activity of the visual receptor (protein) rhodopsin. Here, we study this type of complex system at a microscopic level under physiological conditions. In this way, we can probe the molecular origin of the peculiarities that the system confers to membranes. For this purpose, the structure of a fully hydrated mixed (saturated/polyunsaturated) chain lipid bilayer in the biologically relevant liquid crystalline phase has been examined by performing molecular dynamics simulations. The model membrane, a 1-stearoyl-2-docosahexaenoic-sn-glycero-3-phosphatidylcholine (18:0/22:6 PC) lipid bilayer, was investigated at room temperature and ambient pressure, and the results obtained on the nanosecond time scale were in good agreement with the available experimental data. Among the effects of the multiple unsaturations on the physical properties of these membranes, we focus on the enhanced permeability to water and small organic solvents, the decreased area compressibility modulus, and the domain formation and chain segregation.
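
    One quantity highlighted above, the area compressibility modulus, is commonly estimated from the equilibrium fluctuations of the projected bilayer area via K_A = k_B T <A> / <dA^2>. A minimal sketch, with a synthetic area series standing in for real trajectory output (the fluctuation amplitude is chosen to land near typical lipid-bilayer values):

    ```python
    import numpy as np

    kB = 1.380649e-23  # Boltzmann constant, J/K
    T = 298.15         # temperature, K

    # Synthetic stand-in for a per-frame projected-area time series (nm^2);
    # in practice this would be read from the MD trajectory.
    rng = np.random.default_rng(0)
    area_nm2 = 40.0 + 0.8 * rng.standard_normal(5000)

    A = area_nm2 * 1e-18                 # convert nm^2 -> m^2
    KA = kB * T * A.mean() / A.var()     # area compressibility modulus, N/m
    print(f"K_A ~ {KA * 1000:.0f} mN/m")
    ```

    A decreased K_A, as reported for the polyunsaturated bilayer, shows up directly as larger area fluctuations in such a series.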

  18. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  19. Teacher Stress: Complex Model Building with LISREL. Pedagogical Reports, No. 16.

    ERIC Educational Resources Information Center

    Tellenback, Sten

    This paper presents a complex causal model of teacher stress based on data received from the responses of 1,466 teachers from Malmo, Sweden to a questionnaire. Also presented is a method for treating the model variables as higher-order factors or higher-order theoretical constructs. The paper's introduction presents a brief review of teacher…

  20. A complex speciation–richness relationship in a simple neutral model

    PubMed Central

    Desjardins-Proulx, Philippe; Gravel, Dominique

    2012-01-01

    Speciation is the “elephant in the room” of community ecology. As the ultimate source of biodiversity, its integration in ecology's theoretical corpus is necessary to understand community assembly. Yet, speciation is often completely ignored or stripped of its spatial dimension. Recent approaches based on network theory have allowed ecologists to effectively model complex landscapes. In this study, we use this framework to model allopatric and parapatric speciation in networks of communities. We focus on the relationship between speciation, richness, and the spatial structure of communities. We find a strong opposition between speciation and local richness, with speciation being more common in isolated communities and local richness being higher in more connected communities. Unlike previous models, we also find a transition to a positive relationship between speciation and local richness when dispersal is low and the number of communities is small. We use several measures of centrality to characterize the effect of network structure on diversity. The degree, the simplest measure of centrality, is the best predictor of local richness and speciation, although it loses some of its predictive power as connectivity grows. Our framework shows how a simple neutral model can be combined with network theory to reveal complex relationships between speciation, richness, and the spatial organization of populations. PMID:22957181

  1. Spinning Q-balls in the complex signum-Gordon model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arodz, H.; Karkowski, J.; Swierczynski, Z.

    2009-09-15

    Rotational excitations of compact Q-balls in the complex signum-Gordon model in 2+1 dimensions are investigated. We find that almost all such spinning Q-balls have the form of a ring of strictly finite width. In the limit of large angular momentum M_z, their energy is proportional to |M_z|^{1/5}.

  2. Generalized model of electromigration with 1:1 (analyte:selector) complexation stoichiometry: part I. Theory.

    PubMed

    Dubský, Pavel; Müllerová, Ludmila; Dvořák, Martin; Gaš, Bohuslav

    2015-03-06

    The model of electromigration of a multivalent weak acidic/basic/amphoteric analyte that undergoes complexation with a mixture of selectors is introduced. The model provides an extension of the series of models starting with the single-selector model without dissociation by Wren and Rowe in 1992, continuing with the monovalent weak analyte/single-selector model by Rawjee, Williams and Vigh in 1993 and that by Lelièvre in 1994, and ending with the multi-selector overall model without dissociation developed by our group in 2008. The new multivalent analyte multi-selector model shows that the effective mobility of the analyte obeys the original Wren and Rowe formula. The overall complexation constant, the mobility of the free analyte and the mobility of the complex can be measured and used in the standard way. The mathematical expressions for the overall parameters are provided. We further demonstrate mathematically that the pH-dependent parameters for weak analytes can be simply used as an input into the multi-selector overall model and, in reverse, the multi-selector overall parameters can serve as an input into the pH-dependent models for the weak analytes. These findings can greatly simplify rational method development in analytical electrophoresis, specifically in enantioseparations. Copyright © 2015 Elsevier B.V. All rights reserved.
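
    For a single selector, the Wren and Rowe relation that the overall parameters are stated to obey has the closed form mu_eff = (mu_free + mu_complex * K * [C]) / (1 + K * [C]) for fast 1:1 complexation. A small sketch with invented parameter values:

    ```python
    def mu_eff(c_sel, mu_free, mu_complex, K):
        """Effective mobility of an analyte in fast 1:1 equilibrium with a
        selector at concentration c_sel (Wren & Rowe, 1992)."""
        return (mu_free + mu_complex * K * c_sel) / (1 + K * c_sel)

    # Illustrative values (not from the paper): mobilities in 1e-9 m^2/(V*s),
    # selector concentration in mM, binding constant in 1/mM.
    for c in (0.0, 1.0, 5.0, 20.0):
        print(c, round(mu_eff(c, mu_free=20.0, mu_complex=5.0, K=0.3), 2))
    ```

    The mobility interpolates from the free-analyte value at zero selector to the complex value at saturation; in the multi-selector, pH-dependent setting of the paper, the same functional form holds with the overall parameters substituted.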

  3. A Corticothalamic Circuit Model for Sound Identification in Complex Scenes

    PubMed Central

    Otazu, Gonzalo H.; Leibold, Christian

    2011-01-01

    The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
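
    A minimal sketch of the stated key assumption (units encoding observation minus internal estimate), not the authors' circuit: reconstruct an observed superposition from a fixed dictionary of source templates by gradient updates driven by the residual, with nonnegative activities. All names and values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    D = rng.standard_normal((50, 8))             # dictionary of 8 "auditory objects"
    D /= np.linalg.norm(D, axis=0)               # unit-norm templates

    a_true = np.zeros(8)
    a_true[[2, 5]] = [1.0, 0.7]                  # two concurrently active sources
    y = D @ a_true                               # observed superposition

    a = np.zeros(8)                              # internal estimate of source activities
    for _ in range(500):
        error = y - D @ a                        # "error units": observation - prediction
        a = np.maximum(0.0, a + 0.1 * (D.T @ error))   # nonnegative rate update
    print(np.round(a, 2))                        # recovers activity at indices 2 and 5
    ```

    The residual vector plays the role attributed above to some cortical neurons: it is large only while the internal estimate fails to explain the input, and decays as the correct sources are identified.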

  4. An egalitarian network model for the emergence of simple and complex cells in visual cortex

    PubMed Central

    Tao, Louis; Shelley, Michael; McLaughlin, David; Shapley, Robert

    2004-01-01

    We explain how simple and complex cells arise in a large-scale neuronal network model of the primary visual cortex of the macaque. Our model consists of ≈4,000 integrate-and-fire, conductance-based point neurons, representing the cells in a small, 1-mm2 patch of an input layer of the primary visual cortex. In the model the local connections are isotropic and nonspecific, and convergent input from the lateral geniculate nucleus confers cortical cells with orientation and spatial phase preference. The balance between lateral connections and lateral geniculate nucleus drive determines whether individual neurons in this recurrent circuit are simple or complex. The model reproduces qualitatively the experimentally observed distributions of both extracellular and intracellular measures of simple and complex response. PMID:14695891

  5. History matching of a complex epidemiological model of human immunodeficiency virus transmission by using variance emulation.

    PubMed

    Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G

    2017-08-01

    Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
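
    The engine of history matching is the implausibility measure, which discards inputs whose emulated output lies too far from the observation once all acknowledged variances are counted; in the stochastic variant described here, the model-variability term is itself emulated. A schematic sketch with invented numbers:

    ```python
    import numpy as np

    def implausibility(z, mu_em, var_em, var_model, var_obs):
        """History-matching implausibility: distance between observation z and
        emulator mean, scaled by all acknowledged variances. For stochastic
        simulators, var_model is itself predicted by a second (variance) emulator."""
        return np.abs(z - mu_em) / np.sqrt(var_em + var_model + var_obs)

    z = 0.31                                   # observed output (illustrative)
    x = np.linspace(0, 1, 11)                  # candidate input values
    mu_em = 0.5 * x                            # toy emulator mean
    var_em = 0.002 + 0.01 * (x - 0.5) ** 2     # emulator (code) uncertainty
    var_model = 0.004 * (1 + x)                # emulated stochastic variability
    var_obs = 0.001                            # observation error variance

    I = implausibility(z, mu_em, var_em, var_model, var_obs)
    print(x[I < 3.0])    # non-implausible inputs retained for the next wave
    ```

    Inputs failing the conventional cutoff of 3 are ruled out; successive waves of emulation and cutting shrink the 22-dimensional input space toward regions consistent with the 18 observed outputs.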

  6. Variable speed limit strategies analysis with mesoscopic traffic flow model based on complex networks

    NASA Astrophysics Data System (ADS)

    Li, Shu-Bin; Cao, Dan-Ni; Dang, Wen-Xiu; Zhang, Lin

    As a new cross-discipline, complexity science has penetrated every field of economy and society. With the arrival of big data, research in complexity science has reached a new peak. In recent years, it has offered a new perspective on traffic control through complex networks theory. The interactions among the various kinds of information in a traffic system form a huge complex system. A new mesoscopic traffic flow model incorporating variable speed limits (VSL) is proposed, and the simulation process is designed based on complex networks theory combined with the proposed model. This paper studies the effect of VSL on dynamic traffic flow and then analyzes the optimal VSL control strategy in different network topologies. The conclusions of this research are useful for putting forward reasonable transportation plans and developing effective traffic management and control measures to help the traffic management department.

  7. Virtual enterprise model for the electronic components business in the Nuclear Weapons Complex

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, T.J.; Long, K.S.; Sayre, J.A.

    1994-08-01

    The electronic components business within the Nuclear Weapons Complex spans organizational and Department of Energy contractor boundaries. An assessment of the current processes indicates a need for fundamentally changing the way electronic components are developed, procured, and manufactured. A model is provided based on a virtual enterprise that recognizes distinctive competencies within the Nuclear Weapons Complex and at the vendors. The model incorporates changes that reduce component delivery cycle time and improve cost effectiveness while delivering components of the appropriate quality.

  8. Some aspects of mathematical and chemical modeling of complex chemical processes

    NASA Technical Reports Server (NTRS)

    Nemes, I.; Botar, L.; Danoczy, E.; Vidoczy, T.; Gal, D.

    1983-01-01

    Some theoretical questions involved in the mathematical modeling of the kinetics of complex chemical processes are discussed. The analysis is carried out for the homogeneous oxidation of ethylbenzene in the liquid phase. Particular attention is given to the determination of the general characteristics of chemical systems from an analysis of mathematical models developed on the basis of linear algebra.

  9. Systematic coarse-grained modeling of complexation between small interfering RNA and polycations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Zonghui; Luijten, Erik, E-mail: luijten@northwestern.edu; Department of Materials Science and Engineering, Northwestern University, Evanston, Illinois 60208

    All-atom molecular dynamics simulations can provide insight into the properties of polymeric gene-delivery carriers by elucidating their interactions and detailed binding patterns with nucleic acids. However, to explore nanoparticle formation through complexation of these polymers and nucleic acids and study their behavior at experimentally relevant time and length scales, a reliable coarse-grained model is needed. Here, we systematically develop such a model for the complexation of small interfering RNA (siRNA) and grafted polyethyleneimine copolymers, a promising candidate for siRNA delivery. We compare the predictions of this model with all-atom simulations and demonstrate that it is capable of reproducing detailed binding patterns, charge characteristics, and water release kinetics. Since the coarse-grained model accelerates the simulations by one to two orders of magnitude, it will make it possible to quantitatively investigate nanoparticle formation involving multiple siRNA molecules and cationic copolymers.

  10. Balancing the stochastic description of uncertainties as a function of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.

    2016-12-01

    Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first-order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties can be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally expensive, or, with simpler models, it can separately account for input uncertainty as well.
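
    A minimal sketch of the first-order autoregressive output-error description mentioned above, assuming Gaussian innovations (all values illustrative, not a calibrated model):

    ```python
    import numpy as np

    def ar1_loglik(resid, phi, sigma):
        """Log-likelihood of model-output residuals under an AR(1) error model:
        e_t = phi * e_{t-1} + eta_t, with eta_t ~ N(0, sigma^2)."""
        e = np.asarray(resid)
        innov = e[1:] - phi * e[:-1]
        n = innov.size
        ll = -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(innov**2) / sigma**2
        var0 = sigma**2 / (1 - phi**2)           # stationary variance of e_0
        ll += -0.5 * np.log(2 * np.pi * var0) - 0.5 * e[0]**2 / var0
        return ll

    rng = np.random.default_rng(2)
    e = np.zeros(200)
    for t in range(1, 200):                      # synthetic correlated residuals
        e[t] = 0.8 * e[t - 1] + 0.1 * rng.standard_normal()

    # The correlated error model fits these residuals better than white noise
    print(ar1_loglik(e, phi=0.8, sigma=0.1) > ar1_loglik(e, phi=0.0, sigma=0.1))
    ```

    In a Bayesian calibration, phi and sigma would be inferred jointly with the hydrological parameters, which is what lets the prediction intervals absorb the "downstream" effect of input and structural errors.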

  11. Syntactic Complexity as an Aspect of Text Complexity

    ERIC Educational Resources Information Center

    Frantz, Roger S.; Starr, Laura E.; Bailey, Alison L.

    2015-01-01

    Students' ability to read complex texts is emphasized in the Common Core State Standards (CCSS) for English Language Arts and Literacy. The standards propose a three-part model for measuring text complexity. Although the model presents a robust means for determining text complexity based on a variety of features inherent to a text as well as…

  12. Understanding Complex Natural Systems by Articulating Structure-Behavior-Function Models

    ERIC Educational Resources Information Center

    Vattam, Swaroop S.; Goel, Ashok K.; Rugaber, Spencer; Hmelo-Silver, Cindy E.; Jordan, Rebecca; Gray, Steven; Sinha, Suparna

    2011-01-01

    Artificial intelligence research on creative design has led to Structure-Behavior-Function (SBF) models that emphasize functions as abstractions for organizing understanding of physical systems. Empirical studies on understanding complex systems suggest that novice understanding is shallow, typically focusing on their visible structures and…

  13. [Comparison of predictive models for the selection of high-complexity patients].

    PubMed

    Estupiñán-Ramírez, Marcos; Tristancho-Ajamil, Rita; Company-Sancho, María Consuelo; Sánchez-Janáriz, Hilda

    2017-08-18

    To compare the concordance of complexity weights between Clinical Risk Groups (CRG) and Adjusted Morbidity Groups (AMG), to determine which one is the best predictor of patient admission, and to optimise the method used to select the 0.5% of patients of highest complexity to be included in an intervention protocol. Cross-sectional analytical study in 18 Canary Island health areas; 385,049 citizens were enrolled, using sociodemographic variables from health cards; diagnoses and use of healthcare resources obtained from primary health care electronic records (PCHR) and the basic minimum set of hospital data; the functional status recorded in the PCHR; and the drugs prescribed through the electronic prescription system. The correlation between stratifiers was estimated from these data. The ability of each stratifier to predict patient admissions was evaluated and prediction optimisation models were constructed. Concordance between the stratifiers' complexity weights was strong (rho = 0.735) and the correlation between categories of complexity was moderate (weighted kappa = 0.515). The AMG complexity weight predicts patient admission better than the CRG weight (AUC: 0.696 [0.695-0.697] versus 0.692 [0.691-0.693]). Other predictive variables were added to the AMG weight, with the best AUC (0.708 [0.707-0.708]) obtained by the model composed of AMG, sex, age, Pfeiffer and Barthel scales, re-admissions, and number of prescribed therapeutic groups. Strong concordance was found between stratifiers, and a higher predictive capacity for admission from AMG, which can be increased by adding other dimensions. Copyright © 2017 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
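
    Operationally, the comparison reduces to fitting admission models on each stratifier's score and comparing AUCs, then checking whether added dimensions raise the AUC. A schematic sketch on synthetic data (the variable names mirror the paper; the data and coefficients are invented):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 5000
    amg = rng.gamma(2.0, 1.0, n)                   # complexity weight (synthetic)
    age = rng.integers(18, 95, n)
    p = 1 / (1 + np.exp(-(-5 + 0.8 * amg + 0.03 * age)))
    admitted = (rng.random(n) < p).astype(int)     # synthetic admission outcome

    X1 = amg.reshape(-1, 1)                        # stratifier weight alone
    X2 = np.column_stack([amg, age])               # weight plus an extra dimension
    Xtr1, Xte1, Xtr2, Xte2, ytr, yte = train_test_split(X1, X2, admitted, random_state=0)

    for name, Xtr, Xte in [("weight only", Xtr1, Xte1), ("weight + age", Xtr2, Xte2)]:
        m = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
        auc = roc_auc_score(yte, m.predict_proba(Xte)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")
    ```

    The second model's higher AUC mirrors the paper's finding that adding sex, age, functional scales, re-admissions, and prescription counts to the AMG weight improves admission prediction.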

  14. Scientific and technical complex for modeling, researching and testing of rocket-space vehicles’ electric power installations

    NASA Astrophysics Data System (ADS)

    Bezruchko, Konstantin; Davidov, Albert

    2009-01-01

    This article describes the scientific and technical complex for the modeling, research, and testing of rocket-space vehicles' electric power installations created in the Power Source Laboratory of the National Aerospace University "KhAI". This scientific and technical complex makes it possible to replace full-scale tests with model tests and to reduce the financial and time expenditures of modeling, researching, and testing rocket-space vehicles' power installations. Using the complex, problems of designing and researching rocket-space vehicles' power installations can be solved efficiently, and experimental studies of physical processes and tests of solar and chemical batteries of rocket-space complexes and space vehicles can be carried out. The complex also supports accelerated testing, diagnostics, lifetime control, and restoration of chemical accumulators for rocket-space vehicles' power supply systems.

  15. The application of CFD to the modelling of fires in complex geometries

    NASA Astrophysics Data System (ADS)

    Burns, A. D.; Clarke, D. S.; Guilbert, P.; Jones, I. P.; Simcox, S.; Wilkes, N. S.

    The application of Computational Fluid Dynamics (CFD) to industrial safety is a challenging activity. In particular it involves the interaction of several different physical processes, including turbulence, combustion, radiation, buoyancy, compressible flow and shock waves in complex three-dimensional geometries. In addition, there may be multi-phase effects arising, for example, from sprinkler systems for extinguishing fires. The FLOW3D software (1-3) from Computational Fluid Dynamics Services (CFDS) is in widespread use in industrial safety problems, both within AEA Technology, and also by CFDS's commercial customers, for example references (4-13). This paper discusses some other applications of FLOW3D to safety problems. These applications illustrate the coupling of the gas flows with radiation models and combustion models, particularly for complex geometries where simpler radiation models are not applicable.

  16. Increasing Model Complexity: Unit Testing and Validation of a Coupled Electrical Resistive Heating and Macroscopic Invasion Percolation Model

    NASA Astrophysics Data System (ADS)

    Molnar, I. L.; Krol, M.; Mumford, K. G.

    2016-12-01

    Geoenvironmental models are becoming increasingly sophisticated as they incorporate rising numbers of mechanisms and process couplings to describe environmental scenarios. When combined with advances in computing and numerical techniques, these already complicated models are experiencing large increases in code complexity and simulation time. Although this complexity has enabled breakthroughs in the ability to describe environmental problems, it is difficult to ensure that complex models are sufficiently robust and behave as intended. Many development tools used for testing software robustness have not seen widespread use in the geoenvironmental sciences despite an increasing reliance on complex numerical models, leaving many models at risk of undiscovered errors and potentially improper validations. This study explores the use of unit testing, which independently examines small code elements to ensure each unit is working as intended, as well as their integrated behaviour, to test the functionality and robustness of a coupled Electrical Resistive Heating (ERH) - Macroscopic Invasion Percolation (MIP) model. ERH is a thermal remediation technique where the soil is heated until boiling and volatile contaminants are stripped from the soil. There is significant interest in improving the efficiency of ERH, including taking advantage of low-temperature co-boiling behaviour which may reduce energy consumption. However, at lower co-boiling temperatures gas bubbles can form, mobilize and collapse in cooler areas, potentially contaminating previously clean zones. The ERH-MIP model was created to simulate the behaviour of gas bubbles in the subsurface and to evaluate ERH during co-boiling. This study demonstrates how unit testing ensures that the model behaves in an expected manner and examines the robustness of every component within the ERH-MIP model. Once unit testing is established, the MIP module (a discrete gas transport algorithm for gas expansion, mobilization and
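
    As an illustration of the testing approach (not the actual ERH-MIP code), a unit test can pin down an invariant of a single code element; here a hypothetical stand-in for one gas-mobilization step, whose gas mass must be conserved:

    ```python
    import numpy as np

    def mip_mobilize(gas, src, dst, threshold):
        """Toy macroscopic-invasion-percolation step: if gas in cell `src`
        exceeds `threshold`, the excess moves to cell `dst`. (Hypothetical
        stand-in for one unit of an ERH-MIP-style model.)"""
        gas = gas.copy()
        excess = max(0.0, gas[src] - threshold)
        gas[src] -= excess
        gas[dst] += excess
        return gas

    def test_mobilization_conserves_mass():
        gas = np.array([0.9, 0.1, 0.0])
        out = mip_mobilize(gas, src=0, dst=2, threshold=0.5)
        assert np.isclose(out.sum(), gas.sum())   # no gas created or destroyed
        assert out[0] <= 0.5                      # source capped at threshold

    def test_no_mobilization_below_threshold():
        gas = np.array([0.3, 0.0, 0.0])
        out = mip_mobilize(gas, src=0, dst=1, threshold=0.5)
        assert np.allclose(out, gas)              # unit behaves as intended

    if __name__ == "__main__":
        test_mobilization_conserves_mass()
        test_no_mobilization_below_threshold()
        print("all unit tests passed")
    ```

    Running such tests under pytest after every change is what catches the "undiscovered errors" the abstract warns about, before the coupled model is ever validated against data.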

  17. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect).

  18. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    DOE PAGES

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; ...

    2017-10-06

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect).

  19. Complex networks generated by the Penna bit-string model: Emergence of small-world and assortative mixing

    NASA Astrophysics Data System (ADS)

    Li, Chunguang; Maini, Philip K.

    2005-10-01

    The Penna bit-string model successfully encompasses many phenomena of population evolution, including inheritance, mutation, evolution, and aging. If we consider social interactions among individuals in the Penna model, the population will form a complex network. In this paper, we first modify the Verhulst factor to control only the birth rate, and introduce activity-based preferential reproduction of offspring in the Penna model. The social interactions among individuals are generated by both inheritance and activity-based preferential increase. Then we study the properties of the complex network generated by the modified Penna model. We find that the resulting complex network has a small-world effect and the assortative mixing property.

  20. Compact Q-balls in the complex signum-Gordon model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arodz, H.; Lis, J.

    2008-05-15

    We discuss Q-balls in the complex signum-Gordon model in d-dimensional space for d=1, 2, 3. The Q-balls have strictly finite size. Their total energy is a power-like function of the conserved U(1) charge, with the exponent equal to (d+2)/(d+3). In the cases d=1 and d=3 explicit analytic solutions are presented.

  1. Modeling uranium(VI) adsorption onto montmorillonite under varying carbonate concentrations: A surface complexation model accounting for the spillover effect on surface potential

    NASA Astrophysics Data System (ADS)

    Tournassat, C.; Tinnacher, R. M.; Grangeon, S.; Davis, J. A.

    2018-01-01

    The prediction of U(VI) adsorption onto montmorillonite clay is confounded by the complexities of: (1) the montmorillonite structure in terms of adsorption sites on basal and edge surfaces, and the complex interactions between the electrical double layers at these surfaces, and (2) U(VI) solution speciation, which can include cationic, anionic and neutral species. Previous U(VI)-montmorillonite adsorption and modeling studies have typically expanded classical surface complexation modeling approaches, initially developed for simple oxides, to include both cation exchange and surface complexation reactions. However, previous models have not taken into account the unique characteristics of electrostatic surface potentials that occur at montmorillonite edge sites, where the electrostatic surface potential of basal plane cation exchange sites influences the surface potential of neighboring edge sites ('spillover' effect). A series of U(VI) - Na-montmorillonite batch adsorption experiments was conducted as a function of pH, with variable U(VI), Ca, and dissolved carbonate concentrations. Based on the experimental data, a new type of surface complexation model (SCM) was developed for montmorillonite, that specifically accounts for the spillover effect using the edge surface speciation model by Tournassat et al. (2016a). The SCM allows for a prediction of U(VI) adsorption under varying chemical conditions with a minimum number of fitting parameters, not only for our own experimental results, but also for a number of published data sets. The model agreed well with many of these datasets without introducing a second site type or including the formation of ternary U(VI)-carbonato surface complexes. The model predictions were greatly impacted by utilizing analytical measurements of dissolved inorganic carbon (DIC) concentrations in individual sample solutions rather than assuming solution equilibration with a specific partial pressure of CO2, even when the gas phase was
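
    For orientation, the core computation inside any surface complexation model is a mass-action equilibrium on surface sites; the sketch below strips away the paper's spillover electrostatics and multi-site chemistry and keeps a single hypothetical site and constant, solved with scipy:

    ```python
    from scipy.optimize import brentq

    def fraction_adsorbed(pH, logK=0.5, site_total=1e-4, u_total=1e-6):
        """Solve >SOH + UO2(2+) <=> >SOUO2(+) + H+ for the adsorbed amount x
        (mol/L), ignoring electrostatic correction terms. All constants here
        are hypothetical, chosen only to produce a plausible adsorption edge."""
        h = 10.0 ** (-pH)
        K = 10.0 ** logK

        def residual(x):  # mass-action balance: x*h = K*(S_T - x)*(U_T - x)
            return x * h - K * (site_total - x) * (u_total - x)

        x = brentq(residual, 0.0, min(site_total, u_total) * (1 - 1e-12))
        return x / u_total

    for pH in (3, 4, 5, 6, 7):
        print(pH, round(fraction_adsorbed(pH), 3))
    ```

    The adsorbed fraction rises with pH in the classic edge shape; the model developed in the paper adds the carbonate speciation and the spillover-corrected electrostatic terms that modulate such constants on real montmorillonite edges.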

  2. Turbulent Dispersion Modelling in a Complex Urban Environment - Data Analysis and Model Development

    DTIC Science & Technology

    2010-02-01

    Technology Laboratory (Dstl) is used as a benchmark for comparison. Comparisons are also made with some more practically oriented computational fluid dynamics...predictions. To achieve clarity in the range of approaches available for practical models of contaminant dispersion in urban areas, an overview of...complexity of those methods is simplified to a degree that allows straightforward practical implementation and application. Using these results as a

  3. Modelling DC responses of 3D complex fracture networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model the electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  4. Modelling DC responses of 3D complex fracture networks

    DOE PAGES

    Beskardes, Gungor Didem; Weiss, Chester Joseph

    2018-03-01

    Here, the determination of the geometrical properties of fractures plays a critical role in many engineering problems to assess the current hydrological and mechanical states of geological media and to predict their future states. However, numerical modeling of geoelectrical responses in realistic fractured media has been challenging due to the explosive computational cost imposed by the explicit discretizations of fractures at multiple length scales, which often brings about a tradeoff between computational efficiency and geologic realism. Here, we use the hierarchical finite element method to model the electrostatic response of realistically complex 3D conductive fracture networks with minimal computational cost.

  5. Use of paired simple and complex models to reduce predictive bias and quantify uncertainty

    NASA Astrophysics Data System (ADS)

    Doherty, John; Christensen, Steen

    2011-12-01

    Modern environmental management and decision-making is based on the use of increasingly complex numerical models. Such models have the advantage of allowing representation of complex processes and heterogeneous system property distributions inasmuch as these are understood at any particular study site. The latter are often represented stochastically, this reflecting knowledge of the character of system heterogeneity at the same time as it reflects a lack of knowledge of its spatial details. Unfortunately, however, complex models are often difficult to calibrate because of their long run times and sometimes questionable numerical stability. Analysis of predictive uncertainty is also a difficult undertaking when using models such as these. Such analysis must reflect a lack of knowledge of spatial hydraulic property details. At the same time, it must be subject to constraints on the spatial variability of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration-constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights into the costs of model simplification, and into how some of these costs may be reduced. It then describes a methodology for paired model usage through which predictive

  6. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Technical Reports Server (NTRS)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  7. MASS BALANCE MODELLING OF PCBS IN THE FOX RIVER/GREEN BAY COMPLEX

    EPA Science Inventory

    The USEPA Office of Research and Development developed and applied a multimedia, mass-balance modeling approach to the Fox River/Green Bay complex to aid managers with remedial decision-making. The suite of models was applied to PCBs due to the long history of contamination and ...

  8. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
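
    A schematic of the combination described, dimension reduction plus statistical emulation (not the PLASIM/GENIE configuration): project high-dimensional output fields onto a few principal components, emulate each component with a Gaussian process in the inputs, and reconstruct full fields at unseen inputs. All data below are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(4)
    n_train, n_grid = 40, 500
    X = rng.uniform(0, 1, (n_train, 3))                 # boundary-forcing inputs
    grid = np.linspace(0, 2 * np.pi, n_grid)
    # Synthetic "spatial fields" responding smoothly to the inputs
    Y = (X[:, [0]] * np.sin(grid) + X[:, [1]] * np.cos(2 * grid)
         + 0.01 * rng.standard_normal((n_train, n_grid)))

    pca = PCA(n_components=4).fit(Y)                    # dimensionality reduction
    scores = pca.transform(Y)                           # low-dimensional targets

    gps = [GaussianProcessRegressor(kernel=RBF(0.3), normalize_y=True)
           .fit(X, scores[:, j]) for j in range(scores.shape[1])]

    x_new = np.array([[0.2, 0.7, 0.5]])                 # unseen input
    s_new = np.column_stack([gp.predict(x_new) for gp in gps])
    field = pca.inverse_transform(s_new)                # emulated spatial field
    truth = 0.2 * np.sin(grid) + 0.7 * np.cos(2 * grid)
    print("RMSE vs truth:", np.sqrt(np.mean((field[0] - truth) ** 2)))
    ```

    In the multi-level variant, outputs of the cheap model (here, GENIE-1's energy-moisture balance atmosphere) would enter as additional predictors alongside X, exploiting the correlation between the two models' variables.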

  9. Cationic lipids: molecular structure/ transfection activity relationships and interactions with biomembranes.

    PubMed

    Koynova, Rumiana; Tenchov, Boris

    2010-01-01

    Synthetic cationic lipids, which form complexes (lipoplexes) with polyanionic DNA, are presently the most widely used constituents of nonviral gene carriers. A large number of cationic amphiphiles have been synthesized and tested in transfection studies. However, due to the complexity of the transfection pathway, no general schemes have emerged for correlating the cationic lipid chemistry with their transfection efficacy and the approaches for optimizing their molecular structures are still largely empirical. Here we summarize data on the relationships between transfection activity and cationic lipid molecular structure and demonstrate that the transfection activity depends in a systematic way on the lipid hydrocarbon chain structure. A number of examples, including a large series of cationic phosphatidylcholine derivatives, show that optimum transfection is displayed by lipids with chain length of approximately 14 carbon atoms and that the transfection efficiency strongly increases with increase of chain unsaturation, specifically upon replacement of saturated with monounsaturated chains.

  10. Adsorption of uranium(VI) to manganese oxides: X-ray absorption spectroscopy and surface complexation modeling.

    PubMed

    Wang, Zimeng; Lee, Sung-Woo; Catalano, Jeffrey G; Lezama-Pacheco, Juan S; Bargar, John R; Tebo, Bradley M; Giammar, Daniel E

    2013-01-15

    The mobility of hexavalent uranium in soil and groundwater is strongly governed by adsorption to mineral surfaces. As strong naturally occurring adsorbents, manganese oxides may significantly influence the fate and transport of uranium. Models for U(VI) adsorption over a broad range of chemical conditions can improve predictive capabilities for uranium transport in the subsurface. This study integrated batch experiments of U(VI) adsorption to synthetic and biogenic MnO2, surface complexation modeling, ζ-potential analysis, and molecular-scale characterization of adsorbed U(VI) with extended X-ray absorption fine structure (EXAFS) spectroscopy. The surface complexation model included inner-sphere monodentate and bidentate surface complexes and a ternary uranyl-carbonato surface complex, which was consistent with the EXAFS analysis. The model could successfully simulate adsorption results over a broad range of pH and dissolved inorganic carbon concentrations. U(VI) adsorption to synthetic δ-MnO2 appears to be stronger than to biogenic MnO2, and the differences in adsorption affinity and capacity are not associated with any substantial difference in U(VI) coordination.

  11. Surface Complexation Modeling of Eu(III) and U(VI) Interactions with Graphene Oxide.

    PubMed

    Xie, Yu; Helvenston, Edward M; Shuller-Nickles, Lindsay C; Powell, Brian A

    2016-02-16

    Graphene oxide (GO) has great potential for actinide removal due to its extremely high sorption capacity, but the mechanism of sorption remains unclear. In this study, the carboxylic functional group and an unexpected sulfonate functional group on GO were characterized as the reactive surface sites and quantified via diffuse layer modeling of the GO acid/base titrations. The presence of the sulfonate functional group on GO was confirmed using elemental analysis and X-ray photoelectron spectroscopy. Batch experiments of Eu(III) and U(VI) sorption to GO as a function of pH (1-8) and of analyte concentration (10-100,000 ppb) at a constant pH ≈ 5 were conducted; the batch sorption results were modeled simultaneously using surface complexation modeling (SCM). The SCM indicated that Eu(III) and U(VI) complexation to the carboxylate functional group is the main mechanism for their sorption to GO; their complexation to the sulfonate site occurred in the lower pH range, and the complexation of Eu(III) to the sulfonate site is more significant than that of U(VI). Eu(III)- and U(VI)-facilitated GO aggregation was observed at high Eu(III) and U(VI) concentrations and may be caused by surface charge neutralization of GO after sorption.

  12. Diffusion in higher dimensional SYK model with complex fermions

    NASA Astrophysics Data System (ADS)

    Cai, Wenhe; Ge, Xian-Hui; Yang, Guo-Hong

    2018-01-01

    We construct a new higher-dimensional SYK model with complex fermions on bipartite lattices. As an extension of the original zero-dimensional SYK model, we focus on the one-dimensional case; a similar Hamiltonian can be obtained in higher dimensions. This model has a conserved U(1) fermion number Q and a conjugate chemical potential μ. We evaluate the thermal and charge diffusion constants via a large-q expansion in the low-temperature limit. The results show that the diffusivity depends on the ratio of free Majorana fermions to Majorana fermions with SYK interactions. The transport properties and the butterfly velocity are accordingly calculated at low temperature. The specific heat and the thermal conductivity are proportional to the temperature. The electrical resistivity also has a linear temperature-dependence term.

  13. Understanding Transportation Systems : An Integrated Approach to Modeling Complex Transportation Systems

    DOT National Transportation Integrated Search

    2013-01-01

    The ability to model and understand the complex dynamics of intelligent agents as they interact within a transportation system could lead to revolutionary advances in transportation engineering and intermodal surface transportation in the United Stat...

  14. A 3D modeling approach to complex faults with multi-source data

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan

    2015-04-01

    Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to be able to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale faults or faults approximately parallel with the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, the fault cutting algorithm can supplement the available fault points at locations where faults cut each other. Increasing fault points in poorly sampled areas can not only efficiently construct fault models but also reduce manual intervention. By using fault-based interpolation and remeshing the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures regardless of whether the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that the method can be applied to broad and complex geological areas.

  15. Progress on Complex Langevin simulations of a finite density matrix model for QCD

    NASA Astrophysics Data System (ADS)

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus; Zafeiropoulos, Savvas

    2018-03-01

    We study the Stephanov model, which is an RMT model for QCD at finite density, using the Complex Langevin algorithm. Naive implementation of the algorithm shows convergence towards the phase-quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation of the random matrices. Unfortunately, none of these modifications generates a substantial improvement of the complex Langevin evolution, and the final results still do not agree with the analytical predictions.
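
    For orientation, the complex Langevin idea itself can be shown on a one-variable toy model with complex Gaussian action S(x) = sigma*x^2/2; this is vastly simpler than the Stephanov matrix model and free of its pathologies. Complexify the variable, evolve it with drift -S'(z) plus real noise, and check <x^2> against the exact value 1/sigma:

    ```python
    import numpy as np

    sigma = 1.0 + 1.0j          # complex "mass" parameter of S(x) = sigma*x^2/2
    dt, n_steps, n_therm = 1e-3, 400_000, 10_000

    rng = np.random.default_rng(5)
    z = 0.0 + 0.0j
    samples = []
    for k in range(n_steps):
        # Complex Langevin update: dz = -S'(z) dt + sqrt(2 dt) * real noise
        z = z - sigma * z * dt + np.sqrt(2 * dt) * rng.standard_normal()
        if k >= n_therm:
            samples.append(z * z)

    print("estimated <x^2> =", np.mean(samples))
    print("exact 1/sigma   =", 1 / sigma)   # 0.5 - 0.5j
    ```

    For this Gaussian action the evolution provably converges to the right answer; the interest of the paper is precisely that for the finite-density matrix model the analogous evolution drifts to the (phase-)quenched theory instead, and cooling does not rescue it.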

  16. Introduction to a special section on ecohydrology of semiarid environments: Confronting mathematical models with ecosystem complexity

    NASA Astrophysics Data System (ADS)

    Svoray, Tal; Assouline, Shmuel; Katul, Gabriel

    2015-11-01

    Current literature provides large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, a question raised is to what extent these mathematical models are valid when confronted with observed ecosystem complexity? This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to greater understanding of ecosystem complexity through characterization of space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.

  17. Synchronization Experiments With A Global Coupled Model of Intermediate Complexity

    NASA Astrophysics Data System (ADS)

    Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin

    2013-04-01

    In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solutions of all other models in the ensemble. The goal is to obtain a synchronized state, through a proper choice of connection strengths, that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted-average surface fluxes. In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble-mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
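
    A minimal sketch of the connected-ensemble idea using two imperfect Lorenz-63 "atmospheres" nudged toward each other; this toy stand-in (all parameters invented) shows how the connection terms drive differently biased models toward a common synchronized trajectory:

    ```python
    import numpy as np

    def lorenz(state, sigma, rho, beta=8 / 3):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    dt, c = 0.005, 20.0                     # time step and nudging strength
    s1 = np.array([1.0, 1.0, 1.0])          # model 1: biased parameters
    s2 = np.array([-3.0, 2.0, 25.0])        # model 2: different bias and start

    for _ in range(20_000):
        # Each model is nudged toward the other's state (the connection terms)
        d1 = lorenz(s1, sigma=9.0, rho=28.0) + c * (s2 - s1)
        d2 = lorenz(s2, sigma=11.0, rho=27.0) + c * (s1 - s2)
        s1, s2 = s1 + dt * d1, s2 + dt * d2

    print("synchronization error:", np.linalg.norm(s1 - s2))
    ```

    With sufficiently strong nudging the residual error stays small relative to the attractor size despite the parameter mismatch, which is the property the connection strengths are tuned for in the full coupled-model ensemble.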

  18. New Age of 3D Geological Modelling or Complexity is not an Issue Anymore

    NASA Astrophysics Data System (ADS)

    Mitrofanov, Aleksandr

    2017-04-01

    A geological model has significant value in almost all types of research related to regional mapping, geodynamics, and especially the structural and resource geology of mineral deposits. A well-developed geological model must take into account all vital features of the modelled object without over-simplification and should also adequately represent the interpretation of the geologist. In recent years, with the gradual exhaustion of deposits with relatively simple morphology, geologists all over the world are faced with the necessity of building representative models for increasingly structurally complex objects. Meanwhile, the set of tools used for this has not changed significantly in the last two to three decades. The most widespread method of wireframe geological modelling was developed in the 1990s and is fully based on an engineering design toolset (so-called CAD). Strings and polygons representing the section-based interpretation are used as an intermediate step in the generation of wireframes. Despite the significant time required for this type of modelling, it can still provide sufficient results for simple and medium-complexity geological objects. However, with increasing complexity, more and more vital features of the deposit are sacrificed because of the fundamental inability (or much greater modelling time required) of CAD-based explicit techniques to develop wireframes of the appropriate complexity. At the same time, an alternative technology, not based on the sectional approach and using fundamentally different mathematical algorithms, has been actively developed in a variety of other disciplines: medicine, advanced industrial design, and the game and cinema industries. In recent years this implicit technology has begun to be developed for geological modelling purposes, and nowadays it is represented by a very powerful set of tools integrated into almost all major commercial software packages.

  19. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

    Report AFRL-AFOSR-VA-TR-2015-0278, Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models (Katya Scheinberg; grant FA9550-11-1-0239). Subject terms: optimization, derivative-free optimization, statistical machine learning.

  20. Generative complexity of Gray-Scott model

    NASA Astrophysics Data System (ADS)

    Adamatzky, Andrew

    2018-03-01

    In the Gray-Scott reaction-diffusion system, one reactant is constantly fed into the system; another reactant is reproduced by consuming the supplied reactant and is also converted to an inert product. The rate of feeding one reactant into the system and the rate of removing the other reactant from the system determine the configurations of concentration profiles: stripes, spots, waves. We calculate the generative complexity, i.e. the morphological complexity of concentration profiles grown from a point-wise perturbation of the medium, of the Gray-Scott system for a range of the feeding and removal rates. The morphological complexity is evaluated using Shannon entropy, Simpson diversity, an approximation of Lempel-Ziv complexity, and expressivity (Shannon entropy divided by space-filling). We analyse the behaviour of the systems with the highest values of generative morphological complexity and show that the Gray-Scott systems expressing the highest levels of complexity are composed of wave-fragments (similar to wave-fragments in sub-excitable media) and travelling localisations (similar to quasi-dissipative solitons and gliders in Conway's Game of Life).
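
    A minimal sketch of the machinery described above, using assumed textbook Gray-Scott parameters (Du, Dv, f, k) rather than the paper's scanned ranges: one explicit update rule on a periodic grid, a point-wise initial perturbation, and a Shannon-entropy estimate of the grown concentration profile.

        import numpy as np

        def laplacian(a):
            # periodic five-point Laplacian
            return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                    np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

        def gray_scott_step(u, v, Du, Dv, f, k, dt=1.0):
            uvv = u * v * v
            u += dt * (Du * laplacian(u) - uvv + f * (1 - u))   # fed reactant
            v += dt * (Dv * laplacian(v) + uvv - (f + k) * v)   # removed reactant
            return u, v

        def shannon_entropy(field, bins=16):
            hist, _ = np.histogram(field, bins=bins)
            p = hist[hist > 0] / hist.sum()
            return -np.sum(p * np.log2(p))

        n = 128
        u, v = np.ones((n, n)), np.zeros((n, n))
        u[n//2-2:n//2+2, n//2-2:n//2+2] = 0.50    # point-wise perturbation
        v[n//2-2:n//2+2, n//2-2:n//2+2] = 0.25
        for _ in range(5000):
            u, v = gray_scott_step(u, v, Du=0.16, Dv=0.08, f=0.035, k=0.065)
        print("Shannon entropy of grown profile:", shannon_entropy(v))

    Scanning f and k over a grid and recording such measures reproduces, in spirit, the kind of complexity map computed in the paper.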

  1. The evaluative imaging of mental models - Visual representations of complexity

    NASA Technical Reports Server (NTRS)

    Dede, Christopher

    1989-01-01

    The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can be best managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.

  2. Recommended Research Directions for Improving the Validation of Complex Systems Models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vugrin, Eric D.; Trucano, Timothy G.; Swiler, Laura Painton

    Improved validation for models of complex systems has been a primary focus over the past year for the Resilience in Complex Systems Research Challenge. This document describes a set of research directions that are the result of distilling those ideas into three categories of research -- epistemic uncertainty, strong tests, and value of information. The content of this document can be used to transmit valuable information to future research activities, update the Resilience in Complex Systems Research Challenge's roadmap, inform the upcoming FY18 Laboratory Directed Research and Development (LDRD) call and research proposals, and facilitate collaborations between Sandia and external organizations. The recommended research directions can provide topics for collaborative research, development of proposals, workshops, and other opportunities.

  3. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  4. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  5. An electrostatic model for the determination of magnetic anisotropy in dysprosium complexes.

    PubMed

    Chilton, Nicholas F; Collison, David; McInnes, Eric J L; Winpenny, Richard E P; Soncini, Alessandro

    2013-01-01

    Understanding the anisotropic electronic structure of lanthanide complexes is important in areas as diverse as magnetic resonance imaging, luminescent cell labelling and quantum computing. Here we present an intuitive strategy based on a simple electrostatic method, capable of predicting the magnetic anisotropy of dysprosium(III) complexes, even in low symmetry. The strategy relies only on knowing the X-ray structure of the complex and the well-established observation that, in the absence of high symmetry, the ground state of dysprosium(III) is a doublet quantized along the anisotropy axis with an angular momentum quantum number mJ = ±15/2. The magnetic anisotropy axes of 14 low-symmetry monometallic dysprosium(III) complexes computed via high-level ab initio calculations are very well reproduced by our electrostatic model. Furthermore, we show that the magnetic anisotropy is equally well predicted in a selection of low-symmetry polymetallic complexes.

  6. An example of complex modelling in dentistry using Markov chain Monte Carlo (MCMC) simulation.

    PubMed

    Helfenstein, Ulrich; Menghini, Giorgio; Steiner, Marcel; Murati, Francesca

    2002-09-01

    In the usual regression setting one regression line is computed for a whole data set. In a more complex situation, each person may be observed, for example, at several points in time, and thus a regression line might be calculated for each person. Additional complexities, such as various forms of errors in covariables, may make a straightforward statistical evaluation difficult or even impossible. During recent years methods have been developed that allow convenient analysis of problems where the data and the corresponding models show these and many other forms of complexity. The methodology makes use of a Bayesian approach and Markov chain Monte Carlo (MCMC) simulations. The methods allow the construction of increasingly elaborate models by building them up from local sub-models. The essential structure of the models can be represented visually by directed acyclic graphs (DAG). This attractive property allows communication and discussion of the essential structure and the substantial meaning of a complex model without needing algebra. After presenting the statistical methods, an example from dentistry is given to demonstrate their application and use. The dataset of the example had a complex structure; each of a set of children was followed up over several years. The number of new fillings in permanent teeth had been recorded at several ages. The dependent variables were markedly different from the normal distribution and could not be transformed to normality. In addition, explanatory variables were assumed to be measured with different forms of error. We illustrate how the corresponding models can be estimated conveniently via MCMC simulation, in particular Gibbs sampling, using the freely available software BUGS. In addition, we explore how measurement error may influence the estimates of the corresponding coefficients. It is demonstrated that the effect of the independent variable on the dependent variable may be markedly
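
    For readers unfamiliar with Gibbs sampling, here is a toy sketch (not the BUGS models of the paper): the sampler alternates draws from the full conditionals of the mean and precision of a normal model with conjugate priors. The data, priors, and burn-in length are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        y = rng.normal(3.0, 2.0, size=50)       # toy data
        n = len(y)

        mu, tau = 0.0, 1.0                      # tau is the precision 1/sigma^2
        samples = []
        for it in range(6000):
            # full conditional of mu (flat prior): Normal(mean(y), 1/(n*tau))
            mu = rng.normal(y.mean(), 1.0 / np.sqrt(n * tau))
            # full conditional of tau with a Gamma(0.001, 0.001) prior
            shape = 0.001 + n / 2
            rate = 0.001 + 0.5 * np.sum((y - mu) ** 2)
            tau = rng.gamma(shape, 1.0 / rate)
            if it >= 1000:                      # discard burn-in
                samples.append(mu)
        print("posterior mean of mu:", np.mean(samples))

    Hierarchical extensions (a regression line per child, error-in-covariable terms) are built by adding more full conditionals of exactly this kind, which is what BUGS automates from a DAG specification.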

  7. XML Encoding of Features Describing Rule-Based Modeling of Reaction Networks with Multi-Component Molecular Complexes

    PubMed Central

    Blinov, Michael L.; Moraru, Ion I.

    2011-01-01

    Multi-state molecules and multi-component complexes are commonly involved in cellular signaling. Accounting for molecules that have multiple potential states, such as a protein that may be phosphorylated on multiple residues, and molecules that combine to form heterogeneous complexes located among multiple compartments, generates an effect of combinatorial complexity. Models involving relatively few signaling molecules can include thousands of distinct chemical species. Several software tools (StochSim, BioNetGen) are already available to deal with combinatorial complexity. Such tools need information standards if models are to be shared, jointly evaluated and developed. Here we discuss XML conventions that can be adopted for modeling biochemical reaction networks described by user-specified reaction rules. These could form a basis for possible future extensions of the Systems Biology Markup Language (SBML). PMID:21464833

  8. Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.

    PubMed

    Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H

    2018-01-01

    To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in complex survey data. Modeling the multilevel structure of complex survey data is complicated, because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate the potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for the respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of the different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level were used to illustrate the usability and effectiveness of the iMCFA procedure for analyzing complex survey data. The analytic results of iMCFA using Muthen's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.

  9. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    PubMed

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA is coded in the Python language and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define, in a GIS environment, thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near-ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its application to very complex test cases. The tests show that processing times are satisfactory and that the definition of sources and receptors and the retrieval of output are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
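
    The generic ground-reflected Gaussian plume underlying models of this family can be sketched as below; this is not CAREA's actual implementation, and the source strength, wind speed, and dispersion coefficients are illustrative numbers.

        import numpy as np

        def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
            """Steady concentration (g/m^3) from a point source of strength q (g/s)
            at crosswind offset y and height z, for wind speed u (m/s) and release
            height h; the second exponential is the ground-reflection image term."""
            lateral = np.exp(-y**2 / (2 * sigma_y**2))
            vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                        np.exp(-(z + h)**2 / (2 * sigma_z**2)))
            return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # receptor on the plume centerline, near ground level; sigma_y and sigma_z
        # would normally be evaluated from stability class and downwind distance
        c = gaussian_plume(q=1.0, u=3.0, y=0.0, z=1.5, h=2.0,
                           sigma_y=20.0, sigma_z=10.0)
        print(f"concentration: {c:.2e} g/m^3")

    An area source, possibly with holes, is then handled by discretizing its polygon into many elementary point sources and summing their contributions at each receptor, which is where the computational cost and the GIS-side geometry handling come in.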

  10. Large eddy simulation modeling of particle-laden flows in complex terrain

    NASA Astrophysics Data System (ADS)

    Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.

    2017-12-01

    The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.

  11. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.

  12. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE PAGES

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica; ...

    2018-02-14

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using the open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach places minimal requirements on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.
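
    As a generic flavor of the approach (a sketch, not one of the paper's four examples), a Bokeh figure with pan, zoom, and hover tools can be written to a standalone HTML file and explored in any modern browser:

        # pip install bokeh
        import numpy as np
        from bokeh.plotting import figure, output_file, show

        output_file("seismogram.html")    # standalone HTML, opens in a browser
        p = figure(title="Toy three-component seismogram",
                   x_axis_label="time (s)", y_axis_label="amplitude + offset",
                   width=800, height=300,
                   tools="pan,wheel_zoom,box_zoom,reset,hover")
        t = np.linspace(0, 60, 3000)
        for comp, color, offset in [("Z", "navy", 0.0),
                                    ("N", "firebrick", 2.5),
                                    ("E", "seagreen", 5.0)]:
            trace = np.exp(-t / 20) * np.sin(2 * np.pi * 0.5 * t + offset)
            p.line(t, trace + offset, legend_label=comp, color=color)
        show(p)

    The same pattern scales to dispersion curves and slices through a 3D velocity model, the use cases described in the paper.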

  13. Acute Complex Care Model: An organizational approach for the medical care of hospitalized acute complex patients.

    PubMed

    Pietrantonio, Filomena; Orlandini, Francesco; Moriconi, Luca; La Regina, Micaela

    2015-12-01

    Chronic diseases are the major cause of death (59%) and disability worldwide, representing 46% of the global disease burden. According to the Future Hospital Commission of the Royal College of Physicians, the Medical Division (MD) will be responsible for all hospital medical services, from emergency to specialist wards. The Hospital Acute Care Hub will bring together the clinical areas of the MD that focus on the management of acute medical patients. The Chronic Care Model (CCM) places the patient at the center of the care system, enhancing the community's social and health support, pathways and structures to keep chronic, frail, poly-pathological people at home or out of the hospital. The management of such patients in the hospital still needs to be solved. Here we propose an innovative model for the management of the hospital's acute complex patients, which is the hospital counterpart of the CCM. The target population is acutely ill, complex and poly-pathological patients (AICPPs) admitted to hospital and requiring high-technology resources. The mission is to improve the management of medical admissions through pre-defined intra-hospital tracks and a global, multidisciplinary, patient-centered approach. The ACCM leader is an internal medicine specialist (IMS) who summarizes health problems, establishes priorities, and restores health balance in AICPPs. The epidemiological transition leading to a progressive increase in "chronically unstable" and complex patients needing frequent hospital treatment inevitably enhances the role of the hospital IMS in the coordination and delivery of care. The ACCM represents a practical response to this epochal change of roles. Copyright © 2015 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  14. Simulation of groundwater flow in the glacial aquifer system of northeastern Wisconsin with variable model complexity

    USGS Publications Warehouse

    Juckem, Paul F.; Clark, Brian R.; Feinstein, Daniel T.

    2017-05-04

    The U.S. Geological Survey, National Water-Quality Assessment seeks to map estimated intrinsic susceptibility of the glacial aquifer system of the conterminous United States. Improved understanding of the hydrogeologic characteristics that explain spatial patterns of intrinsic susceptibility, commonly inferred from estimates of groundwater age distributions, is sought so that methods used for the estimation process are properly equipped. An important step beyond identifying relevant hydrogeologic datasets, such as glacial geology maps, is to evaluate how incorporation of these resources into process-based models using differing levels of detail could affect resulting simulations of groundwater age distributions and, thus, estimates of intrinsic susceptibility. This report describes the construction and calibration of three groundwater-flow models of northeastern Wisconsin that were developed with differing levels of complexity to provide a framework for subsequent evaluations of the effects of process-based model complexity on estimations of groundwater age distributions for withdrawal wells and streams. Preliminary assessments, which focused on the effects of model complexity on simulated water levels and base flows in the glacial aquifer system, illustrate that simulation of vertical gradients using multiple model layers improves simulated heads more in low-permeability units than in high-permeability units. Moreover, simulation of heterogeneous hydraulic conductivity fields in coarse-grained and some fine-grained glacial materials produced a larger improvement in simulated water levels in the glacial aquifer system compared with simulation of uniform hydraulic conductivity within zones. The relation between base flows and model complexity was less clear; however, the relation generally seemed to follow a similar pattern as water levels. Although increased model complexity resulted in improved calibrations, future application of the models using simulated particle

  15. Progress on Complex Langevin simulations of a finite density matrix model for QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloch, Jacques; Glesaan, Jonas; Verbaarschot, Jacobus

    We study the Stephanov model, which is an RMT model for QCD at finite density, using the Complex Langevin algorithm. Naive implementation of the algorithm shows convergence towards the phase-quenched or quenched theory rather than to the intended theory with dynamical quarks. A detailed analysis of this issue and a potential resolution of the failure of this algorithm are discussed. We study the effect of gauge cooling on the Dirac eigenvalue distribution and the time evolution of the norm for various cooling norms, which were specifically designed to remove the pathologies of the complex Langevin evolution. The cooling is further supplemented with a shifted representation for the random matrices. Unfortunately, none of these modifications generate a substantial improvement in the complex Langevin evolution, and the final results still do not agree with the analytical predictions.

  16. Development of a One-Equation Eddy Viscosity Turbulence Model for Application to Complex Turbulent Flows

    NASA Astrophysics Data System (ADS)

    Wray, Timothy J.

    Computational fluid dynamics (CFD) is routinely used in performance prediction and design of aircraft, turbomachinery, automobiles, and in many other industrial applications. Despite its wide range of use, deficiencies in its prediction accuracy still exist. One critical weakness is the accurate simulation of complex turbulent flows using the Reynolds-Averaged Navier-Stokes equations in conjunction with a turbulence model. The goal of this research has been to develop an eddy-viscosity-type turbulence model to increase the accuracy of flow simulations for mildly separated flows, flows with rotation and curvature effects, and flows with surface roughness. This is accomplished by developing a new zonal one-equation turbulence model which relies heavily on the flow physics; it is now known in the literature as the Wray-Agarwal one-equation turbulence model. The effectiveness of the new model is demonstrated by comparing its results with those obtained by the industry-standard one-equation Spalart-Allmaras model, the two-equation Shear-Stress-Transport k-ω model, and experimental data. Results for subsonic, transonic, and supersonic flows in and about complex geometries are presented. It is demonstrated that the Wray-Agarwal model can provide industry and CFD researchers an accurate, efficient, and reliable turbulence model for the computation of a large class of complex turbulent flows.

  17. QRS complex detection based on continuous density hidden Markov models using univariate observations

    NASA Astrophysics Data System (ADS)

    Sotelo, S.; Arenas, W.; Altuve, M.

    2018-04-01

    In the electrocardiogram (ECG), the detection of QRS complexes is a fundamental step in the ECG signal processing chain, since it allows the determination of other characteristic waves of the ECG and provides information about heart rate variability. In this work, an automatic QRS complex detector based on continuous-density hidden Markov models (HMMs) is proposed. HMMs were trained using univariate observation sequences taken either from QRS complexes or their derivatives. The detection approach is based on comparing the log-likelihood of the observation sequence with a fixed threshold. A sliding window was used to obtain the observation sequence to be evaluated by the model. The threshold was optimized using receiver operating characteristic curves. Sensitivity (Sen), specificity (Spc) and F1 score were used to evaluate the detection performance. The approach was validated using ECG recordings from the MIT-BIH Arrhythmia database. A 6-fold cross-validation shows that the best detection performance was achieved with a 2-state HMM trained with QRS complex sequences (Sen = 0.668, Spc = 0.360 and F1 = 0.309). We conclude that these univariate sequences provide enough information to characterize the QRS complex dynamics with HMMs. Future work is directed toward the use of multivariate observations to increase detection performance.
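
    A rough sketch of the detection scheme under stated assumptions (synthetic training windows, the hmmlearn package, and illustrative hyperparameters; not the authors' code): a Gaussian HMM is trained on univariate windows around QRS complexes, and a sliding window over the ECG is flagged whenever its log-likelihood under the model exceeds a threshold.

        # pip install hmmlearn numpy
        import numpy as np
        from hmmlearn import hmm

        rng = np.random.default_rng(1)
        win = 25
        # toy training windows: a spike-like bump plus noise, standing in for
        # observation sequences extracted around annotated QRS complexes
        train = [np.exp(-((np.arange(win) - win // 2) ** 2) / 8.0)
                 + rng.normal(0, 0.1, win) for _ in range(40)]
        X = np.concatenate(train).reshape(-1, 1)

        model = hmm.GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
        model.fit(X, lengths=[win] * len(train))

        def detect(signal, threshold):
            """Flag window centers whose log-likelihood under the QRS model
            exceeds the threshold (ROC-optimized in the paper)."""
            hits = []
            for i in range(len(signal) - win):
                if model.score(signal[i:i + win].reshape(-1, 1)) > threshold:
                    hits.append(i + win // 2)
            return hits

    In practice the threshold would be chosen from receiver operating characteristic curves on annotated recordings, as the abstract describes.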

  18. Model Lipid Membranes on a Tunable Polymer Cushion

    NASA Astrophysics Data System (ADS)

    Smith, Hillary L.; Jablin, Michael S.; Vidyasagar, Ajay; Saiz, Jessica; Watkins, Erik; Toomey, Ryan; Hurd, Alan J.; Majewski, Jaroslaw

    2009-06-01

    A hydrated, surface-tethered polymer network capable of fivefold change in thickness over a 25-37°C temperature range has been demonstrated via neutron reflectivity and fluorescence microscopy to be a novel support for single lipid bilayers in a liquid environment. As the polymer swells from 170 to 900 Å, it promotes both in- and out-of-plane fluctuations of the supported membrane. The cushioned bilayer proved to be very robust, remaining structurally intact for 16 days and many temperature cycles. The promotion of membrane fluctuations offers far-reaching applications for this system as a surrogate biomembrane.

  19. Modeling the complex activity of sickle cell and thalassemia specialist nurses in England.

    PubMed

    Leary, Alison; Anionwu, Elizabeth N

    2014-01-01

    Specialist advanced practice nursing in hemoglobinopathies has a rich historical and descriptive literature. Subsequent work has shown that the role is valued by patients and families and also by other professionals. However, there is little empirical research on the complexity of these services in terms of the interventions offered. In addition, the work of clinical nurse specialists in England has been devalued through a perception of oversimplification. The purpose of this study was to understand the complexity of expert nursing practice in sickle cell and thalassemia. The approach taken to modeling complexity drew on methods common in mathematical modeling and computational mathematics. Knowledge discovery in data was the underpinning framework of the study, applied to a priori mined data. This allowed categorization of activity and articulation of complexity. In total, 8966 nursing events were captured over 1639 hours from a total of 22.8 whole-time equivalents, and several data sources were mined. The work of specialist nurses in this area is complex in terms of the physical and psychosocial care they provide. The nurses also undertook case-management activity, such as utilizing a very large network of professionals, and others participated in admission-avoidance work and education of patients' families and other staff. The work of nurses specializing in hemoglobinopathy care is complex and multidimensional and is likely to contribute to the quality of care in a cost-effective way. An understanding of this complexity can be used as an underpinning for establishing key performance indicators, optimum caseload calculations, and economic evaluation.

  20. A modeling process to understand complex system architectures

    NASA Astrophysics Data System (ADS)

    Robinson, Santiago Balestrini

    2009-12-01

    In recent decades, several tools have been developed by the armed forces, and their contractors, to test the capability of a force. These campaign-level analysis tools, often characterized as constructive simulations, are generally expensive to create and execute, and at best they are extremely difficult to verify and validate. This central observation, that analysts are relying more and more on constructive simulations to predict the performance of future networks of systems, leads to the two central objectives of this thesis: (1) to enable the quantitative comparison of architectures in terms of their ability to satisfy a capability without resorting to constructive simulations, and (2) when constructive simulations must be created, to quantitatively determine how to spend the modeling effort amongst the different system classes. The first objective led to Hypothesis A, the first main hypothesis, which states that by studying the relationships between the entities that compose an architecture, one can infer how well it will perform a given capability. The method used to test the hypothesis is based on two assumptions: (1) the capability can be defined as a cycle of functions, and (2) it must be possible to estimate the probability that a function-based relationship occurs between any two types of entities. If these two requirements are met, then by creating random functional networks, different architectures can be compared in terms of their ability to satisfy a capability. In order to test this hypothesis, a novel process for creating representative functional networks of large-scale system architectures was developed. The process, named Digraph Modeling for Architectures (DiMA), was tested by comparing its results to those of complex constructive simulations. Results indicate that if the inputs assigned to DiMA are correct (in the tests they were based on time-averaged data obtained from the ABM), DiMA is able to identify which of any two

  1. Disulfide Trapping for Modeling and Structure Determination of Receptor: Chemokine Complexes.

    PubMed

    Kufareva, Irina; Gustavsson, Martin; Holden, Lauren G; Qin, Ling; Zheng, Yi; Handel, Tracy M

    2016-01-01

    Despite the recent breakthrough advances in GPCR crystallography, structure determination of protein-protein complexes involving chemokine receptors and their endogenous chemokine ligands remains challenging. Here, we describe disulfide trapping, a methodology for generating irreversible covalent binary protein complexes from unbound protein partners by introducing two cysteine residues, one per interaction partner, at selected positions within their interaction interface. Disulfide trapping can serve at least two distinct purposes: (i) stabilization of the complex to assist structural studies and/or (ii) determination of pairwise residue proximities to guide molecular modeling. Methods for characterization of disulfide-trapped complexes are described and evaluated in terms of throughput, sensitivity, and specificity toward the most energetically favorable crosslinks. Due to the abundance of native disulfide bonds at receptor:chemokine interfaces, disulfide trapping of their complexes can be associated with intramolecular disulfide shuffling and result in misfolding of the component proteins; because of this, evidence from several experiments is typically needed to firmly establish a positive disulfide crosslink. An optimal pipeline that maximizes throughput and minimizes time and costs by early triage of unsuccessful candidate constructs is proposed. © 2016 Elsevier Inc. All rights reserved.

  2. Representing spatial and temporal complexity in ecohydrological models: a meta-analysis focusing on groundwater - surface water interactions

    NASA Astrophysics Data System (ADS)

    McDonald, Karlie; Mika, Sarah; Kolbe, Tamara; Abbott, Ben; Ciocca, Francesco; Marruedo, Amaia; Hannah, David; Schmidt, Christian; Fleckenstein, Jan; Karuse, Stefan

    2016-04-01

    Sub-surface hydrologic processes are highly dynamic, varying spatially and temporally with strong links to the geomorphology and hydrogeologic properties of an area. This spatial and temporal complexity is a critical regulator of biogeochemical and ecological processes within the groundwater - surface water (GW-SW) ecohydrological interface and adjacent ecosystems. Many GW-SW models have attempted to capture this spatial and temporal complexity with varying degrees of success. The incorporation of spatial and temporal complexity within GW-SW model configuration is important for investigating interactions with transient storage and subsurface geology, infiltration and recharge, and the mass balance of exchange fluxes at the GW-SW ecohydrological interface. Additionally, characterising spatial and temporal complexity in GW-SW models is essential to derive predictions under realistic environmental conditions. In this paper we conduct a systematic Web of Science meta-analysis of conceptual, hydrodynamic, and reactive and heat transport models of the GW-SW ecohydrological interface since 2004 to explore how these models handled spatial and temporal complexity. The freshwater - groundwater ecohydrological interface was the most commonly represented in publications between 2004 and 2014, with 91% of papers, followed by marine (6%) and estuarine (3%) systems. Of the GW-SW models published since 2004, 52% focused on hydrodynamic processes and <15% covered more than one process (e.g. heat and reactive transport). Within the hydrodynamic subset, 25% of models focused on a vertical depth of <5 m. The primary scientific and technological limitations on incorporating spatial and temporal variability into GW-SW models are identified as the inclusion of woody debris, carbon sources, subsurface geological structures and bioclogging in model parameterization. These technological limitations influence the types of models applied, such as hydrostatic coupled models

  3. Modeling the Complexities of Water and Hygiene in Limpopo Province South Africa

    NASA Astrophysics Data System (ADS)

    Mellor, J. E.; Smith, J. A.; Learmonth, G.; Netshandama, V.; Dillingham, R.

    2012-12-01

    Access to sustainable water and sanitation services is one of the biggest challenges the developing world faces as an increasing number of people inhabit those areas. Inadequate access to water and sanitation infrastructure often leads children to drink poor quality water which can result in early childhood diarrhea (ECD). Repeated episodes of ECD can cause serious problems such as growth stunting, cognitive impairment, and even death. Although researchers have long studied the connection between poor access to water and hygiene facilities and ECD, most studies have relied on intervention-control methods to study the effects of singular interventions. Such studies are time-consuming, costly, and fail to acknowledge that the causes and prevention strategies for ECD are numerous and complex. An alternate approach is to think of a community as a complex system in which the engineered, natural and social environments interact in ways that are not easily predicted. Such complex systems have no central or coordinating mechanism and may exhibit emergent behavior which can be counterintuitive and lead to valuable insights. The goal of this research is to develop a robust, quantitative understanding of the complex pathogen transmission chain that leads to ECD. To realize this goal, we have developed an Agent-Based Model (ABM) which simulates individual community member behavior. We have validated this transdisciplinary model with four years of field data from a community in Limpopo Province, South Africa. Our model incorporates data such as household water source preferences, collection habits, household- and source-water quality, water-source reliability and biological regrowth. Our outcome measures are household water quality, ECD incidences, and child growth stunting. This technique allows us to test hypotheses on the computer. Future researchers can implement promising interventions with our partner institution, the University of Venda, and the model can be refined as

  4. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence in
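
    A minimal sketch of the process-level idea on a hypothetical toy model: instead of perturbing individual parameters, each whole process is scaled by a common factor and the normalized change in output is recorded. The model, the 10% scaling, and the sensitivity index are illustrative assumptions, not the authors' method in detail.

        import numpy as np

        def run_model(scales):
            """Toy process-based model: output emerges from three interacting
            'processes' (growth, uptake, loss) scaled by the given factors."""
            growth, uptake, loss = scales
            x = 1.0
            for _ in range(100):
                x += 0.05 * growth * x - 0.02 * uptake * x**1.5 - 0.01 * loss * x
            return x

        base = np.ones(3)
        y0 = run_model(base)
        for i, name in enumerate(["growth", "uptake", "loss"]):
            scales = base.copy()
            scales[i] *= 1.10                           # perturb one process by +10%
            s = (run_model(scales) - y0) / (0.10 * y0)  # normalized sensitivity
            print(f"sensitivity of output to the {name} process: {s:+.3f}")

    Grouping all parameters and inputs of a process into one scaling factor is what keeps the analysis tractable when the parameter space of the full model is large.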

  5. Conceptual and Developmental Analysis of Mental Models: An Example with Complex Change Problems.

    ERIC Educational Resources Information Center

    Poirier, Louise

    Defining better implicit models of children's actions in a series of situations is of paramount importance to understanding how knowledge is constructed. The objective of this study was to analyze the implicit mental models used by children in complex change problems to understand the stability of the models and their evolution with the child's…

  6. Calcium-manganese oxides as structural and functional models for active site in oxygen evolving complex in photosystem II: lessons from simple models.

    PubMed

    Najafpour, Mohammad Mahdi

    2011-01-01

    The oxygen evolving complex in photosystem II, which induces the oxidation of water to dioxygen in plants, algae and certain bacteria, contains a cluster of one calcium and four manganese ions. It serves as a model for splitting water with sunlight. Reports on the mechanism and structure of photosystem II provide a more detailed architecture of the oxygen evolving complex and the surrounding amino acids. One challenge in this field is the development of artificial model compounds to study the oxygen evolution reaction outside the complicated environment of the enzyme. Calcium-manganese oxides as structural and functional models for the active site of photosystem II are explained and reviewed in this paper. Because of the related structures of these calcium-manganese oxides and the catalytic centers of the active site of the oxygen evolving complex of photosystem II, the study may help in understanding more about the mechanism of oxygen evolution by the oxygen evolving complex of photosystem II. Copyright © 2010 Elsevier B.V. All rights reserved.

  7. Describing complex cells in primary visual cortex: a comparison of context and multi-filter LN models.

    PubMed

    Westö, Johan; May, Patrick J C

    2018-05-02

    Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multi-filter linear-nonlinear (LN) models and context models. Models are, however, never correct and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: First, we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions. Second, we evaluate context models and multi-filter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multi-filter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multi-filter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantizations of neural behavior.

  8. Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation & Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsao, Jeffrey Y.; Trucano, Timothy G.; Kleban, Stephen D.

    This report contains the written footprint of a Sandia-hosted workshop held in Albuquerque, New Mexico, June 22-23, 2016 on “Complex Systems Models and Their Applications: Towards a New Science of Verification, Validation and Uncertainty Quantification,” as well as of pre-work that fed into the workshop. The workshop’s intent was to explore and begin articulating research opportunities at the intersection between two important Sandia communities: the complex systems (CS) modeling community, and the verification, validation and uncertainty quantification (VVUQ) community The overarching research opportunity (and challenge) that we ultimately hope to address is: how can we quantify the credibility of knowledgemore » gained from complex systems models, knowledge that is often incomplete and interim, but will nonetheless be used, sometimes in real-time, by decision makers?« less

  9. Predictive Models for the Free Energy of Hydrogen Bonded Complexes with Single and Cooperative Hydrogen Bonds.

    PubMed

    Glavatskikh, Marta; Madzhidov, Timur; Solov'ev, Vitaly; Marcou, Gilles; Horvath, Dragos; Varnek, Alexandre

    2016-12-01

    In this work, we report QSPR modeling of the free energy ΔG of 1:1 hydrogen bond complexes of different H-bond acceptors and donors. The modeling was performed on a large and structurally diverse set of 3373 complexes featuring a single hydrogen bond, for which ΔG was measured at 298 K in CCl4. The models were prepared using Support Vector Machine and Multiple Linear Regression, with ISIDA fragment descriptors. The marked atoms strategy was applied at the fragmentation stage, in order to capture the location of H-bond donor and acceptor centers. Different strategies of model validation have been suggested, including the targeted omission of individual H-bond acceptors and donors from the training set, in order to check whether the predictive ability of the model is not limited to the interpolation of H-bond strength between two already encountered partners. Successfully cross-validating individual models were combined into a consensus model, and challenged to predict external test sets of 629 and 12 complexes, in which donor and acceptor formed single and cooperative H-bonds, respectively. In all cases, SVM models outperform MLR. The SVM consensus model performs well both in 3-fold cross-validation (RMSE = 1.50 kJ/mol) and on the external test sets containing complexes with single (RMSE = 3.20 kJ/mol) and cooperative H-bonds (RMSE = 1.63 kJ/mol). © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
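
    The modeling workflow can be sketched generically with scikit-learn; the random descriptor matrix below merely stands in for ISIDA fragment counts, and the targets, hyperparameters, and fold count are illustrative assumptions.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # stand-in for ISIDA fragment-count descriptors of donor/acceptor pairs
        X = rng.integers(0, 4, size=(300, 40)).astype(float)
        y = X @ rng.normal(size=40) * 0.3 + rng.normal(0, 1.0, 300)  # toy dG values

        for name, model in [("SVM", SVR(kernel="rbf", C=10.0, epsilon=0.5)),
                            ("MLR", LinearRegression())]:
            rmse = -cross_val_score(model, X, y, cv=3,
                                    scoring="neg_root_mean_squared_error").mean()
            print(f"{name} 3-fold CV RMSE: {rmse:.2f} kJ/mol")

    The consensus step then combines the predictions of the individually cross-validated models, trading a little bias for robustness across H-bond partner chemistries.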

  10. Enhanced cellulose orientation analysis in complex model plant tissues.

    PubMed

    Rüggeberg, Markus; Saxe, Friederike; Metzger, Till H; Sundberg, Björn; Fratzl, Peter; Burgert, Ingo

    2013-09-01

    The orientation distribution of cellulose microfibrils in the plant cell wall is a key parameter for understanding anisotropic plant growth and mechanical behavior. However, precisely visualizing cellulose orientation in the plant cell wall has always been a challenge due to the small size of the cellulose microfibrils and the complex network of polymers in the plant cell wall. X-ray diffraction is one of the most frequently used methods for analyzing cellulose orientation in single cells and plant tissues, but the interpretation of the diffraction images is complex. Traditionally, circular or square cells and a Gaussian orientation of the cellulose microfibrils have been assumed to elucidate cellulose orientation from the diffraction images. However, the complex tissue structures of common model plant systems such as Arabidopsis or aspen (Populus) require a more sophisticated approach. We present an evaluation procedure which takes into account the precise cell geometry and is able to deal with complex microfibril orientation distributions. The evaluation procedure reveals the entire orientation distribution of the cellulose microfibrils, reflecting different orientations within the multi-layered cell wall. By analyzing aspen wood and Arabidopsis stems we demonstrate the versatility of this method and show that simplifying assumptions on geometry and orientation distributions can lead to errors in the calculated microfibril orientation pattern. The simulation routine is intended to be used as a valuable tool for nanostructural analysis of plant cell walls and is freely available from the authors on request. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Coevolving complex networks in the model of social interactions

    NASA Astrophysics Data System (ADS)

    Raducha, Tomasz; Gubiec, Tomasz

    2017-04-01

    We analyze Axelrod's model of social interactions on coevolving complex networks. We introduce four extensions with different mechanisms of edge rewiring. The models are intended to capture two kinds of interactions: preferential attachment, which can be observed in scientist or actor collaborations, and local rewiring, which can be observed in friendship formation in everyday relations. Numerical simulations show that the proposed dynamics can lead to a power-law distribution of node degrees and a high value of the clustering coefficient, while still retaining the small-world effect in three of the models. All models are characterized by two phase transitions of a different nature. In the case of local rewiring we obtain a discontinuous order-disorder phase transition even in the thermodynamic limit, while in the case of long-distance switching the discontinuity disappears in the thermodynamic limit, leaving one continuous phase transition. In addition, we discover a new and universal characteristic of the second transition point: an abrupt increase of the clustering coefficient, due to the formation of many small complete subgraphs inside the network.
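
    A minimal sketch of one coevolving variant (illustrative; the rewiring rule here is a simple random reconnection, not one of the paper's four mechanisms): agents carry culture vectors, imitate a neighbor with probability equal to their cultural overlap, and rewire away from neighbors with zero overlap.

        import random

        random.seed(7)
        F, Q, N = 3, 5, 100   # features, traits per feature, agents (illustrative)
        culture = [[random.randrange(Q) for _ in range(F)] for _ in range(N)]
        edges = {i: set() for i in range(N)}
        while sum(len(e) for e in edges.values()) < 4 * N:   # mean degree ~4
            a, b = random.sample(range(N), 2)
            edges[a].add(b); edges[b].add(a)

        def overlap(i, j):
            return sum(x == y for x, y in zip(culture[i], culture[j])) / F

        for _ in range(200000):
            i = random.randrange(N)
            if not edges[i]:
                continue
            j = random.choice(tuple(edges[i]))
            o = overlap(i, j)
            if 0 < o < 1 and random.random() < o:
                # Axelrod interaction: copy one feature the two disagree on
                k = random.choice([f for f in range(F)
                                   if culture[i][f] != culture[j][f]])
                culture[i][k] = culture[j][k]
            elif o == 0:
                # coevolution: cut the incompatible edge, reconnect at random
                edges[i].discard(j); edges[j].discard(i)
                m = random.choice([n for n in range(N)
                                   if n != i and n not in edges[i]])
                edges[i].add(m); edges[m].add(i)

        print("distinct cultures remaining:", len({tuple(c) for c in culture}))

    Replacing the random reconnection with preferential attachment, or with a choice among neighbors-of-neighbors, yields the kinds of degree distributions and clustering behavior discussed in the abstract.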

  12. Interaction of Ionic Liquids with Lipid Biomembrane: Implication from Supramolecular Assembly to Cytotoxicity

    NASA Astrophysics Data System (ADS)

    Jing, Benxin; Lan, Nan; Zhu, Y. Elaine

    2013-03-01

    An explosion in research activity using ionic liquids (ILs) as new "green" chemicals in several chemical and biomedical processes has resulted in an urgent need to understand their impact in terms of transport and toxicity towards aquatic organisms. Though a few experimental toxicology studies have reported that some ionic liquids become toxic with the increased hydrophobicity of the ILs while others do not, the molecular-level mechanism of IL toxicity remains poorly understood. In this talk, we will discuss our recent study of the interaction of ionic liquids with model cell membranes. We have found that ILs can induce morphological changes in lipid bilayers when a critical concentration is exceeded, leading to swelling and tube-like formation of the bilayers. The critical concentration shows a strong dependence on the length of the hydrocarbon tails and on hydrophobic counterions. By SAXS, Langmuir-Blodgett (LB) and fluorescence microscopy measurements, we have confirmed that tube-like lipid complexes result from the insertion of ILs with long hydrocarbon chains so as to minimize the hydrophobic interaction with the aqueous medium. This finding could give insight into the modification and adoption of ILs for the engineering of micro-organisms.

  13. Near-atomic structural model for bacterial DNA replication initiation complex and its functional insights.

    PubMed

    Shimizu, Masahiro; Noguchi, Yasunori; Sakiyama, Yukari; Kawakami, Hironori; Katayama, Tsutomu; Takada, Shoji

    2016-12-13

    Upon DNA replication initiation in Escherichia coli, the initiator protein DnaA forms higher-order complexes with the chromosomal origin oriC and a DNA-bending protein, IHF. Although the tertiary structures of DnaA and IHF have previously been elucidated, the dynamic structures of oriC-DnaA-IHF complexes remain unknown. Here, combining computer simulations with biochemical assays, we obtained models at almost-atomic resolution for the central part of the oriC-DnaA-IHF complex. This complex can be divided into three subcomplexes; the left and right subcomplexes include pentameric DnaA bound in a head-to-tail manner, and the middle subcomplex contains only a single DnaA. In the left and right subcomplexes, DnaA domain III, the AAA+ (ATPases associated with various cellular activities) domain, formed helices with specific structural differences in interdomain orientations, provoking a bend in the bound DNA. In the left subcomplex a continuous DnaA chain exists, including insertion of IHF into the DNA looping, consistent with the DNA unwinding function of the complex. The intervening spaces in those subcomplexes are crucial for DNA unwinding and the loading of DnaB helicases. Taken together, this model provides a reasonable near-atomic-level structural solution of the initiation complex, including the dynamic conformations and spatial arrangements of the DnaA subcomplexes.

  14. Modeling bed load transport and step-pool morphology with a reduced-complexity approach

    NASA Astrophysics Data System (ADS)

    Saletti, Matteo; Molnar, Peter; Hassan, Marwan A.; Burlando, Paolo

    2016-04-01

    Steep mountain channels are complex fluvial systems, where classical methods developed for lowland streams fail to capture the dynamics of sediment transport and bed morphology. Estimates of sediment transport based on average conditions carry more than an order of magnitude of uncertainty because of the wide grain-size distribution of the bed material, the small relative submergence of coarse grains, the episodic character of sediment supply, and the complex boundary conditions. Most notably, bed load transport is modulated by the structure of the bed, where grains are imbricated in steps and similar bedforms and are therefore much more stable than predicted. In this work we propose a new model based on a reduced-complexity (RC) approach focused on reproducing the step-pool morphology. In our 2-D cellular-automaton model, entrainment, transport and deposition of particles are handled via intuitive rules based on physical principles. A parsimonious set of parameters allows control of the behavior of the system, and the basic processes can be treated deterministically or stochastically. The probability of entrainment of grains (and, as a consequence, particle travel distances and resting times) is a function of flow conditions and bed topography. Sediment is fed at the upper boundary of the channel at a constant or variable rate. Our model yields realistic results in terms of longitudinal bed profiles and sediment transport trends. Phases of aggradation and degradation can be observed in the channel even under constant input, and the memory of the morphology can be quantified with long-range persistence indicators. Sediment yield at the channel outlet shows intermittency, as observed in natural streams. Steps are self-formed in the channel and their stability is tested against the model parameters. Our results show the potential of RC models as complementary tools to more sophisticated models. They provide a realistic description of

  15. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a widespread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy for tackling this problem is model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant-head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the
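
    The core POD recipe can be sketched in a few lines: assemble snapshots of the simulated state, take a singular value decomposition, keep the dominant left singular vectors, and project the system of equations onto that basis. The synthetic snapshot matrix and the linear stand-in operator below are illustrative assumptions; the non-linear cases discussed above need extensions such as POD-DEIM.

        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, n_snapshots = 2000, 60
        # toy snapshot matrix: each column is the full state (e.g., heads at all
        # nodes) at one time step; smooth fields have few dominant spatial modes
        S = (rng.normal(size=(n_nodes, 5)) @ rng.normal(size=(5, n_snapshots))
             + 0.01 * rng.normal(size=(n_nodes, n_snapshots)))

        U, sv, _ = np.linalg.svd(S, full_matrices=False)
        energy = np.cumsum(sv**2) / np.sum(sv**2)
        r = int(np.searchsorted(energy, 0.999)) + 1   # modes holding 99.9% energy
        P = U[:, :r]                                  # projection basis

        # Galerkin projection of a linear system A h = b onto the r-dim subspace
        A = np.diag(np.linspace(1.0, 2.0, n_nodes))   # stand-in for the full operator
        b = rng.normal(size=n_nodes)
        h_reduced = np.linalg.solve(P.T @ A @ P, P.T @ b)
        h_approx = P @ h_reduced                      # lift back to the full grid
        print("retained modes:", r, "of", n_snapshots)

    The reduced system is r-by-r instead of n_nodes-by-n_nodes, which is where the run-time savings for calibration and uncertainty analysis come from.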

  16. Parasitic light scattered by complex optical coatings: modelization and metrology

    NASA Astrophysics Data System (ADS)

    Zerrad, Myriam; Lequime, Michel; Liukaityte, Simona; Amra, Claude

    2017-12-01

    Optical components realized for space applications have to be mastered in terms of parasitic light. This paper presents the latest improvements made at the Institut Fresnel in predicting and measuring the scattering losses of optical components, with special care given to complex optical coatings. Agreement between the numerical models and metrology is now excellent. Some examples will be presented.

  17. Utilisation of three-dimensional printed heart models for operative planning of complex congenital heart defects.

    PubMed

    Olejník, Peter; Nosal, Matej; Havran, Tomas; Furdova, Adriana; Cizmar, Maros; Slabej, Michal; Thurzo, Andrej; Vitovic, Pavol; Klvac, Martin; Acel, Tibor; Masura, Jozef

    2017-01-01

    To evaluate the accuracy of the three-dimensional (3D) printing of cardiovascular structures, and to explore whether the use of 3D printed heart replicas can improve surgical and catheter interventional planning in patients with complex congenital heart defects. Between December 2014 and November 2015 we fabricated eight cardiovascular models based on computed tomography data from patients with complex spatial anatomical relationships of cardiovascular structures. A Bland-Altman analysis was used to assess the accuracy of 3D printing by comparing dimension measurements at analogous anatomical locations between the printed models and digital imagery data, as well as between printed models and in vivo surgical findings. The contribution of 3D printed heart models to improved perioperative planning was evaluated in the four most representative patients. The Bland-Altman analysis confirmed the high accuracy of 3D cardiovascular printing. Each printed model offered an improved spatial anatomical orientation of cardiovascular structures. Current 3D printers can produce authentic copies of patients' cardiovascular systems from computed tomography data. The use of 3D printed models can facilitate surgical or catheter interventional procedures in patients with complex congenital heart defects through better preoperative planning and intraoperative orientation.
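
    For reference, Bland-Altman agreement reduces to the mean difference (bias) between paired measurements and the 95% limits of agreement. A minimal sketch (Python/NumPy) with invented paired distances, not data from this study:

        import numpy as np

        def bland_altman(a, b):
            """Bias and 95% limits of agreement between two measurement sets."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        printed = [22.1, 18.4, 30.2, 12.7, 25.5]   # printed-model dimensions (mm)
        imaging = [21.8, 18.9, 29.8, 13.0, 25.1]   # imaging-data dimensions (mm)
        bias, loa = bland_altman(printed, imaging)
        print(f"bias = {bias:.2f} mm, limits of agreement = {loa[0]:.2f} to {loa[1]:.2f} mm")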

  18. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has paid attention to prediction for analog circuits. The few existing methods extract and calculate features with little connection to circuit analysis, so the fault indicator (FI) calculation often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Motivated by the fact that faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex-field modeling. Then, using an established parameter-scanning model related to the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in the model, in order to obtain a more reasonable FI feature set via calculation. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components of analog circuits. Finally, it uses a particle filter (PF) to update the parameters of the model and predicts the remaining useful performance (RUP) of single components of analog circuits. Since the calculation of the FI feature set is more reasonable, the accuracy of prediction is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853

  19. Gas Chromatography Data Classification Based on Complex Coefficients of an Autoregressive Model

    DOE PAGES

    Zhao, Weixiang; Morgan, Joshua T.; Davis, Cristina E.

    2008-01-01

    This paper introduces autoregressive (AR) modeling as a novel method to classify outputs from gas chromatography (GC). The inverse Fourier transform was applied to the original sensor data, and an AR model was then fitted to the transformed data to generate complex AR coefficients. This series of coefficients effectively contains a compressed version of all of the information in the original GC signal. We applied this method to chromatograms resulting from proliferating bacteria species grown in culture. Three types of neural networks were used to classify the AR coefficients: a backward-propagating neural network (BPNN), a radial basis function-principal component analysis (RBF-PCA) approach, and a radial basis function-partial least squares regression (RBF-PLSR) approach. This exploratory study demonstrates the feasibility of using complex root coefficient patterns to distinguish various classes of experimental data, such as those from the different bacteria species. This recognition approach also proved to be robust and potentially useful for freeing us from time alignment of GC signals.
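
    The feature-extraction pipeline (inverse Fourier transform, then a least-squares AR fit whose complex coefficients become the compressed feature vector) can be sketched as follows (Python/NumPy). The AR order and the synthetic two-peak chromatogram are invented stand-ins:

        import numpy as np

        def ar_complex_features(signal, order=8):
            """Inverse-FFT a chromatogram, then fit an AR model by least squares;
            the complex AR coefficients serve as a compressed feature vector."""
            z = np.fft.ifft(np.asarray(signal, float))  # complex time-domain series
            # Design matrix of lagged values: z[t] ~ sum_k a_k * z[t-k]
            X = np.column_stack([z[order - k - 1: len(z) - k - 1] for k in range(order)])
            y = z[order:]
            coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
            return coeffs                               # length-`order` complex vector

        t = np.linspace(0.0, 10.0, 512)                 # invented two-peak chromatogram
        gc = np.exp(-(t - 3.0)**2 / 0.05) + 0.6 * np.exp(-(t - 7.0)**2 / 0.1)
        features = ar_complex_features(gc)

    In the study these coefficient vectors were then fed to neural-network classifiers; any standard classifier could stand in for that step.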

  20. Interaction of Viscotoxins A3 and B with Membrane Model Systems: Implications to Their Mechanism of Action

    PubMed Central

    Giudici, Marcela; Pascual, Roberto; de la Canal, Laura; Pfüller, Karola; Pfüller, Uwe; Villalaín, José

    2003-01-01

    Viscotoxins are small proteins that are thought to interact with biomembranes, displaying different toxic activities against a varied number of cell types; viscotoxin A3 (VtA3) is the most cytotoxic whereas viscotoxin B (VtB) is the least potent. Using infrared and fluorescence spectroscopies, we have studied the interaction of VtA3 and VtB, in both their native and reduced forms, with model membranes containing negatively charged phospholipids. Both VtA3 and VtB present a high conformational stability and a similar conformation in solution and when bound to membranes. In solution, the infrared spectra of the reduced proteins show an increase in bandwidth compared to the nonreduced ones, indicating greater flexibility. VtA3 and VtB bind with high affinity to membranes containing negatively charged phospholipids and are motionally restricted, their binding being dependent on phospholipid composition. Whereas the nonreduced proteins maintain their structure when bound to membranes, the reduced ones aggregate. Furthermore, leakage experiments show that the native proteins were capable of disrupting membranes whereas the reduced proteins were not. The effect of VtA3 and VtB on membranes of different phospholipid composition is diverse, affecting the cooperativity and fluidity of the membranes. Viscotoxins interact with membranes in a complex way, most likely organizing themselves at the surface and inducing the appearance of defects that lead to the destabilization and disruption of the membrane bilayer. PMID:12885644

  1. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models to such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing not only the underlying relationship but also random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of the new method is assessed by comparing it to traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machines, artificial neural networks and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which achieve accuracies similar to those of the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that deliver accurate predictions that are only valid for the data used, and that are too complex to support inferences about the underlying process.
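
    One plausible reading of the core idea (fit models of increasing complexity to artificial spectra that carry no true signal, and treat any apparent skill as pure overfitting) can be sketched as follows. This uses Python with scikit-learn; the PLS regression, data sizes and component counts are illustrative assumptions, not the NOIS implementation:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n, p = 60, 300                        # samples, spectral bands
        y = rng.normal(size=n)                # response (stand-in for a field variable)

        for k in (2, 5, 10, 20):
            X_art = rng.normal(size=(n, p))   # artificial spectra: no true signal
            model = PLSRegression(n_components=k).fit(X_art, y)
            r2 = model.score(X_art, y)        # training fit on noise = pure overfitting
            print(f"{k:2d} components: apparent R^2 on noise = {r2:.2f}")

    The apparent R^2 on pure noise rises with the number of components, giving a complexity-dependent overfitting index against which candidate models can be judged.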

  2. My Corporis Fabrica: an ontology-based tool for reasoning and querying on complex anatomical models

    PubMed Central

    2014-01-01

    Background Multiple models of anatomy have been developed independently and for different purposes. In particular, 3D graphical models are especially useful for visualizing the different organs composing the human body, while ontologies such as FMA (Foundational Model of Anatomy) are symbolic models that provide a unified formal description of anatomy. Despite its comprehensive content concerning anatomical structures, the lack of formal descriptions of anatomical functions in FMA limits its usage in many applications. In addition, the absence of a connection between 3D models and anatomical ontologies makes it difficult and time-consuming to set up and access the anatomical content of complex 3D objects. Results First, we provide a new ontology of anatomy called My Corporis Fabrica (MyCF), which conforms to FMA but extends it by making explicit how anatomical structures are composed, how they contribute to functions, and also how they can be related to complex 3D objects. Second, we have equipped MyCF with automatic reasoning capabilities that enable model checking and the answering of complex queries. We illustrate the added value of such a declarative approach for interactive simulation and visualization as well as for teaching applications. Conclusions The novel vision of ontologies that we have developed in this paper enables a declarative assembly of different models to obtain composed models guaranteed to be anatomically valid while capturing the complexity of human anatomy. The main interest of this approach is its declarativity, which makes it possible for domain experts to enrich the knowledge base at any moment through simple editors without having to change the algorithmic machinery. This gives the MyCF software environment the flexibility to process and add semantics on purpose for various applications that incorporate not only symbolic information but also 3D geometric models representing anatomical entities as well as other symbolic information like the

  3. A novel multilayer model for missing link prediction and future link forecasting in dynamic complex networks

    NASA Astrophysics Data System (ADS)

    Yasami, Yasser; Safaei, Farshad

    2018-02-01

    Traditional complex network theory is particularly focused on network models in which all network constituents are dealt with equivalently, while failing to consider the supplementary information related to the dynamic properties of the network interactions. This is a main constraint leading to incorrect descriptions of some real-world phenomena or to incompletely capturing the details of certain real-life problems. To cope with the problem, this paper addresses the multilayer aspects of dynamic complex networks by analyzing the properties of intrinsically multilayered co-authorship networks, DBLP and Astro Physics, and presenting a novel multilayer model of dynamic complex networks. The model examines the evolution of layers (the layer birth/death process and lifetime) throughout the network evolution. In particular, this paper models the evolution of each node's membership in different layers by an Infinite Factorial Hidden Markov Model considering feature cascade, and thereby formulates the link generation process for intra-layer and inter-layer links. Although adjacency matrices are useful for describing traditional single-layer networks, such a representation is not sufficient to describe and analyze multilayer dynamic networks. This paper therefore extends a generalized mathematical infrastructure to address the problems raised by multilayer complex networks. The model inference is performed using Markov Chain Monte Carlo sampling strategies, given synthetic and real complex network data. Experimental results indicate a tremendous improvement in the performance of the proposed multilayer model in terms of sensitivity, specificity, positive and negative predictive values, positive and negative likelihood ratios, F1-score, Matthews correlation coefficient, and accuracy for two important applications of missing link prediction and future link forecasting. The experimental results also indicate the strong predictive power of the proposed model for the application of

  4. Developing predictive systems models to address complexity and relevance for ecological risk assessment.

    PubMed

    Forbes, Valery E; Calow, Peter

    2013-07-01

    Ecological risk assessments (ERAs) are not used as well as they could be in risk management. Part of the problem is that they often lack ecological relevance; that is, they fail to grasp necessary ecological complexities. Adding realism and complexity can be difficult and costly. We argue that predictive systems models (PSMs) can provide a way of capturing complexity and ecological relevance cost-effectively. However, addressing complexity and ecological relevance is only part of the problem. Ecological risk assessments often fail to meet the needs of risk managers by not providing assessments that relate to protection goals and by expressing risk as ratios that cannot be weighed against the costs of interventions. Here too, PSMs can be designed to provide outputs in terms of value-relevant effects that are modulated against exposure and that can provide a better basis for decision making than arbitrary ratios or threshold values. Recent developments in such modeling, and in its potential for implementation by risk assessors and risk managers, are beginning to demonstrate how PSMs can be practically applied in risk assessment and the advantages that doing so could have. Copyright © 2013 SETAC.

  5. Model of a ternary complex between activated factor VII, tissue factor and factor IX.

    PubMed

    Chen, Shu-wen W; Pellequer, Jean-Luc; Schved, Jean-François; Giansily-Blaizot, Muriel

    2002-07-01

    Upon binding to tissue factor (TF), activated factor VII (FVIIa) triggers coagulation by activating the vitamin K-dependent zymogens factor IX (FIX) and factor X (FX). To understand the recognition mechanisms in the initiation step of the coagulation cascade, we present a three-dimensional model of the ternary complex FVIIa:TF:FIX. This model was built using a full-space search algorithm in combination with computational graphics. With the known crystallographic complex FVIIa:TF kept fixed, the FIX docking was performed first with the FIX Gla-EGF1 domains, followed by the FIX protease/EGF2 domains. Because the FIXa crystal structure lacks electron density for the Gla domain, we constructed a chimeric FIX molecule that contains the Gla-EGF1 domains of FVIIa and the EGF2-protease domains of FIXa. The FVIIa:TF:FIX complex has been extensively challenged against experimental data including site-directed mutagenesis, inhibitory peptide data, haemophilia B database mutations, inhibitor antibodies and a novel exosite-binding inhibitor peptide. This FVIIa:TF:FIX model provides a powerful tool for studying the regulation of FVIIa production and presents new avenues for developing therapeutic inhibitory compounds of the FVIIa:TF:substrate complex.

  6. A Corner-Point-Grid-Based Voxelization Method for Complex Geological Structure Model with Folds

    NASA Astrophysics Data System (ADS)

    Chen, Qiyu; Mariethoz, Gregoire; Liu, Gang

    2017-04-01

    3D voxelization is the foundation of geological property modeling, and is also an effective approach to realizing the 3D visualization of heterogeneous attributes in geological structures. The corner-point grid is a representative data model among voxel models, and is a structured grid type that is widely applied at present. When subdividing a complex geological structure model with folds, its structural morphology and bedding features should be fully considered so that the generated voxels keep the original morphology. On that basis, the voxels can depict the detailed bedding features and the spatial heterogeneity of the internal attributes. To overcome the shortcomings of existing techniques, this work puts forward a corner-point-grid-based voxelization method for complex geological structure models with folds. We have realized the fast conversion from the 3D geological structure model to a fine voxel model according to the rule of the isocline in Ramsay's fold classification. In addition, the voxel model conforms to the spatial features of folds, pinch-outs and other complex geological structures, and the voxels of the laminas inside a fold accord with the results of geological sedimentation and tectonic movement. This provides a carrier and model foundation for subsequent attribute assignment as well as for quantitative analysis and evaluation based on the spatial voxels. Finally, we use examples, and a comparison of these examples with Ramsay's description of isoclines, to discuss the effectiveness and advantages of the proposed method when voxelizing 3D geological structure models with folds based on corner-point grids.

  7. Modeling the Complex Photochemistry of Biomass Burning Plumes in Plume-Scale, Regional, and Global Air Quality Models

    NASA Astrophysics Data System (ADS)

    Alvarado, M. J.; Lonsdale, C. R.; Yokelson, R. J.; Travis, K.; Fischer, E. V.; Lin, J. C.

    2014-12-01

    Forecasting the impacts of biomass burning (BB) plumes on air quality is difficult due to the complex photochemistry that takes place in concentrated young BB plumes. The spatial grid of global and regional scale Eulerian models is generally too coarse to resolve BB photochemistry, which can lead to errors in predicting the formation of secondary organic aerosol (SOA) and O3, as well as the partitioning of NOy species. AER's Aerosol Simulation Program (ASP v2.1) can be used within plume-scale Lagrangian models to simulate this complex photochemistry. We will present results of validation studies of the ASP model against aircraft observations of young BB smoke plumes. We will also present initial results from the coupling of ASP v2.1 into the Lagrangian particle dispersion model STILT-Chem in order to better examine the interactions between BB plume chemistry and dispersion. In addition, we have used ASP to develop a sub-grid-scale parameterization of the near-source chemistry of BB plumes for use in regional and global air quality models. The parameterization takes inputs from the host model, such as solar zenith angle, temperature, and fire fuel type, and calculates enhancement ratios of O3, NOx, PAN, aerosol nitrate, and other NOy species, as well as organic aerosol (OA). We will present results from the ASP-based BB parameterization as well as its implementation into the global atmospheric composition model GEOS-Chem for the SEAC4RS campaign.

  8. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
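
    The bootstrap form of SISR is compact enough to sketch. The toy filter below (Python/NumPy) tracks a simulated population from Poisson counts with known vital rates; the kernel smoothing of parameter particles used in the study to combat depletion is omitted for brevity, and all values are invented:

        import numpy as np

        rng = np.random.default_rng(7)

        # Simulated truth: stochastic exponential growth observed as Poisson counts
        T, r_true, sigma = 25, 0.05, 0.1
        N = [100.0]
        for _ in range(T - 1):
            N.append(N[-1] * np.exp(r_true + sigma * rng.standard_normal()))
        counts = rng.poisson(N)

        # Bootstrap SISR: propagate, weight by the likelihood, resample
        n_part = 5000
        parts = rng.uniform(50.0, 200.0, n_part)
        estimates = []
        for y in counts:
            parts = parts * np.exp(r_true + sigma * rng.standard_normal(n_part))
            logw = y * np.log(parts) - parts        # Poisson log-likelihood (y! dropped)
            w = np.exp(logw - logw.max())
            w /= w.sum()
            estimates.append(np.sum(w * parts))     # weighted-mean population size
            parts = parts[rng.choice(n_part, n_part, p=w)]  # multinomial resampling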

  9. Simulating Mass Removal of Groundwater Contaminant Plumes with Complex and Simple Models

    NASA Astrophysics Data System (ADS)

    Lopez, J.; Guo, Z.; Fogg, G. E.

    2016-12-01

    Chlorinated solvents used in industrial, commercial, and other applications continue to pose significant threats to human health through contamination of groundwater resources. A recent National Research Council report concludes that it is unlikely that remediation of these complex sites will be achieved in a time frame of 50-100 years under current methods and standards (NRC, 2013). Pump and treat has been a common strategy to contain and treat groundwater contamination at many sites. At these sites, extensive retention of contaminant mass in low-permeability materials (tailing) has been observed after years or decades of pumping. Although transport models can be built that contain enough of the complex, 3D heterogeneity to simulate the tailing and long cleanup times, this is seldom done because of the large data and computational burdens. Hence, useful, reliable models for simulating various cleanup strategies are rare. The purpose of this study is to explore other potential ways to simulate the mass-removal processes in less time and at lower cost, but still produce robust results by capturing the effects of heterogeneity and long-term retention of mass. A site containing a trichloroethylene groundwater plume was selected as the study area. The plume is located within alluvial sediments in the Tucson Basin. A fully heterogeneous domain is generated first, and MODFLOW is used to simulate the flow field. Contaminant transport is simulated using both MT3D and RWHet for the fully heterogeneous model. Other approaches, including dual-domain mass transfer and heterogeneous chemical reactions, are then manipulated to simulate the mass removal in a less heterogeneous, or homogeneous, domain, and the results are compared to those obtained from the complex models. The capability of these simpler models to simulate remediation processes, and especially to capture the late-time tailing, is examined.

  10. An overview of structurally complex network-based modeling of public opinion in the “We the Media” era

    NASA Astrophysics Data System (ADS)

    Wang, Guanghui; Wang, Yufei; Liu, Yijun; Chi, Yuxue

    2018-05-01

    As the transmission of public opinion on the Internet in the “We the Media” era tends to be supraterritorial, concealed and complex, the traditional “point-to-surface” transmission of information has been transformed into “point-to-point” reciprocal transmission. A foundation for studies of the evolution of public opinion and its transmission on the Internet in the “We the Media” era can be laid by converting the massive amounts of fragmented information on public opinion that exists on “We the Media” platforms into structurally complex networks of information. This paper describes studies of structurally complex network-based modeling of public opinion on the Internet in the “We the Media” era from the perspective of the development and evolution of complex networks. The progress that has been made in research projects relevant to the structural modeling of public opinion on the Internet is comprehensively summarized. The review considers aspects such as regular grid-based modeling of the rules that describe the propagation of public opinion on the Internet in the “We the Media” era, social network modeling, dynamic network modeling, and supernetwork modeling. Moreover, an outlook for future studies that address complex network-based modeling of public opinion on the Internet is put forward as a summary from the perspective of modeling conducted using the techniques mentioned above.

  11. A simple and complete model for wind turbine wakes over complex terrain

    NASA Astrophysics Data System (ADS)

    Rommelfanger, Nick; Rajborirug, Mai; Luzzatto-Fegiz, Paolo

    2017-11-01

    Simple models for turbine wakes have been used extensively in the wind energy community, both as independent tools and to complement more refined and computationally intensive techniques. These models typically prescribe empirical relations for how the wake radius grows with downstream distance x and obtain the wake velocity at each x through the application of either mass conservation, or of both mass and momentum conservation (e.g. Katić et al. 1986; Frandsen et al. 2006; Bastankhah & Porté-Agel 2014). Since these models assume a global behavior of the wake (for example, linear spreading with x), they cannot respond to local changes in the background flow, as may occur over complex terrain. Instead of assuming a global wake shape, we develop a model that relies on a local assumption for the growth of the turbulent interface. To this end, we introduce to wind turbine wakes the use of the entrainment hypothesis, which has been used extensively in other areas of geophysical fluid dynamics. We obtain two coupled ordinary differential equations for mass and momentum conservation, which can be readily solved with a prescribed background pressure gradient. Our model is in good agreement with published data for the development of wakes over complex terrain.
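
    A minimal sketch of how such an entrainment-based model closes (Python/SciPy): for a top-hat axisymmetric wake in a uniform background with zero pressure gradient, the momentum-deficit flux is conserved while the entrainment hypothesis grows the volume flux. The specific closure, the entrainment coefficient and the initial conditions below are illustrative assumptions, not the authors' formulation:

        import numpy as np
        from scipy.integrate import solve_ivp

        U, alpha = 8.0, 0.08        # background wind speed (m/s), entrainment coeff.
        r0, u0 = 50.0, 4.0          # initial wake radius (m) and wake velocity (m/s)
        Q0 = r0**2 * u0             # volume flux (per pi)
        D = Q0 * (U - u0)           # momentum-deficit flux, conserved when dp/dx = 0

        def dQdx(x, Q):
            u = U - D / Q[0]        # wake velocity from momentum conservation
            r = np.sqrt(Q[0] / u)   # wake radius from the volume flux
            return [2.0 * alpha * r * (U - u)]   # entrainment hypothesis

        sol = solve_ivp(dQdx, (0.0, 2000.0), [Q0], dense_output=True)
        x = np.linspace(0.0, 2000.0, 50)
        u_wake = U - D / sol.sol(x)[0]           # recovered wake velocity vs. x

    Because the closure is local, a spatially varying background or a prescribed pressure-gradient term enters the same pair of conservation equations directly, which is the flexibility over complex terrain that the abstract emphasizes.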

  12. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    PubMed Central

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  13. CALIBRATION OF SUBSURFACE BATCH AND REACTIVE-TRANSPORT MODELS INVOLVING COMPLEX BIOGEOCHEMICAL PROCESSES

    EPA Science Inventory

    In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the performance...

  14. Methodology and Results of Mathematical Modelling of Complex Technological Processes

    NASA Astrophysics Data System (ADS)

    Mokrova, Nataliya V.

    2018-03-01

    The methodology of system analysis allows us to draw a mathematical model of a complex technological process. A mathematical description of the plasma-chemical process is proposed. The importance of the quenching rate and of the initial temperature decrease time for producing the maximum amount of the target product is confirmed. The results of numerical integration of the system of differential equations can be used to describe reagent concentrations, plasma jet rate and temperature in order to achieve the optimal hardening mode. Such models are applicable both for solving control problems and for predicting future states of sophisticated technological systems.

  15. Star formation in a hierarchical model for Cloud Complexes

    NASA Astrophysics Data System (ADS)

    Sanchez, N.; Parravano, A.

    The effects of the external and initial conditions on star formation processes in Molecular Cloud Complexes are examined in the context of a schematic model. The model considers a hierarchical system with five predefined phases: warm gas, neutral gas, low-density molecular gas, high-density molecular gas and protostars. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange depends on the radiation density at the interphase, which is produced by the radiation coming from the stars that form at the end of the hierarchical structure and by the external radiation field. The system is chaotic in the sense that its temporal evolution is very sensitive to small changes in the initial or external conditions. However, global features such as the star formation efficiency and the Initial Mass Function are less affected by those variations.

  16. Role of cholesterol on the transfection barriers of cationic lipid/DNA complexes

    NASA Astrophysics Data System (ADS)

    Pozzi, Daniela; Cardarelli, Francesco; Salomone, Fabrizio; Marchini, Cristina; Amenitsch, Heinz; Barbera, Giorgia La; Caracciolo, Giulio

    2014-08-01

    Most lipid formulations need cholesterol for efficient transfection, but the precise reason remains unclear. Here, we have investigated the effect of cholesterol on the transfection efficiency (TE) of cationic liposomes made of 1,2-dioleoyl-3-trimethylammonium-propane and dioleoylphosphocholine in Chinese hamster ovary cells. The transfection mechanisms of cholesterol-containing lipoplexes have been investigated by TE, synchrotron small-angle X-ray scattering, and laser scanning confocal microscopy experiments. We show that cholesterol-containing lipoplexes enter the cells through different endocytosis pathways. Formulations with high cholesterol content efficiently escape from endosomes and exhibit a lamellar-nonlamellar phase transition in mixtures with biomembrane-mimicking lipid formulations. This might explain both the DNA release ability and the high transfection efficiency. These studies highlight enrichment in cholesterol as a decisive factor for transfection and will contribute to the rational design of lipid nanocarriers with superior TE.

  17. Computation of the effective mechanical response of biological networks accounting for large configuration changes.

    PubMed

    El Nady, K; Ganghoffer, J F

    2016-05-01

    The asymptotic homogenization technique is employed to derive the effective elastic response of biological membranes viewed as repetitive beam networks. A systematic methodology is thereby established, allowing the prediction of the overall mechanical properties of biological membranes in the nonlinear regime and reflecting the influence of the geometrical and mechanical micro-parameters of the network structure on the overall response of the equivalent continuum. Biomembrane networks are classified based on nodal connectivity, and we analyze in this work networks of connectivity 3, 4 and 6, which are representative of most biological networks. The individual filaments of the network are described as undulated beams prone to entropic elasticity, with tensile moduli determined from their persistence length. The effective micropolar continuum evaluated as a substitute for the biological network has a kinematics reflecting the discrete network deformation modes, involving a nodal displacement and a microrotation. The statics involves the classical Cauchy stress and internal moments encapsulated into couple stresses, which develop internal work in duality to microcurvatures reflecting local network undulations. The ratio of the characteristic bending length of the effective micropolar continuum to the unit cell size determines the relevant choice of the equivalent medium. In most cases, the Cauchy continuum is sufficient to model biomembranes. The peptidoglycan network may exhibit a re-entrant hexagonal configuration due to thermal or pressure fluctuations, for which micropolar effects become important. The homogenized responses are in good agreement with FE simulations performed over the whole network. The predictive nature of the employed homogenization technique allows the identification of a strain energy density for a hyperelastic model, for the purpose of performing structural calculations of the shape evolutions of biomembranes.

  18. Autonomous Modeling, Statistical Complexity and Semi-annealed Treatment of Boolean Networks

    NASA Astrophysics Data System (ADS)

    Gong, Xinwei

    This dissertation presents three studies on Boolean networks. Boolean networks are a class of mathematical systems consisting of interacting elements with binary state variables. Each element is a node with a Boolean logic gate, and the presence of interactions between any two nodes is represented by directed links. Boolean networks that implement the logic structures of real systems are studied as coarse-grained models of the real systems. Large random Boolean networks are studied with mean field approximations and used to provide a baseline of possible behaviors of large real systems. This dissertation presents one study of the former type, concerning the stable oscillation of a yeast cell-cycle oscillator, and two studies of the latter type, respectively concerning the statistical complexity of large random Boolean networks and an extension of traditional mean field techniques that accounts for the presence of short loops. In the cell-cycle oscillator study, a novel autonomous update scheme is introduced to study the stability of oscillations in small networks. A motif that corrects pulse-growing perturbations and a motif that grows pulses are identified. A combination of the two motifs is capable of sustaining stable oscillations. Examining a Boolean model of the yeast cell-cycle oscillator using an autonomous update scheme yields evidence that it is endowed with such a combination. Random Boolean networks are classified as ordered, critical or disordered based on their response to small perturbations. In the second study, random Boolean networks are taken as prototypical cases for the evaluation of two measures of complexity based on a criterion for optimal statistical prediction. One measure, defined for homogeneous systems, does not distinguish between the static spatial inhomogeneity in the ordered phase and the dynamical inhomogeneity in the disordered phase. A modification in which complexities of individual nodes are calculated yields vanishing

  19. Food-web complexity, meta-community complexity and community stability.

    PubMed

    Mougi, A; Kondoh, M

    2016-04-13

    What allows interacting, diverse species to coexist in nature has been a central question in ecology, ever since the theoretical prediction that a complex community should be inherently unstable. Although the role of spatiality in species coexistence has been recognized, its application to more complex systems has been less explored. Here, using a meta-community model of food web, we show that meta-community complexity, measured by the number of local food webs and their connectedness, elicits a self-regulating, negative-feedback mechanism and thus stabilizes food-web dynamics. Moreover, the presence of meta-community complexity can give rise to a positive food-web complexity-stability effect. Spatiality may play a more important role in stabilizing dynamics of complex, real food webs than expected from ecological theory based on the models of simpler food webs.
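
    For context, the classical single-community instability result that motivates such work, May's random-matrix criterion, can be reproduced in a few lines (Python/NumPy). This is the textbook calculation, not the authors' meta-community food-web model:

        import numpy as np

        rng = np.random.default_rng(3)

        def leading_eigenvalue(S=100, C=0.2, sigma=0.5):
            """Random community Jacobian: S species, connectance C, interaction
            strength sigma, self-regulation -1 on the diagonal."""
            J = rng.normal(0.0, sigma, (S, S)) * (rng.random((S, S)) < C)
            np.fill_diagonal(J, -1.0)
            return np.linalg.eigvals(J).real.max()

        # May's criterion: instability expected once sigma * sqrt(S*C) > 1
        for sigma in (0.05, 0.1, 0.2, 0.4):
            lam = np.mean([leading_eigenvalue(sigma=sigma) for _ in range(10)])
            print(f"sigma = {sigma:.2f}: mean leading eigenvalue {lam:+.2f}")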

  20. Mesh-based Monte Carlo code for fluorescence modeling in complex tissues with irregular boundaries

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Chen, Leng-Chun; Lloyd, William; Kuo, Shiuhyang; Marcelo, Cynthia; Feinberg, Stephen E.; Mycek, Mary-Ann

    2011-07-01

    There is a growing need for the development of computational models that can account for complex tissue morphology in simulations of photon propagation. We describe the development and validation of a user-friendly, MATLAB-based Monte Carlo code that uses analytically-defined surface meshes to model heterogeneous tissue geometry. The code can use information from non-linear optical microscopy images to discriminate the fluorescence photons (from endogenous or exogenous fluorophores) detected from different layers of complex turbid media. We present a specific application of modeling a layered human tissue-engineered construct (Ex Vivo Produced Oral Mucosa Equivalent, EVPOME) designed for use in repair of oral tissue following surgery. Second-harmonic generation microscopic imaging of an EVPOME construct (oral keratinocytes atop a scaffold coated with human type IV collagen) was employed to determine an approximate analytical expression for the complex shape of the interface between the two layers. This expression can then be inserted into the code to correct the simulated fluorescence for the effect of the irregular tissue geometry.
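
    The layer-discrimination step, classifying simulated fluorescence events against an analytically defined interface, can be caricatured as follows (Python/NumPy). This is only the geometric bookkeeping, not the Monte Carlo transport itself, and the sinusoidal interface and sampling distributions are invented stand-ins:

        import numpy as np

        rng = np.random.default_rng(5)

        def interface_depth(x, y):
            # Invented analytic surface standing in for the expression fitted
            # to second-harmonic-generation images of the layer boundary (mm)
            return 0.30 + 0.05 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

        n = 100_000
        x, y = rng.random(n), rng.random(n)    # lateral emission positions (mm)
        z = rng.exponential(0.25, n)           # emission depths (mm), invented scale
        top_layer = z < interface_depth(x, y)  # classify each event by layer
        print(f"fraction of fluorescence from the top layer: {top_layer.mean():.2f}")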

  1. A Qualitative Model of Human Interaction with Complex Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher-level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in concert to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  4. Sparkle/AM1 Parameters for the Modeling of Samarium(III) and Promethium(III) Complexes.

    PubMed

    Freire, Ricardo O; da Costa, Nivan B; Rocha, Gerd B; Simas, Alfredo M

    2006-01-01

    The Sparkle/AM1 model is extended to samarium(III) and promethium(III) complexes. A set of 15 structures of high crystallographic quality (R factor < 0.05), with ligands chosen to be representative of all samarium complexes in the Cambridge Crystallographic Database 2004, CSD, with nitrogen or oxygen directly bonded to the samarium ion, was used as a training set. In the validation procedure, we used a set of 42 other complexes, also of high crystallographic quality. The results show that this parametrization for the Sm(III) ion is similar in accuracy to the previous parametrizations for Eu(III), Gd(III), and Tb(III). On the other hand, promethium is an artificial radioactive element with no stable isotope, and so far there are no promethium complex crystallographic structures in the CSD. To circumvent this, we confirmed our previous result that RHF/STO-3G/ECP, with the MWB effective core potential (ECP), appears to be the most efficient ab initio model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. We thus generated a set of 15 RHF/STO-3G/ECP promethium complex structures with ligands chosen to be representative of complexes available in the CSD for all other trivalent lanthanide cations, with nitrogen or oxygen directly bonded to the lanthanide ion. For the 42 samarium(III) complexes and 15 promethium(III) complexes considered, the Sparkle/AM1 unsigned mean error, for all interatomic distances between the Ln(III) ion and the ligand atoms of the first coordination sphere, is 0.07 and 0.06 Å, respectively, a level of accuracy comparable to present-day ab initio/ECP geometries, while being hundreds of times faster.

  5. Improving a complex finite-difference ground water flow model through the use of an analytic element screening model

    USGS Publications Warehouse

    Hunt, R.J.; Anderson, M.P.; Kelson, V.A.

    1998-01-01

    This paper demonstrates that analytic element models have potential as powerful screening tools that can facilitate or improve calibration of more complicated finite-difference and finite-element models. We demonstrate how a two-dimensional analytic element model was used to identify errors in a complex three-dimensional finite-difference model caused by incorrect specification of boundary conditions. An improved finite-difference model was developed using boundary conditions developed from a far-field analytic element model. Calibration of a revised finite-difference model was achieved using fewer zones of hydraulic conductivity and lake bed conductance than the original finite-difference model. Calibration statistics were also improved in that simulated base-flows were much closer to measured values. The improved calibration is due mainly to improved specification of the boundary conditions made possible by first solving the far-field problem with an analytic element model.

  6. The semiotics of control and modeling relations in complex systems.

    PubMed

    Joslyn, C

    2001-01-01

    We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively. Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.

  7. Complex Wall Boundary Conditions for Modeling Combustion in Catalytic Channels

    NASA Astrophysics Data System (ADS)

    Zhu, Huayang; Jackson, Gregory

    2000-11-01

    Monolith catalytic reactors for exothermic oxidation are being used in automobile exhaust clean-up and ultra-low emissions combustion systems. The reactors present a unique coupling between mass, heat, and momentum transport in a channel flow configuration. The use of porous catalytic coatings along the channel wall presents a complex boundary condition when modeled with the two-dimensional channel flow. This current work presents a 2-D transient model for predicting the performance of catalytic combustion systems for methane oxidation on Pd catalysts. The model solves the 2-D compressible transport equations for momentum, species, and energy, which are solved with a porous washcoat model for the wall boundary conditions. A time-splitting algorithm is used to separate the stiff chemical reactions from the convective/diffusive equations for the channel flow. A detailed surface chemistry mechanism is incorporated for the catalytic wall model and is used to predict transient ignition and steady-state conversion of CH4-air flows in the catalytic reactor.
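
    The time-splitting strategy (advance the stiff chemistry with an implicit solver, then the transport with an explicit step) is easy to sketch in one dimension (Python/SciPy). The first-order consumption term, rates and grid below are invented stand-ins for the detailed surface mechanism:

        import numpy as np
        from scipy.integrate import solve_ivp

        nx, L, D = 50, 1.0, 1e-3        # cells, channel length (m), diffusivity (m^2/s)
        dx = L / nx
        c = np.ones(nx)                 # normalized fuel mass fraction
        k = 50.0                        # stiff consumption rate (1/s), invented
        dt, nsteps = 1e-3, 100

        def chemistry(t, y):
            return -k * y               # toy stand-in for the stiff surface chemistry

        for _ in range(nsteps):
            # Step 1: stiff chemistry, advanced implicitly (BDF) in each cell
            c = solve_ivp(chemistry, (0.0, dt), c, method="BDF").y[:, -1]
            # Step 2: diffusion, advanced with an explicit finite-difference step
            lap = np.zeros(nx)
            lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
            c = c + dt * D * lap

    Splitting lets the implicit solver take the stiffness while the transport step stays cheap and explicit, which is the rationale the abstract describes for separating the chemistry from the channel flow.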

  8. Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite

    NASA Astrophysics Data System (ADS)

    Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia

    2015-12-01

    Removal of noxious heavy metal ions (e.g., Pb(II)) by surface adsorption onto minerals (e.g., diatomite) is an important means of controlling aqueous pollution in the environment. It is therefore essential to understand the surface adsorptive behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto calcined diatomite as a function of pH (3.0-7.0) and ionic strength (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) is well described by a Freundlich isotherm. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using PEST 13.0 together with the PHREEQC 3.1.2 code, with good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite calculated by the PHREEQC 3.1.2 program indicates that impurity cations (e.g., Al3+, Fe3+) in the diatomite play a leading role in Pb(II) adsorption, and that the formation of surface complexes, together with additional electrostatic interactions, is the main adsorption mechanism of Pb(II) on diatomite under weakly acidic conditions.
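
    As a reminder of what the isotherm fit involves: the Freundlich model q = Kf * C**(1/n) is linear in log-log space, so Kf and n follow from a straight-line fit. A minimal sketch (Python/NumPy) with invented batch data, not measurements from this study:

        import numpy as np

        # Equilibrium concentration C (mmol/L) and uptake q (mmol/g), invented
        C = np.array([0.05, 0.1, 0.3, 0.8, 1.5, 3.0])
        q = np.array([0.021, 0.030, 0.055, 0.090, 0.125, 0.180])

        # log10(q) = log10(Kf) + (1/n) * log10(C)
        slope, intercept = np.polyfit(np.log10(C), np.log10(q), 1)
        Kf, n = 10.0**intercept, 1.0 / slope
        print(f"Kf = {Kf:.3f}, n = {n:.2f}")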

  9. 3D printing the pterygopalatine fossa: a negative space model of a complex structure.

    PubMed

    Bannon, Ross; Parihar, Shivani; Skarparis, Yiannis; Varsou, Ourania; Cezayirli, Enis

    2018-02-01

    The pterygopalatine fossa is one of the most complex anatomical regions to understand. It is poorly visualized in cadaveric dissection and most textbooks rely on schematic depictions. We describe our approach to creating a low-cost, 3D model of the pterygopalatine fossa, including its associated canals and foramina, using an affordable "desktop" 3D printer. We used open source software to create a volume render of the pterygopalatine fossa from axial slices of a head computerised tomography scan. These data were then exported to a 3D printer to produce an anatomically accurate model. The resulting 'negative space' model of the pterygopalatine fossa provides a useful and innovative aid for understanding the complex anatomical relationships of the pterygopalatine fossa. This model was designed primarily for medical students; however, it will also be of interest to postgraduates in ENT, ophthalmology, neurosurgery, and radiology. The technical process described may be replicated by other departments wishing to develop their own anatomical models whilst incurring minimal costs.

  10. Modelling radiation fluxes in simple and complex environments: basics of the RayMan model.

    PubMed

    Matzarakis, Andreas; Rutz, Frank; Mayer, Helmut

    2010-03-01

    Short- and long-wave radiation flux densities absorbed by people have a significant influence on their energy balance. The heat effect of the absorbed radiation flux densities is parameterised by the mean radiant temperature. This paper presents the physical basis of the RayMan model, which simulates the short- and long-wave radiation flux densities from the three-dimensional surroundings in simple and complex environments. RayMan has the character of a freely available radiation and human-bioclimate model. The aim of the RayMan model is to calculate radiation flux densities, sunshine duration, shadow spaces and thermo-physiologically relevant assessment indices using only a limited number of meteorological and other input data. A comparison between measured and simulated values for global radiation and mean radiant temperature shows that the simulated data closely resemble measured data.
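
    The final step of such a model, collapsing directional flux densities into a mean radiant temperature, follows the standard integral-radiation formula. A minimal sketch (Python): the six directional fluxes are invented, and the absorption coefficients and angular weights are the values commonly used for a standing person, which may differ in detail from RayMan's internals:

        SIGMA = 5.67e-8                    # Stefan-Boltzmann constant (W m^-2 K^-4)
        a_k, a_l, eps_p = 0.7, 0.97, 0.97  # short-/long-wave absorption, emissivity

        # Short-wave K_i and long-wave L_i flux densities (W/m^2) from the four
        # cardinal directions, then the upward-facing and downward-facing sensors
        K = [120.0, 90.0, 100.0, 80.0, 350.0, 60.0]
        L = [380.0, 375.0, 370.0, 372.0, 340.0, 410.0]
        W = [0.22, 0.22, 0.22, 0.22, 0.06, 0.06]   # angular weights, standing person

        S_str = sum(w * (a_k * k + a_l * l) for w, k, l in zip(W, K, L))
        t_mrt = (S_str / (eps_p * SIGMA)) ** 0.25 - 273.15
        print(f"mean radiant temperature: {t_mrt:.1f} deg C")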

  11. Technical Note: Approximate Bayesian parameterization of a complex tropical forest model

    NASA Astrophysics Data System (ADS)

    Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.

    2013-08-01

    successfully be applied to process-based models of high complexity. The methodology is particularly suited to heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models in ecology and evolution.

  12. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    PubMed Central

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
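
    A minimal illustration of one-at-a-time sensitivity analysis (Python/NumPy), applied here to a classical Ross-Macdonald-style R0 expression as a cheap stand-in for the detailed simulation models examined in the study; all parameter values are invented:

        import numpy as np

        def R0(p):
            """Classical vector-borne R0, standing in for the full simulation model."""
            return (p["m"] * p["a"]**2 * p["b"] * p["c"]
                    * np.exp(-p["mu"] * p["tau"]) / (p["r"] * p["mu"]))

        base = dict(m=10.0, a=0.3, b=0.5, c=0.5, mu=0.1, tau=10.0, r=0.07)

        # Elasticity: relative change in R0 per +1% change in each parameter
        for name in base:
            perturbed = dict(base, **{name: base[name] * 1.01})
            elasticity = (R0(perturbed) / R0(base) - 1.0) / 0.01
            print(f"{name:>3}: elasticity {elasticity:+.2f}")

    Parameters tied to mosquito survival (mu) and biting (a) show the largest elasticities, mirroring the abstract's conclusion about which biological processes dominate transmission.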

  13. Atmospheric dispersion modelling over complex terrain at small scale

    NASA Astrophysics Data System (ADS)

    Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.

    2014-03-01

    A previous study concerned with the qualitative modelling of neutrally stratified flow over an open-cut coal mine and the important surrounding topography at meso-scale (1:9000) revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes a necessary part of the coal mine topography with respect to its future expansion, together with the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality in the populated areas is discussed. The measured data will be used for comparison with another model taking into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be observed.

  14. The Silent Canyon caldera complex: a three-dimensional model based on drill-hole stratigraphy and gravity inversion

    USGS Publications Warehouse

    McKee, Edwin H.; Hildenbrand, Thomas G.; Anderson, Megan L.; Rowley, Peter D.; Sawyer, David A.

    1999-01-01

    The structural framework of Pahute Mesa, Nevada, is dominated by the Silent Canyon caldera complex, a buried, multiple collapse caldera complex. Using the boundary surface between low density Tertiary volcanogenic rocks and denser granitic and weakly metamorphosed sedimentary rocks (basement) as the outer fault surfaces for the modeled collapse caldera complex, it is postulated that the caldera complex collapsed on steeply dipping arcuate faults two, possibly three, times following eruption of at least two major ash-flow tuffs. The caldera and most of its eruptive products are now deeply buried below the surface of Pahute Mesa. Relatively low-density rocks in the caldera complex produce one of the largest gravity lows in the western conterminous United States. Gravity modeling defines a steep sided, cup-shaped depression as much as 6,000 meters (19,800 feet) deep that is surrounded and floored by denser rocks. The steeply dipping surface located between the low-density basin fill and the higher density external rocks is considered to be the surface of the ring faults of the multiple calderas. Extrapolation of this surface upward to the outer, or topographic rim, of the Silent Canyon caldera complex defines the upper part of the caldera collapse structure. Rock units within and outside the Silent Canyon caldera complex are combined into seven hydrostratigraphic units based on their predominant hydrologic characteristics. The caldera structures and other faults on Pahute Mesa are used with the seven hydrostratigraphic units to make a three-dimensional geologic model of Pahute Mesa using the "EarthVision" (Dynamic Graphics, Inc.) modeling computer program. This method allows graphic representation of the geometry of the rocks and produces computer generated cross sections, isopach maps, and three-dimensional oriented diagrams. These products have been created to aid in visualizing and modeling the ground-water flow system beneath Pahute Mesa.

  15. Sorting of amphiphile membrane components in curvature and composition gradients

    NASA Astrophysics Data System (ADS)

    Tian, Aiwei

    Phase and shape heterogeneities in biomembranes are of functional importance. However, it is difficult to elucidate the roles membrane heterogeneities play in maintaining cellular function due to the complexity of biomembranes. Therefore, investigations of phase behavior and composition/curvature coupling in lipid and polymer model membranes offer some advantages. In this thesis, phase properties in lipid and polymer giant vesicles were studied. Line tension at the fluid/fluid phase boundary of giant lipid unilamellar vesicles was determined directly by micropipette aspiration, and found to be composition-dependent. Dynamics of calcium-induced domains within polyanionic vesicles subject to chemical stimuli were investigated, which revealed the strength of molecular interaction and suggested applications in triggered delivery. In addition, curvature sorting of lipids and proteins was examined. Lipid membrane tethers were pulled from giant unilamellar vesicles using two micropipettes and a bead. Tether radius can be controlled and measured in this system. By examining fluorescence intensity of labeled molecules as a function of curvature, we found that DiI dyes (lipid analogues with spontaneous curvatures) had no curvature preference down to radii of 10 nm. Theoretical calculations predicted that the distribution of small lipids was dominated by entropy instead of bending energy. However, the protein cholera toxin subunit B was efficiently sorted away from high positive curvature due to its negative spontaneous curvature. Bending stiffness was determined to decrease as curvature increased in homogeneous membranes with ternary lipid mixtures near a critical consolute point, revealing the strong preferential intermolecular interactions of such mixtures. In addition, diffusion-controlled domain growth was observed in tethers pulled from phase-separated vesicles, which provides a new dynamic sorting principle for lipids and proteins in curvature gradients.

  16. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality for today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling to density-retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is the proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC), we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces labelled datasets much further than any competitive approach, yet with maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the
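
    The Parzen-window machinery at the heart of PLDC/PDC can be sketched compactly. The snippet below is a generic kernel-density classifier with optional per-point weights standing in for the accumulated soft class weights of a condensed set; it illustrates the idea and is not the authors' implementation.

        import numpy as np

        def parzen_predict(X, y, X_new, h=0.5, w=None):
            # kernel density per class: sum of Gaussian windows at training
            # points; optional weights w mimic the accumulated soft class
            # weights carried by a condensed (PLDC-style) dataset
            w = np.ones(len(X)) if w is None else np.asarray(w, float)
            X, X_new = np.asarray(X, float), np.asarray(X_new, float)
            y = np.asarray(y)
            classes = np.unique(y)
            dens = []
            for c in classes:
                m = y == c
                d2 = ((X_new[:, None, :] - X[m][None, :, :]) ** 2).sum(-1)
                dens.append((w[m] * np.exp(-0.5 * d2 / h ** 2)).sum(axis=1))
            return classes[np.argmax(dens, axis=0)]

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)
        print(parzen_predict(X, y, [[0.2, 0.1], [2.8, 3.1]]))   # -> [0 1]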

  17. No Evidence for Connectivity between the Bushveld Igneous Complex and the Molopo Farms Complex from Forward Modeling of Receiver Functions

    NASA Astrophysics Data System (ADS)

    Skryzalin, P. A.; Ramirez, C.; Durrheim, R. J.; Raveloson, A.; Nyblade, A.; Feineman, M. D.

    2016-12-01

    The Bushveld Igneous Complex contains one of the most studied and economically important layered mafic intrusions in the world. The Rustenburg Layered Suite outcrops in northern South Africa over an area of 65,000 km2, and has a volume of up to 1,000,000 km3. Both the Bushveld Igneous Complex and the Molopo Farms Complex in Botswana intruded the crust at 2.05 Ga. Despite being extensively exploited by the mining industry, many questions still exist regarding the structure of the Bushveld Igneous Complex, specifically the total size and connectivity of the different outcrops. In this study, we used receiver function analysis, a technique for determining the seismic velocity structure of the crust and upper mantle, to search for evidence of the Bushveld at station LBTB, which lies in Botswana between the Far Western Limb of the Bushveld and the Molopo Farms Complex. The goal of our study was to determine whether a fast, high-density mafic body can be seen in the crust beneath this region using receiver functions. Observation of a high-density layer would argue in favor of connectivity of the Bushveld between the Far Western Limb and the Molopo Farms Complex. We forward modeled stacks of receiver functions as well as sub-stacks that were split into azimuthal groups sharing similar characteristics. We found that there was no evidence for a high velocity zone in the crust, and that the Moho in this region is located at a depth of 38 ± 3 km, about 8-9 km shallower than Moho depths determined beneath the Bushveld Complex. These two lines of evidence give no reason to assume connectivity between the Bushveld Igneous Complex and the Molopo Farms Complex, and instead suggest two separate intrusive suites.

  18. Looping and clustering model for the organization of protein-DNA complexes on the bacterial genome

    NASA Astrophysics Data System (ADS)

    Walter, Jean-Charles; Walliser, Nils-Ole; David, Gabriel; Dorignac, Jérôme; Geniet, Frédéric; Palmeri, John; Parmeggiani, Andrea; Wingreen, Ned S.; Broedersz, Chase P.

    2018-03-01

    The bacterial genome is organized by a variety of associated proteins inside a structure called the nucleoid. These proteins can form complexes on DNA that play a central role in various biological processes, including chromosome segregation. A prominent example is the large ParB-DNA complex, which forms an essential component of the segregation machinery in many bacteria. ChIP-Seq experiments show that ParB proteins localize around centromere-like parS sites on the DNA to which ParB binds specifically, and spreads from there over large sections of the chromosome. Recent theoretical and experimental studies suggest that DNA-bound ParB proteins can interact with each other to condense into a coherent 3D complex on the DNA. However, the structural organization of this protein-DNA complex remains unclear, and a predictive quantitative theory for the distribution of ParB proteins on DNA is lacking. Here, we propose the looping and clustering model, which employs a statistical physics approach to describe protein-DNA complexes. The looping and clustering model accounts for the extrusion of DNA loops from a cluster of interacting DNA-bound proteins that is organized around a single high-affinity binding site. Conceptually, the structure of the protein-DNA complex is determined by a competition between attractive protein interactions and loop closure entropy of this protein-DNA cluster on the one hand, and the positional entropy for placing loops within the cluster on the other. Indeed, we show that the protein interaction strength determines the ‘tightness’ of the loopy protein-DNA complex. Thus, our model provides a theoretical framework for quantitatively computing the binding profiles of ParB-like proteins around a cognate (parS) binding site.

  19. Complexity and chaos control in a discrete-time prey-predator model

    NASA Astrophysics Data System (ADS)

    Din, Qamar

    2017-08-01

    We investigate the complex behavior and chaos control in a discrete-time prey-predator model. Taking into account the Leslie-Gower prey-predator model, we propose a discrete-time prey-predator system with predator partially dependent on prey and investigate the boundedness, existence and uniqueness of positive equilibrium and bifurcation analysis of the system by using center manifold theorem and bifurcation theory. Various feedback control strategies are implemented for controlling the bifurcation and chaos in the system. Numerical simulations are provided to illustrate theoretical discussion.
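
    A hedged sketch of the kind of system studied here: a generic discrete-time prey-predator map with a Leslie-Gower-style predator and an optional linear feedback control term. The functional forms and parameter values are assumptions for illustration; the paper's exact system and control strategies differ.

        import numpy as np

        def step(x, y, r=3.6, a=4.0, s=0.2, k=0.0, x_star=0.5):
            # prey: Ricker-type growth minus predation, plus an optional
            # state-feedback control -k*(x - x_star) for chaos suppression
            x_new = x * np.exp(r * (1.0 - x) - a * y) - k * (x - x_star)
            # predator: Leslie-Gower form, carrying capacity set by the prey
            y_new = y * np.exp(s * (1.0 - y / max(x, 1e-9)))
            return max(x_new, 0.0), max(y_new, 0.0)

        x, y = 0.4, 0.1
        traj = []
        for _ in range(300):
            x, y = step(x, y, k=0.0)     # try k = 0.3 to damp chaotic bursts
            traj.append(x)
        print(min(traj[100:]), max(traj[100:]))  # wide spread suggests chaos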

  20. Some comparisons of complexity in dictionary-based and linear computational models.

    PubMed

    Gnecco, Giorgio; Kůrková, Věra; Sanguineti, Marcello

    2011-03-01

    Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions which are being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks becomes more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, allowing accuracy of approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators, the traditional linear ones and the so-called variable-basis types, which include neural networks, radial, and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator. Copyright © 2010 Elsevier Ltd. All rights reserved.

  1. Modeling the Propagation of Mobile Phone Virus under Complex Network

    PubMed Central

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, propagation models of mobile phone viruses are developed to understand how particular factors affect propagation and to design effective containment strategies. Two different propagation models of mobile phone viruses under a complex network are proposed: one describes the propagation of user-tricking viruses, and the other describes the propagation of vulnerability-exploiting viruses. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted; through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, numerical and simulation experiments are carried out. Results indicate that both models are suitable for describing the spread of the two different types of mobile phone virus, respectively. PMID:25133209
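
    For readers unfamiliar with epidemic processes on networks, the sketch below runs a discrete-time SIR process on a scale-free contact graph. The topology, rates and update rule are illustrative assumptions, not the models proposed in the paper.

        import networkx as nx
        import numpy as np

        rng = np.random.default_rng(0)
        G = nx.barabasi_albert_graph(2000, 3)    # scale-free contact topology
        state = {v: "S" for v in G}
        state[0] = "I"                           # one initially infected phone
        beta, gamma = 0.06, 0.1                  # per-step infection / recovery

        infected = []
        for t in range(120):
            updates = {}
            for v in G:
                if state[v] == "S":
                    k = sum(state[u] == "I" for u in G[v])
                    if k and rng.random() < 1 - (1 - beta) ** k:
                        updates[v] = "I"
                elif state[v] == "I" and rng.random() < gamma:
                    updates[v] = "R"
            state.update(updates)
            infected.append(sum(s == "I" for s in state.values()))
        print(max(infected), infected[-1])       # epidemic peak and tail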

  2. Animal Models of Lymphangioleiomyomatosis (LAM) and Tuberous Sclerosis Complex (TSC)

    PubMed Central

    2010-01-01

    Abstract Animal models of lymphangioleiomyomatosis (LAM) and tuberous sclerosis complex (TSC) are highly desired to enable detailed investigation of the pathogenesis of these diseases. Multiple rat and mouse models have been generated in which a mutation similar to those occurring in TSC patients is present in an allele of Tsc1 or Tsc2. Unfortunately, these mice do not develop pathologic lesions that match those seen in LAM or TSC. However, these Tsc rodent models have been useful in confirming the two-hit model of tumor development in TSC, and in providing systems in which therapeutic trials (e.g., rapamycin) can be performed. In addition, conditional alleles of both Tsc1 and Tsc2 have provided the opportunity to target loss of these genes to specific tissues and organs, to probe the in vivo function of these genes, and to attempt to generate better models. Efforts to generate an authentic LAM model are impeded by a lack of understanding of the cell of origin of this process. However, ongoing studies provide hope that such a model will be generated in the coming years. PMID:20235887

  3. 40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...

  4. 40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...

  5. 40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...

  6. 40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...

  7. 40 CFR 80.48 - Augmentation of the complex emission model by vehicle testing.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... section, the analysis shall fit a regression model to a combined data set that includes vehicle testing... logarithm of emissions contained in this combined data set: (A) A term for each vehicle that shall reflect... nearest limit of the data core, using the unaugmented complex model. (B) “B” shall be set equal to the...

  8. Numerical modeling of Gaussian beam propagation and diffraction in inhomogeneous media based on the complex eikonal equation

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Hui

    2018-05-01

    The Gaussian beam is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in the subsurface with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccurate computed wave fields in regions with strongly inhomogeneous media. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to handle the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.
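
    As background, the real-valued eikonal equation |grad T| = s(x) can be solved on a grid with simple front-propagation schemes; the sketch below uses fast sweeping, a close relative of the fast marching method named in the abstract, for a point source in 2D. It illustrates only the grid-based phase computation; the complex-phase extension and the Gauss-Newton step of the paper are not reproduced here.

        import numpy as np

        def fast_sweep_eikonal(slowness, h, src=(0, 0), n_sweeps=8):
            # Gauss-Seidel fast sweeping for |grad T| = s on a 2D grid,
            # with a point source at grid index src
            ny, nx = slowness.shape
            T = np.full((ny, nx), np.inf)
            T[src] = 0.0
            orders = [(range(ny), range(nx)),
                      (range(ny), range(nx - 1, -1, -1)),
                      (range(ny - 1, -1, -1), range(nx)),
                      (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
            for _ in range(n_sweeps):
                for ys, xs in orders:
                    for i in ys:
                        for j in xs:
                            a = min(T[max(i - 1, 0), j], T[min(i + 1, ny - 1), j])
                            b = min(T[i, max(j - 1, 0)], T[i, min(j + 1, nx - 1)])
                            f = slowness[i, j] * h
                            t = (min(a, b) + f if abs(a - b) >= f else
                                 0.5 * (a + b + np.sqrt(2 * f * f - (a - b) ** 2)))
                            T[i, j] = min(T[i, j], t)
            return T

        T = fast_sweep_eikonal(np.ones((60, 60)), h=1.0)
        print(T[59, 59])   # ~ Euclidean distance from the source corner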

  9. Learning and inference using complex generative models in a spatial localization task.

    PubMed

    Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N

    2016-01-01

    A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
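
    A minimal sketch of the computation the task implies, assuming a Gaussian sensory likelihood and a two-mode Gaussian-mixture prior; the grid, parameters and MAP read-out are illustrative choices, not the experimental values.

        import numpy as np

        def posterior(x, obs, sigma_s, modes, sigmas, weights):
            # Gaussian sensory likelihood times a bimodal mixture prior
            lik = np.exp(-0.5 * ((x - obs) / sigma_s) ** 2)
            prior = sum(w * np.exp(-0.5 * ((x - m) / s) ** 2) / s
                        for m, s, w in zip(modes, sigmas, weights))
            post = lik * prior
            return post / np.trapz(post, x)

        x = np.linspace(-10, 10, 2001)
        post = posterior(x, obs=1.2, sigma_s=1.0,
                         modes=(-3.0, 3.0), sigmas=(0.5, 1.5),
                         weights=(0.5, 0.5))
        print(x[np.argmax(post)])  # MAP shifts from obs toward the nearer mode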

  10. Organizational-economic model of formation of socio-commercial multifunctional complex in the construction of high-rise buildings

    NASA Astrophysics Data System (ADS)

    Kirillova, Ariadna; Prytkova, Oksana O.

    2018-03-01

    The article is devoted to the features of the formation of an organizational and economic model for the construction of a socio-commercial multifunctional complex in high-rise construction. The authors give examples of high-rise multifunctional complexes in Moscow, analyze the advantages and disadvantages of implemented multifunctional complexes, and stress the need for a holistic strategic approach that takes into account the prospects for the development of the city and the creation of a comfortable living environment. Based on the analysis of the features of multifunctional complexes, a SWOT-analysis matrix was compiled. For the development of cities and the improvement of the quality of life of the population, it is proposed to implement a new type of multifunctional complex of a joint social and commercial direction, including, along with office areas, schools, polyclinics, various sports facilities and cultural and leisure centers (theatrical, dance, studio, etc.). The approach proposed in the article for developing the model is based on a comparative evaluation of a social-commercial multifunctional complex project implemented through a public-private partnership in the form of a concession agreement and a commercial multifunctional complex built at the expense of the investor. Calculations show that the obtained indicators satisfy the conditions of expediency of the proposed organizational-economic model and that the project of the social and commercial multifunctional complex is effective.

  11. Characterization, phase solubility and molecular modeling of α-cyclodextrin/pyrimethamine inclusion complex

    NASA Astrophysics Data System (ADS)

    Araujo, Marcia Valeria Gaspar de; Macedo, Osmir F. L.; Nascimento, Cristiane da Cunha; Conegero, Leila Souza; Barreto, Ledjane Silva; Almeida, Luis Eduardo; Costa, Nivan Bezerra da; Gimenez, Iara F.

    2009-02-01

    An inclusion complex between the dihydrofolate reductase inhibitor pyrimethamine (PYR) and α-cyclodextrin (α-CD) was prepared and characterized. From the phase-solubility diagram, a linear increase of PYR solubility was verified as a function of α-CD concentration, suggesting the formation of a soluble complex. A 1:1 host-guest stoichiometry can be proposed according to the Job's plot, obtained from the difference of PYR fluorescence intensity in the presence and absence of α-CD. Differential scanning calorimetry (DSC) measurements provided additional evidence of complexation, such as the absence of the endothermic peak assigned to the melting of the drug. The inclusion mode, characterized by two-dimensional 1H NMR spectroscopy (ROESY), involves penetration of the p-chlorophenyl ring into the α-CD cavity, in agreement with the orientation optimized by molecular modeling methods.

  12. Electromagnetic modelling of Ground Penetrating Radar responses to complex targets

    NASA Astrophysics Data System (ADS)

    Pajewski, Lara; Giannopoulos, Antonis

    2014-05-01

    This work deals with the electromagnetic modelling of composite structures for Ground Penetrating Radar (GPR) applications. It was developed within the Short-Term Scientific Mission ECOST-STSM-TU1208-211013-035660, funded by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The authors define a set of test concrete structures, hereinafter called cells. The size of each cell is 60 x 100 x 18 cm and the content varies with growing complexity, from a simple cell with a few rebars of different diameters embedded in concrete at increasing depths, to a final cell with a quite complicated pattern, including a layer of tendons between two overlying meshes of rebars. Other cells, of intermediate complexity, contain PVC ducts (air filled or hosting rebars), steel objects commonly used in civil engineering (a pipe, an angle bar, a box section and a u-channel), as well as void and honeycombing defects. One of the cells has a steel mesh embedded in it, overlying two rebars placed diagonally across the corners of the structure. Two cells include a couple of rebars bent into a right angle and placed on top of each other, with a square/round circle lying at the base of the concrete slab. Inspiration for some of these cells is taken from the very interesting experimental work presented in Ref. [1]. For each cell, a subset of models with growing complexity is defined, starting from a simple representation of the cell and ending with a more realistic one. In particular, the models' complexity increases from the geometrical point of view, as well as in terms of how the constitutive parameters of the involved media and the GPR antennas are described. Some cells can be simulated in both two and three dimensions; the concrete slab can be approximated as a finite-thickness layer having infinite extension on the transverse plane, thus neglecting how edges affect radargrams, or else its finite size can be fully taken into account. The permittivity of concrete can be

  13. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach

    PubMed Central

    2016-01-01

    Background Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. Purpose It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. Method We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. Results The experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multiagent system approach. PMID:26812235

  14. Modeling the Internet of Things, Self-Organizing and Other Complex Adaptive Communication Networks: A Cognitive Agent-Based Computing Approach.

    PubMed

    Laghari, Samreen; Niazi, Muaz A

    2016-01-01

    Computer networks have a tendency to grow at an unprecedented scale. Modern networks involve not only computers but also a wide variety of other interconnected devices ranging from mobile phones to other household items fitted with sensors. This vision of the "Internet of Things" (IoT) implies an inherent difficulty in modeling problems. It is practically impossible to implement and test all scenarios for large-scale and complex adaptive communication networks as part of Complex Adaptive Communication Networks and Environments (CACOONS). The goal of this study is to explore the use of Agent-based Modeling as part of the Cognitive Agent-based Computing (CABC) framework to model a complex communication network problem. We use Exploratory Agent-based Modeling (EABM), as part of the CABC framework, to develop an autonomous multi-agent architecture for managing carbon footprint in a corporate network. To evaluate the application of complexity in practical scenarios, we have also introduced a company-defined computer usage policy. The experiments demonstrated two important results: first, a CABC-based modeling approach such as Agent-based Modeling can be effective for modeling complex problems in the domain of IoT; second, the specific problem of managing the carbon footprint can be solved using a multiagent system approach.

  15. Data-Driven Modeling of Complex Systems by means of a Dynamical ANN

    NASA Astrophysics Data System (ADS)

    Seleznev, A.; Mukhin, D.; Gavrilov, A.; Loskutov, E.; Feigin, A.

    2017-12-01

    Data-driven methods for the modeling and prognosis of complex dynamical systems are becoming more and more popular in various fields due to the growth of high-resolution data. We distinguish two basic steps in such an approach: (i) determining the phase subspace of the system, or embedding, from available time series and (ii) constructing an evolution operator acting in this reduced subspace. In this work we suggest a novel approach combining these two steps by means of the construction of an artificial neural network (ANN) with a special topology. The proposed ANN-based model, on the one hand, projects the data onto a low-dimensional manifold, and, on the other hand, models a dynamical system on this manifold. In effect, this is a recurrent multilayer ANN which has internal dynamics and is capable of generating time series. A very important point of the proposed methodology is the optimization of the model, allowing us to avoid overfitting: we use a Bayesian criterion to optimize the ANN structure and estimate both the degree of evolution operator nonlinearity and the complexity of the nonlinear manifold onto which the data are projected. The proposed modeling technique will be applied to the analysis of high-dimensional dynamical systems: the Lorenz'96 model of atmospheric turbulence, producing high-dimensional space-time chaos, and a quasi-geostrophic three-layer model of the Earth's atmosphere with natural orography, describing the dynamics of synoptic vortexes as well as mesoscale blocking systems. The possibility of applying the proposed methodology to the analysis of real measured data is also discussed. The study was supported by the Russian Science Foundation (grant #16-12-10198).

  16. Transferability of a Three-Dimensional Air Quality Model between Two Different Sites in Complex Terrain.

    NASA Astrophysics Data System (ADS)

    Lange, Rolf

    1989-07-01

    The three-dimensional, diagnostic, particle-in-cell transport and diffusion model MATHEW/ADPIC is tested for its transferability from one site in complex terrain to another with different characteristics, under stable nighttime drainage flow conditions. The two sites were subject to extensive drainage flow tracer experiments under the multilaboratory Atmospheric Studies in Complex Terrain (ASCOT) program: the first a valley in the Geysers geothermal region of northern California, and the second a canyon in western Colorado. The domain in each case is approximately 10 × 10 km. Results of the 1980 Geysers model evaluation are only quoted; the 1984 Brush Creek model evaluation is described in detail. Results from comparing computed with measured concentrations from a variety of tracer releases indicate that 52% of the 4531 samples from five experiments in Brush Creek and 50% of the 831 samples from four experiments in the Geysers agreed within a factor of 5. When an angular uncertainty of 10°, consistent with anemometer reliability limits in complex terrain, was applied to the model results, model performance improved such that 78% of samples compared within a factor of 5 for Brush Creek and 77% for the Geysers. Over the range of other concentration-ratio factors, results indicate that the model is satisfactorily transferable without tuning it to a specific site.

  17. Adapting APSIM to model the physiology and genetics of complex adaptive traits in field crops.

    PubMed

    Hammer, Graeme L; van Oosterom, Erik; McLean, Greg; Chapman, Scott C; Broad, Ian; Harland, Peter; Muchow, Russell C

    2010-05-01

    Progress in molecular plant breeding is limited by the ability to predict plant phenotype based on its genotype, especially for complex adaptive traits. Suitably constructed crop growth and development models have the potential to bridge this predictability gap. A generic cereal crop growth and development model is outlined here. It is designed to exhibit reliable predictive skill at the crop level while also introducing sufficient physiological rigour for complex phenotypic responses to become emergent properties of the model dynamics. The approach quantifies capture and use of radiation, water, and nitrogen within a framework that predicts the realized growth of major organs based on their potential and whether the supply of carbohydrate and nitrogen can satisfy that potential. The model builds on existing approaches within the APSIM software platform. Experiments on diverse genotypes of sorghum that underpin the development and testing of the adapted crop model are detailed. Genotypes differing in height were found to differ in biomass partitioning among organs and a tall hybrid had significantly increased radiation use efficiency: a novel finding in sorghum. Introducing these genetic effects associated with plant height into the model generated emergent simulated phenotypic differences in green leaf area retention during grain filling via effects associated with nitrogen dynamics. The relevance to plant breeding of this capability in complex trait dissection and simulation is discussed.

  18. Qualitative analysis of a discrete thermostatted kinetic framework modeling complex adaptive systems

    NASA Astrophysics Data System (ADS)

    Bianca, Carlo; Mogno, Caterina

    2018-01-01

    This paper deals with the derivation of a new discrete thermostatted kinetic framework for the modeling of complex adaptive systems subjected to external force fields (nonequilibrium systems). Specifically, in order to model nonequilibrium stationary states of the system, the external force field is coupled to a dissipative term (thermostat). The well-posedness of the related Cauchy problem is investigated, thus making the new discrete thermostatted framework suitable for the derivation of specific models and the related computational analysis. Applications to crowd dynamics and future research directions are also discussed within the paper.

  19. Hierarchical Model for the Analysis of Scattering Data of Complex Materials

    DOE PAGES

    Oyedele, Akinola; Mcnutt, Nicholas W.; Rios, Orlando; ...

    2016-05-16

    Interpreting the results of scattering data for complex materials with a hierarchical structure in which at least one phase is amorphous presents a significant challenge. Often the interpretation relies on the use of large-scale molecular dynamics (MD) simulations, in which a structure is hypothesized and from which a radial distribution function (RDF) can be extracted and directly compared against an experimental RDF. This computationally intensive approach presents a bottleneck in the efficient characterization of the atomic structure of new materials. Here, we propose and demonstrate an approach for a hierarchical decomposition of the RDF in which MD simulations are replaced by a combination of tractable models and theory at the atomic scale and the mesoscale, which when combined yield the RDF. We apply the procedure to a carbon composite, in which graphitic nanocrystallites are distributed in an amorphous domain. We compare the model with the RDF from both MD simulation and neutron scattering data. Ultimately, this procedure is applicable for understanding the fundamental processing-structure-property relationships in complex magnetic materials.
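
    For orientation, the RDF that anchors this kind of analysis is straightforward to compute from atomic coordinates. The sketch below histograms minimum-image pair distances in a cubic periodic box; the box size, particle count and binning are illustrative assumptions rather than anything from the paper.

        import numpy as np

        def radial_distribution(pos, box, r_max, n_bins=100):
            # g(r) from minimum-image pair distances in a cubic periodic box
            n = len(pos)
            d = pos[:, None, :] - pos[None, :, :]
            d -= box * np.round(d / box)              # minimum-image convention
            r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, 1)]
            hist, edges = np.histogram(r, bins=n_bins, range=(0.0, r_max))
            shells = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
            ideal = shells * (n / box ** 3) * n / 2.0  # expected pair counts
            return 0.5 * (edges[1:] + edges[:-1]), hist / ideal

        rng = np.random.default_rng(0)
        r, g = radial_distribution(rng.random((500, 3)) * 12.0,
                                   box=12.0, r_max=5.0)
        print(g.mean())   # ~1 for an ideal gas, as a sanity check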

  20. Identification of the dominant hydrological process and appropriate model structure of a karst catchment through stepwise simplification of a complex conceptual model

    NASA Astrophysics Data System (ADS)

    Chang, Yong; Wu, Jichun; Jiang, Guanghui; Kang, Zhiqiang

    2017-05-01

    Conceptual models often suffer from over-parameterization because the data available for calibration are limited. This leads to parameter non-uniqueness and equifinality, which may introduce considerable uncertainty into the simulation results. How to identify the appropriate model structure supported by the available data is still a big challenge in hydrological research. In this paper, we adopt a multi-model framework to identify the dominant hydrological process and appropriate model structure of a karst spring located in Guilin city, China. For this catchment, the spring discharge is the only data available for model calibration. The framework starts with a relatively complex conceptual model based on the perception of the catchment; this complex model is then simplified into several different models by gradually removing model components. A multi-objective approach is used to compare the performance of these different models, and regional sensitivity analysis (RSA) is used to investigate parameter identifiability. The results show that this karst spring is mainly controlled by two different hydrological processes, one of which is threshold-driven, consistent with the fieldwork investigation. However, the appropriate model structure for simulating the discharge of this spring is much simpler than the actual aquifer structure and the hydrological-process understanding gained from the fieldwork investigation. A simple linear reservoir with two different outlets is enough to simulate the spring discharge; the detailed runoff processes in the catchment are not needed in the conceptual model. A more complex model would require additional data to avoid serious deterioration of model predictions.
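
    The winning structure, a single linear reservoir with two outlets, is simple enough to sketch. Below, an always-active lower outlet and a threshold-activated upper outlet drain one storage; the coefficients, threshold and driving recharge series are assumed for illustration, not calibrated values from the paper.

        import numpy as np

        def two_outlet_reservoir(recharge, k1=0.05, k2=0.4, s_thr=20.0, dt=1.0):
            # one linear storage S; lower outlet k1*S is always active, upper
            # outlet k2*(S - s_thr) activates only above threshold storage
            S, Q = 0.0, []
            for R in recharge:
                q = k1 * S + k2 * max(S - s_thr, 0.0)
                S = max(S + (R - q) * dt, 0.0)
                Q.append(q)
            return np.array(Q)

        rng = np.random.default_rng(0)
        recharge = np.where(rng.random(365) < 0.1,
                            rng.exponential(15.0, 365), 0.0)
        Q = two_outlet_reservoir(recharge)
        print(Q.max(), Q[-1])   # flashy peaks over a slow baseflow recession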

  1. Accurate modeling and evaluation of microstructures in complex materials

    NASA Astrophysics Data System (ADS)

    Tahmasebi, Pejman

    2018-02-01

    Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always feasible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take a single I (or a set of Is) and stochastically produce several similar models of the given disordered material. The method is based on successively calculating a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized to remove unrealistic structures. A distance transform function for Is with highly connected microstructures and long-range features is considered, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram-matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is, and the similarities are quantified using various correlation functions.

  2. Clarity versus complexity: land-use modeling as a practical tool for decision-makers

    USGS Publications Warehouse

    Sohl, Terry L.; Claggett, Peter

    2013-01-01

    The last decade has seen a remarkable increase in the number of modeling tools available to examine future land-use and land-cover (LULC) change. Integrated modeling frameworks, agent-based models, cellular automata approaches, and other modeling techniques have substantially improved the representation of complex LULC systems, with each method using a different strategy to address complexity. However, despite the development of new and better modeling tools, the use of these tools is limited for actual planning, decision-making, or policy-making purposes. LULC modelers have become very adept at creating tools for modeling LULC change, but complicated models and lack of transparency limit their utility for decision-makers. The complicated nature of many LULC models also makes it impractical or even impossible to perform a rigorous analysis of modeling uncertainty. This paper provides a review of land-cover modeling approaches and the issues caused by the complicated nature of models, and provides suggestions to facilitate the increased use of LULC models by decision-makers and other stakeholders. The utility of LULC models themselves can be improved by 1) providing model code and documentation, 2) using scenario frameworks to frame overall uncertainties, 3) improving methods for generalizing the key LULC processes most important to stakeholders, and 4) adopting more rigorous standards for validating models and quantifying uncertainty. Communication with decision-makers and other stakeholders can be improved by increasing stakeholder participation in all stages of the modeling process, increasing the transparency of model structure and uncertainties, and developing user-friendly decision-support systems to bridge the link between LULC science and policy. By considering these options, LULC science will be better positioned to support decision-makers and increase real-world application of LULC modeling results.

  3. Watershed System Model: The Essentials to Model Complex Human-Nature System at the River Basin Scale

    NASA Astrophysics Data System (ADS)

    Li, Xin; Cheng, Guodong; Lin, Hui; Cai, Ximing; Fang, Miao; Ge, Yingchun; Hu, Xiaoli; Chen, Min; Li, Weiyue

    2018-03-01

    Watershed system models are urgently needed to understand complex watershed systems and to support integrated river basin management. Early watershed modeling efforts focused on the representation of hydrologic processes, while the next-generation watershed models should represent the coevolution of the water-land-air-plant-human nexus in a watershed and provide capability of decision-making support. We propose a new modeling framework and discuss the know-how approach to incorporate emerging knowledge into integrated models through data exchange interfaces. We argue that the modeling environment is a useful tool to enable effective model integration, as well as create domain-specific models of river basin systems. The grand challenges in developing next-generation watershed system models include but are not limited to providing an overarching framework for linking natural and social sciences, building a scientifically based decision support system, quantifying and controlling uncertainties, and taking advantage of new technologies and new findings in the various disciplines of watershed science. The eventual goal is to build transdisciplinary, scientifically sound, and scale-explicit watershed system models that are to be codesigned by multidisciplinary communities.

  4. Simple versus complex models of trait evolution and stasis as a response to environmental change

    NASA Astrophysics Data System (ADS)

    Hunt, Gene; Hopkins, Melanie J.; Lidgard, Scott

    2015-04-01

    Previous analyses of evolutionary patterns, or modes, in fossil lineages have focused overwhelmingly on three simple models: stasis, random walks, and directional evolution. Here we use likelihood methods to fit an expanded set of evolutionary models to a large compilation of ancestor-descendant series of populations from the fossil record. In addition to the standard three models, we assess more complex models with punctuations and shifts from one evolutionary mode to another. As in previous studies, we find that stasis is common in the fossil record, as is a strict version of stasis that entails no real evolutionary changes. Incidence of directional evolution is relatively low (13%), but higher than in previous studies because our analytical approach can more sensitively detect noisy trends. Complex evolutionary models are often favored, overwhelmingly so for sequences comprising many samples. This finding is consistent with evolutionary dynamics that are, in reality, more complex than any of the models we consider. Furthermore, the timing of shifts in evolutionary dynamics varies among traits measured from the same series. Finally, we use our empirical collection of evolutionary sequences and a long and highly resolved proxy for global climate to inform simulations in which traits adaptively track temperature changes over time. When realistically calibrated, we find that this simple model can reproduce important aspects of our paleontological results. We conclude that observed paleontological patterns, including the prevalence of stasis, need not be inconsistent with adaptive evolution, even in the face of unstable physical environments.
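
    To make the model-fitting idea concrete, the sketch below compares stasis against an unbiased random walk for a single trait series by maximum likelihood and AIC. It covers only the two simplest modes in the paper's expanded model set, and the toy data and parameterization are assumptions for illustration.

        import numpy as np
        from scipy import stats

        def aic_stasis_vs_walk(x, t):
            # stasis: iid normal around an optimum (2 params: mu, sigma)
            ll_stasis = stats.norm.logpdf(x, x.mean(), x.std()).sum()
            # unbiased walk: increments ~ N(0, step^2 * dt) (1 param: step)
            dx, dt = np.diff(x), np.diff(t)
            step = np.sqrt((dx ** 2 / dt).mean())   # maximum-likelihood step
            ll_walk = stats.norm.logpdf(dx, 0.0, step * np.sqrt(dt)).sum()
            return {"stasis": 4 - 2 * ll_stasis, "walk": 2 - 2 * ll_walk}

        t = np.arange(30.0)
        x = np.cumsum(np.random.default_rng(3).normal(0.0, 0.2, 30))
        print(aic_stasis_vs_walk(x, t))   # the walk should win for this series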

  5. Mechanisms of complex network growth: Synthesis of the preferential attachment and fitness models

    NASA Astrophysics Data System (ADS)

    Golosovsky, Michael

    2018-06-01

    We analyze growth mechanisms of complex networks and focus on their validation by measurements. To this end we consider the equation ΔK = A(t)(K + K0)Δt, where K is the node's degree, ΔK is its increment, A(t) is the aging constant, and K0 is the initial attractivity. This equation has been commonly used to validate the preferential attachment mechanism. We show that this equation is undiscriminating and holds for the fitness model [Caldarelli et al., Phys. Rev. Lett. 89, 258702 (2002), 10.1103/PhysRevLett.89.258702] as well. In other words, the accepted method of validating the microscopic mechanism of network growth does not discriminate between "rich-gets-richer" and "good-gets-richer" scenarios. This means that the growth mechanism of many natural complex networks can be based on the fitness model rather than on preferential attachment, as has been believed so far. The fitness model yields the long-sought explanation for the initial attractivity K0, an elusive parameter which was left unexplained within the framework of the preferential attachment model. We show that the initial attractivity is determined by the width of the fitness distribution. We also present a network growth model based on recursive search with memory and show that this model contains both the preferential attachment and the fitness models as extreme cases.
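
    The validation procedure the abstract refers to can be mimicked on synthetic data: grow a network under a known rule, then estimate the attachment kernel A(K) as attachment events per exposure at each degree. Everything below (growth rule, sizes, thresholds) is an illustrative assumption.

        import numpy as np
        import networkx as nx

        rng = np.random.default_rng(1)
        G = nx.complete_graph(5)
        gain, seen = {}, {}          # attachment events / exposures per degree

        for new in range(5, 3000):
            nodes = list(G)
            degs = np.array([G.degree(v) for v in nodes], dtype=float)
            for d in degs.astype(int):
                seen[d] = seen.get(d, 0) + 1
            p = degs / degs.sum()    # known "rich-gets-richer" growth rule
            targets = rng.choice(nodes, size=2, replace=False, p=p)
            for v in targets:        # record degree at the moment of attachment
                gain[G.degree(v)] = gain.get(G.degree(v), 0) + 1
            G.add_edges_from((new, v) for v in targets)

        ks = sorted(k for k in gain if seen.get(k, 0) >= 200)
        print([(k, round(gain[k] / seen[k], 4)) for k in ks[:8]])
        # the empirical kernel should grow roughly linearly in K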

  6. Extension of the TDCR model to compute counting efficiencies for radionuclides with complex decay schemes.

    PubMed

    Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christoph Lierse V; Nähle, O; Wolf, Ch

    2014-05-01

    The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations of decay branches which are accompanied by many coincident γ transitions have not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program that applies a stochastic approach of the free parameter model was extended. With an improved code, efficiencies for β(-), β(+) and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: (166m)Ho (complex β(-)/γ), (59)Fe (complex β(-)/γ), (64)Cu (β(-), β(+), EC and EC/γ) and (229)Th in equilibrium with its progenies (decay chain with many α, β and complex β(-)/γ transitions). © 2013 Published by Elsevier Ltd.
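
    For a mono-energetic electron in a symmetric three-PMT counter, the free-parameter model reduces to a few lines, sketched below. The real computation integrates over the β spectrum and applies the ionization quenching function mentioned in the abstract; the figure of merit and energy here are placeholders.

        import numpy as np

        def tdcr_point(fom, energy_kev):
            # free-parameter-model sketch: fom is the figure of merit
            # (photoelectrons per keV, the free parameter)
            m = fom * energy_kev             # mean photoelectrons, all PMTs
            p = 1.0 - np.exp(-m / 3.0)       # P(>=1 photoelectron per PMT)
            eps_t = p ** 3                   # triple coincidences
            eps_d = 3 * p ** 2 - 2 * p ** 3  # at least two of three PMTs fire
            return eps_t, eps_d, eps_t / eps_d

        for fom in (0.5, 1.0, 2.0):
            print(fom, tuple(round(v, 4) for v in tdcr_point(fom, 20.0)))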

  7. Surface complexation modeling of Cu(II) adsorption on mixtures of hydrous ferric oxide and kaolinite

    PubMed Central

    Lund, Tracy J; Koretsky, Carla M; Landry, Christopher J; Schaller, Melinda S; Das, Soumya

    2008-01-01

    Background The application of surface complexation models (SCMs) to natural sediments and soils is hindered by a lack of consistent models and data for large suites of metals and minerals of interest. Furthermore, the surface complexation approach has mostly been developed and tested for single solid systems. Few studies have extended the SCM approach to systems containing multiple solids. Results Cu adsorption was measured on pure hydrous ferric oxide (HFO), pure kaolinite (from two sources) and in systems containing mixtures of HFO and kaolinite over a wide range of pH, ionic strength, sorbate/sorbent ratios and, for the mixed solid systems, using a range of kaolinite/HFO ratios. Cu adsorption data measured for the HFO and kaolinite systems was used to derive diffuse layer surface complexation models (DLMs) describing Cu adsorption. Cu adsorption on HFO is reasonably well described using a 1-site or 2-site DLM. Adsorption of Cu on kaolinite could be described using a simple 1-site DLM with formation of a monodentate Cu complex on a variable charge surface site. However, for consistency with models derived for weaker sorbing cations, a 2-site DLM with a variable charge and a permanent charge site was also developed. Conclusion Component additivity predictions of speciation in mixed mineral systems based on DLM parameters derived for the pure mineral systems were in good agreement with measured data. Discrepancies between the model predictions and measured data were similar to those observed for the calibrated pure mineral systems. The results suggest that quantifying specific interactions between HFO and kaolinite in speciation models may not be necessary. However, before the component additivity approach can be applied to natural sediments and soils, the effects of aging must be further studied and methods must be developed to estimate reactive surface areas of solid constituents in natural samples. PMID:18783619

  8. Clinical application of computer-designed polystyrene models in complex severe spinal deformities: a pilot study

    PubMed Central

    Mao, Keya; Xiao, Songhua; Liu, Zhengsheng; Zhang, Yonggang; Zhang, Xuesong; Wang, Zheng; Lu, Ning; Shourong, Zhu; Xifeng, Zhang; Geng, Cui; Baowei, Liu

    2010-01-01

    Surgical treatment of complex severe spinal deformity, involving a scoliosis Cobb angle of more than 90° and kyphosis or vertebral and rib deformity, is challenging. Preoperative two-dimensional images resulting from plain film radiography, computed tomography (CT) and magnetic resonance imaging provide limited morphometric information. Although the three-dimensional (3D) reconstruction CT with special software can view the stereo and rotate the spinal image on the screen, it cannot show the full-scale spine and cannot directly be used on the operation table. This study was conducted to investigate the application of computer-designed polystyrene models in the treatment of complex severe spinal deformity. The study involved 16 cases of complex severe spinal deformity treated in our hospital between 1 May 2004 and 31 December 2007; the mean ± SD preoperative scoliosis Cobb angle was 118° ± 27°. The CT scanning digital imaging and communication in medicine (DICOM) data sets of the affected spinal segments were collected for 3D digital reconstruction and rapid prototyping to prepare computer-designed polystyrene models, which were applied in the treatment of these cases. The computer-designed polystyrene models allowed 3D observation and measurement of the deformities directly, which helped the surgeon to perform morphological assessment and communicate with the patient and colleagues. Furthermore, the models also guided the choice and placement of pedicle screws. Moreover, the models were used to aid in virtual surgery and guide the actual surgical procedure. The mean ± SD postoperative scoliosis Cobb angle was 42° ± 32°, and no serious complications such as spinal cord or major vascular injury occurred. The use of computer-designed polystyrene models could provide more accurate morphometric information and facilitate surgical correction of complex severe spinal deformity. PMID:20213294

  9. Modeling the complex pathology of Alzheimer’s disease in Drosophila

    PubMed Central

    Fernandez-Funez, Pedro; de Mena, Lorena; Rincon-Limas, Diego E.

    2015-01-01

    Alzheimer’s disease (AD) is the leading cause of dementia and the most common neurodegenerative disorder. AD is mostly a sporadic disorder and its main risk factor is age, but mutations in three genes that promote the accumulation of the amyloid-β (Aβ42) peptide revealed the critical role of Amyloid precursor protein (APP) processing in AD. Neurofibrillary tangles enriched in tau are the other pathological hallmark of AD, but the lack of causative tau mutations still puzzles researchers. Here, we describe the contribution of a powerful invertebrate model, the fruit fly Drosophila melanogaster, to uncovering the function and pathogenesis of human APP, Aβ42, and tau. APP and tau participate in many complex cellular processes, although their main function is microtubule stabilization and the to-and-fro transport of axonal vesicles. Additionally, expression of secreted Aβ42 induces prominent neuronal death in Drosophila, a critical feature of AD, making this model a popular choice for identifying intrinsic and extrinsic factors mediating Aβ42 neurotoxicity. Overall, Drosophila has made significant contributions to better understand the complex pathology of AD, although additional insight can be expected from combining multiple transgenes, performing genome-wide loss-of-function screens, and testing anti-tau therapies alone or in combination with Aβ42. PMID:26024860

  10. Empirical modeling ENSO dynamics with complex-valued artificial neural networks

    NASA Astrophysics Data System (ADS)

    Seleznev, Aleksei; Gavrilov, Andrey; Mukhin, Dmitry

    2016-04-01

    The main difficulty in empirically reconstructing distributed dynamical systems (e.g. regional climate systems, such as the El Niño-Southern Oscillation, ENSO) is the huge amount of observational data comprising time-varying spatial fields of several variables. An efficient reduction of the system's dimensionality is therefore essential for inferring an evolution operator (EO) for a low-dimensional subsystem that determines the key properties of the observed dynamics. In this work, to efficiently reduce observational data sets, we use complex-valued (Hilbert) empirical orthogonal functions, which by their nature are appropriate for describing propagating structures, unlike traditional empirical orthogonal functions. For the approximation of the EO, a universal model in the form of a complex-valued artificial neural network is suggested. The effectiveness of this approach is demonstrated by predicting both the Jin-Neelin-Ghil ENSO model [1] behavior and real ENSO variability from sea surface temperature anomaly data [2]. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Jin, F.-F., J. D. Neelin, and M. Ghil, 1996: El Niño/Southern Oscillation and the annual cycle: subharmonic frequency locking and aperiodicity. Physica D, 98, 442-465. 2. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
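
    The Hilbert-EOF reduction step can be sketched directly: take the analytic signal of each grid point's time series, then an SVD of the complexified data matrix. The function below assumes a (time, space) anomaly array; it covers only the dimensionality-reduction step, not the complex-valued ANN.

        import numpy as np
        from scipy.signal import hilbert

        def hilbert_eofs(field, n_modes=3):
            # field: (time, space) anomalies; the analytic signal in time adds
            # the 90-degree-shifted component, so a propagating pattern is
            # captured by a single complex mode
            Z = hilbert(field - field.mean(axis=0), axis=0)
            U, s, Vh = np.linalg.svd(Z, full_matrices=False)
            pcs = U[:, :n_modes] * s[:n_modes]   # complex principal components
            return pcs, Vh[:n_modes].conj(), s ** 2 / (s ** 2).sum()

        # travelling wave: one Hilbert EOF captures what two real EOFs need
        t = np.arange(500)[:, None]
        xg = np.linspace(0, 2 * np.pi, 80)[None, :]
        pcs, eofs, var = hilbert_eofs(np.cos(0.1 * t - 3 * xg))
        print(var[:3].round(3))   # the first mode should dominate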

  11. Generative model selection using a scalable and size-independent complex network classifier

    NASA Astrophysics Data System (ADS)

    Motallebi, Sadegh; Aliakbary, Sadegh; Habibi, Jafar

    2013-12-01

    Real networks exhibit nontrivial topological features, such as heavy-tailed degree distribution, high clustering, and small-worldness. Researchers have developed several generative models for synthesizing artificial networks that are structurally similar to real networks. An important research problem is to identify the generative model that best fits a target network. In this paper, we investigate this problem and our goal is to select the model that is able to generate graphs similar to a given network instance. By means of generating synthetic networks with seven outstanding generative models, we have utilized machine learning methods to develop a decision tree for model selection. Our proposed method, which is named "Generative Model Selection for Complex Networks," outperforms existing methods with respect to accuracy, scalability, and size-independence.
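
    The classification idea can be sketched as follows: generate labeled synthetic graphs of several sizes, describe each with size-insensitive features, and fit a decision tree. This is a minimal sketch with two stand-in generators and an assumed feature set, not the seven models or the features of the paper.

      # Sketch: label synthetic graphs by their generative model, describe each
      # with size-insensitive topological features, fit a decision tree.
      import networkx as nx
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      def features(g):
          degs = [d for _, d in g.degree()]
          return [np.mean(degs), np.std(degs) / (np.mean(degs) + 1e-9),
                  nx.average_clustering(g), nx.transitivity(g)]

      X, y = [], []
      for n in (300, 600, 900):                 # vary size so features must generalize
          for _ in range(20):
              X.append(features(nx.barabasi_albert_graph(n, 3))); y.append("BA")
              X.append(features(nx.watts_strogatz_graph(n, 6, 0.1))); y.append("WS")

      clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
      target = nx.barabasi_albert_graph(450, 3) # "unknown" network to classify
      print(clf.predict([features(target)]))    # expected: ['BA']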

  12. A consensus for the development of a vector model to assess clinical complexity.

    PubMed

    Corazza, Gino Roberto; Klersy, Catherine; Formagnana, Pietro; Lenti, Marco Vincenzo; Padula, Donatella

    2017-12-01

    The progressive rise in multimorbidity has made management of complex patients one of the most topical and challenging issues in medicine, both in clinical practice and for healthcare organizations. To make this easier, a score of clinical complexity (CC) would be useful. A vector model to evaluate biological and extra-biological (socio-economic, cultural, behavioural, environmental) domains of CC was proposed a few years ago. However, given that the variables that grade each domain had never been defined, this model has never been used in clinical practice. To overcome these limits, a consensus meeting was organised to grade each domain of CC, and to establish the hierarchy of the domains. A one-day consensus meeting consisting of a multi-professional panel of 25 people was held at our hospital. In a preliminary phase, the proponents selected seven variables as qualifiers for each of the five above-mentioned domains. In the course of the meeting, the panel voted for five variables considered to be the most representative for each domain. Consensus was established with 2/3 agreement, and all variables were dichotomised. Finally, the various domains were parametrized and ranked within a feasible vector model. A Clinical Complexity Index was set up using the chosen variables. All the domains were graphically represented through a vector model: the biological domain was chosen as the most significant (highest slope), followed by the behavioural and socio-economic domains (intermediate slope), and lastly by the cultural and environmental ones (lowest slope). A feasible and comprehensive tool to evaluate CC in clinical practice is proposed herein.
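
    A minimal sketch of how such a vector-model index might be computed, assuming five dichotomised variables per domain and illustrative slope weights reflecting the reported hierarchy; none of the variable values or weights below are from the paper.

      # Vector-model complexity sketch: five 0/1 variables per domain give a
      # domain length; domain-specific slopes encode the panel's hierarchy
      # (biological > behavioural/socio-economic > cultural/environmental).
      # All weights and example values are illustrative assumptions.
      DOMAIN_SLOPE = {"biological": 3.0, "behavioural": 2.0, "socio-economic": 2.0,
                      "cultural": 1.0, "environmental": 1.0}

      def complexity_index(flags):
          """flags: domain -> list of five dichotomised (0/1) variables."""
          return sum(DOMAIN_SLOPE[d] * sum(v) for d, v in flags.items())

      patient = {"biological": [1, 1, 0, 1, 0], "behavioural": [1, 0, 0, 0, 0],
                 "socio-economic": [0, 1, 0, 0, 0], "cultural": [0, 0, 0, 0, 0],
                 "environmental": [1, 0, 0, 0, 0]}
      print(complexity_index(patient))   # 3*3 + 2*1 + 2*1 + 1*0 + 1*1 = 14.0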

  13. Modeling Complex Workflow in Molecular Diagnostics

    PubMed Central

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  14. SToRM: A Model for Unsteady Surface Hydraulics Over Complex Terrain

    USGS Publications Warehouse

    Simoes, Francisco J.

    2014-01-01

    A two-dimensional (depth-averaged) finite volume Godunov-type shallow water model developed for flow over complex topography is presented. The model is based on an unstructured cell-centered finite volume formulation and a nonlinear strong stability preserving Runge-Kutta time stepping scheme. The numerical discretization is founded on the classical and well established shallow water equations in hyperbolic conservative form, but the convective fluxes are calculated using auto-switching Riemann and diffusive numerical fluxes. The model’s implementation within a graphical user interface is discussed. Field application of the model is illustrated by utilizing it to estimate peak flow discharges in a flooding event of historic significance in Colorado, U.S.A., in 2013.
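
    For reference, the classical shallow water equations in hyperbolic conservative form, on which the model is founded, read (standard notation, not quoted from the report):

      \frac{\partial \mathbf{U}}{\partial t}
        + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
        + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S},
      \qquad
      \mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix}, \quad
      \mathbf{F} = \begin{pmatrix} hu \\ hu^{2} + \tfrac{1}{2} g h^{2} \\ huv \end{pmatrix}, \quad
      \mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^{2} + \tfrac{1}{2} g h^{2} \end{pmatrix},

    where h is the flow depth, (u, v) the depth-averaged velocity, g the gravitational acceleration, and S collects the bed-slope and friction source terms.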

  15. Modeling the Effect of APC Truncation on Destruction Complex Function in Colorectal Cancer Cells

    PubMed Central

    Barua, Dipak; Hlavacek, William S.

    2013-01-01

    In colorectal cancer cells, APC, a tumor suppressor protein, is commonly expressed in truncated form. Truncation of APC is believed to disrupt degradation of β-catenin, which is regulated by a multiprotein complex called the destruction complex. The destruction complex comprises APC, Axin, β-catenin, serine/threonine kinases, and other proteins. The kinases CK1α and GSK3β, which are recruited by Axin, mediate phosphorylation of β-catenin, which initiates its ubiquitination and proteasomal degradation. The mechanism of regulation of β-catenin degradation by the destruction complex and the role of truncation of APC in colorectal cancer are not entirely understood. Through formulation and analysis of a rule-based computational model, we investigated the regulation of β-catenin phosphorylation and degradation by APC and the effect of APC truncation on function of the destruction complex. The model integrates available mechanistic knowledge about site-specific interactions and phosphorylation of destruction complex components and is consistent with an array of published data. We find that the phosphorylated truncated form of APC can outcompete Axin for binding to β-catenin, provided that Axin is limiting, and thereby sequester β-catenin away from Axin and the Axin-recruited kinases CK1α and GSK3β. Full-length APC also competes with Axin for binding to β-catenin; however, full-length APC is able, through its SAMP repeats, which bind Axin and which are missing in truncated oncogenic forms of APC, to bring β-catenin into indirect association with Axin and Axin-recruited kinases. Because our model indicates that the positive effects of truncated APC on β-catenin levels depend on phosphorylation of APC, at the first 20-amino acid repeat, and because phosphorylation of this site is mediated by CK1α, we suggest that CK1α is a potential target for therapeutic intervention in colorectal cancer. Specific inhibition of CK1α is predicted to limit binding of β-catenin to truncated APC.

  16. Complex networks repair strategies: Dynamic models

    NASA Astrophysics Data System (ADS)

    Fu, Chaoqi; Wang, Ying; Gao, Yangjun; Wang, Xiaoyang

    2017-09-01

    Network repair strategies are tactical methods that restore the efficiency of damaged networks; however, unreasonable repair strategies not only waste resources, they are also ineffective for network recovery. Most extant research on network repair focuses on static networks, but results and findings on static networks cannot be applied to evolutionary dynamic networks because, in dynamic models, complex network repair has completely different characteristics. For instance, repaired nodes face more severe challenges, and require strategic repair methods in order to have a significant effect. In this study, we propose the Shell Repair Strategy (SRS) to minimize the risk of secondary node failures due to the cascading effect. Our proposed method includes the identification of a set of vital nodes that have a significant impact on network repair and defense. Our identification of these vital nodes reduces the number of switching nodes that face the risk of secondary failures during the dynamic repair process. This is positively correlated with the size of the average degree 〈k〉 and enhances network invulnerability.

  17. The multiple complex exponential model and its application to EEG analysis

    NASA Astrophysics Data System (ADS)

    Chen, Dao-Mu; Petzold, J.

    The paper presents a novel approach to the analysis of the EEG signal, which is based on a multiple complex exponential (MCE) model. Parameters of the model are estimated using a nonharmonic Fourier expansion algorithm. The central idea of the algorithm is outlined, and the results, estimated on the basis of simulated data, are presented and compared with those obtained by the conventional methods of signal analysis. Preliminary work on various application possibilities of the MCE model in EEG data analysis is described. It is shown that the parameters of the MCE model reflect the essential information contained in an EEG segment. These parameters characterize the EEG signal in a more objective way because they are more consistent with the recent supposition that the brain's dynamic behavior is nonlinear.
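
    Since the abstract does not reproduce the equation, the MCE model can be stated in its standard form as a sum of damped complex exponentials:

      x(n) \;=\; \sum_{k=1}^{p} A_k \, e^{(\alpha_k + i\,\omega_k)\, n T} \;=\; \sum_{k=1}^{p} A_k \, z_k^{\,n},

    where T is the sampling interval of the EEG segment and each component k carries a complex amplitude A_k (magnitude and phase), a damping factor α_k and an angular frequency ω_k; the nonharmonic Fourier expansion estimates the poles z_k without restricting the ω_k to harmonics of a fundamental frequency.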

  18. Coupled metal partitioning dynamics and toxicodynamics at biointerfaces: a theory beyond the biotic ligand model framework.

    PubMed

    Duval, Jérôme F L

    2016-04-14

    A mechanistic understanding of the processes governing metal toxicity to microorganisms (bacteria, algae) calls for an adequate formulation of metal partitioning at biointerfaces during cell exposure. This includes the account of metal transport dynamics from bulk solution to biomembrane and the kinetics of metal internalisation, both potentially controlling the intracellular and surface metal fractions that give rise to cell growth inhibition. A theoretical rationale is developed here for such coupled toxicodynamics and interfacial metal partitioning dynamics under non-complexing medium conditions with integration of the defining cell electrostatic properties. The formalism explicitly considers intertwined metal adsorption at the biointerface, intracellular metal excretion, cell growth and metal depletion from bulk solution. The theory is derived under relevant steady-state metal transport conditions on the basis of the coupled Nernst-Planck equation and a continuous logistic equation modified to include metal-induced cell growth inhibition and cell size changes. Computational examples are discussed to identify limitations of the classical Biotic Ligand Model (BLM) in evaluating metal toxicity over time. In particular, BLM is shown to severely underestimate metal toxicity depending on cell exposure time, metal internalisation kinetics, cell surface electrostatics and initial cell density. Analytical expressions are provided for the interfacial metal concentration profiles in the limit where cell growth is completely inhibited. A rigorous relationship between time-dependent cell density and metal concentrations at the biosurface and in bulk solution is further provided, which unifies previous equations formulated by Best and Duval under constant cell density and cell size conditions. The theory is sufficiently flexible to adapt to toxicity scenarios with involved cell survival-death processes.
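
    Schematically, and only as a hedged illustration of the coupling invoked above (the paper's full formalism, with electrostatics and cell-size changes, is considerably richer), a logistic equation modified by metal-induced inhibition can be written as:

      \frac{\mathrm{d}X}{\mathrm{d}t} \;=\; \mu\, X \left(1 - \frac{X}{X_{\max}}\right) \;-\; k_{\mathrm{tox}}\, \Gamma(t)\, X,

    where X(t) is the cell density, μ the intrinsic growth rate, X_max the carrying capacity, and Γ(t) the internalised or surface-bound metal fraction supplied by the steady-state Nernst-Planck flux; the notation here is illustrative rather than the author's.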

  19. Evaluation of 2D shallow-water model for spillway flow with a complex geometry

    USDA-ARS?s Scientific Manuscript database

    Although the two-dimensional (2D) shallow water model is formulated on the basis of several assumptions, such as a hydrostatic pressure distribution and negligible vertical velocity, it has been used, as a simple alternative to the complex 3D model, to compute water flows in which these assumptions may be ...

  20. Mathematical Description of Complex Chemical Kinetics and Application to CFD Modeling Codes

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1993-01-01

    A major effort in combustion research at the present time is devoted to the theoretical modeling of practical combustion systems. These include turbojet and ramjet air-breathing engines as well as ground-based gas-turbine power generating systems. The ability to use computational modeling extensively in designing these products not only saves time and money, but also helps designers meet the quite rigorous environmental standards that have been imposed on all combustion devices. The goal is to combine the very complex solution of the Navier-Stokes flow equations with realistic turbulence and heat-release models into a single computer code. Such a computational fluid-dynamic (CFD) code simulates the coupling of fluid mechanics with the chemistry of combustion to describe the practical devices. This paper will focus on the task of developing a simplified chemical model which can predict realistic heat-release rates as well as species composition profiles, and is also computationally rapid. We first discuss the mathematical techniques used to describe a complex, multistep fuel oxidation chemical reaction and develop a detailed mechanism for the process. We then show how this mechanism may be reduced and simplified to give an approximate model which adequately predicts heat release rates and a limited number of species composition profiles, but is computationally much faster than the original one. Only such a model can be incorporated into a CFD code without adding significantly to long computation times. Finally, we present some of the recent advances in the development of these simplified chemical mechanisms.
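
    As a minimal sketch of how a reduced mechanism of this kind is integrated in practice, consider an illustrative two-step global scheme (fuel -> intermediate -> products) with Arrhenius rates; the rate constants below are invented for the example and are not from the paper.

      # Sketch: integrate a two-step global oxidation scheme F -> I -> P with
      # Arrhenius rates, the kind of reduced model a CFD code can afford per cell.
      import numpy as np
      from scipy.integrate import solve_ivp

      A1, E1 = 5.0e8, 1.2e5        # illustrative pre-exponentials (1/s) and
      A2, E2 = 2.0e7, 8.0e4        # activation energies (J/mol)
      R = 8.314

      def rhs(t, y, T=1500.0):     # fixed temperature, for simplicity
          f, i, p = y
          r1 = A1 * np.exp(-E1 / (R * T)) * f
          r2 = A2 * np.exp(-E2 / (R * T)) * i
          return [-r1, r1 - r2, r2]

      sol = solve_ivp(rhs, (0.0, 1e-2), [1.0, 0.0, 0.0], method="LSODA")
      print(sol.y[:, -1])          # fuel, intermediate, product fractions at 10 ms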

  2. A general model to explore complex dominance patterns in plant sporophytic self-incompatibility systems.

    PubMed

    Billiard, Sylvain; Castric, Vincent; Vekemans, Xavier

    2007-03-01

    We developed a general model of sporophytic self-incompatibility under negative frequency-dependent selection allowing complex patterns of dominance among alleles. We used this model deterministically to investigate the effects on equilibrium allelic frequencies of the number of dominance classes, the number of alleles per dominance class, the asymmetry in dominance expression between pollen and pistil, and whether selection acts on male fitness only or both on male and on female fitnesses. We show that the so-called "recessive effect" occurs under a wide variety of situations. We found emerging properties of finite population models with several alleles per dominance class such as that higher numbers of alleles are maintained in more dominant classes and that the number of dominance classes can evolve. We also investigated the occurrence of homozygous genotypes and found that substantial proportions of those can occur for the most recessive alleles. We used the model for two species with complex dominance patterns to test whether allelic frequencies in natural populations are in agreement with the distribution predicted by our model. We suggest that the model can be used to test explicitly for additional, allele-specific, selective forces.
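
    The negative frequency-dependent selection at the heart of the model can be caricatured in a few lines: common alleles are rejected by more pistils, so rare alleles gain. The recursion below deliberately omits dominance classes and diploid genotypes and is only meant to show the driving force, not the authors' model.

      # Caricature of negative frequency-dependent selection on S-alleles.
      import numpy as np

      p = np.array([0.7, 0.2, 0.1])      # starting allele frequencies
      for _ in range(200):
          w = 1.0 - p                    # fitness falls with an allele's own frequency
          p = p * w / np.sum(p * w)      # standard selection recursion
      print(np.round(p, 3))              # -> [0.333 0.333 0.333] (equal frequencies)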

  3. Atomic level insights into realistic molecular models of dendrimer-drug complexes through MD simulations.

    PubMed

    Jain, Vaibhav; Maiti, Prabal K; Bharatam, Prasad V

    2016-09-28

    Computational studies performed on dendrimer-drug complexes usually consider 1:1 stoichiometry, which is far from reality, since in experiments a larger number of drug molecules become encapsulated inside a dendrimer. In the present study, molecular dynamics (MD) simulations were implemented to characterize more realistic molecular models of dendrimer-drug complexes (1:n stoichiometry), in order to understand the effect of high drug loading on the structural properties and to unveil the atomistic-level details. For this purpose, possible inclusion complexes of the model drug Nateglinide (Ntg) (an antidiabetic, belonging to Biopharmaceutics Classification System class II) with amine- and acetyl-terminated G4 poly(amidoamine) (G4 PAMAM(NH2) and G4 PAMAM(Ac)) dendrimers at neutral and low pH conditions are explored in this work. MD simulation analysis of the dendrimer-drug complexes revealed that the drug encapsulation efficiency of the G4 PAMAM(NH2) and G4 PAMAM(Ac) dendrimers at neutral pH was 6 and 5, respectively, while at low pH it was 12 and 13, respectively. Center-of-mass distance analysis showed that most of the drug molecules are located in the interior hydrophobic pockets of G4 PAMAM(NH2) at both pH conditions, while in the case of G4 PAMAM(Ac) most of them are distributed near the surface at neutral pH and in the interior hydrophobic pockets at low pH. Structural properties such as the radius of gyration, shape, radial density distribution, and solvent accessible surface area of the dendrimer-drug complexes were also assessed and compared with those of the drug-unloaded dendrimers. Further, binding energy calculations using the molecular mechanics Poisson-Boltzmann surface area approach revealed that the location of the drug molecules in the dendrimer is not the decisive factor for the higher or lower binding affinity of the complex; rather, the charged state of the dendrimer and drug, intermolecular interactions, pH-induced conformational changes, and the surface groups of the dendrimer do play an important role.

  4. Atomic level insights into realistic molecular models of dendrimer-drug complexes through MD simulations

    NASA Astrophysics Data System (ADS)

    Jain, Vaibhav; Maiti, Prabal K.; Bharatam, Prasad V.

    2016-09-01

    Computational studies performed on dendrimer-drug complexes usually consider 1:1 stoichiometry, which is far from reality, since in experiments a larger number of drug molecules become encapsulated inside a dendrimer. In the present study, molecular dynamics (MD) simulations were implemented to characterize more realistic molecular models of dendrimer-drug complexes (1:n stoichiometry), in order to understand the effect of high drug loading on the structural properties and to unveil the atomistic-level details. For this purpose, possible inclusion complexes of the model drug Nateglinide (Ntg) (an antidiabetic, belonging to Biopharmaceutics Classification System class II) with amine- and acetyl-terminated G4 poly(amidoamine) (G4 PAMAM(NH2) and G4 PAMAM(Ac)) dendrimers at neutral and low pH conditions are explored in this work. MD simulation analysis of the dendrimer-drug complexes revealed that the drug encapsulation efficiency of the G4 PAMAM(NH2) and G4 PAMAM(Ac) dendrimers at neutral pH was 6 and 5, respectively, while at low pH it was 12 and 13, respectively. Center-of-mass distance analysis showed that most of the drug molecules are located in the interior hydrophobic pockets of G4 PAMAM(NH2) at both pH conditions, while in the case of G4 PAMAM(Ac) most of them are distributed near the surface at neutral pH and in the interior hydrophobic pockets at low pH. Structural properties such as the radius of gyration, shape, radial density distribution, and solvent accessible surface area of the dendrimer-drug complexes were also assessed and compared with those of the drug-unloaded dendrimers. Further, binding energy calculations using the molecular mechanics Poisson-Boltzmann surface area approach revealed that the location of the drug molecules in the dendrimer is not the decisive factor for the higher or lower binding affinity of the complex; rather, the charged state of the dendrimer and drug, intermolecular interactions, pH-induced conformational changes, and the surface groups of the dendrimer do play an important role.
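
    The two per-frame observables used above (center-of-mass separation and radius of gyration) are straightforward to compute from a trajectory; a minimal sketch with MDAnalysis follows, in which the file names and residue selections are placeholders, not the authors' actual setup.

      # Per-frame dendrimer-drug center-of-mass distance and dendrimer radius
      # of gyration; topology/trajectory names and selections are placeholders.
      import numpy as np
      import MDAnalysis as mda

      u = mda.Universe("complex.prmtop", "complex.dcd")
      dendrimer = u.select_atoms("resname PAM")
      drug = u.select_atoms("resname NTG")

      for ts in u.trajectory[::100]:           # every 100th frame
          d = np.linalg.norm(dendrimer.center_of_mass() - drug.center_of_mass())
          print(ts.time, d, dendrimer.radius_of_gyration())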

  5. Practical aspects of complex permittivity reconstruction with neural-network-controlled FDTD modeling of a two-port fixture.

    PubMed

    Eves, E Eugene; Murphy, Ethan K; Yakovlev, Vadim V

    2007-01-01

    The paper discusses characteristics of a new modeling-based technique for determining the dielectric properties of materials. Complex permittivity is found with an optimization algorithm designed to match complex S-parameters obtained from measurements and from 3D FDTD simulation. The method is developed on a two-port (waveguide-type) fixture and deals with complex reflection and transmission characteristics at the frequency of interest. The computational part is constructed as an inverse-RBF-network-based procedure that reconstructs the dielectric constant and loss factor of the sample from the FDTD modeling data sets and the measured reflection and transmission coefficients. As such, it is applicable to samples and cavities of arbitrary configuration provided that the geometry of the experimental setup is adequately represented by the FDTD model. The practical implementation of the method considered in this paper is a section of a WR975 waveguide containing a sample of a liquid in a cylindrical cutout of a rectangular Teflon cup. The method is run in two stages and employs two databases: the first, built for a sparse grid on the complex permittivity plane, is used to locate a domain containing the anticipated solution; the second, a denser grid covering the determined domain, is used to find the exact location of the complex permittivity point. Numerical tests demonstrate that the computational part of the method is highly accurate even when the modeling data are represented by relatively small data sets. When working with reflection and transmission coefficients measured in an actual experimental fixture and reconstructing a low dielectric constant and loss factor, the technique may be less accurate. It is shown that the employed neural network is capable of finding the complex permittivity of the sample when the experimental reflection and transmission coefficients are numerically dispersive (noise-contaminated). A special modeling test is proposed for validating the
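
    The two-stage inverse mapping can be sketched with a generic RBF interpolator standing in for the paper's inverse RBF network, and a toy analytic function standing in for the FDTD solver; everything below is an assumption-laden illustration of the idea, not the published procedure.

      # Sketch of the inverse-RBF idea: learn the map from simulated
      # S-parameters back to (dielectric constant, loss factor), then evaluate
      # it at the measured S-parameters. A toy forward model replaces FDTD.
      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def forward(eps):                        # toy stand-in for the FDTD run
          er, ef = eps[..., 0], eps[..., 1]
          return np.stack([np.log(er) + 0.1 * ef, er - 0.5 * ef], axis=-1)

      # Stage 1: sparse grid over the complex-permittivity plane
      grid = np.array([[er, ef] for er in np.linspace(1, 80, 12)
                                for ef in np.linspace(0, 20, 8)])
      inverse = RBFInterpolator(forward(grid), grid)   # S-params -> permittivity

      measured = forward(np.array([30.0, 5.0]))        # pretend measurement
      print(inverse(measured[None, :]))                # ~ [30, 5]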

  6. Analysis of undergraduate students' conceptual models of a complex biological system across a diverse body of learners

    NASA Astrophysics Data System (ADS)

    Dirnbeck, Matthew R.

    Biological systems pose a challenge both for learners and teachers because they are complex systems mediated by feedback loops; networks of cause-effect relationships; and non-linear, hierarchical, and emergent properties. Teachers and scientists routinely use models to communicate ideas about complex systems. Model-based pedagogies engage students in model construction as a means of practicing higher-order reasoning skills. One such modeling paradigm describes systems in terms of their structures, behaviors, and functions (SBF). The SBF framework is a simple modeling language that has been used to teach about complex biological systems. Here, we used student-generated SBF models to assess students' causal reasoning in the context of a novel biological problem on an exam. We compared students' performance on the modeling problem, their performance on a set of knowledge/comprehension questions, and their performance on a set of scientific reasoning questions. We found that students who performed well on knowledge and understanding questions also constructed more networked, higher quality models. Previous studies have shown that learners' mental maps increase in complexity with increased expertise. We wanted to investigate whether biology students with varying levels of training in biology showed a similar pattern when constructing system models. In a pilot study, we administered the same modeling problem to two additional groups of students: 1) an animal physiology course for students pursuing a major in biology (n=37) and 2) an exercise physiology course for non-majors (n=27). We found that there was no significant difference in model organization across the three student populations, but there was a significant difference in the ability to represent function among the three populations. Among the three groups, the non-majors had the lowest function scores, the introductory majors had the middle function scores, and the upper division majors had the highest function scores.

  7. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    DOE PAGES

    Bloch, Jacques; Glesaaen, Jonas; Verbaarschot, Jacobus J. M.; ...

    2018-03-06

    In this study we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.
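
    The algorithm itself is easy to illustrate on a one-variable toy model with a complex Gaussian action (a far milder setting than the random matrix model studied in the paper): the variable is complexified and evolved with the drift -dS/dz plus real noise.

      # Complex Langevin sketch for S(z) = 0.5*s*z^2 with complex s: the real
      # variable is complexified to z and evolved with drift -dS/dz plus real
      # noise; long-run averages estimate <z^2>, whose exact value is 1/s.
      import numpy as np

      rng = np.random.default_rng(1)
      s = 1.0 + 1.0j                   # complex "mass" makes exp(-S) non-positive
      z, dt, samples = 0.0 + 0.0j, 1e-3, []

      for step in range(400_000):
          drift = -s * z               # -dS/dz for S = 0.5*s*z^2
          z = z + drift * dt + np.sqrt(2 * dt) * rng.standard_normal()
          if step > 50_000:            # discard thermalization
              samples.append(z * z)

      print(np.mean(samples), "expected", 1 / s)   # 1/s = 0.5 - 0.5j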

  8. Complex Langevin simulation of a random matrix model at nonzero chemical potential

    NASA Astrophysics Data System (ADS)

    Bloch, J.; Glesaaen, J.; Verbaarschot, J. J. M.; Zafeiropoulos, S.

    2018-03-01

    In this paper we test the complex Langevin algorithm for numerical simulations of a random matrix model of QCD with a first order phase transition to a phase of finite baryon density. We observe that a naive implementation of the algorithm leads to phase quenched results, which were also derived analytically in this article. We test several fixes for the convergence issues of the algorithm, in particular the method of gauge cooling, the shifted representation, the deformation technique and reweighted complex Langevin, but only the latter method reproduces the correct analytical results in the region where the quark mass is inside the domain of the eigenvalues. In order to shed more light on the issues of the methods we also apply them to a similar random matrix model with a milder sign problem and no phase transition, and in that case gauge cooling solves the convergence problems as was shown before in the literature.

  10. A practical approach for comparing management strategies in complex forest ecosystems using meta-modelling toolkits

    Treesearch

    Andrew Fall; B. Sturtevant; M.-J. Fortin; M. Papaik; F. Doyon; D. Morgan; K. Berninger; C. Messier

    2010-01-01

    The complexity and multi-scaled nature of forests pose significant challenges to understanding and management. Models can provide useful insights into processes and their interactions, and into the implications of alternative management options. Most models, particularly scientific models, focus on a relatively small set of processes and are designed to operate within a...

  11. A Multilevel Multiset Time-Series Model for Describing Complex Developmental Processes

    PubMed Central

    Ma, Xin; Shen, Jianping

    2017-01-01

    The authors sought to develop an analytical platform where multiple sets of time series can be examined simultaneously. This multivariate platform, capable of testing interaction effects among multiple sets of time series, can be very useful in empirical research. The authors demonstrated that the multilevel framework can readily accommodate this analytical capacity. Although intended to pursue complicated research purposes, the resulting multilevel multiset time-series model is relatively simple to specify, to run, and to interpret. These advantages make the adoption of the model relatively effortless as long as researchers have basic knowledge and skills in working with multilevel growth modeling. With multiple potential extensions of the model, the establishment of this analytical platform for the analysis of multiple sets of time series can inspire researchers to pursue far more advanced research designs to address complex developmental processes in reality. PMID:29881094

  12. A growth model for directed complex networks with power-law shape in the out-degree distribution

    PubMed Central

    Esquivel-Gómez, J.; Stevens-Navarro, E.; Pineda-Rico, U.; Acosta-Elias, J.

    2015-01-01

    Many growth models have been published to model the behavior of real complex networks. These models are able to reproduce several of the topological properties of such networks. However, in most of these growth models, the number of outgoing links (i.e., the out-degree) of nodes added to the network is constant; that is, all nodes in the network are born with the same number of outgoing links. In other models, the resulting out-degree distribution decays as a Poisson or an exponential distribution. However, it has been found that in real complex networks the out-degree distribution decays as a power law. In order to obtain out-degree distributions with power-law behavior, some models have been proposed. This work introduces a new model that yields out-degree distributions that decay as a power law with an exponent in the range from 0 to 1. PMID:25567141
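
    A generic recipe for this kind of construction is sketched below: each new node draws its out-degree from a heavy-tailed (Zipf) distribution before attaching. The attachment rule and exponent here are illustrative and are not the specific mechanism introduced in the paper.

      # Sketch: grow a directed network where each new node draws its
      # out-degree from a discrete power law (Zipf) and attaches to existing
      # nodes uniformly at random.
      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(2)
      out_degree = {0: 0}
      for new in range(1, 20_000):
          k = min(int(rng.zipf(2.5)), new)      # heavy-tailed out-degree draw
          targets = rng.choice(new, size=k, replace=False)  # the new node's out-links
          out_degree[new] = k

      counts = Counter(out_degree.values())
      for k in sorted(counts)[:8]:
          print(k, counts[k])                   # roughly straight on log-log axes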

  13. Determination of timescales of nitrate contamination by groundwater age models in a complex aquifer system

    NASA Astrophysics Data System (ADS)

    Koh, E. H.; Lee, E.; Kaown, D.; Lee, K. K.; Green, C. T.

    2017-12-01

    The timing and magnitude of nitrate contamination are determined by various factors, such as contaminant loading, recharge characteristics, and the geologic system. The elapsed time since water recharged and traveled to a given outlet location, defined as the groundwater age, provides indirect information on the hydrologic characteristics of the aquifer system. There are three major methods for dating groundwater (apparent ages, the lumped parameter model, and the numerical model), which characterize differently the groundwater mixing that results from the various flow pathways in a heterogeneous aquifer system. Therefore, in this study, we compared the three age models in a complex aquifer system using observed age tracer data and a history of nitrate contamination reconstructed from long-term source loading. The 3H-3He and CFC-12 apparent ages, which do not account for groundwater mixing, gave the most delayed response times and implied that the peak period of nitrate loading had not yet arrived. The lumped parameter model, however, produced a more recent loading response than the apparent ages, with the peak loading period influencing the water quality. The numerical model could delineate the various groundwater mixing components and their different impacts on nitrate dynamics in the complex aquifer system. The different age estimation methods thus led to variations in the estimated contaminant loading history, with the discrepancies most pronounced in the complex aquifer system.
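
    For the lumped parameter approach mentioned above, the outlet concentration is the input history convolved with a transit-time distribution; a minimal sketch with an exponential model and invented numbers follows.

      # Lumped parameter model sketch: outlet concentration is the nitrate
      # input history convolved with a transit-time distribution g(tau); here
      # an exponential model with mean age tau_m = 20 yr (illustrative values).
      import numpy as np

      years = np.arange(1950, 2018)
      c_in = np.clip((years - 1960) * 0.5, 0, 25)   # synthetic loading history (mg/L)

      tau_m = 20.0
      tau = np.arange(len(years))
      g = np.exp(-tau / tau_m) / tau_m              # exponential TTD, 1-yr steps

      c_out = np.convolve(c_in, g)[:len(years)]     # c_out(t) = sum g(tau) c_in(t-tau)
      print(dict(zip(years[::10], np.round(c_out[::10], 2))))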

  14. A model based bayesian solution for characterization of complex damage scenarios in aerospace composite structures.

    PubMed

    Reed, H; Leckey, Cara A C; Dick, A; Harvey, G; Dobson, J

    2018-01-01

    Ultrasonic damage detection and characterization is commonly used in nondestructive evaluation (NDE) of aerospace composite components. In recent years there has been an increased development of guided wave based methods. In real materials and structures, these dispersive waves result in complicated behavior in the presence of complex damage scenarios. Model-based characterization methods utilize accurate three-dimensional finite element models (FEMs) of guided wave interaction with realistic damage scenarios to aid in defect identification and classification. This work describes an inverse solution for realistic composite damage characterization by comparing the wavenumber-frequency spectra of experimental and simulated ultrasonic inspections. The composite laminate material properties are first verified through a Bayesian solution (Markov chain Monte Carlo), enabling uncertainty quantification surrounding the characterization. A study is undertaken to assess the efficacy of the proposed damage model and comparative metrics between the experimental and simulated output. The FEM is then parameterized with a damage model capable of describing the typical complex damage created by impact events in composites. The damage is characterized through a transdimensional Markov chain Monte Carlo solution, enabling a flexible damage model capable of adapting to the complex damage geometry investigated here. The posterior probability distributions of the individual delamination petals as well as the overall envelope of the damage site are determined.
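
    The Bayesian machinery can be sketched with a generic random-walk Metropolis sampler over a fixed-dimension damage parameter vector, with a quadratic surrogate standing in for the FEM-versus-experiment wavenumber-frequency misfit; the transdimensional moves of the paper would be layered on top of this.

      # Sketch: random-walk Metropolis over damage parameters theta, where the
      # likelihood penalizes misfit between simulated and measured spectra.
      # The quadratic surrogate below replaces the FEM + spectra comparison.
      import numpy as np

      rng = np.random.default_rng(3)
      theta_true = np.array([2.0, 0.5])        # e.g. delamination radius, depth

      def misfit(theta):                       # surrogate for spectra comparison
          return np.sum((theta - theta_true) ** 2)

      def log_post(theta, sigma=0.1):
          return -misfit(theta) / (2 * sigma**2)   # flat prior assumed

      theta, chain = np.zeros(2), []
      for _ in range(20_000):
          prop = theta + 0.05 * rng.standard_normal(2)
          if np.log(rng.random()) < log_post(prop) - log_post(theta):
              theta = prop                     # accept
          chain.append(theta)

      print(np.mean(chain[5000:], axis=0))     # posterior mean ~ theta_true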

  16. Modeling Increased Complexity and the Reliance on Automation: FLightdeck Automation Problems (FLAP) Model

    NASA Technical Reports Server (NTRS)

    Ancel, Ersin; Shih, Ann T.

    2014-01-01

    This paper highlights the development of a model that is focused on the safety issue of increasing complexity and reliance on automation systems in transport category aircraft. Recent statistics show an increase in mishaps related to manual handling and automation errors due to pilot complacency and over-reliance on automation, loss of situational awareness, automation system failures and/or pilot deficiencies. Consequently, the aircraft can enter a state outside the flight envelope and/or air traffic safety margins which potentially can lead to loss-of-control (LOC), controlled-flight-into-terrain (CFIT), or runway excursion/confusion accidents, etc. The goal of this modeling effort is to provide NASA's Aviation Safety Program (AvSP) with a platform capable of assessing the impacts of AvSP technologies and products towards reducing the relative risk of automation related accidents and incidents. In order to do so, a generic framework, capable of mapping both latent and active causal factors leading to automation errors, is developed. Next, the framework is converted into a Bayesian Belief Network model and populated with data gathered from Subject Matter Experts (SMEs). With the insertion of technologies and products, the model provides individual and collective risk reduction acquired by technologies and methodologies developed within AvSP.

  17. Modeling Dynamics of Culex pipiens Complex Populations and Assessing Abatement Strategies for West Nile Virus

    PubMed Central

    Pawelek, Kasia A.; Hager, Elizabeth J.; Hunt, Gregg J.

    2014-01-01

    The primary mosquito species associated with underground stormwater systems in the United States are the Culex pipiens complex species. This group represents important vectors of West Nile virus (WNV) throughout regions of the continental U.S. In this study, we designed a mathematical model and compared it with surveillance data for the Cx. pipiens complex collected in Beaufort County, South Carolina. Based on the best fit of the model to the data, we estimated parameters associated with the effectiveness of public health insecticide (adulticide) treatments (primarily pyrethrin products) as well as the birth, maturation, and death rates of immature and adult Cx. pipiens complex mosquitoes. We used these estimates for modeling the spread of WNV to obtain more reliable disease outbreak predictions and performed numerical simulations to test various mosquito abatement strategies. We demonstrated that insecticide treatments produced significant reductions in the Cx. pipiens complex populations. However, abatement efforts were effective for approximately one day and the vector mosquitoes rebounded until the next treatment. These results suggest that frequent insecticide applications are necessary to control these mosquitoes. We derived the basic reproductive number (ℜ0) to predict the conditions under which disease outbreaks are likely to occur and to evaluate mosquito abatement strategies. We concluded that enhancing the mosquito death rate results in lower values of ℜ0, and if ℜ0<1, then an epidemic will not occur. Our modeling results provide insights about control strategies of the vector populations and, consequently, a potential decrease in the risk of a WNV outbreak. PMID:25268229
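
    The population component of such a model can be sketched as a two-stage (immature/adult) ODE system with pulsed adulticide treatments; all parameter values below are illustrative, not the estimates fitted to the Beaufort County data.

      # Sketch: immature (L) / adult (A) Culex dynamics with an adulticide
      # pulse that removes a fraction of adults every 7 days.
      import numpy as np
      from scipy.integrate import solve_ivp

      b, m, muL, muA = 10.0, 0.1, 0.2, 0.08    # birth, maturation, death rates (1/day)
      K = 5000.0                               # larval carrying capacity

      def rhs(t, y):
          L, A = y
          return [b * A * (1 - L / K) - (m + muL) * L, m * L - muA * A]

      y, t0, out = [1000.0, 500.0], 0.0, []
      for week in range(8):                    # integrate week by week
          sol = solve_ivp(rhs, (t0, t0 + 7.0), y, max_step=0.1)
          y = [sol.y[0, -1], 0.3 * sol.y[1, -1]]   # adulticide kills 70% of adults
          t0 += 7.0
          out.append(round(y[1]))

      print(out)                               # adults rebound between treatments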

  18. Phenomenological model to fit complex permittivity data of water from radio to optical frequencies.

    PubMed

    Shubitidze, Fridon; Osterberg, Ulf

    2007-04-01

    A general factorized form of the dielectric function together with a fractional model-based parameter estimation method is used to provide an accurate analytical formula for the complex refractive index in water for the frequency range 10^8-10^16 Hz. The analytical formula is derived using a combination of a microscopic frequency-dependent rational function for adjusting zeros and poles of the dielectric dispersion together with the macroscopic statistical Fermi-Dirac distribution to provide a description of both the real and imaginary parts of the complex permittivity for water. The Fermi-Dirac distribution allows us to model the dramatic reduction in the imaginary part of the permittivity in the visible window of the water spectrum.

  19. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations.

  20. On the use of Empirical Data to Downscale Non-scientific Scepticism About Results From Complex Physical Based Models

    NASA Astrophysics Data System (ADS)

    Germer, S.; Bens, O.; Hüttl, R. F.

    2008-12-01

    The scepticism of non-scientific local stakeholders about results from complex physically based models is a major problem for the development and implementation of local climate change adaptation measures. This scepticism originates from the high complexity of such models. Local stakeholders perceive complex models as black-box models, as it is impossible to grasp all the underlying assumptions and mathematically formulated processes at a glance. The use of physically based models is, however, indispensable for studying complex underlying processes and predicting future environmental changes. The increase in climate change adaptation efforts following the release of the latest IPCC report indicates that communicating facts about what has already changed is an appropriate tool to trigger climate change adaptation. Therefore we suggest increasing the practice of empirical data analysis in addition to modelling efforts. The analysis of time series can generate results that are easier for non-scientific stakeholders to comprehend. Temporal trends and seasonal patterns of selected hydrological parameters (precipitation, evapotranspiration, groundwater levels and river discharge) can be identified, and the dependence of trends and seasonal patterns on land use, topography and soil type can be highlighted. A discussion of lag times between the hydrological parameters can increase the awareness of local stakeholders of delayed environmental responses.