Science.gov

Sample records for complexes model experiments

  1. Experiment-guided molecular modeling of protein-protein complexes involving GPCRs

    PubMed Central

    Kufareva, Irina; Handel, Tracy M.

    2015-01-01

    Experimental structure determination for G protein coupled receptors (GPCRs), and especially for their complexes with protein and peptide ligands, is in its infancy. In the absence of complex structures, molecular modeling and docking play a large role, not only by providing a proper 3D context for interpretation of biochemical and biophysical data, but also by prospectively guiding experiments. Experimentally confirmed restraints may help improve the accuracy and information content of the computational models. Here we present a hybrid molecular modeling protocol that integrates heterogeneous experimental data with force field-based calculations in the stochastic global optimization of the conformations and relative orientations of binding partners. Some experimental data, such as pharmacophore-like chemical fields or disulfide-trapping restraints, can be seamlessly incorporated in the protocol, while other types of data are more useful at the stage of solution filtering. The protocol was successfully applied to the modeling and design of a stable construct that resulted in crystallization of the first complex between a chemokine and its receptor. Examples from this work are used to illustrate the steps of the protocol. The utility of different types of experimental data for modeling and docking is discussed and caveats associated with data misinterpretation are highlighted. PMID:26260608
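
    The scoring idea at the heart of such a protocol can be written compactly: candidate poses are ranked by a force-field energy plus penalties for violating experimental restraints. The following minimal Python sketch illustrates this with a flat-bottomed penalty for a disulfide-trapping distance restraint; the function names, penalty form, and numbers are our illustration, not the authors' code.

      def restraint_penalty(distance, lower, upper, weight=1.0):
          # Flat-bottomed harmonic penalty: zero inside [lower, upper],
          # quadratic outside (e.g. for a disulfide-trapping distance).
          if distance < lower:
              return weight * (lower - distance) ** 2
          if distance > upper:
              return weight * (distance - upper) ** 2
          return 0.0

      def hybrid_score(forcefield_energy, distances, restraints):
          # Total objective = force-field energy + restraint penalties.
          # `restraints` maps a residue-pair key to (lower, upper, weight).
          penalty = sum(restraint_penalty(distances[k], lo, hi, w)
                        for k, (lo, hi, w) in restraints.items())
          return forcefield_energy + penalty

      # One trapped pair expected within disulfide-bonding range (~4-7 A):
      score = hybrid_score(-152.3, {("R1", "L2"): 9.1},
                           {("R1", "L2"): (4.0, 7.0, 1.0)})
      print(score)  # -152.3 + (9.1 - 7.0)**2 = -147.89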

  2. Model complexity in carbon sequestration: A design of experiment and response surface uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, S.

    2014-12-01

    Geologic carbon sequestration (GCS) is proposed for the Nugget Sandstone in Moxa Arch, a regional saline aquifer with a large storage potential. For a proposed storage site, this study builds a suite of increasingly complex conceptual "geologic" model families, using subsets of the site characterization data: a homogeneous model family, a stationary petrophysical model family, a stationary facies model family with sub-facies petrophysical variability, and a non-stationary facies model family (with sub-facies variability) conditioned to soft data. These families, representing alternative conceptual site models built with increasing data, were simulated with the same CO2 injection test (50 years at 1/10 Mt per year), followed by 2950 years of monitoring. Using a Design of Experiment approach, an efficient sensitivity analysis (SA) is conducted for all families, systematically varying uncertain input parameters. Results are compared among the families to identify parameters that have a first-order impact on predicting the CO2 storage ratio (SR) at both the end of injection and the end of monitoring. At this site, geologic modeling factors do not significantly influence the short-term prediction of the storage ratio; they become important over the monitoring period, but only for the families in which such factors are represented. Based on the SA, a response surface analysis is conducted to generate prediction envelopes of the storage ratio, which are compared among the families at both times. Results suggest a large uncertainty in the predicted storage ratio given the uncertainties in model parameters and modeling choices: SR varies from 5-60% (end of injection) to 18-100% (end of monitoring), although its variation among the model families is relatively minor. Moreover, long-term leakage risk is considered small at the proposed site. In the lowest-SR scenarios, all families predict gravity-stable supercritical CO2 migrating toward the bottom of the aquifer. In the highest
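
    The screening step described above reduces, in the two-level case, to estimating main effects from a factorial design. A minimal Python sketch of that calculation (the three factor names and the toy response standing in for the reservoir simulator are our assumptions):

      from itertools import product
      import numpy as np

      factors = ["perm_mean", "anisotropy", "porosity"]   # hypothetical inputs
      design = np.array(list(product([-1.0, 1.0], repeat=len(factors))))

      def storage_ratio(x):
          # Stand-in response; a real study runs the CO2 injection and
          # monitoring simulation once per design point.
          return 0.4 + 0.15*x[0] - 0.05*x[1] + 0.02*x[2] + 0.03*x[0]*x[1]

      y = np.array([storage_ratio(row) for row in design])

      # Main effect = mean(response at high level) - mean(at low level);
      # this is the first-order screening used to rank parameters.
      for j, name in enumerate(factors):
          effect = y[design[:, j] > 0].mean() - y[design[:, j] < 0].mean()
          print(f"{name}: main effect on SR = {effect:+.3f}")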

  3. Photochemistry of iron(III)-carboxylato complexes in aqueous atmospheric particles - Laboratory experiments and modeling studies

    NASA Astrophysics Data System (ADS)

    Weller, C.; Tilgner, A.; Herrmann, H.

    2010-12-01

    Iron is always present in the atmosphere, in concentrations from ~10⁻⁹ M (clouds, rain) up to ~10⁻³ M (fog, particles). Sources are mainly mineral dust emissions. Iron complexes are very good absorbers in the UV-VIS actinic region and therefore photochemically reactive. Iron complex photolysis leads to radical production and can initiate radical chain reactions, which is related to the oxidizing capacity of the atmosphere. These radical chain reactions are involved in the decomposition and transformation of a variety of chemical compounds in cloud droplets and deliquescent particles. Additionally, the photochemical reaction itself can be a degradation pathway for organic compounds with the ability to bind iron. Iron complexes of atmospherically relevant coordination compounds such as oxalate, malonate, succinate, glutarate, tartronate, gluconate, pyruvate and glyoxylate have been investigated in laboratory experiments. Iron speciation depends on the iron-ligand ratio and the pH. The most suitable experimental conditions were calculated with a speciation program (Visual Minteq). The solutions were prepared accordingly, transferred to a 1 cm quartz cuvette and flash-photolyzed with an excimer laser at wavelengths of 308 or 351 nm. Photochemically produced Fe²⁺ was measured by spectrometry at 510 nm as Fe(phenanthroline)₃²⁺. Overall effective Fe²⁺ quantum yields were calculated from the concentration of photochemically produced Fe²⁺ and the measured energy of the excimer laser pulse. The laser pulse energy was measured with a pyroelectric sensor. For some iron-carboxylate systems, experimental parameters such as the oxygen content of the solution, the initial iron concentration and the incident laser energy were systematically altered to observe their effect on the overall quantum yield. The dependence of some quantum yields on these parameters allows, in some cases, an interpretation of the underlying photochemical reaction mechanism. Quantum yields of malonate
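
    The overall effective quantum yield described here is the ratio of Fe²⁺ produced to photons delivered. A minimal sketch of the arithmetic (assuming, for simplicity, that every photon in the pulse is absorbed; a real analysis corrects for the absorbed fraction and cuvette geometry):

      H, C, N_A = 6.626e-34, 2.998e8, 6.022e23  # Planck, light speed, Avogadro

      def quantum_yield(fe2_mol, pulse_energy_j, wavelength_m):
          # Photons per pulse = E / (h*c/lambda); convert to moles.
          photons_mol = pulse_energy_j * wavelength_m / (H * C) / N_A
          return fe2_mol / photons_mol

      # Example (invented numbers): 50 mJ pulse at 308 nm making 2e-9 mol Fe2+
      print(quantum_yield(2e-9, 0.05, 308e-9))  # ~0.016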

  4. A computational methodology for learning low-complexity surrogate models of processes from experiments or simulations. (Paper 679a)

    SciTech Connect

    Cozad, A.; Sahinidis, N.; Miller, D.

    2011-01-01

    Costly and/or insufficiently robust simulations or experiments can often pose difficulties when their use extends well beyond a single evaluation. This is the case with the numerous evaluations required for uncertainty quantification, with optimization (where an algebraic model is needed), as well as numerous other areas. To overcome these difficulties, we generate an accurate set of algebraic surrogate models of disaggregated process blocks of the experiment or simulation. We developed a method that uses derivative-based and derivative-free optimization alongside machine learning and statistical techniques to generate the set of surrogate models using data sampled from experiments or detailed simulations. Our method begins by building a low-complexity surrogate model for each block from an initial sample set. The model is built using a best-subset technique that leverages a mixed-integer linear problem formulation to allow for very large initial basis sets. The models are then tested, exploited, and improved through the use of derivative-free solvers to adaptively sample new simulation or experimental points. The sets of surrogate models from each disaggregated process block are then combined with heat and mass balances around each disaggregated block to generate a full algebraic model of the process. The full model can be used for cheap and accurate evaluations of the original simulation or experiment, or combined with design specifications and an objective for nonlinear optimization.
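
    The best-subset step can be pictured with a toy version: enumerate small subsets of a candidate basis, fit each by least squares, and keep the lowest-BIC model. The paper formulates this selection as a mixed-integer linear problem so that very large basis sets remain tractable; the exhaustive sketch below (our construction, with synthetic data) only works for a handful of terms and omits the adaptive-sampling loop.

      from itertools import combinations
      import numpy as np

      def best_subset(X, y, names, max_terms=3):
          n, best = len(y), (np.inf, None, None)
          for k in range(1, max_terms + 1):
              for cols in combinations(range(X.shape[1]), k):
                  beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
                  rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
                  bic = n * np.log(rss / n + 1e-12) + k * np.log(n)
                  if bic < best[0]:
                      best = (bic, [names[c] for c in cols], beta)
          return best

      rng = np.random.default_rng(0)
      x = rng.uniform(-2, 2, 40)
      X = np.column_stack([np.ones_like(x), x, x**2, x**3])
      y = 2*x + 0.5*x**2 + rng.normal(0, 0.05, x.size)  # true model: x, x^2
      print(best_subset(X, y, ["1", "x", "x^2", "x^3"]))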

  5. Complexity in forest fires: From simple experiments to nonlinear networked models

    NASA Astrophysics Data System (ADS)

    Buscarino, Arturo; Famoso, Carlo; Fortuna, Luigi; Frasca, Mattia; Xibilia, Maria Gabriella

    2015-05-01

    The evolution of natural phenomena in real environments often involves complex nonlinear interactions. Modeling them implies the characterization not only of a map of interactions and a dynamical process, but also of the peculiarities of the space in which the phenomena occur. The model presented in this paper encompasses all these aspects to formalize an innovative methodology for simulating the propagation of forest fires. It is based on a networked multilayer structure, allowing a flexible definition of the spatial properties of the medium and of the dynamical laws regulating fire propagation. The dynamical core of each node in the network is represented by a hyperbolic reaction-diffusion equation in which the intrinsic ignition characteristics of trees are considered. Furthermore, to define the simulation scenarios, an experimental setup in which the propagation of a fire wave in a small-scale medium can be observed has been realized. A number of simulations are then reported to illustrate the wide spectrum of scenarios that can be reproduced by the model.
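
    The networked structure can be appreciated with a much cruder discrete analogue: nodes ignite when the heat contributed by burning neighbours exceeds a local threshold. The sketch below (our simplification; the paper instead couples hyperbolic reaction-diffusion dynamics at each node of a multilayer network) shows only the propagation-over-a-graph idea.

      import numpy as np

      # Node states: 0 = unburnt, 1 = burning, 2 = burnt out.
      adjacency = np.array([[0, 1, 0, 0],
                            [1, 0, 1, 1],
                            [0, 1, 0, 1],
                            [0, 1, 1, 0]])
      threshold = np.array([0.5, 0.5, 0.5, 0.3])  # per-node ignition property
      state = np.array([1, 0, 0, 0])              # fire starts at node 0

      for step in range(4):
          heat = 0.6 * (adjacency @ (state == 1).astype(float))
          ignites = (state == 0) & (heat >= threshold)
          burns_out = state == 1            # burning nodes burn out in one step
          state[ignites], state[burns_out] = 1, 2
          print(step, state)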

  6. Complexities in Barrier Island Response to Sea-Level Rise: Insights from Model Experiments

    NASA Astrophysics Data System (ADS)

    Moore, L. J.; List, J. H.; Williams, S. J.

    2008-12-01

    As sea level rises, a barrier island will respond either by migrating landward across the underlying substrate to higher elevations or by disintegrating if there is no longer sufficient sand volume and relief above sea level to prevent inundation during storms. Using the morphological-behavior model GEOMBEST, we investigate the sea-level rise response of a complex coastal environment to changes in a variety of factors, thus yielding insights into barrier island evolution. Our base case is a simplified Holocene run that simulates a possible scenario for the evolution of a 25-km stretch of the North Carolina Outer Banks over the last 8500 years. Varying one parameter at a time, we explore the degree to which changes in sea-level rise rate, sediment supply/loss rate, offshore limits to sediment transport, substrate erodibility, substrate composition and depth-dependent response rate produce changes in average landward barrier island migration rate, average depth of substrate erosion, and final barrier island volume. As expected, sensitivity analyses reveal that within the range of possible values for the North Carolina coast, sea-level rise rate, followed by sediment supply rate, is the most important factor in determining barrier island response to sea-level rise. More surprisingly, the analyses in aggregate indicate that barrier island evolution is highly sensitive to the range of substrate slopes encountered throughout landward migration (i.e., the slope history); as a barrier encounters a continually changing substrate slope, island volume constantly changes, moving toward equilibrium with the current average slope. Through this process, steeper average slope histories produce smaller barrier islands while shallower average slope histories produce larger barrier islands. In both cases, secondary effects on substrate erosion depth and migration rate also result. Notably, similar geometric effects explain an insensitivity of simulation results to changes in offshore

  7. Cadmium sorption onto Natural Red Earth - An assessment using batch experiments and surface complexation modeling

    NASA Astrophysics Data System (ADS)

    Mahatantila, K.; Minoru, O.; Seike, Y.; Vithanage, M. S.

    2010-12-01

    Natural red earth (NRE), an iron-coated sand found in the northwestern part of Sri Lanka, was used to examine its retention behavior toward cadmium, a heavy metal postulated as a factor in chronic kidney disease in Sri Lanka. Adsorption was examined in batch experiments as a function of pH, ionic strength and initial cadmium loading. Proton binding sites on NRE were characterized by potentiometric titration, yielding a pHzpc around 6.6. Cadmium adsorption increased from 6% to 99% as pH increased from 4 to 8.5, with maximum adsorption observed above pH 7.5. The ionic strength dependency of cadmium adsorption over a 100-fold variation in NaNO3 evidenced the dominance of an inner-sphere bonding mechanism for a 10-fold variation in initial cadmium loading (4.44 and 44.4 µmol/L). Adsorption edges were quantified with a 2pK generalized diffuse double layer model considering two site types, >FeOH and >AlOH, for Cd2+ binding. From modeling, we introduced a monodentate chemical bonding mechanism for cadmium binding onto NRE, and this finding was further verified with FTIR spectroscopy. Intrinsic constants determined were log KFeOCd = 8.543 and log KAlOCd = 13.917. Isotherm data imply heterogeneity of the NRE surface, with sorption maxima of 9.418 × 10⁻⁶ mol/g (Langmuir) and 1.3 × 10⁻⁴ mol/g (Freundlich). The study suggested the potential of NRE as a material for decontaminating environmental water polluted with cadmium.
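
    The two isotherm fits reported here follow standard forms: Langmuir q = q_max*K*Ce/(1 + K*Ce) and Freundlich q = Kf*Ce^(1/n). A minimal fitting sketch with synthetic data (the data values and starting guesses are invented; only the functional forms come from the record):

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(ce, q_max, k):
          return q_max * k * ce / (1.0 + k * ce)

      def freundlich(ce, kf, n):
          return kf * ce ** (1.0 / n)

      ce = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4])            # mol/L (synthetic)
      qe = np.array([1.2e-6, 4.1e-6, 6.0e-6, 8.8e-6, 9.3e-6])  # mol/g

      p_lang, _ = curve_fit(langmuir, ce, qe, p0=[1e-5, 1e5])
      p_freu, _ = curve_fit(freundlich, ce, qe, p0=[1e-3, 2.0])
      print("Langmuir q_max, K:", p_lang)
      print("Freundlich Kf, n:", p_freu)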

  8. Reduction of U(VI) Complexes by Anthraquinone Disulfonate: Experiment and Molecular Modeling

    SciTech Connect

    Ainsworth, C.C.; Wang, Z.; Rosso, K.M.; Wagnon, K.; Fredrickson, J.K.

    2004-03-17

    Past studies demonstrate that complexation will limit abiotic and biotic U(VI) reduction rates and the overall extent of reduction. However, the underlying basis for this behavior is not understood and presently unpredictable across species and ligand structure. The central tenets of these investigations are: (1) reduction of U(VI) follows the electron-transfer (ET) mechanism developed by Marcus; (2) the ET rate is the rate-limiting step in U(VI) reduction and is the step that is most affected by complexation; and (3) Marcus theory can be used to unify the apparently disparate U(VI) reduction rate data and as a computational tool to construct a predictive relationship.
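
    The Marcus expression underlying these tenets is not quoted in the record; its standard nonadiabatic form is

      k_{ET} = \frac{2\pi}{\hbar}\,\lvert H_{AB}\rvert^{2}\,
               \frac{1}{\sqrt{4\pi\lambda k_{B}T}}\,
               \exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{B}T}\right]

    where H_AB is the electronic coupling, lambda the reorganization energy, and Delta G° the driving force. On this picture, complexation enters the rate chiefly by shifting Delta G° (through the reduction potential of the complexed U(VI) species) and lambda, which is how a single relationship could unify rate data across ligand structures.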

  9. Prediction of homoprotein and heteroprotein complexes by protein docking and template-based modeling: A CASP-CAPRI experiment.

    PubMed

    Lensink, Marc F; Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen-You; Schneidman-Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez-Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan-Xiang; Ritchie, David W; Maigret, Bernard; Devignes, Marie-Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A G; Bates, Paul A; Ben-Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S; Weng, Zhiping; Vreven, Thom; Pierce, Brian G; Borrman, Tyler M; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P G L M; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S J; Visscher, Koen; Kastritis, Panagiotis L; Bonvin, Alexandre M J J; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X; Kim, Hyung-Rae; Roy, Amit; Han, Xusi; Esquivel-Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J; Fuller, Jonathan C; Wade, Rebecca C; Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero-Durana, Miguel; Jiménez-García, Brian; Moal, Iain H; Férnandez-Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S; Gray, Jeffrey J; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey; Wodak, Shoshana J

    2016-09-01

    We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323-348. © 2016 Wiley Periodicals, Inc.

  10. Prediction of homoprotein and heteroprotein complexes by protein docking and template‐based modeling: A CASP‐CAPRI experiment

    PubMed Central

    Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen‐You; Schneidman‐Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez‐Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan‐Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie‐Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben‐Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung‐Rae; Roy, Amit; Han, Xusi; Esquivel‐Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero‐Durana, Miguel; Jiménez‐García, Brian; Moal, Iain H.; Férnandez‐Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey

    2016-01-01

    We present the results for CAPRI Round 30, the first joint CASP‐CAPRI experiment, which brought together experts from the protein structure prediction and protein–protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact‐sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology‐built subunit models and the smaller pair‐wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323–348. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27122118

  11. The Mentoring Relationship as a Complex Adaptive System: Finding a Model for Our Experience

    ERIC Educational Resources Information Center

    Jones, Rachel; Brown, Dot

    2011-01-01

    Mentoring theory and practice has evolved significantly during the past 40 years. Early mentoring models were characterized by the top-down flow of information and benefits to the protege. This framework was reconceptualized as a reciprocal model when scholars realized mentoring was a mutually beneficial process. Recently, in response to rapidly…

  12. Development of a smart flexible transducer to inspect components of complex geometry: Modeling and experiment

    NASA Astrophysics Data System (ADS)

    Roy, Olivier; Mahaut, Steve; Casula, Olivier

    2002-05-01

    In many industrial sectors, such as nuclear and aircraft, most ultrasonic non-destructive testing is carried out using contact transducers. Among others, the cooling piping of French pressurized water reactors comprises many welded components with complex geometry, which lead to degraded inspection performance: loss of sensitivity due to unmatched contact on irregular surfaces, beam distortions, and uncovered areas. To improve the performance of such inspections, the French Atomic Energy Commission (CEA), supported by the safety authorities (IPSN), has developed a new concept of contact phased array transducer. Its radiating surface is flexible to optimize the contact, while the characteristics of the transmitted beam (orientation and focal depth) are preserved thanks to a delay law optimizing algorithm. This computation requires the actual positions of the elements, so instrumentation is coupled to the transducer to measure the distortions of its radiating surface. Thus, this smart flexible transducer becomes self-adaptive. Recent studies have sought further performance improvements of this system, including instrumentation development and a new phased array design allowing both longitudinal and shear wave beams to be generated. Inspections have been performed on a specimen containing artificial defects under a realistic profile, with an adaptive mode to compensate for the effect of the irregular profile. Experimental results show the ability of this system to detect and characterize defects under irregular profiles, using longitudinal or shear waves.

  13. Concept model of the formation process of humic acid-kaolin complexes deduced by trichloroethylene sorption experiments and various characterizations.

    PubMed

    Zhu, Xiaojing; He, Jiangtao; Su, Sihui; Zhang, Xiaoliang; Wang, Fei

    2016-05-01

    To explore the interactions between soil organic matter and minerals, humic acid (HA, as organic matter), kaolin (as a mineral component) and Ca(2+) (as metal ions) were used to prepare HA-kaolin and Ca-HA-kaolin complexes. These complexes were used in trichloroethylene (TCE) sorption experiments and various characterizations. Interactions between HA and kaolin during the formation of their complexes were confirmed by the obvious differences between the Qe (experimental sorbed TCE) and Qe_p (predicted sorbed TCE) values of all detected samples. The partition coefficient kd obtained for the different samples indicated that both the organic content (fom) and Ca(2+) could significantly impact the interactions. Based on the experimental results and various characterizations, a concept model was developed. In the absence of Ca(2+), HA molecules first attached onto charged sites of kaolin surfaces, filling the pores. Subsequently, as the HA content increased and the first HA layer reached saturation, an outer layer of HA began to form, compressing the inner HA layer. As HA loading continued, the second layer reached saturation, such that a third, outer layer began to form, compressing the inner layers. In the presence of Ca(2+), which not only promotes kaolin self-aggregation but also boosts HA attachment to kaolin, HA molecules were first surrounded by kaolin. Subsequently, first and second layers formed (with inner-layer compression) via the same process as described above in the absence of Ca(2+), except that the second layer continued to load rather than reaching saturation under the investigated conditions, because of enhanced HA aggregation caused by Ca(2+). PMID:26933902

  14. Surface complexation modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  15. Numerical Modeling of Complex Targets for High-Energy-Density Experiments with Ion Beams and other Drivers

    DOE PAGES

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; Schenkel, Thomas; Barnard, John; Friedman, Alex; Eder, David; Fisher, Aaron; Masters, Nathan

    2016-03-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion-beam-induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.
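
    The record states that surface tension is computed from curvature derived from volume fractions; a standard continuum-surface-force formulation consistent with that description (our assumption, not a statement of the ALE-AMR implementation) is

      \mathbf{F}_{st} = \sigma\,\kappa\,\nabla F,
      \qquad
      \kappa = -\,\nabla\cdot\!\left(\frac{\nabla F}{\lvert\nabla F\rvert}\right)

    where F is the material volume fraction, sigma the surface-tension coefficient, and kappa the interface curvature recovered from the volume-fraction field.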

  16. Modeling Complex Calorimeters

    NASA Technical Reports Server (NTRS)

    Figueroa-Feliciano, Enectali

    2004-01-01

    We have developed a software suite that models complex calorimeters in the time and frequency domains. These models can reproduce all measurements that we currently make in a lab setting, like IV curves, impedance measurements, noise measurements, and pulse generation. Since all these measurements are modeled from one set of parameters, we can fully describe a detector and characterize its behavior. This leads to a model that can be used effectively for engineering and design of detectors for particular applications.

  17. Modelling multi-protein complexes using PELDOR distance measurements for rigid body minimisation experiments using XPLOR-NIH

    PubMed Central

    Hammond, Colin M.; Owen-Hughes, Tom; Norman, David G.

    2014-01-01

    Crystallographic and NMR approaches have provided a wealth of structural information about protein domains. However, these domains are often found as components of larger multi-domain polypeptides or complexes. Orienting domains within such contexts can provide powerful new insight into their function. The combination of site-specific spin labelling and Pulsed Electron Double Resonance (PELDOR) provides a means of obtaining structural measurements that can be used to generate models describing how such domains are oriented. Here we describe a pipeline for modelling the locations of thiol-reactive nitroxyl spin labels attached to engineered sites on the histone chaperone Vps75. We then use a combination of experimentally determined measurements and symmetry constraints to model the orientation in which homodimers of Vps75 associate to form homotetramers, using the XPLOR-NIH platform. This provides a working example of how PELDOR measurements can be used to generate a structural model. PMID:25448300
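
    The rigid-body step can be caricatured in a few lines: search orientations of one subunit that best reproduce the measured inter-label distances. The sketch below (coordinates, distances, and the one-axis grid search are invented; XPLOR-NIH also refines translations and applies symmetry restraints) shows the cost function being minimized.

      import numpy as np
      from scipy.spatial.transform import Rotation

      labels_a = np.array([[10.0, 0.0, 0.0], [0.0, 12.0, 5.0]])   # fixed copy
      labels_b = np.array([[25.0, 5.0, 0.0], [20.0, 15.0, 8.0]])  # mobile copy
      target = np.array([28.0, 21.0])  # PELDOR distances (Angstrom, invented)

      best = (np.inf, None)
      for angle in range(0, 360, 5):
          rotated = Rotation.from_euler("z", angle, degrees=True).apply(labels_b)
          d = np.linalg.norm(labels_a - rotated, axis=1)
          cost = float(np.sum((d - target) ** 2))
          if cost < best[0]:
              best = (cost, angle)
      print("best z-rotation (deg):", best[1], "cost:", round(best[0], 2))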

  18. Network Enrichment Analysis in Complex Experiments

    PubMed Central

    Shojaie, Ali; Michailidis, George

    2010-01-01

    Cellular functions of living organisms are carried out through complex systems of interacting components. Including such interactions in the analysis, and considering sub-systems defined by biological pathways instead of individual components (e.g. genes), can lead to new findings about complex biological mechanisms. Networks are often used to capture such interactions and can be incorporated in models to improve the efficiency in estimation and inference. In this paper, we propose a model for incorporating external information about interactions among genes (proteins/metabolites) in differential analysis of gene sets. We exploit the framework of mixed linear models and propose a flexible inference procedure for analysis of changes in biological pathways. The proposed method facilitates the analysis of complex experiments, including multiple experimental conditions and temporal correlations among observations. We propose an efficient iterative algorithm for estimation of the model parameters and show that the proposed framework is asymptotically robust to the presence of noise in the network information. The performance of the proposed model is illustrated through the analysis of gene expression data for environmental stress response (ESR) in yeast, as well as simulated data sets. PMID:20597848

  1. Modelling of Complex Plasmas

    NASA Astrophysics Data System (ADS)

    Akdim, Mohamed Reda

    2003-09-01

    Nowadays plasmas are used for various applications such as the fabrication of silicon solar cells, integrated circuits, coatings and dental cleaning. In the case of a processing plasma, e.g. for the fabrication of amorphous silicon solar cells, a mixture of silane and hydrogen gas is injected into a reactor. These gases are decomposed by making a plasma. A plasma with a low degree of ionization (typically 10⁻⁵) is usually made in a reactor containing two electrodes driven by a radio-frequency (RF) power source in the megahertz range. Under the right circumstances the radicals, neutrals and ions can react further to produce nanometer-sized dust particles. The particles can stick to the surface and thereby contribute to a higher deposition rate. Another possibility is that the nanometer-sized particles coagulate and form larger, micron-sized particles. These particles obtain a high negative charge, due to their large radius, and are usually trapped in a radio-frequency plasma. The electric field present in the discharge sheaths causes the entrapment. Such plasmas are called dusty or complex plasmas. In this thesis, numerical models are presented which describe dusty plasmas in reactive and nonreactive plasmas. We first developed a simple one-dimensional silane fluid model in which a dusty radio-frequency silane/hydrogen discharge is simulated. In the model, discharge quantities like the fluxes, densities and electric field are calculated self-consistently. A radius and an initial density profile for the spherical dust particles are given, and the charge and the density of the dust are calculated with an iterative method. During the transport of the dust, its charge is kept constant in time. The dust influences the electric field distribution through its charge and the density of the plasma through recombination of positive ions and electrons at its surface. In the model this process gives an extra production of silane radicals, since the growth of dust is
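
    The iterative dust-charge calculation mentioned here typically balances orbital-motion-limited (OML) electron and ion currents to the grain. A minimal sketch with illustrative plasma parameters (the thesis solves this self-consistently within the discharge model; all numbers below are our assumptions):

      import math

      E, EPS0, KB = 1.602e-19, 8.854e-12, 1.381e-23
      ME, MI = 9.109e-31, 6.64e-26                 # electron, argon-ion mass (kg)
      TE, TI = 2.0 * 11604.0, 0.03 * 11604.0       # 2 eV electrons, 0.03 eV ions (K)
      NE = NI = 1e15                               # densities (1/m^3)
      A = 100e-9                                   # grain radius (m)

      def net_current(phi):
          # OML currents to a spherical grain at negative surface potential phi.
          ie = 4*math.pi*A**2 * E*NE * math.sqrt(KB*TE/(2*math.pi*ME)) \
               * math.exp(E*phi/(KB*TE))
          ii = 4*math.pi*A**2 * E*NI * math.sqrt(KB*TI/(2*math.pi*MI)) \
               * (1.0 - E*phi/(KB*TI))
          return ii - ie   # zero at the floating potential

      lo, hi = -20.0, 0.0                          # bracket, then bisect
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if net_current(mid) > 0 else (lo, mid)

      phi = 0.5 * (lo + hi)
      electrons = 4*math.pi*EPS0*A*phi / (-E)      # grain charge, electron units
      print(f"floating potential {phi:.2f} V, charge ~{electrons:.0f} electrons")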

  2. Debating complexity in modeling

    USGS Publications Warehouse

    Hunt, Randall J.; Zheng, Chunmiao

    1999-01-01

    As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.

  3. Experimenting model deconstruction

    NASA Astrophysics Data System (ADS)

    Seeger, Manuel; Wirtz, Stefan; Ali, Mazhar

    2013-04-01

    transported grains and their concentration, affecting the transport of sediments. The experiments also show that hydraulic parameters are not able to predict the combination of sediment detachment and transport. Moreover, the relationship between flowing water and sediment transport is shown to be complex, depending on the morphological evolution of the bed, which in turn depends on the characteristics of the substrate. The field experiments confirm these results, and also show that under variable conditions transport rates higher than those predicted by different model concepts are not only possible, but are in fact the common observation. We conclude from these results that soil erosion by flowing water is much more complex than reflected in model concepts: they reflect neither the process variability nor the interaction between the different dynamic parameters of flow and soils. Mechanistic concepts, in which simple or composite predictors define the dynamics of soil erosion, cannot succeed in soil erosion modelling.

  4. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Stratospheric Models and Measurements Workshop (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as giving reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. For the model experiments, participants were charged with designing a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.

  5. Prediction of homo- and hetero-protein complexes by protein docking and template-based modeling: a CASP-CAPRI experiment

    PubMed Central

    Lensink, Marc F.; Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen-You; Schneidman-Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez-Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Lee, Gyu Rie; Seok, Chaok; Qin, Sanbo; Zhou, Huan-Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie-Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben-Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung-Rae; Roy, Amit; Han, Xusi; Esquivel-Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero-Durana, Miguel; Jiménez-García, Brian; Moal, Iain H.; Férnandez-Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey; Wodak, Shoshana J.

    2016-01-01

    We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. PMID:27122118

  6. Analysis of designed experiments with complex aliasing

    SciTech Connect

    Hamada, M.; Wu, C.F.J. )

    1992-07-01

    Traditionally, Plackett-Burman (PB) designs have been used in screening experiments for identifying important main effects. PB designs whose run sizes are not a power of two have been criticized for their complex aliasing patterns, which according to conventional wisdom give confusing results. This paper goes beyond the traditional approach by proposing an analysis strategy that entertains interactions in addition to main effects. Based on the precepts of effect sparsity and effect heredity, the proposed procedure exploits the designs' complex aliasing patterns, thereby turning their 'liability' into an advantage. Demonstration of the procedure on three real experiments shows the potential for extracting important information available in the data that has, until now, been missed. Some limitations are discussed, and extensions to overcome them are given. The proposed procedure also applies to the more general mixed-level designs that have become increasingly popular. 16 refs.
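
    The effect-sparsity / effect-heredity strategy can be sketched directly: screen main effects first, then entertain only those two-factor interactions with at least one selected parent. The code below is our construction, not the paper's procedure (which uses formal selection criteria rather than a fixed cutoff); the data are synthetic and the design is the standard 12-run PB construction.

      from itertools import combinations
      import numpy as np

      # 12-run Plackett-Burman design: cyclic shifts of the generator row
      # plus a row of minus ones; use the first 5 of its 11 columns.
      gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1], dtype=float)
      X = np.array([np.roll(gen, k) for k in range(11)] + [-np.ones(11)])[:, :5]

      rng = np.random.default_rng(1)
      y = 3*X[:, 0] + 2*X[:, 0]*X[:, 2] + rng.normal(0, 0.3, 12)  # hidden truth

      def effect(col):
          return y[col > 0].mean() - y[col < 0].mean()

      mains = [j for j in range(5) if abs(effect(X[:, j])) > 2.0]
      pairs = [(i, j) for i, j in combinations(range(5), 2)
               if i in mains or j in mains]                # heredity constraint
      interactions = [p for p in pairs
                      if abs(effect(X[:, p[0]] * X[:, p[1]])) > 2.0]
      print("mains:", mains, "interactions:", interactions)  # expect [0], [(0, 2)]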

  7. CFD [computational fluid dynamics] and Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

    SciTech Connect

    Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

    2012-10-07

    Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. CFD has advanced over the years to the point where it can now give deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This discrepancy between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process under investigation. The study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended, with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experiments and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied
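
    The corrective the authors argue for can be stated in one line: scale the CFD mean by a factor validated against the scatter of repeated physical tests. A trivial numeric sketch (all numbers invented, and the max/mean ratio is just one simple choice of factor):

      import numpy as np

      measured = np.array([5.1, 6.3, 7.9, 5.8, 9.4, 6.6])  # h, repeated tests
      cfd_mean_estimate = 6.0                               # h, from CFD

      safety_factor = measured.max() / measured.mean()
      design_time = cfd_mean_estimate * safety_factor
      print(f"safety factor {safety_factor:.2f} -> design time {design_time:.1f} h")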

  8. Modeling complexity in biology

    NASA Astrophysics Data System (ADS)

    Louzoun, Yoram; Solomon, Sorin; Atlan, Henri; Cohen, Irun. R.

    2001-08-01

    Biological systems, unlike physical or chemical systems, are characterized by the very inhomogeneous distribution of their components. The immune system, in particular, is notable for self-organizing its structure. Classically, the dynamics of natural systems have been described using differential equations. But, differential equation models fail to account for the emergence of large-scale inhomogeneities and for the influence of inhomogeneity on the overall dynamics of biological systems. Here, we show that a microscopic simulation methodology enables us to model the emergence of large-scale objects and to extend the scope of mathematical modeling in biology. We take a simple example from immunology and illustrate that the methods of classical differential equations and microscopic simulation generate contradictory results. Microscopic simulations generate a more faithful approximation of the reality of the immune system.

  9. Hydraulic Fracturing Mineback Experiment in Complex Media

    NASA Astrophysics Data System (ADS)

    Green, S. J.; McLennan, J. D.

    2012-12-01

    Hydraulic fracturing (or "fracking") for the recovery of gas and liquids from tight shale formations has gained much attention. This operation, which involves horizontal well drilling and massive hydraulic fracturing, has been developed over the last decade to produce fluids from extremely low permeability mudstone and siltstone rocks with high organic content. Nearly thirteen thousand wells, and about one hundred and fifty thousand stages within those wells, were fractured in the US in 2011. The operation has proven successful, attracting hundreds of billions of dollars of investment; it has produced an abundance of natural gas and is making billions of barrels of hydrocarbon liquids available to the US. But even with this commercial success, relatively little is clearly known about the complexity--or lack of complexity--of the hydraulic fracture, the extent to which the newly created surface area contacts high reservoir-quality rock, or the connectivity and conductivity of the hydraulic fractures created. To better understand these phenomena and improve efficiency, a large-scale mine-back experiment is in progress. The mine-back experiment is a full-scale hydraulic fracture carried out in a well-characterized environment, with comprehensive instrumentation deployed to measure fracture growth. A tight shale mudstone setting was selected, near the edge of a formation where an elevation difference of one to two thousand feet occurs. From the top of the formation, drilling, well logging, and hydraulic fracture pumping will take place. From the bottom of the formation, a horizontal tunnel will be mined toward the drilled well using conventional mining techniques. Certain instrumentation will be located within this tunnel for observations during the hydraulic fracturing. After the hydraulic fracturing, the tunnel will be extended toward the well, with careful mapping of the created hydraulic fracture. Fracturing fluid will be

  10. Trends in modeling Biomedical Complex Systems

    PubMed Central

    Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro

    2009-01-01

    In this paper we provide an introduction to techniques for modeling multi-scale complex biological systems, from the single biomolecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts in mathematical modeling methodologies and statistical inference, bioinformatics, and standards and tools for investigating complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068

  11. "Computational Modeling of Actinide Complexes"

    SciTech Connect

    Balasubramanian, K

    2007-03-07

    We present our recent studies in computational actinide chemistry of complexes that are not only interesting from the standpoint of actinide coordination chemistry but also relevant to the environmental management of high-level nuclear wastes. We discuss our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also essential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations of up to 60 million configurations. Our computational studies of actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley and Dr. David Clark at Los Alamos, and by Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the free energies in solution and the mechanism of deprotonation have been a topic of considerable uncertainty. We have computed deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO₂(H₂O)₅²⁺, NpO₂(H₂O)₆⁺, and PuO₂(H₂O)₅²⁺ complexes. Our computed Gibbs free energy in solution (7.27 kcal/mol) agrees for the first time with the experiment (7.1 kcal

  12. Modelling Canopy Flows over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.

    2016-06-01

    Recent studies of flow over forested hills have been motivated by a number of important applications including understanding CO_2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland with complex terrain and heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.
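
    The abstract does not reproduce the model equations, so the following is orientation only: in explicit canopy models of this kind, the foliage typically enters the momentum budget as a quadratic drag sink, commonly written as $F_i = -c_d\, a(z)\, |\mathbf{u}|\, u_i$, where $c_d$ is a leaf drag coefficient, $a(z)$ is the leaf area density profile, and $\mathbf{u}$ is the wind vector. A 1.5-order closure then adds a prognostic turbulent kinetic energy equation with matching canopy source and sink terms; this particular form and its coefficients are an assumption here, not taken from the paper's description of the Arran simulations.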

  13. BOOK REVIEW: Modeling Complex Systems

    NASA Astrophysics Data System (ADS)

    Schreckenberg, M.

    2004-10-01

    This book by Nino Boccara presents a compilation of model systems commonly termed `complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on the dynamics of complex systems. This is the first book on this `wide' topic and I have long awaited such a book (in fact I planned to write it myself but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view and one would have expected more about the famous Domany-Kinzel model (and more accurate citations!). In my opinion this is one of the best textbooks published during the last decade and even experts can learn a lot from it. Hopefully there will be an updated edition after, say, five years, since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success!

  14. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, grounded in a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and to unify different mental processes within a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns in the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  15. Modeling wildfire incident complexity dynamics.

    PubMed

    Thompson, Matthew P

    2013-01-01

    Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding the factors influencing suppression decisions and, in turn, their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management.
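
    To make the absorbing-chain bookkeeping concrete, here is a minimal Python sketch assuming a hypothetical three-state daily complexity chain; the state names and transition probabilities are invented for illustration, not the estimates the paper fits to Forest Service incident data.

        import numpy as np

        # Illustrative daily transition matrix over incident states
        # [Type 2 team, Type 1 team, contained]; values are made up.
        P = np.array([[0.80, 0.10, 0.10],
                      [0.15, 0.70, 0.15],
                      [0.00, 0.00, 1.00]])  # containment is absorbing

        # Expected days until containment from each transient state:
        # t = (I - Q)^-1 1, where Q is the transient-to-transient block.
        Q = P[:2, :2]
        t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
        print(dict(zip(["Type 2", "Type 1"], t.round(1))))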

  16. X-ray Absorption Spectroscopy and Coherent X-ray Diffraction Imaging for Time-Resolved Investigation of the Biological Complexes: Computer Modelling towards the XFEL Experiment

    NASA Astrophysics Data System (ADS)

    Bugaev, A. L.; Guda, A. A.; Yefanov, O. M.; Lorenz, U.; Soldatov, A. V.; Vartanyants, I. A.

    2016-05-01

    The development of next-generation synchrotron radiation sources - free-electron lasers - is making them an effective tool for time-resolved experiments aimed at solving current problems in various fields such as chemistry, biology, medicine, etc. In order to demonstrate how these experiments may be performed on real systems to obtain information at the atomic and macromolecular levels, we have performed a molecular dynamics computer simulation combined with quantum chemistry calculations for the human phosphoglycerate kinase enzyme with an Mg-containing substrate. The simulated structures were used to calculate coherent X-ray diffraction patterns, reflecting the conformational state of the enzyme, and Mg K-edge X-ray absorption spectra, which depend on the local structure of the substrate. These two techniques give complementary information, making such an approach highly effective for the time-resolved investigation of various biological complexes, such as metalloproteins or enzymes with metal-containing substrates, to obtain information about both the metal-containing active site or substrate and the atomic structure of each conformation.

  17. Modeling Wildfire Incident Complexity Dynamics

    PubMed Central

    Thompson, Matthew P.

    2013-01-01

    Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding the factors influencing suppression decisions and, in turn, their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management. PMID:23691014

  18. Diagnosis in Complex Plasmas for Microgravity Experiments (PK-3 plus)

    SciTech Connect

    Takahashi, Kazuo; Hayashi, Yasuaki; Thomas, Hubertus M.; Morfill, Gregor E.; Ivlev, Alexei V.; Adachi, Satoshi

    2008-09-07

    Microgravity gives rise to complex (dusty) plasmas in which dust particles are embedded in a completely charge-neutral region of the bulk plasma. The dust clouds, as an uncompressed, strongly coupled Coulomb system, correspond to an atomic model exhibiting several physical phenomena, such as crystallization and phase transitions. As these phenomena are tightly connected to the plasma state, it is important to understand plasma parameters such as electron density and temperature. The present work reports the electron density in the setup for microgravity experiments currently on board the International Space Station.

  19. The Hidden Complexities of a "Simple" Experiment.

    ERIC Educational Resources Information Center

    Caplan, Jeremy B.; And Others

    1994-01-01

    Provides two experiments that do not give the expected results. One involves burning a candle in an air-filled beaker under water and the other burns the candle in pure oxygen. Provides methodology, suggestions, and theory. (MVL)

  20. Explosion modelling for complex geometries

    NASA Astrophysics Data System (ADS)

    Nehzat, Naser

    A literature review suggested that the combined effects of fuel reactivity, obstacle density, ignition strength, and confinement result in flame acceleration and subsequent pressure build-up during a vapour cloud explosion (VCE). Models for the prediction of propagating flames in hazardous areas, such as coal mines, oil platforms, storage and chemical process areas, etc., fall into two classes. One class involves the use of Computational Fluid Dynamics (CFD). This approach has been utilised by several researchers. The other approach relies upon a lumped parameter method as developed by Baker (1983). The former approach is restricted by the appropriateness of sub-models and the numerical stability requirements inherent in the computational solution. The latter approach raises significant questions regarding the validity of the simplifications involved in representing the complexities of a propagating explosion. This study was conducted to investigate and improve the CFD code EXPLODE, which has been developed by Green et al. (1993) for use in practical gas explosion hazard assessments. The code employs a numerical method for solving partial differential equations using finite volume techniques. Verification exercises, involving comparison with analytical solutions for the classical shock tube and with experimental (small-scale, medium and large-scale) results, demonstrate the accuracy of the code and the new combustion models but also identify differences between predictions and the experimental results. The project has resulted in a developed version of the code (EXPLODE2) with new combustion models for simulating gas explosions. Additional features of this program include the physical models necessary to simulate the combustion process using alternative combustion models, improvements to the numerical accuracy and robustness of the code, and special input for the simulation of different gas explosions. The present code has the capability of

  1. Modelling biological complexity: a physical scientist's perspective.

    PubMed

    Coveney, Peter V; Fowler, Philip W

    2005-09-22

    integration of molecular and more coarse-grained representations of matter. The scope of such integrative approaches to complex systems research is circumscribed by the computational resources available. Computational grids should provide a step jump in the scale of these resources; we describe the tools that RealityGrid, a major UK e-Science project, has developed together with our experience of deploying complex models on nascent grids. We also discuss the prospects for mathematical approaches to reducing the dimensionality of complex networks in the search for universal systems-level properties, illustrating our approach with a description of the origin of life according to the RNA world view. PMID:16849185

  2. Modelling biological complexity: a physical scientist's perspective

    PubMed Central

    Coveney, Peter V; Fowler, Philip W

    2005-01-01

    integration of molecular and more coarse-grained representations of matter. The scope of such integrative approaches to complex systems research is circumscribed by the computational resources available. Computational grids should provide a step jump in the scale of these resources; we describe the tools that RealityGrid, a major UK e-Science project, has developed together with our experience of deploying complex models on nascent grids. We also discuss the prospects for mathematical approaches to reducing the dimensionality of complex networks in the search for universal systems-level properties, illustrating our approach with a description of the origin of life according to the RNA world view. PMID:16849185

  3. A Practical Philosophy of Complex Climate Modelling

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin A.; Sherwood, Steven

    2014-01-01

    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that the adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.

  4. Teacher Modeling Using Complex Informational Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think-alouds.

  5. A Novel BA Complex Network Model on Color Template Matching

    PubMed Central

    Han, Risheng; Yue, Guangxue; Ding, Hui

    2014-01-01

    A novel BA complex network model of color space is proposed based on the two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of a template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved with the proposed algorithm. To the best of our knowledge, this is the first study of how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235
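
    For orientation, here is a minimal Python sketch of the two BA rules named in the abstract, growth and preferential attachment, in their generic form; mapping template pixels in color space onto network nodes is the paper's contribution and is not reproduced here.

        import random

        def ba_graph(n, m, seed=0):
            # Grow a Barabasi-Albert graph: each new node attaches to m
            # existing nodes chosen with probability proportional to degree.
            rng = random.Random(seed)
            edges = []
            repeated = []             # node list with one entry per degree
            targets = list(range(m))  # the first new node links to the seeds
            for new in range(m, n):
                edges.extend((new, t) for t in targets)
                repeated.extend(targets)
                repeated.extend([new] * m)
                targets = set()
                while len(targets) < m:        # degree-proportional draw
                    targets.add(rng.choice(repeated))
                targets = list(targets)
            return edges

        edges = ba_graph(500, 2)
        print(len(edges), "edges")   # (n - m) * m edges

    The degree sequence of such a graph approaches a power law, which is the scale-free property the authors report finding in template color distributions.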

  6. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The approach of our study was that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  7. Cognitive Complexity among Practicing Counselors: How Thinking Changes with Experience

    ERIC Educational Resources Information Center

    Granello, Darcy Haag

    2010-01-01

    The purpose of this study was to investigate whether years of experience in the counseling profession can help predict levels of cognitive complexity among practicing counselors. Results of a regression equation found that counselors with more years in the counseling profession had higher levels of cognitive complexity, with highest degree…

  8. "Long, Boring, and Tedious": Youths' Experiences with Complex, Religious Texts

    ERIC Educational Resources Information Center

    Rackley, Eric D.; Kwok, Michelle

    2016-01-01

    Growing out of the renewed attention to text complexity in the United States and the large population of youth who are deeply committed to reading scripture, this study explores 16 Latter-day Saint and Methodist youths' experiences with complex, religious texts. The study took place in the Midwestern United States. Data consisted of an academic…

  9. Complex Aerosol Experiment in Western Siberia (April - October 2013)

    NASA Astrophysics Data System (ADS)

    Matvienko, G. G.; Belan, B. D.; Panchenko, M. V.; Romanovskii, O. A.; Sakerin, S. M.; Kabanov, D. M.; Turchinovich, S. A.; Turchinovich, Yu. S.; Eremina, T. A.; Kozlov, V. S.; Terpugova, S. A.; Pol'kin, V. V.; Yausheva, E. P.; Chernov, D. G.; Zuravleva, T. B.; Bedareva, T. V.; Odintsov, S. L.; Burlakov, V. D.; Arshinov, M. Yu.; Ivlev, G. A.; Savkin, D. E.; Fofonov, A. V.; Gladkikh, V. A.; Kamardin, A. P.; Balin, Yu. S.; Kokhanenko, G. P.; Penner, I. E.; Samoilova, S. V.; Antokhin, P. N.; Arshinova, V. G.; Davydov, D. K.; Kozlov, A. V.; Pestunov, D. A.; Rasskazchikova, T. M.; Simonenkov, D. V.; Sklyadneva, T. K.; Tolmachev, G. N.; Belan, S. B.; Shmargunov, V. P.

    2016-06-01

    The primary project objective was to accomplish the Complex Aerosol Experiment, during which aerosol properties were measured in the near-ground layer and the free atmosphere. Three measurement cycles were performed during the project implementation: in the spring period (April), when the maximum of aerosol generation is observed; in summer (July), when the atmospheric boundary layer height and mixing layer height are maximal; and in late summer - early autumn (October), when the secondary particle nucleation period is recorded. Numerical calculations were compared with measurements of fluxes of downward solar radiation. It was shown that the relative differences between model and experimental values of the fluxes of direct and total radiation, on average, do not exceed 1% and 3%, respectively.

  10. Iron-Sulfur-Carbonyl and -Nitrosyl Complexes: A Laboratory Experiment.

    ERIC Educational Resources Information Center

    Glidewell, Christopher; And Others

    1985-01-01

    Background information, materials needed, procedures used, and typical results obtained, are provided for an experiment on iron-sulfur-carbonyl and -nitrosyl complexes. The experiment involved (1) use of inert atmospheric techniques and thin-layer and flexible-column chromatography and (2) interpretation of infrared, hydrogen and carbon-13 nuclear…

  11. PETN ignition experiments and models.

    PubMed

    Hobbs, Michael L; Wente, William B; Kaneshige, Michael J

    2010-04-29

    Ignition experiments from various sources, including our own laboratory, have been used to develop a simple ignition model for pentaerythritol tetranitrate (PETN). The experiments consist of differential thermal analysis, thermogravimetric analysis, differential scanning calorimetry, beaker tests, one-dimensional time to explosion tests, Sandia's instrumented thermal ignition tests (SITI), and thermal ignition of nonelectrical detonators. The model developed using this data consists of a one-step, first-order, pressure-independent mechanism used to predict pressure, temperature, and time to ignition for various configurations. The model was used to assess the state of the degraded PETN at the onset of ignition. We propose that cookoff violence for PETN can be correlated with the extent of reaction at the onset of ignition. This hypothesis was tested by evaluating metal deformation produced from detonators encased in copper as well as comparing postignition photos of the SITI experiments. PMID:20361790
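
    As a rough illustration of how a one-step, first-order, pressure-independent mechanism yields a time to ignition, the Python sketch below integrates adiabatic self-heating until thermal runaway. Every parameter value is an invented placeholder, not a calibrated PETN constant from the paper.

        import numpy as np

        A, E = 1.0e19, 190e3   # pre-exponential (1/s), activation energy (J/mol); illustrative
        R = 8.314              # gas constant (J/mol/K)
        q, c = 2.0e6, 1.1e3    # heat of reaction (J/kg), heat capacity (J/kg/K); illustrative

        def time_to_ignition(T0, dt=0.01, T_runaway=600.0):
            # Integrate first-order depletion with adiabatic self-heating
            # until the temperature runs away; returns (time, extent).
            T, alpha, t = T0, 0.0, 0.0
            while T < T_runaway and alpha < 0.999 and t < 1.0e5:
                rate = A * (1.0 - alpha) * np.exp(-E / (R * T))
                alpha += rate * dt
                T += (q / c) * rate * dt   # all reaction heat retained
                t += dt
            return t, alpha

        print(time_to_ignition(T0=450.0))

    The paper's hypothesis then correlates cookoff violence with the extent of reaction (alpha here) reached at the onset of ignition.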

  12. Capturing Complexity through Maturity Modelling

    ERIC Educational Resources Information Center

    Underwood, Jean; Dillon, Gayle

    2004-01-01

    The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…

  13. Complexity and Uncertainty in Soil Nitrogen Modeling

    NASA Astrophysics Data System (ADS)

    Ajami, N. K.; Gu, C.

    2009-12-01

    Model uncertainty is rarely considered in the field of biogeochemical modeling. The standard biogeochemical modeling approach is to proceed with a single selected model at the “right” complexity level given data availability. However, other plausible models can give dissimilar answers to the scientific question at hand using the same set of data. Relying on a single model can lead to underestimation of the uncertainty associated with the results and therefore to unreliable conclusions. A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models with multiple levels of complexity. The aim of this study is twofold: first, to explore the impact of a model’s complexity level on the accuracy of the end results, and second, to introduce a probabilistic multi-model strategy in the context of a process-based biogeochemical model. We developed three different versions of a biogeochemical model, TOUGHREACT-N, with various complexity levels. Each of these models was calibrated against the observed data from a tomato field in Western Sacramento County, California, and considered two different weighting sets on the objective function. In this way we created a set of six ensemble members. The Bayesian Model Averaging (BMA) approach was then used to combine these ensemble members, weighting each by the likelihood that the individual model is correct given the observations. The results clearly indicate the need to consider a multi-model ensemble strategy over single-model selection in biogeochemical modeling.
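
    A minimal Python sketch of the BMA combination step, assuming Gaussian observation errors and equal prior model probabilities; the member predictions and observations below are invented, and TOUGHREACT-N itself is not involved.

        import numpy as np

        obs = np.array([1.0, 1.4, 0.9, 1.2])       # hypothetical observations
        preds = {                                  # hypothetical member predictions
            "simple":  np.array([0.8, 1.1, 1.0, 1.0]),
            "medium":  np.array([1.0, 1.3, 0.9, 1.1]),
            "complex": np.array([1.1, 1.5, 0.8, 1.3]),
        }
        sigma = 0.2                                # assumed observation error (std)

        def log_like(p):
            r = obs - p
            return -0.5 * np.sum((r / sigma) ** 2)

        ll = np.array([log_like(p) for p in preds.values()])
        w = np.exp(ll - ll.max())
        w /= w.sum()            # posterior model weights under equal priors
        bma_pred = sum(wi * p for wi, p in zip(w, preds.values()))
        print(dict(zip(preds, w.round(3))), bma_pred.round(2))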

  14. Numerical Modeling of LCROSS experiment

    NASA Astrophysics Data System (ADS)

    Sultanov, V. G.; Kim, V. V.; Matveichev, A. V.; Zhukov, B. G.; Lomonosov, I. V.

    2009-06-01

    The mission objectives of the Lunar Crater Observation and Sensing Satellite (LCROSS) include confirming the presence or absence of water ice in a permanently shadowed crater in the Moon's polar regions. In this research we present results of numerical modeling of the forthcoming LCROSS experiment. The parallel FPIC3D gas dynamic code with implemented realistic equations of state (EOS) and constitutive relations [1] was used. A new wide-range EOS for lunar ground was developed. We carried out calculations of the impact of a model body on the lunar surface at different angles. Impacts on dry and on water-ice-containing lunar ground were also taken into account. Modeling results are given for the crater's shape and size along with the amount of ejecta. [4pt] [1] V.E. Fortov, V.V. Kim, I.V. Lomonosov, A.V. Matveichev, A.V. Ostrik. Numerical modeling of hypervelocity impacts, Intern. J. Impact Engineering, 33, 244-253 (2006)

  15. Fock spaces for modeling macromolecular complexes

    NASA Astrophysics Data System (ADS)

    Kinney, Justin

    Large macromolecular complexes play a fundamental role in how cells function. Here I describe a Fock space formalism for mathematically modeling these complexes. Specifically, this formalism allows ensembles of complexes to be defined in terms of elementary molecular ``building blocks'' and ``assembly rules.'' Such definitions avoid the massive redundancy inherent in standard representations, in which all possible complexes are manually enumerated. Methods for systematically computing ensembles of complexes from a list of components and interaction rules are described. I also show how this formalism readily accommodates coarse-graining. Finally, I introduce diagrammatic techniques that greatly facilitate the application of this formalism to both equilibrium and non-equilibrium biochemical systems.
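
    The formalism itself is algebraic, but its practical point - defining an ensemble by building blocks plus assembly rules rather than enumerating every complex by hand - can be shown in a small Python sketch. The blocks and binding rules are hypothetical, and complexes are tracked only as multisets of block types, ignoring the internal bond topology that the full Fock-space treatment retains.

        blocks = ["A", "B", "C"]
        can_bind = {("A", "B"), ("B", "C")}   # allowed pairwise contacts

        def binds(x, y):
            return (x, y) in can_bind or (y, x) in can_bind

        def enumerate_complexes(max_size):
            # Grow every complex reachable by attaching a block that can
            # bind to some current member, up to a size cutoff.
            ensemble = {(b,) for b in blocks}
            frontier = set(ensemble)
            while frontier:
                new = set()
                for cplx in frontier:
                    if len(cplx) >= max_size:
                        continue
                    for b in blocks:
                        if any(binds(b, m) for m in cplx):
                            new.add(tuple(sorted(cplx + (b,))))
                frontier = new - ensemble
                ensemble |= new
            return sorted(ensemble, key=lambda c: (len(c), c))

        print(enumerate_complexes(3))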

  16. The Complex Experience of Learning to Do Research

    ERIC Educational Resources Information Center

    Emo, Kenneth; Emo, Wendy; Kimn, Jung-Han; Gent, Stephen

    2015-01-01

    This article examines how student learning is a product of the experiential interaction between person and environment. We draw from the theoretical perspective of complexity to shed light on the emergent, adaptive, and unpredictable nature of students' learning experiences. To understand the relationship between the environment and the student…

  17. School Experiences of an Adolescent with Medical Complexities Involving Incontinence

    ERIC Educational Resources Information Center

    Filce, Hollie Gabler; Bishop, John B.

    2014-01-01

    The educational implications of chronic illnesses which involve incontinence are not well represented in the literature. The experiences of an adolescent with multiple complex illnesses, including incontinence, were explored via an intrinsic case study. Data were gathered from the adolescent, her mother, and teachers through interviews, email…

  18. Facing up to Complexity: Implications for Our Social Experiments.

    PubMed

    Hawkins, Ronnie

    2016-06-01

    Biological systems are highly complex, and for this reason there is a considerable degree of uncertainty as to the consequences of making significant interventions into their workings. Since a number of new technologies are already impinging on living systems, including our bodies, many of us have become participants in large-scale "social experiments". I will discuss biological complexity and its relevance to the technologies that brought us BSE/vCJD and the controversy over GM foods. Then I will consider some of the complexities of our social dynamics, and argue for making a shift from using the precautionary principle to employing the approach of evaluating the introduction of new technologies by conceiving of them as social experiments. PMID:26062747

  19. Scaffolding in Complex Modelling Situations

    ERIC Educational Resources Information Center

    Stender, Peter; Kaiser, Gabriele

    2015-01-01

    The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there is hardly any empirical knowledge about efficient ways for teachers to support students' activities, which should remain largely independent of the teacher. The research…

  20. Reducing Spatial Data Complexity for Classification Models

    SciTech Connect

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-29

    Intelligent data analytics is gradually becoming a day-to-day reality for today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieve a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the

  1. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality for today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieve a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the
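
    As a rough Python sketch of class-density-guided condensation - not the authors' PLDC algorithm, since the electrostatic attraction-repulsion field is simplified here to a mean-shift move toward each point's own-class density mode - on synthetic two-class data:

        import numpy as np

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
        y = np.array([0] * 50 + [1] * 50)
        h, step, r_merge = 0.8, 0.3, 0.25   # bandwidth, move size, merge radius

        def kde_shift(X, i):
            # Mean-shift style move toward the own-class Parzen density mode.
            same = X[y == y[i]]
            w = np.exp(-np.sum((same - X[i]) ** 2, axis=1) / (2 * h * h))
            return step * (w @ same / w.sum() - X[i])

        for _ in range(5):                  # a few condensation passes
            X = X + np.array([kde_shift(X, i) for i in range(len(X))])

        # Merge close same-class points, accumulating soft class weights.
        weights = np.ones(len(X))
        keep = np.ones(len(X), dtype=bool)
        for i in range(len(X)):
            if not keep[i]:
                continue
            for j in range(i + 1, len(X)):
                if keep[j] and y[i] == y[j] and np.linalg.norm(X[i] - X[j]) < r_merge:
                    weights[i] += weights[j]
                    keep[j] = False
        print(keep.sum(), "points retained of", len(X))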

  2. Role models for complex networks

    NASA Astrophysics Data System (ADS)

    Reichardt, J.; White, D. R.

    2007-11-01

    We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.

  3. Concept of a Model City Complex

    ERIC Educational Resources Information Center

    Giammatteo, Michael C.

    The model cities concept calls for an educational complex which includes the nonschool educational institutions and facilities of the community as well as actual school facilities. Such an educational complex would require a wider administrative base than the school yet smaller than the municipal government. Examples of nonschool educational…

  4. Agent-based modeling of complex infrastructures

    SciTech Connect

    North, M. J.

    2001-06-01

    Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.

  5. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-09-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty of the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy for alleviating this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.

  6. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-08-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty of the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy for alleviating this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
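
    The Lorenz-95 testbed named in the abstract is straightforward to reproduce; a minimal Python stepper follows. The emulator itself - the fast surrogate trained to replace such a stepper - is not shown, and its particular form in the paper is not assumed here.

        import numpy as np

        def lorenz95_rhs(x, F=8.0):
            # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, cyclic indices
            return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

        def step_rk4(x, dt=0.05, F=8.0):
            k1 = lorenz95_rhs(x, F)
            k2 = lorenz95_rhs(x + 0.5 * dt * k1, F)
            k3 = lorenz95_rhs(x + 0.5 * dt * k2, F)
            k4 = lorenz95_rhs(x + dt * k3, F)
            return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

        x = 8.0 + 0.01 * np.random.default_rng(1).standard_normal(40)
        for _ in range(200):   # spin up onto the attractor
            x = step_rk4(x)
        print(x[:5].round(2))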

  7. Numerical models of complex diapirs

    NASA Astrophysics Data System (ADS)

    Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.

    1993-12-01

    Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which a diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, moderate erosion rate to "wine glass" diapirs and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat". These spread lateral sheets.

  8. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1993-01-01

    Modeling plays a vital role in providing physical insights into behavior revealed by experiment. The program at the University of Illinois is designed to improve our understanding of basic combustion phenomena through the analytical and numerical modeling of a variety of configurations undergoing experimental study in NASA's microgravity combustion program. Significant progress has been made in two areas: (1) flame-balls, studied experimentally by Ronney and his co-workers; (2) particle-cloud flames studied by Berlad and his collaborators. Additional work is mentioned below. NASA funding for the U. of Illinois program commenced in February 1991 but work was initiated prior to that date and the program can only be understood with this foundation exposed. Accordingly, we start with a brief description of some key results obtained in the pre-2/91 work.

  9. Modeling a crowdsourced definition of molecular complexity.

    PubMed

    Sheridan, Robert P; Zorn, Nicolas; Sherer, Edward C; Campeau, Louis-Charles; Chang, Charlie Zhenyu; Cumming, Jared; Maddess, Matthew L; Nantermet, Philippe G; Sinz, Christopher J; O'Shea, Paul D

    2014-06-23

    This paper brings together the concepts of molecular complexity and crowdsourcing. An exercise was done at Merck where 386 chemists voted on the molecular complexity (on a scale of 1-5) of 2681 molecules taken from various sources: public, licensed, and in-house. The meanComplexity of a molecule is the average over all votes for that molecule. As long as enough votes are cast per molecule, we find meanComplexity is quite easy to model with QSAR methods using only a handful of physical descriptors (e.g., number of chiral centers, number of unique topological torsions, a Wiener index, etc.). The high level of self-consistency of the model (cross-validated R(2) ∼0.88) is remarkable given that our chemists do not agree with each other strongly about the complexity of any given molecule. Thus, the power of crowdsourcing is clearly demonstrated in this case. The meanComplexity appears to be correlated with at least one metric of synthetic complexity from the literature derived in a different way and is correlated with values of process mass intensity (PMI) from the literature and from in-house studies. Complexity can be used to differentiate between in-house programs and to follow a program over time.
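
    A hedged Python sketch of the kind of fit described: ordinary least squares of crowd-averaged complexity on a few physical descriptors. The descriptor values and averaged votes are invented, and the paper's actual QSAR method and descriptor set may differ.

        import numpy as np

        # columns: [n_chiral_centers, n_unique_torsions, wiener_index]; invented
        X = np.array([[0,  8, 120],
                      [2, 15, 310],
                      [1, 11, 200],
                      [4, 22, 540],
                      [0,  5,  80]], dtype=float)
        mean_complexity = np.array([1.4, 3.1, 2.2, 4.5, 1.1])  # averaged 1-5 votes

        A = np.hstack([X, np.ones((len(X), 1))])    # add intercept column
        coef, *_ = np.linalg.lstsq(A, mean_complexity, rcond=None)
        pred = A @ coef
        r2 = 1 - np.sum((mean_complexity - pred) ** 2) / \
                 np.sum((mean_complexity - mean_complexity.mean()) ** 2)
        print(coef.round(3), round(r2, 3))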

  10. Dynamic modeling of structures from measured complex modes

    NASA Technical Reports Server (NTRS)

    Ibrahim, S. R.

    1982-01-01

    A technique is presented to use a set of identified complex modes together with an analytical mathematical model of a structure under test to compute improved mass, stiffness and damping matrices. A set of identified normal modes, computed from the measured complex modes, is used in the mass orthogonality equation to compute an improved mass matrix. This eliminates possible errors that may result from using approximated complex modes as normal modes. The improved mass matrix, the measured complex modes and the higher analytical modes are then used to compute the improved stiffness and damping matrices. The number of degrees-of-freedom of the improved model is limited to equal the number of elements in the measured modal vectors. A simulated experiment shows considerable improvements, in the system's analytical dynamic model, over the frequency range of the given measured modal information.
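
    To illustrate the mass-orthogonality step in Python, the sketch below applies a Berman-style mass-matrix correction so that identified normal modes become mass-orthonormal. This is one standard formulation consistent with the abstract's description, not necessarily Ibrahim's exact update, and the matrices are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        Ma = np.diag([2.0, 1.5, 1.0])        # analytical mass matrix (illustrative)
        Phi = rng.standard_normal((3, 2))    # identified normal modes, column-wise

        m = Phi.T @ Ma @ Phi                 # would equal I for exact modes
        m_inv = np.linalg.inv(m)
        M_improved = Ma + Ma @ Phi @ m_inv @ (np.eye(2) - m) @ m_inv @ Phi.T @ Ma

        # The measured modes are now mass-orthonormal to machine precision.
        print(np.round(Phi.T @ M_improved @ Phi, 6))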

  11. Random energy model at complex temperatures

    PubMed

    Saakian

    2000-06-01

    The complete phase diagram of the random energy model is obtained for complex temperatures using the method proposed by Derrida. We find the density of zeroes for the statistical sum. Then the method is applied to the generalized random energy model. This allowed us to propose an analytical method for investigating zeroes of the statistical sum for finite-dimensional systems. PMID:11088286

  12. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  13. Updating the debate on model complexity

    USGS Publications Warehouse

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  14. Combustion, Complex Fluids, and Fluid Physics Experiments on the ISS

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Urban, David

    2012-01-01

    multiphase flows, capillary phenomena, and heat pipes. Finally, in complex fluids, experiments in rheology and soft condensed materials will be presented.

  15. Combustion, Complex Fluids, and Fluid Physics Experiments on the ISS

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Urban, David

    2012-01-01

    From the very early days of human spaceflight, NASA has been conducting experiments in space to understand the effect of weightlessness on physical and chemically reacting systems. NASA Glenn Research Center (GRC) in Cleveland, Ohio has been at the forefront of this research, looking at both fundamental studies in microgravity and experiments targeted at reducing the risks to long-duration human missions to the moon, Mars, and beyond. In the current International Space Station (ISS) era, we now have an orbiting laboratory that provides the highly desired condition of long-duration microgravity. This allows continuous and interactive research similar to Earth-based laboratories. Because of these capabilities, the ISS is an indispensable laboratory for low-gravity research. NASA GRC has been actively involved in developing and operating facilities and experiments on the ISS since the beginning of a permanent human presence on November 2, 2000. As the lead Center for combustion, complex fluids, and fluid physics, GRC has led the successful implementation of the Combustion Integrated Rack (CIR) and the Fluids Integrated Rack (FIR) as well as the continued use of other facilities on the ISS. These facilities have supported combustion experiments in fundamental droplet combustion; fire detection; fire extinguishment; soot phenomena; flame liftoff and stability; and material flammability. The fluids experiments have studied capillary flow; magneto-rheological fluids; colloidal systems; extensional rheology; and pool and nucleate boiling phenomena. In this paper, we provide an overview of the experiments conducted on the ISS over the past 12 years.

  16. Multifaceted Modelling of Complex Business Enterprises

    PubMed Central

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historical data, survey data, management planning data, expert knowledge, and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  17. Modeling the complex bromate-iodine reaction.

    PubMed

    Machado, Priscilla B; Faria, Roberto B

    2009-05-01

    In this article, it is shown that the FLEK model (ref 5) is able to model the experimental results of the bromate-iodine clock reaction. Five different complex chemical systems, the bromate-iodide clock and oscillating reactions, the bromite-iodide clock and oscillating reactions, and now the bromate-iodine clock reaction, are adequately accounted for by the FLEK model. PMID:19361181

  18. Model validation for karst flow using sandbox experiments

    NASA Astrophysics Data System (ADS)

    Ye, M.; Pacheco Castro, R. B.; Tao, X.; Zhao, J.

    2015-12-01

    The study of flow in karst is complex due to the high heterogeneity of the porous media. Several approaches have been proposed in the literature to overcome the natural complexity of karst. Among these methods are the single continuum, the double continuum, and the discrete network of conduits coupled with a single continuum. Several mathematical and computational models are available in the literature for each approach. In this study one computer model has been selected for each category to validate its usefulness for modeling flow in karst using a sandbox experiment. The models chosen are Modflow 2005, Modflow CFPV1, and Modflow CFPV2. A sandbox experiment was implemented in such a way that all the parameters required by each model can be measured. The sandbox experiment was repeated several times under different conditions. The model validation will be carried out by comparing the results of the model simulations with the real data. This validation will allow us to compare the accuracy of each model and its applicability to karst, and to evaluate whether the results of the complex models improve much on those of the simple models, especially because some complex models require parameters that are difficult to measure in the real world.

  19. Minimum-complexity helicopter simulation math model

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of the features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  20. Experiments beyond the standard model

    SciTech Connect

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology, I call these Experimental Needs. 92 references.

  1. Constructing minimal models for complex system dynamics

    NASA Astrophysics Data System (ADS)

    Barzel, Baruch; Liu, Yang-Yu; Barabási, Albert-László

    2015-05-01

    One of the strengths of statistical physics is the ability to reduce macroscopic observations into microscopic models, offering a mechanistic description of a system's dynamics. This paradigm, rooted in Boltzmann's gas theory, has found applications from magnetic phenomena to subcellular processes and epidemic spreading. Yet each of these advances was the result of decades of meticulous model building and validation, which are impossible to replicate in most complex biological, social or technological systems that lack accurate microscopic models. Here we develop a method to infer the microscopic dynamics of a complex system from observations of its response to external perturbations, allowing us to construct the most general class of nonlinear pairwise dynamics that are guaranteed to recover the observed behaviour. The result, which we test against both numerical and empirical data, is an effective dynamic model that can predict the system's behaviour and provide crucial insights into its inner workings.

  2. Complexity of precipitation patterns: Comparison of simulation with experiment.

    PubMed

    Polezhaev, A. A.; Muller, S. C.

    1994-12-01

    Numerical simulations show that a simple model for the formation of Liesegang precipitation patterns, which takes into account the dependence of nucleation and particle growth kinetics on supersaturation, can explain not only simple patterns like parallel bands in a test tube or concentric rings in a petri dish, but also more complex structural features, such as dislocations, helices, "Saturn rings," or patterns formed in the case of equal initial concentrations of the source substances. The limits of application of the model are discussed. (c) 1994 American Institute of Physics.

  3. Modeling acuity for optotypes varying in complexity.

    PubMed

    Watson, Andrew B; Ahumada, Albert J

    2012-01-01

    Watson and Ahumada (2008) described a template model of visual acuity based on an ideal-observer limited by optical filtering, neural filtering, and noise. They computed predictions for selected optotypes and optical aberrations. Here we compare this model's predictions to acuity data for six human observers, each viewing seven different optotype sets, consisting of one set of Sloan letters and six sets of Chinese characters, differing in complexity (Zhang, Zhang, Xue, Liu, & Yu, 2007). Since optical aberrations for the six observers were unknown, we constructed 200 model observers using aberrations collected from 200 normal human eyes (Thibos, Hong, Bradley, & Cheng, 2002). For each condition (observer, optotype set, model observer) we estimated the model noise required to match the data. Expressed as efficiency, performance for Chinese characters was 1.4 to 2.7 times lower than for Sloan letters. Efficiency was weakly and inversely related to perimetric complexity of optotype set. We also compared confusion matrices for human and model observers. Correlations for off-diagonal elements ranged from 0.5 to 0.8 for different sets, and the average correlation for the template model was superior to a geometrical moment model with a comparable number of parameters (Liu, Klein, Xue, Zhang, & Yu, 2009). The template model performed well overall. Estimated psychometric function slopes matched the data, and noise estimates agreed roughly with those obtained independently from contrast sensitivity to Gabor targets. For optotypes of low complexity, the model accurately predicted relative performance. This suggests the model may be used to compare acuities measured with different sets of simple optotypes. PMID:23024356
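
    The core of the template model can be illustrated with a toy ideal-observer classifier: add Gaussian noise to a stimulus and pick the template with the best matched-filter score. The random 8x8 "templates" below are stand-ins for optically and neurally filtered optotypes, and the noise level plays the role of the fitted model noise; none of this reproduces the authors' actual filters.

```python
import numpy as np

def template_classify(stimulus, templates, noise_sd, rng):
    """Ideal-observer template matching under additive Gaussian noise:
    choose the template t maximizing dot(noisy, t) - 0.5*||t||^2."""
    noisy = stimulus + rng.normal(0.0, noise_sd, stimulus.shape)
    scores = [noisy.ravel() @ t.ravel() - 0.5 * (t.ravel() @ t.ravel())
              for t in templates]
    return int(np.argmax(scores))

rng = np.random.default_rng(8)
templates = [rng.random((8, 8)) for _ in range(10)]  # stand-ins for filtered optotypes
trials, correct = 2000, 0
for _ in range(trials):
    k = int(rng.integers(10))
    correct += template_classify(templates[k], templates, noise_sd=1.0, rng=rng) == k
print("proportion correct:", correct / trials)       # tuning noise_sd matches data
```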

  4. Reassessing Geophysical Models of the Bushveld Complex in 3D

    NASA Astrophysics Data System (ADS)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) Separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959) 2) Separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987) 3) A single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998) Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account effects of variations in geometry and geophysical properties of lithologies in a full three dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models, when modelling in 3D. The three published geophysical models were remodelled using full 3Dl potential field modelling software, and including crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less

  5. The Kuramoto model in complex networks

    NASA Astrophysics Data System (ADS)

    Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

    2016-01-01

    Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
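
    For reference, each oscillator in the Kuramoto model obeys dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i). A minimal sketch on a random graph follows; the graph, coupling values, and plain Euler integration are illustrative choices, not taken from the review.

```python
import numpy as np

def kuramoto_order(A, K, omega, theta0, dt=0.01, steps=5000):
    """Euler-integrate the Kuramoto model on adjacency matrix A and return
    the final order parameter r = |mean(exp(i*theta))| (r ~ 1 means synchrony)."""
    theta = theta0.copy()
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta += dt * (omega + K * coupling)
    return np.abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(0)
N = 100
A = np.triu(rng.random((N, N)) < 0.1, 1)   # illustrative random graph
A = (A + A.T).astype(float)                # symmetric, no self-loops
omega = rng.normal(0.0, 1.0, N)            # heterogeneous natural frequencies
theta0 = rng.uniform(0.0, 2.0 * np.pi, N)
for K in (0.0, 0.2, 0.5):                  # order should grow with coupling K
    print(K, round(kuramoto_order(A, K, omega, theta0), 3))
```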

  6. Modeling of Protein Binary Complexes Using Structural Mass Spectrometry Data

    SciTech Connect

    Amisha Kamal, J.; Chance, M.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints--positive and/or negative--in the docking step and are also used to decide the type of energy filter--electrostatics or desolvation--in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure.

  7. Computational Modeling of T Cell Receptor Complexes.

    PubMed

    Riley, Timothy P; Singh, Nishant K; Pierce, Brian G; Weng, Zhiping; Baker, Brian M

    2016-01-01

    T-cell receptor (TCR) binding to peptide/MHC determines specificity and initiates signaling in antigen-specific cellular immune responses. Structures of TCR-pMHC complexes have provided enormous insight to cellular immune functions, permitted a rational understanding of processes such as pathogen escape, and led to the development of novel approaches for the design of vaccines and other therapeutics. As production, crystallization, and structure determination of TCR-pMHC complexes can be challenging, there is considerable interest in modeling new complexes. Here we describe a rapid approach to TCR-pMHC modeling that takes advantage of structural features conserved in known complexes, such as the restricted TCR binding site and the generally conserved diagonal docking mode. The approach relies on the powerful Rosetta suite and is implemented using the PyRosetta scripting environment. We show how the approach can recapitulate changes in TCR binding angles and other structural details, and highlight areas where careful evaluation of parameters is needed and alternative choices might be made. As TCRs are highly sensitive to subtle structural perturbations, there is room for improvement. Our method nonetheless generates high-quality models that can be foundational for structure-based hypotheses regarding TCR recognition.

  9. Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Reddy, C. J.

    2011-01-01

    This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.

  10. Computing the complexity for Schelling segregation models

    NASA Astrophysics Data System (ADS)

    Gerhold, Stefan; Glebsky, Lev; Schneider, Carsten; Weiss, Howard; Zimmermann, Burkhard

    2008-12-01

    The Schelling segregation models are "agent based" population models, where individual members of the population (agents) interact directly with other agents and move in space and time. In this note we study one-dimensional Schelling population models as finite dynamical systems. We define a natural notion of entropy which measures the complexity of the family of these dynamical systems. The entropy counts the asymptotic growth rate of the number of limit states. We find formulas and deduce precise asymptotics for the number of limit states, which enable us to explicitly compute the entropy.
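
    A brute-force version of this counting can be sketched for one simple one-dimensional variant: enumerate every ring configuration, run a deterministic swap dynamics to its limit state, and count the distinct outcomes. The specific update rule below (swap the first unhappy pair of opposite types) is an assumption for illustration, not necessarily the paper's exact system.

```python
from itertools import product

def is_happy(state, i, r=1):
    """An agent is happy if at least half of its 2r ring neighbours share its type."""
    n = len(state)
    nbrs = [state[(i + d) % n] for d in range(-r, r + 1) if d != 0]
    return 2 * sum(1 for s in nbrs if s == state[i]) >= len(nbrs)

def evolve(state, r=1, max_iter=500):
    """Swap the first pair of unhappy agents of opposite types, repeatedly,
    until no such pair exists (a limit state) or max_iter is exceeded."""
    state = list(state)
    for _ in range(max_iter):
        unhappy = [i for i in range(len(state)) if not is_happy(state, i, r)]
        pair = next(((i, j) for i in unhappy for j in unhappy
                     if state[i] != state[j]), None)
        if pair is None:
            break
        i, j = pair
        state[i], state[j] = state[j], state[i]
    return tuple(state)

n = 8                                # enumerate all 2^n ring configurations
limits = {evolve(cfg) for cfg in product((1, -1), repeat=n)}
print(n, len(limits))                # the growth rate of this count defines the entropy
```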

  11. Seismic modeling of complex stratified reservoirs

    NASA Astrophysics Data System (ADS)

    Lai, Hung-Liang

    Turbidite reservoirs in deep-water depositional systems, such as the oil fields in the offshore Gulf of Mexico and North Sea, are becoming an important exploration target in the petroleum industry. Accurate seismic reservoir characterization, however, is complicated by the heterogeneity of the sand and shale distribution and also by the lack of resolution when imaging thin channel deposits. Amplitude variation with offset (AVO) is a very important technique that is widely applied to locate hydrocarbons. Because of these problems, inaccurate estimates of seismic reflection amplitudes may result in misleading interpretations when AVO is applied to turbidite reservoirs. Therefore, an efficient, accurate, and robust method of modeling seismic responses for such complex reservoirs is crucial and necessary to reduce exploration risk. A fast and accurate approach generating synthetic seismograms for such reservoir models combines wavefront construction ray tracing with composite reflection coefficients in a hybrid modeling algorithm. The wavefront construction approach is a modern, fast implementation of ray tracing that I have extended to model quasi-shear wave propagation in anisotropic media. Composite reflection coefficients, which are computed using propagator matrix methods, provide the exact seismic reflection amplitude for a stratified reservoir model. This is a distinct improvement over conventional AVO analysis based on a model with only two homogeneous half spaces. I combine the two methods to compute synthetic seismograms for test models of turbidite reservoirs in the Ursa field, Gulf of Mexico, validating the new results against exact calculations using the discrete wavenumber method. The new method can also be used to generate synthetic seismograms for laterally heterogeneous, complex stratified reservoir models. The results show important frequency dependence that may be useful for exploration. Because turbidite channel systems often display complex

  12. Human driven transitions in complex model ecosystems

    NASA Astrophysics Data System (ADS)

    Harfoot, Mike; Newbold, Tim; Tittensor, Derek; Purves, Drew

    2015-04-01

    Human activities have been observed to impact ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure, and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying ecosystem functioning, represent only a small proportion of organisms, and do not adequately capture the complex non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate the boundaries of complex ecosystems under human perturbation. We explore several drivers, including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal, and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model., PLoS Biol. 12, e1001841 (2014).

  13. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
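
    The Response Surface Methods mentioned here boil down to fitting a low-order polynomial regression to designed test points. The sketch below fits a full second-order model by least squares to synthetic data; the two coded factors and the response are hypothetical stand-ins for wind-tunnel variables, not the study's actual design.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns of a full second-order model: intercept, x_i, x_i^2, x_i*x_j."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(30, 2))          # two coded factors, 30 runs
y = (0.1 + 0.8 * X[:, 0] - 0.3 * X[:, 1] ** 2     # synthetic "aerodynamic" response
     + 0.2 * X[:, 0] * X[:, 1] + rng.normal(0.0, 0.02, 30))
beta, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(np.round(beta, 3))    # recovered intercept, linear, quadratic, interaction terms
```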

  14. Impedance control complements incomplete internal models under complex external dynamics.

    PubMed

    Tomi, Naoki; Gouko, Manabu; Ito, Koji

    2008-01-01

    In this paper, we investigate motor adaptation of human arm movements to external dynamics. In an experiment, we tried to determine whether humans can learn an internal model of a mixed force field (V+P) that was the sum of a velocity-dependent force field (V) and a position-dependent force field (P). The experimental results show that the subjects did not learn the internal model of V+P accurately and they compensated for the loads by using impedance control. Our results suggest that humans use impedance control when internal models become inaccurate because of the complexity of the external dynamics.

  15. Intrinsic Uncertainties in Modeling Complex Systems.

    SciTech Connect

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  16. Different Epidemic Models on Complex Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Feng; Small, Michael; Fu, Xin-Chu

    2009-07-01

    Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, the susceptible individuals can be classified into different cases according to their immunity, and similarly, the infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that individuals' relations can be viewed as a complex network. So in this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
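
    As one concrete example of such a threshold, the sketch below compares the standard heterogeneous mean-field estimate for SIS dynamics on an uncorrelated network, lambda_c ~ mu*<k>/<k^2>, with a direct stochastic simulation. The network, rates, and time stepping are illustrative assumptions.

```python
import numpy as np

def sis_prevalence(A, lam, mu=1.0, dt=0.01, steps=5000, seed=3):
    """Discrete-time SIS: per-contact infection rate lam, recovery rate mu."""
    rng = np.random.default_rng(seed)
    x = (rng.random(len(A)) < 0.1).astype(float)   # ~10% initially infected
    for _ in range(steps):
        p_inf = 1.0 - (1.0 - lam * dt) ** (A @ x)  # prob. of >= 1 infection in dt
        x = np.where(rng.random(len(A)) < p_inf, 1.0, x)
        x = np.where(rng.random(len(A)) < mu * dt, 0.0, x)
    return x.mean()

rng = np.random.default_rng(2)
N = 300
A = np.triu(rng.random((N, N)) < 0.02, 1)
A = (A + A.T).astype(float)
k = A.sum(axis=1)
print("mean-field threshold ~", round(k.mean() / (k ** 2).mean(), 3))
for lam in (0.05, 0.1, 0.3):                       # below vs above the threshold
    print(lam, round(sis_prevalence(A, lam), 3))
```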

  17. Noncommutative complex Grosse-Wulkenhaar model

    SciTech Connect

    Hounkonnou, Mahouton Norbert; Samary, Dine Ousmane

    2008-11-18

    This paper presents an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc. 956 (2007) 55-60], to the NC complex Grosse-Wulkenhaar model. It provides an extension of a recent work [Physics Letters B 653 (2007) 343-345]. The local conservation of energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.

  18. The noisy voter model on complex networks

    NASA Astrophysics Data System (ADS)

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-04-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing us to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population-level variables can be measured.
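
    One common formulation of the noisy voter model is easy to simulate directly: at each step a randomly chosen node either adopts a random state (with the noise probability) or copies a random neighbour. The sketch below uses an illustrative random graph and parameters; it is not the authors' annealed-approximation calculation.

```python
import numpy as np

def noisy_voter(A, a=0.02, steps=100000, seed=4):
    """Noisy voter dynamics: with prob. a the chosen node takes a random state,
    otherwise it copies a random neighbour. Returns the magnetisation trace."""
    rng = np.random.default_rng(seed)
    N = len(A)
    nbrs = [np.flatnonzero(A[i]) for i in range(N)]
    s = rng.integers(0, 2, N)
    mags = np.empty(steps)
    for t in range(steps):
        i = int(rng.integers(N))
        if rng.random() < a or len(nbrs[i]) == 0:
            s[i] = rng.integers(0, 2)
        else:
            s[i] = s[rng.choice(nbrs[i])]
        mags[t] = s.mean()
    return mags

rng = np.random.default_rng(5)
N = 200
A = np.triu(rng.random((N, N)) < 0.05, 1).astype(int)
A = A + A.T
m = noisy_voter(A)
print("mean:", round(m.mean(), 3), "variance:", round(m.var(), 4))
```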

  20. Complexity of groundwater models in catchment hydrological models

    NASA Astrophysics Data System (ADS)

    Attinger, Sabine; Herold, Christian; Kumar, Rohini; Mai, Juliane; Ross, Katharina; Samaniego, Luis; Zink, Matthias

    2015-04-01

    In catchment hydrological models, groundwater is usually modeled very simply: it is conceptualized as a linear reservoir that receives water from the overlying unsaturated-zone reservoir and releases water to the river system as baseflow. Baseflow is only a minor component of the total river flow, so groundwater reservoir parameters are difficult to estimate inversely from river flow data alone. In addition, the modelled values of the absolute height of the water filling the groundwater reservoir - in other words, the groundwater levels - are of limited meaning, due to the coarse or absent spatial resolution of the groundwater compartment and to the fact that only river flow data are used for the calibration. The talk focuses on the question: which complexity, in terms of model structure and spatial resolution, is necessary to characterize groundwater processes and groundwater responses adequately in distributed catchment hydrological models? Starting from a spatially distributed catchment hydrological model whose groundwater compartment is conceptualized as a linear reservoir, we increase the groundwater model complexity and its spatial resolution stepwise to investigate which resolution, which complexity, and which data are needed to reproduce baseflow and groundwater level data adequately.
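
    The linear-reservoir conceptualization that the talk starts from is simply dS/dt = R - k*S with baseflow Q = k*S. A minimal sketch with a hypothetical recharge series (all values illustrative) shows the characteristic exponential recession of baseflow.

```python
import numpy as np

def linear_reservoir(recharge, k=0.05, s0=0.0):
    """Explicit daily stepping of dS/dt = R - k*S; baseflow is Q = k*S.
    k [1/day] is the recession constant; recharge R is in mm/day."""
    s, q = s0, []
    for r in recharge:
        s += r - k * s
        q.append(k * s)
    return np.asarray(q)

recharge = np.zeros(365)
recharge[:30] = 2.0                        # hypothetical 30-day recharge pulse
q = linear_reservoir(recharge)
print(round(q[29], 3), round(q[120], 3))   # baseflow decays by (1 - k) per day
```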

  1. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
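
    For context, the Scheffé canonical polynomial that these models build on can be fitted by ordinary least squares on a simplex design. The sketch below uses a hypothetical three-component design with synthetic responses; it illustrates the classical starting point, not the authors' new model class.

```python
import numpy as np

def scheffe_quadratic(X):
    """Scheffe's canonical second-order mixture model: terms x_i and x_i*x_j.
    No intercept, because the proportions in each row sum to one."""
    n, q = X.shape
    cols = [X[:, i] for i in range(q)]
    cols += [X[:, i] * X[:, j] for i in range(q) for j in range(i + 1, q)]
    return np.column_stack(cols)

X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],        # simplex-centroid design
              [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
              [1 / 3, 1 / 3, 1 / 3]], dtype=float)
y = np.array([4.1, 5.2, 3.0, 6.3, 3.9, 4.7, 5.1])     # synthetic responses
beta, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)
print(np.round(beta, 2))      # three linear blending terms, three binary ones
```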

  2. Experiments on a Model Eye

    ERIC Educational Resources Information Center

    Arell, Antti; Kolari, Samuli

    1978-01-01

    Explains a laboratory experiment dealing with the optical features of the human eye. Shows how to measure the magnification of the retina and how the refractive anomaly of the eye could be used to measure the refractive power of the observer's eye. (GA)

  3. Complex Constructivism: A Theoretical Model of Complexity and Cognition

    ERIC Educational Resources Information Center

    Doolittle, Peter E.

    2014-01-01

    Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

  4. Magnetic modeling of the Bushveld Igneous Complex

    NASA Astrophysics Data System (ADS)

    Webb, S. J.; Cole, J.; Letts, S. A.; Finn, C.; Torsvik, T. H.; Lee, M. D.

    2009-12-01

    Magnetic modeling of the 2.06 Ga Bushveld Complex presents special challenges due to a variety of magnetic effects. These include strong remanence in the Main Zone and extremely high magnetic susceptibilities in the Upper Zone, which exhibit self-demagnetization. Recent palaeomagnetic results have resolved a long-standing discrepancy between age data, which constrain the emplacement to within 1 million years, and older palaeomagnetic data, which suggested ~50 million years for emplacement. The new palaeomagnetic results agree with the age data and present a single consistent pole, as opposed to a long polar wander path, for the Bushveld for all of the Zones and all of the limbs. These results also pass a fold test, indicating the Bushveld Complex was emplaced horizontally, lending support to arguments for connectivity. The magnetic signature of the Bushveld Complex provides an ideal mapping tool, as the UZ has high susceptibility values and is well layered, showing up as distinct anomalies on new high resolution magnetic data. However, this signature is similar to the highly magnetic BIFs found in the Transvaal and in the Witwatersrand Supergroups. Through careful mapping using new high resolution aeromagnetic data, we have been able to map the Bushveld UZ in complicated geological regions and identify a characteristic signature with well defined layers. The Main Zone, which has a more subdued magnetic signature, does have a strong remanent component and exhibits several magnetic reversals. The magnetic layers of the UZ contain layers of magnetitite with as much as 80-90% pure magnetite with large crystals (1-2 cm). While these layers are not strongly remanent, they have extremely high magnetic susceptibilities, and the self-demagnetization effect must be taken into account when modeling these layers. Because the Bushveld Complex is so large, the geometry of the Earth’s magnetic field relative to the layers of the UZ Bushveld Complex changes orientation, creating

  5. Modeling the human prothrombinase complex components

    NASA Astrophysics Data System (ADS)

    Orban, Tivadar

    Thrombin generation is the culminating stage of the blood coagulation process. Thrombin is obtained from prothrombin (the substrate) in a reaction catalyzed by the prothrombinase complex (the enzyme). The prothrombinase complex is composed of factor Xa (the enzyme) and factor Va (the cofactor), associated in the presence of calcium ions on a negatively charged cell membrane. Factor Xa alone can activate prothrombin to thrombin; however, the rate of conversion is not physiologically relevant for survival. Incorporation of factor Va into prothrombinase accelerates the rate of prothrombinase activity 300,000-fold and provides the physiological pathway of thrombin generation. The long-term goal of the current proposal is to provide the necessary support for advancing studies to design potential drug candidates that may be used to avoid the development of deep venous thrombosis in high-risk patients. The short-term goals of the present proposal are (1) to propose a model of a mixed asymmetric phospholipid bilayer, (2) to expand the incomplete model of human coagulation factor Va and study its interaction with the phospholipid bilayer, (3) to create a homology model of prothrombin, and (4) to study the dynamics of interaction between prothrombin and the phospholipid bilayer.

  6. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1995-01-01

    This program started in February 1991, and is designed to improve our understanding of basic combustion phenomena by the modeling of various configurations undergoing experimental study by others. Results through 1992 were reported in the second workshop. Work since that time has examined the following topics: Flame-balls; Intrinsic and acoustic instabilities in multiphase mixtures; Radiation effects in premixed combustion; Smouldering, both forward and reverse, as well as two dimensional smoulder.

  7. An experiment with interactive planning models

    NASA Technical Reports Server (NTRS)

    Beville, J.; Wagner, J. H.; Zannetos, Z. S.

    1970-01-01

    Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investments and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management, and predicted uses of the models are as a central tool, as an educational tool, to improve consistency in decision making, to improve communications, and as a tool for consensus decision making.

  8. Lateral organization of complex lipid mixtures from multiscale modeling

    PubMed Central

    Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.

    2010-01-01

    The organizational properties of complex lipid mixtures can give rise to functionally important structures in cell membranes. In model membranes, ternary lipid-cholesterol (CHOL) mixtures are often used as representative systems to investigate the formation and stabilization of localized structural domains (“rafts”). In this work, we describe a self-consistent mean-field model that builds on molecular dynamics simulations to incorporate multiple lipid components and to investigate the lateral organization of such mixtures. The model predictions reveal regions of bimodal order on ternary plots that are in good agreement with experiment. Specifically, we have applied the model to ternary mixtures composed of dioleoylphosphatidylcholine:18:0 sphingomyelin:CHOL. This work provides insight into the specific intermolecular interactions that drive the formation of localized domains in these mixtures. The model makes use of molecular dynamics simulations to extract interaction parameters and to provide chain configuration order parameter libraries. PMID:20151760

  9. Wind modelling over complex terrain using CFD

    NASA Astrophysics Data System (ADS)

    Avila, Matias; Owen, Herbert; Folch, Arnau; Prieto, Luis; Cosculluela, Luis

    2015-04-01

    The present work deals with the numerical CFD modelling of onshore wind farms in the context of High Performance Computing (HPC). The CFD model involves the numerical solution of the Reynolds-Averaged Navier-Stokes (RANS) equations together with a κ-ɛ turbulence model and the energy equation, specially designed for Atmospheric Boundary Layer (ABL) flows. The aim is to predict the wind velocity distribution over complex terrain, using a model that includes meteorological data assimilation, thermal coupling, forested canopy and Coriolis effects. The modelling strategy involves automatic mesh generation, terrain data assimilation and generation of boundary conditions for the inflow wind flow distribution up to the geostrophic height. The CFD model has been implemented in Alya, an HPC multi-physics parallel solver able to run with thousands of processors with optimal scalability, developed at the Barcelona Supercomputing Center. The implemented thermal stability and canopy physical model was developed by Sogachev in 2012. The k-ɛ equations are of non-linear convection-diffusion-reaction type. The implemented numerical scheme consists of a stabilized finite element formulation based on the variational multiscale method, which is known to be stable for this kind of turbulence equations. We present a numerical formulation that stresses the robustness of the solution method, tackling common problems that produce instability. The iterative strategy and linearization scheme are discussed. The scheme is designed to avoid negative values of diffusion during the iterative process, which may lead to divergence. These problems are addressed by acting on the coefficients of the reaction and diffusion terms and on the turbulent variables themselves. The k-ɛ equations are highly nonlinear. Complex terrain induces transient flow instabilities that may preclude the convergence of computer flow simulations based on steady state formulation of the

  10. In Vivo Experiments with Dental Pulp Stem Cells for Pulp-Dentin Complex Regeneration

    PubMed Central

    Kim, Sunil; Shin, Su-Jung; Song, Yunjung; Kim, Euiseong

    2015-01-01

    In recent years, many studies have examined pulp-dentin complex regeneration with DPSCs. While it is important to perform research on cells, scaffolds, and growth factors, it is also critical to develop animal models for preclinical trials. The development of a reproducible animal model of transplantation is essential for obtaining precise and accurate data in vivo. The efficacy of pulp regeneration should be assessed qualitatively and quantitatively using animal models. This review article sought to introduce in vivo experiments that have evaluated the potential of dental pulp stem cells for pulp-dentin complex regeneration. According to a review of the various studies of DPSCs, the majority have used subcutaneous mouse and dog tooth models. There is no way to know which animal model will reproduce the clinical environment. If an animal model is developed which is easier to use and is useful in more situations than the currently popular models, it will be a substantial aid to studies examining pulp-dentin complex regeneration. PMID:26688616

  11. The Database for Reaching Experiments and Models

    PubMed Central

    Walker, Ben; Kording, Konrad

    2013-01-01

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis. PMID:24244351

  12. Inexpensive Complex Hand Model Twenty Years Later.

    PubMed

    Frenger, Paul

    2015-01-01

    Twenty years ago the author unveiled his inexpensive complex hand model, which reproduced every motion of the human hand. A control system programmed in the Forth language operated its actuators and sensors. Follow-on papers for this popular project were next presented in Texas, Canada and Germany. From this hand grew the author’s meter-tall robot (nicknamed ANNIE: Android With Neural Networks, Intellect and Emotions). It received machine vision, facial expressiveness, speech synthesis and speech recognition; a simian version also received a dexterous ape foot. New artificial intelligence features included op-amp neurons for OCR and simulated emotions, hormone emulation, endocannabinoid receptors, fear-trust-love mechanisms, a Grandmother Cell recognizer and artificial consciousness. Simulated illnesses included narcotic addiction, autism, PTSD, fibromyalgia and Alzheimer’s disease. The author gave 13 robotics-AI presentations at NASA in Houston since 2006. A meter-tall simian robot was proposed with gripping hand-feet for use with space vehicles and to explore distant planets and moons. Also proposed were: intelligent motorized exoskeletons for astronaut force multiplication; a cognitive prosthesis to detect and alleviate decreased crew mental performance; and a gynoid robot medic to tend astronauts in deep space missions. What began as a complex hand model evolved into an innovative robot-AI within two decades. PMID:25996742

  14. Latent Hierarchical Model of Temporal Structure for Complex Activity Classification.

    PubMed

    Wang, Limin; Qiao, Yu; Tang, Xiaoou

    2014-02-01

    Modeling the temporal structure of sub-activities is an important yet challenging problem in complex activity classification. This paper proposes a latent hierarchical model (LHM) to describe the decomposition of complex activity into sub-activities in a hierarchical way. The LHM has a tree structure, where each node corresponds to a video segment (sub-activity) at a certain temporal scale. The starting and ending time points of each sub-activity are represented by two latent variables, which are automatically determined during the inference process. We formulate the training problem of the LHM in a latent kernelized SVM framework and develop an efficient cascade inference method to speed up classification. The advantages of our methods come from: 1) LHM models the complex activity with a deep structure, which is decomposed into sub-activities in a coarse-to-fine manner and 2) the starting and ending time points of each segment are adaptively determined to deal with the temporal displacement and duration variation of sub-activity. We conduct experiments on three datasets: 1) the KTH; 2) the Hollywood2; and 3) the Olympic Sports. The experimental results show the effectiveness of the LHM in complex activity classification. With dense features, our LHM achieves the state-of-the-art performance on the Hollywood2 dataset and the Olympic Sports dataset.

  15. Using Perspective to Model Complex Processes

    SciTech Connect

    Kelsey, R.L.; Bisset, K.R.

    1999-04-04

    The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.

  16. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Technical Reports Server (NTRS)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  18. Modeling the respiratory chain complexes with biothermokinetic equations - the case of complex I.

    PubMed

    Heiske, Margit; Nazaret, Christine; Mazat, Jean-Pierre

    2014-10-01

    The mitochondrial respiratory chain plays a crucial role in energy metabolism and its dysfunction is implicated in a wide range of human diseases. In order to understand the global expression of local mutations in the rate of oxygen consumption or in the production of adenosine triphosphate (ATP), it is useful to have a mathematical model in which the changes in a given respiratory complex are properly modeled. Our aim in this paper is to provide thermodynamics-respecting and structurally simple equations to represent the kinetics of each isolated complex, which, assembled into a dynamical system, can also simulate the behavior of the respiratory chain as a whole under a large set of different physiological and pathological conditions. Using the example of the reduced nicotinamide adenine dinucleotide (NADH)-ubiquinol-oxidoreductase (complex I), we analyze the suitability of different types of rate equations. Based on our kinetic experiments, we show that very simple rate laws, such as those often used in many respiratory chain models, fail to describe the kinetic behavior when applied to a wide concentration range. This led us to adapt rate equations containing the essential parameters of enzyme kinetics, maximal velocities and Henri-Michaelis-Menten-like constants (KM and KI), to satisfactorily simulate these data. PMID:25064016
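
    The kind of rate law the authors argue for, a Henri-Michaelis-Menten-like expression carrying both a KM and a KI, can be sketched generically as below. The functional form and all constants are placeholders chosen to show the qualitative point (rise, saturation, then inhibition at high substrate), not the paper's fitted equation.

```python
def complex_i_rate(nadh, q, vmax=1.0, km_nadh=0.1, km_q=0.02, ki_nadh=5.0):
    """Michaelis-Menten-like rate with a substrate-inhibition term for NADH.
    All constants (nominally mM) are placeholders, not fitted values."""
    f_nadh = nadh / (km_nadh + nadh + nadh ** 2 / ki_nadh)
    f_q = q / (km_q + q)
    return vmax * f_nadh * f_q

# over a wide NADH range the rate rises, saturates, then falls again,
# which a bare Michaelis-Menten law cannot reproduce
for nadh in (0.01, 0.1, 1.0, 10.0):
    print(nadh, round(complex_i_rate(nadh, q=0.1), 4))
```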

  19. Geomorphological experiments for understanding cross-scale complexity of earth surface processes

    NASA Astrophysics Data System (ADS)

    Seeger, Manuel

    2016-04-01

    The shape of the earth's surface is the result of complex interactions of different processes at different spatial and temporal scales. The challenge is that direct process observation is rarely possible across these different scales. In addition, the resulting landform often does not match the scale at which processes can be observed. Yet identifying and understanding the processes involved and their interactions is indispensable for developing concepts of landform formation, and developing models further requires quantifying these processes and their relevant parameters. Experiments can bridge these constraints on process observation: they make it possible to observe and quantify individual processes as well as complex process combinations, up to the development of geomorphological units. Drawing on soil erosion research, this contribution aims to show what experimental methods can contribute to the understanding of geomorphological processes. Special emphasis is put on the linkage between the conceptual understanding of processes, their measurement, and the subsequent development of models. The development of experiments to quantify relevant parameters is shown, as well as the steps undertaken to bring them into the field, taking into account the resulting increase of uncertainty in system parameters and results. It is shown that experiments are nevertheless able to produce precise measurements of individual processes as well as of complex combinations of parameters and processes, and to identify their influence on the overall geomorphological dynamics. Experiments are therefore a methodological package for examining complex soil erosion processes at different levels of conceptualization and for generating the data needed to quantify them, and thus an approach that deserves wider adoption and further development in geomorphological science.

  20. Mathematical modelling of complex contagion on clustered networks

    NASA Astrophysics Data System (ADS)

    O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of the social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010) to study disease spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
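
    The social-reinforcement mechanism itself is easy to demonstrate with Watts-style threshold dynamics (a node adopts once at least two neighbours have), used here as a simpler stand-in for the paper's generating-function analysis. On the clustered ring lattice below, a contiguous seed spreads deterministically because overlapping neighbourhoods supply the required double exposure; on a comparably sparse random graph the same seed typically stalls.

```python
import numpy as np

def complex_contagion(A, adopted, threshold=2, steps=1000):
    """Threshold dynamics: a node adopts once at least `threshold`
    of its neighbours have adopted (social reinforcement)."""
    adopted = adopted.copy()
    for _ in range(steps):
        new = (~adopted) & ((A @ adopted) >= threshold)
        if not new.any():
            break
        adopted |= new
    return adopted.mean()

N = 200                                  # clustered ring lattice: links to the
A = np.zeros((N, N))                     # two nearest neighbours on each side
for i in range(N):
    for d in (1, 2):
        A[i, (i + d) % N] = A[i, (i - d) % N] = 1.0

seed = np.zeros(N, dtype=bool)
seed[:4] = True                          # contiguous seed neighbourhood
print("final adoption fraction:", complex_contagion(A, seed))
```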

  1. Ants (Formicidae): models for social complexity.

    PubMed

    Smith, Chris R; Dolezal, Adam; Eliyahu, Dorit; Holbrook, C Tate; Gadau, Jürgen

    2009-07-01

    The family Formicidae (ants) is composed of more than 12,000 described species that vary greatly in size, morphology, behavior, life history, ecology, and social organization. Ants occur in most terrestrial habitats and are the dominant animals in many of them. They have been used as models to address fundamental questions in ecology, evolution, behavior, and development. The literature on ants is extensive, and the natural history of many species is known in detail. Phylogenetic relationships for the family, as well as within many subfamilies, are known, enabling comparative studies. Their ease of sampling and ecological variation makes them attractive for studying populations and questions relating to communities. Their sociality and variation in social organization have contributed greatly to an understanding of complex systems, division of labor, and chemical communication. Ants occur in colonies composed of tens to millions of individuals that vary greatly in morphology, physiology, and behavior; this variation has been used to address proximate and ultimate mechanisms generating phenotypic plasticity. Relatedness asymmetries within colonies have been fundamental to the formulation and empirical testing of kin and group selection theories. Genomic resources have been developed for some species, and a whole-genome sequence for several species is likely to follow in the near future; comparative genomics in ants should provide new insights into the evolution of complexity and sociogenomics. Future studies using ants should help establish a more comprehensive understanding of social life, from molecules to colonies. PMID:20147200

  2. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  3. CALIBRATION OF SUBSURFACE BATCH AND REACTIVE-TRANSPORT MODELS INVOLVING COMPLEX BIOGEOCHEMICAL PROCESSES

    EPA Science Inventory

    In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...

  4. An Experiment on Isomerism in Metal-Amino Acid Complexes.

    ERIC Educational Resources Information Center

    Harrison, R. Graeme; Nolan, Kevin B.

    1982-01-01

    Background information, laboratory procedures, and discussion of results are provided for syntheses of cobalt (III) complexes, I-III, illustrating three possible bonding modes of glycine to a metal ion (the complex cations II and III being linkage/geometric isomers). Includes spectrophotometric and potentiometric methods to distinguish among the…

  5. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is a lack of advice on how much data is required to justify a certain level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with an increasing amount of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
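
    The machinery behind this kind of model selection weights competing models by their Bayesian model evidence. A toy sketch of that computation follows, with two hypothetical models for the mean of synthetic data and the evidence of the second approximated by simple Monte Carlo; it only illustrates the principle, not the study's actual evidence calculations.

```python
import numpy as np

def posterior_model_weights(log_evidence):
    """Posterior model probabilities from log-evidences, assuming equal priors."""
    le = np.asarray(log_evidence, dtype=float)
    w = np.exp(le - le.max())               # shift for numerical stability
    return w / w.sum()

rng = np.random.default_rng(7)
data = rng.normal(0.3, 1.0, 20)             # synthetic observations

def log_lik(mu):                            # Gaussian likelihood, unit variance
    return -0.5 * np.sum((data - mu) ** 2) - 0.5 * len(data) * np.log(2 * np.pi)

le0 = log_lik(0.0)                          # M0: mean fixed at zero
mus = rng.normal(0.0, 1.0, 10000)           # M1: mean with a N(0, 1) prior
ratios = np.exp(np.array([log_lik(m) for m in mus]) - le0)
le1 = np.log(ratios.mean()) + le0           # Monte Carlo estimate of the evidence
print(posterior_model_weights([le0, le1]))  # extra flexibility is penalised automatically
```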

  6. Troposphere-lower-stratosphere connection in an intermediate complexity model.

    NASA Astrophysics Data System (ADS)

    Ruggieri, Paolo; King, Martin; Kucharski, Fred; Buizza, Roberto; Visconti, Guido

    2016-04-01

    The dynamical coupling between the troposphere and the lower stratosphere has been investigated using a low-top, intermediate-complexity model provided by the Abdus Salam International Centre for Theoretical Physics (SPEEDY). The key question that we wanted to address is whether a simple model like SPEEDY can be used to understand troposphere-stratosphere interactions, e.g. those forced by changes of sea-ice concentration in polar arctic regions. Three sets of experiments have been performed. Firstly, a potential vorticity perspective has been applied to understand the wave-like forcing of the troposphere on the stratosphere and to provide quantitative information on the sub-seasonal variability of the coupling. Then, the zonally asymmetric, near-surface response to a lower-stratospheric forcing has been analysed in a set of forced experiments with an artificial heating imposed in the extra-tropical lower stratosphere. Finally, the sensitivity of the lower-stratosphere response to tropospheric initial conditions has been examined. Results indicate that SPEEDY captures the physics of the troposphere-stratosphere connection but also reveal a lack of stratospheric variability. Results also suggest that intermediate-complexity models such as SPEEDY could be used to investigate the effects that surface forcing (e.g. due to sea-ice concentration changes) has on the troposphere and the lower stratosphere.

  7. Analytical models for complex swirling flows

    NASA Astrophysics Data System (ADS)

    Borissov, A.; Hussain, V.

    1996-11-01

    We develop a new class of analytical solutions of the Navier-Stokes equations for swirling flows and suggest ways to predict and control such flows occurring in various technological applications. We view momentum accumulation on the axis as a key feature of swirling flows and consider vortex-sink flows on curved axisymmetric surfaces with an axial flow. We show that these solutions model swirling flows in a cylindrical can, whirlpools, tornadoes, and cosmic swirling jets. The singularity of these solutions on the flow axis is removed by matching them with near-axis Schlichting and Long's swirling jets. The matched solutions model flows with very complex patterns, consisting of up to seven separation regions with recirculatory 'bubbles' and vortex rings. We apply the matched solutions to compute flows in the Ranque-Hilsch tube, in the meniscus of electrosprays, in vortex breakdown, and in an industrial vortex burner. The simple analytical solutions allow a clear understanding of how different control parameters affect the flow and guide selection of optimal parameter values for desired flow features. These solutions permit extension to other problems (such as heat transfer and chemical reaction) and provide a sound basis for further detailed investigation by direct or large-eddy numerical simulations as well as laboratory experimentation.

  8. Advanced Combustion Modeling for Complex Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Ham, Frank Stanford

    2005-01-01

    The next generation of aircraft engines will need to pass stricter efficiency and emission tests. NASA's Ultra-Efficient Engine Technology (UEET) program has set an ambitious goal of a 70% reduction in NO(x) emissions and a 15% increase in fuel efficiency of aircraft engines. We will demonstrate the state-of-the-art combustion tools developed at Stanford's Center for Turbulence Research (CTR) as part of this program. In the last decade, CTR has spearheaded a multi-physics-based combustion modeling program. Key technologies have been transferred to the aerospace industry and are currently being used for engine simulations. In this demo, we will showcase the next-generation combustion modeling tools that integrate a very high level of detailed physics into advanced flow simulation codes. Combustor flows involve multi-phase physics with liquid fuel jet breakup, evaporation, and eventual combustion. Individual components of the simulation are verified against complex test cases and show excellent agreement with experimental data.

  9. Modeling choice and valuation in decision experiments.

    PubMed

    Loomes, Graham

    2010-07-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for participants to perceive probabilities and payoffs in a particular way. This model organizes more of the data than any other extant model and generates a number of novel testable implications which are examined with new data.

  10. STELLA Experiment: Design and Model Predictions

    SciTech Connect

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1998-07-05

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (IFEL) will serve as a prebuncher to generate ~1 μm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the IFEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented, including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  11. Argonne Bubble Experiment Thermal Model Development

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  12. Using Ecosystem Experiments to Improve Vegetation Models

    SciTech Connect

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; Prentice, I. Collin; Thornton, Peter E.; Wang, Shusen; Wang, Yingping; Weng, Ensheng; Iversen, Colleen M.; McCarthy, Heather R.; Warren, Jeffrey; Oren, Ram; Norby, Richard J

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  13. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGES

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; et al

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions that caused differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  14. Experiences with two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Singhal, Ashok K.; Lai, Yong G.; Avva, Ram K.

    1995-01-01

    This viewgraph presentation discusses the following: introduction to CFD Research Corporation; experiences with two-equation models - models used, numerical difficulties, validation and applications, and strengths and weaknesses; and answers to three questions posed by the workshop organizing committee - what are your customers telling you, what are you doing in-house, and how can NASA-CMOTT (Center for Modeling of Turbulence and Transition) help.

  15. A Model Study of Complex Behavior in the Belousov-Zhabotinskii Reaction.

    NASA Astrophysics Data System (ADS)

    Lindberg, David Mark

    1988-12-01

    We have studied the complex oscillatory behavior in a model of the Belousov-Zhabotinskii (BZ) reaction in a continuously-fed stirred tank reactor (CSTR). The model consisted of a set of nonlinear ordinary differential equations derived from a reduced mechanism of the chemical system. These equations were integrated numerically on a computer, which yielded the concentrations of the constituent chemicals as functions of time. In addition, solutions were tracked as functions of a single parameter, the stability of the solutions was determined, and bifurcations of the solutions were located and studied. The intent of this study was to use this BZ model to explore further a region of complex oscillatory behavior found in experimental investigations, the most thorough of which revealed an alternating periodic-chaotic (P-C) sequence of states. A P-C sequence was discovered in the model which showed the same qualitative features as the experimental sequence. In order to better understand the P-C sequence, a detailed study was conducted in the vicinity of the P-C sequence, with two experimentally accessible parameters as control variables. This study mapped out the bifurcation sets, and included examination of the dynamics of the stable periodic, unstable periodic, and chaotic oscillatory motion. Observations made from the model results revealed a rough symmetry which suggests a new way of looking at the P-C sequence. Other nonlinear phenomena uncovered in the model were boundary and interior crises, several codimension-two bifurcations, and similarities in the shapes of areas of stability for periodic orbits in two-parameter space. Each earlier model study of this complex region involved only a limited one-parameter scan and had limited success in producing agreement with experiments. In contrast, for those regions of complex behavior that have been studied experimentally, the observations agree qualitatively with our model results. Several new predictions of the model
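
    The numerical procedure described here (integrating a reduced BZ mechanism and following solutions as a parameter varies) can be illustrated with the scaled three-variable Oregonator, a standard reduced BZ model; the thesis's CSTR equations and parameter values may differ, and the constants below are textbook-style placeholders.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Scaled three-variable Oregonator; eps, eps_p, q, f are illustrative
        # values, not fitted to the CSTR experiments described above.
        eps, eps_p, q, f = 1e-2, 2.5e-4, 9e-5, 1.0

        def oregonator(t, u):
            x, y, z = u  # scaled [HBrO2], [Br-], [Ce(IV)]
            dx = (q * y - x * y + x * (1.0 - x)) / eps
            dy = (-q * y - x * y + f * z) / eps_p
            dz = x - z
            return [dx, dy, dz]

        # The system is stiff; LSODA switches between stiff and non-stiff methods.
        sol = solve_ivp(oregonator, (0.0, 50.0), [0.1, 0.05, 0.1],
                        method="LSODA", rtol=1e-8, atol=1e-10)
        print(sol.y[:, -1])  # oscillatory concentrations at the final time

    Tracking how the attractor of such a system changes as a single control parameter (here f, or a flow rate) is varied is what produces the periodic-chaotic sequences discussed above.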

  16. Reduced Complexity Modeling (RCM): toward more use of less

    NASA Astrophysics Data System (ADS)

    Paola, Chris; Voller, Vaughan

    2014-05-01

    Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem like a way to test systems for this property but to our knowledge have not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion - independence of the process in question from details of the underlying physics - underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its

  17. Surface complexation model of uranyl sorption on Georgia kaolinite

    USGS Publications Warehouse

    Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.

    2004-01-01

    The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 μmol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite. © 2004 Elsevier B.V. All rights reserved.
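
    As a toy illustration of a non-electrostatic surface complexation calculation (not the paper's fitted three-reaction model), the sketch below solves the mass-action balance for a single hypothetical uranyl-titanol reaction; the log K, site density, and total U are placeholders, and the carbonate ternary complex that suppresses sorption at high pH is deliberately omitted.

        import numpy as np
        from scipy.optimize import brentq

        # One-site, non-electrostatic SCM toy:
        #   >SOH + UO2(2+)  <=>  >SOUO2(+) + H(+)
        #   K = [>SOUO2+][H+] / ([>SOH][UO2(2+)])
        # logK, site_tot, u_tot are invented, not the paper's optimised values.
        logK, site_tot, u_tot = 3.0, 1e-4, 1e-6  # (-), mol/l, mol/l

        def sorbed_fraction(pH):
            h, K = 10.0 ** -pH, 10.0 ** logK
            def balance(s):  # s = concentration of the surface complex
                return K * (site_tot - s) * (u_tot - s) - s * h
            s = brentq(balance, 0.0, min(u_tot, site_tot) * (1 - 1e-12))
            return s / u_tot

        for pH in (4, 5, 6, 7):
            print(pH, round(sorbed_fraction(pH), 3))

    Because carbonate complexation is omitted, this toy predicts monotonically increasing sorption with pH, unlike the observed high-pH decrease the paper attributes to aqueous uranyl carbonate species.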

  18. Modeling competitive substitution in a polyelectrolyte complex

    NASA Astrophysics Data System (ADS)

    Peng, B.; Muthukumar, M.

    2015-12-01

    We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain, by another longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain is required to be sufficiently longer than the chain being displaced for effecting the substitution. Yet, having the invading chain be longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
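
    A minimal Brownian (overdamped Langevin) dynamics sketch for a single coarse-grained bead-spring polyelectrolyte conveys the integration scheme involved. The paper simulates full Langevin dynamics of three interacting chains with explicit counterions, so everything here is a simplification: the alternating toy charges, the Debye-Hueckel screening standing in for explicit ions, and all parameter values are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        N, dt, steps = 16, 1e-4, 5000             # beads, time step, steps (illustrative)
        kT, gamma, k_bond, b0 = 1.0, 1.0, 500.0, 1.0
        lB, kappa = 1.0, 1.0                      # Bjerrum length, inverse Debye length
        q = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)  # toy alternating charges

        pos = np.cumsum(rng.normal(scale=0.3, size=(N, 3)), axis=0)  # random-walk start

        def forces(r):
            f = np.zeros_like(r)
            # harmonic bonds between consecutive beads
            d = r[1:] - r[:-1]
            dist = np.linalg.norm(d, axis=1, keepdims=True)
            fb = k_bond * (dist - b0) * d / dist  # pulls bead i toward i+1 if stretched
            f[:-1] += fb
            f[1:] -= fb
            # pairwise screened-Coulomb (Debye-Hueckel) forces
            for i in range(N - 1):
                rij = r[i + 1:] - r[i]
                dij = np.linalg.norm(rij, axis=1)
                mag = (kT * lB * q[i] * q[i + 1:]
                       * np.exp(-kappa * dij) * (1 + kappa * dij) / dij ** 3)
                fij = mag[:, None] * rij          # force on bead j; repulsive if like-charged
                f[i + 1:] += fij
                f[i] -= fij.sum(axis=0)
            return f

        # Euler-Maruyama step of the overdamped Langevin equation:
        #   dr = (F / gamma) dt + sqrt(2 kT dt / gamma) * N(0, 1)
        for _ in range(steps):
            pos += forces(pos) * dt / gamma + rng.normal(
                scale=np.sqrt(2 * kT * dt / gamma), size=pos.shape)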

  19. Modeling competitive substitution in a polyelectrolyte complex

    SciTech Connect

    Peng, B.; Muthukumar, M.

    2015-12-28

    We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain, by another longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain is required to be sufficiently longer than the chain being displaced for effecting the substitution. Yet, having the invading chain be longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.

  20. Newton and Colour: The Complex Interplay of Theory and Experiment.

    ERIC Educational Resources Information Center

    Martins, Roberto De Andrade; Silva, Cibelle Celestino

    2001-01-01

    Elucidates some aspects of Newton's theory of light and colors, specifically as presented in his first optical paper in 1672. Analyzes Newton's main experiments intended to show that light is a mixture of rays with different refrangibilities. (SAH)

  1. The Meduza experiment: An orbital complex ten weeks in flight

    NASA Technical Reports Server (NTRS)

    Ovcharov, V.

    1979-01-01

    The newspaper article discusses the contribution of space research to understanding the origin of life on Earth. Part of this basic research involves studying amino acids, ribonucleic acid, and DNA molecules subjected to cosmic radiation. The results from the Meduza experiment have not yet been fully analyzed. The article also discusses the psychological changes in cosmonauts as evidenced by their attitude towards biology experiments in space.

  2. Multicomponent reactive transport modeling of uranium bioremediation field experiments

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Morrison, Stan J.; Amonette, James E.; Long, Philip E.

    2009-10-15

    Biostimulation field experiments with acetate amendment are being performed at a former uranium mill tailings site in Rifle, Colorado, to investigate subsurface processes controlling in situ bioremediation of uranium-contaminated groundwater. An important part of the research is identifying and quantifying field-scale models of the principal terminal electron-accepting processes (TEAPs) during biostimulation and the consequent biogeochemical impacts to the subsurface receiving environment. Integrating abiotic chemistry with the microbially mediated TEAPs in the reaction network brings into play geochemical observations (e.g., pH, alkalinity, redox potential, major ions, and secondary minerals) that the reactive transport model must recognize. These additional constraints provide for a more systematic and mechanistic interpretation of the field behaviors during biostimulation. The reaction network specification developed for the 2002 biostimulation field experiment was successfully applied without additional calibration to the 2003 and 2007 field experiments. The robustness of the model specification is significant in that 1) the 2003 biostimulation field experiment was performed with 3 times higher acetate concentrations than the previous biostimulation in the same field plot (i.e., the 2002 experiment), and 2) the 2007 field experiment was performed in a new unperturbed plot on the same site. The biogeochemical reactive transport simulations accounted for four TEAPs, two distinct functional microbial populations, two pools of bioavailable Fe(III) minerals (iron oxides and phyllosilicate iron), uranium aqueous and surface complexation, mineral precipitation, and dissolution. The conceptual model for bioavailable iron reflects recent laboratory studies with sediments from the Old Rifle Uranium Mill Tailings Remedial Action (UMTRA) site that demonstrated that the bulk (~90%) of Fe(III) bioreduction is associated with the phyllosilicates rather than the iron oxides.
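
    The microbially mediated TEAPs in such a reaction network are typically written as Monod-type rate laws. The following hedged single-process sketch shows the form for acetate oxidation coupled to bioavailable Fe(III) reduction; the rates, yields, stoichiometric shorthand, and initial conditions are invented, whereas the actual model couples four TEAPs, two microbial populations, geochemistry, and transport.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Dual-Monod kinetics: rate limited by both electron donor (acetate)
        # and electron acceptor (bioavailable Fe(III)). All values illustrative.
        mu_max, K_ac, K_fe, Y = 2.0, 0.1, 1.0, 0.05  # 1/d, mM, mM, g/mmol

        def rhs(t, y):
            ac, fe3, X = y  # acetate (mM), bioavailable Fe(III) (mM), biomass (g/l)
            r = mu_max * X * ac / (K_ac + ac) * fe3 / (K_fe + fe3)
            # 8 mol Fe(III) reduced per mol acetate oxidized (8-electron transfer)
            return [-r, -8.0 * r, Y * r]

        sol = solve_ivp(rhs, (0, 30), [3.0, 50.0, 0.01], rtol=1e-8)
        print(sol.y[:, -1])  # remaining acetate, Fe(III), and grown biomass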

  3. Industrial processing of complex fluids: Formulation and modeling

    SciTech Connect

    Scovel, J.C.; Bleasdale, S.; Forest, G.M.; Bechtel, S.

    1997-08-01

    The production of many important commercial materials involves the evolution of a complex fluid through a cooling phase into a hardened product. Textile fibers, high-strength fibers (KEVLAR, VECTRAN), plastics, chopped-fiber compounds, and fiber optical cable are such materials. Industry desires to replace experiments with on-line, real-time models of these processes. Solutions to the problems are not just a matter of technology transfer, but require a fundamental description and simulation of the processes. Goals of the project are to develop models that can be used to optimize macroscopic properties of the solid product, to identify sources of undesirable defects, and to seek boundary-temperature and flow-and-material controls to optimize desired properties.

  4. Graduate Social Work Education and Cognitive Complexity: Does Prior Experience Really Matter?

    ERIC Educational Resources Information Center

    Simmons, Chris

    2014-01-01

    This study examined the extent to which age, education, and practice experience among social work graduate students (N = 184) predicted cognitive complexity, an essential aspect of critical thinking. In the regression analysis, education accounted for more of the variance associated with cognitive complexity than age and practice experience. When…

  5. Clinical complexity in medicine: A measurement model of task and patient complexity

    PubMed Central

    Islam, R.; Weir, C.; Fiol, G. Del

    2016-01-01

    Summary Background Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods Three clinical Infectious Disease teams were observed, audio-recorded and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial coding process and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds in the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen’s kappa. Results The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
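
    Inter-rater reliability in the coding step was quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The self-contained sketch below shows the computation on invented category labels; the study's actual codes and transcripts are not reproduced here.

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
            assert len(rater_a) == len(rater_b)
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            ca, cb = Counter(rater_a), Counter(rater_b)
            expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n ** 2
            return (observed - expected) / (1.0 - expected)

        # toy coding of complexity attributes by two reviewers (hypothetical labels)
        a = ["task", "patient", "task", "task", "patient", "task"]
        b = ["task", "patient", "patient", "task", "patient", "task"]
        print(round(cohens_kappa(a, b), 3))  # 0.667 for this toy example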

  6. Modeling a Thermal Seepage Laboratory Experiment

    SciTech Connect

    Y. Zhang; J. Birkholzer

    2004-07-30

    A thermal seepage model has been developed to evaluate the potential for seepage into the waste emplacement drifts at the proposed high-level radioactive materials repository at Yucca Mountain when the rock is at elevated temperature. The coupled-process-model results show that no seepage occurs as long as the temperature at the drift wall is above boiling. This important result has been incorporated into the Total System Performance Assessment of Yucca Mountain. We have applied the same conceptual model to a laboratory heater experiment conducted by the Center for Nuclear Waste Regulatory Analyses. This experiment involves a fractured-porous rock system, composed of concrete slabs, heated by an electric heater placed in a 0.15 m diameter "drift". A substantial volume of water was released above the boiling zone over a time period of 135 days, giving rise to vaporization around the heat source. In this study, two basic conceptual models, similar to the thermal seepage models used in the Yucca Mountain Project, a dual-permeability model and an active-fracture model, are set up to predict evolution of temperature and saturation at the "drift" crown, and thereby to estimate potential for thermal seepage. Preliminary results from the model show good agreement with temperature profiles as well as with the potential seepage indicated in the lab experiments. These results build confidence in the thermal seepage models used in the Yucca Mountain Project. Different approaches are considered in our conceptual model to implement fracture-matrix interaction. Sensitivity analyses of fracture properties are conducted to help evaluation of uncertainty.

  7. The Effect of Complex Formation upon the Redox Potentials of Metallic Ions. Cyclic Voltammetry Experiments.

    ERIC Educational Resources Information Center

    Ibanez, Jorge G.; And Others

    1988-01-01

    Describes experiments in which students prepare in situ soluble complexes of metal ions with different ligands and observe and estimate the change in formal potential that the ion undergoes upon complexation. Discusses student formation and analysis of soluble complexes of two different metal ions with the same ligand. (CW)

  8. Complexation Effect on Redox Potential of Iron(III)-Iron(II) Couple: A Simple Potentiometric Experiment

    ERIC Educational Resources Information Center

    Rizvi, Masood Ahmad; Syed, Raashid Maqsood; Khan, Badruddin

    2011-01-01

    A titration curve with multiple inflection points results when a mixture of two or more reducing agents with sufficiently different reduction potentials are titrated. In this experiment iron(II) complexes are combined into a mixture of reducing agents and are oxidized to the corresponding iron(III) complexes. As all of the complexes involve the…

  9. IDMS: inert dark matter model with a complex singlet

    NASA Astrophysics Data System (ADS)

    Bonilla, Cesar; Sokolowska, Dorota; Darvishi, Neda; Diaz-Cruz, J. Lorenzo; Krawczyk, Maria

    2016-06-01

    We study an extension of the inert doublet model (IDM) that includes an extra complex singlet of scalar fields, which we call the IDMS. In this model there are three Higgs particles, among them a SM-like Higgs particle, and the lightest neutral scalar, from the inert sector, remains a viable dark matter (DM) candidate. We assume a non-zero complex vacuum expectation value for the singlet, so that the visible sector can introduce extra sources of CP violation. We construct the scalar potential of the IDMS, assuming an exact Z2 symmetry, with the new singlet being Z2-even, as well as a softly broken U(1) symmetry, which allows a reduced number of free parameters in the potential. In this paper we explore the foundations of the model, in particular the masses and interactions of scalar particles for a few benchmark scenarios. Constraints from collider physics, in particular from the Higgs signal observed at the Large Hadron Collider with Mh ≈ 125 GeV, as well as constraints from DM experiments, such as relic density measurements and direct detection limits, are included in the analysis. We observe significant differences with respect to the IDM in relic density values from additional annihilation channels, interference and resonance effects due to the extended Higgs sector.

  10. Overload depending on driving experience and situation complexity: Which strategies faced with a pedestrian crossing?

    PubMed

    Paxion, Julie; Galy, Edith; Berthelon, Catherine

    2015-11-01

    The purpose of this study was to identify the influence of situation complexity and driving experience on subjective workload and driving performance, and to identify the least costly and most effective strategies when facing a pedestrian-crossing hazard. Four groups of young drivers (15 traditionally trained novices, 12 early-trained novices, 15 with three years of experience and 15 with a minimum of five years of experience) were randomly assigned to three situations (simple, moderately complex and very complex) including unexpected pedestrian crossings, in a driving simulator. The subjective workload was collected with the NASA-TLX questionnaire after each situation. The main results confirmed that situation complexity and lack of experience increased the subjective workload. Moreover, the subjective workload, the avoidance strategies and the reaction times influenced the number of collisions, depending on situation complexity and driving experience. These results must be taken into account to target prevention actions.

  11. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
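
    As a hedged illustration of the modeling idea (not the LLNL data or code), the sketch below trains a scikit-learn random forest on synthetic data and compares predictive R² when using the full vertical wind profile plus a stability variable against hub-height speed alone; the variable names, the synthetic power relation, and all numbers are invented.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        # synthetic predictors: hub-height speed, speeds below/above hub, a stability proxy
        hub = rng.weibull(2.0, n) * 8.0
        low = hub * rng.uniform(0.8, 1.0, n)
        high = hub * rng.uniform(1.0, 1.2, n)
        stability = rng.normal(0.0, 1.0, n)
        X = np.column_stack([hub, low, high, stability])
        # toy power curve: cubic in wind speed, capped, modulated by stability, plus noise
        power = np.clip(0.5 * hub ** 3, 0, 1500) * (1 + 0.05 * stability) \
                + rng.normal(0, 20, n)

        X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)
        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
        print("R^2, full profile + stability:", rf.score(X_te, y_te))
        rf_hub = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr[:, :1], y_tr)
        print("R^2, hub-height speed only:  ", rf_hub.score(X_te, y_te))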

  12. A Computer Simulated Experiment in Complex Order Kinetics

    ERIC Educational Resources Information Center

    Merrill, J. C.; And Others

    1975-01-01

    Describes a computer simulation experiment in which physical chemistry students can determine all of the kinetic parameters of a reaction, such as order of the reaction with respect to each reagent, forward and reverse rate constants for the overall reaction, and forward and reverse activation energies. (MLH)

  13. Modeling the propagation of mobile phone virus under complex network.

    PubMed

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei; Yao, Yu

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modeled to understand how particular factors affect it and to design effective containment strategies that suppress such viruses. Two different propagation models of mobile phone viruses under a complex network are proposed. One is intended to describe the propagation of the user-tricking virus, and the other the propagation of the vulnerability-exploiting virus. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted. Through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively.
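
    A common way to fold complex-network topology into such compartmental models is degree-based mean-field theory. The sketch below applies it to an SIS-type process on a scale-free contact graph; the paper's two models (user-tricking and vulnerability-exploiting) differ in detail, and the P(k) ∝ k^-3 degree distribution and rate constants here are invented.

        import numpy as np
        from scipy.integrate import solve_ivp

        k = np.arange(1, 51)                     # degree classes
        Pk = k ** -3.0 / np.sum(k ** -3.0)       # toy scale-free degree distribution
        beta, delta = 0.05, 0.2                  # per-contact infection rate, recovery rate

        def rhs(t, rho):                         # rho[k]: infected fraction, degree k
            # probability that a randomly followed contact leads to an infected phone
            theta = np.sum(k * Pk * rho) / np.sum(k * Pk)
            return beta * k * (1.0 - rho) * theta - delta * rho

        sol = solve_ivp(rhs, (0, 200), 1e-3 * np.ones(len(k)), rtol=1e-8)
        print("endemic infected fraction:", np.sum(Pk * sol.y[:, -1]))

    Setting the right-hand side to zero and linearizing around rho = 0 recovers the usual heterogeneous-mean-field epidemic threshold, the analogue of the infection-free equilibrium and stability condition derived in the paper.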

  14. Modeling the Propagation of Mobile Phone Virus under Complex Network

    PubMed Central

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modeled to understand how particular factors affect it and to design effective containment strategies that suppress such viruses. Two different propagation models of mobile phone viruses under a complex network are proposed. One is intended to describe the propagation of the user-tricking virus, and the other the propagation of the vulnerability-exploiting virus. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted. Through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively. PMID:25133209

  15. Anomalous sea surface reverberation scale model experiments.

    PubMed

    Neighbors, T H; Bjørnø, L

    2006-12-22

    Low-frequency sea-surface sound backscattering (approximately 100 Hz to a few kHz), observed from the broadband explosive-charge measurements of the 1960s through the Critical Sea Test measurements conducted in the 1990s, is substantially higher than rough-sea-surface scattering theory can explain. Alternative theories for explaining this difference range from scattering by bubble plumes/clouds formed by breaking waves to stochastic scattering from fluctuating bubble layers near the sea surface. In each case, theories focus on reverberation in the absence of the large-scale surface wave height fluctuations that are characteristic of a sea that produces bubble clouds and plumes. At shallow grazing angles, shadowing of bubble plumes and clouds caused by surface wave height fluctuations may induce first-order changes in the backscattered signal strength. To understand the magnitude of shadowing effects under controlled and repeatable conditions, scale model experiments were performed in a 3 m x 1.5 m x 1.5 m tank at the Technical University of Denmark. The experiments used a 1 MHz transducer as the source and receiver, a computer-controlled data acquisition system, a scale model target, and a surface wave generator. The scattered signal strength fluctuations observed at shallow angles are characteristic of the predicted ocean environment. These experiments demonstrate that shadowing has a first-order impact on bubble plume and cloud scattering strength and emphasize the usefulness of model-scale experiments for studying underwater acoustic events under controlled conditions.

  16. Data production models for the CDF experiment

    SciTech Connect

    Antos, J.; Babik, M.; Benjamin, D.; Cabrera, S.; Chan, A.W.; Chen, Y.C.; Coca, M.; Cooper, B.; Genser, K.; Hatakeyama, K.; Hou, S.; Hsieh, T.L.; Jayatilaka, B.; Kraan, A.C.; Lysak, R.; Mandrichenko, I.V.; Robson, A.; Siket, M.; Stelzer, B.; Syu, J.; Teng, P.K.

    2006-06-01

    The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm was migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.

  17. Model-scale sound propagation experiment

    NASA Technical Reports Server (NTRS)

    Willshire, William L., Jr.

    1988-01-01

    The results of a scale model propagation experiment to investigate grazing propagation above a finite impedance boundary are reported. In the experiment, a 20 x 25 ft ground plane was installed in an anechoic chamber. Propagation tests were performed over the plywood surface of the ground plane and with the ground plane covered with felt, styrofoam, and fiberboard. Tests were performed with discrete tones in the frequency range of 10 to 15 kHz. The acoustic source and microphones varied in height above the test surface from flush to 6 in. Microphones were located in a linear array up to 18 ft from the source. A preliminary experiment using the same ground plane, but only testing the plywood and felt surfaces was performed. The results of this first experiment were encouraging, but data variability and repeatability were poor, particularly, for the felt surface, making comparisons with theoretical predictions difficult. In the main experiment the sound source, microphones, microphone positioning, data acquisition, quality of the anechoic chamber, and environmental control of the anechoic chamber were improved. High-quality, repeatable acoustic data were measured in the main experiment for all four test surfaces. Comparisons with predictions are good, but limited by uncertainties of the impedance values of the test surfaces.

  18. Model-scale sound propagation experiment

    NASA Astrophysics Data System (ADS)

    Willshire, William L., Jr.

    1988-04-01

    The results of a scale model propagation experiment to investigate grazing propagation above a finite impedance boundary are reported. In the experiment, a 20 x 25 ft ground plane was installed in an anechoic chamber. Propagation tests were performed over the plywood surface of the ground plane and with the ground plane covered with felt, styrofoam, and fiberboard. Tests were performed with discrete tones in the frequency range of 10 to 15 kHz. The acoustic source and microphones varied in height above the test surface from flush to 6 in. Microphones were located in a linear array up to 18 ft from the source. A preliminary experiment using the same ground plane, but only testing the plywood and felt surfaces was performed. The results of this first experiment were encouraging, but data variability and repeatability were poor, particularly, for the felt surface, making comparisons with theoretical predictions difficult. In the main experiment the sound source, microphones, microphone positioning, data acquisition, quality of the anechoic chamber, and environmental control of the anechoic chamber were improved. High-quality, repeatable acoustic data were measured in the main experiment for all four test surfaces. Comparisons with predictions are good, but limited by uncertainties of the impedance values of the test surfaces.

  19. Sex differences in dental caries experience: clinical evidence, complex etiology.

    PubMed

    Lukacs, John R

    2011-10-01

    A sex difference in oral health has been widely documented through time and across cultures. Women's oral health declines more rapidly than men's with the onset of agriculture and the associated rise in fertility. The magnitude of this disparity in oral health by sex increases during ontogeny: from childhood, to adolescence, and through the reproductive years. Representative studies of sex differences in caries, tooth loss, and periodontal disease are critically reviewed. Surveys conducted in Hungary, India, and in an isolated traditional Brazilian sample provide additional support for a significant sex bias in dental caries, especially in mature adults. Compounding hormonal and reproductive factors, the sex difference in oral health in India appears to involve social and religious causes such as son preference, ritual fasting, and dietary restrictions during pregnancy. Like the sex difference in caries, tooth loss in women is greater than in men and has been linked to caries and parity. Results of genome-wide association studies have found caries-susceptible and caries-protective loci that influence variation in taste, saliva, and enamel proteins, affecting the oral environment and the micro-structure of enamel. Genetic variation, some of which is X-linked, may partly explain how sex differences in oral health originate. A primary, but neglected, factor in explaining the sex differential in oral health is the complex and synergistic changes associated with female sex hormones, pregnancy, and women's reproductive life history. Caries etiology is complex and impacts understanding of the sex difference in oral health. Both biological factors (genetics, hormones, and reproductive history) and anthropological (behavioral) factors, such as culture-based division of labor and gender-based dietary preferences, play a role.

  20. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    SciTech Connect

    Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (~ a few Mbar) and high temperatures (20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.
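
    For reference, the standard (unmodified) Murnaghan pressure-volume relation that underlies such an unreacted-explosive equation of state can be written in a few lines; the report uses a modified form with constants calibrated to LX-17, so the bulk modulus values below are placeholders only.

        def murnaghan_pressure(v, v0=1.0, b0=12.0, b0p=8.0):
            # Standard Murnaghan EOS: P(V) = (B0 / B0') * [ (V0 / V)**B0' - 1 ],
            # with B0 the bulk modulus and B0' its pressure derivative.
            # b0 = 12.0 (GPa) and b0p = 8.0 are illustrative, not LX-17 values.
            return (b0 / b0p) * ((v0 / v) ** b0p - 1.0)

        # pressure rise under 20% compression (in GPa if b0 is given in GPa)
        print(murnaghan_pressure(0.8))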

  1. Analysis of Complex Intervention Effects in Time-Series Experiments.

    ERIC Educational Resources Information Center

    Bower, Cathleen

    An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…

  2. Modeling Complex Workflow in Molecular Diagnostics

    PubMed Central

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  3. Anatomy of a Metamorphic Core Complex: Preliminary Results of Ruby Mountains Seismic Experiment, Northeastern Nevada

    NASA Astrophysics Data System (ADS)

    Schiltz, K. K.; Litherland, M.; Klemperer, S. L.

    2010-12-01

    The Ruby Mountains Seismic Experiment is a 50-station deployment of Earthscope’s Flexible Array installed in June 2010 to study the Ruby Mountain metamorphic core complex, northeastern Nevada. Competing theories of metamorphic core complexes stress the importance of either (1) low-angle detachment faulting and lateral crustal flow, likely leading to horizontal shearing and anisotropy, or (2) vertical diapirism creating dominantly vertical shearing and anisotropy. Our experiment aims to distinguish between these two hypotheses using densely spaced (5 to 10 km) broadband seismometers along two WNW-ESE transects across the Ruby Range and one NNE-SSW transect along the axis of the range. When data acquisition is complete we will image crustal structures and measure velocity and anisotropy with a range of receiver function, shear-wave splitting and surface-wave tomographic methods. In addition to the newly acquired data, existing data can also be used to build understanding of the region. Previous regional studies have interpreted shear-wave splitting in terms of single-layer anisotropy in the mantle, related to a complex flow structure, but previous controlled source studies have identified measurable crustal anisotropy. We therefore attempted to fit existing data to a two-layer model consisting of a weakly anisotropic crustal layer and a more dominant mantle layer. We used “SplitLab” to measure apparent splitting parameters from ELK (a USGS permanent station) and 3 Earthscope Transportable Array stations. There is a clear variation in the splitting parameters with back-azimuth, but existing data do not provide a stable inversion for a two-layer model. Our best forward-model solution is a crustal layer with a fast axis orientation of 357° and 0.3 second delay time and a mantle layer with a 282° fast axis and 1.3 s delay time. Though the direction of the fast axis is consistent with previously published regional results, the 1.3 s delay time is larger than

  4. Background modeling for the GERDA experiment

    SciTech Connect

    Becerici-Schmidt, N.; GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  5. Background modeling for the GERDA experiment

    NASA Astrophysics Data System (ADS)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  6. Simulation model for the closed plant experiment facility of CEEF

    NASA Astrophysics Data System (ADS)

    Abe, Koichi; Ishikawa, Yoshio; Kibe, Seishiro; Nitta, Keiji

    The Closed Ecology Experiment Facilities (CEEF) is a testbed for Controlled Ecological Life Support Systems (CELSS) investigations. CEEF, including the physico-chemical material regenerative system, has been constructed for experiments on material circulation among the plants, breeding animals, and crew of CEEF. Because CEEF is a complex system, an appropriate schedule for its operation must be prepared in advance. The CEEF behavioral Prediction System (CPS), which will help to confirm the operation schedule, is under development. CPS will simulate CEEF's behavior using data (conditions of equipment, quantity of materials in tanks, etc.) from CEEF and an operation schedule prepared by the operation team every day, before the schedule is carried out. The result of the simulation will show whether the operation schedule is appropriate or not. In order to realize CPS, the models in the simulation program installed in CPS must mirror the real facilities of CEEF. As a first step of development, a flexible algorithm for the simulation program was investigated. The next step was the development of a replicate simulation model of the material circulation system for the Closed Plant Experiment Facility (CPEF), which is a part of CEEF. All the parts of the real material circulation system for CPEF are connected together and work as a complex mechanism. In the simulation model, the system was separated into 38 units according to its operational segmentation. In order to develop each model for its corresponding unit, specifications for the model were fixed based on the specifications of the real part. These models were then put together into a simulation model for the system.

  7. Impact polymorphs of quartz: experiments and modelling

    NASA Astrophysics Data System (ADS)

    Price, M. C.; Dutta, R.; Burchell, M. J.; Cole, M. J.

    2013-09-01

    We have used the light gas gun at the University of Kent to perform a series of impact experiments firing quartz projectiles onto metal, quartz and sapphire targets. The aim is to quantify the amount of any high-pressure quartz polymorphs produced, and to use these data to develop our hydrocode modelling to enable prediction of the quantity of polymorphs produced during a planetary-scale impact.

  8. Specifying and Refining a Complex Measurement Model.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…

  9. Dispersion Modeling in Complex Urban Systems

    EPA Science Inventory

    Models are used to represent real systems in an understandable way. They take many forms. A conceptual model explains the way a system works. In environmental studies, for example, a conceptual model may delineate all the factors and parameters for determining how a particle move...

  10. Data Assimilation and Model Evaluation Experiment Datasets.

    NASA Astrophysics Data System (ADS)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of the DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggested uses of the DAMEE data include 1) ocean modeling and data assimilation studies, 2) diagnostic and theoretical studies, and 3) comparisons with locally detailed observations.

  11. Design of Experiments, Model Calibration and Data Assimilation

    SciTech Connect

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
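
    A compact sketch of the emulation workflow described above, assuming scipy's qmc module and scikit-learn's Gaussian process regressor stand in for production tools: a space-filling Latin hypercube design is run through a toy "simulator" and a kriging emulator is fit to the resulting runs. The simulator, input bounds, and kernel length-scales are all invented for illustration.

        import numpy as np
        from scipy.stats import qmc
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        # Space-filling design: a centered-discrepancy-optimized Latin hypercube
        sampler = qmc.LatinHypercube(d=3, optimization="random-cd", seed=0)
        lb, ub = [0.0, 0.0, 0.0], [1.0, 10.0, 5.0]
        X = qmc.scale(sampler.random(n=40), lb, ub)

        def simulator(x):  # stand-in for an expensive computer model
            return np.sin(6.0 * x[:, 0]) + 0.1 * x[:, 1] + 0.05 * x[:, 2] ** 2

        y = simulator(X)
        # Kriging emulator (GP regression) fit to the 40 design runs
        gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2, 2.0, 1.0]),
                                      normalize_y=True).fit(X, y)
        X_new = qmc.scale(sampler.random(n=5), lb, ub)
        mean, sd = gp.predict(X_new, return_std=True)  # prediction with uncertainty
        print(mean, sd)

    The emulator's predictive standard deviation is what makes it useful downstream: MCMC-based calibration can query it cheaply in place of the simulator while still accounting for emulation uncertainty.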

  12. Using Models to Inform Policy: Insights from Modeling the Complexities of Global Polio Eradication

    NASA Astrophysics Data System (ADS)

    Thompson, Kimberly M.

    Drawing on over 20 years of experience modeling risks in complex systems, this talk will challenge SBP participants to develop models that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated models for complex problems and communicating their results effectively. Dr. Thompson will focus largely on collaborative modeling related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. The talk will present insights from efforts to use integrated dynamic, probabilistic risk, decision, and economic models to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission model combined with probabilistic model inputs that characterize uncertainty, for a world stratified to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between modelers and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful models that actually get used.

  13. Epidemiological models of Mycobacterium tuberculosis complex infections.

    PubMed

    Ozcaglar, Cagri; Shabbeer, Amina; Vandenberg, Scott L; Yener, Bülent; Bennett, Kristin P

    2012-04-01

    The resurgence of tuberculosis in the 1990s and the emergence of drug-resistant tuberculosis in the first decade of the 21st century increased the importance of epidemiological models for the disease. Due to the slow progression of tuberculosis, the transmission dynamics and its long-term effects can often be better observed and predicted using simulations of epidemiological models. This study provides a review of earlier studies on modeling different aspects of tuberculosis dynamics. The models simulate tuberculosis transmission dynamics, treatment, drug resistance, control strategies for increasing compliance to treatment, HIV/TB co-infection, and patient groups. The models are based on various mathematical systems, such as systems of ordinary differential equations, simulation models, and Markov Chain Monte Carlo methods. The inferences from the models are justified by case studies and statistical analysis of TB patient datasets. PMID:22387570
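
    To make the ODE-based modeling approach concrete, here is a minimal sketch of the kind of compartmental system such reviews survey: a susceptible-latent-infectious model with fast and slow progression. The structure is generic and every rate value is an illustrative assumption, not a parameter from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal susceptible-latent-infectious TB model (illustrative rates,
# per year): beta = transmission, p = fast-progression fraction,
# nu = slow reactivation, gamma = treatment/recovery, mu = mortality.
beta, p, nu, gamma, mu = 8.0, 0.1, 0.002, 0.5, 0.02

def tb_rhs(t, y):
    S, L, I = y
    N = S + L + I
    infection = beta * S * I / N
    dS = mu * N - infection - mu * S
    # Treated cases return to the latent class (illustrative simplification).
    dL = (1 - p) * infection + gamma * I - (nu + mu) * L
    dI = p * infection + nu * L - (gamma + mu) * I
    return [dS, dL, dI]

sol = solve_ivp(tb_rhs, (0, 50), [1e6 - 10, 0.0, 10.0], dense_output=True)
print(sol.y[:, -1])  # compartment sizes after 50 years
```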

  14. Disentangling nestedness from models of ecological complexity.

    PubMed

    James, Alex; Pitchford, Jonathan W; Plank, Michael J

    2012-07-12

    Complex networks of interactions are ubiquitous and are particularly important in ecological communities, in which large numbers of species exhibit negative (for example, competition or predation) and positive (for example, mutualism) interactions with one another. Nestedness in mutualistic ecological networks is the tendency for ecological specialists to interact with a subset of species that also interact with more generalist species. Recent mathematical and computational analysis has suggested that such nestedness increases species richness. By examining previous results and applying computational approaches to 59 empirical data sets representing mutualistic plant–pollinator networks, we show that this statement is incorrect. A simpler metric—the number of mutualistic partners a species has—is a much better predictor of individual species survival and hence, community persistence. Nestedness is, at best, a secondary covariate rather than a causative factor for biodiversity in mutualistic communities. Analysis of complex networks should be accompanied by analysis of simpler, underpinning mechanisms that drive multiple higher-order network properties.
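
    The "simpler metric" the authors favor is just a species' degree in the bipartite interaction network. A minimal sketch of computing it from a binary plant-pollinator matrix follows; the toy matrix is an illustrative assumption, not one of the 59 empirical data sets.

```python
import numpy as np

# Toy binary plant-pollinator interaction matrix (rows = plants,
# columns = pollinators); a 1 means the pair interacts.
A = np.array([[1, 1, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])

plant_partners = A.sum(axis=1)       # mutualistic partners per plant
pollinator_partners = A.sum(axis=0)  # mutualistic partners per pollinator
print(plant_partners, pollinator_partners)
```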

  15. Design of experiments and springback prediction for AHSS automotive components with complex geometry

    NASA Astrophysics Data System (ADS)

    Asgari, A.; Pereira, M.; Rolfe, B.; Dingle, M.; Hodgson, P.

    2005-08-01

    With the drive towards implementing Advanced High Strength Steels (AHSS) in the automotive industry, stamping engineers need to quickly answer questions about forming these strong materials into elaborate shapes. Commercially available codes have been successfully used to accurately predict formability, thickness and strains in complex parts. However, springback and twisting are still challenging subjects in numerical simulations of AHSS components. Design of Experiments (DOE) has been used in this paper to study the sensitivity of the implicit and explicit numerical results with respect to certain arrays of user input parameters in the forming of an AHSS component. Numerical results were compared to experimental measurements of the parts stamped in an industrial production line. The forming predictions of the implicit and explicit codes were in good agreement with the experimental measurements for the conventional steel grade, while lower accuracies were observed for the springback predictions. The forming predictions of the complex component with an AHSS material were also in good agreement with the respective experimental measurements. However, much lower accuracies were observed in its springback predictions. The number of integration points through the thickness and the tool offset were found to be of significant importance, while the coefficient of friction and Young's modulus (modeling input parameters) had no significant effect on the accuracy of the predictions for the complex geometry.
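
    For readers unfamiliar with DOE terminology, the sketch below shows a two-level full factorial design over three coded inputs and the usual main-effect estimate (mean response at the high level minus mean at the low level). The factor names and the stand-in response function are illustrative assumptions, not the paper's finite-element setup.

```python
import itertools
import numpy as np

# Two-level full factorial over three illustrative inputs (coded -1/+1):
# integration points through thickness, tool offset, friction coefficient.
factors = ["n_integration_points", "tool_offset", "friction"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

def run_simulation(x):
    # Placeholder for the FE code: a springback response with strong
    # effects of the first two factors and almost none of friction.
    return 5.0 + 1.2 * x[0] + 0.8 * x[1] + 0.05 * x[2]

response = np.array([run_simulation(x) for x in design])

# Main effect of each factor: mean(high level) - mean(low level).
for i, name in enumerate(factors):
    effect = (response[design[:, i] == 1].mean()
              - response[design[:, i] == -1].mean())
    print(f"{name}: {effect:+.2f}")
```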

  16. Studying complex chemistries using PLASIMO's global model

    NASA Astrophysics Data System (ADS)

    Koelman, PMJ; Tadayon Mousavi, S.; Perillo, R.; Graef, WAAD; Mihailova, DB; van Dijk, J.

    2016-02-01

    The PLASIMO simulation software is used to construct a global model of a CO2 plasma. A DBD plasma between two coaxial cylinders, driven by a triangular input power pulse, is considered. The plasma chemistry is studied during this power pulse and in the afterglow. The model consists of 71 species that interact through 3500 reactions. Preliminary results from the model are presented. The model has been validated by comparing its results with those presented in Kozák et al. (Plasma Sources Science and Technology 23(4) p. 045004, 2014). Good qualitative agreement has been reached; potential sources of the remaining discrepancies are extensively discussed.

  17. Wind Tunnel Modeling Of Wind Flow Over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Banks, D.; Cochran, B.

    2010-12-01

    This presentation will describe the findings of an atmospheric boundary layer (ABL) wind tunnel study conducted as part of the Bolund Experiment. This experiment was sponsored by Risø DTU (National Laboratory for Sustainable Energy, Technical University of Denmark) during the fall of 2009 to enable a blind comparison of various air flow models in an attempt to validate their performance in predicting airflow over complex terrain. Bolund Hill sits 12 m above the water level at the end of a narrow isthmus. The island features a steep escarpment on one side, over which the airflow can be expected to separate. The island was equipped with several anemometer towers, and the approach flow over the water was well characterized. This study was one of only two physical model studies included in the blind model comparison, the other being a water plume study. The remainder were computational fluid dynamics (CFD) simulations, including both RANS and LES. Physical modeling of air flow over topographical features has been used since the middle of the 20th century, and the methods required are well understood and well documented. Several books have been written describing how to properly perform ABL wind tunnel studies, including ASCE manual of engineering practice 67. Boundary layer wind tunnel tests are the only modeling method deemed acceptable in ASCE 7-10, the most recent edition of the American Society of Civil Engineers standard that provides wind loads for buildings and other structures for building codes across the US. Since the 1970s, most tall structures have undergone testing in a boundary layer wind tunnel to accurately determine the wind-induced loading. When compared to CFD, the US EPA considers a properly executed wind tunnel study to be equivalent to a CFD model with infinitesimal grid resolution and near-infinite memory. One key reason for this widespread acceptance is that properly executed ABL wind tunnel studies will accurately simulate flow separation

  18. Ballistic Response of Fabrics: Model and Experiments

    NASA Astrophysics Data System (ADS)

    Orphal, Dennis L.; Walker Anderson, James D., Jr.

    2001-06-01

    Walker (1999) developed an analytical model for the dynamic response of fabrics to ballistic impact. From this model, the force F applied to the projectile by the fabric is derived to be F = (8/9) E T* h^3 / R^2, where E is the Young's modulus of the fabric, T* is the "effective thickness" of the fabric, equal to the ratio of the areal density of the fabric to the fiber density, h is the displacement of the fabric on the axis of impact, and R is the radius of the fabric deformation or "bulge". Ballistic tests against Zylon^TM fabric have been performed to measure h and R as functions of time. The results of these experiments are presented and analyzed in the context of the Walker model. Walker (1999), Proceedings of the 18th International Symposium on Ballistics, p. 1231.
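
    Evaluating the quoted force law is straightforward; the sketch below computes F = (8/9) E T* h^3 / R^2 in SI units. The material and geometry values are illustrative Zylon-like assumptions, not the measured quantities from these tests.

```python
# Walker (1999) fabric force on the projectile: F = (8/9) * E * T* * h^3 / R^2.
E = 180e9               # fiber Young's modulus, Pa (illustrative)
areal_density = 0.2     # fabric areal density, kg/m^2 (illustrative)
fiber_density = 1560.0  # fiber density, kg/m^3 (illustrative)
T_star = areal_density / fiber_density  # "effective thickness", m

h = 0.01  # on-axis fabric displacement, m
R = 0.05  # bulge radius, m

F = (8.0 / 9.0) * E * T_star * h**3 / R**2
print(f"F = {F:.1f} N")
```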

  19. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

  20. Information, complexity and efficiency: The automobile model

    SciTech Connect

Allenby, B.

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

  1. Process modelling for Space Station experiments

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Rosenberger, Franz; Nadarajah, Arunan; Ouazzani, Jalil; Amiroudine, Sakir

    1990-01-01

    Examined here is the sensitivity of a variety of space experiments to residual accelerations. In all the cases discussed the sensitivity is related to the dynamic response of a fluid. In some cases the sensitivity can be defined by the magnitude of the response of the velocity field. This response may involve motion of the fluid associated with internal density gradients, or the motion of a free liquid surface. For fluids with internal density gradients, the type of acceleration to which the experiment is sensitive will depend on whether buoyancy driven convection must be small in comparison to other types of fluid motion, or fluid motion must be suppressed or eliminated. In the latter case, the experiments are sensitive to steady and low frequency accelerations. For experiments such as the directional solidification of melts with two or more components, determination of the velocity response alone is insufficient to assess the sensitivity. The effect of the velocity on the composition and temperature field must be considered, particularly in the vicinity of the melt-crystal interface. As far as the response to transient disturbances is concerned, the sensitivity is determined by both the magnitude and frequency of the acceleration and the characteristic momentum and solute diffusion times. The microgravity environment, a numerical analysis of low gravity tolerance of the Bridgman-Stockbarger technique, and modeling crystal growth by physical vapor transport in closed ampoules are discussed.

  2. Sensitivity Analysis in Complex Plasma Chemistry Models

    NASA Astrophysics Data System (ADS)

    Turner, Miles

    2015-09-01

    The purpose of a plasma chemistry model is the prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large, a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure, the so-called Morris method, to identify sensitive rate constants. We investigate the topical example of helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 ``Biomedical Applications of Atmospheric Pressure Plasmas.''
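
    As a sketch of the screening idea, the code below implements a simplified radial (one-at-a-time) variant of Morris elementary effects in plain numpy: repeated base points, one perturbation per input, and the mean absolute effect mu* as the influence ranking. It is not the authors' implementation, and the toy model standing in for the chemistry is an assumption.

```python
import numpy as np

def morris_mu_star(model, k, r=20, delta=0.5, rng=None):
    """Radial one-at-a-time Morris screening on the unit hypercube.

    Returns mu* (mean absolute elementary effect) for each of k inputs,
    estimated from r randomized base points.
    """
    rng = np.random.default_rng(rng)
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        y0 = model(x)
        for i in rng.permutation(k):
            x_step = x.copy()
            x_step[i] += delta
            effects[t, i] = (model(x_step) - y0) / delta
    return np.abs(effects).mean(axis=0)

# Toy stand-in for a chemistry model: only inputs 0 and 1 matter.
model = lambda x: 10 * x[0] + 5 * x[1] ** 2 + 0.01 * x[2:].sum()
mu_star = morris_mu_star(model, k=10, rng=1)
print(np.argsort(mu_star)[::-1])  # inputs ranked by influence
```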

  3. Modeling Power Systems as Complex Adaptive Systems

    SciTech Connect

    Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  4. A resistive force model for complex intrusion in granular media

    NASA Astrophysics Data System (ADS)

    Zhang, Tingnan; Li, Chen; Goldman, Daniel

    2012-11-01

    Intrusion forces in granular media (GM) are best understood for simple shapes (like disks and rods) undergoing vertical penetration and horizontal drag. Inspired by a resistive force theory for sand-swimming, we develop a new two-dimensional resistive force model for intruders of arbitrary shape and intrusion path moving through GM in the vertical plane. We divide an intruder of complex geometry into small segments and approximate the segmental forces by measuring forces on small flat plates in experiments. Both lift and drag forces on the plates are proportional to penetration depth, and depend sensitively on the angle of attack and the direction of motion. Summation of the segmental forces over the intruder predicts the net forces on a c-leg, a flat leg, and a reversed c-leg rotated into GM about a fixed axle. The stress profiles are similar for GM of different particle sizes, densities, coefficients of friction, and volume fractions, and we propose a universal scaling law applicable to all tested GM. By combining the new force model with a multi-body simulator, we can also predict the locomotion dynamics of a small legged robot on GM. Our force laws can provide a strict test of hydrodynamic-like approaches to modeling dense granular flows.
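
    A minimal sketch of the segment-summation idea: divide the intruder into segments, scale a depth-proportional segment stress by the segment area, and sum. The alpha functions below are illustrative stand-ins for the empirically measured plate-stress data (as functions of attack angle beta and motion angle gamma), not the authors' fits.

```python
import numpy as np

# Illustrative stress-per-depth functions (units: N per m^2 per m of depth).
def alpha_z(beta, gamma):
    return 2.0e5 * np.sin(beta) * np.cos(gamma)

def alpha_x(beta, gamma):
    return 1.0e5 * np.sin(beta) * np.sin(gamma)

def net_force(segments):
    """segments: iterable of (depth, area, beta, gamma) tuples.

    Segment force = stress(angle) * depth * area, summed over the intruder.
    """
    Fx = sum(alpha_x(b, g) * z * dA for z, dA, b, g in segments)
    Fz = sum(alpha_z(b, g) * z * dA for z, dA, b, g in segments)
    return Fx, Fz

# Flat plate split into 10 segments, 5 cm deep, fixed attack/motion angles.
segs = [(0.05, 1e-4, np.radians(30), np.radians(10)) for _ in range(10)]
print(net_force(segs))  # (drag, lift) in N
```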

  5. Integrated Modeling of Complex Optomechanical Systems

    NASA Astrophysics Data System (ADS)

    Andersen, Torben; Enmark, Anita

    2011-09-01

    Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons: first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time, mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top-ranked specialists found their way to Kiruna, and we believe that these proceedings will prove valuable during much future work.

  6. Comparison of Thermal Modeling Approaches for Complex Measurement Equipment

    NASA Astrophysics Data System (ADS)

    Schalles, M.; Thewes, R.

    2014-04-01

    Thermal modeling is used for thermal investigation and optimization of sensors, instruments, and structures. Here, results depend on the chosen modeling approach, the complexity of the model, the quality of material data, and the information about the heat transport conditions of the object of investigation. Despite the widespread application, the advantages and limits of the modeling approaches are partially unknown. For comparison of different modeling approaches, a simplified and analytically describable demonstration object is used. This object is a steel rod at well-defined heat exchange conditions with the environment. For this, analytically describable models, equivalent electrical circuits, and simple and complex finite-element-analysis models are presented. Using the different approaches, static and dynamic simulations are performed and temperatures and temperature fields in the rod are estimated. The results of those calculations, comparisons with measurements, and identification of the sensitive points of the approaches are shown. General conclusions for thermal modeling of complex equipment are drawn.
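
    As a concrete example of the simplest numerical approach compared in such studies, here is an explicit finite-difference model of a rod losing heat to the surroundings by convection. The geometry and material values below are illustrative assumptions for a steel rod, not the parameters of the paper's demonstration object.

```python
import numpy as np

# Explicit 1-D finite-difference model of a convectively cooled steel rod.
L, n = 0.2, 50                      # rod length (m), number of nodes
dx = L / (n - 1)
k, rho, cp = 45.0, 7850.0, 470.0    # conductivity, density, heat capacity
h, P, A = 15.0, 0.0314, 7.85e-5     # conv. coeff (W/m^2K); perimeter and
                                    # cross-section of a 1 cm diameter rod
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha            # stable explicit time step

T = np.full(n, 20.0)
T[0] = 100.0                        # heated end held at 100 C
T_env = 20.0

for _ in range(20000):
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    conv = h * P * (T[1:-1] - T_env) / (rho * cp * A)
    T[1:-1] += dt * (alpha * lap - conv)
    T[-1] = T[-2]                   # insulated tip
print(T[::10])                      # near-steady temperature profile samples
```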

  7. Modeling complex systems in the geosciences

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2013-03-01

    Many geophysical phenomena can be described as complex systems, involving phenomena such as extreme or "wild" events that often do not follow the Gaussian distribution that would be expected if the events were simply random and uncorrelated. For instance, some geophysical phenomena like earthquakes show a much higher occurrence of relatively large values than would a Gaussian distribution and so are examples of the "Noah effect" (named by Benoit Mandelbrot for the exceptionally heavy rain in the biblical flood). Other geophysical phenomena are examples of the "Joseph effect," in which a state is especially persistent, such as a spell of multiple consecutive hot days (heat waves) or several dry summers in a row. The Joseph effect was named after the biblical story in which Joseph's dream of seven fat cows and seven thin ones predicted 7 years of plenty followed by 7 years of drought.

  8. How to teach friction: Experiments and models

    NASA Astrophysics Data System (ADS)

    Besson, Ugo; Borghi, Lidia; De Ambrosis, Anna; Mascheretti, Paolo

    2007-12-01

    Students generally have difficulty understanding friction and its associated phenomena. High school and introductory college-level physics courses usually do not give the topic the attention it deserves. We have designed a sequence for teaching about friction between solids based on a didactic reconstruction of the relevant physics, as well as research findings about student conceptions. The sequence begins with demonstrations that illustrate different types of friction. Experiments are subsequently performed to motivate students to obtain quantitative relations in the form of phenomenological laws. To help students understand the mechanisms producing friction, models illustrating the processes taking place on the surface of bodies in contact are proposed.

  9. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  10. Experience with the CMS Event Data Model

    SciTech Connect

Elmer, P.; Hegner, B.; Sexton-Kennedy, L. (Fermilab)

    2009-06-01

    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006 as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans the trigger farm up to the final physics analysis. Future plans, in particular dealing with schema evolution and scaling, will be discussed briefly.

  11. Smoothed Particle Hydrodynamics simulation and laboratory-scale experiments of complex flow dynamics in unsaturated fractures

    NASA Astrophysics Data System (ADS)

    Kordilla, J.; Tartakovsky, A. M.; Pan, W.; Shigorina, E.; Noffz, T.; Geyer, T.

    2015-12-01

    Unsaturated flow in fractured porous media exhibits highly complex flow dynamics and a wide range of intermittent flow processes. Especially in wide-aperture fractures, flow processes may be dominated by gravitational instead of capillary forces, leading to a deviation from the classical volume-effective approaches (the Richards equation, van Genuchten-type relationships). The existence of various flow modes such as droplets, rivulets, turbulent and adsorbed films is well known; however, their spatial and temporal distribution within fracture networks is still an open question, partially due to the lack of appropriate modeling tools. With our work we want to gain a deeper understanding of the underlying flow and transport dynamics in unsaturated fractured media in order to support the development of more refined upscaled methods applicable on catchment scales. We present fracture-scale flow simulations obtained with a parallelized Smoothed Particle Hydrodynamics (SPH) model. The model allows us to simulate free-surface flow dynamics, including the effect of surface tension, for a wide range of wetting conditions in smooth and rough fractures. Because surface tension is generated efficiently via particle-particle interaction forces, the dynamic wetting of surfaces can readily be obtained. We validated the model via empirical and semi-analytical solutions and conducted laboratory-scale percolation experiments of unsaturated flow through synthetic fracture systems. The setup allows us to obtain travel time distributions and identify characteristic flow mode distributions on wide-aperture fractures intercepted by horizontal fracture elements.

  12. Communicating about Loss: Experiences of Older Australian Adults with Cerebral Palsy and Complex Communication Needs

    ERIC Educational Resources Information Center

    Dark, Leigha; Balandin, Susan; Clemson, Lindy

    2011-01-01

    Loss and grief is a universal human experience, yet little is known about how older adults with a lifelong disability, such as cerebral palsy, and complex communication needs (CCN) experience loss and manage the grieving process. In-depth interviews were conducted with 20 Australian participants with cerebral palsy and CCN to determine the types…

  13. Woven into the Fabric of Experience: Residential Adventure Education and Complexity

    ERIC Educational Resources Information Center

    Williams, Randall

    2013-01-01

    Residential adventure education is a surprisingly powerful developmental experience. This paper reports on a mixed-methods study focused on English primary school pupils aged 9-11, which used complexity theory to throw light on the synergistic inter-relationships between the different aspects of that experience. Broadly expressed, the research…

  14. Complex Perceptions of Identity: The Experiences of Student Combat Veterans in Community College

    ERIC Educational Resources Information Center

    Hammond, Shane Patrick

    2016-01-01

    This qualitative study illustrates how complex perceptions of identity influence the community college experience for student veterans who have been in combat, creating barriers to their overall persistence. The collective experiences of student combat veterans at two community colleges in northwestern Massachusetts are presented, and a Combat…

  15. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Laboratory experiments, which are limited by practical considerations to a few stressors at a few levels, are therefore difficult to link to real-world conditions. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters, and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
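
    A minimal sketch of the two-step process: randomly sample the stressor hyperspace, then fit a neural-network surrogate to the observed responses. The stand-in "bioassay" function and the choice of scikit-learn's MLPRegressor are assumptions for illustration; the paper does not prescribe this particular toolkit.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Step 1: randomly sample a 4-stressor "hyperspace" (levels scaled 0-1)
# and run a stand-in bioassay with a nonlinear two-way interaction.
X = rng.uniform(size=(200, 4))
y = X[:, 0] + 2 * X[:, 1] * X[:, 2] ** 2 + 0.05 * rng.normal(size=200)

# Step 2: extract a model of the interactions with an artificial
# neural network.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0).fit(X, y)

# Probe the fitted surface: vary stressor 1 while stressor 2 is held high.
probe = np.tile([0.5, 0.0, 0.9, 0.5], (5, 1))
probe[:, 1] = np.linspace(0, 1, 5)
print(net.predict(probe))
```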

  16. A simple model clarifies the complicated relationships of complex networks

    NASA Astrophysics Data System (ADS)

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-08-01

    Real-world networks such as the Internet and the WWW share many common traits. Hundreds of models have been proposed to characterize these traits and further our understanding of such networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many of these traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, a revised version of the model generates community-structure networks. With this model and its revised versions, the complicated relationships among complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modelling complex networks from the viewpoint of optimisation.

  17. Improved Structure Factors for Modeling XRTS Experiments

    NASA Astrophysics Data System (ADS)

    Stanton, Liam; Murillo, Michael; Benage, John; Graziani, Frank

    2012-10-01

    Characterizing warm dense matter (WDM) has gained renewed interest due to advances in powerful lasers and next-generation light sources. Because WDM is strongly coupled and moderately degenerate, we must often rely on simulations, which are necessarily based on ions interacting through a screened potential that must be determined. Given such a potential, ionic radial distribution functions (RDFs) and structure factors (SFs) can be calculated and related to XRTS data and EOS quantities. While many screening models are available, such as the Debye (Yukawa) potential, they are known to over-screen and are unable to capture accurate bound-state effects, which have been shown to contribute both to scattering data from XRTS and to the short-range repulsion in the RDF. Here, we present a model that incorporates an improvement to the screening length in addition to a consistent treatment of the core electrons. This new potential improves the accuracy of both bound-state and screening effects without adding to the computational complexity of Debye-like models. Calculations of ionic RDFs and SFs are compared to experimental data and quantum molecular dynamics simulations for Be, Na, Mg and Al in the WDM and liquid-metal regime.
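
    For reference, the baseline screening model the abstract says over-screens is the Debye (Yukawa) pair potential. A minimal evaluation sketch follows; the ion charge and screening length are illustrative assumptions, not values from the paper.

```python
import numpy as np

# e^2 / (4 pi eps0) expressed in eV*m, so the potential comes out in eV.
E2_4PIEPS0 = 1.44e-9

def yukawa(r, Z=3.0, screening_length=1e-10):
    """Debye/Yukawa ion-ion pair potential (eV) at separation r (m)."""
    return Z**2 * E2_4PIEPS0 * np.exp(-r / screening_length) / r

r = np.linspace(0.5e-10, 5e-10, 5)
print(yukawa(r))
```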

  18. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  19. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  20. Blueprints for Complex Learning: The 4C/ID-Model.

    ERIC Educational Resources Information Center

    van Merrienboer, Jeroen J. G.; Clark, Richard E.; de Croock, Marcel B. M.

    2002-01-01

    Describes the four-component instructional design system (4C/ID-model) developed for the design of training programs for complex skills. Discusses the structure of training blueprints for complex learning and associated instructional methods, focusing on learning tasks, supportive information, just-in-time information, and part-task practice.…

  1. Model complexity and performance: How far can we simplify?

    NASA Astrophysics Data System (ADS)

    Raick, C.; Soetaert, K.; Grégoire, M.

    2006-07-01

    Handling model complexity and reliability is a key area of research today. While complex models containing sufficient detail have become possible due to increased computing power, they often lead to too much uncertainty. On the other hand, very simple models often crudely oversimplify the real ecosystem and cannot be used for management purposes. Starting from a complex and validated 1D pelagic ecosystem model of the Ligurian Sea (NW Mediterranean Sea), we derived simplified aggregated models in which either the unbalanced algal growth, the functional group diversity or the explicit description of the microbial loop was sacrificed. To overcome the problem of data availability with adequate spatial and temporal resolution, the outputs of the complex model are used as the baseline of perfect knowledge to calibrate the simplified models. Objective criteria of model performance were used to compare the simplified models' results to the complex model output and to the available data at the DYFAMED station in the central Ligurian Sea. We show that even the simplest (NPZD) model is able to represent the global ecosystem features described by the complex model (e.g., primary and secondary production, particulate organic matter export flux, etc.). However, a certain degree of sophistication in the formulation of some biogeochemical processes is required to produce realistic behaviors (e.g., phytoplankton competition, the potential carbon or nitrogen limitation of zooplankton ingestion, the model's trophic closure, etc.). In general, a 9-state-variable model that has the functional group diversity removed, but retains the bacterial loop and the unbalanced algal growth, performs best.

  2. Prequential Analysis of Complex Data with Adaptive Model Reselection†

    PubMed Central

    Clarke, Jennifer; Clarke, Bertrand

    2010-01-01

    In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e. the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias–variance tradeoff in statistical modeling. PMID:20617104

  3. Modeling of the Princeton Raman Amplification Experiment

    NASA Astrophysics Data System (ADS)

    Hur, Min Sup; Lindberg, Ryan; Wurtele, Jonathan; Cheng, W.; Avitzour, Y.; Ping, Y.; Suckewer, S.; Fisch, N. J.

    2004-11-01

    We numerically model the Princeton experiments on Raman amplification [1] using averaged-PIC (aPIC) [2] and 3-wave codes. Recently, a series of experiments was performed at Princeton University [3]. Amplification factors of up to 500 in intensity were obtained using a subpicosecond pulse propagating in a 3 mm plasma created with a gas jet. The intensity of the amplified pulse exceeds the intensity of the pump pulse, indicating that the process of Raman amplification is in the nonlinear regime. Comparisons are made between 3-wave models, kinetic models and the experimental data; better agreement is achieved when kinetic effects are included. The influence of a range of potentially deleterious phenomena, such as density inhomogeneity, particle trapping, ionization, and pump depletion by noise amplification, will be examined. References: [1] V.M. Malkin, G. Shvets, and N.J. Fisch, Phys. Rev. Lett. vol. 82, 1999. [2] M.S. Hur, G. Penn, J.S. Wurtele, and R. Lindberg, to appear in Phys. Plasmas. [3] Y. Ping, et al., Phys. Rev. E vol. 66, 2002; Phys. Rev. E vol. 67, 2003.

  4. Size and complexity in model financial systems.

    PubMed

    Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M

    2012-11-01

    The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases.

  6. Experiments on a hurricane windmill model

    NASA Astrophysics Data System (ADS)

    Bolie, V. W.

    1981-08-01

    Airflow tests of a vertical-axis wind turbine model were performed to establish accurate endpoints for the curve of trans-rotor pressure vs. trans-rotor flow rate. Calibrated free-field flow tests at wind speeds up to 25 m/s and corroborating experiments using tufted-yarn flow tracers were performed, with the latter showing smoother flows at higher wind speeds. The rotor was also replaced by a close-fitting weighted solid disk to measure the maximum available trans-orifice pressure drop. Results indicate that vertical-axis turbines are superior to conventional windmills in terms of simplicity, television interference, and safety (the rotor blades are enclosed), while producing the same amount of power. Economically, however, the design would not be competitive in terms of dollars/kW/yr.

  7. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for the time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of the unreacted explosive was recovered in the end-heated experiment and less than 30 percent in the side-heated test.

  8. Nanofluid Drop Evaporation: Experiment, Theory, and Modeling

    NASA Astrophysics Data System (ADS)

    Gerken, William James

    Nanofluids, stable colloidal suspensions of nanoparticles in a base fluid, have potential applications in the heat transfer, combustion and propulsion, manufacturing, and medical fields. Experiments were conducted to determine the evaporation rate of room-temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved high-resolution drop images were collected for the determination of the early-time evaporation rate (D^2/D_0^2 > 0.75), shown to exhibit D-square-law behavior, and of surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A model was developed to describe the evaporation of the nanofluid pendant drops, based on D-square-law analysis for the gas domain and a description of the reduction in the liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. Model predictions are in relatively good agreement with experiment, within a few percent of the measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via modeling. It was found that the same mechanism proposed for pendant drops applies to sessile drops: a reduction in the ethanol available for evaporation at the drop surface, caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present modeling predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J
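
    The D-square law referenced here says the squared drop diameter decays linearly in time, D(t)^2 = D0^2 - K t. The sketch below contrasts an illustrative pure-ethanol evaporation constant with one reduced by roughly 15%, mirroring the reported trend; the numerical values are assumptions, not the measured data.

```python
import numpy as np

D0 = 1.5e-3                      # initial drop diameter, m (illustrative)
K_ethanol = 1.0e-8               # evaporation constant, m^2/s (illustrative)
K_nanofluid = 0.85 * K_ethanol   # ~15% reduction with nAl loading

def d_squared(t, K):
    """D(t)^2 under the D-square law, clipped at full evaporation."""
    return np.maximum(D0**2 - K * t, 0.0)

t = np.linspace(0, 60, 4)        # seconds
print(d_squared(t, K_ethanol) / D0**2)    # normalized D^2/D0^2 decay
print(d_squared(t, K_nanofluid) / D0**2)  # slower decay with nAl
```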

  9. Preparation, spectroscopy and molecular modelling studies of the inclusion complex of cordycepin with cyclodextrins.

    PubMed

    Zhang, Jian-Qiang; Wu, Di; Jiang, Kun-Ming; Zhang, Da; Zheng, Xi; Wan, Chun-Ping; Zhu, Hong-You; Xie, Xiao-Guang; Jin, Yi; Lin, Jun

    2015-04-10

    The inclusion complexes of cordycepin with cyclodextrins (CDs) were prepared, and the resultant complexes were characterised by UV-vis, FTIR, DSC, SEM, XRD, ESI-MS and proton nuclear magnetic resonance (1H NMR) spectroscopy. The stoichiometry was established using a Job plot, and the inclusion mechanism was clarified using molecular dynamics simulations. Molecular modelling calculations were carried out to rationalise the experimental findings and predict the stable molecular structure of the inclusion complex. The stability of the inclusion complexes was confirmed by energetic and thermodynamic properties (ΔE, ΔH, ΔG and ΔS) and the HOMO and LUMO orbitals. The 1:1 binding model of the complexes was directly supported by the ESI-MS experiment. Our results showed that the purine group of the cordycepin molecule was inserted deep into the cavity of the CDs.

  10. Benzenecarboxylate surface complexation at the goethite (α-FeOOH)/water interface: II. Linking IR spectroscopic observations to mechanistic surface complexation models for phthalate, trimellitate, and pyromellitate

    NASA Astrophysics Data System (ADS)

    Boily, Jean-François; Persson, Per; Sjöberg, Staffan

    2000-10-01

    A study combining information from infrared spectroscopy and adsorption experiments was carried out to investigate phthalate, trimellitate, and pyromellitate complexes at the goethite (α-FeOOH)/water interface. Infrared spectra showed evidence for inner-sphere complexes below pH 6 and outer-sphere complexes in the pH range 3 to 9. Normalized infrared peak areas were used as a semi-quantitative tool to devise diagrams showing the molecular-level surface speciation as a function of pH. Surface complexation models that simultaneously predict these diagrams, the proton balance data, and the ligand adsorption data were developed using surface complexation theory. Surface complexation modeling was carried out with a Charge Distribution Multisite Complexation (CD-MUSIC) model, assuming goethite particles with surfaces represented by the {110} plane (90% of total particle surface area) and by the {001} plane (10% of total particle surface area). Inner-sphere complexes were described as mononuclear chelates at the {001} plane, whereas outer-sphere complexes were described as binuclear complexes with singly coordinated sites on the {110} plane. The Three-Plane Model (TPM) was used to describe the surface electrostatics and to distribute the charges of the inner- and outer-sphere complexes on different planes of adsorption.

  11. Complexity vs. Simplicity: Tradeoffs in Integrated Water Resources Models

    NASA Astrophysics Data System (ADS)

    Gonda, J.; Elshorbagy, A. A.; Wheater, H. S.; Razavi, S.

    2014-12-01

    Integrated Water Resources Management is an interdisciplinary approach to managing water. Integration often involves linking hydrologic processes with socio-economic development. When this linkage is implemented through a simulation or optimization model, complexity arises, largely from the model's large data requirements, which make it difficult for end users to work with. Not only is computational efficiency at stake; the model also becomes cumbersome for future users. To overcome this issue the model may be simplified through emulation, at the expense of information loss. Herein lies a tradeoff: the complexity of an accurate, detailed model versus the transparency and salience of a simplified one. This presentation examines the role of model emulation in simplifying a water allocation model. The case study is located in Southern Alberta, Canada, where water is allocated between agricultural, municipal, environmental and energy sectors. Currently, water allocation is modeled through a detailed optimization model, WRMM. Although WRMM can allocate water on a priority basis, it lacks the simplicity needed by the end user. The proposed System Dynamics-based model, SWAMP 2.0, emulates this optimization model at two scales of complexity. A regional scale spatially aggregates individual components, reducing the complexity of the original model; a local scale retains the original detail and is contained within the regional scale. This two-tiered emulation presents relevant spatial scales to water managers, who may not be interested in all the details of WRMM. By evaluating the accuracy of SWAMP 2.0 against the original allocation model, the tradeoff of accuracy for simplicity can be quantified.

  12. Rates of heat exchange in largemouth bass: experiment and model

    SciTech Connect

    Weller, D.E.; Anderson, D.J.; DeAngelis, D.L.; Coutant, C.C.

    1984-01-01

    A mathematical model of body-core temperature change in fish was derived by modifying Newton's law of cooling to include an initial time lag in temperature adjustment. This model was tested with data from largemouth bass (Micropterus salmoides) subjected to step changes in ambient temperature and to more complex ambient regimes. Nonlinear least squares was used to fit the model parameters k (min^-1) and L (initial lag time in minutes) to time-series temperature data from step-change experiments. Temperature-change halftimes (t_1/2, in minutes) were calculated from k and L. Significant differences (P < .05) were found in these parameters between warming and cooling conditions and between live and dead fish. Statistically significant regressions were developed relating k and t_1/2 to weight and L to length. Estimates of k and L from the step-change experiments were used with a computer solution of the model to simulate the body temperature response to continuously varying ambient regimes. These simulations explained between 52% and 99% of the variation in core temperature, with absolute errors in prediction ranging between 0 and 0.61 C when ambient temperature was varied over 4 C. 14 references, 6 figures, 4 tables.
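
    A minimal sketch of the lagged Newton-cooling model described here: the core temperature holds its initial value for a lag L and then relaxes exponentially at rate k toward ambient, so one natural halftime under this formulation is t_1/2 = L + ln(2)/k. The parameter values below are illustrative, not the fitted bass values.

```python
import numpy as np

def core_temperature(t, T0=20.0, T_ambient=24.0, k=0.15, lag=2.0):
    """Core temperature (C) after a step change in ambient temperature.

    Holds T0 for t < lag (min), then relaxes exponentially at rate k
    (min^-1) toward T_ambient.
    """
    t = np.asarray(t, dtype=float)
    return np.where(t < lag,
                    T0,
                    T_ambient + (T0 - T_ambient) * np.exp(-k * (t - lag)))

t = np.linspace(0, 40, 9)           # minutes after a 4 C step change
print(core_temperature(t))
print("t_1/2 =", 2.0 + np.log(2) / 0.15, "min")
```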

  13. Bladder Cancer: A Simple Model Becomes Complex

    PubMed Central

    Pierro, Giovanni Battista Di; Gulia, Caterina; Cristini, Cristiano; Fraietta, Giorgio; Marini, Lorenzo; Grande, Pietro; Gentile, Vincenzo; Piergentili, Roberto

    2012-01-01

    Bladder cancer is one of the most frequent malignancies in developed countries, and it is also characterized by a high number of recurrences. Despite this, several authors in the past reported that only two altered molecular pathways may genetically explain all cases of bladder cancer: one involving the FGFR3 gene, and the other involving the TP53 gene. Mutations in either of these two genes are usually predictive of the malignancy's final outcome. This cancer may also be further classified into low-grade tumors, which are always papillary and in most cases superficial, and high-grade tumors, not necessarily papillary and often invasive. This simple way of considering the pathology has changed substantially in the last few years, with the development of genome-wide studies on expression profiling and the discovery of small non-coding RNAs affecting gene expression. An easy search in the OMIM (On-line Mendelian Inheritance in Man) database using "bladder cancer" as a query reveals approximately 150 genes connected in some way to this pathology, and some authors report that altered gene expression (up- or down-regulation) in this disease may involve up to 500 coding sequences for low-grade tumors and up to 2,300 for high-grade tumors. In many clinical cases, mutations inside the coding sequences of the two genes mentioned above were not found, but their expression changed; this indicates that epigenetic modifications may also play an important role in its development. Indeed, several reports have been published about genome-wide methylation in these neoplastic tissues, and an increasing number of small non-coding RNAs are either up- or down-regulated in bladder cancer, indicating that impaired gene expression may also pass through these metabolic pathways. Taken together, these data reveal that bladder cancer is far from being a simple model of malignancy. In the present review, we summarize recent progress in the genome-wide analysis of bladder cancer, and analyse non

  14. Vacuum membrane distillation: Experiments and modeling

    SciTech Connect

    Bandini, S.; Saavedra, A.; Sarti, G.C.

    1997-02-01

    Vacuum membrane distillation is a membrane-based separation process considered here to remove volatile organic compounds from aqueous streams. Microporous hydrophobic membranes are used to separate the aqueous stream from a gas phase kept under vacuum. The evaporation of the liquid stream takes place on one side of the membrane, and mass transfer occurs through the vapor phase inside the membrane. The influence of operating conditions on process performance is investigated in detail for dilute binary aqueous mixtures containing acetone, ethanol, isopropanol, ethyl acetate, methyl acetate, or methyl tert-butyl ether. The temperature, composition and flow rate of the liquid feed, along with the pressure downstream of the membrane, are the main operating variables. Among these, the vacuum-side pressure is the major design factor, since it greatly affects the separation efficiency. A mathematical model of the process is developed, and its results are compared with the experiments. The model is finally used to predict the best operating conditions for benzene removal from wastewater.

  15. Ringing load models verified against experiments

    SciTech Connect

    Krokstad, J.R.; Stansberg, C.T.

    1995-12-31

    What is believed to be the main reason for discrepancies between measured and simulated loads in previous studies has been assessed, with focus on the balance between second- and third-order load components in relation to what is called the "fat body" load correction. It is important to understand that the use of Morison strip theory in combination with second-order wave theory gives rise to second- as well as third-order components in the horizontal force. A proper balance between the second- and third-order components of the horizontal force is regarded as the most central requirement for a sufficiently accurate ringing load model in irregular seas. It is also verified that simulated second-order components are substantially overpredicted in both regular and irregular seas. Non-slender diffraction effects must be incorporated in the FNV formulation in order to reduce the simulated second-order component and to match experiments more closely. A sufficiently accurate ringing simulation model based on simplified methods is shown to be within close reach. Some further development and experimental verification must, however, be performed in order to take non-slender effects into account.
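
    As background for the balance discussed above, the Morison strip force per unit length on a vertical cylinder is, in its standard form (given here for orientation; the paper's full FNV formulation is not reproduced):

    ```latex
    \[
      \frac{dF}{dz} \;=\; \rho\, C_M \frac{\pi D^2}{4}\,\dot{u}
                    \;+\; \frac{1}{2}\,\rho\, C_D\, D\, u\,|u| ,
    \]
    ```

    with water density ρ, cylinder diameter D, and horizontal particle velocity u. When u = u⁽¹⁾ + u⁽²⁾ is taken from second-order wave theory and the force is integrated up to the instantaneous free surface, cross products of the components generate horizontal-force contributions at both second and third order in wave steepness, which is why the two orders must be balanced carefully.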

  16. The Use of Behavior Models for Predicting Complex Operations

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2010-01-01

    Modeling and simulation (M&S) plays an important role when complex human-system notions are proposed, developed, and tested within the system design process. The National Aeronautics and Space Administration (NASA) uses many different M&S approaches for predicting human-system interactions, especially early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities, including airflow, flight path, aircraft, scheduling, human performance models (HPMs), and bioinformatics models, among a host of others, which are used for predicting whether proposed designs will meet specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural/task models. The challenge to model comprehensibility is heightened as the number of integrated models and the requisite fidelity of the procedural sets increase. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model's performance. This is exemplified in a recent MIDAS v5 application model, and plans for future model refinements are presented.

  17. Multi-scale modelling for HEDP experiments on Orion

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high-energy long-pulse lasers with high-intensity short pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material-properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and by fast-electron generation, transport, and heating effects over picoseconds, driven by short-pulse high-intensity lasers. We describe the approach taken at AWE: integrating a number of codes which capture the detailed physics at each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed, and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as in understanding effects which are difficult to observe experimentally.

  18. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    PubMed Central

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian triangle meshes are employed for surface representation. On the basis of this representation, algorithms are developed for surface-area and surface-enclosed-volume calculation and for curvature estimation. Methods for volumetric meshing are also presented. Because technological developments in computer science and mathematics have led to multiple choices at each stage of geometric modeling, we discuss the rationales in the design and selection of the various algorithms. Analytical models are designed to test the computational accuracy and convergence of the proposed algorithms. Finally, we select a set of six cryo-electron microscopy data sets representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and to explore their capability for geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
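
    Two of the measurements named above, surface area and surface-enclosed volume of a closed, consistently oriented triangle mesh, follow from the standard cross-product and divergence-theorem formulas. The sketch below is illustrative, not the paper's software.

    ```python
    # Minimal sketch: area and enclosed volume of a closed triangle mesh.
    # vertices: (n, 3) float array; faces: (m, 3) int array, outward CCW.
    import numpy as np

    def area_and_volume(vertices, faces):
        v0 = vertices[faces[:, 0]]
        v1 = vertices[faces[:, 1]]
        v2 = vertices[faces[:, 2]]
        cross = np.cross(v1 - v0, v2 - v0)       # 2 * area * outward normal
        area = 0.5 * np.linalg.norm(cross, axis=1).sum()
        # Divergence theorem: signed tetrahedra (origin, v0, v1, v2).
        volume = np.einsum('ij,ij->i', v0, cross).sum() / 6.0
        return area, abs(volume)

    # Sanity check: a unit cube triangulated into 12 outward-facing
    # triangles should give area = 6 and volume = 1.
    ```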

  19. Routine Discovery of Complex Genetic Models using Genetic Algorithms

    PubMed Central

    Moore, Jason H.; Hahn, Lance W.; Ritchie, Marylyn D.; Thornton, Tricia A.; White, Bill C.

    2010-01-01

    Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes. PMID:20948983
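
    The notion of a "purely epistatic" model can be made concrete: for two SNPs, a 3x3 penetrance table whose marginal penetrances are constant has no main effects, so disease risk depends only on the genotype combination. The mutation-only genetic algorithm below is a minimal illustration of searching for such tables, with made-up fitness weights and Hardy-Weinberg genotype frequencies at allele frequency 0.5; it is not the authors' implementation.

    ```python
    # Hedged sketch: evolve a 3x3 two-SNP penetrance table whose marginal
    # penetrance deviations are (near) zero, i.e. risk driven purely by
    # the SNP-SNP interaction. Weights and operators are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    FREQ = np.array([0.25, 0.5, 0.25])   # HWE genotype frequencies, p = 0.5

    def main_effects(table):
        """Summed deviation of each SNP's marginal penetrance from the mean."""
        mean = FREQ @ table @ FREQ
        return (np.abs(table @ FREQ - mean).sum()
                + np.abs(FREQ @ table - mean).sum())

    def fitness(table):
        # Reward cell-to-cell variability (interaction), punish main effects.
        return table.var() - 10.0 * main_effects(table)

    pop = rng.uniform(0.0, 1.0, size=(50, 3, 3))     # population of tables
    for _ in range(500):
        scores = np.array([fitness(t) for t in pop])
        parents = pop[np.argsort(scores)[-25:]]      # truncation selection
        children = np.clip(parents + rng.normal(0, 0.02, parents.shape), 0, 1)
        pop = np.concatenate([parents, children])

    best = max(pop, key=fitness)
    print(np.round(best, 2), "residual main effects:",
          round(main_effects(best), 4))
    ```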

  20. Routine Discovery of Complex Genetic Models using Genetic Algorithms.

    PubMed

    Moore, Jason H; Hahn, Lance W; Ritchie, Marylyn D; Thornton, Tricia A; White, Bill C

    2004-02-01

    Simulation studies are useful in various disciplines for a number of reasons including the development and evaluation of new computational and statistical methods. This is particularly true in human genetics and genetic epidemiology where new analytical methods are needed for the detection and characterization of disease susceptibility genes whose effects are complex, nonlinear, and partially or solely dependent on the effects of other genes (i.e. epistasis or gene-gene interaction). Despite this need, the development of complex genetic models that can be used to simulate data is not always intuitive. In fact, only a few such models have been published. We have previously developed a genetic algorithm approach to discovering complex genetic models in which two single nucleotide polymorphisms (SNPs) influence disease risk solely through nonlinear interactions. In this paper, we extend this approach for the discovery of high-order epistasis models involving three to five SNPs. We demonstrate that the genetic algorithm is capable of routinely discovering interesting high-order epistasis models in which each SNP influences risk of disease only through interactions with the other SNPs in the model. This study opens the door for routine simulation of complex gene-gene interactions among SNPs for the development and evaluation of new statistical and computational approaches for identifying common, complex multifactorial disease susceptibility genes.

  1. Geometric modeling of subcellular structures, organelles, and multiprotein complexes.

    PubMed

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2012-12-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multiprotein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian-triangle meshes are employed for the surface presentation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes.

  2. Between complexity of modelling and modelling of complexity: An essay on econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, C.

    2013-09-01

    Econophysics is an emerging field dealing with complex systems and emergent properties. A deeper analysis of themes studied by econophysicists shows that research conducted in this field can be decomposed into two different computational approaches: “statistical econophysics” and “agent-based econophysics”. This methodological scission complicates the definition of the complexity used in econophysics. Therefore, this article aims to clarify what kind of emergences and complexities we can find in econophysics in order to better understand, on one hand, the current scientific modes of reasoning this new field provides; and on the other hand, the future methodological evolution of the field.

  3. Using fMRI to Test Models of Complex Cognition

    ERIC Educational Resources Information Center

    Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam

    2008-01-01

    This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…

  4. Tips on Creating Complex Geometry Using Solid Modeling Software

    ERIC Educational Resources Information Center

    Gow, George

    2008-01-01

    Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…

  5. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a model that captures its characteristics is urgently needed. Such a model has been lacking, however, because of the complexity of this market organization. As a promising approach to modeling complex systems, complex networks provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price-advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is relatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random member failures while being fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
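
    The statistics reported above can be measured for any simulated transaction network with standard graph tooling. The sketch below uses networkx on a scale-free stand-in graph (the BPT model itself is not reproduced here) to compute clustering and path-length statistics and to probe robustness under a deliberate hub attack.

    ```python
    # Hedged sketch: degree/clustering/path-length statistics and a
    # hub-attack robustness probe on a stand-in scale-free network.
    import networkx as nx

    G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)   # stand-in network

    degrees = [d for _, d in G.degree()]
    print("max degree:", max(degrees))
    print("average clustering:", nx.average_clustering(G))
    print("average shortest path:", nx.average_shortest_path_length(G))

    # "Deliberate attack": remove the 50 highest-degree nodes and watch
    # the largest connected component shrink.
    attack = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:50]
    G.remove_nodes_from(node for node, _ in attack)
    print("largest component after attack:",
          len(max(nx.connected_components(G), key=len)))
    ```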

  6. MatOFF: a tool for analyzing behaviorally complex neurophysiological experiments.

    PubMed

    Genovesio, Aldo; Mitz, Andrew R

    2007-09-15

    The simple operant conditioning originally used in behavioral neurophysiology 30 years ago has given way to complex and sophisticated behavioral paradigms; so much so, that early general purpose programs for analyzing neurophysiological data are ill-suited for complex experiments. The trend has been to develop custom software for each class of experiment, but custom software can have serious drawbacks. We describe here a general purpose software tool for behavioral and electrophysiological studies, called MatOFF, that is especially suited for processing neurophysiological data gathered during the execution of complex behaviors. Written in the MATLAB programming language, MatOFF solves the problem of handling complex analysis requirements in a unique and powerful way. While other neurophysiological programs are either a loose collection of tools or append MATLAB as a post-processing step, MatOFF is an integrated environment that supports MATLAB scripting within the event search engine safely isolated in a programming sandbox. The results from scripting are stored separately, but in parallel with the raw data, and thus available to all subsequent MatOFF analysis and display processing. An example from a recently published experiment shows how all the features of MatOFF work together to analyze complex experiments and mine neurophysiological data in efficient ways.

  7. Zebrafish as an emerging model for studying complex brain disorders

    PubMed Central

    Kalueff, Allan V.; Stewart, Adam Michael; Gerlai, Robert

    2014-01-01

    The zebrafish (Danio rerio) is rapidly becoming a popular model organism in pharmacogenetics and neuropharmacology. Both larval and adult zebrafish are currently used to increase our understanding of brain function, dysfunction, and their genetic and pharmacological modulation. Here we review the developing utility of zebrafish in the analysis of complex brain disorders (including, for example, depression, autism, psychoses, drug abuse and cognitive disorders), also covering zebrafish applications towards the goal of modeling major human neuropsychiatric and drug-induced syndromes. We argue that zebrafish models of complex brain disorders and drug-induced conditions have become a rapidly emerging critical field in translational neuropharmacology research. PMID:24412421

  8. Characterization of complex systems using the design of experiments approach: transient protein expression in tobacco as a case study.

    PubMed

    Buyel, Johannes Felix; Fischer, Rainer

    2014-01-01

    Plants provide multiple benefits for the production of biopharmaceuticals, including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors, such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations, and step-wise design augmentation. Therefore, the methodology is useful not only for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems. PMID:24514765
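
    As a minimal illustration of the software-guided design-setup step, the sketch below enumerates a two-level full factorial over three factors with itertools; the factor names and levels are hypothetical, not the exact factors of the study.

    ```python
    # Hedged sketch: a 2-level full factorial design matrix. Factor names
    # and levels are made up for illustration.
    from itertools import product

    factors = {
        "incubation_temp_C": (22, 28),
        "plant_age_d":       (35, 49),
        "promoter":          ("35S", "nos"),   # categorical factor
    }

    runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    for i, run in enumerate(runs, 1):
        print(f"run {i:2d}: {run}")
    print(f"{len(runs)} runs for a 2^{len(factors)} full factorial design")
    ```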

  9. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    …experimental observations of the effect of wind on the surface thermal structure of a viscous flow, which could be used to benchmark a thermal heat-loss model. We will also briefly present more complex analogue experiments using wax. These experiments exhibit discontinuous advance behavior and a dual surface thermal structure, with regions of low (solidified) and high (hot liquid exposed at the surface) surface temperature. Emplacement models should aim to reproduce these two features, also observed on lava flows, to better predict the hazard of lava inundation.

  10. Real-data Calibration Experiments On A Distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Brath, A.; Montanari, A.; Toth, E.

    The increasing availability of extended information on study watersheds does not generally remove the need to determine at least some of the parameters of distributed hydrologic models through calibration. The complexity of such models, which makes the computations highly intensive, has often prevented an extensive analysis of calibration issues. The purpose of this study is to evaluate the validation results of a series of automatic calibration experiments (using the shuffled complex evolution method, Duan et al., 1992) performed with a highly conceptualised, continuously simulating, distributed hydrologic model applied to real data from a mid-sized Italian watershed. Major flood events that occurred in the 1990-2000 decade are simulated with the parameters obtained by calibrating the model against discharge data observed at the closure section of the watershed, and the hydrological features of the simulated discharges (overall agreement, volumes, peaks, and times to peak) at both the closure section and an interior stream gauge are analysed for validation purposes. A first set of calibrations investigates the effect of the variability of the calibration periods, using data from several single flood events and from longer, continuous periods. Another analysis regards the influence of the rainfall input: it is carried out by varying the size and distribution of the raingauge network, in order to examine the relation between the spatial pattern of observed rainfall and the variability of modelled runoff. Lastly, a comparison is presented of the hydrographs obtained for the flood events when the objective function minimised in the automatic calibration procedure is modified.
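
    As a sketch of the calibration loop described above, the code below fits a toy linear-reservoir rainfall-runoff model to synthetic "observed" discharge by minimising a sum-of-squares objective with a global optimiser. SciPy's differential evolution stands in for the shuffled complex evolution method (SCE-UA itself is not part of SciPy); all data and parameter values are illustrative.

    ```python
    # Hedged sketch: global-optimiser calibration of a toy runoff model.
    import numpy as np
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(7)
    rain = rng.gamma(0.3, 5.0, size=365)           # synthetic daily rainfall

    def simulate(params, rain):
        k, frac = params                           # recession rate, runoff fraction
        storage, q = 0.0, np.empty_like(rain)
        for t, p in enumerate(rain):
            storage += frac * p
            q[t] = k * storage                     # linear-reservoir outflow
            storage -= q[t]
        return q

    q_obs = simulate([0.2, 0.6], rain) + rng.normal(0, 0.05, rain.size)

    def sse(params):                               # objective to minimise
        return np.sum((simulate(params, rain) - q_obs) ** 2)

    result = differential_evolution(sse, bounds=[(0.01, 0.9), (0.1, 1.0)],
                                    seed=0)
    print("calibrated k, frac:", np.round(result.x, 3))
    ```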

  11. Theoretical Simulations and Ultrafast Pump-probe Spectroscopy Experiments in Pigment-protein Photosynthetic Complexes

    SciTech Connect

    Buck, D.R.

    2000-09-12

    Theoretical simulations and ultrafast pump-probe laser spectroscopy experiments were used to study photosynthetic pigment-protein complexes and antennae found in green sulfur bacteria such as Prosthecochloris aestuarii, Chloroflexus aurantiacus, and Chlorobium tepidum. The work focused on understanding structure-function relationships in the energy transfer processes of these complexes, testing our theoretical assumptions by modeling the experimental data with calculations. Theoretical exciton calculations on tubular pigment aggregates yield electronic absorption spectra that are superpositions of linear J-aggregate spectra. The electronic spectroscopy of BChl c/d/e antennae in light-harvesting chlorosomes from Chloroflexus aurantiacus differs considerably from J-aggregate spectra; strong symmetry breaking is needed if we hope to simulate the absorption spectra of the BChl c antenna. The theory for simulating absorption difference spectra in strongly coupled photosynthetic antennae is described, first for a relatively simple heterodimer, then for the general N-pigment system. The theory is applied to the Fenna-Matthews-Olson (FMO) BChl a protein trimers from Prosthecochloris aestuarii and then compared with experimental low-temperature absorption difference spectra of FMO trimers from Chlorobium tepidum. Circular dichroism spectra of the FMO trimer are unusually sensitive to diagonal energy disorder: substantial differences occur between CD spectra in exciton simulations performed with and without realistic inhomogeneous distribution functions for the input pigment diagonal energies. Anisotropic absorption difference spectroscopy measurements are less consistent with 21-pigment trimer simulations than with 7-pigment monomer simulations, which assume that the laser-prepared states are localized within a subunit of the trimer. Experimental anisotropies from real samples likely arise from statistical averaging over states with diagonal energies shifted by
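
    The "relatively simple heterodimer" case has a compact numerical form: diagonalising a 2x2 one-exciton Hamiltonian with site energies E1, E2 and coupling J yields the exciton energies and redistributes the transition dipole strength between the two bands. The numbers below are illustrative, not FMO parameters.

    ```python
    # Hedged sketch: exciton energies and dipole strengths of a heterodimer.
    import numpy as np

    E1, E2, J = 12400.0, 12500.0, 80.0     # site energies and coupling, cm^-1
    H = np.array([[E1, J],
                  [J, E2]])

    energies, C = np.linalg.eigh(H)        # exciton energies, eigenvectors
    mu_site = np.array([[1.0, 0.0, 0.0],   # site transition dipoles (unit)
                        [0.6, 0.8, 0.0]])
    mu_exc = C.T @ mu_site                 # exciton transition dipoles
    strength = (mu_exc ** 2).sum(axis=1)   # relative dipole strengths

    for E, s in zip(energies, strength):
        print(f"exciton at {E:8.1f} cm^-1, relative dipole strength {s:.2f}")
    ```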

  12. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.

    PubMed

    Taha, Mohamed; Khan, Imran; Coutinho, João A P

    2016-04-01

    With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps protect human cells against the oxidative damage implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as the primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass-electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. The overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted using the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solution.
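
    The "overall stability constants" referred to above are, in the usual potentiometric notation, cumulative formation constants. For the general equilibrium p Eu³⁺ + q L + r H⁺ ⇌ EuₚLqHr (with L the gallate or amino-acid anion; this is the standard definition, not the paper's specific speciation scheme):

    ```latex
    \[
      \beta_{pqr} \;=\;
      \frac{\left[\mathrm{Eu}_p \mathrm{L}_q \mathrm{H}_r\right]}
           {\left[\mathrm{Eu}^{3+}\right]^{p}\,
            \left[\mathrm{L}\right]^{q}\,
            \left[\mathrm{H}^{+}\right]^{r}}
    \]
    ```

    For the ternary complexes the expression is analogous, with one concentration term for each of the two different ligands.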

  13. Pedigree models for complex human traits involving the mitochondrial genome

    SciTech Connect

    Schork, N.J.; Guo, S.W. )

    1993-12-01

    Recent biochemical and molecular-genetic discoveries concerning variations in human mtDNA have suggested a role for mtDNA mutations in a number of human traits and disorders. Although the importance of these discoveries cannot be emphasized enough, the complex natures of mitochondrial biogenesis, mutant mtDNA phenotype expression, and the maternal inheritance pattern exhibited by mtDNA transmission make it difficult to develop models that can be used routinely in pedigree analyses to quantify and test hypotheses about the role of mtDNA in the expression of a trait. In the present paper, the authors describe complexities inherent in mitochondrial biogenesis and genetic transmission and show how these complexities can be incorporated into appropriate mathematical models. The authors offer a variety of likelihood-based models which account for the complexities discussed. The derivation of the models is meant to stimulate the construction of statistical tests for putative mtDNA contribution to a trait. Results of simulation studies which make use of the proposed models are described. The results of the simulation studies suggest that, although pedigree models of mtDNA effects can be reliable, success in mapping chromosomal determinants of a trait does not preclude the possibility that mtDNA determinants exist for the trait as well. Shortcomings inherent in the proposed models are described in an effort to expose areas in need of additional research. 58 refs., 5 figs., 2 tabs.

  14. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  15. Wake Vortex Encounter Model Validation Experiments

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan; Brandon, Jay; Greene, George C.; Rivers, Robert; Shah, Gautam; Stewart, Eric; Stuever, Robert; Rossow, Vernon

    1997-01-01

    The goal of this research is to establish a database that validates and calibrates wake-encounter analysis methods for fleet-wide application, and to measure and document atmospheric effects on wake decay. Two kinds of experiments are performed: wind tunnel experiments and flight experiments. This paper discusses the different types of tests and compares their wake velocity measurements.

  16. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  17. Minimal model for complex dynamics in cellular processes.

    PubMed

    Suguna, C; Chowdhury, K K; Sinha, S

    1999-11-01

    Cellular functions are controlled and coordinated by the complex circuitry of biochemical pathways regulated by genetic and metabolic feedback processes. This paper aims to show, with the help of a minimal model of a regulated biochemical pathway, that the common nonlinearities and control structures present in biomolecular interactions are capable of eliciting a variety of functional dynamics, such as homeostasis and periodic, complex, and chaotic oscillations (including transients), that are observed in various cellular processes.
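
    A classic minimal example of such a regulated pathway is the Goodwin negative-feedback loop, in which the end product represses the first synthesis step; with a sufficiently steep Hill coefficient the loop oscillates. The sketch below is a generic textbook model, not necessarily the authors' exact equations.

    ```python
    # Hedged sketch: three-variable Goodwin oscillator (z represses x).
    import numpy as np
    from scipy.integrate import solve_ivp

    def goodwin(t, s, n=10.0):
        x, y, z = s
        dx = 1.0 / (1.0 + z ** n) - 0.1 * x    # synthesis repressed by z
        dy = x - 0.1 * y
        dz = y - 0.1 * z
        return [dx, dy, dz]

    sol = solve_ivp(goodwin, (0.0, 400.0), [0.1, 0.2, 0.3],
                    dense_output=True, max_step=0.5)
    z = sol.sol(np.linspace(200, 400, 2000))[2]
    # A wide late-time range of z indicates sustained oscillation.
    print(f"late-time z range: {z.min():.2f} .. {z.max():.2f}")
    ```

    With three identical first-order steps, the classic analysis requires an effective Hill coefficient above about 8 for a limit cycle, which is why n = 10 is used here.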

  18. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  19. On explicit algebraic stress models for complex turbulent flows

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Speziale, C. G.

    1992-01-01

    Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models, as well as anisotropic eddy viscosity models, is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.

  20. Braiding DNA: Experiments, Simulations, and Models

    PubMed Central

    Charvin, G.; Vologodskii, A.; Bensimon, D.; Croquette, V.

    2005-01-01

    DNA encounters topological problems in vivo because of its extended double-helical structure. As a consequence, the semiconservative mechanism of DNA replication leads to the formation of DNA braids or catenanes, which have to be removed for the completion of cell division. To get a better understanding of these structures, we have studied the elastic behavior of two braided nicked DNA molecules using a magnetic trap apparatus. The experimental data let us identify and characterize three regimes of braiding: a slightly twisted regime before the formation of the first crossing, followed by genuine braids which, at large braiding number, buckle to form plectonemes. Two different approaches support and quantify this characterization of the data. First, Monte Carlo (MC) simulations of braided DNAs yield a full description of the molecules' behavior and their buckling transition. Second, modeling the braids as a twisted swing provides a good approximation of the elastic response of the molecules as they are intertwined. Comparisons of the experiments and the MC simulations with this analytical model allow for a measurement of the diameter of the braids and its dependence upon entropic and electrostatic repulsive interactions. The MC simulations allow for an estimate of the effective torsional constant of the braids (at a stretching force F = 2 pN): Cb ∼ 48 nm (as compared with C ∼100 nm for a single unnicked DNA). Finally, at low salt concentrations and for sufficiently large number of braids, the diameter of the braided molecules is observed to collapse to that of double-stranded DNA. We suggest that this collapse is due to the partial melting and fraying of the two nicked molecules and the subsequent right- or left-handed intertwining of the stretched single strands. PMID:15778439

  1. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history-matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture and that extreme formation damage occurred, since a 65% permeability reduction around the wellbore was estimated; the minifracture had been designed to extend 200 to 300 feet on each side of the wellbore. (2) Post-full-scale-stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation, as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture; an induced fracture half-length of 100 feet was determined, compared with a designed fracture half-length of 500 to 600 feet. (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests showing that extreme permeability anisotropy was not a factor for this zone; this lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  2. Complex groundwater flow systems as traveling agent models.

    PubMed

    López Corona, Oliver; Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis

    2014-01-01

    Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is studied theoretically from an agent-based perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and we provide a set of new partial differential equations for groundwater flow. PMID:25337455
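
    The 1/f diagnostic is straightforward to reproduce: estimate the power spectral density of a time series with Welch's method and fit the slope on log-log axes. The sketch below applies it to synthetic pink noise standing in for the pumping-test drawdown records.

    ```python
    # Hedged sketch: Welch PSD estimate and spectral-exponent fit.
    import numpy as np
    from scipy.signal import welch

    rng = np.random.default_rng(3)
    # Synthesize ~1/f noise by shaping white noise in the frequency domain.
    white = rng.normal(size=2 ** 14)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(white.size, d=1.0)
    spectrum[1:] /= np.sqrt(freqs[1:])      # amplitude ~ f^(-1/2) => PSD ~ 1/f
    series = np.fft.irfft(spectrum, n=white.size)

    f, psd = welch(series, fs=1.0, nperseg=4096)
    mask = f > 0
    slope = np.polyfit(np.log10(f[mask]), np.log10(psd[mask]), 1)[0]
    print(f"fitted spectral exponent: {slope:.2f} (about -1 for 1/f noise)")
    ```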

  3. Complex groundwater flow systems as traveling agent models

    PubMed Central

    Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis

    2014-01-01

    Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is studied theoretically from an agent-based perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and we provide a set of new partial differential equations for groundwater flow. PMID:25337455

  4. Humic Acid Complexation of Th, Hf and Zr in Ligand Competition Experiments: Metal Loading and pH Effects

    NASA Technical Reports Server (NTRS)

    Stern, Jennifer C.; Foustoukos, Dionysis I.; Sonke, Jeroen E.; Salters, Vincent J. M.

    2014-01-01

    The mobility of metals in soils and subsurface aquifers is strongly affected by sorption and complexation with dissolved organic matter, oxyhydroxides, clay minerals, and inorganic ligands. Humic substances (HS) are organic macromolecules with functional groups that have a strong affinity for binding metals, such as actinides. Thorium, often studied as an analog for tetravalent actinides, has also been shown to associate strongly with dissolved and colloidal HS in natural waters. The effects of HS on the mobilization dynamics of actinides are of particular interest in the risk assessment of nuclear waste repositories. Here, we present conditional equilibrium binding constants (Kc,MHA) of thorium-, hafnium-, and zirconium-humic acid complexes from ligand competition experiments using capillary electrophoresis coupled with ICP-MS (CE-ICP-MS). Equilibrium dialysis ligand exchange (EDLE) experiments using size exclusion via a 1000 Da membrane were also performed to validate the CE-ICP-MS analysis. Experiments were performed at pH 3.5-7 with solutions containing one tetravalent metal (Th, Hf, or Zr), Elliot soil humic acid (EHA) or Pahokee peat humic acid (PHA), and EDTA. CE-ICP-MS and EDLE experiments yielded nearly identical binding constants for the metal-humic acid complexes, indicating that both methods are appropriate for examining metal speciation at pH values below neutral. We find that tetravalent metals form strong complexes with humic acids, with Kc,MHA several orders of magnitude above REE-humic complexes. Experiments were conducted over a range of dissolved HA concentrations to examine the effect of the [HA]/[Th] molar ratio on Kc,MHA. At low metal loading (i.e., elevated [HA]/[Th] ratios) the Th-HA binding constant reached values that were not affected by the relative abundance of humic acid and thorium. The importance of [HA]/[Th] molar ratios in constraining the equilibrium of MHA complexation is apparent when our estimated Kc,MHA values

  5. Implementation of a complex multi-phase equation of state for cerium and its correlation with experiment

    SciTech Connect

    Cherne, Frank J; Jensen, Brian J; Elkin, Vyacheslav M

    2009-01-01

    The complexity of cerium combined with its interesting material properties makes it a desirable material to examine dynamically. Characteristics such as the softening of the material before the phase change, the low-pressure solid-solid phase change, the predicted low-pressure melt boundary, and the solid-solid critical point add complexity to the construction of its equation of state. Currently, we are incorporating a feedback loop between a theoretical and an experimental understanding of the material. Using a model equation of state for cerium, we compare calculated wave profiles with experimental wave profiles for a number of front-surface impact (cerium impacting a plated window) experiments. Using the calculated release isentrope, we predict the temperature of the observed rarefaction shock. These experiments showed that the release state occurs at different magnitudes, allowing us to infer where the dynamic γ-α phase boundary lies.

  6. A Complex Systems Model Approach to Quantified Mineral Resource Appraisal

    USGS Publications Warehouse

    Gettings, M.E.; Bultman, M.W.; Fisher, F.S.

    2004-01-01

    For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
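
    The first step the authors describe, evaluating a cognitive map with (+1, 0, -1) link values under different scenarios, can be sketched in a few lines. The four-concept map below is a made-up illustration, not an actual mineral-resource appraisal model.

    ```python
    # Hedged sketch: iterate a tiny (fuzzy) cognitive map to a steady state.
    import numpy as np

    concepts = ["exploration", "deposit_found", "development", "env_impact"]
    W = np.array([                  # W[i, j]: influence of concept i on j
        [0, +1,  0,  0],
        [0,  0, +1,  0],
        [0,  0,  0, +1],
        [0,  0, -1,  0],            # environmental impact inhibits development
    ])

    def run_scenario(state, steps=20):
        squash = lambda x: 1.0 / (1.0 + np.exp(-x))   # activations in (0, 1)
        for _ in range(steps):
            state = squash(state @ W + state)          # additive update + memory
        return state

    start = np.array([1.0, 0.0, 0.0, 0.0])             # "exploration" scenario
    print(dict(zip(concepts, np.round(run_scenario(start), 2))))
    ```

    Ranking link importance, as the abstract suggests, amounts to re-running such scenarios with individual links zeroed out and comparing the steady states.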

  7. Cd adsorption onto Anoxybacillus flavithermus: Surface complexation modeling and spectroscopic investigations

    SciTech Connect

    Burnett, Peta-Gaye; Daughney, Christopher J.; Peak, Derek

    2008-06-09

    Several recent studies have applied surface complexation theory to model metal adsorption behaviour onto mesophilic bacteria. However, no investigations have used this approach to characterise metal adsorption by thermophilic bacteria. In this study, we perform batch adsorption experiments to quantify cadmium adsorption onto the thermophile Anoxybacillus flavithermus. Surface complexation models (incorporating the Donnan electrostatic model) are developed to determine stability constants corresponding to specific adsorption reactions. Adsorption reactions and stoichiometries are constrained using spectroscopic techniques (XANES, EXAFS, and ATR-FTIR). The results indicate that the Cd adsorption behaviour of A. flavithermus is similar to that of other mesophilic bacteria. At high bacteria-to-Cd ratios, Cd adsorption occurs by formation of a 1:1 complex with deprotonated cell wall carboxyl functional groups. At lower bacteria-to-Cd ratios, a second adsorption mechanism occurs at pH > 7, which may correspond to the formation of a Cd-phosphoryl, CdOH-carboxyl, or CdOH-phosphoryl surface complex. X-ray absorption spectroscopic investigations confirm the formation of the 1:1 Cd-carboxyl surface complex, but due to the bacteria-to-Cd ratio used in these experiments, other complexation mechanism(s) could not be unequivocally resolved by the spectroscopic data.

  8. Dynamic crack initiation toughness : experiments and peridynamic modeling.

    SciTech Connect

    Foster, John T.

    2009-10-01

    This is a dissertation on research conducted studying the dynamic crack initiation toughness of a 4340 steel. Researchers have been conducting experimental testing of dynamic crack initiation toughness, K_Ic, for many years, using many experimental techniques, with vastly different trends in the results when reporting K_Ic as a function of loading rate. The dissertation describes a novel experimental technique for measuring K_Ic in metals using the Kolsky bar. The method borrows from improvements made in recent years in traditional Kolsky bar testing by using pulse-shaping techniques to ensure a constant loading rate applied to the sample before crack initiation. Dynamic crack initiation measurements were reported for a 4340 steel at two different loading rates. The steel was shown to exhibit a rate dependence, with the recorded values of K_Ic being much higher at the higher loading rate. Using this rate dependence as motivation in attempting to model the fracture events, a viscoplastic constitutive model was implemented in a peridynamic computational mechanics code. Peridynamics is a newly developed theory in solid mechanics that replaces the classical partial differential equations of motion with integro-differential equations which do not require the existence of spatial derivatives in the displacement field. This allows for the straightforward modeling of unguided crack initiation and growth. To date, peridynamic implementations have used severely restricted constitutive models; this research represents the first implementation of a complex material model and its validation. After showing results comparing deformations with experimental Taylor anvil impact tests for the viscoplastic material model, a novel failure criterion is introduced to model the dynamic crack initiation toughness experiments. The failure model is based on an energy criterion and uses the K_Ic values recorded experimentally as an input. The failure model
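
    The integro-differential form referred to above is, in the standard bond-based statement (general background, not the dissertation's specific formulation):

    ```latex
    \[
      \rho\,\ddot{\mathbf{u}}(\mathbf{x},t)
      \;=\; \int_{\mathcal{H}_{\mathbf{x}}}
          \mathbf{f}\bigl(\mathbf{u}(\mathbf{x}',t)-\mathbf{u}(\mathbf{x},t),\,
                          \mathbf{x}'-\mathbf{x}\bigr)\, dV_{\mathbf{x}'}
        \;+\; \mathbf{b}(\mathbf{x},t) ,
    \]
    ```

    where the pairwise force function f acts over a finite neighborhood (horizon) H_x rather than through spatial derivatives, which is what permits displacement fields with discontinuities such as cracks.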

  9. An analytical pressure-transient model for complex reservoir scenarios

    NASA Astrophysics Data System (ADS)

    Gomes, Edmond; Ambastha, Anil K.

    1994-10-01

    Reservoir deposition occurs over long periods of time; thus, most reservoirs are heterogeneous in nature. The presence of various zones and layers of different rock and fluid properties is the usual circumstance in petroleum reservoirs. A secondary recovery operation, such as steam flooding, results in a composite reservoir situation because of the presence of zones with different fluid properties. Because of reservoir heterogeneity and gravity-override effects, the fluid boundaries separating two zones may have complicated or irregular shapes. The purpose of this paper is to develop a new analytical pressure-transient model which can accommodate complex reservoir scenarios resulting from reservoir heterogeneity and from thermal recovery or other fluid-injection operations. Mathematically, our analytical model treats such complex situations as a generalized eigenvalue problem that results in a system of linear equations. The computational difficulties faced, the approach used to validate the new model, and an application to complex reservoir scenarios are discussed.

  10. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    SciTech Connect

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.

  11. Deaf Children with Complex Needs: Parental Experience of Access to Cochlear Implants and Ongoing Support

    ERIC Educational Resources Information Center

    McCracken, Wendy; Turner, Oliver

    2012-01-01

    This paper discusses the experiences of parents of deaf children with additional complex needs (ACN) in accessing cochlear implant (CI) services and achieving ongoing support. Of a total study group of fifty-one children with ACN, twelve had been fitted with a CI. The parental accounts provide a rich and varied picture of service access. For some…

  12. Board Games and Board Game Design as Learning Tools for Complex Scientific Concepts: Some Experiences

    ERIC Educational Resources Information Center

    Chiarello, Fabio; Castellano, Maria Gabriella

    2016-01-01

    In this paper the authors report different experiences in the use of board games as learning tools for complex and abstract scientific concepts such as Quantum Mechanics, Relativity or nano-biotechnologies. In particular we describe "Quantum Race," designed for the introduction of Quantum Mechanical principles, "Lab on a chip,"…

  13. Turing instability in reaction-diffusion models on complex networks

    NASA Astrophysics Data System (ADS)

    Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya

    2016-09-01

    In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that the stable and unstable regions of a homogeneous steady state in the parameter space of the two diffusion coefficients differ completely depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions for regular enhanced ring lattices, which include regular circles, and for networks generated by the threshold network model when the number of vertices is large enough.
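
    The Laplacian-based analysis can be sketched directly: for a two-species system linearised about a homogeneous steady state, the network mode with Laplacian eigenvalue Λ grows when J − Λ·diag(Du, Dv) has an eigenvalue with positive real part, where J is the reaction Jacobian. The Jacobian below is a generic activator-inhibitor example, not taken from the paper.

    ```python
    # Hedged sketch: count Turing-unstable Laplacian modes on an ER graph.
    import networkx as nx
    import numpy as np

    G = nx.erdos_renyi_graph(n=200, p=0.05, seed=1)
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray())

    J = np.array([[1.0, -2.0],      # stable without diffusion:
                  [2.0, -2.5]])     # tr J < 0, det J > 0
    Du, Dv = 0.05, 1.0              # inhibitor diffuses much faster

    growth = [np.linalg.eigvals(J - l * np.diag([Du, Dv])).real.max()
              for l in lam]
    unstable = sum(g > 0 for g in growth)
    print(f"{unstable} of {len(lam)} Laplacian modes are Turing-unstable")
    ```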

  14. Modelling nutrient reduction targets - model structure complexity vs. data availability

    NASA Astrophysics Data System (ADS)

    Capell, Rene; Lausten Hansen, Anne; Donnelly, Chantal; Refsgaard, Jens Christian; Arheimer, Berit

    2015-04-01

    In most parts of Europe, macronutrient concentrations and loads in surface water are currently affected by human land use and land management choices. Moreover, current macronutrient concentrations and loads often violate European Water Framework Directive (WFD) targets, and water managers are seeking effective measures to reduce them. Identifying such measures in specific target catchments should consider the four key processes of release, transport, retention, and removal, and thus physical catchment characteristics such as soils and geomorphology, but also management data such as crop distribution and fertilizer application regimes. The BONUS-funded research project Soils2Sea evaluates new, differentiated regulation strategies to cost-efficiently reduce nutrient loads to the Baltic Sea, based on new knowledge of nutrient transport and retention processes between soils and the coast. Within the Soils2Sea framework, we examine the capability of two integrated hydrological and nutrient transfer models, HYPE and Mike SHE, to model runoff and nitrate flux responses in the 100 km2 Norsminde catchment, Denmark, comparing different model structures and data bases. We focus on comparing modelled nitrate reductions within and below the root zone, and we evaluate model performance as a function of the available model structures (process representation within the model) and data bases (temporal forcing data and spatial information). This model evaluation is performed to aid the development of model tools which will be used to estimate the effect of new nutrient reduction measures at catchment to regional scales, where the available data, both climate forcing and land management, typically become increasingly limited with the targeted spatial scale and may act as a bottleneck for process conceptualizations and thus for the value of a model as a tool to provide decision support for differentiated regulation strategies.

  15. A Compact Model for the Complex Plant Circadian Clock.

    PubMed

    De Caluwé, Joëlle; Xiao, Qiying; Hermans, Christian; Verbruggen, Nathalie; Leloup, Jean-Christophe; Gonze, Didier

    2016-01-01

    The circadian clock is an endogenous timekeeper that allows organisms to anticipate and adapt to the daily variations of their environment. The plant clock is an intricate network of interlocked feedback loops, in which transcription factors regulate each other to generate oscillations with expression peaks at specific times of the day. Over the last decade, mathematical modeling approaches have been used to understand the inner workings of the clock in the model plant Arabidopsis thaliana. Those efforts have produced a number of models of ever increasing complexity. Here, we present an alternative model that combines a low number of equations and parameters, similar to the very earliest models, with the complex network structure found in more recent ones. This simple model describes the temporal evolution of the abundance of eight clock gene mRNA/protein and captures key features of the clock on a qualitative level, namely the entrained and free-running behaviors of the wild type clock, as well as the defects found in knockout mutants (such as altered free-running periods, lack of entrainment, or changes in the expression of other clock genes). Additionally, our model produces complex responses to various light cues, such as extreme photoperiods and non-24 h environmental cycles, and can describe the control of hypocotyl growth by the clock. Our model constitutes a useful tool to probe dynamical properties of the core clock as well as clock-dependent processes. PMID:26904049

  16. Deterministic ripple-spreading model for complex networks

    NASA Astrophysics Data System (ADS)

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S.; Hines, Evor L.; di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
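    A loose toy rendering of the idea (not the authors' exact formulation): each initial event emits a ripple whose amplitude decays with distance, and a link forms wherever the arriving amplitude exceeds a node's threshold, so the topology is fully determined once the randomly initialized positions, thresholds, and energies are fixed.

```python
import numpy as np

rng = np.random.default_rng(0)          # parameters are random, but the
N = 50                                  # ripple process itself is
pos = rng.random((N, 2))                # deterministic once they are set
thresh = rng.uniform(0.5, 1.5, N)       # per-node responsiveness thresholds
E0, decay = 2.0, 1.0                    # initial ripple energy, decay rate

def ripple_links(source):
    """Connect `source` to every node reached with energy above threshold."""
    d = np.linalg.norm(pos - pos[source], axis=1)
    amp = E0 * np.exp(-decay * d)       # amplitude decays with distance
    hit = np.where((amp >= thresh) & (np.arange(N) != source))[0]
    return [(source, j) for j in hit]

edges = set()
for src in (0, 1, 2):                   # a few initial "events"
    edges.update(ripple_links(src))
print(len(edges), "links")
```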

  17. A Compact Model for the Complex Plant Circadian Clock

    PubMed Central

    De Caluwé, Joëlle; Xiao, Qiying; Hermans, Christian; Verbruggen, Nathalie; Leloup, Jean-Christophe; Gonze, Didier

    2016-01-01

    The circadian clock is an endogenous timekeeper that allows organisms to anticipate and adapt to the daily variations of their environment. The plant clock is an intricate network of interlocked feedback loops, in which transcription factors regulate each other to generate oscillations with expression peaks at specific times of the day. Over the last decade, mathematical modeling approaches have been used to understand the inner workings of the clock in the model plant Arabidopsis thaliana. Those efforts have produced a number of models of ever increasing complexity. Here, we present an alternative model that combines a low number of equations and parameters, similar to the very earliest models, with the complex network structure found in more recent ones. This simple model describes the temporal evolution of the abundances of eight clock gene mRNAs/proteins and captures key features of the clock on a qualitative level, namely the entrained and free-running behaviors of the wild-type clock, as well as the defects found in knockout mutants (such as altered free-running periods, lack of entrainment, or changes in the expression of other clock genes). Additionally, our model produces complex responses to various light cues, such as extreme photoperiods and non-24 h environmental cycles, and can describe the control of hypocotyl growth by the clock. Our model constitutes a useful tool to probe dynamical properties of the core clock as well as clock-dependent processes. PMID:26904049

  18. Complex solutions for the scalar field model of the Universe

    NASA Astrophysics Data System (ADS)

    Lyons, Glenn W.

    1992-08-01

    The Hartle-Hawking proposal is implemented for Hawking's scalar field model of the Universe. For this model the complex saddle-point geometries required by the semiclassical approximation to the path integral cannot simply be deformed into real Euclidean and real Lorentzian sections. Approximate saddle points are constructed which are fully complex and have contours of real Lorentzian evolution. The semiclassical wave function is found to give rise to classical spacetimes at late times and extra terms in the Hamilton-Jacobi equation do not contribute significantly to the potential.

  19. Introduction to a special section on ecohydrology of semiarid environments: Confronting mathematical models with ecosystem complexity

    NASA Astrophysics Data System (ADS)

    Svoray, Tal; Assouline, Shmuel; Katul, Gabriel

    2015-11-01

    The current literature provides a large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, a question remains: to what extent are these mathematical models valid when confronted with observed ecosystem complexity? This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to greater understanding of ecosystem complexity through characterization of space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when the particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.

  20. Simple and complex models for studying muscle function in walking.

    PubMed

    Pandy, Marcus G

    2003-09-29

    While simple models can be helpful in identifying basic features of muscle function, more complex models are needed to discern the functional roles of specific muscles in movement. In this paper, two very different models of walking, one simple and one complex, are used to study how muscle forces, gravitational forces and centrifugal forces (i.e. forces arising from motion of the joints) combine to produce the pattern of force exerted on the ground. Both the simple model and the complex one predict that muscles contribute significantly to the ground force pattern generated in walking; indeed, both models show that muscle action is responsible for the appearance of the two peaks in the vertical force. The simple model, an inverted double pendulum, suggests further that the first and second peaks are due to net extensor muscle moments exerted about the knee and ankle, respectively. Analyses based on a much more complex, muscle-actuated simulation of walking are in general agreement with these results; however, the more detailed model also reveals that both the hip extensor and hip abductor muscles contribute significantly to vertical motion of the centre of mass, and therefore to the appearance of the first peak in the vertical ground force, in early single-leg stance. This discrepancy in the model predictions is most probably explained by the difference in model complexity. First, movements of the upper body in the sagittal plane are not represented properly in the double-pendulum model, which may explain the anomalous result obtained for the contribution of a hip-extensor torque to the vertical ground force. Second, the double-pendulum model incorporates only three of the six major elements of walking, whereas the complex model is fully 3D and incorporates all six gait determinants. In particular, pelvic list occurs primarily in the frontal plane, so there is the potential for this mechanism to contribute significantly to the vertical ground force, especially

  1. Simple and complex models for studying muscle function in walking.

    PubMed Central

    Pandy, Marcus G

    2003-01-01

    While simple models can be helpful in identifying basic features of muscle function, more complex models are needed to discern the functional roles of specific muscles in movement. In this paper, two very different models of walking, one simple and one complex, are used to study how muscle forces, gravitational forces and centrifugal forces (i.e. forces arising from motion of the joints) combine to produce the pattern of force exerted on the ground. Both the simple model and the complex one predict that muscles contribute significantly to the ground force pattern generated in walking; indeed, both models show that muscle action is responsible for the appearance of the two peaks in the vertical force. The simple model, an inverted double pendulum, suggests further that the first and second peaks are due to net extensor muscle moments exerted about the knee and ankle, respectively. Analyses based on a much more complex, muscle-actuated simulation of walking are in general agreement with these results; however, the more detailed model also reveals that both the hip extensor and hip abductor muscles contribute significantly to vertical motion of the centre of mass, and therefore to the appearance of the first peak in the vertical ground force, in early single-leg stance. This discrepancy in the model predictions is most probably explained by the difference in model complexity. First, movements of the upper body in the sagittal plane are not represented properly in the double-pendulum model, which may explain the anomalous result obtained for the contribution of a hip-extensor torque to the vertical ground force. Second, the double-pendulum model incorporates only three of the six major elements of walking, whereas the complex model is fully 3D and incorporates all six gait determinants. In particular, pelvic list occurs primarily in the frontal plane, so there is the potential for this mechanism to contribute significantly to the vertical ground force, especially

  2. Electrostatic Model Applied to ISS Charged Water Droplet Experiment

    NASA Technical Reports Server (NTRS)

    Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

    2015-01-01

    The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.

  3. Extensive video-game experience alters cortical networks for complex visuomotor transformations.

    PubMed

    Granek, Joshua A; Gorbet, Diana J; Sergio, Lauren E

    2010-10-01

    Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. PMID:20060111

  4. Extensive video-game experience alters cortical networks for complex visuomotor transformations.

    PubMed

    Granek, Joshua A; Gorbet, Diana J; Sergio, Lauren E

    2010-10-01

    Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play.

  5. Force-dependent persistence length of DNA-intercalator complexes measured in single molecule stretching experiments.

    PubMed

    Bazoni, R F; Lima, C H M; Ramos, E B; Rocha, M S

    2015-06-01

    By using optical tweezers with an adjustable trap stiffness, we have performed systematic single-molecule stretching experiments with two types of DNA-intercalator complexes, in order to investigate the effects of the maximum applied forces on the mechanical response of such complexes. We have explicitly shown that even in the low-force entropic regime the persistence length of the DNA-intercalator complexes is strongly force-dependent, although such behavior is not exhibited by bare DNA molecules. We discuss the possible physicochemical effects that can lead to such results. In particular, we propose that the stretching force can promote partial denaturation of the highly distorted double helix of the DNA-intercalator complexes, which interferes strongly with the measured values of the persistence length. PMID:25913936

  6. (Relatively) Simple Models of Flow in Complex Terrain

    NASA Astrophysics Data System (ADS)

    Taylor, Peter; Weng, Wensong; Salmon, Jim

    2013-04-01

    The term, "complex terrain" includes both topography and variations in surface roughness and thermal properties. The scales that are affected can differ and there are some advantages to modeling them separately. In studies of flow in complex terrain we have developed 2 D and 3 D models of atmospheric PBL boundary layer flow over roughness changes, appropriate for longer fetches than most existing models. These "internal boundary layers" are especially important for understanding and predicting wind speed variations with distance from shorelines, an important factor for wind farms around, and potentially in, the Great Lakes. The models can also form a base for studying the wakes behind woodlots and wind turbines. Some sample calculations of wind speed evolution over water and the reduced wind speeds behind an isolated woodlot, represented simply in terms of an increase in surface roughness, will be presented. Note that these models can also include thermal effects and non-neutral stratification. We can use the model to deal with 3-D roughness variations and will describe applications to both on-shore and off-shore situations around the Great Lakes. In particular we will show typical results for hub height winds and indicate the length of over-water fetch needed to get the full benefit of siting turbines over water. The linear Mixed Spectral Finite-Difference (MSFD) and non-linear (NLMSFD) models for surface boundary-layer flow over complex terrain have been extended to planetary boundary-layer flow over topography This allows for their use for larger scale regions and increased heights. The models have been applied to successfully simulate the Askervein hill experimental case and we will show examples of applications to more complex terrain, typical of some Canadian wind farms. Output from the model can be used as an alternative to MS-Micro, WAsP or other CFD calculations of topographic impacts for input to wind farm design software.

  7. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of values of the parameters compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, the ranges of the values of the parameters are reduced (and can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the model with indirect entry and exit to the long-pause state.
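    As a sketch of the two-independent-pause picture (with invented rates, not values fitted to the Phi29 data), the fragment below alternates an active state with exponentially distributed short and long pauses, the scheme under which the abstract says all parameters are fixed by the observations.

```python
import numpy as np

rng = np.random.default_rng(1)
# Rates (1/s) for a two-independent-pause scheme; values are illustrative.
k_in  = {"short": 0.10, "long": 0.02}   # active -> pause entry rates
k_out = {"short": 0.50, "long": 0.05}   # pause -> active exit rates

def simulate(T=10_000.0):
    """Gillespie-style alternation between activity and the two pause types."""
    t, pauses = 0.0, []
    while t < T:
        # Waiting time in the active state; which pause wins is rate-weighted.
        total = sum(k_in.values())
        t += rng.exponential(1.0 / total)
        kind = rng.choice(list(k_in), p=[k_in[k] / total for k in k_in])
        dur = rng.exponential(1.0 / k_out[kind])  # exponential pause duration
        pauses.append((kind, dur))
        t += dur
    return pauses

p = simulate()
print(np.mean([d for k, d in p if k == "long"]))  # ~1/k_out["long"] = 20 s
```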

  8. Assessing Complexity in Learning Outcomes--A Comparison between the SOLO Taxonomy and the Model of Hierarchical Complexity

    ERIC Educational Resources Information Center

    Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka

    2016-01-01

    An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…

  9. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws for electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks when the latter are considered as single hydraulic systems. On the basis of multivariant calculations the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on the example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
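    In the spirit of the Kirchhoff analogy, a minimal linearized sketch (losses proportional to flow, unlike real pipe hydraulics; the network and conductances are invented): assemble the graph Laplacian, fix a reference pressure, and solve the nodal balance for pressures and edge flows.

```python
import numpy as np

# Nodes 0..3; edges carry hydraulic conductances. Node 0 is the fixed-
# pressure reference; node 3 has a known demand.
edges = [(0, 1, 2.0), (1, 2, 1.0), (1, 3, 1.5), (2, 3, 1.0)]
N = 4
L = np.zeros((N, N))
for i, j, g in edges:                     # graph Laplacian from conductances
    L[i, i] += g; L[j, j] += g
    L[i, j] -= g; L[j, i] -= g

demand = np.array([0.0, 0.0, 0.0, -1.0])  # 1 unit withdrawn at node 3
# Fix p[0] = 0 (reference) and solve Kirchhoff's current law at the rest.
p = np.zeros(N)
p[1:] = np.linalg.solve(L[1:, 1:], demand[1:])
flows = [(i, j, g * (p[i] - p[j])) for i, j, g in edges]
print(flows)                              # edge flow rates
```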

  10. Modeling the propagation of mobile malware on complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

    2016-08-01

    In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follow the power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that governs the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and the global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are conducive to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
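    The paper's threshold is specific to its model, but the standard heterogeneous mean-field SIS result, lambda_c = <k>/<k^2>, shows the same qualitative dependence on the power-law exponent and is easy to sketch:

```python
import numpy as np

def hmf_threshold(gamma, kmin=2, kmax=1000):
    """Heterogeneous mean-field SIS threshold lambda_c = <k> / <k^2> for a
    power-law degree distribution P(k) ~ k^-gamma (a textbook result; the
    paper's own model and threshold may differ in detail)."""
    k = np.arange(kmin, kmax + 1, dtype=float)
    pk = k ** -gamma
    pk /= pk.sum()
    return (pk * k).sum() / (pk * k**2).sum()

for g in (2.2, 2.5, 3.0, 3.5):
    print(g, hmf_threshold(g))   # the threshold shrinks as the exponent drops
```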

  11. Stability of complex Langevin dynamics in effective models

    NASA Astrophysics Data System (ADS)

    Aarts, Gert; James, Frank A.; Pawlowski, Jan M.; Seiler, Erhard; Sexty, Dénes; Stamatescu, Ion-Olimpiu

    2013-03-01

    The sign problem at nonzero chemical potential prohibits the use of importance sampling in lattice simulations. Since complex Langevin dynamics does not rely on importance sampling, it provides a potential solution. Recently it was shown that complex Langevin dynamics fails in the disordered phase in the case of the three-dimensional XY model, while it appears to work in the entire phase diagram in the case of the three-dimensional SU(3) spin model. Here we analyse this difference and argue that it is due to the presence of the nontrivial Haar measure in the SU(3) case, which has a stabilizing effect on the complexified dynamics. The freedom to modify and stabilize the complex Langevin process is discussed in some detail.

  12. Catastrophe, Chaos, and Complexity Models and Psychosocial Adjustment to Disability.

    ERIC Educational Resources Information Center

    Parker, Randall M.; Schaller, James; Hansmann, Sandra

    2003-01-01

    Rehabilitation professionals may unknowingly rely on stereotypes and specious beliefs when dealing with people with disabilities, despite the formulation of theories that suggest new models of the adjustment process. Suggests that Catastrophe, Chaos, and Complexity Theories hold considerable promise in this regard. This article reviews these…

  13. 40 CFR 80.45 - Complex emissions model.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    § 80.45 Complex emissions model (REGULATION OF FUELS AND FUEL ADDITIVES, Reformulated Gasoline). (a) Definition of terms. For the purposes of this section, the following definitions shall apply: Target fuel = ...; ... in terms of weight percent oxygen; ETH = Ethanol content of the target fuel in terms of weight percent ...

  14. Fischer and Schrock Carbene Complexes: A Molecular Modeling Exercise

    ERIC Educational Resources Information Center

    Montgomery, Craig D.

    2015-01-01

    An exercise in molecular modeling that demonstrates the distinctive features of Fischer and Schrock carbene complexes is presented. Semi-empirical calculations (PM3) demonstrate the singlet ground electronic state, restricted rotation about the C-Y bond, the positive charge on the carbon atom, and hence, the electrophilic nature of the Fischer…

  15. The Complexity of Developmental Predictions from Dual Process Models

    ERIC Educational Resources Information Center

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  16. Conceptual Complexity, Teaching Style and Models of Teaching.

    ERIC Educational Resources Information Center

    Joyce, Bruce; Weil, Marsha

    The focus of this paper is on the relative roles of personality and training in enabling teachers to carry out the kinds of complex learning models which are envisioned by curriculum reformers in the social sciences. The paper surveys some of the major research done in this area and concludes that: 1) Most teachers do not manifest the complex…

  17. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  18. Model predicting impact of complexation with cyclodextrins on oral absorption.

    PubMed

    Gamsiz, Ece D; Thombre, Avinash G; Ahmed, Imran; Carrier, Rebecca L

    2013-09-01

    Significant effort and resources are dedicated to enabling oral delivery of low-solubility drugs using solubilization technologies. Cyclodextrins (CD) are cyclic oligosaccharides that form inclusion complexes with many drugs and are often used as solubilizing agents. Prior to developing a drug delivery device with CD, it is not clear what level of absorption enhancement might be achieved; modeling can provide useful guidance in formulation and minimize resource-intensive iterative formulation development. A model was developed to enable quantitative, dynamic prediction of the influence of CD on oral absorption of a low-solubility drug administered as a pre-formed complex. The predominant effects of CD considered were enhancement of dissolution and slowing of precipitation kinetics, as well as binding of free drug in solution. Simulation results with different parameter values reflective of typical drug and CD properties indicate a potential positive (up to a five-fold increase in drug absorption), negative (up to a 50% decrease in absorption), or lack of effect of CD. Comparison of model predictions with in vitro and in vivo experimental results indicates that a systems-based dynamic model incorporating CD complexation and key process kinetics may enable quantitative prediction of the impact of CD, delivered as a pre-formed complex, on drug bioavailability.
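    A minimal kinetic sketch of the mechanism described (dissolution, reversible 1:1 complexation, absorption of free drug only); precipitation is omitted and all rate constants and doses are invented, so this is an illustration of the modeling style rather than the authors' model.

```python
import numpy as np
from scipy.integrate import solve_ivp

kd   = 0.05   # dissolution rate constant (1/min)
kon  = 1.0    # complexation on-rate (1/(mM*min))
koff = 0.5    # complexation off-rate (1/min)
ka   = 0.02   # absorption rate of free drug (1/min)
Cs   = 0.01   # drug solubility (mM)

def rhs(t, y):
    solid, free, cd, cplx, absorbed = y
    dissolution = kd * solid * max(0.0, 1.0 - free / Cs)  # slows near saturation
    bind = kon * free * cd - koff * cplx                  # 1:1 binding kinetics
    return [-dissolution,
            dissolution - bind - ka * free,   # only free drug is absorbed
            -bind,
            bind,
            ka * free]

y0 = [1.0, 0.0, 2.0, 0.0, 0.0]            # doses in mM-equivalents
sol = solve_ivp(rhs, (0.0, 480.0), y0, max_step=1.0)
print(sol.y[4, -1])                        # amount absorbed after 8 h
```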

  19. Surfactant manganese complexes as models for the oxidation of water

    SciTech Connect

    Wohlgemuth, R.; Otvos, J.W.; Calvin, M.

    1984-02-01

    Surfactant manganese complexes have been studied spectroscopically and electrochemically as models for the catalysts involved in the photooxidation of water to produce oxygen. Evidence has been obtained for the participation of the suggested redox cycle Mn(II) to Mn(III) to Mn(IV) and back to Mn(II) with the evolution of oxygen.

  20. Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented. This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  1. Surface complexation modeling of americium sorption onto volcanic tuff.

    PubMed

    Ding, M; Kelkar, S; Meijer, A

    2014-10-01

    Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. It was assumed in applying the model to volcanic rocks from Yucca Mountain that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/kg) values were calculated as a function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants derived in this study allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways. PMID:24963803

  2. Surface complexation modeling of americium sorption onto volcanic tuff.

    PubMed

    Ding, M; Kelkar, S; Meijer, A

    2014-10-01

    Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. It was assumed in applying the model to volcanic rocks from Yucca Mountain that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/kg) values were calculated as a function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants derived in this study allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways.

  3. A random interacting network model for complex networks.

    PubMed

    Goswami, Bedartha; Shekatkar, Snehal M; Rheinwalt, Aljoscha; Ambika, G; Kurths, Jürgen

    2015-01-01

    We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms, degree-based preferential node selection and degree-assortative link placement, are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems. PMID:26657032
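    A hedged sketch of the two-step recipe: the fitness function and linkage score below (degree-preferential selection, degree-rank-assortative linkage) are plausible choices consistent with the abstract's findings, not the paper's exact definitions.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)
A = nx.barabasi_albert_graph(100, 2, seed=1)   # two "intra" networks
B = nx.barabasi_albert_graph(100, 2, seed=2)

def pick(G):
    """Degree-preferential node selection (one plausible fitness choice)."""
    deg = np.array([d for _, d in G.degree()])
    return rng.choice(G.number_of_nodes(), p=deg / deg.sum())

def link_prob(G, u, H, v):
    """Assortative linkage: high when the two nodes have similar relative
    importance (here, normalized degree) in their own networks."""
    ru = G.degree(u) / max(d for _, d in G.degree())
    rv = H.degree(v) / max(d for _, d in H.degree())
    return 1.0 - abs(ru - rv)

inter = set()
for _ in range(200):
    u, v = pick(A), pick(B)
    if rng.random() < link_prob(A, u, B, v):
        inter.add((u, v))
print(len(inter), "inter-network links")
```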

  4. A random interacting network model for complex networks

    NASA Astrophysics Data System (ADS)

    Goswami, Bedartha; Shekatkar, Snehal M.; Rheinwalt, Aljoscha; Ambika, G.; Kurths, Jürgen

    2015-12-01

    We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms, degree-based preferential node selection and degree-assortative link placement, are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems.

  5. Bayesian Case-deletion Model Complexity and Information Criterion

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Qingxia

    2015-01-01

    We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example. PMID:26180578

  6. A perspective on modeling and simulation of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Åström, K. J.

    2011-09-01

    There has been an amazing development of modeling and simulation since its beginning in the 1920s, when the technology was available only to a handful of university groups with access to a mechanical differential analyzer. Today, tools for modeling and simulation are available to every student and engineer. This paper gives a perspective on the development, with particular emphasis on technology and paradigm shifts. Modeling is increasingly important for the design and operation of complex natural and man-made systems. Because of the increased use of model-based control such as Kalman filters and model predictive control, models are also appearing as components of feedback systems. Modeling and simulation are multidisciplinary: they are used in a wide variety of fields, and their development has been strongly influenced by mathematics, numerics, computer science, and computer technology.

  7. Modeling of E-164X Experiment

    SciTech Connect

    Deng, S.; Muggli, P.; Katsouleas, T.; Oz, E.; Barnes, C.D.; Decker, F.J.; Hogan, M.J.; Iverson, R.; Krejcik, P.; O'Connell, C.; Clayton, C.E.; Huang, C.; Johnson, D.K.; Joshi, C.; Lu, W.; Marsh, K.A.; Mori, W.B.; Tsung, F.; Zhou, M.M.; Fonseca, R.A.

    2004-12-07

    In current plasma-based accelerator experiments, very short bunches (100-150 μm for E164 and 10-20 μm for the E164X experiment at the Stanford Linear Accelerator Center (SLAC)) are used to drive plasma wakes and achieve high accelerating gradients, on the order of 10-100 GV/m. The self-fields of such intense bunches can tunnel ionize neutral gases and create the plasma. This may completely change the physics of plasma wakes. A 3-D object-oriented fully parallel PIC code, OSIRIS, is used to simulate various gas types, beam parameters, etc. to support the design of the experiments. The simulation results for real experiment parameters are presented.

  8. Modeling of E-164X Experiment

    SciTech Connect

    Deng, S.; Muggli, P.; Barnes, C.D.; Clayton, C.E.; Decker, F.J.; Fonseca, R.A.; Huang, C.; Hogan, M.J.; Iverson, R.; Johnson, D.K.; Joshi, C.; Katsouleas, T.; Krejcik, P.; Lu, W.; Marsh, K.A.; Mori, W.B.; O'Connell, C.; Oz, E.; Tsung, F.; Zhou, M.M.; /Southern California U. /UCLA /SLAC /Lisbon, IST

    2005-06-28

    In current plasma-based accelerator experiments, very short bunches (100-150 μm for E164 [1] and 10-20 μm for the E164X [2] experiment at the Stanford Linear Accelerator Center (SLAC)) are used to drive plasma wakes and achieve high accelerating gradients, on the order of 10-100 GV/m. The self-fields of such intense bunches can tunnel ionize neutral gases and create the plasma [3,4]. This may completely change the physics of plasma wakes. A 3-D object-oriented fully parallel PIC code, OSIRIS [5], is used to simulate various gas types, beam parameters, etc. to support the design of the experiments. The simulation results for real experiment parameters are presented.

  9. Modelling complex organic molecules in dense regions: Eley-Rideal and complex induced reaction

    NASA Astrophysics Data System (ADS)

    Ruaud, M.; Loison, J. C.; Hickson, K. M.; Gratier, P.; Hersant, F.; Wakelam, V.

    2015-03-01

    Recent observations have revealed the existence of complex organic molecules (COMs) in cold dense cores and pre-stellar cores. The presence of these molecules in such cold conditions is not well understood and remains a matter of debate since the previously proposed 'warm-up' scenario cannot explain these observations. In this paper, we study the effect of Eley-Rideal and complex induced reaction mechanisms of gas-phase carbon atoms with the main ice components of dust grains on the formation of COMs in cold and dense regions. Based on recent experiments, we use a low value for the chemical desorption efficiency (which was previously invoked to explain the observed COM abundances). We show that our introduced mechanisms are efficient enough to produce a large amount of COMs in the gas phase at temperatures as low as 10 K.

  10. Modelling an IHE experiment with a suite of DSD models

    NASA Astrophysics Data System (ADS)

    Hodgson, A. N.

    2014-05-01

    At the 2011 APS conference, Terrones, Burkett and Morris published an experiment primarily designed to allow examination of the propagation of a detonation front in a 3-dimensional charge of PBX9502 insensitive high explosive (IHE). The charge is confined by a cylindrical steel shell, has an elliptical tin liner, and is line-initiated along its length. The detonation wave must propagate around the inner hollow region and converge on the opposite side. The Detonation Shock Dynamics (DSD) model allows for the calculation of detonation propagation in a region of explosive using a selection of material input parameters, amongst which is the D(K) relation that governs how the local detonation velocity varies as a function of wave curvature. In this paper, experimental data are compared to calculations using the 2D DSD and newly-developed 3D DSD codes at AWE with a variety of D(K) relations.

  11. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    -dominated failure mode experienced in the tests. High-pressure burning rates are needed for more detailed post-ignition studies. Sub-models for chemistry, mechanical response and burn dynamics need to be validated against data from less complex experiments. The sub-models can then be used in integrated analysis for comparison with experimental data taken during integrated tests.

  12. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  13. Emergence of complexity in evolving niche-model food webs.

    PubMed

    Guill, Christian; Drossel, Barbara

    2008-03-01

    We have analysed mechanisms that promote the emergence of complex structures in evolving model food webs. The niche model is used to determine predator-prey relationships. Complexity is measured by species richness as well as trophic level structure and link density. Adaptive dynamics that allow predators to concentrate on the prey species they are best adapted to lead to a strong increase in species number but have only a small effect on the number and relative occupancy of trophic levels. The density of active links also remains small but a high number of potential links allows the network to adjust to changes in the species composition (emergence and extinction of species). Incorporating effects of body size on individual metabolism leads to a more complex trophic level structure: both the maximum and the average trophic level increase. So does the density of active links. Taking body size effects into consideration does not have a measurable influence on species richness. If species are allowed to adjust their foraging behaviour, the complexity of the evolving networks can also be influenced by the size of the external resources. The larger the resources, the larger and more complex is the food web it can sustain. Body size effects and increasing resources do not change size and the simple structure of the evolving networks if adaptive foraging is prohibited. This leads to the conclusion that in the framework of the niche model adaptive foraging is a necessary but not sufficient condition for the emergence of complex networks. It is found that despite the stabilising effect of foraging adaptation the system displays elements of self-organised critical behaviour.
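    The niche model used to wire these webs is compact enough to sketch on its own (following the standard Williams-Martinez construction; the paper's evolutionary and adaptive-foraging layers are not reproduced here):

```python
import numpy as np

def niche_model(S=30, C=0.15, seed=0):
    """Niche model: species i eats every species whose niche value falls
    inside i's feeding interval."""
    rng = np.random.default_rng(seed)
    n = np.sort(rng.random(S))                    # niche values in [0, 1]
    beta = 1.0 / (2.0 * C) - 1.0                  # so that E[x] = 2C
    r = n * rng.beta(1.0, beta, S)                # feeding-range widths
    c = rng.uniform(r / 2.0, n)                   # feeding-range centres
    adj = (n[None, :] > c[:, None] - r[:, None] / 2.0) & \
          (n[None, :] < c[:, None] + r[:, None] / 2.0)
    return adj                                    # adj[i, j]: i eats j

A = niche_model()
print(A.sum() / A.size)                           # realized connectance ~ C
```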

  14. Boolean modeling of collective effects in complex networks

    PubMed Central

    Norrell, Johannes; Socolar, Joshua E. S.

    2009-01-01

    Complex systems are often modeled as Boolean networks in attempts to capture their logical structure and reveal its dynamical consequences. Approximating the dynamics of continuous variables by discrete values and Boolean logic gates may, however, introduce dynamical possibilities that are not accessible to the original system. We show that large random networks of variables coupled through continuous transfer functions often fail to exhibit the complex dynamics of corresponding Boolean models in the disordered (chaotic) regime, even when each individual function appears to be a good candidate for Boolean idealization. A suitably modified Boolean theory explains the behavior of systems in which information does not propagate faithfully down certain chains of nodes. Model networks incorporating calculated or directly measured transfer functions reported in the literature on transcriptional regulation of genes are described by the modified theory. PMID:19658525
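    For context, here is a bare-bones random Boolean network of the kind being idealized, with K = 2 inputs per node (the classic order/chaos boundary); the paper's modified theory concerns when such discrete dynamics faithfully mirror the underlying continuous transfer functions.

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 20, 2                                  # N nodes, K inputs each
inputs = rng.integers(0, N, size=(N, K))      # wiring: who feeds whom
tables = rng.integers(0, 2, size=(N, 2**K))   # random Boolean truth tables

def step(state):
    """Synchronous update: each node looks up its truth-table row."""
    idx = np.zeros(N, dtype=int)
    for k in range(K):
        idx = (idx << 1) | state[inputs[:, k]]
    return tables[np.arange(N), idx]

state = rng.integers(0, 2, N)
for _ in range(50):
    state = step(state)
print(state)
```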

  15. Boolean modeling of collective effects in complex networks.

    PubMed

    Norrell, Johannes; Socolar, Joshua E S

    2009-06-01

    Complex systems are often modeled as Boolean networks in attempts to capture their logical structure and reveal its dynamical consequences. Approximating the dynamics of continuous variables by discrete values and Boolean logic gates may, however, introduce dynamical possibilities that are not accessible to the original system. We show that large random networks of variables coupled through continuous transfer functions often fail to exhibit the complex dynamics of corresponding Boolean models in the disordered (chaotic) regime, even when each individual function appears to be a good candidate for Boolean idealization. A suitably modified Boolean theory explains the behavior of systems in which information does not propagate faithfully down certain chains of nodes. Model networks incorporating calculated or directly measured transfer functions reported in the literature on transcriptional regulation of genes are described by the modified theory. PMID:19658525

  16. Entropy, complexity, and Markov diagrams for random walk cancer models.

    PubMed

    Newton, Paul K; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-19

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
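    A toy version of the pipeline (the steady state of an invented four-site transition matrix, then the Shannon entropy of that distribution; the paper's matrices cover many anatomical sites inferred from autopsy data):

```python
import numpy as np

# Invented 4-site metastatic transition matrix (rows sum to 1).
P = np.array([[0.50, 0.30, 0.15, 0.05],
              [0.10, 0.60, 0.20, 0.10],
              [0.05, 0.25, 0.60, 0.10],
              [0.05, 0.15, 0.20, 0.60]])

# Steady state: left eigenvector of P for eigenvalue 1, normalized.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

entropy = -np.sum(pi * np.log2(pi))   # Shannon entropy of the distribution
print(pi, entropy)                    # higher entropy: tumours more spread out
```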

  17. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  18. Complex humanitarian emergencies: a review of epidemiological and response models.

    PubMed

    Burkle, F M

    2006-01-01

    Complex emergencies (CEs) have been the most common human-generated disaster of the past two decades. These internal conflicts and associated acts of genocide have been poorly understood and poorly managed. This article provides an epidemiological background and understanding of CEs in developing countries, developed countries, and chronic or smoldering countries, and explains in detail the prevailing models of response seen by the international community. Even though CEs are declining in number, they have become more complex and dangerous. The UN Charter reform is expected to address internal conflicts and genocide but may not provide a more effective and efficient means to respond.

  19. Modeling of Carbohydrate Binding Modules Complexed to Cellulose

    SciTech Connect

    Nimlos, M. R.; Beckham, G. T.; Bu, L.; Himmel, M. E.; Crowley, M. F.; Bomble, Y. J.

    2012-01-01

    Modeling results are presented for the interaction of two carbohydrate binding modules (CBMs) with cellulose. The family 1 CBM from the Trichoderma reesei Cel7A cellulase was modeled using molecular dynamics to confirm that this protein selectively binds to the hydrophobic (100) surface of cellulose fibrils and to determine the energetics and mechanisms for locating this surface. Binding of the family 4 CBM from the CbhA complex of Clostridium thermocellum was also modeled. There is a cleft in this protein which may accommodate a cellulose chain that is detached from crystalline cellulose. This possibility is explored using molecular dynamics.

  20. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    SciTech Connect

    Miller, Gregory H.; Forest, Gregory

    2014-05-01

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.

  1. Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python

    NASA Astrophysics Data System (ADS)

    Hughes, J. D.; White, J.

    2013-12-01

    For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from use of a model with fewer simplifications of the governing equations because it can be time consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially-distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
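    The abstract names the ingredients (two parameters, seasonal scaling, a lag, weekly rainfall, BFGS via SciPy) without the exact equations; the following is one hypothetical realization of such a conceptual benchmark-discharge model and its calibration, on synthetic data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
weeks = np.arange(260.0)                       # five years of weekly steps
rain = rng.gamma(2.0, 15.0, weeks.size)        # synthetic rainfall forcing

def benchmark_q(params):
    """Two-parameter benchmark: seasonally scaled, lagged rainfall.
    `scale` sets the seasonal amplitude, `lag` the response delay (weeks).
    One plausible reading of the abstract, not the authors' equations."""
    scale, lag = params
    season = 1.0 + scale * np.sin(2.0 * np.pi * weeks / 52.0)
    lagged = np.interp(weeks - lag, weeks, rain)   # fractional-lag shift
    return season * lagged

obs = benchmark_q([0.4, 3.0]) + rng.normal(0.0, 2.0, weeks.size)

def sse(params):                               # least-squares objective
    return np.sum((benchmark_q(params) - obs) ** 2)

fit = minimize(sse, x0=[0.1, 1.0], method="BFGS")  # BFGS via SciPy, as cited
print(fit.x)                                   # near the true (0.4, 3.0)
```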

  2. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

    Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, but only make testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
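    None of the three NASA designs is specified in the abstract; as the simplest possible illustration of a model-reference adaptive scheme, here is a textbook scalar example with a Lyapunov-stable gain-update law (all numbers illustrative, far removed from a flight controller).

```python
# Plant:  xdot  = a*x  + b*u,   reference model:  xmdot = am*xm + bm*r.
# Control u = theta*r with adaptive gain theta; for a = am the matched
# gain is theta* = bm/b, and the update below is Lyapunov-stable.
dt, gamma = 0.01, 2.0
a, b = -2.0, 1.0            # plant parameters (b unknown to the controller)
am, bm = -2.0, 2.0          # desired reference dynamics
x = xm = theta = 0.0
for k in range(40_000):
    r = 1.0 if (k * dt) % 20.0 < 10.0 else -1.0   # square-wave command
    u = theta * r                                  # adaptive feedforward gain
    e = x - xm                                     # tracking error
    theta -= gamma * e * r * dt                    # gradient-type update law
    x  += (a * x + b * u) * dt
    xm += (am * xm + bm * r) * dt
print(theta)    # drifts toward bm/b = 2.0
```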

  3. Bridging Mechanistic and Phenomenological Models of Complex Biological Systems

    PubMed Central

    Transtrum, Mark K.; Qiu, Peng

    2016-01-01

    The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for adaptation behavior exhibited by the EGFR pathway. From a 48-parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system, which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics, in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior. PMID:27187545

  4. Assessment model for perceived visual complexity of automotive instrument cluster.

    PubMed

    Yoon, Sol Hee; Lim, Jihyoun; Ji, Yong Gu

    2015-01-01

    This research proposes an assessment model for quantifying the perceived visual complexity (PVC) of an in-vehicle instrument cluster. An initial study was conducted to investigate the possibility of evaluating the PVC of an in-vehicle instrument cluster by estimating and analyzing the complexity of its individual components. However, this approach was only partially successful, because it did not take into account the combination of the different components with random levels of complexity to form one visual display. Therefore, a second study was conducted focusing on the effect of combining the different components. The results from the overall research enabled us to suggest a basis for quantifying the PVC of an in-vehicle instrument cluster based both on the PVCs of its components and on the integration effect.

  5. Complexity and robustness in hypernetwork models of metabolism.

    PubMed

    Pearcy, Nicole; Chuzhanova, Nadia; Crofts, Jonathan J

    2016-10-01

    Metabolic reaction data is commonly modelled using a complex network approach, whereby nodes represent the chemical species present within the organism of interest, and connections are formed between those nodes participating in the same chemical reaction. Unfortunately, such an approach provides an inadequate description of the metabolic process in general, as a typical chemical reaction will involve more than two nodes, thus risking oversimplification of the system of interest in a potentially significant way. In this paper, we employ a complex hypernetwork formalism to investigate the robustness of bacterial metabolic hypernetworks by extending the concept of a percolation process to hypernetworks. Importantly, this provides a novel method for determining the robustness of these systems and thus for quantifying their resilience to random attacks/errors. Moreover, we performed a site percolation analysis on a large cohort of bacterial metabolic networks and found that hypernetworks that evolved in more variable environments displayed increased levels of robustness and topological complexity. PMID:27354314
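
    As a rough illustration of site percolation on a hypernetwork (not the authors' code), each reaction can be treated as a hyperedge via a bipartite expansion, metabolite nodes deleted at random, and the surviving largest component tracked. The toy reactions below are invented.

        import random
        import networkx as nx

        # Toy metabolic hypernetwork: each reaction (hyperedge) joins >2 metabolites
        reactions = [("A", "B", "C"), ("C", "D"), ("D", "E", "F"), ("B", "F", "G")]
        G = nx.Graph()
        for i, rxn in enumerate(reactions):
            for met in rxn:
                G.add_edge(("rxn", i), ("met", met))  # bipartite expansion

        def surviving_fraction(keep_prob):
            """Site percolation: remove metabolites at random and return the share
            of metabolites left in the largest connected component."""
            H = G.copy()
            mets = [n for n in H if n[0] == "met"]
            for m in mets:
                if random.random() > keep_prob:
                    H.remove_node(m)
            if H.number_of_edges() == 0:
                return 0.0
            big = max(nx.connected_components(H), key=len)
            return sum(1 for n in big if n[0] == "met") / len(mets)

        random.seed(0)
        for q in (0.9, 0.7, 0.5, 0.3):
            print(q, sum(surviving_fraction(q) for _ in range(500)) / 500)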

  6. Nuclear reaction modeling, verification experiments, and applications

    SciTech Connect

    Dietrich, F.S.

    1995-10-01

    This presentation summarized the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scatter Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of the spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  7. Ruthenium Vinylidene and Acetylide Complexes. An Advanced Undergraduate Multi-technique Inorganic/Organometallic Chemistry Experiment

    NASA Astrophysics Data System (ADS)

    McDonagh, Andrew M.; Deeble, Geoffrey J.; Hurst, Steph; Cifuentes, Marie P.; Humphrey, Mark G.

    2001-02-01

    This experiment describes the isolation and characterization of complexes containing examples of two important monohapto ligands, namely vinylidene (C=CHR) and alkynyl (C≡CR) ligands. The former is a tautomer of acetylene that has minimal (10⁻¹⁰ s) existence as an uncomplexed molecule, providing an interesting example of the stabilization of reactive organic species at transition metals--an important motif in organometallic chemistry. The latter ligand affords complexes that have attracted a great deal of interest recently for their potentially useful electronic or optical properties, illustrating a major focus of contemporary organometallic chemistry, the search for useful materials. The particular strength of this experiment is in demonstrating the utility of a range of spectroscopic and analytical techniques in inorganic complex identification. The students observe unusual chemical shifts in the 13C NMR (vinylidene metal-bound carbon), meet heteronuclear NMR (31P), assign intense metal-to-ligand charge transfer (MLCT) bands in the UV-visible spectra, observe the utility of mass spectra in characterizing complexes of poly-isotopic transition metals, and are introduced to redox potentials (cyclic voltammetry).

  8. Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds

    NASA Astrophysics Data System (ADS)

    Schnaiter, Martin; Järvinen, Emma; Vochezer, Paul; Abdelmonem, Ahmed; Wagner, Robert; Jourdan, Olivier; Mioche, Guillaume; Shcherbakov, Valery N.; Schmitt, Carl G.; Tricoli, Ugo; Ulanowski, Zbigniew; Heymsfield, Andrew J.

    2016-04-01

    This study reports on the origin of small-scale ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the -40 to -60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Small-scale ice crystal complexity was deduced from measurements of spatially resolved single particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high crystal complexity dominates the microphysics of the simulated clouds and the degree of this complexity is dependent on the available water vapor during the crystal growth. Indications were found that the small-scale crystal complexity is influenced by unfrozen H2SO4 / H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the polar nephelometer (PN) probe of Laboratoire de Météorologie Physique (LaMP) and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side and backward scattering directions. It was found that these functions have a rather low sensitivity to the small-scale crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

  9. Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds

    NASA Astrophysics Data System (ADS)

    Schnaiter, M.; Järvinen, E.; Vochezer, P.; Abdelmonem, A.; Wagner, R.; Jourdan, O.; Mioche, G.; Shcherbakov, V. N.; Schmitt, C. G.; Tricoli, U.; Ulanowski, Z.; Heymsfield, A. J.

    2015-11-01

    This study reports on the origin of ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the -40 to -60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Ice crystal complexity was deduced from measurements of spatially resolved single particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high ice crystal complexity dominates the microphysics of the simulated clouds and the degree of this complexity is dependent on the available water vapour during the crystal growth. Indications were found that the crystal complexity is influenced by unfrozen H2SO4/H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the Polar Nephelometer (PN) probe of LaMP and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side- and backward scattering directions, resulting in low asymmetry parameters g around 0.78. It was found that these functions have a rather low sensitivity to the crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

  10. Relating equivalence relations to equivalence relations: A relational framing model of complex human functioning

    PubMed Central

    Barnes, Dermot; Hegarty, Neil; Smeets, Paul M.

    1997-01-01

    The current study aimed to develop a behavior-analytic model of analogical reasoning. In Experiments 1 and 2 subjects (adults and children) were trained and tested for the formation of four, three-member equivalence relations using a delayed matching-to-sample procedure. All subjects (Experiments 1 and 2) were exposed to tests that examined relations between equivalence and non-equivalence relations. For example, on an equivalence-equivalence relation test, the complex sample B1/C1 and the two complex comparisons B3/C3 and B3/C4 were used, and on a nonequivalence-nonequivalence relation test the complex sample B1/C2 was presented with the same two comparisons. All subjects consistently related equivalence relations to equivalence relations and nonequivalence relations to nonequivalence relations (e.g., picked B3/C3 in the presence of B1/C1 and picked B3/C4 in the presence of B1/C2). In Experiment 3, the equivalence responding, the equivalence-equivalence responding, and the nonequivalence-nonequivalence responding were successfully brought under contextual control. Finally, it was shown that the contextual cues could function successfully as comparisons, and the complex samples and comparisons could function successfully as contextual cues and samples, respectively. These data extend the equivalence paradigm and contribute to a behavior-analytic interpretation of analogical reasoning and complex human functioning, in general. PMID:22477120

  11. An Adaptive Complex Network Model for Brain Functional Networks

    PubMed Central

    Gomez Portillo, Ignacio J.; Gleiser, Pablo M.

    2009-01-01

    Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small world topological structure, characterized by highly integrated modules connected sparsely by long range links. Recent studies showed that other topological properties such as the degree distribution and the presence (or absence) of a hierarchical structure are not robust, and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures, we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free, non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure, with a truncated power law degree distribution. PMID:19738902
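
    A minimal caricature of such an adaptive rule (hypothetical parameters, not the authors' implementation) couples a relaxation dynamic on node states with rewiring toward nodes in similar states:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 100
        state = rng.random(N)             # internal state of each active region
        adj = np.zeros((N, N), dtype=bool)
        for i in range(N):                # start from a sparse ring of links
            adj[i, (i + 1) % N] = adj[(i + 1) % N, i] = True

        for _ in range(20000):
            i = rng.integers(N)
            neigh = np.flatnonzero(adj[i])
            if neigh.size == 0:
                continue
            state[i] += 0.1 * (state[neigh].mean() - state[i])  # local dynamics
            j = rng.integers(N)
            if j != i and not adj[i, j] and abs(state[i] - state[j]) < 0.05:
                adj[i, j] = adj[j, i] = True   # attach to a similar-state node...
                k = rng.choice(neigh)
                adj[i, k] = adj[k, i] = False  # ...at the cost of an old link

        deg = adj.sum(axis=1)
        print(deg.min(), deg.mean(), deg.max())  # inspect the emergent topology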

  12. Adapting hydrological model structure to catchment characteristics: A large-sample experiment

    NASA Astrophysics Data System (ADS)

    Addor, Nans; Clark, Martyn P.; Nijssen, Bart

    2016-04-01

    Current hydrological modeling frameworks do not offer a clear way to systematically investigate the relationship between model complexity and model fidelity. The characterization of this relationship has so far relied on comparisons of different modules within the same model or comparisons of entirely different models. This lack of granularity in the differences between the model constructs makes it difficult to pinpoint model features that contribute to good simulations and means that the number of models or modeling hypotheses evaluated is usually small. Here we use flexible modeling frameworks to comprehensively and systematically compare modeling alternatives across the continuum of model complexity. A key goal is to explore which model structures are most adequate for catchments in different hydroclimatic conditions. Starting from conceptual models based on the Framework for Understanding Structural Errors (FUSE), we progressively increase model complexity by replacing conceptual formulations with physically explicit ones (process complexity) and by refining model spatial resolution (spatial complexity) using the newly developed Structure for Unifying Multiple Modeling Alternatives (SUMMA). To investigate how to best reflect catchment characteristics using model structure, we rely on a recently released data set of 671 catchments in the contiguous United States. Instead of running hydrological simulations in every catchment, we use clustering techniques to define catchment clusters, run hydrological simulations for representative members of each cluster, develop hypotheses (e.g., when specific process representations have useful explanatory power) and test these hypotheses using other members of the cluster. We thus refine our catchment clustering based on insights into dominant hydrological processes gained from our modeling approach. With this large-sample experiment, we seek to uncover trade-offs between realism and practicality, and formulate general
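
    The catchment-clustering step can be sketched in a few lines. The attribute table, attribute names, and cluster count below are illustrative, not taken from the study:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # Hypothetical attribute table: [aridity, snow fraction, forest cover]
        catchments = np.array([
            [0.3, 0.6, 0.80],
            [1.8, 0.0, 0.10],
            [0.9, 0.2, 0.50],
            [2.1, 0.0, 0.05],
            [0.4, 0.5, 0.70],
            [1.6, 0.1, 0.15],
        ])
        X = StandardScaler().fit_transform(catchments)  # put attributes on one scale
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(labels)  # simulate a representative member of each cluster, then
                       # test the resulting hypotheses on the other members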

  13. A new macrocyclic terbium(III) complex for use in RNA footprinting experiments

    PubMed Central

    Belousoff, Matthew J.; Ung, Phuc; Forsyth, Craig M.; Tor, Yitzhak; Spiccia, Leone; Graham, Bim

    2009-01-01

    Reaction of terbium triflate with a heptadentate ligand derivative of cyclen, L1 = 2-[7-ethyl-4,10-bis(isopropylcarbamoylmethyl)-1,4,7,10-tetraazacyclododec-1-yl]-N-isopropylacetamide, produced a new synthetic ribonuclease, [Tb(L1)(OTf)(OH2)](OTf)2·MeCN (C1). X-ray crystal structure analysis indicates that the terbium(III) centre in C1 is 9-coordinate, with a capped square-antiprism geometry. Whilst the terbium(III) centre is tightly bound by the L1 ligand, two of the coordination sites are occupied by labile water and triflate ligands. In water, the triflate ligand is likely to be displaced, forming [Tb(L1)(OH2)2]3+, which is able to effectively promote RNA cleavage. This complex greatly accelerates the rate of intramolecular transesterification of an activated model RNA phosphodiester, uridine-3′-p-nitrophenylphosphate (UpNP), with kobs = 5.5(1) × 10⁻² s⁻¹ at 21°C and pH 7.5, corresponding to an apparent second-order rate constant of 277(5) M⁻¹s⁻¹. By contrast, the analogous complex of an octadentate derivative of cyclen featuring only a single labile coordination site, [Tb(L2)(OH2)](OTf)3 (C2), where L2 = 2-[4,7,10-tris(isopropylcarbamoylmethyl)-1,4,7,10-tetraazacyclododec-1-yl]-N-isopropyl-acetamide, is inactive. [Tb(L1)(OH2)2]3+ is also capable of hydrolyzing short transcripts of the HIV-1 transactivation response (TAR) element, HIV-1 dimerization initiation site (DIS) and ribosomal A-site, as well as formyl methionine transfer RNA (tRNAfMet), albeit at a considerably slower rate than UpNP transesterification (kobs = 2.78(8) × 10⁻⁵ s⁻¹ for TAR cleavage at 37°C, pH 6.5, corresponding to an apparent second-order rate constant of 0.56(2) M⁻¹s⁻¹). Cleavage is concentrated at the single-stranded “bulge” regions of these RNA motifs. Exploiting this selectivity, [Tb(L1)(OH2)2]3+ was successfully employed in footprinting experiments, in which binding of the Tat peptide and neomycin B to the bulge region of the TAR stem-loop was confirmed. PMID:19119812
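
    The pseudo-first-order and apparent second-order rate constants quoted above are linked by the catalyst concentration, so the implied complex concentration can be back-calculated as a quick consistency check (my inference; the concentrations are not stated in the record):

        # k2 = kobs / [catalyst]  =>  [catalyst] = kobs / k2
        kobs_UpNP, k2_UpNP = 5.5e-2, 277.0  # s^-1 and M^-1 s^-1
        kobs_TAR, k2_TAR = 2.78e-5, 0.56    # s^-1 and M^-1 s^-1
        print(kobs_UpNP / k2_UpNP)          # ~2.0e-4 M of complex in the UpNP assay
        print(kobs_TAR / k2_TAR)            # ~5.0e-5 M of complex in the TAR assay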

  14. Modeling the Classic Meselson and Stahl Experiment.

    ERIC Educational Resources Information Center

    D'Agostino, JoBeth

    2001-01-01

    Points out the importance of molecular models in biology and chemistry. Presents a laboratory activity on DNA. Uses different colored wax strips to represent "heavy" and "light" DNA, cesium chloride for identification of small density differences, and three different liquids with varying densities to model gradient centrifugation. (YDS)

  15. Experience With Bayesian Image Based Surface Modeling

    NASA Technical Reports Server (NTRS)

    Stutz, John C.

    2005-01-01

    Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.

  16. RHIC injector complex online model status and plans

    SciTech Connect

    Schoefer,V.; Ahrens, L.; Brown, K.; Morris, J.; Nemesure, S.

    2009-05-04

    An online modeling system is being developed for the RHIC injector complex, which consists of the Booster, the AGS and the transfer lines connecting the Booster to the AGS and the AGS to RHIC. Historically the injectors have been operated using static values from design specifications or offline model runs, but tighter beam optics constraints required by polarized proton operations (e.g., accelerating with near-integer tunes) have necessitated a more dynamic system. An online model server for the AGS has been implemented using MAD-X [1] as the model engine, with plans to extend the system to the Booster and the injector transfer lines and to add the option of calculating optics using the Polymorphic Tracking Code (PTC [2]) as the model engine.

  17. Mechanistic modeling confronts the complexity of molecular cell biology.

    PubMed

    Phair, Robert D

    2014-11-01

    Mechanistic modeling has the potential to transform how cell biologists contend with the inescapable complexity of modern biology. I am a physiologist-electrical engineer-systems biologist who has been working at the level of cell biology for the past 24 years. This perspective aims 1) to convey why we build models, 2) to enumerate the major approaches to modeling and their philosophical differences, 3) to address some recurrent concerns raised by experimentalists, and then 4) to imagine a future in which teams of experimentalists and modelers build-and subject to exhaustive experimental tests-models covering the entire spectrum from molecular cell biology to human pathophysiology. There is, in my view, no technical obstacle to this future, but it will require some plasticity in the biological research mind-set.

  18. The semiotics of control and modeling relations in complex systems.

    PubMed

    Joslyn, C

    2001-01-01

    We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively. Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.

  19. Mechanistic modeling confronts the complexity of molecular cell biology

    PubMed Central

    Phair, Robert D.

    2014-01-01

    Mechanistic modeling has the potential to transform how cell biologists contend with the inescapable complexity of modern biology. I am a physiologist–electrical engineer–systems biologist who has been working at the level of cell biology for the past 24 years. This perspective aims 1) to convey why we build models, 2) to enumerate the major approaches to modeling and their philosophical differences, 3) to address some recurrent concerns raised by experimentalists, and then 4) to imagine a future in which teams of experimentalists and modelers build—and subject to exhaustive experimental tests—models covering the entire spectrum from molecular cell biology to human pathophysiology. There is, in my view, no technical obstacle to this future, but it will require some plasticity in the biological research mind-set. PMID:25368428

  20. Cx-02 Program, workshop on modeling complex systems

    USGS Publications Warehouse

    Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.

    2003-01-01

    This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, geology and geophysics work has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking of complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years in the past. Not only do geologists think in the abstract by virtue of the vast time spans involved; most of the evidence is also out of sight. A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.

  1. Computational and analytical modeling of cationic lipid-DNA complexes.

    PubMed

    Farago, Oded; Grønbech-Jensen, Niels

    2007-05-01

    We present a theoretical study of the physical properties of cationic lipid-DNA (CL-DNA) complexes--a promising synthetically based nonviral carrier of DNA for gene therapy. The study is based on a coarse-grained molecular model, which is used in Monte Carlo simulations of mesoscopically large systems over timescales long enough to address experimental reality. In the present work, we focus on the statistical-mechanical behavior of lamellar complexes, which in Monte Carlo simulations self-assemble spontaneously from a disordered random initial state. We measure the DNA interaxial spacing, dDNA, and the local cationic area charge density, σM, for a wide range of values of the parameter c representing the fraction of cationic lipids. For weakly charged complexes (low values of c), we find that dDNA has a linear dependence on 1/c, which is in excellent agreement with x-ray diffraction experimental data. We also observe, in qualitative agreement with previous Poisson-Boltzmann calculations of the system, large fluctuations in the local area charge density with a pronounced minimum of σM halfway between adjacent DNA molecules. For highly charged complexes (large c), we find moderate charge density fluctuations and observe deviations from the linear dependence of dDNA on 1/c. This last result, together with other findings such as the decrease in the effective stretching modulus of the complex and the increased rate at which pores are formed in the complex membranes, is indicative of the gradual loss of mechanical stability of the complex, which occurs when c becomes large. We suggest that this may be the origin of the recently observed enhanced transfection efficiency of lamellar CL-DNA complexes at high charge densities, because the completion of the transfection process requires the disassembly of the complex and the release of the DNA into the cytoplasm. Some of the structural properties of the system are also predicted by a continuum
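
    The reported linear dependence of the spacing on 1/c can be checked with a one-line least-squares fit; the numbers below are invented for illustration and are not the simulation data:

        import numpy as np

        c = np.array([0.2, 0.3, 0.4, 0.5, 0.7])          # cationic lipid fraction
        d_dna = np.array([49.0, 33.5, 25.2, 20.4, 14.8])  # interaxial spacing, Å
        slope, intercept = np.polyfit(1.0 / c, d_dna, 1)
        print(slope, intercept)  # near-zero intercept supports d_dna ~ 1/c scaling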

  2. Accelerating the connection between experiments and models: The FACE-MDS experience

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.

    2014-12-01

    The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were: measurement protocols were not consistent across different experiments; data were not well organized for model input; and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project has been very successful in using data from the Duke and ORNL FACE experiment to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia; the AmazonFACE experiment in a primary, tropical forest in Brazil; and a mature oak woodland in England. Cross-site science questions are being developed that will have a

  3. U.S. Geological Survey Groundwater Modeling Software: Making Sense of a Complex Natural Resource

    USGS Publications Warehouse

    Provost, Alden M.; Reilly, Thomas E.; Harbaugh, Arlen W.; Pollock, David W.

    2009-01-01

    Computer models of groundwater systems simulate the flow of groundwater, including water levels, and the transport of chemical constituents and thermal energy. Groundwater models afford hydrologists a framework on which to organize their knowledge and understanding of groundwater systems, and they provide insights water-resources managers need to plan effectively for future water demands. Building on decades of experience, the U.S. Geological Survey (USGS) continues to lead in the development and application of computer software that allows groundwater models to address scientific and management questions of increasing complexity.

  4. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking.

    PubMed

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-01-01

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582

  5. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking

    PubMed Central

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-01-01

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582

  6. Coarse-Grained Structure-Based Model for RNA-Protein Complexes Developed by Fluctuation Matching.

    PubMed

    Hori, Naoto; Takada, Shoji

    2012-09-11

    RNA and RNA-protein complexes have recently been intensively studied in experiments, but the corresponding molecular simulation work is much less abundant, primarily due to its large system size and the long time scale involved. Here, to overcome these bottlenecks, we develop a coarse-grained (CG) structure-based simulation model for RNA and RNA-protein complexes and test it for several molecular systems. The CG model for RNA contains three particles per nucleotide, each for phosphate, sugar, and a base. Focusing on RNA molecules that fold to well-defined native structures, we employed a structure-based potential, which is similar to the Go-like potential successfully used in CG modeling of proteins. In addition, we tested three means to approximate electrostatic interactions. Many parameters involved in the CG potential were determined via a multiscale method: We matched the native fluctuation of the CG model with that by all-atom simulations for 16 RNA molecules and 10 RNA-protein complexes, from which we derived a generic set of CG parameters. We show that the derived parameters can reproduce native fluctuations well for four RNA and two RNA-protein complexes. For tRNA, the native fluctuation in solution includes large-amplitude motions that reach conformations nearly corresponding to the hybrid state P/E and EF-Tu-bound state A/T seen in the complexes with ribosome. Finally, large-amplitude modes of ribosome are briefly described.
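
    Fluctuation matching reduces to a simple iteration: adjust each interaction strength until the CG fluctuation it controls matches the all-atom reference. A schematic version for independent harmonic bonds (all values hypothetical):

        import numpy as np

        # Target bond-length variances <dl_i^2> from an imagined all-atom run
        target_var = np.array([0.8, 1.2, 1.0, 0.6])  # arbitrary units, kBT = 1
        k = np.ones(4)                               # CG spring constants, guess

        for _ in range(50):
            var = 1.0 / k            # equipartition for an independent harmonic bond
            k *= var / target_var    # stiffen springs that fluctuate too much
        print(1.0 / k)               # CG variances now equal the all-atom targets
        # (In a real CG network the variances are recomputed from the full
        # Hessian, or from short CG simulations, at every iteration.)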

  7. Modeling the Formation of Language: Embodied Experiments

    NASA Astrophysics Data System (ADS)

    Steels, Luc

    This chapter gives an overview of different experiments that have been performed to demonstrate how a symbolic communication system, including its underlying ontology, can arise in situated embodied interactions between autonomous agents. It gives some details of the Grounded Naming Game, which focuses on the formation of a system of proper names, the Spatial Language Game, which focuses on the formation of a lexicon for expressing spatial relations as well as perspective reversal, and an Event Description Game, which concerns the expression of the role of participants in events through an emergent case grammar. For each experiment, details are provided how the symbolic system emerges, how the interaction is grounded in the world through the embodiment of the agent and its sensori-motor processing, and how concepts are formed in tight interaction with the emerging language.
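
    The Grounded Naming Game has a well-known minimal, non-embodied abstraction; the sketch below implements that abstraction with invented parameters, not Steels' embodied experiments:

        import random

        random.seed(1)
        agents = [set() for _ in range(20)]  # each agent's word inventory

        def invent():
            return ''.join(random.choice('aeioubcdfg') for _ in range(4))

        for _ in range(5000):
            s, h = random.sample(range(20), 2)      # pick a speaker and a hearer
            if not agents[s]:
                agents[s].add(invent())             # speaker invents if needed
            word = random.choice(sorted(agents[s]))
            if word in agents[h]:
                agents[s] = {word}                  # success: both prune to the
                agents[h] = {word}                  # winning word
            else:
                agents[h].add(word)                 # failure: hearer adopts it

        print(set().union(*agents))  # typically collapses to one shared name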

  8. A Hardware Model Validation Tool for Use in Complex Space Systems

    NASA Technical Reports Server (NTRS)

    Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.

    2010-01-01

    One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

  9. Model estimation and identification of manual controller objectives in complex tracking tasks

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.; Yuan, P. J.

    1984-01-01

    A methodology is presented for estimating the parameters in an optimal control structural model of the manual controller from experimental data on complex, multi-input/multi-output tracking tasks. Special attention is devoted to estimating the appropriate objective function for the task, as this is considered key in understanding the objectives and strategy of the manual controller. The technique is applied to data from single-input/single-output as well as multi-input/multi-output experiments, and the results are discussed.

  10. Engineering complex topological memories from simple Abelian models

    NASA Astrophysics Data System (ADS)

    Wootton, James R.; Lahtinen, Ville; Doucot, Benoit; Pachos, Jiannis K.

    2011-09-01

    In three spatial dimensions, particles are limited to either bosonic or fermionic statistics. Two-dimensional systems, on the other hand, can support anyonic quasiparticles exhibiting richer statistical behaviors. An exciting proposal for quantum computation is to employ anyonic statistics to manipulate information. Since such statistical evolutions depend only on topological characteristics, the resulting computation is intrinsically resilient to errors. The so-called non-Abelian anyons are most promising for quantum computation, but their physical realization may prove to be complex. Abelian anyons, however, are easier to understand theoretically and realize experimentally. Here we show that complex topological memories inspired by non-Abelian anyons can be engineered in Abelian models. We explicitly demonstrate the control procedures for the encoding and manipulation of quantum information in specific lattice models that can be implemented in the laboratory. This bridges the gap between requirements for anyonic quantum computation and the potential of state-of-the-art technology.

  11. Model Experiment of Two-Dimensional Brownian Motion by Microcomputer.

    ERIC Educational Resources Information Center

    Mishima, Nobuhiko; And Others

    1980-01-01

    Describes the use of a microcomputer in studying a model experiment (Brownian particles colliding with thermal particles). A flow chart and program for the experiment are provided. Suggests that this experiment may foster a deepened understanding through mutual dialog between the student and computer. (SK)
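
    In the spirit of that microcomputer exercise, a present-day minimal sketch of a heavy particle accumulating random thermal kicks in two dimensions (all parameters arbitrary):

        import numpy as np

        rng = np.random.default_rng(42)
        walkers, steps, sigma = 200, 10000, 0.01
        # Net effect of many thermal-particle collisions per step: a Gaussian kick
        paths = rng.normal(0.0, sigma, size=(walkers, steps, 2)).cumsum(axis=1)
        msd = (paths ** 2).sum(axis=2).mean(axis=0)  # ensemble-averaged MSD
        print(msd[-1] / steps)  # ~2 * sigma**2: diffusive, linear-in-time growth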

  12. Annual modulation experiments, galactic models and WIMPs

    NASA Astrophysics Data System (ADS)

    Hudson, Robert G.

    Our task in the paper is to examine some recent experiments (in the period 1996-2002) bearing on the issue of whether there is dark matter in the universe in the form of neutralino WIMPs (weakly interacting massive particles). Our main focus is an experiment performed by the DAMA group that claims to have found an 'annual modulation signature' for the WIMP. DAMA's result has been hotly contested by two other groups, EDELWEISS and CDMS, and we study the details of the experiments performed by all three groups. Our goal is to investigate the philosophic and sociological implications of this controversy. In particular, using an innovative theoretical strategy suggested by Copi and Krauss (2003, 'Comparing interaction rate detectors for weakly interacting massive particles with annual modulation detectors', Physical Review D, 67, 103507), we suggest a new way of resolving discordant experimental data, extending a previous analysis by Franklin (2002, Selectivity and Discord, Pittsburgh: University of Pittsburgh Press). In addition, we are in a position to contribute substantively to the debate between realists and constructive empiricists. Finally, from a sociological standpoint, we remark that DAMA's work has been valuable in mobilizing other research teams and providing them with a critical focus.

  13. Techniques and outcomes of transcatheter closure of complex atrial septal defects – Single center experience

    PubMed Central

    Pillai, Ajith Ananthakrishna; Satheesh, Santhosh; Pakkirisamy, Gobu; Selvaraj, Raja; Jayaraman, Balachander

    2014-01-01

    Objective To prospectively study the techniques and outcomes of transcatheter closure of complex atrial septal defects (ASD), analyzing outcomes at JIPMER Hospital over the past 5-year period. Study design and settings Prospective single-center study at a center experienced in catheter closure of ASD; all patients with complex ASD suitable for device closure were eligible. Methods Complex ASD was predefined, and patients satisfying the inclusion and exclusion criteria were included. All patients underwent meticulous transesophageal echocardiography (TEE) imaging beforehand. Modifications of the conventional techniques were allowed on a case-by-case basis according to operator preference. Successfully intervened patients were followed up clinically. Results Of the 75 patients enrolled, 69 had successful device closure (success rate 92%) despite challenging anatomy. Fifty-six (74%) patients had ASD ≥25 mm. Fifteen patients (20%) had defect size ≥35 mm, and 20 patients (26.6%) had devices implanted with ≥35 mm waist size. Fifty percent of patients had complete absence of the aortic rim and 25% had a deficient posterior rim. Twenty percent of patients had a malaligned septum. Mean follow-up period was 3.2 years. Conclusions Transcatheter closure is feasible in anatomically complex substrates of secundum ASD. Careful case selection, a scrupulous imaging protocol, and expertise in modified techniques are mandatory for successful outcomes. PMID:24581094

  14. Apollo Experiment Report: Lunar-Sample Processing in the Lunar Receiving Laboratory High-Vacuum Complex

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1976-01-01

    A high-vacuum complex composed of an atmospheric decontamination system, sample-processing chambers, storage chambers, and a transfer system was built to process and examine lunar material while maintaining quarantine status. Problems identified, equipment modifications, and procedure changes made for Apollo 11 and 12 sample processing are presented. The sample processing experiences indicate that only a few operating personnel are required to process the sample efficiently, safely, and rapidly in the high-vacuum complex. The high-vacuum complex was designed to handle the many contingencies, both quarantine and scientific, associated with handling an unknown entity such as the lunar sample. Lunar sample handling necessitated a complex system that could not respond rapidly to changing scientific requirements as the characteristics of the lunar sample were better defined. Although the complex successfully handled the processing of Apollo 11 and 12 lunar samples, the scientific requirement for vacuum samples was deleted after the Apollo 12 mission just as the vacuum system was reaching its full potential.

  15. Polygonal Shapes Detection in 3d Models of Complex Architectures

    NASA Astrophysics Data System (ADS)

    Benciolini, G. B.; Vitti, A.

    2015-02-01

    A sequential application of two global models defined in a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation, derived by a specific processing of the auxiliary function, are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step permits the edges of an object to be described by means of a piecewise almost-linear approximation of the input edges themselves, and detects sharp changes in the first derivative of the edges so as to identify corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension for the refinement of the description of the edges. The selection, among all the boundaries detected by the Mumford-Shah model, of those that present a shape close to a polygon is performed by considering only those boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, coming from 3D geometric data, that can be considered as polygons. The application of the procedure is suitable for, but not limited to, the detection of objects such as footprints of polygonal buildings, building facade boundaries or window contours. The procedure is applied to a height model of the building of the Engineering

  16. Tutoring and Multi-Agent Systems: Modeling from Experiences

    ERIC Educational Resources Information Center

    Bennane, Abdellah

    2010-01-01

    Tutoring systems become complex and are offering varieties of pedagogical software as course modules, exercises, simulators, systems online or offline, for single user or multi-user. This complexity motivates new forms and approaches to the design and the modelling. Studies and research in this field introduce emergent concepts that allow the…

  17. A radio-frequency sheath model for complex waveforms

    SciTech Connect

    Turner, M. M.; Chabert, P.

    2014-04-21

    Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.

  18. Uncertainty Analysis with Site Specific Groundwater Models: Experiences and Observations

    SciTech Connect

    Brewer, K.

    2003-07-15

    Groundwater flow and transport predictions are a major component of remedial action evaluations for contaminated groundwater at the Savannah River Site. Because all groundwater modeling results are subject to uncertainty from various causes; quantification of the level of uncertainty in the modeling predictions is beneficial to project decision makers. Complex site-specific models present formidable challenges for implementing an uncertainty analysis.

  19. Modelling an IHE Experiment with a Suite of DSD Models

    NASA Astrophysics Data System (ADS)

    Hodgson, Alexander

    2013-06-01

    At the 2011 APS conference, Terrones, Burkett and Morris published an experiment primarily designed to allow examination of the propagation of a detonation front in a 3-dimensional charge of PBX9502 insensitive high explosive. The charge is confined by a cylindrical steel shell, has an elliptical tin liner, and is line-initiated along its length. The detonation wave must propagate around the inner hollow region and converge on the opposite side. The Detonation Shock Dynamics (DSD) model allows for the calculation of detonation propagation in a region of explosive using a selection of material input parameters, amongst which is the D-K relation that governs how the local detonation velocity varies as a function of wave curvature. In this paper, experimental data are compared to calculations using the newly-developed 3D DSD code at AWE with a variety of D-K relations. The effects of D-K variation through different calibration methods, material lot and initial density are investigated.
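
    The central closure in a DSD calculation is the D(κ) relation. A minimal lookup with a linear calibration is sketched below; the coefficients are invented for illustration and are not PBX9502 values:

        import numpy as np

        D_CJ = 7.7   # planar (CJ) detonation speed, mm/us -- illustrative value
        alpha = 2.5  # curvature coefficient, mm -- illustrative value

        def detonation_speed(kappa):
            """Linear D(kappa) closure: the front slows as its curvature grows."""
            return D_CJ * (1.0 - alpha * np.asarray(kappa))

        print(detonation_speed([0.0, 0.02, 0.05]))  # curvatures in 1/mm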

  20. Syntheses and Characterization of Ruthenium(II) Tetrakis(pyridine)complexes: An Advanced Coordination Chemistry Experiment or Mini-Project

    ERIC Educational Resources Information Center

    Coe, Benjamin J.

    2004-01-01

    An experiment designed for third-year undergraduate students provides synthetic experience and qualitative interpretation of the spectroscopic properties of ruthenium complexes. It involves the syntheses and characterization of several coordination complexes of ruthenium, the element found directly beneath iron in the middle of the…

  1. Modeling high-resolution broadband discourse in complex adaptive systems.

    PubMed

    Dooley, Kevin J; Corman, Steven R; McPhee, Robert D; Kuhn, Timothy

    2003-01-01

    Numerous researchers and practitioners have turned to complexity science to better understand human systems. Simulation can be used to observe how the microlevel actions of many human agents create emergent structures and novel behavior in complex adaptive systems. In such simulations, communication between human agents is often modeled simply as message passing, where a message or text may transfer data, trigger action, or inform context. Human communication involves more than the transmission of texts and messages, however. Such a perspective is likely to limit the effectiveness and insight that we can gain from simulations, and complexity science itself. In this paper, we propose a model of how close analysis of discursive processes between individuals (high-resolution), which occur simultaneously across a human system (broadband), dynamically evolve. We propose six different processes that describe how evolutionary variation can occur in texts: recontextualization, pruning, chunking, merging, appropriation, and mutation. These process models can facilitate the simulation of high-resolution, broadband discourse processes, and can aid in the analysis of data from such processes. Examples are used to illustrate each process. We make the tentative suggestion that discourse may evolve to the "edge of chaos." We conclude with a discussion concerning how high-resolution, broadband discourse data could actually be collected. PMID:12876447

  2. The use of workflows in the design and implementation of complex experiments in macromolecular crystallography

    SciTech Connect

    Brockhauser, Sandor; Svensson, Olof; Bowler, Matthew W.; Nanao, Max; Gordon, Elspeth; Leal, Ricardo M. F.; Popov, Alexander; Gerring, Matthew; McCarthy, Andrew A.; Gotz, Andy

    2012-08-01

    A powerful and easy-to-use workflow environment has been developed at the ESRF for combining experiment control with online data analysis on synchrotron beamlines. This tool provides the possibility of automating complex experiments without the need for expertise in instrumentation control and programming, but rather by accessing defined beamline services. The automation of beam delivery, sample handling and data analysis, together with increasing photon flux, diminishing focal spot size and the appearance of fast-readout detectors on synchrotron beamlines, have changed the way that many macromolecular crystallography experiments are planned and executed. Screening for the best diffracting crystal, or even the best diffracting part of a selected crystal, has been enabled by the development of microfocus beams, precise goniometers and fast-readout detectors that all require rapid feedback from the initial processing of images in order to be effective. All of these advances require the coupling of data feedback to the experimental control system and depend on immediate online data-analysis results during the experiment. To facilitate this, a Data Analysis WorkBench (DAWB) for the flexible creation of complex automated protocols has been developed. Here, example workflows designed and implemented using DAWB are presented for enhanced multi-step crystal characterizations, experiments involving crystal reorientation with kappa goniometers, crystal-burning experiments for empirically determining the radiation sensitivity of a crystal system and the application of mesh scans to find the best location of a crystal to obtain the highest diffraction quality. Beamline users interact with the prepared workflows through a specific brick within the beamline-control GUI MXCuBE.

  3. The evaluative imaging of mental models - Visual representations of complexity

    NASA Technical Reports Server (NTRS)

    Dede, Christopher

    1989-01-01

    The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can be best managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.

  4. A Simple Model for Complex Dynamical Transitions in Epidemics

    NASA Astrophysics Data System (ADS)

    Earn, David J. D.; Rohani, Pejman; Bolker, Benjamin M.; Grenfell, Bryan T.

    2000-01-01

    Dramatic changes in patterns of epidemics have been observed throughout this century. For childhood infectious diseases such as measles, the major transitions are between regular cycles and irregular, possibly chaotic epidemics, and from regionally synchronized oscillations to complex, spatially incoherent epidemics. A simple model can explain both kinds of transitions as the consequences of changes in birth and vaccination rates. Measles is a natural ecological system that exhibits different dynamical transitions at different times and places, yet all of these transitions can be predicted as bifurcations of a single nonlinear model.
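
    The kind of model referred to is a seasonally forced SIR model with demography, in which birth and vaccination rates enter through a single effective recruitment rate. A minimal integration with illustrative (not fitted) parameters:

        import numpy as np
        from scipy.integrate import solve_ivp

        gamma, mu = 52.0, 0.02     # recovery and birth rates, 1/yr (illustrative)
        beta0, amp = 1250.0, 0.08  # seasonally forced transmission (illustrative)
        vacc = 0.0                 # vaccinated fraction of newborns

        def sir(t, y):
            S, I = y
            beta = beta0 * (1.0 + amp * np.cos(2.0 * np.pi * t))  # school-year forcing
            return [mu * (1.0 - vacc) - beta * S * I - mu * S,
                    beta * S * I - (gamma + mu) * I]

        sol = solve_ivp(sir, (0.0, 60.0), [0.06, 1e-4], max_step=0.01, rtol=1e-8)
        # Sweeping the effective recruitment rate mu * (1 - vacc) moves the system
        # through bifurcations, e.g. from annual to biennial to irregular cycles.
        print(sol.y[1].max(), sol.y[1][-1])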

  5. Extending a configuration model to find communities in complex networks

    NASA Astrophysics Data System (ADS)

    Jin, Di; He, Dongxiao; Hu, Qinghua; Baquero, Carlos; Yang, Bo

    2013-09-01

    Discovery of communities in complex networks is a fundamental data analysis task in various domains. Generative models are a promising class of techniques for identifying modular properties from networks, which has been actively discussed recently. However, most of them cannot preserve the degree sequence of networks, which will distort the community detection results. Rather than using a blockmodel as most current works do, here we generalize a configuration model, namely, a null model of modularity, to solve this problem. Towards decomposing and combining sub-graphs according to the soft community memberships, our model incorporates the ability to describe community structures, something the original model does not have. Also, it has the property, as with the original model, that it fixes the expected degree sequence to be the same as that of the observed network. We combine both the community property and degree sequence preserving into a single unified model, which gives better community results compared with other models. Thereafter, we learn the model using a technique of nonnegative matrix factorization and determine the number of communities by applying consensus clustering. We test this approach both on synthetic benchmarks and on real-world networks, and compare it with two similar methods. The experimental results demonstrate the superior performance of our method over competing methods in detecting both disjoint and overlapping communities.
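
    One loose way to combine a configuration-model null with nonnegative matrix factorization (a heuristic sketch, not the authors' algorithm) is to factorize the positive part of the modularity matrix, whose subtracted null term is exactly the configuration-model expectation:

        import numpy as np
        from sklearn.decomposition import NMF

        # Two 4-node cliques joined by a single edge (toy network)
        A = np.zeros((8, 8))
        for block in (range(0, 4), range(4, 8)):
            for i in block:
                for j in block:
                    if i != j:
                        A[i, j] = 1.0
        A[3, 4] = A[4, 3] = 1.0

        k = A.sum(axis=1)
        B = A - np.outer(k, k) / A.sum()  # observed adjacency minus the
                                          # configuration-model term k_i*k_j/(2m)
        W = NMF(n_components=2, init='nndsvda', max_iter=2000).fit_transform(
            np.clip(B, 0.0, None))        # factorize the positive part
        print(W.argmax(axis=1))           # hard labels from soft memberships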

  6. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to validate if any of the underlying assumptions are violated.
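
    The "instant experience" idea, seeing many plots generated under a model whose assumptions hold before judging the real one, is easy to emulate (a sketch; the article's own materials are not reproduced here):

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(3)
        fig, axes = plt.subplots(3, 3, figsize=(7, 7), constrained_layout=True)
        for ax in axes.flat:
            x = rng.uniform(0, 10, 50)
            y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, 50)  # assumptions hold here
            coef = np.polyfit(x, y, 1)
            fitted = np.polyval(coef, x)
            ax.scatter(fitted, y - fitted, s=8)           # residuals vs fitted
            ax.axhline(0.0, linewidth=0.5)
        fig.savefig("null_residual_plots.png")  # a gallery of 'nothing is wrong'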

  7. A model for restricted diffusion in complex fluids

    NASA Astrophysics Data System (ADS)

    de Bruyn, John; Wylie, Jonathan

    2014-03-01

    We use a model originally due to Tanner to study the diffusion of tracer particles in complex fluids both analytically and through Monte-Carlo simulations. The model consists of regions through which the particles diffuse freely, separated by membranes with a specified low permeability. The mean squared displacement of the particles calculated from the model agrees well with experimental data on the diffusion of particles in a concentrated colloidal suspension when the membrane permeability is used as an adjustable parameter. Data on a micro-phase-separated polymer system can be well modeled by considering two populations of particles constrained by membranes with different permeabilites. Supported by the Hong Kong Research Grants Council and the Natural Sciences and Engineering Research Council of Canada.
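
    A one-dimensional caricature of the membrane model (hypothetical parameters): free random-walk diffusion inside compartments of width L, with a small crossing probability at each membrane:

        import numpy as np

        rng = np.random.default_rng(7)
        L, p_cross, steps, walkers = 20, 0.05, 5000, 500
        x = np.full(walkers, L / 2.0)  # start all walkers mid-compartment

        for _ in range(steps):
            trial = x + rng.choice((-1.0, 1.0), size=walkers)
            hits_membrane = (trial // L) != (x // L)   # step would cross a membrane
            allowed = ~hits_membrane | (rng.random(walkers) < p_cross)
            x = np.where(allowed, trial, x)            # blocked steps are rejected

        msd = np.mean((x - L / 2.0) ** 2)
        print(msd / steps)  # suppressed well below the free-diffusion value of 1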

  8. Silicon Carbide Derived Carbons: Experiments and Modeling

    SciTech Connect

    Kertesz, Miklos

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm that generates vacancy clusters in diamond starting from monovacancies, combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but larger clusters tend to show graphitized regions; clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We identified small vacancy clusters and the physical characteristics by which they can be identified spectroscopically. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining optimized reaction pathways for the first time.

  9. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for constructing helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources across operating conditions from a small number of measurements. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
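
    The identification step can be sketched generically (the linear-in-parameters amplitude model below is illustrative only, not the FRAME source model; mu and CT stand for advance ratio and thrust coefficient): measured levels at a few operating conditions determine the coefficients by least squares, which then predict other conditions.

    ```python
    # Sketch: least-squares identification of an assumed noise-amplitude model.
    import numpy as np

    mu = np.array([0.10, 0.15, 0.20, 0.25, 0.30])         # advance ratios
    CT = np.array([0.006, 0.007, 0.0065, 0.008, 0.0075])  # thrust coefficients
    rng = np.random.default_rng(0)
    amp = 1.0 + 4.0 * mu + 300.0 * CT + rng.normal(0, 0.05, 5)  # synthetic data

    X = np.column_stack([np.ones_like(mu), mu, CT])
    coef, *_ = np.linalg.lstsq(X, amp, rcond=None)

    # predict the harmonic amplitude at an unmeasured operating condition
    print(coef, coef @ np.array([1.0, 0.18, 0.0070]))
    ```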

  10. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects to improve the informal specifications. For each case study, we describe which methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost-effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  11. Fundamental Rotorcraft Acoustic Modeling from Experiments (FRAME)

    NASA Astrophysics Data System (ADS)

    Greenwood, Eric, II

    2011-12-01

    A new methodology is developed for constructing helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by applying a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This method allows individual rotor harmonic noise sources to be identified and characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources across operating conditions from a small number of measurements. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  12. Modeling complexity in simulating pesticide fate in a rice paddy.

    PubMed

    Luo, Yuzhou; Spurlock, Frank; Gill, Sheryl; Goh, Kean S

    2012-12-01

    Modeling approaches for pesticide regulation are required to provide generic and conservative evaluations of pesticide fate and exposure based on limited data. This study investigates a modeling approach for pesticide simulation in a rice paddy by developing a component-based modeling system and characterizing the dependence of pesticide concentrations on individual fate processes. The developed system spans modeling complexity from a "base model," which considers only the essential processes of water management, water-sediment exchange, and aquatic dissipation, to a "full model" covering all commonly simulated processes. Model capability and performance were demonstrated by case studies with 5 pesticides in 13 rice fields of California's Sacramento Valley. With registrant-submitted dissipation half-lives, the base model conservatively estimated dissolved pesticide concentrations within one order of magnitude of measured data. The full model simulations were calibrated to characterize the key model parameters and processes varying with chemical properties and field conditions. Metabolism in water was identified as an important process in predicting pesticide fate in all tested rice fields. Relative contributions of metabolism, hydrolysis, direct aquatic photolysis, and volatilization to the overall pesticide dissipation were significantly correlated with the model sensitivities to the corresponding physicochemical properties and half-lives. While modeling results were sensitive to metabolism half-lives in water for all fields, metabolism in sediment and water-sediment exchange were significant only for pesticides with pre-flooding applications or with rapid dissipation in sediment. Results suggest that, in addition to the development of regional modeling scenarios for rice production, the registrant-submitted maximum values for aquatic dissipation half-lives could be used for evaluating pesticides for regulatory purposes.
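
    A minimal sketch of a "base model" of this kind, under my own simplifying assumptions (two well-mixed compartments, first-order kinetics, illustrative half-lives and exchange rate; not the calibrated system described above):

    ```python
    # Sketch: dissolved pesticide in paddy water exchanging with sediment.
    import numpy as np
    from scipy.integrate import solve_ivp

    k_w = np.log(2) / 10.0    # aqueous dissipation (10 d half-life, assumed)
    k_s = np.log(2) / 40.0    # sediment dissipation (40 d half-life, assumed)
    k_ex = 0.05               # water <-> sediment exchange rate, 1/d (assumed)

    def rhs(t, y):
        cw, cs = y            # relative concentrations in water and sediment
        return [-k_w * cw - k_ex * (cw - cs),
                -k_s * cs + k_ex * (cw - cs)]

    sol = solve_ivp(rhs, (0, 60), [1.0, 0.0], dense_output=True)
    print(sol.sol(30))        # concentrations 30 days after application
    ```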

  13. [Model experiments on breathing under sand].

    PubMed

    Maxeiner, H; Haenel, F

    1985-01-01

    Remarkable autopsy findings in persons who had suffocated as a result of closure of the mouth and nose by sand (without the body being buried) induced us to investigate some aspects of this situation by means of a simple experiment. A barrel (diameter 36.7 cm) with a mouthpiece in the bottom was filled with sand to a depth of 15, 30, 60, or 90 cm. The subject tried to breathe as long as possible through the sand, while the amount of sand inspired was measured. Pressure and volume of the breath, as well as the O2 and CO2 content, were also measured. A respiratory volume of up to 3 l was possible, even when the depth was 90 cm. After about 1 min in all trials, the subject's shortness of breath forced us to stop the experiment. Measurement of O2 and CO2 concentrations showed that the respired volume merely shifted in and out of the sand without gas exchange with atmospheric air, even when the sand depth was only 15 cm. Sand aspiration depended on the moisture of the material: when the sand was dry, it was impossible to avoid aspiration. However, even a water content of only 5% prevented aspiration, although the sand seemed to be nearly dry.

  14. Hierarchical Model for the Evolution of Cloud Complexes

    NASA Astrophysics Data System (ADS)

    Sánchez D., Néstor M.; Parravano, Antonio

    1999-01-01

    The structure of cloud complexes appears to be well described by a tree structure (i.e., a simplified "stick man") representation when the image is partitioned into "clouds." In this representation, the parent-child relationships are assigned according to containment. Based on this picture, a hierarchical model for the evolution of cloud complexes, including star formation, is constructed. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange (evaporation or condensation) depends on the radiation density at the interphase. At the end of the "lineage," stars may be born or die, so that there is a nonstationary mass flow in the hierarchical structure. For a variety of parameter sets the system follows the same series of steps to transform diffuse gas into stars, and the regulation of the mass flux in the tree by previously formed stars dominates the evolution of the star formation. For the set of parameters used here as a reference model, the system tends to produce initial mass functions (IMFs) that have a maximum at a mass that is too high (~2 solar masses), and the characteristic times for evolution seem too long. We show that these undesired properties can be improved by adjusting the model parameters. The model requires further physics (e.g., allowing for multiple stellar systems and clump collisions) before a definitive comparison with observations can be made; instead, the emphasis here is to illustrate some general properties of this kind of complex nonlinear model for the star formation process. Notwithstanding the simplifications involved, the model reveals an essential feature that will likely remain if additional physical processes are included: the detailed behavior of the system is very sensitive to variations in the initial and external conditions, suggesting that a "universal" IMF is very unlikely. When an ensemble of IMFs corresponding to a

  15. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single-code/multiple-physics hydraulic model (LISFLOOD-FP) in which different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases and compared to the results of a number of industry-standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity, using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions because: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of the complexity required, we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than with increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  16. Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction.

    PubMed

    Krasnopolsky, Vladimir M; Fox-Rabinovitz, Michael S

    2006-03-01

    A new practical application of neural network (NN) techniques to environmental numerical modeling has been developed: a new type of numerical model, a complex hybrid environmental model based on a synergetic combination of deterministic and machine learning model components. Conceptual and practical possibilities of developing hybrid models are discussed in this paper for applications to climate modeling and weather prediction. The approach presented here uses NNs as a statistical or machine learning technique to develop highly accurate and fast emulations of time-consuming model physics components (model physics parameterizations). The NN emulations of the most time-consuming model physics components, short- and long-wave radiation parameterizations or full model radiation, presented in this paper are combined with the remaining deterministic components (such as model dynamics) of the original complex environmental model, a general circulation model or global climate model (GCM), to constitute a hybrid GCM (HGCM). The parallel GCM and HGCM simulations produce very similar results, but the HGCM is significantly faster, and the speed-up of model calculations opens the opportunity for model improvement. Examples of developed HGCMs illustrate the feasibility and efficiency of the new approach for modeling complex multidimensional interdisciplinary systems.
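
    The emulation idea itself is easy to sketch (the toy function below stands in for a radiation parameterization; the network size and library choice are mine, not the authors'): train a small network on input-output pairs from the expensive component, then call the cheap emulator inside the model loop.

    ```python
    # Sketch: NN emulation of an expensive model-physics component.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def expensive_physics(x):                 # stand-in for a parameterization
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, (5000, 2))          # sampled model states
    y = expensive_physics(X)

    emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                            random_state=0).fit(X, y)

    X_test = rng.uniform(0, 1, (5, 2))
    print(np.c_[expensive_physics(X_test), emulator.predict(X_test)])
    ```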

  17. Hybrid Structural Model of the Complete Human ESCRT-0 Complex

    SciTech Connect

    Ren, Xuefeng; Kloer, Daniel P.; Kim, Young C.; Ghirlando, Rodolfo; Saidi, Layla F.; Hummer, Gerhard; Hurley, James H.

    2009-03-31

    The human Hrs and STAM proteins comprise the ESCRT-0 complex, which sorts ubiquitinated cell surface receptors to lysosomes for degradation. Here we report a model for the complete ESCRT-0 complex based on the crystal structure of the Hrs-STAM core complex, previously solved domain structures, hydrodynamic measurements, and Monte Carlo simulations. ESCRT-0 expressed in insect cells has a hydrodynamic radius of R_H = 7.9 nm and is a 1:1 heterodimer. The 2.3 Å crystal structure of the ESCRT-0 core complex reveals two domain-swapped GAT domains and an antiparallel two-stranded coiled-coil, similar to yeast ESCRT-0. ESCRT-0 typifies a class of biomolecular assemblies that combine structured and unstructured elements, and have dynamic and open conformations to ensure versatility in target recognition. Coarse-grained Monte Carlo simulations constrained by experimental R_H values for ESCRT-0 reveal a dynamic ensemble of conformations well suited for diverse functions.

  18. Plasma gun pellet acceleration modeling and experiment

    SciTech Connect

    Kincaid, R.W.; Bourham, M.A.; Gilligan, J.G.

    1996-12-31

    Modifications to the electrothermal plasma gun SIRENS have been completed to allow for acceleration experiments using plastic pellets. The 1-D, time-dependent code ODIN has been extended to include pellet friction, momentum, and kinetic energy, with options for variable barrel length. Results from the new version of the code, POSEIDON, compare favorably with experimental data and with results from ODIN. Predicted values show increasing pellet velocity along the barrel length, reaching a 2 km/s exit velocity. Velocities measured at three locations along the barrel showed good correlation with predicted values. The code has also been used to investigate the effectiveness of longer pulse lengths on pellet velocity, using simulated ramp-up and ramp-down currents with flat top, and triangular current pulses with early and late peaking. 16 refs., 5 figs.

  19. Simulation-based parameter estimation for complex models: a breast cancer natural history modelling illustration.

    PubMed

    Chia, Yen Lin; Salzman, Peter; Plevritis, Sylvia K; Glynn, Peter W

    2004-12-01

    Simulation-based parameter estimation offers a powerful means of estimating parameters in complex stochastic models. We illustrate the application of these ideas in the setting of a natural history model for breast cancer. Our model assumes that the tumor growth process follows a geometric Brownian motion; parameters are estimated from the SEER registry. Our discussion focuses on the use of simulation for computing the maximum likelihood estimator for this class of models. The analysis shows that simulation provides a straightforward means of computing such estimators for models of substantial complexity.
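
    A compact sketch of the idea under the stated GBM assumption (the observation scheme, parameter grid, and kernel bandwidth are all illustrative; the actual SEER-based likelihood is far richer): simulate outcomes for candidate parameters and keep the parameters that maximize a kernel-smoothed simulated likelihood.

    ```python
    # Sketch: simulation-based likelihood for GBM growth parameters.
    import numpy as np

    rng = np.random.default_rng(0)
    obs, t, v0 = 2.0, 5.0, 0.01          # observed log-volume after t years

    def sim_loglik(mu, sigma, n=20000, bw=0.1):
        # simulate terminal log-volumes under GBM and kernel-smooth them
        z = (np.log(v0) + (mu - 0.5 * sigma**2) * t
             + sigma * np.sqrt(t) * rng.normal(size=n))
        dens = np.mean(np.exp(-0.5 * ((obs - z) / bw) ** 2)) / (bw * np.sqrt(2 * np.pi))
        return np.log(dens + 1e-300)

    grid = [(mu, s) for mu in np.linspace(0.5, 1.5, 11) for s in (0.2, 0.4, 0.6)]
    print(max(grid, key=lambda p: sim_loglik(*p)))     # crude simulated MLE
    ```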

  20. Modeling the Self-assembly of the Cellulosome Enzyme Complex*

    PubMed Central

    Bomble, Yannick J.; Beckham, Gregg T.; Matthews, James F.; Nimlos, Mark R.; Himmel, Michael E.; Crowley, Michael F.

    2011-01-01

    Most bacteria use free enzymes to degrade plant cell walls in nature. However, some bacteria have adopted a different strategy wherein enzymes can either be free or tethered on a protein scaffold, forming a complex called a cellulosome. The study of the structure and mechanism of these large macromolecular complexes is an active and ongoing research topic, with the goal of finding ways to improve biomass conversion using cellulosomes. Several mechanisms involved in cellulosome formation remain unknown, including how cellulosomal enzymes assemble on the scaffoldin and what governs the population of cellulosomes created during self-assembly. Here, we present a coarse-grained model to study the self-assembly of cellulosomes. The model captures most of the physical characteristics of three cellulosomal enzymes (Cel5B, CelS, and CbhA) and the scaffoldin (CipA) from Clostridium thermocellum. The protein structures are represented by beads connected by restraints to mimic the flexibility and shapes of these proteins. From a large simulation set, the assembly of cellulosomal enzyme complexes is shown to be dominated by their shape and modularity. The multimodular enzyme CbhA binds statistically more frequently to the scaffoldin than CelS or Cel5B. The enhanced binding is attributed to the flexible nature and multimodularity of this enzyme, providing a longer residence time around the scaffoldin. The characterization of the factors influencing the cellulosome assembly process may enable new strategies to create designer cellulosomes. PMID:21098021

  1. A hydromechanical biomimetic cochlea: experiments and models.

    PubMed

    Chen, Fangyi; Cohen, Howard I; Bifano, Thomas G; Castle, Jason; Fortin, Jeffrey; Kapusta, Christopher; Mountain, David C; Zosuls, Aleks; Hubbard, Allyn E

    2006-01-01

    The construction, measurement, and modeling of an artificial cochlea (ACochlea) are presented in this paper. An artificial basilar membrane (ABM) was made by depositing discrete Cu beams on a piezomembrane substrate. Rather than two fluid channels, as in the mammalian cochlea, a single fluid channel was implemented on one side of the ABM, facilitating the use of a laser to detect ABM vibration on the other side. Measurements were performed on both the ABM and the ACochlea. The measurements on the ABM show that its longitudinal coupling is very strong; reduced longitudinal coupling was achieved by cutting the membrane between adjacent beams using a laser. The measured results from the ACochlea with a laser-cut ABM demonstrate cochlear-like features, including traveling waves, sharp high-frequency rolloffs, and place-specific frequency selectivity. Companion computational models of the mechanical devices were formulated and implemented using a circuit simulator, and the simulation results for both the ABM and the ACochlea are similar to their experimental counterparts. PMID:16454294

  2. Nonlinear vibrations of shallow shells with complex boundary: R-functions method and experiments

    NASA Astrophysics Data System (ADS)

    Kurpa, Lidia; Pilgun, Galina; Amabili, Marco

    2007-10-01

    Geometrically nonlinear vibrations of shallow circular cylindrical panels with a complex boundary shape are considered. The R-functions theory and variational methods are used to study the problem. The R-functions method (RFM) allows the construction, in analytical form, of a sequence of basis functions satisfying the given boundary conditions for a boundary of complex shape. The problem is reduced to a single second-order differential equation with quadratic and cubic nonlinear terms. The method was first applied to study free vibrations of shallow circular cylindrical panels with rectangular base for different boundary conditions: (i) clamped edges, (ii) in-plane immovable simply supported edges, (iii) classically simply supported edges, and (iv) in-plane free simply supported edges. The same approach was then applied to a shell with a complex boundary shape. Experiments were conducted on an aluminum panel with a complex boundary shape to identify the nonlinear response of the fundamental mode, and the experimental results were compared to the numerical results.
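
    The reduced equation referred to has the generic single-mode form below (a sketch with symbolic coefficients, not the paper's derived values), where q is the modal coordinate and alpha, beta are the quadratic and cubic stiffness coefficients:

    ```latex
    \ddot{q} + 2\zeta\omega_0\,\dot{q} + \omega_0^2\,q + \alpha\,q^2 + \beta\,q^3
      = F\cos(\Omega t)
    ```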

  3. A Qualitative Model of Human Interaction with Complex Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in concert to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  4. Integrated Bayesian network framework for modeling complex ecological issues.

    PubMed

    Johnson, Sandra; Mengersen, Kerrie

    2012-07-01

    The management of environmental problems is multifaceted, requiring varied and sometimes conflicting objectives and perspectives to be considered. Bayesian network (BN) modeling facilitates the integration of information from diverse sources and is well suited to tackling the management challenges of complex environmental problems. However, combining several perspectives in one model can lead to large, unwieldy BNs that are difficult to maintain and understand. Conversely, an oversimplified model may lead to an unrealistic representation of the environmental problem. Environmental managers require the current research and available knowledge about an environmental problem of interest to be consolidated in a meaningful way, thereby enabling the assessment of potential impacts and different courses of action. Previous investigations of the environmental problem of interest may have already resulted in the construction of several disparate ecological models; on the other hand, the opportunity may exist to initiate this modeling. In the first instance, the challenge is to integrate the existing models and to merge the information and perspectives from these models. In the second instance, the challenge is to include different aspects of the environmental problem, incorporating both the scientific and management requirements. Although the paths leading to the combined model may differ in these two situations, the common objective is to design an integrated model that captures the available information and research yet is simple to maintain, expand, and refine. BN modeling is typically an iterative process, and we describe a heuristic method, the iterative Bayesian network development cycle (IBNDC), for the development of integrated BN models that are suitable for both situations outlined above. The IBNDC approach facilitates object-oriented BN (OOBN) modeling, arguably the next logical step in adaptive management modeling, and embraces iterative development

  5. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular, for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: (1) development of a spectral code for moving boundary problems; and (2) diffusivity measurements on concentrated and supersaturated TGS solutions. Progress made during this seventh half-year period is reported.

  6. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and diffusivity measurements on concentrated and supersaturated TGS solutions. During this eighth half-year period, good progress was made on these tasks.

  7. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1992-01-01

    The development is examined of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). The tasks include development of a spectral code for moving boundary problems, kinematic viscosity measurements on liquid MCT at temperatures close to the melting point, and diffusivity measurements on concentrated and supersaturated TGS solutions. A detailed description is given of the work performed for these tasks, together with a summary of the resulting publications and presentations.

  8. Star formation in a hierarchical model for Cloud Complexes

    NASA Astrophysics Data System (ADS)

    Sanchez, N.; Parravano, A.

    The effects of external and initial conditions on the star formation processes in Molecular Cloud Complexes are examined in the context of a schematic model. The model considers a hierarchical system with five predefined phases: warm gas, neutral gas, low density molecular gas, high density molecular gas, and protostars. The model follows the mass evolution of each substructure by computing its mass exchange with its parent and children. The parent-child mass exchange depends on the radiation density at the interphase, which is produced by the radiation coming from the stars that form at the end of the hierarchical structure and by the external radiation field. The system is chaotic in the sense that its temporal evolution is very sensitive to small changes in the initial or external conditions. However, global features such as the star formation efficiency and the initial mass function are less affected by those variations.

  9. Equilibrium modeling of trace metal transport from Duluth complex rockpile

    SciTech Connect

    Kelsey, P.D.; Klusman, R.W.; Lapakko, K.

    1996-12-31

    Geochemical modeling was used to predict weathering processes and the formation of trace metal-adsorbing secondary phases in a waste rock stockpile containing Cu-Ni ore mined from the Duluth Complex, MN. Amorphous ferric hydroxide was identified as a secondary phase within the pile, from observation and geochemical modeling of the weathering process. Due to the high content of cobalt, copper, nickel, and zinc in the primary minerals of the waste rock and in the effluent, it was hypothesized that the predicted and observed ferric hydroxide precipitate would adsorb small quantities of these trace metals. This was verified using sequential extractions and simulated using adsorption geochemical modeling. It was concluded that the trace metals were adsorbed in small quantities, with adsorption onto the amorphous ferric hydroxide in the decreasing order Cu > Ni > Zn > Co. The low degree of adsorption was due to low-pH water and competition for adsorption sites with other ions in solution.

  10. 3D model of amphioxus steroid receptor complexed with estradiol

    SciTech Connect

    Baker, Michael E.; Chang, David J.

    2009-08-28

    The origins of signaling by vertebrate steroids are not fully understood. An important advance was the report that an estrogen-binding steroid receptor [SR] is present in amphioxus, a basal chordate with a body plan similar to that of vertebrates. To investigate the evolution of estrogen-binding to steroid receptors, we constructed a 3D model of the amphioxus SR complexed with estradiol. This 3D model indicates that although the SR is activated by estradiol, some interactions between estradiol and human ERα are not conserved in the SR, which can explain the low affinity of estradiol for the SR. These differences between the SR and ERα in the steroid-binding domain are sufficient to suggest that another steroid is the physiological regulator of the SR. The 3D model predicts that mutation of Glu-346 to Gln would increase the affinity of testosterone for the amphioxus SR, which would help elucidate the evolution of steroid binding to nuclear receptors.

  11. Recirculating Planar Magnetron Modeling and Experiments

    NASA Astrophysics Data System (ADS)

    Franzi, Matthew; Gilgenbach, Ronald; Hoff, Brad; French, Dave; Lau, Y. Y.

    2011-10-01

    We present simulations and initial experimental results for a new class of crossed-field device: Recirculating Planar Magnetrons (RPMs). Two geometries of RPM are being explored: 1) dual planar magnetrons connected by a recirculating section, with axial magnetic field and transverse electric field, and 2) planar cathode and anode-cavity rings, with radial magnetic field and axial electric field. These RPMs have numerous advantages for high power microwave generation by virtue of larger-area cathodes and anodes. The axial-B-field RPM can be configured in either the conventional or inverted (faster startup) configuration. Two- and three-dimensional EM PIC simulations show rapid electron spoke formation and microwave oscillation in pi-mode. Smoothbore prototype axial-B RPM experiments are underway using the MELBA accelerator at parameters of -300 kV, 1-20 kA, and pulse lengths of 0.5-1 microsecond. Implementation and operation of the first RPM slow-wave structure, operating at 1 GHz, will be discussed. Research supported by AFOSR, AFRL, L-3 Communications, and Northrop Grumman.

  12. Metal-mediated reaction modeled on nature: the activation of isothiocyanates initiated by zinc thiolate complexes.

    PubMed

    Eger, Wilhelm A; Presselt, Martin; Jahn, Burkhard O; Schmitt, Michael; Popp, Jürgen; Anders, Ernst

    2011-04-18

    On the basis of detailed theoretical studies of the mode of action of carbonic anhydrase (CA) and of models resembling only its reactive core, a complete computational pathway analysis of the reaction between several isothiocyanates and methyl mercaptan activated by a thiolate-bearing model complex [Zn(NH3)3SMe]+ was performed at a high level of density functional theory (DFT). In addition, model reactions were studied experimentally using relatively stable zinc complexes and were investigated by gas chromatography/mass spectrometry and Raman spectroscopy. The model complexes used in the experiments are based on the well-known azamacrocyclic ligand family ([12]aneN4, [14]aneN4, i-[14]aneN4, and [15]aneN4) and are commonly formulated as [Zn([X]aneN4)(SBn)]ClO4. As predicted by our DFT calculations, all of these complexes are capable of insertion into the heterocumulene system. Raman spectroscopic investigations indicate that aryl-substituted isothiocyanates predominantly add to the C═N bond and that the size of the ring-shaped ligands of the zinc complex also has a very significant influence on the selectivity as well as the reactivity. Unfortunately, the activated isothiocyanate is not able to add to the corresponding mercaptan to invoke a CA-analogous catalytic cycle; however, more reactive compounds such as methyl iodide can be incorporated. This work gives new insight into the mode of action and into reaction path variants derived from the CA principles. Further, aspects of the reliability of DFT calculations in predicting selectivity and reactivity are discussed. In addition, the presented synthetic pathways can offer completely new access to a variety of dithiocarbamates. PMID:21405064

  13. A Three-Dimensional DOSY HMQC Experiment for the High-Resolution Analysis of Complex Mixtures

    NASA Astrophysics Data System (ADS)

    Barjat, Hervé; Morris, Gareth A.; Swanson, Alistair G.

    1998-03-01

    A three-dimensional experiment is described in which NMR signals are separated according to their proton chemical shift,13C chemical shift, and diffusion coefficient. The sequence is built up from a stimulated echo sequence with bipolar field gradient pulses and a conventional decoupled HMQC sequence. Results are presented for a model mixture of quinine, camphene, and geraniol in deuteriomethanol.
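
    The diffusion dimension of such experiments rests on the standard Stejskal-Tanner attenuation of the (stimulated) echo; in its basic form (bipolar pulse pairs add a small correction to the effective timing), with gyromagnetic ratio gamma, gradient amplitude g, gradient duration delta, and diffusion delay Delta:

    ```latex
    \frac{S(g)}{S(0)} =
      \exp\!\left[-\gamma^{2} g^{2} \delta^{2} D\left(\Delta - \tfrac{\delta}{3}\right)\right]
    ```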

  14. Indian Consortia Models: FORSA Libraries' Experiences

    NASA Astrophysics Data System (ADS)

    Patil, Y. M.; Birdie, C.; Bawdekar, N.; Barve, S.; Anilkumar, N.

    2007-10-01

    With rising journal prices, shrinking library budgets, and cuts in journal subscriptions over the years, Indian library professionals have faced a major challenge in coping with the proliferation of electronic information resources. There have been sporadic efforts by different groups of libraries to form consortia at different levels. The types of consortia identified are generally based on various models evolved in India in a variety of forms, depending upon the participants' affiliations and funding sources. Indian astronomy library professionals have formed a group called the Forum for Resource Sharing in Astronomy and Astrophysics (FORSA), which falls under 'Open Consortia', wherein participants are affiliated to different government departments. This is a model where professionals willingly come forward and actively support consortia formation, so that everyone benefits. As such, FORSA has realized four consortia, viz. the Nature Online Consortium; the Indian Astrophysics Consortium for physics/astronomy journals of Springer/Kluwer; the Consortium for Scientific American Online Archive (EBSCO); and the Open Consortium for Lecture Notes in Physics (Springer), which are discussed briefly.

  15. Experiments and Valve Modelling in Thermoacoustic Device

    NASA Astrophysics Data System (ADS)

    Duthil, P.; Baltean Carlès, D.; Bétrancourt, A.; François, M. X.; Yu, Z. B.; Thermeau, J. P.

    2006-04-01

    In a so-called heat-driven thermoacoustic refrigerator, using either a pulse tube or a lumped boost configuration, heat pumping is induced by Stirling-type thermodynamic cycles within the regenerator. The time phase between acoustic pressure and flow rate throughout must then be close to that of a purely progressive wave. The study presented here reports the experimental characterization of passive elements, such as valves, tubes, and tanks, which are likely to act on this phase relationship when included in the propagation line of the wave resonator. In order to characterize these elements acoustically, systematic measurements of the acoustic field were performed while varying several parameters: mean pressure, oscillation frequency, and supplied heat power. The acoustic waves are generated by a thermoacoustic prime mover driving a pulse tube refrigerator. The experimental results are then compared with solutions obtained from various one-dimensional linear models including nonlinear correction factors. It turns out that, for a non-symmetrical valve and for large dissipative effects, the measurements disagree with the linear modelling, and nonlinear behaviour of this particular element is demonstrated.

  16. The OECI model: the CRO Aviano experience.

    PubMed

    Da Pieve, Lucia; Collazzo, Raffaele; Masutti, Monica; De Paoli, Paolo

    2015-01-01

    In 2012, the "Centro di Riferimento Oncologico" (CRO) National Cancer Institute joined the accreditation program of the Organisation of European Cancer Institutes (OECI) and was one of the first institutes in Italy to receive recognition as a Comprehensive Cancer Center. At the end of the project, a strengths, weaknesses, opportunities, and threats (SWOT) analysis aimed at identifying the pros and cons, both for the institute and of the accreditation model in general, was performed. The analysis shows significant strengths, such as the affinity with other improvement systems and current regulations, and the focus on a multidisciplinary approach. The proposed suggestions for improvement concern mainly the structure of the standards and aim to facilitate the assessment, benchmarking, and sharing of best practices. The OECI accreditation model provided a valuable executive tool and a framework in which we can identify several important development projects. An additional impact for our institute is the participation in the project BenchCan, of which the OECI is lead partner. PMID:27096265

  17. Simultaneous application of dissolution/precipitation and surface complexation/surface precipitation modeling to contaminant leaching.

    PubMed

    Apul, Defne S; Gardner, Kevin H; Eighmy, T Taylor; Fällman, Ann-Marie; Comans, Rob N J

    2005-08-01

    This paper discusses the modeling of anion and cation leaching from complex matrices such as weathered steel slag. The novelty of the method is its simultaneous application of theoretical models for solubility, competitive sorption, and surface precipitation phenomena to a complex system. Selective chemical extractions, pH-dependent leaching experiments, and geochemical modeling were used to investigate the thermodynamic equilibrium of 12 ions (As, Ca, Cr, Ba, SO4, Mg, Cd, Cu, Mo, Pb, V, and Zn) with aqueous complexes, soluble solids, and sorptive surfaces in the presence of 12 background analytes (Al, Cl, Co, Fe, K, Mn, Na, Ni, Hg, NO3, CO3, and Ba). Modeling results show that surface complexation and surface precipitation reactions limit the aqueous concentrations of Cd, Zn, and Pb in an environment where Ca, Mg, Si, and CO3 dissolve from soluble solids and compete for sorption sites. The leaching of SO4, Cr, As, Si, Ca, and Mg appears to be controlled by the corresponding soluble solids.

  18. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has addressed prediction for analog circuits. The few existing methods lack a link to circuit analysis when extracting and calculating features, so that calculation of the fault indicator (FI) often lacks a rational basis, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and performs complex-field modeling. Then, using an established parameter-scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set. From this feature set, it establishes a novel model of the degeneration trend of single components. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of single components of analog circuits. Because the FI feature set is calculated more reasonably, prediction accuracy is improved to some extent. The conclusions are verified by experiments. PMID:25147853
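
    The filtering step can be sketched as a minimal bootstrap particle filter (the random-walk state model and noise levels are my assumptions, not the paper's circuit-derived degeneration model):

    ```python
    # Sketch: bootstrap particle filter tracking a drifting degradation state.
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 50, 1000
    true = 1.0 + 0.02 * np.arange(T)             # slowly drifting parameter
    obs = true + rng.normal(0, 0.05, T)          # noisy fault-indicator readings

    parts = rng.normal(1.0, 0.1, N)              # initial particle cloud
    est = np.empty(T)
    for t in range(T):
        parts += rng.normal(0, 0.02, N)          # propagate: random walk
        w = np.exp(-0.5 * ((obs[t] - parts) / 0.05) ** 2)   # Gaussian likelihood
        w /= w.sum()
        est[t] = w @ parts                       # posterior-mean estimate
        parts = parts[rng.choice(N, N, p=w)]     # multinomial resampling

    print(est[-1], true[-1])
    ```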

  19. Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids

    SciTech Connect

    Forest, Mark Gregory

    2014-05-06

    The team for this project made significant progress on modeling and algorithmic approaches to the hydrodynamics of fluids with complex microstructure. In experiments, a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparently linear response regime. The modeling explains the take-off as a deformation of entanglements, while the long-time behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value: we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and to the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed, leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids were extended to include coupling between microscopic and macroscopic models and to reach the strongly nonlinear regime.

  1. Synergistic benefits of combined technologies in complex, minimally invasive surgical procedures. Clinical experience and educational processes.

    PubMed

    Geis, W P; Kim, H C; McAfee, P C; Kang, J G; Brennan, E J

    1996-10-01

    The new burden surgical technology must assume demands not only improved efficiency and reduced risk, but also diminished cost and resource utilization. To this end, we have instituted the use of multiple, sequential technologies in complex, minimally invasive procedures: laparoscopic gastric surgery (44 cases), spine procedures (38 cases), and colectomies (96 cases). The technologies include head-mounted display, 3-D optics, robotic arm, harmonic scalpel, and optical access trocars. The combined use of these technologies shortened operative times, reduced the use of personnel, and was associated with no technical mishaps. Surgeon concentration and control of the operative environment were increased. To promote the combined use of technologies, a structured teaching process was designed and implemented; on average, five hands-on sessions were required for efficient implementation of the combined technologies. We conclude that combined use of sophisticated technologies is safe and efficient, is accomplished by a structured, moderately intense educational experience, and diminishes cost and the use of human resources.

  2. Characterizing and Modeling the Noise and Complex Impedance of Feedhorn-Coupled TES Polarimeters

    SciTech Connect

    Appel, J. W.; Beall, J. A.; Essinger-Hileman, T.; Parker, L. P.; Staggs, S. T.; Visnjic, C.; Zhao, Y.; Austermann, J. E.; Halverson, N. W.; Henning, J. W.; Simon, S. M.; Becker, D.; Britton, J.; Cho, H. M.; Hilton, G. C.; Irwin, K. D.; Niemack, M. D.; Yoon, K. W.; Benson, B. A.; Bleem, L. E.

    2009-12-16

    We present results from modeling the electrothermal performance of feedhorn-coupled transition edge sensor (TES) polarimeters under development for use in cosmic microwave background (CMB) polarization experiments. Each polarimeter couples radiation from a corrugated feedhorn through a planar orthomode transducer, which transmits power from orthogonal polarization modes to two TES bolometers. We model our TES with two- and three-block thermal architectures. We fit the complex impedance data at multiple points in the TES transition. From the fits, we predict the noise spectra. We present comparisons of these predictions to the data for two TESes on a prototype polarimeter.

  3. Process modelling for space station experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1988-01-01

    The work performed during the first year (1 Oct. 1987 to 30 Sept. 1988) involved analyses of crystal growth from the melt and from solution. The particular melt growth technique under investigation is directional solidification by the Bridgman-Stockbarger method. Two types of solution growth systems are also being studied: one involves growth from solution in a closed container, the other concerns growth of protein crystals by the hanging drop method. Following discussions with Dr. R. J. Naumann of the Low Gravity Science Division at MSFC, it was decided to tackle the analysis of crystal growth from the melt earlier than originally proposed. Rapid progress was made in this area: work is on schedule, and full calculations have been underway for some time. Progress was also made in the formulation of the two solution growth models.

  4. Preconditioning the bidomain model with almost linear complexity

    NASA Astrophysics Data System (ADS)

    Pierre, Charles

    2012-01-01

    The bidomain model is widely used in electro-cardiology to simulate the spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction-diffusion equations coupled with an ODE system. Its discretisation yields an ill-conditioned system matrix that must be inverted at each time step; simulations based on the bidomain model are therefore associated with high computational costs. In this paper we propose a preconditioning for the bidomain model, either for an isolated heart or in an extended framework including coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system, together with a heuristic approximation (referred to as the monodomain approximation), are the key ingredients of the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration count) shows an almost linear complexity with the problem size, i.e., of type n log^α(n) (for some constant α), which is optimal complexity for such problems.
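
    The payoff of preconditioning ill-conditioned systems of this kind can be illustrated generically (a toy 1-D SPD operator with widely varying coefficients and a simple diagonal approximate inverse; not the paper's bidomain matrices or its monodomain-based preconditioner):

    ```python
    # Sketch: CG iteration counts with and without a cheap preconditioner.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    rng = np.random.default_rng(0)
    n = 2000
    main = 2.0 + 1e3 * rng.random(n)             # rough, widely varying diagonal
    A = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1], format="csr")
    b = np.ones(n)
    M = spla.LinearOperator((n, n), matvec=lambda v: v / main)  # Jacobi-style

    def cg_iters(**kw):
        count = [0]
        def cb(_):
            count[0] += 1
        spla.cg(A, b, callback=cb, **kw)
        return count[0]

    print("CG iterations without/with preconditioning:", cg_iters(), cg_iters(M=M))
    ```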

  5. Bloch-Redfield equations for modeling light-harvesting complexes.

    PubMed

    Jeske, Jan; Ing, David J; Plenio, Martin B; Huelga, Susana F; Cole, Jared H

    2015-02-14

    We challenge the misconception that Bloch-Redfield equations are a less powerful tool than phenomenological Lindblad equations for modeling exciton transport in photosynthetic complexes. This view predominantly originates from an indiscriminate use of the secular approximation. We provide a detailed description of how to model both coherent oscillations and several types of noise, giving explicit examples. All issues with non-positivity are overcome by a consistent straightforward physical noise model. Herein also lies the strength of the Bloch-Redfield approach because it facilitates the analysis of noise-effects by linking them back to physical parameters of the noise environment. This includes temporal and spatial correlations and the strength and type of interaction between the noise and the system of interest. Finally, we analyze a prototypical dimer system as well as a 7-site Fenna-Matthews-Olson complex in regards to spatial correlation length of the noise, noise strength, temperature, and their connection to the transfer time and transfer probability.

  6. NMR-derived model for a peptide-antibody complex

    SciTech Connect

    Zilber, B.; Scherf, T.; Anglister, J.; Levitt, M.

    1990-10-01

    The TE34 monoclonal antibody against cholera toxin peptide 3 (CTP3; VEVPGSQHIDSQKKA) was sequenced and investigated by two-dimensional transferred NOE difference spectroscopy and molecular modeling. The V_H sequence of TE34, which does not bind cholera toxin, shares remarkable homology with that of TE32 and TE33, which are both anti-CTP3 antibodies that bind the toxin. However, due to a shortened heavy chain CDR3, TE34 assumes a radically different combining site structure. The assignment of the combining site interactions to specific peptide residues was completed by use of AcIDSQRKA, a truncated peptide analogue in which lysine-13 was substituted by arginine, specific deuteration of individual polypeptide chains of the antibody, and a computer model for the Fv fragment of TE34. NMR-derived distance restraints were then applied to the calculated model of the Fv to generate a three-dimensional structure of the TE34/CTP3 complex. The combining site was found to be a very hydrophobic cavity composed of seven aromatic residues. Charged residues are found in the periphery of the combining site. The peptide residues HIDSQKKA form a β-turn inside the combining site. The contact area between the peptide and the TE34 antibody is 388 Å², about half of the contact area observed in protein-antibody complexes.

  7. A computational model for cancer growth by using complex networks

    NASA Astrophysics Data System (ADS)

    Galvão, Viviane; Miranda, José G. V.

    2008-09-01

    In this work we propose a computational model to investigate the proliferation of cancerous cells using complex networks. In our model the network represents the structure of the space available for cancer propagation. The computational scheme starts with a cancerous cell randomly placed in the complex network. As the system evolves, the cells can assume three states: proliferative, non-proliferative, and necrotic. Our results were compared with experimental data obtained from three human lung carcinoma cell lines. The computational simulations show that the cancerous cells have a Gompertzian growth. The model also simulates the formation of necrosis, the increase of density, and the diffusion of resources to regions of lower nutrient concentration. We find that cancer growth is very similar in random and small-world networks, although the topological structure of the small-world network is more strongly affected. The scale-free network has the largest rates of cancer growth due to hub formation. Finally, our results indicate that, for different average degrees, the rate of cancer growth is related to the available space in the network.
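
    A hedged re-implementation sketch of growth rules of this general kind (the network, rates, and update rules here are mine, chosen for brevity): cells occupy nodes, proliferative cells divide into free neighboring nodes, and cells with no free neighbors become non-proliferative.

    ```python
    # Sketch: tumor growth on a small-world network of available space.
    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    G = nx.watts_strogatz_graph(2000, k=6, p=0.1, seed=0)

    state = {0: "P"}                        # node -> state; seed cell at node 0
    sizes = []
    for step in range(60):
        for node in [n for n, s in list(state.items()) if s == "P"]:
            free = [v for v in G.neighbors(node) if v not in state]
            if free:
                state[rng.choice(free)] = "P"   # divide into a random free site
            else:
                state[node] = "N"               # no space: non-proliferative
        sizes.append(len(state))

    print(sizes[:10], sizes[-1])            # growth saturates as space fills
    ```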

  8. When do evolutionary food web models generate complex networks?

    PubMed

    Allhoff, Korinna T; Drossel, Barbara

    2013-10-01

    Evolutionary food web models are used to build food webs by the repeated addition of new species. Population dynamics leads to the extinction or establishment of a newly added species, and possibly to the extinction of other species. The food web structure that emerges after some time is a highly nontrivial result of the evolutionary and dynamical rules. We investigate the evolutionary food web model introduced by Loeuille and Loreau (2005), which characterizes species by their body mass as the only evolving trait. Our goal is to find the reasons behind the model's remarkable robustness and its capability to generate varied and stable networks. In contrast to other evolutionary food web models, this model requires neither adaptive foraging nor allometric scaling of metabolic rates with body mass in order to produce complex networks that do not eventually collapse to trivial structures. Our study shows that this is essentially due to the fact that the difference in niche value between predator and prey, as well as the feeding range, are constrained so that they remain within narrow limits under evolution. Furthermore, competition between similar species is sufficiently strong that a trophic level can accommodate several species. We discuss the implications of these findings and argue that the conditions that stabilize other evolutionary food web models have similar effects because they also prevent the occurrence of extreme specialists or extreme generalists, which in general have a higher fitness than species with a moderate niche width.

  9. Model Complexity in Diffusion Modeling: Benefits of Making the Model More Parsimonious

    PubMed Central

    Lerche, Veronika; Voss, Andreas

    2016-01-01

    The diffusion model (Ratcliff, 1978) takes into account the reaction time distributions of both correct and erroneous responses from binary decision tasks. This high degree of information usage allows the estimation of different parameters mapping cognitive components such as speed of information accumulation or decision bias. For three of the four main parameters (drift rate, starting point, and non-decision time) trial-to-trial variability is allowed. We investigated the influence of these variability parameters both drawing on simulation studies and on data from an empirical test-retest study using different optimization criteria and different trial numbers. Our results suggest that less complex models (fixing intertrial variabilities of the drift rate and the starting point at zero) can improve the estimation of the psychologically most interesting parameters (drift rate, threshold separation, starting point, and non-decision time). PMID:27679585
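
    For readers who want the flavor of the model, here is a minimal simulation sketch of the basic diffusion model without the trial-to-trial variability parameters (Euler scheme; parameter values are illustrative): drift rate v, threshold separation a, starting point z, and non-decision time t0.

    ```python
    # Sketch: simulate response times and choices from a basic diffusion model.
    import numpy as np

    rng = np.random.default_rng(0)

    def ddm_trial(v=1.0, a=1.5, z=0.75, t0=0.3, dt=1e-3, s=1.0):
        x, t = z, 0.0
        while 0.0 < x < a:                      # evidence accumulates to a bound
            x += v * dt + s * np.sqrt(dt) * rng.normal()
            t += dt
        return t + t0, x >= a                   # RT and upper-bound choice

    rts, upper = zip(*(ddm_trial() for _ in range(2000)))
    print(np.mean(upper), np.mean(rts))         # P(upper) and mean RT
    ```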

  12. FIELD EXPERIMENTS AND MODELING AT CDG AIRPORTS

    NASA Astrophysics Data System (ADS)

    Ramaroson, R.

    2009-12-01

    Richard Ramaroson (1,4), Klaus Schaefer (2), Stefan Emeis (2), Carsten Jahn (2), Gregor Schürmann (2), Maria Hoffmann (2), Mikhael Zatevakhin (3), Alexandre Ignatyev (3). (1) ONERA, Châtillon, France; (2) FZK, Garmisch, Germany; (3) FSUE SPbAEP, St Petersburg, Russia; (4) SEAS, Harvard University, Cambridge, USA. Two-month field campaigns were organized at CDG airports in autumn 2004 and summer 2005. Air quality and ground air-traffic emissions were monitored continuously at terminals and taxi-runways, along with meteorological parameters measured onboard trucks and with a SODAR. This paper analyses the characteristics of commercial engine emissions at airports and their effects on gaseous pollutants and airborne particles, coupled to meteorology. LES model results for PM dispersion coupled to microphysics in the PBL are compared to measurements. Winds and temperature at the surface and their vertical profiles were recorded together with turbulence. SODAR observations show the time development of the mixing-layer depth and turbulent mixing in summer up to 800 m. Active low-level jets and their regional extent were observed and analyzed. PM number and mass size distribution, morphology and chemical contents are investigated. Formation of new ultrafine volatile (UFV) particles in the ambient plume downstream of running engines is observed. Soot particles are observed at significant levels mostly at the high power thrusts of take-off (TO) and touch-down, whereas at the lower thrusts used at taxi and aprons the UFV PM emissions become higher. Ambient airborne PM1/2.5 is closely correlated with air traffic volume and shows a maximum beside runways. The PM number distribution at airports is composed mainly of volatile UFV PM, most abundant at aprons. Ambient PM mass in autumn is higher than in summer. The expected differences between TO and taxi emissions are confirmed for NO, NO2, speciated VOC and CO. NO/NO2 emissions are larger at runways due to higher power. Reactive VOC and CO are produced more at low power during idling at

  13. Application of Peterson's stray light model to complex optical instruments

    NASA Astrophysics Data System (ADS)

    Fray, S.; Goepel, M.; Kroneberger, M.

    2016-07-01

    Gary L. Peterson (Breault Research Organization) presented a simple analytical model for in-field stray light evaluation of axial optical systems. We exploited this idea for more complex optical instruments of the Meteosat Third Generation (MTG) mission. For the Flexible Combined Imager (FCI) we evaluated the in-field stray light of its three-mirror anastigmat telescope, while for the Infrared Sounder (IRS) we performed an end-to-end analysis including the front telescope, interferometer and back telescope assembly and the cold optics. A comparison to simulations will be presented. The authors acknowledge the support by ESA and Thales Alenia Space through the MTG satellites program.

  14. Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain

    NASA Technical Reports Server (NTRS)

    Kao, David; Kramer, Marc; Chaderjian, Neal

    2005-01-01

    The release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture the interaction of fluid flow with terrain as well as the downstream flow paths of point-source dispersal. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities that allow for rapid emergency response as well as for developing a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massively parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of Northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.

  15. The modeling of complex continua: Fundamental obstacles and grand challenges

    SciTech Connect

    Not Available

    1993-01-01

    The research is divided into four areas: discontinuities and adaptive computation, chaotic flows, dispersion of flow in porous media, and nonlinear waves and nonlinear materials. The research program has emphasized innovative computation and theory. The approach depends on abstracting mathematical concepts and computational methods from individual applications to a wide range of problems involving complex continua. The generic difficulties in the modeling of continua that guide this abstraction are multiple length and time scales, microstructures (bubbles, droplets, vortices, crystal defects), and chaotic or random phenomena described by a statistical formulation.

  16. The use of workflows in the design and implementation of complex experiments in macromolecular crystallography

    PubMed Central

    Brockhauser, Sandor; Svensson, Olof; Bowler, Matthew W.; Nanao, Max; Gordon, Elspeth; Leal, Ricardo M. F.; Popov, Alexander; Gerring, Matthew; McCarthy, Andrew A.; Gotz, Andy

    2012-01-01

    The automation of beam delivery, sample handling and data analysis, together with increasing photon flux, diminishing focal spot size and the appearance of fast-readout detectors on synchrotron beamlines, have changed the way that many macromolecular crystallography experiments are planned and executed. Screening for the best diffracting crystal, or even the best diffracting part of a selected crystal, has been enabled by the development of microfocus beams, precise goniometers and fast-readout detectors that all require rapid feedback from the initial processing of images in order to be effective. All of these advances require the coupling of data feedback to the experimental control system and depend on immediate online data-analysis results during the experiment. To facilitate this, a Data Analysis WorkBench (DAWB) for the flexible creation of complex automated protocols has been developed. Here, example workflows designed and implemented using DAWB are presented for enhanced multi-step crystal characterizations, experiments involving crystal reorientation with kappa goniometers, crystal-burning experiments for empirically determining the radiation sensitivity of a crystal system and the application of mesh scans to find the best location of a crystal to obtain the highest diffraction quality. Beamline users interact with the prepared workflows through a specific brick within the beamline-control GUI MXCuBE. PMID:22868763

  18. Does model performance improve with complexity? A case study with three hydrological models

    NASA Astrophysics Data System (ADS)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time, simple conceptual models have also advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in the case of runoff but not for soil moisture. Furthermore, the most sophisticated PREVAH model shows an added value compared to the HBV model only in the case of soil moisture. Focusing on extreme events, we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance at lower altitudes as opposed to (pre-)alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
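
    The abstract does not spell out the agreement metrics used, so the following sketch assumes two metrics commonly applied in such runoff and soil-moisture comparisons, Nash-Sutcliffe efficiency and anomaly correlation, on synthetic data:

      # Two common agreement metrics for hydrological series (synthetic data).
      import numpy as np

      def nash_sutcliffe(sim, obs):
          sim, obs = np.asarray(sim), np.asarray(obs)
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def anomaly_correlation(sim, obs):
          sim, obs = np.asarray(sim), np.asarray(obs)
          return np.corrcoef(sim - sim.mean(), obs - obs.mean())[0, 1]

      obs = np.sin(np.linspace(0, 12, 365)) + 1.5                # "observed" runoff
      sim = obs + np.random.default_rng(2).normal(0, 0.3, 365)   # one model run
      print(f"NSE={nash_sutcliffe(sim, obs):.2f}, r={anomaly_correlation(sim, obs):.2f}")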

  19. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1994-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement techniques for transport properties necessary for high fidelity modelling of crystal growth from the melt and solution. Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and development of an expedient diffusivity measurement technique for concentrated and supersaturated solutions. We have focused on developing a code to solve for interface shape, heat and species transport during directional solidification. The work involved the computation of heat, mass and momentum transfer during Bridgman-Stockbarger solidification of compound semiconductors. Domain decomposition techniques and preconditioning methods were used in conjunction with Chebyshev spectral methods to accelerate convergence while retaining the high-order spectral accuracy. During the report period we have further improved our experimental setup. These improvements include: temperature control of the measurement cell to 0.1 °C between 10 and 60 °C; enclosure of the optical measurement path outside the ZYGO interferometer in a metal housing that is temperature controlled to the same temperature setting as the measurement cell; simultaneous dispensing and partial removal of the lower concentration (lighter) solution above the higher concentration (heavier) solution through independently motor-driven syringes; three-fold increase in data resolution by orientation of the interferometer with respect to diffusion direction; and increase of the optical path length in the solution cell to 12 mm.

  20. The Eemian climate simulated by two models of different complexities

    NASA Astrophysics Data System (ADS)

    Nikolova, Irina; Yin, Qiuzhen; Berger, Andre; Singh, Umesh; Karami, Pasha

    2013-04-01

    The Eemian period, also known as MIS-5, experienced a warmer-than-today climate, a reduction in ice sheets and a substantial sea-level rise. These features have made the Eemian an appropriate period for evaluating climate models forced with astronomical and greenhouse-gas forcings different from today's. In this work, we present the Eemian climate simulated by two climate models of different complexities, LOVECLIM (LLN Earth system model of intermediate complexity) and CCSM3 (NCAR atmosphere-ocean general circulation model). Feedbacks from sea ice, vegetation, the monsoon and ENSO phenomena are discussed to explain the regional similarities/dissimilarities in both models with respect to the pre-industrial (PI) climate. Significant warming (cooling) over almost all the continents during boreal summer (winter) leads to a largely increased (reduced) seasonal contrast in the northern (southern) hemisphere, mainly due to the much higher (lower) insolation received by the whole Earth in boreal summer (winter). The Arctic is warmer than at PI throughout the year, resulting from its much higher summer insolation and its remnant effect in the following fall-winter through the interactions between atmosphere, ocean and sea ice. Regional discrepancies exist in the sea-ice formation zones between the two models. Excessive sea-ice formation in CCSM3 results in intense regional cooling. In both models an intensified African monsoon and vegetation feedback are responsible for the summer cooling in North Africa and on the Arabian Peninsula. Over India the precipitation maximum is found further west, while in Africa the precipitation maximum migrates further north. Trees and grassland expand north in the Sahel/Sahara, trees being more abundant in the results from LOVECLIM than from CCSM3. A mix of forest and grassland occupies the continents and expands deep into the high northern latitudes, in line with proxy records. Desert areas reduce significantly in the Northern Hemisphere, but increase in North

  1. Uncertainty and error in complex plasma chemistry models

    NASA Astrophysics Data System (ADS)

    Turner, Miles M.

    2015-06-01

    Chemistry models that include dozens of species and hundreds to thousands of reactions are common in low-temperature plasma physics. The rate constants used in such models are uncertain, because they are obtained from some combination of experiments and approximate theories. Since the predictions of these models are a function of the rate constants, these predictions must also be uncertain. However, systematic investigations of the influence of uncertain rate constants on model predictions are rare to non-existent. In this work we examine a particular chemistry model, for helium-oxygen plasmas. This chemistry is of topical interest because of its relevance to biomedical applications of atmospheric pressure plasmas. We trace the primary sources for every rate constant in the model, and hence associate an error bar (or equivalently, an uncertainty) with each. We then use a Monte Carlo procedure to quantify the uncertainty in predicted plasma species densities caused by the uncertainty in the rate constants. Under the conditions investigated, the range of uncertainty in most species densities is a factor of two to five. However, the uncertainty can vary strongly for different species, over time, and with other plasma conditions. There are extreme (pathological) cases where the uncertainty is more than a factor of ten. One should therefore be cautious in drawing any conclusion from plasma chemistry modelling, without first ensuring that the conclusion in question survives an examination of the related uncertainty.
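
    The Monte Carlo procedure itself is compact to sketch: draw each rate constant from within its error bar (log-uniformly, one plausible choice), integrate the chemistry, and report the spread of predicted densities. The three-reaction toy chain below is our invention, not the paper's helium-oxygen set:

      # Monte Carlo propagation of rate-constant uncertainty (toy 3-reaction
      # chain standing in for the real helium-oxygen chemistry).
      import numpy as np
      from scipy.integrate import solve_ivp

      rng = np.random.default_rng(3)
      k_nom = np.array([1e-10, 3e-12, 5e-11])  # nominal rate constants (cm3/s), invented
      k_err = np.array([2.0, 5.0, 2.0])        # multiplicative uncertainty factors, invented

      def rhs(t, y, k):
          e, o2, o = y                      # electron, O2 and O densities (cm-3)
          r0 = k[0] * e * o2                # e + O2 -> e + O + O
          r1 = k[1] * o * o2                # O + O2 -> loss (toy sink)
          r2 = k[2] * e * o                 # e + O  -> loss
          return [0.0, -r0 - r1, 2 * r0 - r1 - r2]   # electrons held fixed

      finals = []
      for _ in range(200):
          k = k_nom * k_err ** rng.uniform(-1, 1, 3)   # log-uniform within error bars
          sol = solve_ivp(rhs, (0.0, 1e-3), [1e10, 1e16, 0.0], args=(k,), rtol=1e-6)
          finals.append(sol.y[2, -1])

      lo, hi = np.percentile(finals, [2.5, 97.5])
      print(f"predicted O density: 95% interval spans a factor of {hi / lo:.1f}")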

  2. Optimal experiment design for model selection in biochemical networks

    PubMed Central

    2014-01-01

    Background Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. Results We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen-Shannon divergence between the multivariate predictive densities of competing models. Conclusions We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors. PMID:24555498
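
    A sketch of the design criterion, assuming the Wang-style k-nearest-neighbor KL estimator and a simple sample-splitting approximation of the mixture density (our reading of the approach, not the authors' code):

      # k-NN estimate of the Jensen-Shannon divergence between samples drawn
      # from two models' predictive densities.
      import numpy as np
      from scipy.spatial import cKDTree

      def knn_kl(x, y, k=1):
          """KL(P||Q) from samples x ~ P and y ~ Q."""
          n, d = x.shape
          m = y.shape[0]
          rho = cKDTree(x).query(x, k + 1)[0][:, -1]   # k-NN distance within x (skip self)
          nu = cKDTree(y).query(x, k)[0]
          nu = nu if nu.ndim == 1 else nu[:, -1]       # k-NN distance from x into y
          return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

      def js_divergence(x, y, k=1):
          x1, x2 = x[: len(x) // 2], x[len(x) // 2:]   # split so query and mixture
          y1, y2 = y[: len(y) // 2], y[len(y) // 2:]   # sets share no points
          return 0.5 * knn_kl(x1, np.vstack([x2, y2]), k) \
               + 0.5 * knn_kl(y2, np.vstack([x1, y1]), k)

      a = np.random.default_rng(1).normal(0.0, 1.0, (1000, 2))   # model A predictions
      b = np.random.default_rng(2).normal(0.5, 1.0, (1000, 2))   # model B predictions
      print(f"JS ~ {js_divergence(a, b):.3f} (larger = experiment discriminates better)")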

  3. A Range-Based Method for Complex Facade Modeling

    NASA Astrophysics Data System (ADS)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    3D modelling of architectural heritage does not follow a single well-defined path: it proceeds through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation and the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different ways to realize a digital model. In particular we can choose between two different approaches: the mesh and the solid model. In the first case the complexity of the architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the opposite case the 3D digital model can be realized by the use of simple geometrical shapes, sweeping algorithms and Boolean operations. Obviously these two models are not the same, and each one is characterized by peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm or the quasi-automatic modelling from known shapes) and the final result (a more detailed and complex mesh versus a more approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3D digital models of the facades of complex architecture, to be used for example in city models or in other large-scale representations. This way of modelling also guarantees small files that can be published on the web or transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets surveyed by topography, and then filtered in order to obtain a well-controlled and homogeneous point cloud of

  4. The first naphthosemiquinone complex of K+ with vitamin K3 analog: Experiment and density functional theory

    NASA Astrophysics Data System (ADS)

    Kathawate, Laxmi; Gejji, Shridhar P.; Yeole, Sachin D.; Verma, Prakash L.; Puranik, Vedavati G.; Salunke-Gawali, Sunita

    2015-05-01

    Synthesis and characterization of the potassium complex of 2-hydroxy-3-methyl-1,4-naphthoquinone (phthiocol), the vitamin K3 analog, has been carried out using FT-IR, UV-Vis, 1H and 13C NMR, EPR, cyclic voltammetry and single-crystal X-ray diffraction experiments combined with density functional theory. It has been observed that the naphthosemiquinone binds to two K+ ions, extending the polymeric chain through the bridging oxygens O(2) and O(3). The crystal network possesses hydrogen-bonding interactions from coordinated water molecules, showing water channels along the c-axis. 13C NMR spectra revealed that the complexation of phthiocol with the potassium ion engenders deshielding of the C(2) signals, which appear at δ = ∼14.6 ppm, whereas those of C(3) exhibit up-field signals near δ ∼ 6.9 ppm. These inferences are supported by the M06-2x based density functional theory. Electrochemical experiments further suggest that reduction of the naphthosemiquinone results in only a cathodic peak from catechol. A triplet state arising from interactions between neighboring phthiocol anions leads to a half-field signal at g = 4.1 in the polycrystalline X-band EPR spectra at 133 K.

  5. Thermohaline feedbacks in ocean-climate models of varying complexity

    NASA Astrophysics Data System (ADS)

    den Toom, M.

    2013-03-01

    explicitly resolves eddies, and a model in which eddies are parameterized. It is found that the behavior of an eddy-resolving model is qualitatively different from that of a non-eddying model. What is clear at this point is that the AMOC is governed by non-linear dynamics. As a result, its simulated behavior depends in a non-trivial way on how unresolved processes are represented in a model. As demonstrated in this thesis, model fidelity can be effectively assessed by examining models of varying complexity.

  6. Modeling of complex systems using nonlinear, flexible multibody dynamics

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jesus Diaz

    Finite element based multibody dynamics formulations extend the applicability of classical finite element methods to the modeling of flexible mechanisms. A general computer code will include rigid and flexible bodies, such as beams, joints, and active elements. These procedures are designed to overcome the modeling limitations of conventional multibody formulations that are often restricted to the analysis of rigid systems or use a modal representation to model the flexibility of elastic components. As multibody formulations become more widely accepted, the need to model a wider array of phenomena increases. The goal of this work is to present a methodology for the analysis of complex systems that may require the modeling of new joints and elements, or include the effects of clearance, freeplay or friction in the joints. Joints are essential components of multibody systems, rigid or flexible. Usually, joints are modeled as perfect components. In actual joints, clearance, freeplay, friction, lubrication and impact forces can have a significant effect on the dynamic response of the system. Certain systems require the formulation of new joints for their analysis. One of these is the curve sliding joint, which enforces the sliding of a body on a rigid curve connected to another body. The curve sliding joint is especially useful when modeling a vibration absorber device mounted on the rotor hub of rotorcraft: the bifilar pendulum. The formulation of a new modal based element is also presented. A modal based element is a model of an elastic substructure that includes a modal representation of elastic effects together with large rigid body motions. The proposed approach makes use of a component mode synthesis technique that allows the analyst to choose any type of modal basis and simplifies the connection to other multibody elements. The formulation is independent of the finite element analysis package used to compute the modes of the elastic component.

  7. Modeling of the formation of complex molecules in protostellar objects

    NASA Astrophysics Data System (ADS)

    Kochina, O. V.; Wiebe, D. S.; Kalenskii, S. V.; Vasyunin, A. I.

    2013-11-01

    The results of molecular composition modeling are presented for the well-studied low-mass star-forming region TMC-1 and the massive star-forming region DR21(OH), which is poorly studied from a chemical point of view. The column densities of dozens of molecules, ranging from simple diatomic to complex organic molecules, are reproduced to within an order of magnitude using a one-dimensional model for the physical and chemical structure of these regions. The chemical ages of the regions are approximately 10^5 years in both cases. The main desorption mechanisms that are usually included in chemical models (photodesorption, thermal desorption, and cosmic-ray-induced desorption) do not provide sufficient gas-phase abundances of molecules that are synthesized in surface reactions; however, this shortcoming can be removed by introducing a small amount of reactive desorption into the model. It is possible to reproduce the properties of the TMC-1 chemical composition in a standard model, without requiring additional assumptions about an anomalous C/O ratio or the recent accretion of matter enriched with atomic carbon, as has been proposed by some researchers.

  8. Velocity response curves demonstrate the complexity of modeling entrainable clocks.

    PubMed

    Taylor, Stephanie R; Cheever, Allyson; Harmon, Sarah M

    2014-12-21

    Circadian clocks are biological oscillators that regulate daily behaviors in organisms across the kingdoms of life. Their rhythms are generated by complex systems, generally involving interlocked regulatory feedback loops. These rhythms are entrained by the daily light/dark cycle, ensuring that the internal clock time is coordinated with the environment. Mathematical models play an important role in understanding how the components work together to function as a clock which can be entrained by light. For a clock to entrain, it must be possible for it to be sped up or slowed down at appropriate times. To understand how biophysical processes affect the speed of the clock, one can compute velocity response curves (VRCs). Here, in a case study involving the fruit fly clock, we demonstrate that VRC analysis provides insight into a clock's response to light. We also show that biochemical mechanisms and parameters together determine a model's ability to respond realistically to light. The implication is that, if one is developing a model and its current form has an unrealistic response to light, then one must reexamine one's model structure, because searching for better parameter values is unlikely to lead to a realistic response to light. PMID:25193284

  9. A subsurface model of the beaver meadow complex

    NASA Astrophysics Data System (ADS)

    Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.

    2015-12-01

    Wet meadows are a vital component of arid and semi-arid environments. These valley-spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may potentially mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed causes ranging from cattle grazing to climate change to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process-based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study done in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process, all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environments. This imaging has given us a unique view into the patterns and processes responsible for the landforms, and may shed further light on the role of beaver in shaping these landscapes.

  10. A Community Mentoring Model for STEM Undergraduate Research Experiences

    ERIC Educational Resources Information Center

    Kobulnicky, Henry A.; Dale, Daniel A.

    2016-01-01

    This article describes a community mentoring model for UREs that avoids some of the common pitfalls of the traditional paradigm while harnessing the power of learning communities to provide young scholars a stimulating collaborative STEM research experience.

  11. Underwater Blast Experiments and Modeling for Shock Mitigation

    SciTech Connect

    Glascoe, L; McMichael, L; Vandersall, K; Margraf, J

    2010-03-07

    A simple but novel mitigation concept to enforce standoff distance and reduce shock loading on a vertical, partially submerged structure is evaluated using scaled aquarium experiments and numerical modeling. Scaled, water-tamped explosive experiments were performed using three-gallon aquariums. The effectiveness of different mitigation configurations, including air-filled media and an air gap, is assessed relative to an unmitigated detonation using the same charge weight and standoff distance. Experiments using an air-filled media mitigation concept were found to effectively dampen the explosive response of the aluminum plate and reduce the final displacement at plate center by approximately half. The finite element model used for the initial experimental design compares very well to the experimental DIC results both spatially and temporally. Details of the experiment and finite element aquarium models are described, including the boundary conditions, Eulerian and Lagrangian techniques, detonation models, experimental design and test diagnostics.

  12. Complex fluid flow modeling with SPH on GPU

    NASA Astrophysics Data System (ADS)

    Bilotta, Giuseppe; Hérault, Alexis; Del Negro, Ciro; Russo, Giovanni; Vicari, Annamaria

    2010-05-01

    We describe an implementation of the Smoothed Particle Hydrodynamics (SPH) method for the simulation of complex fluid flows. The algorithm is entirely executed on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) developed by NVIDIA, fully exploiting their computational power. An increase of one to two orders of magnitude in simulation speed over equivalent CPU code is achieved. A complete modeling of the flow of a complex fluid such as lava is challenging from the modeling, numerical and computational points of view. The natural topographic irregularities, the dynamic free boundaries and phenomena such as solidification, the presence of floating solid bodies or other obstacles, and their eventual fragmentation make the problem difficult to solve using traditional numerical methods (finite volumes, finite elements): the need to refine the discretization grid in regions of high gradients, when possible, is computationally expensive and offers often inadequate control of the error; for real-world applications, moreover, the information needed for grid refinement may not be available (e.g. because the Digital Elevation Models are too coarse); boundary tracking is also problematic with Eulerian discretizations, more so with complex fluids due to the presence of internal boundaries given by fluid inhomogeneity and solidification fronts. An alternative approach is offered by mesh-free particle methods, which solve most of the problems connected to the dynamics of complex fluids in a natural way. Particle methods discretize the fluid using nodes which are not forced onto a given topological structure: boundary treatment is therefore implicit and automatic; the freedom of movement of the particles also permits the treatment of deformations without incurring any significant penalty; finally, the accuracy is easily controlled by the insertion of new particles where needed. Our team has developed a new model based on the
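
    For orientation, the per-particle density summation that such GPU codes parallelize can be written compactly on the CPU; the sketch below uses a 2D cubic-spline kernel and brute-force neighbor search purely for brevity, whereas a production code would use spatial hashing and, here, CUDA:

      # CPU sketch of the per-particle SPH density summation.
      import numpy as np

      def cubic_spline_w(r, h):
          q = r / h
          sigma = 10.0 / (7.0 * np.pi * h ** 2)   # 2D normalization constant
          return sigma * np.where(q < 1.0, 1 - 1.5 * q ** 2 + 0.75 * q ** 3,
                          np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

      rng = np.random.default_rng(4)
      pos = rng.uniform(0.0, 1.0, (400, 2))   # particle positions in a unit box
      mass, h = 1.0 / 400, 0.08               # equal masses, smoothing length

      r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
      rho = (mass * cubic_spline_w(r, h)).sum(axis=1)   # density at each particle
      print(f"mean SPH density {rho.mean():.2f} (target ~1 for a unit box)")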

  13. A complex network model for seismicity based on mutual information

    NASA Astrophysics Data System (ADS)

    Jiménez, Abigail

    2013-05-01

    Seismicity is the product of the interaction between the different parts of the lithosphere. Here, we model each part of the Earth as a cell that is constantly communicating its state to its environment. Just as a neuron is stimulated and produces an output, the different parts of the lithosphere are constantly stimulated by both other cells and the ductile part of the lithosphere, and produce an output in the form of a stress transfer or an earthquake. This output depends on the properties of each part of the Earth's crust and the magnitude of the inputs. In this study, we propose an approach to the quantification of this communication with the aid of information theory, and model seismicity as a complex network. We have used data from California, and this new approach gives a better understanding of the processes involved in the formation of seismic patterns in that region.
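
    A minimal version of the construction, estimating mutual information between paired activity series and linking cells above a threshold, might look as follows; the binning, threshold and synthetic catalog are our assumptions:

      # Build a network by linking cells whose activity shares high mutual information.
      import numpy as np
      from itertools import combinations

      def mutual_info(x, y, bins=8):
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy /= pxy.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

      rng = np.random.default_rng(5)
      activity = rng.poisson(2.0, size=(30, 500)).astype(float)  # 30 cells x 500 windows
      activity[1] += 0.7 * activity[0]                           # two coupled cells

      edges = [(i, j) for i, j in combinations(range(30), 2)
               if mutual_info(activity[i], activity[j]) > 0.15]
      print(f"{len(edges)} links; coupled pair (0, 1) linked: {(0, 1) in edges}")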

  14. Modeling pedestrian's conformity violation behavior: a complex network based approach.

    PubMed

    Zhou, Zhuping; Hu, Qizhou; Wang, Wei

    2014-01-01

    Pedestrian injuries and fatalities present a problem all over the world. Pedestrian conformity violation behaviors, which lead to many pedestrian crashes, are common phenomena at signalized intersections in China. The concepts and metrics of complex networks are applied to analyze the structural characteristics and evolution rules of the pedestrian network of conformity violation crossings. First, a network of pedestrians crossing the street is established, and the network's degree distributions are analyzed. Then, using the basic idea of the SI model, a spreading model of pedestrian illegal crossing behavior is proposed. Finally, through simulation analysis, trends in pedestrians' illegal crossing behavior are obtained for different network structures and different spreading rates. Some conclusions are drawn: as the waiting time increases, more pedestrians will join in the violation crossing once a pedestrian first crosses on red. And pedestrian conformity violation behavior will increase as the spreading rate increases.
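
    The SI mechanism referred to above is straightforward to reproduce: a waiting pedestrian becomes a violator with probability beta per contact with a violator. A sketch on a synthetic contact network, with all parameters illustrative:

      # Discrete-time SI spreading of violation behavior on a contact network.
      import random
      import networkx as nx

      random.seed(7)

      def spread_si(G, beta, steps, seed_node=0):
          infected = {seed_node}               # the first red-light crosser
          counts = [1]
          for _ in range(steps):
              new = {w for v in infected for w in G[v]
                     if w not in infected and random.random() < beta}
              infected |= new
              counts.append(len(infected))
          return counts

      G = nx.barabasi_albert_graph(200, 3, seed=42)   # crowd contact structure
      for beta in (0.05, 0.2):
          print(f"beta={beta}: violators over time {spread_si(G, beta, 10)}")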

  15. Applying modeling Results in designing a global tropospheric experiment

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A set of field experiments and advanced modeling studies which provide a strategy for a program of global tropospheric experiments was identified. An expanded effort to develop space applications for tropospheric air quality monitoring and studies was recommended. The tropospheric ozone, carbon, nitrogen, and sulfur cycles are addressed. Stratospheric-tropospheric exchange is discussed. Fast photochemical processes in the free troposphere are considered.

  16. Characteristics of a Model Industrial Technology Education Field Experience.

    ERIC Educational Resources Information Center

    Foster, Phillip R.; Kozak, Michael R.

    1986-01-01

    This report contains selected findings from a research project that investigated field experiences in industrial technology education. Funded by the Texas Education Agency, the project addressed the identification of characteristics of a model field experience in industrial technology education. This was accomplished using the Delphi technique.…

  17. Engineering teacher training models and experiences

    NASA Astrophysics Data System (ADS)

    González-Tirados, R. M.

    2009-04-01

    Education Area, we renewed the programme, content and methodology, teaching the course under the name of "Initial Teacher Training Course within the framework of the European Higher Education Area". Continuous training means learning throughout one's life as an engineering teacher. It comprises actions designed to update and improve teaching staff, offered systematically on current issues such as teaching strategies, training for research, training for personal development and classroom innovations. These activities aim at conceptual change, changing the way of teaching and bringing teaching staff up to date. At the same time, the institution is at the disposal of all teaching staff as a meeting point to discuss issues in common, attend conferences, department meetings, etc. In this Congress we present a justification of both training models and their design, together with some results obtained on training needs, participation, how the training is developing and to what extent students are profiting from it.

  18. Modeling the complex pathology of Alzheimer's disease in Drosophila.

    PubMed

    Fernandez-Funez, Pedro; de Mena, Lorena; Rincon-Limas, Diego E

    2015-12-01

    Alzheimer's disease (AD) is the leading cause of dementia and the most common neurodegenerative disorder. AD is mostly a sporadic disorder and its main risk factor is age, but mutations in three genes that promote the accumulation of the amyloid-β (Aβ42) peptide revealed the critical role of amyloid precursor protein (APP) processing in AD. Neurofibrillary tangles enriched in tau are the other pathological hallmark of AD, but the lack of causative tau mutations still puzzles researchers. Here, we describe the contribution of a powerful invertebrate model, the fruit fly Drosophila melanogaster, to uncover the function and pathogenesis of human APP, Aβ42, and tau. APP and tau participate in many complex cellular processes, although their main function is microtubule stabilization and the to-and-fro transport of axonal vesicles. Additionally, expression of secreted Aβ42 induces prominent neuronal death in Drosophila, a critical feature of AD, making this model a popular choice for identifying intrinsic and extrinsic factors mediating Aβ42 neurotoxicity. Overall, Drosophila has made significant contributions to better understand the complex pathology of AD, although additional insight can be expected from combining multiple transgenes, performing genome-wide loss-of-function screens, and testing anti-tau therapies alone or in combination with Aβ42.

  19. Engineering complex topological memories from simple Abelian models

    SciTech Connect

    Wootton, James R.; Lahtinen, Ville; Doucot, Benoit; Pachos, Jiannis K.

    2011-09-15

    In three spatial dimensions, particles are limited to either bosonic or fermionic statistics. Two-dimensional systems, on the other hand, can support anyonic quasiparticles exhibiting richer statistical behaviors. An exciting proposal for quantum computation is to employ anyonic statistics to manipulate information. Since such statistical evolutions depend only on topological characteristics, the resulting computation is intrinsically resilient to errors. The so-called non-Abelian anyons are most promising for quantum computation, but their physical realization may prove to be complex. Abelian anyons, however, are easier to understand theoretically and realize experimentally. Here we show that complex topological memories inspired by non-Abelian anyons can be engineered in Abelian models. We explicitly demonstrate the control procedures for the encoding and manipulation of quantum information in specific lattice models that can be implemented in the laboratory. This bridges the gap between requirements for anyonic quantum computation and the potential of state-of-the-art technology. Highlights: a novel quantum memory using Abelian anyons is developed; this uses an advanced encoding, inspired by non-Abelian anyons; errors are suppressed topologically, by means of single-spin interactions; an implementation with current Josephson junction technology is proposed.

  20. Alpha Decay in the Complex-Energy Shell Model

    SciTech Connect

    Betan, R. Id

    2012-01-01

    Background: Alpha emission from a nucleus is a fundamental decay process in which the alpha particle formed inside the nucleus tunnels out through the potential barrier. Purpose: We describe alpha decay of 212Po and 104Te by means of the configuration interaction approach. Method: To compute the preformation factor and penetrability, we use the complex-energy shell model with a separable T = 1 interaction. The single-particle space is expanded in a Woods-Saxon basis that consists of bound and unbound resonant states. Special attention is paid to the treatment of the norm kernel appearing in the definition of the formation amplitude that guarantees the normalization of the channel function. Results: Without explicitly considering the alpha-cluster component in the wave function of the parent nucleus, we reproduce the experimental alpha-decay width of 212Po and predict an upper limit of T1/2 = 5.5 x 10^-7 sec for the half-life of 104Te. Conclusions: The complex-energy shell model in a large valence configuration space is capable of providing a microscopic description of the alpha decay of heavy nuclei having two valence protons and two valence neutrons outside the doubly magic core. The inclusion of the proton-neutron interaction between the valence nucleons is likely to shorten the predicted half-life of 104Te.

  2. Fish locomotion: insights from both simple and complex mechanical models

    NASA Astrophysics Data System (ADS)

    Lauder, George

    2015-11-01

    Fishes are well-known for their ability to swim and maneuver effectively in the water, and recent years have seen great progress in understanding the hydrodynamics of aquatic locomotion. But studying freely-swimming fishes is challenging due to difficulties in controlling fish behavior. Mechanical models of aquatic locomotion have many advantages over studying live animals, including the ability to manipulate and control individual structural or kinematic factors, easier measurement of forces and torques, and the ability to abstract complex animal designs into simpler components. Such simplifications, while not without their drawbacks, facilitate interpretation of how individual traits alter swimming performance and the discovery of underlying physical principles. In this presentation I will discuss the use of a variety of mechanical models for fish locomotion, ranging from simple flexing panels to complex biomimetic designs incorporating flexible, actively moved, fin rays on multiple fins. Mechanical devices have provided great insight into the dynamics of aquatic propulsion and, integrated with studies of locomotion in freely-swimming fishes, provide new insights into how fishes move through the water.

  3. Chromate adsorption on selected soil minerals: Surface complexation modeling coupled with spectroscopic investigation.

    PubMed

    Veselská, Veronika; Fajgar, Radek; Číhalová, Sylva; Bolanz, Ralph M; Göttlicher, Jörg; Steininger, Ralph; Siddique, Jamal A; Komárek, Michael

    2016-11-15

    This study investigates the mechanisms of Cr(VI) adsorption on natural clay (illite and kaolinite) and synthetic (birnessite and ferrihydrite) minerals, including its speciation changes, combining quantitative, thermodynamically based mechanistic surface complexation models (SCMs) with spectroscopic measurements. Series of adsorption experiments were performed at different pH values (3-10), ionic strengths (0.001-0.1 M KNO3), sorbate concentrations (10^-4, 10^-5, and 10^-6 M Cr(VI)), and sorbate/sorbent ratios (50-500). Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and X-ray absorption spectroscopy were used to determine the surface complexes, including surface reactions. Adsorption of Cr(VI) is strongly ionic-strength dependent. For ferrihydrite at pH <7, a simple diffuse-layer model provides a reasonable prediction of adsorption. For birnessite, bidentate inner-sphere complexes of chromate and dichromate resulted in a better diffuse-layer model fit. For kaolinite, outer-sphere complexation prevails mainly at lower Cr(VI) loadings. Dissolution of solid phases needs to be considered for better SCM fits. The coupled SCM and spectroscopic approach is thus useful for investigating the individual minerals responsible for Cr(VI) retention in soils, and for improving handling and remediation processes.

  5. Efficient design of experiments for complex response surfaces with application to etching uniformity in a plasma reactor

    NASA Astrophysics Data System (ADS)

    Tatavalli Mittadar, Nirmal

    Plasma etching uniformity across silicon wafers is of paramount importance in the semiconductor industry. The complexity of plasma etching, coupled with the lack of instrumentation to provide real-time process information (that could be used for feedback control), necessitates that optimal conditions for uniform etching be designed into the reactor and process recipe. This is often done empirically using standard design of experiments, which, however, is very costly and time consuming. The objective of this study was to develop a general-purpose efficient design strategy that requires a minimum number of experiments and can handle complex constraints in the presence of uncertainties. Traditionally, Response Surface Methodology (RSM) is used in these applications to design experiments to determine the optimal value of decision variables or inputs. We demonstrated that standard RSM, when applied to the problem of plasma etching uniformity, has the following drawbacks: (1) inefficient search due to process nonlinearities, (2) lack of convergence to the optimum, and (3) inability to handle complex inequality constraints. We developed a four-phase Efficient Design Strategy (EDS) based on the DACE paradigm (Design and Analysis of Computer Experiments) and Bayesian search algorithms. The four phases of EDS are: (1) exploration of the design space by maximizing information, (2) exploration of the design space for feasible points by maximizing the probability of constraint satisfaction, (3) optimization of the objective and (4) constrained local search. We also designed novel algorithms to switch between the different phases. The choice of model parameters for DACE predictors is usually determined by the Maximum Likelihood Estimation (MLE) method. Depending on the dataset, MLE could result in unrealistic predictors that show a peak-and-dip behavior. To solve this problem we developed techniques to detect the presence of peak-and-dip behavior and a new scheme based on Maximum a
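
    The flavor of the first two EDS phases can be sketched with an off-the-shelf Gaussian-process surrogate playing the role of a DACE predictor: choose the next run where the predictive variance is largest (information), then where the probability of meeting a constraint is highest (feasibility). The response function, grid and threshold below are invented for illustration:

      # GP surrogate standing in for a DACE predictor; toy response and threshold.
      import numpy as np
      from scipy.stats import norm
      from sklearn.gaussian_process import GaussianProcessRegressor

      def response(x):                         # stand-in for etch non-uniformity
          return np.sin(3 * x) + 0.5 * x

      X = np.array([[0.1], [0.5], [0.9]])      # initial runs
      gp = GaussianProcessRegressor().fit(X, response(X.ravel()))

      grid = np.linspace(0.0, 1.0, 201).reshape(-1, 1)
      mu, sd = gp.predict(grid, return_std=True)

      x_explore = grid[np.argmax(sd), 0]                  # phase 1: maximize information
      p_ok = norm.cdf((0.8 - mu) / np.maximum(sd, 1e-9))  # P(response <= 0.8)
      x_feasible = grid[np.argmax(p_ok), 0]               # phase 2: maximize P(feasible)
      print(f"next exploratory run: x={x_explore:.2f}; "
            f"most likely feasible point: x={x_feasible:.2f}")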

  6. Model slope infiltration experiments for shallow landslides early warning

    NASA Astrophysics Data System (ADS)

    Damiano, E.; Greco, R.; Guida, A.; Olivares, L.; Picarelli, L.

    2009-04-01

    simple empirical models [Versace et al., 2003] based on correlations between some features of rainfall records (cumulated height, duration, season, etc.) and the corresponding observed landslides. Laboratory experiments on instrumented small-scale slope models represent an effective way to provide data sets [Eckersley, 1990; Wang and Sassa, 2001] useful for building up more complex models of landslide-triggering prediction. At the Geotechnical Laboratory of C.I.R.I.AM. an instrumented flume to investigate the mechanics of landslides in unsaturated deposits of granular soils is available [Olivares et al. 2003; Damiano, 2004; Olivares et al., 2007]. In the flume a model slope is reconstituted by a moist-tamping technique and subjected to an artificial uniform rainfall until failure happens. The state of stress and strain of the slope is monitored during the entire test, from the infiltration process to the early post-failure stage: the monitoring system consists of several mini-tensiometers placed at different locations and depths to measure suction, mini-transducers to measure positive pore pressures, laser sensors to measure settlements of the ground surface, and high-definition video cameras that provide, through dedicated software (PIV), the overall horizontal displacement field. Besides, TDR sensors, used with an innovative technique [Greco, 2006], allow reconstruction of the water content profile of the soil along the entire thickness of the investigated deposit and continuous monitoring of its changes during infiltration. In this paper a series of laboratory tests carried out on model slopes in granular pyroclastic soils taken from the mountainous area north-east of Napoli is presented. The experimental results demonstrate the completeness of the information provided by the various sensors installed. In particular, very useful information is given by the coupled measurements of soil water content by TDR and suction by tensiometers. Knowledge of

  7. Complex magnetic field exposure system for in vitro experiments at intermediate frequencies.

    PubMed

    Lodato, Rossella; Merla, Caterina; Pinto, Rosanna; Mancini, Sergio; Lopresto, Vanni; Lovisolo, Giorgio A

    2013-04-01

    In occupational environments, an increasing number of electromagnetic sources emitting complex magnetic field waveforms in the range of intermediate frequencies is present, requiring an accurate exposure risk assessment with both in vitro and in vivo experiments. In this article, an in vitro exposure system able to generate complex magnetic flux density B-fields, reproducing signals from actual intermediate frequency sources such as magnetic resonance imaging (MRI) scanners, for instance, is developed and validated. The system consists of a magnetic field generation system and an exposure apparatus realized with a couple of square coils. A wide homogeneity (99.9%) volume of 210 × 210 × 110 mm(3) was obtained within the coils, with the possibility of simultaneous exposure of a large number of standard Petri dishes. The system is able to process any numerical input sequence through a filtering technique aimed at compensating the coils' impedance effect. The B-field, measured in proximity to a 1.5 T MRI bore during a typical examination, was excellently reproduced (cross-correlation index of 0.99). Thus, it confirms the ability of the proposed setup to accurately simulate complex waveforms in the intermediate frequency band. Suitable field levels were also attained. Moreover, a dosimetry index based on the weighted-peak method was evaluated considering the induced E-field on a Petri dish exposed to the reproduced complex B-field. The weighted-peak index was equal to 0.028 for the induced E-field, indicating an exposure level compliant with the basic restrictions of the International Commission on Non-Ionizing Radiation Protection. Bioelectromagnetics 34:211-219, 2013. © 2012 Wiley Periodicals, Inc. PMID:23060274
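
    A simplified weighted-peak-style computation can be sketched by weighting each spectral component of the recorded field by an exposure-limit curve and taking the peak of the re-synthesized signal. The limit curve below is a placeholder rather than the ICNIRP values, and the filter phases required by the full method are ignored:

      # Simplified weighted-peak-style index (placeholder limits, phases ignored).
      import numpy as np

      fs, T = 100_000, 0.05                      # sample rate (Hz), duration (s)
      t = np.arange(int(fs * T)) / fs
      field = 0.1 * np.sin(2 * np.pi * 1_000 * t) + 0.05 * np.sin(2 * np.pi * 5_000 * t)

      def limit(f):                              # PLACEHOLDER limit curve, not ICNIRP
          return np.where(f < 3_000, 0.4, 0.4 * f / 3_000)

      spec = np.fft.rfft(field)
      freqs = np.fft.rfftfreq(len(field), 1.0 / fs)
      weighted = spec / limit(np.maximum(freqs, 1.0))    # weight each line by 1/limit(f)
      wp_index = np.max(np.abs(np.fft.irfft(weighted, n=len(field))))
      print(f"weighted-peak index = {wp_index:.2f} (<= 1 would mean compliant)")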

  9. Optimal Complexity in Reservoir Modeling of an Eolian Sandstone for Carbon Sequestration Simulation

    NASA Astrophysics Data System (ADS)

    Li, S.; Zhang, Y.; Zhang, X.

    2011-12-01

    Geologic Carbon Sequestration (GCS) is a proposed means to reduce atmospheric concentrations of carbon dioxide (CO2). Given the type, abundance, and accessibility of geologic characterization data, different reservoir modeling techniques can be utilized to build a site model. However, petrophysical properties of a formation can be modeled with simplifying assumptions or with greater detail, the latter requiring sophisticated modeling techniques supported by additional data. In GCS, where the cost of data collection needs to be minimized, will detailed (expensive) reservoir modeling efforts lead to much improved model predictive capability? Is there an optimal level of detail in the reservoir model sufficient for prediction purposes? In Wyoming, GCS into the Nugget Sandstone is proposed. This formation is a deep (>13,000 ft) saline aquifer deposited in eolian environments, exhibiting permeability heterogeneity at multiple scales. Based on a set of characterization data, this study utilizes multiple, increasingly complex reservoir modeling techniques to create a suite of reservoir models including a multiscale, non-stationary heterogeneous model conditioned to a soft depositional model (i.e., training image), a geostatistical (stationary) facies model without conditioning, a geostatistical (stationary) petrophysical model ignoring facies, and finally, a homogeneous model ignoring all aspects of sub-aquifer heterogeneity. All models are built at regional scale with a high-resolution grid (245,133,140 cells) from which a set of local simulation models (448,000 grid cells) are extracted. These are considered alternative conceptual models with which pilot-scale CO2 injection is simulated (50 year duration at 1/10 Mt per year). A computationally efficient sensitivity analysis (SA) is conducted for all models based on a Plackett-Burman Design of Experiment metric. The SA systematically varies key parameters of the models (e.g., variogram structure and principal axes of intrinsic
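
    The Plackett-Burman screening step can be illustrated with a small sketch (a generic illustration under assumed factor names, not the study's actual code): each uncertain parameter is varied between a low (-1) and a high (+1) level according to the design matrix, the simulator is run once per design row, and main effects are estimated from response contrasts.

```python
import numpy as np
from scipy.linalg import hadamard

def plackett_burman(n_factors: int) -> np.ndarray:
    """Build a two-level screening design from a Hadamard matrix.

    Rows are simulation runs; columns are factors coded as -1/+1.
    For factor counts that are not one less than a power of two,
    a larger Hadamard matrix is truncated.
    """
    size = 1
    while size < n_factors + 1:
        size *= 2
    H = hadamard(size)
    return H[:, 1:n_factors + 1]  # drop the constant column

def main_effects(design: np.ndarray, response: np.ndarray) -> np.ndarray:
    """Effect of factor j = mean(y | x_j = +1) - mean(y | x_j = -1)."""
    return np.array([response[design[:, j] > 0].mean()
                     - response[design[:, j] < 0].mean()
                     for j in range(design.shape[1])])

# Hypothetical usage with 7 uncertain parameters (e.g., variogram range,
# anisotropy angles, kv/kh ratio, ...): run the CO2 simulator once per
# design row, collect the storage ratio, then rank |main_effects|.
X = plackett_burman(7)                               # 8 runs x 7 factors
# y = np.array([run_simulator(row) for row in X])    # placeholder call
# ranking = np.argsort(-np.abs(main_effects(X, y)))
```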

  10. Three-dimensional Physical Modeling: Applications and Experience at Mayo Clinic.

    PubMed

    Matsumoto, Jane S; Morris, Jonathan M; Foley, Thomas A; Williamson, Eric E; Leng, Shuai; McGee, Kiaran P; Kuhlmann, Joel L; Nesberg, Linda E; Vrtiska, Terri J

    2015-01-01

    Radiologists will be at the center of the rapid technologic expansion of three-dimensional (3D) printing of medical models, as accurate models depend on well-planned, high-quality imaging studies. This article outlines the available technology and the processes necessary to create 3D models from the radiologist's perspective. We review the published medical literature regarding the use of 3D models in various surgical practices and share our experience in creating a hospital-based three-dimensional printing laboratory to aid in the planning of complex surgeries.

  11. Complex Geometry Creation and Turbulent Conjugate Heat Transfer Modeling

    SciTech Connect

    Bodey, Isaac T; Arimilli, Rao V; Freels, James D

    2011-01-01

    The multiphysics capabilities of COMSOL provide the necessary tools to simulate the turbulent thermal-fluid aspects of the High Flux Isotope Reactor (HFIR). Version 4.1 and later of COMSOL provides three different turbulence models: the standard k-ε closure model, the low Reynolds number (LRN) k-ε model, and the Spalart-Allmaras model. The LRN model meets the nominal HFIR thermal-hydraulic requirements for 2D and 3D simulations. COMSOL also has the capability to create complex geometries. The circular involute fuel plates used in the HFIR require the use of algebraic equations to generate an accurate geometrical representation in the simulation environment. The best-estimate simulation results show that the maximum fuel plate clad surface temperatures are lower than those predicted by the legacy thermal safety code used at HFIR by approximately 17 K. The best-estimate temperature distribution determined by COMSOL was then used to determine the necessary increase in the magnitude of the power density profile (PDP) to produce a clad surface temperature similar to that of the legacy thermal safety code. It was determined and verified that a 19% power increase was sufficient to bring the two temperature profiles into relatively good agreement.

  12. 3-D Numerical Modeling of a Complex Salt Structure

    SciTech Connect

    House, L.; Larsen, S.; Bednar, J.B.

    2000-02-17

    Reliably processing, imaging, and interpreting seismic data from areas with complicated structures, such as sub-salt, requires a thorough understanding of elastic as well as acoustic wave propagation. Elastic numerical modeling is an essential tool to develop that understanding. While 2-D elastic modeling is in common use, 3-D elastic modeling has been too computationally intensive to be used routinely. Recent advances in computing hardware, including commodity-based hardware, have substantially reduced computing costs. These advances are making 3-D elastic numerical modeling more feasible. A series of example 3-D elastic calculations were performed using a complicated structure, the SEG/EAGE salt structure. The synthetic traces show that the effects of shear wave propagation can be important for imaging and interpretation of images, and also for AVO and other applications that rely on trace amplitudes. Additional calculations are needed to better identify and understand the complex wave propagation effects produced in complicated structures, such as the SEG/EAGE salt structure.

  13. Wind Power Curve Modeling in Simple and Complex Terrain

    SciTech Connect

    Bulaevskaya, V.; Wharton, S.; Irons, Z.; Qualley, G.

    2015-02-09

    Our previous work on wind power curve modeling using statistical models focused on a location with a moderately complex terrain in the Altamont Pass region in northern California (CA). The work described here is the follow-up to that work, but at a location with a simple terrain in northern Oklahoma (OK). The goal of the present analysis was to determine the gain in predictive ability afforded by adding information beyond the hub-height wind speed, such as wind speeds at other heights, as well as other atmospheric variables, to the power prediction model at this new location and compare the results to those obtained at the CA site in the previous study. While we reach some of the same conclusions at both sites, many results reported for the CA site do not hold at the OK site. In particular, using the entire vertical profile of wind speeds improves the accuracy of wind power prediction relative to using the hub-height wind speed alone at both sites. However, in contrast to the CA site, the rotor equivalent wind speed (REWS) performs almost as well as the entire profile at the OK site. Another difference is that at the CA site, adding wind veer as a predictor significantly improved the power prediction accuracy. The same was true for that site when air density was added to the model separately instead of using the standard air density adjustment. At the OK site, these additional variables result in no significant benefit for the prediction accuracy.
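
    For reference, the rotor equivalent wind speed (REWS) aggregates speeds measured at several heights by weighting the cube of each speed by the area of the rotor segment it represents. A minimal sketch (the thin-strip discretization is one simple approximation; variable names and example values are illustrative):

```python
import numpy as np

def rotor_equivalent_wind_speed(speeds, heights, hub_height, rotor_radius, n=200):
    """REWS = (sum_i v_i^3 * A_i / A)^(1/3).

    Approximates segment areas by slicing the rotor disk into thin
    horizontal strips and assigning each strip the wind speed measured
    at the nearest instrument height.
    """
    z = np.linspace(hub_height - rotor_radius, hub_height + rotor_radius, n)
    dz = z[1] - z[0]
    # chord width of the rotor disk at height z
    width = 2.0 * np.sqrt(np.maximum(rotor_radius**2 - (z - hub_height)**2, 0.0))
    strip_area = width * dz
    idx = np.abs(z[:, None] - np.asarray(heights)[None, :]).argmin(axis=1)
    v = np.asarray(speeds)[idx]
    return (np.sum(v**3 * strip_area) / np.sum(strip_area)) ** (1.0 / 3.0)

# Hypothetical usage: speeds measured at 40/60/80/100/120 m on a tower,
# for a turbine with 80 m hub height and 40 m rotor radius.
rews = rotor_equivalent_wind_speed([6.1, 6.8, 7.2, 7.5, 7.7],
                                   [40, 60, 80, 100, 120], 80.0, 40.0)
print(round(rews, 2))
```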

  14. Combined (31)P and (1)H NMR Experiments in the Structural Elucidation of Polynuclear Thiolate Complexes

    ERIC Educational Resources Information Center

    Cerrada, Elena; Laguna, Mariano

    2005-01-01

    A facile synthesis of two gold(I) complexes with a 1,2-benzenedithiolate ligand and two different bidentate phosphines is described. A detailed sequence of NMR experiments is suggested to determine the structure of the compounds.

  15. Stepwise building of plankton functional type (PFT) models: A feasible route to complex models?

    NASA Astrophysics Data System (ADS)

    Frede Thingstad, T.; Strand, Espen; Larsen, Aud

    2010-01-01

    We discuss the strategy of building models of the lower part of the planktonic food web in a stepwise manner: starting with few plankton functional types (PFTs) and adding resolution and complexity while carrying along the insight and results gained from simpler models. A central requirement for PFT models is that they allow sustained coexistence of the PFTs. Here we discuss how this identifies a need to consider predation, parasitism and defence mechanisms together with nutrient acquisition and competition. Although the stepwise addition of complexity is assumed to be useful and feasible, a rapid increase in complexity strongly calls for alternative approaches able to model emergent system-level features without a need for detailed representation of all the underlying biological detail.

  16. Complexity in mathematical models of public health policies: a guide for consumers of models.

    PubMed

    Basu, Sanjay; Andrews, Jason

    2013-10-01

    Sanjay Basu and colleagues explain how models are increasingly used to inform public health policy, yet readers may struggle to evaluate the quality of models. All models require simplifying assumptions, and there are tradeoffs between creating models that are more "realistic" and those that are grounded in more solid data. Indeed, complex models are not necessarily more accurate or reliable simply because they can fit real-world data more easily than simpler models can. Please see later in the article for the Editors' Summary.

  17. Staff Experiences of Supported Employment with the Sustainable Hub of Innovative Employment for People with Complex Needs

    ERIC Educational Resources Information Center

    Gore, Nick J.; Forrester-Jones, Rachel; Young, Rhea

    2014-01-01

    Whilst the value of supported employment for people with learning disabilities is well substantiated, the experiences of supporting individuals into work are less well documented. The Sustainable Hub of Innovative Employment for people with Complex needs aims to support people with learning disabilities and complex needs to find paid employment.…

  18. The concept of a unified modeling of optical radiation propagation in complex turbid media

    NASA Astrophysics Data System (ADS)

    Meglinski, I.; Kirillin, M.; Kuzmin, V. L.

    2008-09-01

    A multipurpose unified Monte Carlo (MC)-based model is developed for adequate simulation of various aspects of optical/laser radiation propagation within biological tissues. The modeling aims to provide predictive information to optimize clinical/biomedical optical diagnostic systems and to improve interpretation of experimental results in biomedical diagnostics. The complex structure of biological tissues in terms of scattering and absorption is presented using the example of human skin. Validation and verification are performed against tabulated data, theoretical predictions, and experiments. We demonstrate the use of the model to imitate 2-D polarization-sensitive OCT images with non-planar layer boundaries in a medium such as human skin. The performance of the model is demonstrated for both conventional and polarization-sensitive OCT modalities.
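
    The core of such a model is a photon random walk through a scattering and absorbing medium. A heavily simplified, isotropic-scattering sketch follows (real tissue models use anisotropic Henyey-Greenstein phase functions, layered geometry, and polarization tracking; all parameter values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def photon_walk(mu_s=10.0, mu_a=0.1, max_steps=1000):
    """Trace one photon packet; return its termination depth and weight.

    mu_s, mu_a: scattering/absorption coefficients (1/mm), illustrative.
    Step lengths are sampled from the Beer-Lambert free-path distribution;
    the packet weight decays by the single-scattering albedo each event.
    """
    mu_t = mu_s + mu_a
    albedo = mu_s / mu_t
    pos = np.zeros(3)
    direction = np.array([0.0, 0.0, 1.0])     # launched into the tissue
    weight = 1.0
    for _ in range(max_steps):
        step = -np.log(rng.random()) / mu_t   # free path ~ Exp(mu_t)
        pos += step * direction
        weight *= albedo                      # absorb part of the packet
        if weight < 1e-4 or pos[2] < 0:       # dead, or escaped the surface
            break
        # isotropic scattering: draw a new uniformly random direction
        v = rng.normal(size=3)
        direction = v / np.linalg.norm(v)
    return pos[2], weight

depths = [photon_walk()[0] for _ in range(1000)]
print("mean termination depth (mm):", np.mean(depths))
```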

  19. An evaluation of the PENCURV model for penetration events in complex targets.

    SciTech Connect

    Broyles, Todd P.

    2004-07-01

    Three complex target penetration scenarios are run with a model developed by the U. S. Army Engineer Waterways Experiment Station, called PENCURV. The results are compared with both test data and a Zapotec model to evaluate PENCURV's suitability for conducting broad-based scoping studies on a variety of targets to give first order solutions to the problem of G-loading. Under many circumstances, the simpler, empirically based PENCURV model compares well with test data and the much more sophisticated Zapotec model. The results suggest that, if PENCURV were enhanced to include rotational acceleration in its G-loading computations, it would provide much more accurate solutions for a wide variety of penetration problems. Data from an improved PENCURV program would allow for faster, lower cost optimization of targets, test parameters and penetration bodies as Sandia National Laboratories continues in its evaluation of the survivability requirements for earth penetrating sensors and weapons.

  20. A Model for Supervising School Counseling Students without Teaching Experience

    ERIC Educational Resources Information Center

    Peterson, Jean Sunde; Deuschle, Connie

    2006-01-01

    Changed demographics of those now entering the field of school counseling argue for changes in preparatory curriculum, including the curriculum for supervision. The authors present a 5-component model for supervising graduate students without previous school experience that is based on 2 pertinent studies. This model focuses on information for…

  1. Informing education policy in Afghanistan: Using design of experiments and data envelopment analysis to provide transparency in complex simulation

    NASA Astrophysics Data System (ADS)

    Marlin, Benjamin

    Education planning provides the policy maker and the decision maker a logical framework in which to develop and implement education policy. At the international level, education planning is often confounded by both internal and external complexities, making the development of education policy difficult. This research presents a discrete event simulation in which individual students and teachers flow through the system across a variable time horizon. This simulation is then used with advancements in design of experiments, multivariate statistical analysis, and data envelopment analysis to provide a methodology designed to assist the international education planning community. We propose that this methodology will provide the education planner with insights into the complexity of the education system, the effects of both endogenous and exogenous factors upon the system, and the implications of policies as they pertain to potential futures of the system. We do this recognizing that there are multiple actors and stochastic events in play, which, although they cannot be accurately forecast, must be accounted for within the education model. To test both the implementation and the usefulness of such a model and to prove its relevance, we chose the Afghan education system as the focal point of this research. The Afghan education system is a complex, real-world system with competing actors, dynamic requirements, and ambiguous states. At the time of this writing, Afghanistan is at a pivotal point as a nation, and has been the recipient of a tremendous amount of international support and attention. Finally, Afghanistan is a fragile state, and the proliferation of the current disparity in education across gender, districts, and ethnicity could provide the catalyst to drive the country into hostility. In order to prevent the failure of the current government, it is essential that the education system be able to meet the demands of the Afghan people. This work provides insights into

  2. A model for transgenerational imprinting variation in complex traits.

    PubMed

    Wang, Chenguang; Wang, Zhong; Luo, Jiangtao; Li, Qin; Li, Yao; Ahn, Kwangmi; Prows, Daniel R; Wu, Rongling

    2010-07-14

    Despite the fact that genetic imprinting, i.e., differential expression of the same allele due to its different parental origins, plays a pivotal role in controlling complex traits or diseases, the origin, action and transmission mode of imprinted genes have remained largely unexplored. We present a new strategy for studying these properties of genetic imprinting with a two-stage reciprocal F mating design, initiated with two contrasting inbred lines. This strategy maps quantitative trait loci that are imprinted (i.e., iQTLs) based on their segregation and transmission across different generations. By incorporating the allelic configuration of an iQTL genotype into a mixture model framework, this strategy provides a path to trace the parental origin of alleles from previous generations. The imprinting effects of iQTLs and their interactions with other traditionally defined genetic effects, expressed in different generations, are estimated and tested by implementing the EM algorithm. The strategy was used to map iQTLs responsible for survival time with four reciprocal F populations and to test whether and how the detected iQTLs transmit their imprinting effects to the next generation. The new strategy will provide a tool for quantifying the role of imprinting effects in the creation and maintenance of phenotypic diversity and for elucidating a comprehensive picture of the genetic architecture of complex traits and diseases.

  3. Modeling Cu2+-Aβ complexes from computational approaches

    NASA Astrophysics Data System (ADS)

    Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona

    2015-09-01

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ metal cation with Aβ has been found to interfere with amyloid aggregation and to lead to the formation of reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to gain a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that subsequently allow determination of physicochemical properties of interest, such as their redox potential.

  4. Complex dynamics in the Oregonator model with linear delayed feedback

    NASA Astrophysics Data System (ADS)

    Sriram, K.; Bernard, S.

    2008-06-01

    The Belousov-Zhabotinsky (BZ) reaction can display a rich dynamics when a delayed feedback is applied. We used the Oregonator model of the oscillating BZ reaction to explore the dynamics brought about by a linear delayed feedback. The time-delayed feedback can generate a succession of complex dynamics: period-doubling bifurcation route to chaos; amplitude death; fat, wrinkled, fractal, and broken tori; and mixed-mode oscillations. We observed that this dynamics arises due to a delay-driven transition, or toggling of the system between large and small amplitude oscillations, through a canard bifurcation. We used a combination of numerical bifurcation continuation techniques and other numerical methods to explore the dynamics in the strength of feedback-delay space. We observed that the period-doubling and quasiperiodic route to chaos span a low-dimensional subspace, perhaps due to the trapping of the trajectories in the small amplitude regime near the canard; and the trapped chaotic trajectories get ejected from the small amplitude regime due to a crowding effect to generate chaotic-excitable spikes. We also qualitatively explained the observed dynamics by projecting a three-dimensional phase portrait of the delayed dynamics on the two-dimensional nullclines. This is the first instance in which it is shown that the interaction of delay and canard can bring about complex dynamics.
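
    A sketch of the kind of computation involved: the widely used two-variable scaled Oregonator with a linear delayed-feedback term added to the activator equation, integrated by a fixed-step Euler scheme with a history buffer for the delayed value. The paper's exact formulation, feedback coupling, and parameter values may differ; everything below is illustrative.

```python
import numpy as np

# Two-variable scaled Oregonator (Tyson-Fife form) with a linear
# delayed-feedback term gamma * x(t - tau) added to the activator.
eps, q, f = 0.04, 0.002, 1.0      # illustrative Oregonator parameters
gamma, tau = 0.02, 1.0            # illustrative feedback strength / delay
dt, t_end = 1e-4, 50.0

n = int(t_end / dt)
lag = int(tau / dt)
x = np.empty(n); y = np.empty(n)
x[:lag + 1] = 0.1; y[:lag + 1] = 0.1   # constant history for t <= tau

for i in range(lag, n - 1):
    xd = x[i - lag]                    # delayed activator value
    dx = (x[i] * (1 - x[i]) - f * y[i] * (x[i] - q) / (x[i] + q)) / eps \
         + gamma * xd
    dy = x[i] - y[i]
    x[i + 1] = x[i] + dt * dx
    y[i + 1] = y[i] + dt * dy

# Scanning gamma and tau and inspecting the extrema of x(t) is one way to
# detect period doubling or mixed-mode oscillations in this setting.
print("last activator values:", x[-3:])
```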

  5. Deposition parameterizations for the Industrial Source Complex (ISC3) model

    SciTech Connect

    Wesely, Marvin L.; Doskey, Paul V.; Shannon, J. D.

    2002-06-01

    Improved algorithms have been developed to simulate the dry and wet deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex version 3 (ISC3) model system. The dry deposition velocities (downward fluxes at a specified height divided by concentrations) of the gaseous HAPs are modeled with algorithms adapted from existing dry deposition modules. The dry deposition velocities are described in a conventional resistance scheme, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. The role of soil moisture variations in affecting the uptake of gases through vegetative plant leaf stomata is assessed with the relative available soil moisture, which is estimated with a rudimentary budget of soil moisture content. Some of the procedures and equations are simplified to be commensurate with the type and extent of information on atmospheric and surface conditions available to the ISC3 model system user. For example, standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed, providing a means of evaluating the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves.

  6. Atmospheric dispersion modelling over complex terrain at small scale

    NASA Astrophysics Data System (ADS)

    Nosek, S.; Janour, Z.; Kukacka, L.; Jurcakova, K.; Kellnerova, R.; Gulikova, E.

    2014-03-01

    A previous study concerned with qualitative modelling of neutrally stratified flow over an open-cut coal mine and the important surrounding topography at meso-scale (1:9000) revealed an important area for quantitative modelling of atmospheric dispersion at small scale (1:3300). The selected area includes the necessary part of the coal mine topography with respect to its future expansion, as well as the surrounding populated areas. At this small scale, simultaneous measurements of velocity components and concentrations at specified points of vertical and horizontal planes were performed by two-dimensional Laser Doppler Anemometry (LDA) and a Fast-Response Flame Ionization Detector (FFID), respectively. The impact of the complex terrain on passive pollutant dispersion with respect to the prevailing wind direction was observed, and the prediction of air quality in the populated areas is discussed. The measured data will be used for comparison with another model that takes into account the future coal mine transformation. Thus, the impact of the coal mine transformation on pollutant dispersion can be assessed.

  7. Simulation and Processing Seismic Data in Complex Geological Models

    NASA Astrophysics Data System (ADS)

    Forestieri da Gama Rodrigues, S.; Moreira Lupinacci, W.; Martins de Assis, C. A.

    2014-12-01

    Seismic simulations in complex geological models are useful for verifying some limitations of seismic data. In this project, different geological models were designed to analyze difficulties encountered in the interpretation of seismic data. These data will also be made available to LENEP/UENF students to test new tools to assist in seismic data processing. The geological models were created considering characteristics found in oil exploration. We simulated geological media with volcanic intrusions, salt domes, faults, pinch-outs, and layers at greater depth below the surface (Kanao, 2012). We used the software Tesseral Pro to simulate the seismic acquisitions. The acquisition geometries simulated were of the common-offset, end-on, and split-spread types. Data acquired with constant offset require fewer processing routines. The processing flow used, with tools available in the Seismic Unix package (for more details, see Pennington et al., 2005), was geometric spreading correction, deconvolution, attenuation correction, and post-stack depth migration. In processing the data acquired with end-on and split-spread geometries, we included velocity analysis and NMO correction routines. Although we analyzed synthetic data and carefully applied each processing routine, we observed some limitations of seismic reflection in imaging thin layers, layers at great depth, layers with low impedance contrast, and faults.

  8. Neurocomputational Model of EEG Complexity during Mind Wandering

    PubMed Central

    Ibáñez-Molina, Antonio J.; Iglesias-Parro, Sergio

    2016-01-01

    Mind wandering (MW) can be understood as a transient state in which attention drifts from an external task to internal self-generated thoughts. MW has been associated with the activation of the Default Mode Network (DMN). In addition, it has been shown that the activity of the DMN is anti-correlated with activation in brain networks related to the processing of external events (e.g., Salience network, SN). In this study, we present a mean field model based on weakly coupled Kuramoto oscillators. We simulated the oscillatory activity of the entire brain and explored the role of the interaction between the nodes from the DMN and SN in MW states. External stimulation was added to the network model in two opposite conditions. Stimuli could be presented when oscillators in the SN showed more internal coherence (synchrony) than in the DMN, or, on the contrary, when the coherence in the SN was lower than in the DMN. The resulting phases of the oscillators were analyzed and used to simulate EEG signals. Our results showed that the structural complexity from both simulated and real data was higher when the model was stimulated during periods in which DMN was more coherent than the SN. Overall, our results provided a plausible mechanistic explanation to MW as a state in which high coherence in the DMN partially suppresses the capacity of the system to process external stimuli. PMID:26973505
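
    A minimal sketch of the modelling ingredient described here: two weakly coupled Kuramoto sub-networks whose internal coherence is tracked by the order parameter r = |mean(exp(i*theta))|. Network sizes, coupling constants, and the DMN/SN labels below are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n=50, k_intra=1.5, k_inter=0.1, dt=0.01, steps=5000):
    """Two Kuramoto sub-networks ('DMN' and 'SN') with weak cross-coupling.

    Returns the time series of each sub-network's phase coherence
    r(t) = |<exp(i*theta)>|, the quantity compared between DMN and SN.
    """
    theta = rng.uniform(0, 2 * np.pi, 2 * n)   # phases; first n = 'DMN'
    omega = rng.normal(1.0, 0.1, 2 * n)        # natural frequencies
    groups = [slice(0, n), slice(n, 2 * n)]
    r_hist = np.empty((steps, 2))
    for t in range(steps):
        z = np.exp(1j * theta)                 # synchronous update step
        for g, grp in enumerate(groups):
            mf_in = z[grp].mean()              # own mean field
            mf_out = z[groups[1 - g]].mean()   # other network's mean field
            coupling = (k_intra * np.abs(mf_in) * np.sin(np.angle(mf_in) - theta[grp])
                        + k_inter * np.abs(mf_out) * np.sin(np.angle(mf_out) - theta[grp]))
            theta[grp] = theta[grp] + dt * (omega[grp] + coupling)
            r_hist[t, g] = np.abs(mf_in)
        theta %= 2 * np.pi
    return r_hist

r = simulate()
print("final coherence (DMN, SN):", r[-1])
```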

  9. Neurocomputational Model of EEG Complexity during Mind Wandering.

    PubMed

    Ibáñez-Molina, Antonio J; Iglesias-Parro, Sergio

    2016-01-01

    Mind wandering (MW) can be understood as a transient state in which attention drifts from an external task to internal self-generated thoughts. MW has been associated with the activation of the Default Mode Network (DMN). In addition, it has been shown that the activity of the DMN is anti-correlated with activation in brain networks related to the processing of external events (e.g., Salience network, SN). In this study, we present a mean field model based on weakly coupled Kuramoto oscillators. We simulated the oscillatory activity of the entire brain and explored the role of the interaction between the nodes from the DMN and SN in MW states. External stimulation was added to the network model in two opposite conditions. Stimuli could be presented when oscillators in the SN showed more internal coherence (synchrony) than in the DMN, or, on the contrary, when the coherence in the SN was lower than in the DMN. The resulting phases of the oscillators were analyzed and used to simulate EEG signals. Our results showed that the structural complexity from both simulated and real data was higher when the model was stimulated during periods in which DMN was more coherent than the SN. Overall, our results provided a plausible mechanistic explanation to MW as a state in which high coherence in the DMN partially suppresses the capacity of the system to process external stimuli.

  10. Integrated modeling tool for performance engineering of complex computer systems

    NASA Technical Reports Server (NTRS)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  11. Experimental and Numerical Modelling of Flow over Complex Terrain: The Bolund Hill

    NASA Astrophysics Data System (ADS)

    Conan, Boris; Chaudhari, Ashvinkumar; Aubrun, Sandrine; van Beeck, Jeroen; Hämäläinen, Jari; Hellsten, Antti

    2016-02-01

    In the wind-energy sector, wind-power forecasting, turbine siting, and turbine-design selection are all highly dependent on a precise evaluation of atmospheric wind conditions. On-site measurements provide reliable data; however, in complex terrain and at the scale of a wind farm, local measurements may be insufficient for a detailed site description. On highly variable terrain, numerical models are commonly used but still constitute a challenge regarding simulation and interpretation. We propose a joint state-of-the-art study of two approaches to modelling atmospheric flow over the Bolund hill: a wind-tunnel test and a large-eddy simulation (LES). The approach has the particularity of describing both methods in parallel in order to highlight their similarities and differences. The work provides a first detailed comparison between field measurements, wind-tunnel experiments and numerical simulations. The systematic and quantitative approach used for the comparison contributes to a better understanding of the strengths and weaknesses of each model and, therefore, to their enhancement. Despite fundamental modelling differences, both techniques result in only a 5 % difference in the mean wind speed and 15 % in the turbulent kinetic energy (TKE). The joint comparison makes it possible to identify the most difficult features to model: the near-ground flow and the wake of the hill. When compared to field data, both models reach 11 % error for the mean wind speed, which is close to the best performance reported in the literature. For the TKE, a great improvement is found using the LES model compared to previous studies (20 % error). Wind-tunnel results are in the low range of error when compared to experiments reported previously (40 % error). This comparison highlights the potential of such approaches and gives directions for the improvement of complex flow modelling.

  12. Methods of Information Geometry to model complex shapes

    NASA Astrophysics Data System (ADS)

    De Sanctis, A.; Gattone, S. A.

    2016-09-01

    In this paper, a new statistical method to model patterns emerging in complex systems is proposed. A framework for shape analysis of 2-dimensional landmark data is introduced, in which each landmark is represented by a bivariate Gaussian distribution. From Information Geometry we know that the Fisher-Rao metric endows the statistical manifold of parameters of a family of probability distributions with a Riemannian metric. This approach thus allows reconstruction of the intermediate steps in the evolution between observed shapes by computing the geodesic, with respect to the Fisher-Rao metric, between the corresponding distributions. Furthermore, the geodesic path can be used for shape prediction. As an application, we study the evolution of the rat skull shape. A future application in ophthalmology is introduced.
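
    To give the flavor of the geometry involved, the univariate special case has a well-known closed form: N(mu, sigma) maps to a point of the hyperbolic upper half-plane, and the Fisher-Rao distance is a rescaled hyperbolic distance. The sketch below uses this simplified univariate case (the paper works with bivariate Gaussians, where the computation is more involved):

```python
import numpy as np

def fisher_rao_distance(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao distance between two univariate normal distributions.

    N(mu, sigma) maps to the point (mu/sqrt(2), sigma) of the hyperbolic
    upper half-plane; the Fisher-Rao distance is sqrt(2) times the
    hyperbolic distance between the mapped points.
    """
    x1, y1 = mu1 / np.sqrt(2.0), sigma1
    x2, y2 = mu2 / np.sqrt(2.0), sigma2
    cosh_d = 1.0 + ((x1 - x2) ** 2 + (y1 - y2) ** 2) / (2.0 * y1 * y2)
    return np.sqrt(2.0) * np.arccosh(cosh_d)

# Points along the geodesic between two landmark distributions give the
# "intermediate shapes" mentioned in the abstract.
print(fisher_rao_distance(0.0, 1.0, 1.0, 1.5))
```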

  13. Random field Ising model and community structure in complex networks

    NASA Astrophysics Data System (ADS)

    Son, S.-W.; Jeong, H.; Noh, J. D.

    2006-04-01

    We propose a method to determine the community structure of a complex network. In this method the ground state problem of a ferromagnetic random field Ising model is considered on the network, with the magnetic field B_s = +∞, B_t = -∞, and B_i = 0 for all nodes i other than a chosen node pair s and t. The ground state problem is equivalent to the so-called maximum flow problem, which can be solved exactly with the help of a combinatorial optimization algorithm. The community structure is then identified from the ground-state Ising spin domains for all pairs of s and t. Our method provides a criterion for the existence of the community structure, and is applicable equally well to unweighted and weighted networks. We demonstrate the performance of the method by applying it to the Barabási-Albert network, the Zachary karate club network, the scientific collaboration network, and the stock price correlation network.
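
    Because the ground state of this pinned ferromagnetic model is exactly an s-t minimum cut, the spin domains can be obtained with any max-flow solver. A minimal sketch using networkx (the example graph and the choice of s and t are illustrative):

```python
import networkx as nx

def ising_domains(G: nx.Graph, s, t):
    """Ground-state spin domains of the ferromagnetic random field Ising
    model with B_s = +inf, B_t = -inf, and B_i = 0 elsewhere.

    Ferromagnetic couplings act as edge capacities; the minimum s-t cut
    separates the up-spin domain (containing s) from the down-spin
    domain (containing t), read here as a two-community split.
    """
    D = nx.DiGraph()
    for u, v, data in G.edges(data=True):
        c = data.get("weight", 1.0)   # works for weighted and unweighted
        D.add_edge(u, v, capacity=c)
        D.add_edge(v, u, capacity=c)
    cut_value, (up_domain, down_domain) = nx.minimum_cut(D, s, t)
    return up_domain, down_domain

# Illustrative usage on Zachary's karate club network, one of the test
# cases named in the abstract (nodes 0 and 33 are the two hubs):
G = nx.karate_club_graph()
up, down = ising_domains(G, 0, 33)
print(len(up), len(down))
```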

  14. Disulfide Trapping for Modeling and Structure Determination of Receptor:Chemokine Complexes

    PubMed Central

    Kufareva, Irina; Gustavsson, Martin; Holden, Lauren G.; Qin, Ling; Zheng, Yi; Handel, Tracy M.

    2016-01-01

    Despite the recent breakthrough advances in GPCR crystallography, structure determination of protein-protein complexes involving chemokine receptors and their endogenous chemokine ligands remains challenging. Here we describe disulfide trapping, a methodology for generating irreversible covalent binary protein complexes from unbound protein partners by introducing two cysteine residues, one per interaction partner, at selected positions within their interaction interface. Disulfide trapping can serve at least two distinct purposes: (i) stabilization of the complex to assist structural studies, and/or (ii) determination of pairwise residue proximities to guide molecular modeling. Methods for characterization of disulfide-trapped complexes are described and evaluated in terms of throughput, sensitivity, and specificity towards the most energetically favorable cross-links. Due to abundance of native disulfide bonds at receptor:chemokine interfaces, disulfide trapping of their complexes can be associated with intramolecular disulfide shuffling and result in misfolding of the component proteins; because of this, evidence from several experiments is typically needed to firmly establish a positive disulfide crosslink. An optimal pipeline that maximizes throughput and minimizes time and costs by early triage of unsuccessful candidate constructs is proposed. PMID:26921956

  15. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments, or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
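
    A reduced sketch of the Bayesian step described here: a random-walk Metropolis sampler that scores candidate parameter vectors with a Gaussian log-likelihood of measured versus simulated data. The toy model function, flat prior, and tuning constants below are placeholders, not the PaSim/Biome-BGC setup:

```python
import numpy as np

rng = np.random.default_rng(42)

def log_likelihood(params, simulate, y_obs, sigma):
    """Gaussian goodness-of-fit between simulated and measured data."""
    y_sim = simulate(params)
    return -0.5 * np.sum(((y_obs - y_sim) / sigma) ** 2)

def metropolis(simulate, y_obs, sigma, p0, step, n_iter=5000):
    """Random-walk Metropolis over unknown model parameters (flat prior)."""
    chain = [np.asarray(p0, float)]
    ll = log_likelihood(chain[0], simulate, y_obs, sigma)
    for _ in range(n_iter):
        cand = chain[-1] + step * rng.normal(size=len(p0))
        ll_cand = log_likelihood(cand, simulate, y_obs, sigma)
        if np.log(rng.random()) < ll_cand - ll:
            chain.append(cand)
            ll = ll_cand
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Toy stand-in for an ecosystem model: a flux-like signal with two unknowns.
t = np.linspace(0, 1, 50)
truth = np.array([2.0, 0.5])
simulate = lambda p: p[0] * np.sin(2 * np.pi * t) + p[1]
y_obs = simulate(truth) + rng.normal(0, 0.1, t.size)
chain = metropolis(simulate, y_obs, 0.1, [1.0, 0.0], 0.05)
print("posterior mean:", chain[len(chain) // 2:].mean(axis=0))
```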

  16. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via direct acyclic graphs, which are graphs that map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied

  17. Complex events in a fault model with interacting asperities

    NASA Astrophysics Data System (ADS)

    Dragoni, Michele; Tallarico, Andrea

    2016-08-01

    The dynamics of a fault with heterogeneous friction is studied by employing a discrete fault model with two asperities of different strengths. The average values of stress, friction and slip on each asperity are considered, and the state of the fault is described by the slip deficits of the asperities as functions of time. The fault has three different slipping modes, corresponding to the asperities slipping one at a time or simultaneously. Any seismic event produced by the fault is a sequence of n slipping modes. Depending on initial conditions, seismic events can be different sequences of slipping modes, implying different moment rates and seismic moments. Each event can be represented geometrically in the state space by an orbit that is the union of n damped Lissajous curves. We focus our interest on events that are sequences of two or more slipping modes: they show a complex stress interchange between the asperities and a complex temporal pattern of slip rate. The initial stress distribution producing these events is not uniform on the fault. We calculate the stress drop, the moment rate and the frequency spectrum of the events, showing how these quantities depend on initial conditions. These events have the greatest seismic moments that can be produced by fault slip. As an example, we model the moment rate of the 1992 Landers, California, earthquake, which can be described as the consecutive failure of two asperities, one having twice the strength of the other, and evaluate the evolution of the stress distribution on the fault during the event.

  18. Autoimmunity contributes to nociceptive sensitization in a mouse model of complex regional pain syndrome

    PubMed Central

    Li, Wen-Wu; Guo, Tian-Zhi; Shi, Xiaoyou; Czirr, Eva; Stan, Trisha; Sahbaie, Peyman; Wyss-Coray, Tony; Kingery, Wade S.; Clark, J. David

    2014-01-01

    Complex regional pain syndrome (CRPS) is a painful, disabling, chronic condition whose etiology remains poorly understood. The recent suggestion that immunological mechanisms may underlie CRPS provides an entirely novel framework in which to study the condition and consider new approaches to treatment. Using a murine fracture/cast model of CRPS, we studied the effects of B-cell depletion using anti-CD20 antibodies or by performing experiments in genetically B-cell-deficient (µMT) mice. We observed that mice treated with anti-CD20 developed attenuated vascular and nociceptive CRPS-like changes after tibial fracture and 3 weeks of cast immobilization. In mice with established CRPS-like changes, the depletion of CD-20+ cells slowly reversed nociceptive sensitization. Correspondingly, µMT mice, deficient in producing immunoglobulin M (IgM), failed to fully develop CRPS-like changes after fracture and casting. Depletion of CD20+ cells had no detectable effects on nociceptive sensitization in a model of postoperative incisional pain, however. Immunohistochemical experiments showed that CD20+ cells accumulate near the healing fracture but few such cells collect in skin or sciatic nerves. On the other hand, IgM-containing immune complexes were deposited in skin and sciatic nerve after fracture in wild-type, but not in µMT fracture/cast, mice. Additional experiments demonstrated that complement system activation and deposition of membrane attack complexes were partially blocked by anti-CD20+ treatment. Collectively, our results suggest that CD20-positive B cells produce antibodies that ultimately support the CRPS-like changes in the murine fracture/cast model. Therapies directed at reducing B-cell activity may be of use in treating patients with CRPS. PMID:25218828

  19. Autoimmunity contributes to nociceptive sensitization in a mouse model of complex regional pain syndrome.

    PubMed

    Li, Wen-Wu; Guo, Tian-Zhi; Shi, Xiaoyou; Czirr, Eva; Stan, Trisha; Sahbaie, Peyman; Wyss-Coray, Tony; Kingery, Wade S; Clark, J David

    2014-11-01

    Complex regional pain syndrome (CRPS) is a painful, disabling, chronic condition whose etiology remains poorly understood. The recent suggestion that immunological mechanisms may underlie CRPS provides an entirely novel framework in which to study the condition and consider new approaches to treatment. Using a murine fracture/cast model of CRPS, we studied the effects of B-cell depletion using anti-CD20 antibodies or by performing experiments in genetically B-cell-deficient (μMT) mice. We observed that mice treated with anti-CD20 developed attenuated vascular and nociceptive CRPS-like changes after tibial fracture and 3 weeks of cast immobilization. In mice with established CRPS-like changes, the depletion of CD-20+ cells slowly reversed nociceptive sensitization. Correspondingly, μMT mice, deficient in producing immunoglobulin M (IgM), failed to fully develop CRPS-like changes after fracture and casting. Depletion of CD20+ cells had no detectable effects on nociceptive sensitization in a model of postoperative incisional pain, however. Immunohistochemical experiments showed that CD20+ cells accumulate near the healing fracture but few such cells collect in skin or sciatic nerves. On the other hand, IgM-containing immune complexes were deposited in skin and sciatic nerve after fracture in wild-type, but not in μMT fracture/cast, mice. Additional experiments demonstrated that complement system activation and deposition of membrane attack complexes were partially blocked by anti-CD20+ treatment. Collectively, our results suggest that CD20-positive B cells produce antibodies that ultimately support the CRPS-like changes in the murine fracture/cast model. Therapies directed at reducing B-cell activity may be of use in treating patients with CRPS.

  20. Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2015-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences, we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models ranging from low to high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. By leveraging data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.

  1. The Entropy and Complexity of Drift waves in a LAPTAG Plasma Physics Experiment

    NASA Astrophysics Data System (ADS)

    Birge-Lee, Henry; Gekelman, Walter; Pribyl, Patrick; Wise, Joe; Katz, Cami; Baker, Bob; Marmie, Ken; Thomas, Sam; Buckley-Bonnano, Samuel

    2015-11-01

    Drift waves grow from noise on a density gradient in a narrow (dia = 3 cm, L = 1.5 m) magnetized (B0z = 160 G) plasma column. A two-dimensional probe drive measured fluctuations in the plasma column in a plane transverse to the background magnetic field. Correlation techniques determined that the fluctuations were those of electrostatic drift waves. The time series data were used to generate the Bandt-Pompe/Shannon entropy, H, and the Jensen-Shannon complexity, CJS. C-H diagrams can be used to distinguish among deterministic chaos, random noise, stochastic processes, and simple waves, which makes them a powerful tool in nonlinear dynamics. The C-H diagram in this experiment reveals that the combination of drift waves and other background fluctuations is a deterministically chaotic system. The PDF of the time series, the wave spectra, and the spatial dependence of the entropy and wave complexity will be presented. LAPTAG is a university-high school alliance outreach program which has been in existence for over 20 years. Work done at BaPSF at UCLA and supported by NSF and DOE.
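
    A compact sketch of the two quantities named here: the Bandt-Pompe (permutation) entropy H and the Jensen-Shannon statistical complexity CJS, computed from the ordinal patterns of a time series. The embedding dimension and the test signals are illustrative choices:

```python
import numpy as np
from itertools import permutations
from math import factorial, log

def ordinal_distribution(x, d=4):
    """Probability of each ordinal (Bandt-Pompe) pattern of length d."""
    patterns = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        patterns[tuple(np.argsort(x[i:i + d]))] += 1
    p = np.array(list(patterns.values()), float)
    return p / p.sum()

def entropy_complexity(x, d=4):
    """Normalized Shannon entropy H and Jensen-Shannon complexity C_JS."""
    p = ordinal_distribution(x, d)
    n = factorial(d)
    u = np.full(n, 1.0 / n)                      # uniform reference
    S = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    H = S(p) / log(n)
    # Jensen-Shannon divergence between p and the uniform distribution,
    # normalized by its maximum possible value.
    js = S((p + u) / 2) - S(p) / 2 - S(u) / 2
    js_max = -0.5 * ((n + 1) / n * log(n + 1) - 2 * log(2 * n) + log(n))
    return H, js / js_max * H

# Illustrative check: chaotic logistic-map data and white noise land in
# different regions of the C-H plane.
x = np.empty(10000); x[0] = 0.4
for i in range(len(x) - 1):
    x[i + 1] = 4.0 * x[i] * (1 - x[i])
print("logistic map:", entropy_complexity(x))
print("white noise :", entropy_complexity(np.random.default_rng(0).normal(size=10000)))
```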

  2. Using Interactive 3D PDF for Exploring Complex Biomedical Data: Experiences and Solutions.

    PubMed

    Newe, Axel; Becker, Linda

    2016-01-01

    The Portable Document Format (PDF) is the most commonly used file format for the exchange of electronic documents. A lesser-known feature of PDF is the possibility to embed three-dimensional models and to display these models interactively with a qualified reader. This technology is well suited to present, to explore and to communicate complex biomedical data. This applies in particular for data which would suffer from a loss of information if it was reduced to a static two-dimensional projection. In this article, we present applications of 3D PDF for selected scholarly and clinical use cases in the biomedical domain. Furthermore, we present a sophisticated tool for the generation of respective PDF documents. PMID:27577484

  3. Complex Pathways in Folding of Protein G Explored by Simulation and Experiment

    PubMed Central

    Lapidus, Lisa J.; Acharya, Srabasti; Schwantes, Christian R.; Wu, Ling; Shukla, Diwakar; King, Michael; DeCamp, Stephen J.; Pande, Vijay S.

    2014-01-01

    The B1 domain of protein G has been a classic model system of folding for decades, the subject of numerous experimental and computational studies. Most of the experimental work has focused on whether the protein folds via an intermediate, but the evidence is mostly limited to relatively slow kinetic observations with a few structural probes. In this work we observe folding on the submillisecond timescale with microfluidic mixers using a variety of probes including tryptophan fluorescence, circular dichroism, and photochemical oxidation. We find that each probe yields different kinetics and compare these observations with a Markov State Model constructed from large-scale molecular dynamics simulations and find a complex network of states that yield different kinetics for different observables. We conclude that there are many folding pathways before the final folding step and that these paths do not have large free energy barriers. PMID:25140430
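
    For context, the Markov State Model step reduces long MD trajectories to jumps between discretized conformational states; transition probabilities are estimated from lag-time transition counts, and eigenvalues of the transition matrix give implied timescales. A toy sketch (state assignments are assumed given; in practice they come from clustering MD coordinates):

```python
import numpy as np

def markov_state_model(dtrajs, n_states, lag=10):
    """Estimate an MSM transition matrix from discrete trajectories.

    dtrajs: list of integer state sequences (one per MD trajectory).
    Counts transitions separated by `lag` frames, symmetrizes the count
    matrix (a simple way to enforce detailed balance), and row-normalizes.
    """
    C = np.zeros((n_states, n_states))
    for d in dtrajs:
        for i in range(len(d) - lag):
            C[d[i], d[i + lag]] += 1
    C = C + C.T
    return C / C.sum(axis=1, keepdims=True)

def implied_timescales(T, lag, dt=1.0):
    """t_i = -lag*dt / ln(lambda_i) for the nontrivial eigenvalues."""
    ev = np.sort(np.linalg.eigvals(T).real)[::-1]
    return -lag * dt / np.log(np.clip(ev[1:], 1e-12, 1 - 1e-12))

# Toy two-state trajectory with rare hops between metastable states:
rng = np.random.default_rng(0)
d = [0]
for _ in range(20000):
    d.append(d[-1] ^ (rng.random() < 0.005))   # flip with probability 0.005
T = markov_state_model([np.array(d)], 2, lag=10)
print(T)
print("implied timescales:", implied_timescales(T, lag=10))
```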

  4. Construction of Lyapunov functions for some models of infectious diseases in vivo: from simple models to complex models.

    PubMed

    Kajiwara, Tsuyoshi; Sasaki, Toru; Takeuchi, Yasuhiro

    2015-02-01

    We present a constructive method for Lyapunov functions for ordinary differential equation models of infectious diseases in vivo. We consider models derived from the Nowak-Bangham models. We construct Lyapunov functions for complex models using those of simpler models. In particular, we construct Lyapunov functions for models with an immune variable from those for models without an immune variable, and a Lyapunov function for a model with an absorption effect from that for a model without the absorption effect. We make the construction explicit for Lyapunov functions proposed previously, and present new results obtained with our method.
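
    As background for the type of function being constructed, the classical Volterra-type Lyapunov function for the basic in-vivo virus dynamics model is a standard result from the literature (shown here generically; it is not the paper's new construction):

```latex
% Basic virus dynamics model (target cells x, infected cells y, virus v):
%   x' = \lambda - d x - \beta x v,
%   y' = \beta x v - a y,
%   v' = k y - u v.
% Classical Volterra-type Lyapunov function for the infected
% equilibrium (x^*, y^*, v^*):
\begin{equation*}
W(x,y,v) = \left(x - x^* - x^*\ln\frac{x}{x^*}\right)
         + \left(y - y^* - y^*\ln\frac{y}{y^*}\right)
         + \frac{a}{k}\left(v - v^* - v^*\ln\frac{v}{v^*}\right),
\end{equation*}
% whose derivative along trajectories is nonpositive, establishing
% global stability of the infected equilibrium when it exists.
```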

  5. Model-Driven Design: Systematically Building Integrated Blended Learning Experiences

    ERIC Educational Resources Information Center

    Laster, Stephen

    2010-01-01

    Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…

  6. Atomic level insights into realistic molecular models of dendrimer-drug complexes through MD simulations

    NASA Astrophysics Data System (ADS)

    Jain, Vaibhav; Maiti, Prabal K.; Bharatam, Prasad V.

    2016-09-01

    Computational studies performed on dendrimer-drug complexes usually consider 1:1 stoichiometry, which is far from reality, since in experiments a larger number of drug molecules become encapsulated inside a dendrimer. In the present study, molecular dynamics (MD) simulations were implemented to characterize more realistic molecular models of dendrimer-drug complexes (1:n stoichiometry), in order to understand the effect of high drug loading on the structural properties and to unveil the atomistic-level details. For this purpose, possible inclusion complexes of the model drug Nateglinide (Ntg) (antidiabetic, belongs to Biopharmaceutics Classification System class II) with amine- and acetyl-terminated G4 poly(amidoamine) (G4 PAMAM(NH2) and G4 PAMAM(Ac)) dendrimers at neutral and low pH conditions are explored in this work. MD simulation analysis of the dendrimer-drug complexes revealed that the drug encapsulation efficiency of the G4 PAMAM(NH2) and G4 PAMAM(Ac) dendrimers at neutral pH was 6 and 5, respectively, while at low pH it was 12 and 13, respectively. Center-of-mass distance analysis showed that most of the drug molecules are located in the interior hydrophobic pockets of G4 PAMAM(NH2) at both pH conditions, while in the case of G4 PAMAM(Ac) most of them are distributed near the surface at neutral pH and in the interior hydrophobic pockets at low pH. Structural properties such as radius of gyration, shape, radial density distribution, and solvent accessible surface area of the dendrimer-drug complexes were also assessed and compared with those of the unloaded dendrimers. Further, binding energy calculations using the molecular mechanics Poisson-Boltzmann surface area approach revealed that the location of drug molecules in the dendrimer is not the decisive factor for the higher or lower binding affinity of the complex, but the charged state of dendrimer and drug, intermolecular interactions, pH-induced conformational changes, and surface groups of the dendrimer do play an
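
    For reference, the binding energies mentioned here follow the standard MM-PBSA decomposition (shown generically; term values and averaging details depend on the trajectory setup):

```latex
% MM-PBSA binding free energy of a dendrimer-drug complex:
\begin{align*}
\Delta G_{\mathrm{bind}} &= \langle G_{\mathrm{complex}}\rangle
  - \langle G_{\mathrm{dendrimer}}\rangle - \langle G_{\mathrm{drug}}\rangle,\\
G &= E_{\mathrm{MM}} + G_{\mathrm{PB}} + G_{\mathrm{SA}} - T S,
\end{align*}
% where E_MM is the gas-phase molecular-mechanics energy, G_PB the polar
% solvation term from the Poisson-Boltzmann equation, G_SA the nonpolar
% (surface-area) term, and TS the solute entropy contribution.
```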

  7. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail

    2015-04-01

    The research deals with the adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high-dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high-dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevance; and 2) adaptive GRNN applied to evaluate all possible models N [in the case of wind fields N=(2^13 -1)=8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictability of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, was studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and to model complex high-dimensional data efficiently, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, Applications and Software. EPFL Press
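
    As a concrete sketch of the estimator class, the following illustrates a Nadaraya-Watson/GRNN regressor with an anisotropic Gaussian kernel and the leave-one-out error used for adaptation; the per-feature bandwidths h are the tunable parameters (a very large bandwidth effectively switches a feature off). Function names and details are illustrative, not taken from the paper's software:

    ```python
    import numpy as np

    def grnn_predict(X_train, y_train, X_query, h):
        """Nadaraya-Watson estimate with per-feature bandwidths h (anisotropic kernel)."""
        # Squared scaled distances between query and training points: (m, n) matrix
        d2 = (((X_query[:, None, :] - X_train[None, :, :]) / h) ** 2).sum(axis=2)
        w = np.exp(-0.5 * d2)                               # Gaussian kernel weights
        return (w @ y_train) / np.maximum(w.sum(axis=1), 1e-300)

    def loo_error(X, y, h):
        """Leave-one-out mean squared error for a bandwidth vector h."""
        idx = np.arange(len(y))
        errs = [(y[i] - grnn_predict(X[idx != i], y[idx != i], X[i:i + 1], h)[0]) ** 2
                for i in idx]
        return float(np.mean(errs))
    ```

    Minimizing loo_error over h (e.g. by grid search or a gradient-free optimizer) yields the adaptive anisotropic variant; ranking features by their optimized bandwidths gives the relevance ordering described above.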

  8. Radiative transfer model validations during the First ISLSCP Field Experiment

    NASA Technical Reports Server (NTRS)

    Frouin, Robert; Breon, Francois-Marie; Gautier, Catherine

    1990-01-01

    Two simple radiative transfer models, the 5S model based on Tanre et al. (1985, 1986) and the wide-band model of Morcrette (1984), are validated by comparing their outputs with results obtained during the First ISLSCP Field Experiment from concomitant radiosonde, aerosol turbidity, and radiation measurements and sky photographs. Results showed that the 5S model overestimated the short-wave irradiance by 13.2 W/sq m, whereas the Morcrette model underestimated the long-wave irradiance by 7.4 W/sq m.

  9. Electronic Transitions as a Probe of Tetrahedral versus Octahedral Coordination in Nickel(II) Complexes: An Undergraduate Inorganic Chemistry Experiment.

    ERIC Educational Resources Information Center

    Filgueiras, Carlos A. L.; Carazza, Fernando

    1980-01-01

    Discusses procedures, theoretical considerations, and results of an experiment involving the preparation of a tetrahedral nickel(II) complex and its transformation into an octahedral species. Suggests that fundamental aspects of coordination chemistry can be demonstrated by simple experiments performed in introductory level courses. (Author/JN)

  10. Identification of Copper(II) Complexes in Aqueous Solution by Electron Spin Resonance: An Undergraduate Coordination Chemistry Experiment.

    ERIC Educational Resources Information Center

    Micera, G.; And Others

    1984-01-01

    Background, procedures, and results are provided for an experiment which examines, through electron spin resonance spectroscopy, complex species formed by cupric and 2,6-dihydroxybenzoate ions in aqueous solutions. The experiment is illustrative of several aspects of inorganic and coordination chemistry, including the identification of species…

  11. Designing Experiments to Discriminate Families of Logic Models

    PubMed Central

    Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G.; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito

    2015-01-01

    Logic models of signaling pathways are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subject to noise. As a result, the learning process results in a family of feasible logical networks rather than in a single model. This family is composed of logic models implementing different internal wirings for the system, and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input–output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with a minimal number of experiments. The methods are applied to signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) within 15% of those obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration. PMID:26389116
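
    The Answer Set Programming machinery of the paper is not reproduced here, but the design criterion can be illustrated with a toy family of Boolean models: score each candidate perturbation by how much the feasible models disagree on the predicted readout, and perform the most discriminating experiment first. The models and perturbations below are hypothetical:

    ```python
    from itertools import product

    # A toy family of feasible Boolean models with different internal wirings,
    # each mapping two perturbation inputs (a, b) to one predicted readout.
    models = [
        lambda a, b: a and b,
        lambda a, b: a or b,
        lambda a, b: a and not b,
    ]

    def disagreement(perturbation):
        """Number of distinct predictions across the family; >1 discriminates models."""
        return len({m(*perturbation) for m in models})

    experiments = list(product([False, True], repeat=2))
    best = max(experiments, key=disagreement)
    print(best, disagreement(best))   # the most informative perturbation to run next
    ```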

  13. Thermophysical Model of S-complex NEAs: 1627 Ivar

    NASA Astrophysics Data System (ADS)

    Crowell, Jenna L.; Howell, Ellen S.; Magri, Christopher; Fernandez, Yan R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J.

    2015-11-01

    We present updates to the thermophysical model of asteroid 1627 Ivar. Ivar is an Amor-class near-Earth asteroid (NEA) with a taxonomic type of Sqw [1] and a rotation rate of 4.795162 ± 5.4 × 10^-6 hours [2]. In 2013, our group observed Ivar in radar, in CCD lightcurves, and in the near-IR's reflected and thermal regimes (0.8 - 4.1 µm) using the Arecibo Observatory's 2380 MHz radar, the Palmer Divide Station's 0.35 m telescope, and the SpeX instrument at the NASA IRTF, respectively. Using these radar and lightcurve data, we generated a detailed shape model of Ivar using the software SHAPE [3,4]. Our shape model reveals more surface detail compared to earlier models [5], and we found Ivar to be an elongated asteroid with maximum extents along the three body-fixed coordinates of 12 x 11.76 x 6 km. For our thermophysical modeling, we have used SHERMAN [6,7] with input parameters such as the asteroid's IR emissivity, optical scattering law, and thermal inertia, in order to complete thermal computations based on our shape model and the known spin state. We then create synthetic near-IR spectra that can be compared to our observed spectra, which cover a wide range of Ivar's rotational longitudes and viewing geometries. As has been noted [6,8], the use of an accurate shape model is often crucial for correctly interpreting multi-epoch thermal emission observations. We will present what SHERMAN has let us determine about the reflective, thermal, and surface properties of Ivar that best reproduce our spectra. From our derived best-fit thermal parameters, we will learn more about the regolith, surface properties, and heterogeneity of Ivar and how those properties compare to those of other S-complex asteroids. References: [1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Crowell, J. et al. 2015, LPSC 46 [3] Magri C. et al. 2007, Icarus 186, 152-177 [4] Crowell, J. et al. 2014, AAS/DPS 46 [5] Kaasalainen, M. et al. 2004, Icarus 167, 178-196 [6] Crowell, J. et

  14. Creation of a simplified benchmark model for the neptunium sphere experiment

    SciTech Connect

    Mosteller, R. D.; Loaiza, D. J.; Sanchez, R. G.

    2004-01-01

    Although neptunium is produced in significant amounts by nuclear power reactors, its critical mass is not well known. In addition, sizeable uncertainties exist for its cross sections. As an important step toward resolution of these issues, a critical experiment was conducted in 2002 at the Los Alamos Critical Experiments Facility. In the experiment, a 6-kg sphere of ²³⁷Np was surrounded by nested hemispherical shells of highly enriched uranium. The shells were required in order to reach a critical condition. Subsequently, a detailed model of the experiment was developed. This model faithfully reproduces the components of the experiment, but it is geometrically complex. Furthermore, the isotopics analysis upon which that model is based omits nearly 1% of the mass of the sphere. A simplified benchmark model has been constructed that retains all of the neutronically important aspects of the detailed model and substantially reduces the computer resources required for the calculation. The reactivity impact of each of the simplifications is quantified, including the effect of the missing mass. A complete set of specifications for the benchmark is included in the full paper. Both the detailed and simplified benchmark models underpredict k_eff by more than 1% Δk. This discrepancy supports the suspicion that better cross sections are needed for ²³⁷Np.

  15. Molybdate transport in a chemically complex aquifer: Field measurements compared with solute-transport model predictions

    USGS Publications Warehouse

    Stollenwerk, K.G.

    1998-01-01

    A natural-gradient tracer test was conducted in an unconfined sand and gravel aquifer on Cape Cod, Massachusetts. Molybdate was included in the injectate to study the effects of variable groundwater chemistry on its aqueous distribution and to evaluate the reliability of laboratory experiments for identifying and quantifying reactions that control the transport of reactive solutes in groundwater. Transport of molybdate in this aquifer was controlled by adsorption. The amount adsorbed varied with aqueous chemistry that changed with depth as freshwater recharge mixed with a plume of sewage-contaminated groundwater. Molybdate adsorption was strongest near the water table where pH (5.7) and the concentration of the competing solutes phosphate (2.3 micromolar) and sulfate (86 micromolar) were low. Adsorption of molybdate decreased with depth as pH increased to 6.5, phosphate increased to 40 micromolar, and sulfate increased to 340 micromolar. A one-site diffuse-layer surface-complexation model and a two-site diffuse-layer surface-complexation model were used to simulate adsorption. Reactions and equilibrium constants for both models were determined in laboratory experiments and used in the reactive-transport model PHAST to simulate the two-dimensional transport of molybdate during the tracer test. No geochemical parameters were adjusted in the simulation to improve the fit between model and field data. Both models simulated the travel distance of the molybdate cloud to within 10% during the 2-year tracer test; however, the two-site diffuse-layer model more accurately simulated the molybdate concentration distribution within the cloud.

  16. Cognitive Modeling of Video Game Player User Experience

    NASA Technical Reports Server (NTRS)

    Bohil, Corey J.; Biocca, Frank A.

    2010-01-01

    This paper argues for the use of cognitive modeling to gain a detailed and dynamic look into user experience during game play. Applying cognitive models to game play data can help researchers understand a player's attentional focus, memory status, learning state, and decision strategies (among other things) as these cognitive processes occurred throughout game play. This is a stark contrast to the common approach of trying to assess the long-term impact of games on cognitive functioning after game play has ended. We describe what cognitive models are, what they can be used for, and how game researchers could benefit by adopting these methods. We also provide details of a single model, based on decision field theory, that has been successfully applied to data sets from memory, perception, and decision-making experiments, and has recently found application in real-world scenarios. We examine possibilities for applying this model to game-play data.

  17. Amblypygids: Model Organisms for the Study of Arthropod Navigation Mechanisms in Complex Environments?

    PubMed Central

    Wiegmann, Daniel D.; Hebets, Eileen A.; Gronenberg, Wulfila; Graving, Jacob M.; Bingman, Verner P.

    2016-01-01

    Navigation is an ideal behavioral model for the study of sensory system integration and the neural substrates associated with complex behavior. For this broader purpose, however, it may be profitable to develop new model systems that are both tractable and sufficiently complex to ensure that information derived from a single sensory modality and path integration are inadequate to locate a goal. Here, we discuss some recent discoveries related to navigation by amblypygids, nocturnal arachnids that inhabit the tropics and sub-tropics. Nocturnal displacement experiments under the cover of a tropical rainforest reveal that these animals possess navigational abilities that are reminiscent, albeit on a smaller spatial scale, of true-navigating vertebrates. Specialized legs, called antenniform legs, which possess hundreds of olfactory and tactile sensory hairs, and vision appear to be involved. These animals also have enormous mushroom bodies, higher-order brain regions that, in insects, integrate contextual cues and may be involved in spatial memory. In amblypygids, the complexity of a nocturnal rainforest may impose navigational challenges that favor the integration of information derived from multimodal cues. Moreover, the movement of these animals is easily studied in the laboratory and putative neural integration sites of sensory information can be manipulated. Thus, amblypygids could serve as model organisms for the discovery of neural substrates associated with a unique and potentially sophisticated navigational capability. The diversity of habitats in which amblypygids are found also offers an opportunity for comparative studies of sensory integration and ecological selection pressures on navigation mechanisms. PMID:27014008

  18. An Aqueous Thermodynamic Model for the Complexation of Nickel with EDTA Valid to high Base Concentration

    SciTech Connect

    Felmy, Andrew R.; Qafoku, Odeta

    2004-09-01

    An aqueous thermodynamic model is developed which accurately describes the effects of high base concentration on the complexation of Ni2+ by ethylenedinitrilotetraacetic acid (EDTA). The model is developed primarily from extensive data on the solubility of Ni(OH)2(c) in the presence of EDTA, and in the presence and absence of Ca2+ as the competing metal ion. The solubility data for Ni(OH)2(c) were obtained in solutions ranging in NaOH concentration from 0.01 to 11.6m, and in Ca2+ concentrations extending to saturation with respect to portlandite, Ca(OH)2. Owing to the inert nature of the Ni-EDTA complexation reactions, solubility experiments were approached from both the oversaturation and undersaturation directions and over time frames extending to 413 days. The final aqueous thermodynamic model is based upon the equations of Pitzer, accurately predicts the observed solubilities at concentrations as high as 11.6m NaOH, and is consistent with UV-Vis spectroscopic studies of the complexes in solution.

  19. The complexity of model checking for belief revision and update

    SciTech Connect

    Liberatore, P.; Schaerf, M.

    1996-12-31

    One of the main challenges in the formal modeling of common-sense reasoning is the ability to cope with the dynamic nature of the world. Among the approaches put forward to address this problem are belief revision and update. Given a knowledge base T, representing our knowledge of the "state of affairs" of the world of interest, it is possible that we are led to trust another piece of information P, possibly inconsistent with the old one T. The aim of revision and update operators is to characterize the revised knowledge base T′ that incorporates the new formula P into the old one T while preserving consistency and, at the same time, avoiding the loss of too much information in the process. In this paper we study the computational complexity of one of the main computational problems of belief revision and update: deciding if an interpretation M is a model of the revised knowledge base.
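
    As a concrete instance of the decision problem studied, the sketch below checks whether an interpretation M is a model of T revised by P under Dalal's operator (models of P at minimum Hamming distance from models of T), one of the standard revision operators in this literature. The brute-force enumeration is exponential in the number of atoms, in keeping with the complexity-theoretic focus of the paper; the encoding of formulas as Python callables is purely illustrative:

    ```python
    from itertools import product

    def models_of(formula, n):
        """All assignments (tuples of n bools) satisfying a formula given as a callable."""
        return [v for v in product([False, True], repeat=n) if formula(*v)]

    def dalal_revised_models(T, P, n):
        mt, mp = models_of(T, n), models_of(P, n)
        dist = lambda u, v: sum(a != b for a, b in zip(u, v))  # Hamming distance
        dmin = min(dist(u, v) for u in mt for v in mp)
        return [v for v in mp if any(dist(u, v) == dmin for u in mt)]

    # T = x and y; new information P = not x. Is M = (x=False, y=True) a model of T * P?
    T = lambda x, y: x and y
    P = lambda x, y: not x
    print((False, True) in dalal_revised_models(T, P, 2))  # True: flip x, keep y
    ```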

  20. Reliable modeling of the electronic spectra of realistic uranium complexes

    NASA Astrophysics Data System (ADS)

    Tecmer, Paweł; Govind, Niranjan; Kowalski, Karol; de Jong, Wibe A.; Visscher, Lucas

    2013-07-01

    We present an EOMCCSD (equation-of-motion coupled cluster with singles and doubles) study of excited states of the small [UO2]2+ and [UO2]+ model systems as well as the larger U(VI)O2(saldien) complex. In addition, the triples contributions within the EOMCCSDT and CR-EOMCCSD(T) (completely renormalized EOMCCSD with non-iterative triples) approaches for the [UO2]2+ and [UO2]+ systems, as well as the active-space variant of the CR-EOMCCSD(T) method, CR-EOMCCSd(t), for the U(VI)O2(saldien) molecule, are investigated. The coupled cluster data were employed as a benchmark to choose the most appropriate exchange-correlation functional for subsequent time-dependent density functional theory (TD-DFT) studies of the transition energies for closed-shell species. Furthermore, the influence of the saldien ligands on the electronic structure and excitation energies of the [UO2]+ molecule is discussed. The electronic excitations, as well as their oscillator dipole strengths, modeled with the TD-DFT approach using the CAM-B3LYP exchange-correlation functional for [U(V)O2(saldien)]- with explicit inclusion of two dimethyl sulfoxide molecules, are in good agreement with the experimental data of Takao et al. [Inorg. Chem. 49, 2349 (2010), 10.1021/ic902225f].

  1. Electromagnetic modelling of Ground Penetrating Radar responses to complex targets

    NASA Astrophysics Data System (ADS)

    Pajewski, Lara; Giannopoulos, Antonis

    2014-05-01

    This work deals with the electromagnetic modelling of composite structures for Ground Penetrating Radar (GPR) applications. It was developed within the Short-Term Scientific Mission ECOST-STSM-TU1208-211013-035660, funded by COST Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar". The authors define a set of test concrete structures, hereinafter called cells. The size of each cell is 60 x 100 x 18 cm and the content varies in growing complexity, from a simple cell with a few rebars of different diameters embedded in concrete at increasing depths, to a final cell with a quite complicated pattern including a layer of tendons between two overlying meshes of rebars. Other cells, of intermediate complexity, contain PVC ducts (air-filled or hosting rebars), steel objects commonly used in civil engineering (a pipe, an angle bar, a box section, and a U-channel), as well as void and honeycombing defects. One of the cells has a steel mesh embedded in it, overlying two rebars placed diagonally across the corners of the structure. Two cells include a couple of rebars bent into a right angle and placed on top of each other, with a square/round circle lying at the base of the concrete slab. Inspiration for some of these cells is taken from the very interesting experimental work presented in Ref. [1]. For each cell, a subset of models of growing complexity is defined, starting from a simple representation of the cell and ending with a more realistic one. In particular, the model complexity increases from the geometrical point of view, as well as in terms of how the constitutive parameters of the involved media and the GPR antennas are described. Some cells can be simulated in both two and three dimensions; the concrete slab can be approximated as a finite-thickness layer of infinite extension on the transverse plane, thus neglecting how edges affect radargrams, or else its finite size can be fully taken into account. The permittivity of concrete can be

  3. Complex partial epileptic-like experiences in university students and practitioners of Dharmakaya in Thailand: comparison with Canadian university students.

    PubMed

    Murphy, T; Persinger, M A

    2001-08-01

    We tested the hypothesis that individuals who frequently practice meditation within another culture whose assumptions explicitly endorse this practice should exhibit more frequent and varied experiences associated with complex partial epilepsy (without the seizures), as inferred by the Personal Philosophy Inventory and Roberts' Questionnaire for the Epileptic Spectrum Disorder. 80 practitioners of Dharma meditation and 24 university students in Thailand were compared with 76 students from first-year psychology courses in a Canadian university. Although there were large significant differences for some items and clusters of items, expected as a result of cultural differences, there were no statistically significant differences between the two populations in the proportions of complex partial epileptic-like experiences or their frequency of occurrence. There were no strong or consistent correlations between the history of meditation, within the sample who practiced Dharma meditation, and these experiences. These results suggest that complex partial epileptic-like experiences may be a normal feature of the human species.

  4. Modeling a ponded infiltration experiment at Yucca Mountain, NV

    SciTech Connect

    Hudson, D.B.; Guertal, W.R.; Flint, A.L.

    1994-12-31

    Yucca Mountain, Nevada is being evaluated as a potential site for a geologic repository for high-level radioactive waste. As part of the site characterization activities at Yucca Mountain, a field-scale ponded infiltration experiment was done to help characterize the hydraulic and infiltration properties of a layered desert alluvium deposit. Calcium carbonate accumulation and cementation, heterogeneous layered profiles, high evapotranspiration, low precipitation, and rocky soil make the surface difficult to characterize. The effects of the strong morphological horizonation on the infiltration processes, the suitability of measured hydraulic properties, and the usefulness of ponded infiltration experiments in site characterization work were of interest. One-dimensional and two-dimensional radial-flow numerical models were used to help interpret the results of the ponding experiment. The objective of this study was to evaluate the results of a ponded infiltration experiment done around borehole UE25 UZN #85 (N85) at Yucca Mountain, NV. The effects of morphological horizons on the infiltration processes, lateral flow, and measured soil hydraulic properties were studied. The evaluation was done by numerically modeling the results of a field ponded infiltration experiment. A comparison of the experimental and modeled results was used to indicate qualitatively the degree to which the infiltration processes and the hydraulic properties are understood. Results of the field characterization, soil characterization, borehole geophysics, and the ponding experiment are presented in a companion paper.
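
    The study's one- and two-dimensional numerical models are not specified in the abstract; as a minimal one-dimensional stand-in, the classical Green-Ampt description of ponded infiltration conveys the kind of calculation involved. All parameter values below are illustrative, not site values for borehole N85:

    ```python
    # Green-Ampt ponded infiltration, integrated with a simple explicit time step.
    Ks = 1e-5        # saturated hydraulic conductivity (m/s), illustrative
    psi = 0.2        # wetting-front suction head (m), illustrative
    dtheta = 0.25    # moisture deficit (saturated minus initial content), illustrative

    def cumulative_infiltration(t_end, dt=1.0):
        F, t = 1e-4, 0.0                        # cumulative infiltration (m), seeded small
        while t < t_end:
            f = Ks * (1.0 + psi * dtheta / F)   # Green-Ampt infiltration rate (m/s)
            F += f * dt
            t += dt
        return F

    print(f"cumulative infiltration after 1 h: {cumulative_infiltration(3600.0):.3f} m")
    ```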

  5. Neutral null models for diversity in serial transfer evolution experiments.

    PubMed

    Harpak, Arbel; Sella, Guy

    2014-09-01

    Evolution experiments with microorganisms coupled with genome-wide sequencing now allow for the systematic study of population genetic processes under a wide range of conditions. In learning about these processes in natural, sexual populations, neutral models that describe the behavior of diversity and divergence summaries have played a pivotal role. It is therefore natural to ask whether neutral models, suitably modified, could be useful in the context of evolution experiments. Here, we introduce coalescent models for polymorphism and divergence under the most common experimental evolution assay, a serial transfer experiment. This relatively simple setting allows us to address several issues that could affect diversity patterns in evolution experiments, whether selection is operating or not: the transient behavior of neutral polymorphism in an experiment beginning from a single clone, the effects of randomness in the timing of cell division and noisiness in population size in the dilution stage. In our analyses and discussion, we emphasize the implications for experiments aimed at measuring diversity patterns and making inferences about population genetic processes based on these measurements.
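
    The coalescent predictions of such null models can be checked against forward simulation. The sketch below is a minimal neutral serial-transfer simulation under stated simplifying assumptions (discrete Wright-Fisher regrowth generations, clonal reproduction, periodic 1:100 dilution bottlenecks, infinite-alleles neutral mutation), tracking how diversity accumulates from a single founding clone; all parameter values are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def serial_transfer(N=10_000, dilution=100, growth_gens=7, cycles=20, mu=1e-3):
        pop = np.zeros(N, dtype=int)     # lineage labels; experiment starts from one clone
        next_label = 1
        for c in range(cycles):
            pop = rng.choice(pop, size=N // dilution)            # dilution bottleneck
            for _ in range(growth_gens):                         # regrowth by doubling
                pop = rng.choice(pop, size=min(2 * len(pop), N))
                for i in np.where(rng.random(len(pop)) < mu)[0]:
                    pop[i] = next_label                          # new neutral mutation
                    next_label += 1
            _, counts = np.unique(pop, return_counts=True)
            p = counts / counts.sum()
            print(f"cycle {c + 1}: heterozygosity = {1 - (p ** 2).sum():.3f}")

    serial_transfer()
    ```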

  6. Cryogenic Tank Modeling for the Saturn AS-203 Experiment

    NASA Technical Reports Server (NTRS)

    Grayson, Gary D.; Lopez, Alfredo; Chandler, Frank O.; Hastings, Leon J.; Tucker, Stephen P.

    2006-01-01

    A computational fluid dynamics (CFD) model is developed for the Saturn S-IVB liquid hydrogen (LH2) tank to simulate the 1966 AS-203 flight experiment. This significant experiment is the only known, adequately instrumented, low-gravity, cryogenic self-pressurization test that is well suited for CFD model validation. A 4000-cell, axisymmetric model predicts motion of the LH2 surface, including boil-off and thermal stratification in the liquid and gas phases. The model is based on a modified version of the commercially available FLOW3D software. During the experiment, heat enters the LH2 tank through the tank forward dome, side wall, aft dome, and common bulkhead. In both model and test, the liquid and gases thermally stratify in the low-gravity natural convection environment. LH2 boils at the free surface, which in turn increases the pressure within the tank during the 5360-second experiment. The Saturn S-IVB tank model is shown to accurately simulate the self-pressurization and thermal stratification in the 1966 AS-203 test. The average predicted pressurization rate is within 4% of the pressure rise rate suggested by the test data. Ullage temperature results are also in good agreement with the test, where the model predicts an ullage temperature rise rate within 6% of the measured data. The model is based on first principles only and includes no adjustments to bring the predictions closer to the test data. Although quantitative model validation is achieved for one specific case, a significant step is taken towards demonstrating general use of CFD for low-gravity cryogenic fluid modeling.

  7. Earthquake nucleation mechanisms and periodic loading: Models, Experiments, and Observations

    NASA Astrophysics Data System (ADS)

    Dahmen, K.; Brinkman, B.; Tsekenis, G.; Ben-Zion, Y.; Uhl, J.

    2010-12-01

    The project has two main goals: (a) improve the understanding of how earthquakes are nucleated, with specific focus on the seismic response to periodic stresses (such as tidal or seasonal variations); (b) use the results of (a) to infer the possible existence of precursory activity before large earthquakes. A number of mechanisms have been proposed for the nucleation of earthquakes, including frictional nucleation (Dieterich 1987) and fracture (Lockner 1999, Beeler 2003). We study the relation between the observed rates of triggered seismicity and the period and amplitude of cyclic loadings, and whether the observed seismic activity in response to periodic stresses can be used to identify the correct nucleation mechanism (or combination of mechanisms). A generalized version of the Ben-Zion and Rice model for disordered fault zones and results from related recent studies on dislocation dynamics and magnetization avalanches in slowly magnetized materials are used in the analysis (Ben-Zion et al. 2010; Dahmen et al. 2009). The analysis makes predictions for the statistics of macroscopic failure events of sheared materials in the presence of added cyclic loading, as a function of the period, amplitude, and noise in the system. The employed tools include analytical methods from statistical physics, the theory of phase transitions, and numerical simulations. The results will be compared to laboratory experiments and observations. References: Beeler, N.M., D.A. Lockner (2003). Why earthquakes correlate weakly with the solid Earth tides: effects of periodic stress on the rate and probability of earthquake occurrence. J. Geophys. Res.-Solid Earth 108, 2391-2407. Ben-Zion, Y. (2008). Collective Behavior of Earthquakes and Faults: Continuum-Discrete Transitions, Evolutionary Changes and Corresponding Dynamic Regimes, Rev. Geophysics, 46, RG4006, doi:10.1029/2008RG000260. Ben-Zion, Y., Dahmen, K. A. and J. T. Uhl (2010). A unifying phase diagram for the dynamics of sheared solids

  8. Model slope infiltration experiments for shallow landslides early warning

    NASA Astrophysics Data System (ADS)

    Damiano, E.; Greco, R.; Guida, A.; Olivares, L.; Picarelli, L.

    2009-04-01

    simple empirical models [Versace et al., 2003] based on correlations between some features of rainfall records (cumulated height, duration, season, etc.) and the corresponding observed landslides. Laboratory experiments on instrumented small-scale slope models represent an effective way to provide data sets [Eckersley, 1990; Wang and Sassa, 2001] useful for building more complex models of landslide-triggering prediction. At the Geotechnical Laboratory of C.I.R.I.AM. an instrumented flume is available to investigate the mechanics of landslides in unsaturated deposits of granular soils [Olivares et al. 2003; Damiano, 2004; Olivares et al., 2007]. In the flume, a model slope is reconstituted by a moist-tamping technique and subjected to an artificial uniform rainfall until failure occurs. The state of stress and strain of the slope is monitored during the entire test, from the infiltration process through the early post-failure stage: the monitoring system consists of several mini-tensiometers placed at different locations and depths to measure suction, mini-transducers to measure positive pore pressures, laser sensors to measure settlements of the ground surface, and high-definition video cameras to obtain, through dedicated PIV software, the overall horizontal displacement field. In addition, TDR sensors, used with an innovative technique [Greco, 2006], allow reconstruction of the water content profile of the soil along the entire thickness of the investigated deposit and continuous monitoring of its changes during infiltration. In this paper a series of laboratory tests carried out on model slopes in granular pyroclastic soils, taken from the mountainous area north-east of Napoli, is presented. The experimental results demonstrate the completeness of the information provided by the various sensors installed. In particular, very useful information is given by the coupled measurements of soil water content by TDR and suction by tensiometers. Knowledge of

  9. Atmospheric Modelling for Air Quality Study over the complex Himalayas

    NASA Astrophysics Data System (ADS)

    Surapipith, Vanisa; Panday, Arnico; Mukherji, Aditi; Banmali Pradhan, Bidya; Blumer, Sandro

    2014-05-01

    An atmospheric modelling system has been set up at the International Centre for Integrated Mountain Development (ICIMOD) for the assessment of air quality across the Himalayan mountain ranges. The Weather Research and Forecasting (WRF) model version 3.5 has been implemented over a regional domain stretching 4995 x 4455 km2, centred at Ichhyakamana, ICIMOD's newly established mountain-peak station (1860 m) in central Nepal, and covering terrain from sea level to Everest (8848 m). Simulation is carried out for the winter period December 2012 to February 2013, coinciding with the SusKat intensive field campaign, during which at least 7 super stations collected meteorological and chemical parameters at various sites. The very complex terrain requires a high horizontal resolution (1 x 1 km2), which is achieved by nesting the domain of interest, e.g. the Kathmandu Valley, into 3 coarser ones (27, 9, and 3 km resolution). Model validation is performed against the field data as well as satellite data, and the challenge of capturing the necessary atmospheric processes is discussed, before moving forward with the fully coupled chemistry module (WRF-Chem), with local and regional emission databases as input. The effort aims at a better understanding of the atmospheric processes and the air quality impact on the mountain population, as well as the impact of long-range transport, particularly of black carbon aerosol deposition, on the radiative budget over the Himalayan glaciers. The higher rate of snowcap melting and the shrinkage of permafrost noticed by glaciologists are a concern. Better prediction will supply crucial information for forming proper mitigation and adaptation strategies to protect lives across the Himalayas in a changing climate.

  10. Experience of Time Passage: Phenomenology, Psychophysics, and Biophysical Modelling

    NASA Astrophysics Data System (ADS)

    Wackermann, Jiří

    2005-10-01

    The experience of time's passing appears, from the 1st-person perspective, to be a primordial subjective experience, seemingly inaccessible to 3rd-person accounts of time perception (psychophysics, cognitive psychology). In our analysis of the 'dual klepsydra' model of reproduction of temporal durations, time passage occurs as a cognitive construct, based upon a more elementary ('proto-cognitive') function of the psychophysical organism. This conclusion contradicts the common concept of 'subjective' or 'psychological' time as the reading of an 'internal clock'. Our study shows how phenomenological, experimental and modelling approaches can be fruitfully combined.

  11. Searching for Drug Synergy in Complex Dose-Response Landscapes Using an Interaction Potency Model.

    PubMed

    Yadav, Bhagwan; Wennerberg, Krister; Aittokallio, Tero; Tang, Jing

    2015-01-01

    Rational design of multi-targeted drug combinations is a promising strategy to tackle the drug resistance problem for many complex disorders. A drug combination is usually classified as synergistic or antagonistic, depending on the deviation of the observed combination response from the expected effect calculated based on a reference model of non-interaction. The existing reference models were proposed originally for low-throughput drug combination experiments, which makes their assumptions often incompatible with the complex drug interaction patterns across various dose pairs that are typically observed in large-scale dose-response matrix experiments. To address these limitations, we proposed a novel reference model, named zero interaction potency (ZIP), which captures the drug interaction relationships by comparing the change in the potency of the dose-response curves between individual drugs and their combinations. We utilized a delta score to quantify the deviation from the expectation of zero interaction, and proved that a delta score value of zero implies both probabilistic independence and dose additivity. Using data from a large-scale anticancer drug combination experiment, we demonstrated empirically how the ZIP scoring approach captures the experimentally confirmed drug synergy while keeping the false positive rate at a low level. Further, rather than relying on a single parameter to assess drug interaction, we proposed the use of an interaction landscape over the full dose-response matrix to identify and quantify synergistic and antagonistic dose regions. The interaction landscape offers an increased power to differentiate between various classes of drug combinations, and may therefore provide an improved means for understanding their mechanisms of action toward clinical translation.
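
    A simplified version of the delta-score idea can be written down directly (the full ZIP model additionally fits logistic dose-response curves and compares potency shifts). Assuming responses are fractional inhibitions in [0, 1], the zero-interaction expectation at each dose pair is the probabilistic-independence surface, and the deviation from it gives the interaction landscape; the numbers below are illustrative:

    ```python
    import numpy as np

    def delta_landscape(mono_a, mono_b, combo):
        """mono_a: (m,) responses of drug A alone; mono_b: (n,) of drug B alone;
        combo: (m, n) observed combination responses. Returns mean delta and landscape."""
        expected = mono_a[:, None] + mono_b[None, :] - mono_a[:, None] * mono_b[None, :]
        delta = combo - expected        # >0 synergy, <0 antagonism, per dose pair
        return delta.mean(), delta

    a = np.array([0.10, 0.30, 0.60])                  # drug A monotherapy responses
    b = np.array([0.20, 0.50])                        # drug B monotherapy responses
    obs = np.array([[0.35, 0.70], [0.55, 0.85], [0.75, 0.95]])
    score, landscape = delta_landscape(a, b, obs)
    print(f"mean delta = {score:+.3f}")               # positive: net synergy
    ```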

  12. Determination of the levitation limits of dust particles within the sheath in complex plasma experiments

    SciTech Connect

    Douglass, Angela; Land, Victor; Qiao Ke; Matthews, Lorin; Hyde, Truell

    2012-01-15

    Experiments are performed in which dust particles are levitated at varying heights above the powered electrode in a radio frequency plasma discharge by changing the discharge power. The trajectories of particles dropped from the top of the discharge chamber are used to reconstruct the vertical electric force acting on the particles. The resulting data, together with the results from a self-consistent fluid model, are used to determine the lower levitation limit for dust particles in the discharge and the approximate height above the lower electrode where quasineutrality is attained, locating the sheath edge. These results are then compared with current sheath models. It is also shown that particles levitated within a few electron Debye lengths of the sheath edge are located outside the linearly increasing portion of the electric field.
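
    The trajectory-to-force reconstruction admits a compact sketch: differentiate the tracked height of a dropped particle twice to obtain its acceleration, multiply by the particle mass for the total force, and subtract gravity to leave the vertical electric force (drag and other forces are neglected here; the actual analysis and the fluid-model comparison are more involved). All quantities below are illustrative:

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def electric_force_profile(z, t, m):
        """z: tracked heights (m, positive up) at times t (s); m: particle mass (kg)."""
        v = np.gradient(z, t)          # finite-difference velocity
        a = np.gradient(v, t)          # finite-difference acceleration
        return m * a + m * G           # total force m*a minus gravity (-m*g)

    # Sanity check on a synthetic free-fall segment: electric force should be ~0.
    t = np.linspace(0.0, 0.2, 50)
    z = 0.05 - 0.5 * G * t ** 2
    print(electric_force_profile(z, t, m=1e-12)[:3])   # ~[0, 0, 0]
    ```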

  13. Modeling a set of heavy oil aqueous pyrolysis experiments

    SciTech Connect

    Thorsness, C.B.; Reynolds, J.G.

    1996-11-01

    Aqueous pyrolysis experiments, aimed at mild upgrading of heavy oil, were analyzed using various computer models. The primary focus of the analysis was the pressure history of the closed autoclave reactors obtained during heating of the autoclave to the desired reaction temperatures. The models used included a means of estimating nonideal behavior of primary components with regard to vapor-liquid equilibrium. The modeling indicated that to match measured autoclave pressures, which often were well below the vapor pressure of water at a given temperature, it was necessary to incorporate water solubility in the oil phase and an activity model for the water in the oil phase which reduced its fugacity below that of pure water. Analysis also indicated that the mild to moderate upgrading of the oil which occurred in experiments that reached 400°C or more using an Fe(III) 2-ethylhexanoate catalyst could be reasonably well characterized by a simple first-order rate constant of 1.7x10^8 exp(-20000/T) s^-1. Both gas production and API gravity increase were characterized by this rate constant. The models were able to match the complete pressure history of the autoclave experiments fairly well with relatively simple equilibrium models. However, a consistent lower-than-measured buildup in pressure at peak temperatures was noted in the model calculations. This phenomenon was tentatively attributed to an increase in the amount of water entering the vapor phase caused by a change in its activity in the oil phase.
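
    Evaluating the quoted rate constant at the reaction temperature shows that the implied timescale is consistent with autoclave runs lasting several hours; a quick check:

    ```python
    import math

    def k(T):                                   # first-order rate constant, s^-1
        return 1.7e8 * math.exp(-20000.0 / T)   # as reported above, T in kelvin

    T = 400.0 + 273.15                          # 400 °C
    rate = k(T)
    print(f"k({T:.0f} K) = {rate:.2e} s^-1")                  # ~2e-5 s^-1
    print(f"half-life = {math.log(2) / rate / 3600:.1f} h")   # ~9 h
    ```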

  14. Thermophysical Model of S-complex NEAs: 1627 Ivar

    NASA Astrophysics Data System (ADS)

    Crowell, Jenna; Howell, Ellen S.; Magri, Christopher; Fernandez, Yanga R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J., Jr.

    2016-01-01

    We present an updated thermophysical model of 1627 Ivar, an Amor-class near-Earth asteroid (NEA) with a taxonomic type of Sqw [1]. Ivar's large size and close approach to Earth in 2013 (minimum distance 0.32 AU) provided an opportunity to observe the asteroid over many different viewing angles for an extended period of time, which we have utilized to generate a shape and thermophysical model of Ivar, allowing us to discuss the implications that these results have for the regolith of this asteroid. Using the software SHAPE [2,3], we updated the nonconvex shape model of Ivar, which was constructed by Kaasalainen et al. [4] using photometry. We incorporated 2013 radar data and CCD lightcurves, obtained using the Arecibo Observatory's 2380 MHz radar and the 0.35 m telescope at the Palmer Divide Station respectively, to create a shape model with higher surface detail. We found Ivar to be elongated, with maximum extents along the principal axes of 12 x 5 x 6 km and a rotation rate of 4.795162 ± 5.4 × 10^-6 hours [5]. In addition to these radar data and lightcurves, we also observed Ivar in the near-IR using the SpeX instrument at the NASA IRTF. These data cover a wide range of Ivar's rotational longitudes and viewing geometries. We have used SHERMAN [6,7] with input parameters such as the asteroid's IR emissivity, optical scattering law, and thermal inertia, in order to complete thermal computations based on our shape model and known spin state. Using this procedure, we find which reflective, thermal, and surface properties best reproduce the observed spectra. This allows us to characterize properties of the asteroid's regolith and study the heterogeneity of the surface. We will compare these results with those of other S-complex asteroids to better understand this asteroid type and the uniqueness of 1627 Ivar. [1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Magri, C. et al. 2011, Icarus 214, 210-227 [3] Crowell, J. et al. 2014, AAS/DPS 46 [4] Kaasalainen, M. et al. 2004, Icarus 167, 178

  15. Community Climate System Model (CCSM) Experiments and Output Data

    DOE Data Explorer

    The National Center for Atmospheric Research (NCAR) created the first version of the Community Climate Model (CCM) in 1983 as a global atmosphere model. It was improved in 1994 when NCAR, with support from the National Science Foundation (NSF), developed and incorporated a Climate System Model (CSM) that included atmosphere, land surface, ocean, and sea ice. As the capabilities of the model grew, so did interest in its applications and changes in how it would be managed. A workshop in 1996 set the future management structure and marked the beginning of the second phase of the model, a phase that included full participation of the scientific community and also saw additional financial support, including support from the Department of Energy. In recognition of these changes, the model was renamed the Community Climate System Model (CCSM). It began to function as a model with the interactions of land, sea, and air fully coupled, providing computer simulations of Earth's past climate, its present climate, and its possible future climate. The CCSM website at http://www2.cesm.ucar.edu/ describes some of the research that has been done since then: A 300-year run has been performed using the CSM, and results from this experiment have appeared in a special issue of the Journal of Climate, 11, June 1998. A 125-year experiment has been carried out in which carbon dioxide was prescribed to increase at 1% per year from its present concentration to approximately three times its present concentration. More recently, the Climate of the 20th Century experiment was run, with carbon dioxide and other greenhouse gases and sulfate aerosols prescribed to evolve according to our best knowledge from 1870 to the present. Three scenarios for the 21st century were developed: a "business as usual" experiment, in which greenhouse gases are assumed to increase with no economic constraints; an experiment using the Intergovernmental Panel on Climate Change (IPCC) Scenario A1; and a "policy

  16. Second-order model selection in mixture experiments

    SciTech Connect

    Redgate, P.E.; Piepel, G.F.; Hrma, P.R.

    1992-07-01

    Full second-order models for q-component mixture experiments contain q(q+1)/2 terms, which increases rapidly as q increases. Fitting full second-order models for larger q may involve problems with ill-conditioning and overfitting. These problems can be remedied by transforming the mixture components and/or fitting reduced forms of the full second-order mixture model. Various component transformation and model reduction approaches are discussed. Data from a 10-component nuclear waste glass study are used to illustrate ill-conditioning and overfitting problems that can be encountered when fitting a full second-order mixture model. Component transformation, model term selection, and model evaluation/validation techniques are discussed and illustrated for the waste glass example.
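
    The quoted term count is the q linear Scheffé terms plus the q(q-1)/2 cross-product terms (the mixture constraint removes the pure quadratic terms), giving q(q+1)/2 in total; a quick enumeration for the 10-component case:

    ```python
    from itertools import combinations

    def second_order_terms(components):
        linear = list(components)                                   # q linear terms
        cross = [f"{a}*{b}" for a, b in combinations(components, 2)]  # q(q-1)/2 cross terms
        return linear + cross

    q = 10
    terms = second_order_terms([f"x{i}" for i in range(1, q + 1)])
    print(len(terms), q * (q + 1) // 2)   # 55 55
    ```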

  17. Optimization of Time-Course Experiments for Kinetic Model Discrimination

    PubMed Central

    Lages, Nuno F.; Cordeiro, Carlos; Sousa Silva, Marta; Ponces Freire, Ana; Ferreira, António E. N.

    2012-01-01

    Systems biology relies heavily on the construction of quantitative models of biochemical networks. These models must have predictive power to help unveiling the underlying molecular mechanisms of cellular physiology, but it is also paramount that they are consistent with the data resulting from key experiments. Often, it is possible to find several models that describe the data equally well, but provide significantly different quantitative predictions regarding particular variables of the network. In those cases, one is faced with a problem of model discrimination, the procedure of rejecting inappropriate models from a set of candidates in order to elect one as the best model to use for prediction. In this work, a method is proposed to optimize the design of enzyme kinetic assays with the goal of selecting a model among a set of candidates. We focus on models with systems of ordinary differential equations as the underlying mathematical description. The method provides a design where an extension of the Kullback-Leibler distance, computed over the time courses predicted by the models, is maximized. Given the asymmetric nature of this measure, a generalized differential evolution algorithm for multi-objective optimization problems was used. The kinetics of yeast glyoxalase I (EC 4.4.1.5) was chosen as a difficult test case to evaluate the method. Although a single-substrate kinetic model is usually considered, a two-substrate mechanism has also been proposed for this enzyme. We designed an experiment capable of discriminating between the two models by optimizing the initial substrate concentrations of glyoxalase I, in the presence of the subsequent pathway enzyme, glyoxalase II (EC 3.1.2.6). This discriminatory experiment was conducted in the laboratory and the results indicate a two-substrate mechanism for the kinetics of yeast glyoxalase I. PMID:22403703
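
    The design criterion can be sketched compactly. Under the simplifying assumption of Gaussian measurement noise with a fixed standard deviation, the Kullback-Leibler distance between the time courses predicted by two models reduces to a scaled squared difference, and the design variable (here, a single initial substrate concentration) is chosen to maximize it. The two toy rate laws below merely stand in for the paper's competing glyoxalase I mechanisms, which are not reproduced:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def model1(t, s, vmax=1.0, km=0.5):            # candidate mechanism 1 (illustrative)
        return [-vmax * s[0] / (km + s[0])]

    def model2(t, s, vmax=1.0, km=0.5):            # candidate mechanism 2 (illustrative)
        return [-vmax * s[0] ** 2 / (km + s[0]) ** 2]

    def divergence(s0, sigma=0.05, t_eval=np.linspace(0.0, 10.0, 50)):
        y1 = solve_ivp(model1, (0.0, 10.0), [s0], t_eval=t_eval).y[0]
        y2 = solve_ivp(model2, (0.0, 10.0), [s0], t_eval=t_eval).y[0]
        # Symmetrized Gaussian KL over the time course, up to a constant factor
        return float(((y1 - y2) ** 2).sum() / sigma ** 2)

    candidates = [0.1, 0.5, 1.0, 2.0, 5.0]          # feasible initial concentrations
    best = max(candidates, key=divergence)
    print(f"most discriminating initial substrate concentration: {best}")
    ```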

  18. A New Approach to Modelling Student Retention through an Application of Complexity Thinking

    ERIC Educational Resources Information Center

    Forsman, Jonas; Linder, Cedric; Moll, Rachel; Fraser, Duncan; Andersson, Staffan

    2014-01-01

    Complexity thinking is relatively new to education research and has rarely been used to examine complex issues in physics and engineering education. Issues in higher education such as student retention have been approached from a multiplicity of perspectives and are recognized as complex. The complex system of student retention modelling in higher…

  19. Integrated Experiment and Modeling of Insensitive High Explosives

    NASA Astrophysics Data System (ADS)

    Stewart, D. Scott; Lambert, David E.; Yoo, Sunhee; Lieber, M.; Holman, Steven

    2009-06-01

    New design paradigms for insensitive high explosives are being sought for use in munitions applications that require enhanced safety, reliability, and performance. We describe recent work of our group that uses an integrated approach to develop predictive models, guided by experiments. Insensitive explosives can have relatively longer detonation reaction zones and slower reaction rates than their sensitive counterparts. We employ reactive flow models that are constrained by detonation shock dynamics to pose candidate predictive models. We discuss the influence of the pressure-dependent reaction rate exponent and reaction order on the length of the supporting reaction zone, the detonation velocity-curvature relation, the computed critical energy required for initiation, and the relation between the diameter-effect curve and the corresponding normal detonation velocity-curvature relation. We discuss representative characterization experiments carried out at Eglin AFB and the constraints imposed on models by a standardized experimental characterization sequence.

  20. A Novel Saccharomyces cerevisiae FG Nucleoporin Mutant Collection for Use in Nuclear Pore Complex Functional Experiments.

    PubMed

    Adams, Rebecca L; Terry, Laura J; Wente, Susan R

    2015-11-03

    FG nucleoporins (Nups) are the class of proteins that both generate the permeability barrier and mediate selective transport through the nuclear pore complex (NPC). The FG Nup family has 11 members in Saccharomyces cerevisiae, and the study of mutants lacking different FG domains has been instrumental in testing transport models. To continue analyzing the distinct functional roles of FG Nups in vivo, additional robust genetic tools are required. Here, we describe a novel collection of S. cerevisiae mutant strains in which the FG domains of different groups of Nups are absent (Δ) in the greatest number documented to date. Using this plasmid-based ΔFG strategy, we find that a GLFG domain-only pore is sufficient for viability. The resulting extensive plasmid and strain resources are available to the scientific community for future in-depth in vivo studies of NPC transport.