Science.gov

Sample records for complexes model experiments

  1. Dynamics of vortices in complex wakes: Modeling, analysis, and experiments

    NASA Astrophysics Data System (ADS)

    Basu, Saikat

    The thesis develops singly-periodic mathematical models for complex laminar wakes that form behind vortex-shedding bluff bodies. These wake structures exhibit a variety of patterns as the bodies oscillate or are in close proximity to one another. The most well-known formation comprises two counter-rotating vortices in each shedding cycle and is popularly known as the von Karman vortex street. Of the more complex configurations, this thesis investigates, as a specific example, one of the most commonly occurring wake arrangements, which consists of two pairs of vortices in each shedding period. The paired vortices are, in general, counter-rotating and belong to a more general definition of the 2P mode, which involves periodic release of four vortices into the flow. The 2P arrangement can primarily be sub-classed into two types: one with a symmetric orientation of the two vortex pairs about the streamwise direction in a periodic domain, and another in which the two vortex pairs per period are placed in a staggered geometry about the wake centerline. The thesis explores the governing dynamics of such wakes and characterizes the corresponding relative vortex motion. For both the symmetric and the staggered four-vortex periodic arrangements, the thesis develops two-dimensional potential flow models (consisting of an integrable Hamiltonian system of point vortices) that consider spatially periodic arrays of four vortices with strengths ±Γ1 and ±Γ2. Vortex formations observed in the experiments inspire the assumed spatial symmetry. The models demonstrate a number of dynamic modes that are classified using a bifurcation analysis of the phase space topology, consisting of level curves of the Hamiltonian. Despite the vortex strengths in each pair being unequal in magnitude, some initial conditions lead to relative equilibria, in which the vortex configuration moves with invariant size and shape. The scaled comparisons of the
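
    A standard building block for such singly-periodic point-vortex models is the complex potential of an infinite streamwise row of identical point vortices of strength Γ and spacing L (a textbook result quoted for orientation; the thesis superposes four such rows with strengths ±Γ1 and ±Γ2, and its exact formulation may differ):

      w(z) = \frac{\Gamma}{2\pi i}\,\ln\sin\!\left(\frac{\pi (z - z_0)}{L}\right),
      \qquad
      u - i v = \frac{dw}{dz} = \frac{\Gamma}{2 i L}\,\cot\!\left(\frac{\pi (z - z_0)}{L}\right).

    Summing the induced velocities of such rows at each vortex location (excluding self-induction) is the usual route to the kind of integrable Hamiltonian point-vortex system whose level curves are analyzed in the bifurcation study.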

  2. Historical and idealized climate model experiments: an intercomparison of Earth system models of intermediate complexity

    NASA Astrophysics Data System (ADS)

    Eby, M.; Weaver, A. J.; Alexander, K.; Zickfeld, K.; Abe-Ouchi, A.; Cimatoribus, A. A.; Crespin, E.; Drijfhout, S. S.; Edwards, N. R.; Eliseev, A. V.; Feulner, G.; Fichefet, T.; Forest, C. E.; Goosse, H.; Holden, P. B.; Joos, F.; Kawamiya, M.; Kicklighter, D.; Kienert, H.; Matsumoto, K.; Mokhov, I. I.; Monier, E.; Olsen, S. M.; Pedersen, J. O. P.; Perrette, M.; Philippon-Berthier, G.; Ridgwell, A.; Schlosser, A.; Schneider von Deimling, T.; Shaffer, G.; Smith, R. S.; Spahni, R.; Sokolov, A. P.; Steinacher, M.; Tachiiri, K.; Tokos, K.; Yoshimori, M.; Zeng, N.; Zhao, F.

    2013-05-01

    Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate-carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate-carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the

  3. Modeling Complex Equilibria in ITC Experiments: Thermodynamic Parameters Estimation for a Three Binding Site Model

    PubMed Central

    Le, Vu H.; Buscaglia, Robert; Chaires, Jonathan B.; Lewis, Edwin A.

    2013-01-01

    Isothermal Titration Calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g., Keq (or ΔG), ΔH, ΔS, and n) for a ligand-binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example, one binding site, two binding sites, sequential sites, and n independent binding sites. More complex models, for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the non-linear regression analysis of a multiple-binding-site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g., up to nine parameters in the three-binding-site model) yields thermodynamic parameters with acceptable accuracy. PMID:23262283
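
    As a minimal, hedged sketch of the same nonlinear-regression idea, the simplest one-site case can be fit in a few lines of Python with SciPy; the cell volume, concentrations, and starting values below are illustrative assumptions, and the paper's MATLAB program implements the full three-site mass-balance equations rather than this reduced model.

      # One-site ITC fit sketch (illustrative only; not the authors' MATLAB program).
      import numpy as np
      from scipy.optimize import curve_fit

      V0 = 1.4e-3   # calorimeter cell volume, L (assumed)
      Mt = 20e-6    # total macromolecule concentration in the cell, M (assumed)

      def bound(Xt, K, n):
          """Bound-ligand concentration for a single site (exact quadratic root)."""
          b = n * Mt + Xt + 1.0 / K
          return 0.5 * (b - np.sqrt(b * b - 4.0 * n * Mt * Xt))

      def injection_heats(Xt, K, dH, n):
          """Heat per injection (cal), ignoring displaced-volume corrections."""
          Q = V0 * dH * bound(Xt, K, n)          # cumulative heat after each injection
          return np.diff(np.concatenate(([0.0], Q)))

      # Xt: total ligand after each injection; q_obs: measured heats (synthetic here).
      Xt = np.linspace(2e-6, 60e-6, 30)
      q_obs = injection_heats(Xt, K=1e6, dH=-12000.0, n=1.0)

      (K_fit, dH_fit, n_fit), _ = curve_fit(injection_heats, Xt, q_obs,
                                            p0=[1e5, -8000.0, 1.2])

    Extending this to three overlapping equilibria replaces the closed-form root with a numerical solution of the coupled mass-balance equations, which is where a custom program such as the authors' becomes necessary.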

  4. Model complexity in carbon sequestration: A design of experiment and response surface uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, S.

    2014-12-01

    Geologic carbon sequestration (GCS) is proposed for the Nugget Sandstone in Moxa Arch, a regional saline aquifer with a large storage potential. For a proposed storage site, this study builds a suite of increasingly complex conceptual "geologic" model families, using subsets of the site characterization data: a homogeneous model family, a stationary petrophysical model family, a stationary facies model family with sub-facies petrophysical variability, and a non-stationary facies model family (with sub-facies variability) conditioned to soft data. These families, representing alternative conceptual site models built with increasing amounts of data, were simulated with the same CO2 injection test (50 years at 1/10 Mt per year), followed by 2950 years of monitoring. Using a Design of Experiment approach, an efficient sensitivity analysis (SA) is conducted for all families, systematically varying uncertain input parameters. Results are compared among the families to identify parameters that have a first-order impact on the predicted CO2 storage ratio (SR) at both the end of injection and the end of monitoring. At this site, geologic modeling factors do not significantly influence the short-term prediction of the storage ratio; they become important over the monitoring period, but only for those families in which such factors are represented. Based on the SA, a response surface analysis is conducted to generate prediction envelopes of the storage ratio, which are compared among the families at both times. Results suggest a large uncertainty in the predicted storage ratio given the uncertainties in model parameters and modeling choices: SR varies from 5-60% (end of injection) to 18-100% (end of monitoring), although its variation among the model families is relatively minor. Moreover, long-term leakage risk is considered small at the proposed site. In the lowest-SR scenarios, all families predict gravity-stable supercritical CO2 migrating toward the bottom of the aquifer. In the highest
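
    A hedged sketch of the generic Design-of-Experiment plus response-surface workflow described above, in Python; the factor names are hypothetical and a toy function stands in for the reservoir simulator.

      # Two-level factorial screening + first-order response surface (illustrative).
      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      factors = ["perm_mean", "porosity", "anisotropy"]   # hypothetical inputs
      design = np.array(list(itertools.product([-1.0, 1.0], repeat=len(factors))))

      def run_simulator(x):
          """Placeholder for one simulation's predicted CO2 storage ratio (%)."""
          a, b, c = x
          return 40 + 8*a - 5*b + 3*a*b + 2*c + rng.normal(0, 0.5)

      y = np.array([run_simulator(x) for x in design])

      # Main effects: mean response at the high level minus mean at the low level
      effects = {f: y[design[:, j] > 0].mean() - y[design[:, j] < 0].mean()
                 for j, f in enumerate(factors)}

      # Response surface (intercept, two main effects, interaction) by least squares;
      # curvature terms would require adding center/axial (central composite) points.
      a, b = design[:, 0], design[:, 1]
      X = np.column_stack([np.ones_like(a), a, b, a * b])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    The fitted surface can then be sampled over the coded factor ranges to produce prediction envelopes of the storage ratio, in the spirit of the comparison across model families described above.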

  5. Photochemistry of iron(III)-carboxylato complexes in aqueous atmospheric particles - Laboratory experiments and modeling studies

    NASA Astrophysics Data System (ADS)

    Weller, C.; Tilgner, A.; Herrmann, H.

    2010-12-01

    Iron is always present in the atmosphere, in concentrations from ~10^-9 M (clouds, rain) up to ~10^-3 M (fog, particles). Sources are mainly mineral dust emissions. Iron complexes are very good absorbers in the UV-VIS actinic region and are therefore photochemically reactive. Iron complex photolysis leads to radical production and can initiate radical chain reactions, which is related to the oxidizing capacity of the atmosphere. These radical chain reactions are involved in the decomposition and transformation of a variety of chemical compounds in cloud droplets and deliquescent particles. Additionally, the photochemical reaction itself can be a degradation pathway for organic compounds with the ability to bind iron. Iron complexes of atmospherically relevant coordination compounds such as oxalate, malonate, succinate, glutarate, tartronate, gluconate, pyruvate and glyoxalate have been investigated in laboratory experiments. Iron speciation depends on the iron-ligand ratio and the pH. The most suitable experimental conditions were calculated with a speciation program (Visual Minteq). The solutions were prepared accordingly, transferred to a 1 cm quartz cuvette and flash-photolyzed with an excimer laser at 308 or 351 nm. Photochemically produced Fe2+ has been measured by spectrometry at 510 nm as Fe(phenanthroline)3^2+. Fe2+ overall effective quantum yields have been calculated from the concentration of photochemically produced Fe2+ and the measured energy of the excimer laser pulse. The laser pulse energy was measured with a pyroelectric sensor. For some iron-carboxylate systems, experimental parameters such as the oxygen content of the solution, the initial iron concentration and the incident laser energy were systematically varied to observe their effect on the overall quantum yield. The dependence of some quantum yields on these parameters allows, in some cases, an interpretation of the underlying photochemical reaction mechanism. Quantum yields of malonate
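
    The overall effective quantum yield described here is, in essence, moles of Fe2+ formed per mole of photons absorbed per flash; a back-of-the-envelope Python sketch (every numeric value below is an illustrative assumption, not a result of the study):

      # Effective quantum yield of Fe(II) photoproduction from one laser flash.
      h, c, N_A = 6.626e-34, 2.998e8, 6.022e23   # J s, m/s, 1/mol

      wavelength = 308e-9     # excimer laser wavelength, m
      pulse_energy = 0.050    # measured pulse energy, J (assumed)
      absorbance = 0.3        # sample absorbance at 308 nm (assumed)
      V = 3.0e-3              # irradiated solution volume, L (assumed)
      c_Fe2 = 2.0e-6          # photoproduced Fe2+, mol/L, from the 510 nm assay (assumed)

      photons_in = pulse_energy / (h * c / wavelength) / N_A     # mol photons delivered
      photons_abs = photons_in * (1.0 - 10.0 ** (-absorbance))   # mol photons absorbed
      phi = (c_Fe2 * V) / photons_abs
      print(f"effective quantum yield ~ {phi:.3f}")

    Varying the oxygen content, initial iron concentration, or pulse energy, as in the experiments, would then show up as systematic changes in phi.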

  6. A business process modeling experience in a complex information system re-engineering.

    PubMed

    Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis

    2013-01-01

    This article aims to share a business process modeling experience in a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results and the feedback experience.

  7. Complexity of Choice: Teachers' and Students' Experiences Implementing a Choice-Based Comprehensive School Health Model

    ERIC Educational Resources Information Center

    Sulz, Lauren; Gibbons, Sandra; Naylor, Patti-Jean; Wharf Higgins, Joan

    2016-01-01

    Background: Comprehensive School Health models offer a promising strategy to elicit changes in student health behaviours. To maximise the effect of such models, the active involvement of teachers and students in the change process is recommended. Objective: The goal of this project was to gain insight into the experiences and motivations of…

  9. Circular Dichroism of Carotenoids in Bacterial Light-Harvesting Complexes: Experiments and Modeling

    PubMed Central

    Georgakopoulou, S.; van Grondelle, R.; van der Zwan, G.

    2004-01-01

    In this work we investigate the origin and characteristics of the circular dichroism (CD) spectrum of rhodopin glucoside and lycopene in the light-harvesting 2 complex of Rhodopseudomonas acidophila and Rhodospirillum molischianum, respectively. We successfully model their absorption and CD spectra based on the high-resolution structures. We assume that these spectra originate from seven interacting transition dipole moments: the first corresponds to the 0-0 transition of the carotenoid, whereas the remaining six represent higher vibronic components of the S2 state. From the absorption spectra we get an estimate of the Franck-Condon factors of these transitions. Furthermore, we investigate the broadening mechanisms that lead to the final shape of the spectra and get an insight into the interaction energy between carotenoids. Finally, we examine the consequences of rotations of the carotenoid transition dipole moment and of deformations in the light-harvesting 2 complex rings. Comparison of the modeled carotenoid spectra with modeled spectra of the bacteriochlorophyll QY region leads to a refinement of the modeling procedure and an improvement of all calculated results. We therefore propose that the combined carotenoid and bacteriochlorophyll CD can be used as an accurate reflection of the overall structure of the light-harvesting complexes. PMID:15326029

  10. Toward Technological Application of Non-Newtonian Fluids & Complex Materials/Modeling, Simulation, & Design of Experiments

    DTIC Science & Technology

    2007-11-02

    Authors: M. Gregory Forest & Stephen E. Bechtel. "Thermomechanical Equations Governing a Material with Prescribed Temperature-Dependent Density, with Applications to Nonisothermal Plane Poiseuille Flow"... made significant progress in each of these general areas. We produced high-resolution models and codes that simulate molten fiber manufacturing.

  11. A computational methodology for learning low-complexity surrogate models of process from experiments or simulations. (Paper 679a)

    SciTech Connect

    Cozad, A.; Sahinidis, N.; Miller, D.

    2011-01-01

    Costly and/or insufficiently robust simulations or experiments can often pose difficulties when their use extends well beyond a single evaluation. This is the case with the numerous evaluations required for uncertainty quantification, when an algebraic model is needed for optimization, and in numerous other areas. To overcome these difficulties, we generate an accurate set of algebraic surrogate models of disaggregated process blocks of the experiment or simulation. We developed a method that uses derivative-based and derivative-free optimization alongside machine learning and statistical techniques to generate the set of surrogate models using data sampled from experiments or detailed simulations. Our method begins by building a low-complexity surrogate model for each block from an initial sample set. The model is built using a best-subset technique that leverages a mixed-integer linear problem formulation to allow for very large initial basis sets. The models are then tested, exploited, and improved through the use of derivative-free solvers to adaptively sample new simulation or experimental points. The sets of surrogate models from each disaggregated process block are then combined with heat and mass balances around each disaggregated block to generate a full algebraic model of the process. The full model can be used for cheap and accurate evaluations of the original simulation or experiment, or combined with design specifications and an objective for nonlinear optimization.
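
    A much-simplified sketch of the surrogate-building idea in Python: exhaustive best-subset regression over a small basis, scored by AIC. The paper's method instead uses a mixed-integer best-subset formulation and derivative-free adaptive sampling, so the basis, scoring, and data below are assumptions for illustration only.

      # Pick a low-complexity algebraic surrogate by best-subset regression.
      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(0.0, 2.0, size=200)
      y = 1.5 + 2.0 * x**2 + rng.normal(0.0, 0.05, x.size)   # "simulation" output

      basis = {"1": np.ones_like(x), "x": x, "x^2": x**2,
               "exp(x)": np.exp(x), "log(1+x)": np.log1p(x)}

      def aic(rss, k, n):
          return n * np.log(rss / n) + 2 * k

      best = None
      for size in range(1, 4):                          # favor few terms (low complexity)
          for names in itertools.combinations(basis, size):
              X = np.column_stack([basis[name] for name in names])
              coef, *_ = np.linalg.lstsq(X, y, rcond=None)
              rss = float(np.sum((y - X @ coef) ** 2))
              score = aic(rss, size, y.size)
              if best is None or score < best[0]:
                  best = (score, names, coef)

      print("selected terms:", best[1], "coefficients:", np.round(best[2], 3))

    In the full methodology, the selected surrogate would then be stress-tested with a derivative-free solver to find poorly predicted regions, new points would be sampled there, and the surrogates of all process blocks would be tied together through heat and mass balances.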

  12. Sorption of phosphate onto calcite; results from batch experiments and surface complexation modeling

    NASA Astrophysics Data System (ADS)

    Ugilt Sø, Helle; Postma, Dieke; Jakobsen, Rasmus; Larsen, Flemming

    2011-05-01

    The adsorption of phosphate onto calcite was studied in a series of batch experiments. To avoid the precipitation of phosphate-containing minerals the experiments were conducted using a short reaction time (3 h) and low concentrations of phosphate (⩽50 μM). Sorption of phosphate on calcite was studied in 11 different calcite-equilibrated solutions that varied in pH, P, ionic strength and activity of Ca^2+, CO3^2- and HCO3^-. Our results show strong sorption of phosphate onto calcite. The kinetics of phosphate sorption onto calcite are fast; adsorption is complete within 2-3 h while desorption is complete in less than 0.5 h. The reversibility of the sorption process indicates that phosphate is not incorporated into the calcite crystal lattice under our experimental conditions. Precipitation of phosphate-containing phases does not seem to take place in systems with ⩽50 μM total phosphate, in spite of a high degree of super-saturation with respect to hydroxyapatite (SI_HAP ⩽ 7.83). The amount of phosphate adsorbed varied with the solution composition, in particular, adsorption increases as the CO3^2- activity decreases (at constant pH) and as pH increases (at constant CO3^2- activity). The primary effect of ionic strength on phosphate sorption onto calcite is its influence on the activity of the different aqueous phosphate species. The experimental results were modeled satisfactorily using the constant capacitance model with >CaPO4Ca^0 and either >CaHPO4Ca^+ or >CaHPO4^- as the adsorbed surface species. Generally the model captures the variation in phosphate adsorption onto calcite as a function of solution composition, though it was necessary to include two types of sorption sites (strong and weak) in the model to reproduce the convex shape of the sorption isotherms.
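
    For reference, the two relations that define the constant capacitance model used in the fitting, stated in their standard form (not the paper's fitted constants):

      \sigma = C\,\psi,
      \qquad
      K^{\mathrm{app}} = K^{\mathrm{int}}\,\exp\!\left(-\frac{\Delta z\,F\,\psi}{R\,T}\right),

    where σ is the surface charge density, C the (constant) capacitance, ψ the surface potential, Δz the change in surface charge for the adsorption reaction, and F, R and T the Faraday constant, gas constant and temperature. The intrinsic constants of the adsorbed surface species listed above are the quantities adjusted in such a fit.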

  13. Cadmium sorption onto Natural Red Earth - An assessment using batch experiments and surface complexation modeling

    NASA Astrophysics Data System (ADS)

    Mahatantila, K.; Minoru, O.; Seike, Y.; Vithanage, M. S.

    2010-12-01

    Natural red earth (NRE), an iron-coated sand found in the north-western part of Sri Lanka, was used to examine its retention behavior for cadmium, a heavy metal postulated as a factor in chronic kidney disease in Sri Lanka. Adsorption was studied in batch experiments as a function of pH, ionic strength and initial cadmium loading. Proton binding sites on NRE were characterized by potentiometric titration, yielding a pH_zpc around 6.6. Cadmium adsorption increased from 6% to 99% as pH increased from 4 to 8.5, with maximum adsorption observed at pH greater than 7.5. The ionic strength dependency of cadmium adsorption over a 100-fold variation in NaNO3 evidences the dominance of an inner-sphere bonding mechanism for a 10-fold variation in initial cadmium loading (4.44 and 44.4 µmol/L). Adsorption edges were quantified with a 2pK generalized diffuse double layer model considering two site types, >FeOH and >AlOH, for Cd^2+ binding. From the modeling, a monodentate chemical bonding mechanism was proposed for cadmium binding onto NRE, and this finding was further verified with FTIR spectroscopy. The intrinsic constants determined were log K_FeOCd = 8.543 and log K_AlOCd = 13.917. Isotherm data imply heterogeneity of the NRE surface, with sorption maxima of 9.418 × 10^-6 mol/g and 1.3 × 10^-4 mol/g for the Langmuir and Freundlich isotherm models, respectively. The study suggests the potential of NRE as a material for decontaminating environmental water polluted with cadmium.
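
    The Langmuir and Freundlich fits quoted at the end follow the usual isotherm forms; a brief Python sketch with synthetic data (the equations are standard, every numeric value below is an assumption):

      # Fit Langmuir and Freundlich isotherms to batch sorption data (synthetic).
      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(C, q_max, K):
          return q_max * K * C / (1.0 + K * C)

      def freundlich(C, K_F, n):
          return K_F * C ** (1.0 / n)

      C_eq = np.array([1, 2, 5, 10, 20, 40], dtype=float) * 1e-6   # equilibrium Cd, mol/L
      q = np.array([1.5, 2.6, 4.6, 6.2, 7.6, 8.6]) * 1e-6          # sorbed Cd, mol/g

      (q_max, K_L), _ = curve_fit(langmuir, C_eq, q, p0=[1e-5, 1e5])
      (K_F, n_F), _ = curve_fit(freundlich, C_eq, q, p0=[1e-3, 2.0])
      print(f"Langmuir q_max = {q_max:.2e} mol/g, K = {K_L:.2e} L/mol")
      print(f"Freundlich K_F = {K_F:.2e}, 1/n = {1.0 / n_F:.2f}")

    The sorption maxima quoted above play the role of q_max here; the diffuse double layer model described earlier is a separate, mechanistic treatment of the adsorption edges.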

  14. Modeling complex equilibria in isothermal titration calorimetry experiments: thermodynamic parameters estimation for a three-binding-site model.

    PubMed

    Le, Vu H; Buscaglia, Robert; Chaires, Jonathan B; Lewis, Edwin A

    2013-03-15

    Isothermal titration calorimetry (ITC) is a powerful technique that can be used to estimate a complete set of thermodynamic parameters (e.g., K(eq) (or ΔG), ΔH, ΔS, and n) for a ligand-binding interaction described by a thermodynamic model. Thermodynamic models are constructed by combining equilibrium constant, mass balance, and charge balance equations for the system under study. Commercial ITC instruments are supplied with software that includes a number of simple interaction models, for example, one binding site, two binding sites, sequential sites, and n-independent binding sites. More complex models, for example, three or more binding sites, one site with multiple binding mechanisms, linked equilibria, or equilibria involving macromolecular conformational selection through ligand binding, need to be developed on a case-by-case basis by the ITC user. In this paper we provide an algorithm (and a link to our MATLAB program) for the nonlinear regression analysis of a multiple-binding-site model with up to four overlapping binding equilibria. Error analysis demonstrates that fitting ITC data for multiple parameters (e.g., up to nine parameters in the three-binding-site model) yields thermodynamic parameters with acceptable accuracy. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. Reduction of U(VI) Complexes by Anthraquinone Disulfonate: Experiment and Molecular Modeling

    SciTech Connect

    Ainsworth, C.C.; Wang, Z.; Rosso, K.M.; Wagnon, K.; Fredrickson, J.K.

    2004-03-17

    Past studies demonstrate that complexation will limit abiotic and biotic U(VI) reduction rates and the overall extent of reduction. However, the underlying basis for this behavior is not understood and presently unpredictable across species and ligand structure. The central tenets of these investigations are: (1) reduction of U(VI) follows the electron-transfer (ET) mechanism developed by Marcus; (2) the ET rate is the rate-limiting step in U(VI) reduction and is the step that is most affected by complexation; and (3) Marcus theory can be used to unify the apparently disparate U(VI) reduction rate data and as a computational tool to construct a predictive relationship.
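
    The Marcus expression underlying the first tenet relates the electron-transfer rate to the driving force and the reorganization energy; in its common form (standard theory, quoted for orientation rather than taken from this report):

      k_{\mathrm{ET}} = A\,\exp\!\left[-\frac{\left(\Delta G^{0} + \lambda\right)^{2}}{4\,\lambda\,k_{\mathrm{B}}\,T}\right],

    where ΔG0 is the reaction free energy, λ the reorganization energy, and A a prefactor containing the electronic coupling. Complexation of U(VI) shifts ΔG0 and can change λ, which is how a Marcus-based framework would rationalize ligand-dependent reduction rates.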

  16. Prediction of homoprotein and heteroprotein complexes by protein docking and template-based modeling: A CASP-CAPRI experiment.

    PubMed

    Lensink, Marc F; Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen-You; Schneidman-Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez-Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan-Xiang; Ritchie, David W; Maigret, Bernard; Devignes, Marie-Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A G; Bates, Paul A; Ben-Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S; Weng, Zhiping; Vreven, Thom; Pierce, Brian G; Borrman, Tyler M; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P G L M; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S J; Visscher, Koen; Kastritis, Panagiotis L; Bonvin, Alexandre M J J; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X; Kim, Hyung-Rae; Roy, Amit; Han, Xusi; Esquivel-Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J; Fuller, Jonathan C; Wade, Rebecca C; Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero-Durana, Miguel; Jiménez-García, Brian; Moal, Iain H; Férnandez-Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S; Gray, Jeffrey J; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey; Wodak, Shoshana J

    2016-09-01

    We present the results for CAPRI Round 30, the first joint CASP-CAPRI experiment, which brought together experts from the protein structure prediction and protein-protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact-sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology-built subunit models and the smaller pair-wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323-348. © 2016 Wiley Periodicals, Inc.

  17. Prediction of homoprotein and heteroprotein complexes by protein docking and template‐based modeling: A CASP‐CAPRI experiment

    PubMed Central

    Velankar, Sameer; Kryshtafovych, Andriy; Huang, Shen‐You; Schneidman‐Duhovny, Dina; Sali, Andrej; Segura, Joan; Fernandez‐Fuentes, Narcis; Viswanath, Shruthi; Elber, Ron; Grudinin, Sergei; Popov, Petr; Neveu, Emilie; Lee, Hasup; Baek, Minkyung; Park, Sangwoo; Heo, Lim; Rie Lee, Gyu; Seok, Chaok; Qin, Sanbo; Zhou, Huan‐Xiang; Ritchie, David W.; Maigret, Bernard; Devignes, Marie‐Dominique; Ghoorah, Anisah; Torchala, Mieczyslaw; Chaleil, Raphaël A.G.; Bates, Paul A.; Ben‐Zeev, Efrat; Eisenstein, Miriam; Negi, Surendra S.; Weng, Zhiping; Vreven, Thom; Pierce, Brian G.; Borrman, Tyler M.; Yu, Jinchao; Ochsenbein, Françoise; Guerois, Raphaël; Vangone, Anna; Rodrigues, João P.G.L.M.; van Zundert, Gydo; Nellen, Mehdi; Xue, Li; Karaca, Ezgi; Melquiond, Adrien S.J.; Visscher, Koen; Kastritis, Panagiotis L.; Bonvin, Alexandre M.J.J.; Xu, Xianjin; Qiu, Liming; Yan, Chengfei; Li, Jilong; Ma, Zhiwei; Cheng, Jianlin; Zou, Xiaoqin; Shen, Yang; Peterson, Lenna X.; Kim, Hyung‐Rae; Roy, Amit; Han, Xusi; Esquivel‐Rodriguez, Juan; Kihara, Daisuke; Yu, Xiaofeng; Bruce, Neil J.; Fuller, Jonathan C.; Wade, Rebecca C.; Anishchenko, Ivan; Kundrotas, Petras J.; Vakser, Ilya A.; Imai, Kenichiro; Yamada, Kazunori; Oda, Toshiyuki; Nakamura, Tsukasa; Tomii, Kentaro; Pallara, Chiara; Romero‐Durana, Miguel; Jiménez‐García, Brian; Moal, Iain H.; Férnandez‐Recio, Juan; Joung, Jong Young; Kim, Jong Yun; Joo, Keehyoung; Lee, Jooyoung; Kozakov, Dima; Vajda, Sandor; Mottarella, Scott; Hall, David R.; Beglov, Dmitri; Mamonov, Artem; Xia, Bing; Bohnuud, Tanggis; Del Carpio, Carlos A.; Ichiishi, Eichiro; Marze, Nicholas; Kuroda, Daisuke; Roy Burman, Shourya S.; Gray, Jeffrey J.; Chermak, Edrisse; Cavallo, Luigi; Oliva, Romina; Tovchigrechko, Andrey

    2016-01-01

    ABSTRACT We present the results for CAPRI Round 30, the first joint CASP‐CAPRI experiment, which brought together experts from the protein structure prediction and protein–protein docking communities. The Round comprised 25 targets from amongst those submitted for the CASP11 prediction experiment of 2014. The targets included mostly homodimers, a few homotetramers, and two heterodimers, and comprised protein chains that could readily be modeled using templates from the Protein Data Bank. On average 24 CAPRI groups and 7 CASP groups submitted docking predictions for each target, and 12 CAPRI groups per target participated in the CAPRI scoring experiment. In total more than 9500 models were assessed against the 3D structures of the corresponding target complexes. Results show that the prediction of homodimer assemblies by homology modeling techniques and docking calculations is quite successful for targets featuring large enough subunit interfaces to represent stable associations. Targets with ambiguous or inaccurate oligomeric state assignments, often featuring crystal contact‐sized interfaces, represented a confounding factor. For those, a much poorer prediction performance was achieved, while nonetheless often providing helpful clues on the correct oligomeric state of the protein. The prediction performance was very poor for genuine tetrameric targets, where the inaccuracy of the homology‐built subunit models and the smaller pair‐wise interfaces severely limited the ability to derive the correct assembly mode. Our analysis also shows that docking procedures tend to perform better than standard homology modeling techniques and that highly accurate models of the protein components are not always required to identify their association modes with acceptable accuracy. Proteins 2016; 84(Suppl 1):323–348. © 2016 The Authors Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc. PMID:27122118

  18. The Mentoring Relationship as a Complex Adaptive System: Finding a Model for Our Experience

    ERIC Educational Resources Information Center

    Jones, Rachel; Brown, Dot

    2011-01-01

    Mentoring theory and practice has evolved significantly during the past 40 years. Early mentoring models were characterized by the top-down flow of information and benefits to the protege. This framework was reconceptualized as a reciprocal model when scholars realized mentoring was a mutually beneficial process. Recently, in response to rapidly…

  19. Complexities in barrier island response to sea level rise: Insights from numerical model experiments, North Carolina Outer Banks

    USGS Publications Warehouse

    Moore, Laura J.; List, Jeffrey H.; Williams, S. Jeffress; Stolper, David

    2010-01-01

    Using a morphological-behavior model to conduct sensitivity experiments, we investigate the sea level rise response of a complex coastal environment to changes in a variety of factors. Experiments reveal that substrate composition, followed in rank order by substrate slope, sea level rise rate, and sediment supply rate, are the most important factors in determining barrier island response to sea level rise. We find that geomorphic threshold crossing, defined as a change in state (e.g., from landward migrating to drowning) that is irreversible over decadal to millennial time scales, is most likely to occur in muddy coastal systems where the combination of substrate composition, depth-dependent limitations on shoreface response rates, and substrate erodibility may prevent sand from being liberated rapidly enough, or in sufficient quantity, to maintain a subaerial barrier. Analyses indicate that factors affecting sediment availability such as low substrate sand proportions and high sediment loss rates cause a barrier to migrate landward along a trajectory having a lower slope than average barrier island slope, thereby defining an “effective” barrier island slope. Other factors being equal, such barriers will tend to be smaller and associated with a more deeply incised shoreface, thereby requiring less migration per sea level rise increment to liberate sufficient sand to maintain subaerial exposure than larger, less incised barriers. As a result, the evolution of larger/less incised barriers is more likely to be limited by shoreface erosion rates or substrate erodibility making them more prone to disintegration related to increasing sea level rise rates than smaller/more incised barriers. Thus, the small/deeply incised North Carolina barriers are likely to persist in the near term (although their long-term fate is less certain because of the low substrate slopes that will soon be encountered). In aggregate, results point to the importance of system history (e

  20. Concept model of the formation process of humic acid-kaolin complexes deduced by trichloroethylene sorption experiments and various characterizations.

    PubMed

    Zhu, Xiaojing; He, Jiangtao; Su, Sihui; Zhang, Xiaoliang; Wang, Fei

    2016-05-01

    To explore the interactions between soil organic matter and minerals, humic acid (HA, as organic matter), kaolin (as a mineral component) and Ca(2+) (as metal ions) were used to prepare HA-kaolin and Ca-HA-kaolin complexes. These complexes were used in trichloroethylene (TCE) sorption experiments and various characterizations. Interactions between HA and kaolin during the formation of their complexes were confirmed by the obvious differences between the Qe (experimental sorbed TCE) and Qe_p (predicted sorbed TCE) values of all detected samples. The partition coefficient kd obtained for the different samples indicated that both the organic content (fom) and Ca(2+) could significantly impact the interactions. Based on experimental results and various characterizations, a concept model was developed. In the absence of Ca(2+), HA molecules first patched onto charged sites of kaolin surfaces, filling the pores. Subsequently, as the HA content increased and the first HA layer reached saturation, an outer layer of HA began to form, compressing the inner HA layer. As HA loading continued, the second layer reached saturation, such that an outer-third layer began to form, compressing the inner layers. In the presence of Ca(2+), which not only can promote kaolin self-aggregation but can also boost HA attachment to kaolin, HA molecules were first surrounded by kaolin. Subsequently, first and second layers formed (with inner layer compression) via the same process as described above in the absence of Ca(2+), except that the second layer continued to load rather than reach saturation, within the investigated conditions, because of enhanced HA aggregation caused by Ca(2+).

  1. [Complex vesicoureteral reflux. Our experience].

    PubMed

    Argüelles Salido, E; García Merino, F; Millán López, A; Fernández Hurtado, M; Borrero Fernández, J

    2005-01-01

    To analyze the proportion of complex reflux among all patients treated endoscopically for vesicoureteral reflux in our hospital, to determine the success of endoscopic treatment in complex reflux, and to assess the influence of reflux grade on it. We present our experience between 1992 and 2003 with three kinds of substances (polytetrafluoroethylene, polydimethylsiloxane and dextranomer-hyaluronic acid copolymer). We treated complex reflux in 74 patients with endoscopic injection. All patients were scheduled to have a voiding cystourethrogram 3 and 9 months after injection. A positive response was defined as grade 0 or I reflux. Reflux was resolved using the endoscopic procedure in 86.25% of cases after the first injection, 93.75% after the second and 96.25% after the third. The corresponding results for reflux grades II, III and IV were 88.9%, 83.3% and 100%. We conclude that subureteral injection of different substances (Teflon, Macroplastique or Deflux) is a useful treatment for most cases of vesicoureteral reflux. We propose it as the first step of treatment.

  2. Surface complexation modeling

    USDA-ARS?s Scientific Manuscript database

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  3. Numerical Modeling of Complex Targets for High-Energy-Density Experiments with Ion Beams and other Drivers

    NASA Astrophysics Data System (ADS)

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; Schenkel, Thomas; Barnard, John; Friedman, Alex; Eder, David; Fisher, Aaron; Masters, Nathan

    2016-03-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion beam induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.

  4. Numerical Modeling of Complex Targets for High-Energy-Density Experiments with Ion Beams and other Drivers

    DOE PAGES

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; ...

    2016-04-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion beam induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.

  5. Numerical Modeling of Complex Targets for High-Energy-Density Experiments with Ion Beams and other Drivers

    SciTech Connect

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; Schenkel, Thomas; Barnard, John; Friedman, Alex; Eder, David; Fisher, Aaron; Masters, Nathan

    2016-04-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion beam induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.

  6. Modelling multi-protein complexes using PELDOR distance measurements for rigid body minimisation experiments using XPLOR-NIH

    PubMed Central

    Hammond, Colin M.; Owen-Hughes, Tom; Norman, David G.

    2014-01-01

    Crystallographic and NMR approaches have provided a wealth of structural information about protein domains. However, these domains are often found as components of larger multi-domain polypeptides or complexes. Orienting domains within such contexts can provide powerful new insight into their function. The combination of site-specific spin labelling and Pulsed Electron Double Resonance (PELDOR) provides a means of obtaining structural measurements that can be used to generate models describing how such domains are oriented. Here we describe a pipeline for modelling the locations of thiol-reactive nitroxyl spin labels attached to engineered sites on the histone chaperone Vps75. We then use a combination of experimentally determined measurements and symmetry constraints, within the XPLOR-NIH platform, to model the orientation in which homodimers of Vps75 associate to form homotetramers. This provides a working example of how PELDOR measurements can be used to generate a structural model. PMID:25448300

  7. Can a simple lumped parameter model simulate complex transit time distributions? Benchmarking experiments in a virtual watershed.

    NASA Astrophysics Data System (ADS)

    Wilusz, D. C.; Maxwell, R. M.; Buda, A. R.; Ball, W. P.; Harman, C. J.

    2016-12-01

    The catchment transit-time distribution (TTD) is the time-varying, probabilistic distribution of water travel times through a watershed. The TTD is increasingly recognized as a useful descriptor of a catchment's flow and transport processes. However, TTDs are temporally complex and cannot be observed directly at watershed scale. Estimates of TTDs depend on available environmental tracers (such as stable water isotopes) and an assumed model whose parameters can be inverted from tracer data. All tracers have limitations though, such as (typically) short periods of observation or non-conservative behavior. As a result, models that faithfully simulate tracer observations may nonetheless yield TTD estimates with significant errors at certain times and water ages, conditioned on the tracer data available and the model structure. Recent advances have shown that time-varying catchment TTDs can be parsimoniously modeled by the lumped parameter rank StorAge Selection (rSAS) model, in which an rSAS function relates the distribution of water ages in outflows to the composition of age-ranked water in storage. Like other TTD models, rSAS is calibrated and evaluated against environmental tracer data, and the relative influence of tracer-dependent and model-dependent error on its TTD estimates is poorly understood. The purpose of this study is to benchmark the ability of different rSAS formulations to simulate TTDs in a complex, synthetic watershed where the lumped model can be calibrated and directly compared to a virtually "true" TTD. This experimental design allows for isolation of model-dependent error from tracer-dependent error. The integrated hydrologic model ParFlow with SLIM-FAST particle tracking code is used to simulate the watershed and its true TTD. To add field intelligence, the ParFlow model is populated with over forty years of hydrometric and physiographic data from the WE-38 subwatershed of the USDA's Mahantango Creek experimental catchment in PA, USA. The
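
    For orientation, the central relation of the rank StorAge Selection (rSAS) framework as given in the published rSAS literature (notation may differ slightly from this study's):

      p_Q(T, t) \;=\; \omega_Q\!\big(S_T(T, t),\, t\big)\,\frac{\partial S_T(T, t)}{\partial T},

    where p_Q(T,t) is the transit-time distribution of discharge at time t, S_T(T,t) is the age-ranked storage (the volume of water in storage younger than age T), and ω_Q is the rSAS function describing how outflow selects water of different ages from storage. The benchmarking question above is essentially which functional forms of ω_Q can reproduce the "true" particle-tracking TTD.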

  8. Modeling complexes of modeled proteins.

    PubMed

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å C(α) RMSD. Many template-based docking predictions fall into acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.

  9. Sporadic meteoroid complex: Modeling

    NASA Astrophysics Data System (ADS)

    Andreev, V.

    2014-07-01

    The distribution of the sporadic meteoroid flux density over the celestial sphere is the common form of representing the meteoroid distribution in the vicinity of the Earth's orbit. The flux density of sporadic meteoroids is given by Q(V,e,f) = Q_0 P_e(V) P(e,f), where V is the meteoroid velocity, (e,f) are the radiant coordinates, Q_0 is the meteoroid flux over the whole celestial sphere, P_e(V) is the conditional velocity distribution and P(e,f) is the radiant distribution over the celestial sphere. The sporadic meteoroid complex model is analytical and based on heliocentric velocity and radiant distributions. The multi-modal character of the heliocentric velocity and radiant distributions follows from the analysis of meteor observational data, which points to a complicated structure of the sporadic meteoroid complex; it is a consequence of the plurality of parent bodies and origin mechanisms of the meteoroids. For that reason, and with the goal of more accurately modelling velocities and radiant distributions, the meteoroid complex was divided into four groups. As the classifying parameters determining a meteoroid's membership in a group, we adopt the Tisserand invariant relative to Jupiter, T_J = 1/a + 2 A_J^{-3/2} √{a (1 - e^2)} cos i, and the meteoroid orbit inclination i. Two meteoroid groups are related to long-period and short-period comets, and one group is related to asteroids. The assignment of the last, fourth group is more problematic. We then construct models of radiant and velocity distributions for each group. The analytical model for the whole sporadic meteoroid complex is the sum of those for each group.
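
    A small Python helper evaluating the classifying parameter exactly as written above (the group boundaries are not stated in the abstract, so none are assumed here):

      import math

      A_J = 5.2044  # semi-major axis of Jupiter, au (assumed value)

      def tisserand_jupiter(a, e, i_deg):
          """Tisserand invariant relative to Jupiter in the form quoted above:
          T_J = 1/a + 2 * A_J**-1.5 * sqrt(a * (1 - e**2)) * cos(i),
          with a in au, e dimensionless, i in degrees."""
          i = math.radians(i_deg)
          return 1.0 / a + 2.0 * A_J ** -1.5 * math.sqrt(a * (1.0 - e * e)) * math.cos(i)

      # Illustrative orbit, not taken from the model
      print(round(tisserand_jupiter(a=3.5, e=0.6, i_deg=10.0), 4))

    Together with the inclination i, this value is what assigns a meteoroid to one of the four groups in the model.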

  10. From Metaphors to Models: Broadening the Lens of the Hunter Warrior Experiment with a Complex Adaptive System Tool

    DTIC Science & Technology

    1999-01-01

    ...complexity and complex adaptive systems, self-organizing criticality, cellular automata, and so on, because they all globally share this property... categories: Artificial Life, Evolution and Complexity, Classification, Heuristic Search and Computation, Neural Networks, Chaos and Fractals.

  11. Predictive Surface Complexation Modeling

    SciTech Connect

    Sverjensky, Dimitri A.

    2016-11-29

    Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.

  12. Debating complexity in modeling

    USGS Publications Warehouse

    Hunt, Randall J.; Zheng, Chunmiao

    1999-01-01

    As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.

  13. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as providing reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments, participants were charged with designing a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part gives brief descriptions of each model as provided by the individual modeling groups.

  15. Network dynamics: quantitative analysis of complex behavior in metabolism, organelles, and cells, from experiments to models and back.

    PubMed

    Kurz, Felix T; Kembro, Jackelyn M; Flesia, Ana G; Armoundas, Antonis A; Cortassa, Sonia; Aon, Miguel A; Lloyd, David

    2017-01-01

    Advancing from two core traits of biological systems, multilevel network organization and nonlinearity, we review a host of novel and readily available techniques to explore and analyze their complex dynamic behavior within a framework of experimental-computational synergy. In the context of concrete biological examples, analytical methods such as wavelet, power spectral, and metabolomics-fluxomics analyses are presented and discussed, and their strengths and limitations highlighted. We further show how time series of stationary and nonstationary biological variables and signals, such as membrane potential, high-throughput metabolomics, O2 and CO2 levels, and bird locomotion, at the molecular, (sub)cellular, tissue, and whole-organ and animal levels, can reveal important information on the properties of the underlying biological networks. Systems biology-inspired computational methods are starting to pave the way for addressing the integrated functional dynamics of metabolic, organelle and organ networks. As our capacity to unravel the control and regulatory properties of these networks and their dynamics under normal or pathological conditions broadens, so does our ability to address endogenous rhythms and clocks to improve health-span in human aging, and to manage complex metabolic disorders, neurodegeneration, and cancer. WIREs Syst Biol Med 2017, 9:e1352. doi: 10.1002/wsbm.1352 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.
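
    As a minimal illustration of one listed technique, a periodogram-style power spectrum of a noisy oscillatory time series takes only a few lines of Python (the variable names and numbers are placeholders, not data from the review):

      # Power spectrum of a synthetic oscillatory signal (e.g., a membrane-potential trace).
      import numpy as np

      rng = np.random.default_rng(0)
      fs = 100.0                                   # sampling rate, Hz (assumed)
      t = np.arange(0.0, 60.0, 1.0 / fs)           # 60 s record
      signal = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)

      freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
      power = np.abs(np.fft.rfft(signal - signal.mean())) ** 2 / t.size
      peak = freqs[np.argmax(power[1:]) + 1]       # dominant frequency (~0.5 Hz here)
      print(f"dominant frequency ~ {peak:.2f} Hz")

    Wavelet analysis plays the analogous role for the nonstationary signals mentioned above, where the spectral content itself drifts over time.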

  16. Analysis of designed experiments with complex aliasing

    SciTech Connect

    Hamada, M.; Wu, C.F.J. )

    1992-07-01

    Traditionally, Plackett-Burman (PB) designs have been used in screening experiments for identifying important main effects. The PB designs whose run sizes are not a power of two have been criticized for their complex aliasing patterns, which according to conventional wisdom give confusing results. This paper goes beyond the traditional approach by proposing an analysis strategy that entertains interactions in addition to main effects. Based on the precepts of effect sparsity and effect heredity, the proposed procedure exploits the designs' complex aliasing patterns, thereby turning their 'liability' into an advantage. Demonstration of the procedure on three real experiments shows the potential for extracting important information available in the data that has, until now, been missed. Some limitations are discussed, and extensions to overcome them are given. The proposed procedure also applies to more general mixed-level designs that have become increasingly popular. 16 refs.
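
    The sketch below is a toy rendering of the screening idea described above (effect sparsity plus effect heredity): main effects are screened first, and two-factor interactions are entertained only when at least one parent main effect is active. The design matrix, response, and cut-off are hypothetical; this is a simplified illustration, not the authors' procedure.

    ```python
    import itertools
    import numpy as np

    def heredity_screen(X, y, t_cut=2.0):
        """Toy screening in the spirit of effect sparsity and effect heredity.
        X: n-by-k matrix of coded factor levels (+1/-1); y: responses."""
        n, k = X.shape
        A = np.column_stack([np.ones(n), X])          # intercept + main effects
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        sigma2 = resid @ resid / max(n - A.shape[1], 1)
        cov = sigma2 * np.linalg.pinv(A.T @ A)
        t_main = beta[1:] / np.sqrt(np.diag(cov)[1:])

        # Step 1: keep the few main effects that stand out (effect sparsity).
        active = [j for j in range(k) if abs(t_main[j]) > t_cut]

        # Step 2: entertain only interactions with an active parent (heredity).
        candidates = [(i, j) for i, j in itertools.combinations(range(k), 2)
                      if i in active or j in active]
        return active, candidates
    ```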

  17. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

    SciTech Connect

    Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

    2012-10-07

    Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied
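
    The sketch below illustrates, with made-up numbers, the kind of correction the article argues for: deriving a safety factor from the scatter of measured blending times around a CFD-predicted mean rather than trusting the mean alone. The values are hypothetical and are not results from the Savannah River study.

    ```python
    import statistics

    cfd_mean_minutes = 120.0                                  # hypothetical CFD estimate
    measured_minutes = [105.0, 118.0, 131.0, 149.0, 162.0]    # hypothetical lab data

    # Experimentally validated safety factor: worst observed case vs. CFD mean.
    safety_factor = max(measured_minutes) / cfd_mean_minutes
    recommended = cfd_mean_minutes * safety_factor

    print(f"scatter (std dev): {statistics.stdev(measured_minutes):.1f} min")
    print(f"safety factor: {safety_factor:.2f}, design blending time: {recommended:.0f} min")
    ```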

  18. Turbulence modeling and experiments

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir

    1992-01-01

    The best way of verifying turbulence models is to do a direct comparison between the various terms and their models. The success of this approach depends upon the availability of the data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this it is sometimes difficult to find the cause of a poor performance by a model. However, such a calculation is still meaningful in other ways as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using the second order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This, therefore, includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. Which of the three models performs better is examined, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second moment equations for a buoyant plume. Despite the tremendous amount of activity toward the second order closure modeling of turbulence, very little experimental information is available about the budgets of the second moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for the turbulence modelers. The purpose of this part of the work was to provide such balances of the Reynolds stress and heat

  19. Molecular modeling of polynucleotide complexes.

    PubMed

    Meneksedag-Erol, Deniz; Tang, Tian; Uludağ, Hasan

    2014-08-01

    Delivery of polynucleotides into patient cells is a promising strategy for treatment of genetic disorders. Gene therapy aims to either synthesize desired proteins (DNA delivery) or suppress expression of endogenous genes (siRNA delivery). Carriers constitute an important part of gene therapeutics due to limitations arising from the pharmacokinetics of polynucleotides. Non-viral carriers such as polymers and lipids protect polynucleotides from intra- and extracellular threats and facilitate formation of cell-permeable nanoparticles through shielding and/or bridging multiple polynucleotide molecules. Formation of nanoparticulate systems with optimal features, their cellular uptake and intracellular trafficking are crucial steps for an effective gene therapy. Despite the great amount of experimental work pursued, critical features of the nanoparticles as well as their processing mechanisms are still under debate due to the lack of instrumentation at atomic resolution. Molecular modeling based computational approaches can shed light onto the atomic level details of gene delivery systems, thus providing valuable input that cannot be readily obtained with experimental techniques. Here, we review the molecular modeling research pursued on critical gene therapy steps, highlight the knowledge gaps in the field, and provide future perspectives. Existing modeling studies revealed several important aspects of gene delivery, such as nanoparticle formation dynamics with various carriers, effect of carrier properties on complexation, carrier conformations in endosomal stages, and release of polynucleotides from carriers. Rate-limiting steps related to cellular events (i.e. internalization, endosomal escape, and nuclear uptake) are now beginning to be addressed by computational approaches. Limitations arising from current computational power and accuracy of modeling have been hindering the development of more realistic models. With the help of rapidly-growing computational power

  20. Adolescents' experience of complex persistent pain.

    PubMed

    Sørensen, Kari; Christiansen, Bjørg

    2017-04-01

    Persistent (chronic) pain is a common phenomenon in adolescents. When young people are referred to a pain clinic, they usually have amplified pain signals, with pain syndromes of unconfirmed etiology, such as fibromyalgia and complex regional pain syndrome (CRPS). Pain is complex and seems to be related to a combination of illness, injury, psychological distress, and environmental factors. These young people are found to have higher levels of distress, anxiety, sleep disturbance, and lower mood than their peers and may be in danger of entering adulthood with mental and physical problems. In order to understand the complexity of persistent pain in adolescents, there seems to be a need for further qualitative research into their lived experiences. The aim of this study was to explore adolescents' experiences of complex persistent pain and its impact on everyday life. The study has an exploratory design with individual in-depth interviews with six youths aged 12-19, recruited from a pain clinic at a main referral hospital in Norway. A narrative approach allowed the informants to give voice to their experiences concerning complex persistent pain. A hermeneutic analysis was used, where the research question was the basis for a reflective interpretation. Three main themes were identified: (1) a life with pain and unpleasant bodily expressions; (2) an altered emotional wellbeing; and (3) the struggle to keep up with everyday life. The pain was experienced as extremely strong, emerging from a minor injury or without any obvious causation, and not always being recognised by healthcare providers. The pain intensity increased as the suffering got worse, and the sensation was hard to describe with words. Parts of their body could change in appearance, and some described having pain-attacks or fainting. The feeling of anxiety was strongly connected to the pain. Despair and uncertainty contributed to physical disability, major sleep problems, school absence, and withdrawal from

  1. Hydraulic Fracturing Mineback Experiment in Complex Media

    NASA Astrophysics Data System (ADS)

    Green, S. J.; McLennan, J. D.

    2012-12-01

    Hydraulic fracturing (or "fracking") for the recovery of gas and liquids from tight shale formations has gained much attention. This operation, which involves horizontal well drilling and massive hydraulic fracturing, has been developed over the last decade to produce fluids from extremely low permeability mudstone and siltstone rocks with high organic content. Nearly thirteen thousand wells and about one hundred and fifty thousand stages within the wells were fractured in the US in 2011. This operation has proven to be successful, attracting hundreds of billions of dollars of investment, producing an abundance of natural gas, and making billions of barrels of hydrocarbon liquids available to the US. But, even with this commercial success, relatively little is clearly known about the complexity--or lack of complexity--of the hydraulic fracture, the extent to which the newly created surface area contacts the high reservoir-quality rock, or the connectivity and conductivity of the hydraulic fractures created. To better understand these phenomena in order to improve efficiency, a large-scale mine-back experiment is in progress. The mine-back experiment is a full-scale hydraulic fracture carried out in a well-characterized environment, with comprehensive instrumentation deployed to measure fracture growth. A tight shale mudstone rock geologic setting is selected, near the edge of a formation where one to two thousand feet difference in elevation occurs. From the top of the formation, drilling, well logging, and hydraulic fracture pumping will occur. From the bottom of the formation a horizontal tunnel will be mined using conventional mining techniques into the rock formation towards the drilled well. Certain instrumentation will be located within this tunnel for observations during the hydraulic fracturing. After the hydraulic fracturing, the tunnel will be extended toward the well, with careful mapping of the created hydraulic fracture. Fracturing fluid will be

  2. Surface complexation modeling of groundwater arsenic mobility: Results of a forced gradient experiment in a Red River flood plain aquifer, Vietnam

    NASA Astrophysics Data System (ADS)

    Jessen, Søren; Postma, Dieke; Larsen, Flemming; Nhan, Pham Quy; Hoa, Le Quynh; Trang, Pham Thi Kim; Long, Tran Vu; Viet, Pham Hung; Jakobsen, Rasmus

    2012-12-01

    Three surface complexation models (SCMs) developed for, respectively, ferrihydrite, goethite and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer along the Red River, Vietnam. The SCMs for ferrihydrite and goethite yielded very different results. The ferrihydrite SCM favors As(III) over As(V) and has carbonate and silica species as the main competitors for surface sites. In contrast, the goethite SCM has a greater affinity for As(V) over As(III) while PO43- and Fe(II) form the predominant surface species. The SCM for Pleistocene aquifer sediment most closely resembles the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment were also well described by the SCM determined for Pleistocene aquifer sediment, suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel to the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed. The concentrations of As (<0.013 μM) and major ions in the channel water are low compared to those in the pristine groundwater in the adjacent bank aquifer, which had an As concentration of ˜3 μM. Calculations for conservative mixing of channel and groundwater could explain the observed variation in concentration for most elements. However, the mixed waters did contain an excess of As(III), PO43- and Si which is attributed to desorption from the aquifer sediment. The three SCMs were tested on their ability to model the desorption of As(III), PO43- and Si. Qualitatively, the ferrihydrite SCM correctly predicts desorption for As(III) but for Si and PO43- it predicts an increased adsorption instead of desorption. The goethite SCM correctly predicts desorption of both As(III) and PO43
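
    A minimal sketch of the conservative-mixing check described above: predict the concentration expected from mixing alone and attribute any positive excess to desorption. The channel and groundwater arsenic values echo the numbers quoted in the abstract, but the observed mixed-water value and the function name are illustrative assumptions.

    ```python
    def conservative_mix(c_channel, c_groundwater, f_channel):
        """Concentration expected if two-endmember mixing alone (no sorption) acts."""
        return f_channel * c_channel + (1.0 - f_channel) * c_groundwater

    c_as_channel = 0.013       # As in channel water, micromolar (upper bound cited)
    c_as_groundwater = 3.0     # As in pristine bank groundwater, micromolar
    f = 0.74                   # up to 74% channel water observed in the mixture

    expected = conservative_mix(c_as_channel, c_as_groundwater, f)
    observed = 1.2             # hypothetical measured As in the mixed water

    # A positive excess points to desorption from the aquifer sediment.
    print(f"conservative prediction: {expected:.2f} uM, excess: {observed - expected:.2f} uM")
    ```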

  3. Barrier experiment: Shock initiation under complex loading

    SciTech Connect

    Menikoff, Ralph

    2016-01-12

    The barrier experiments are a variant of the gap test; a detonation wave in a donor HE impacts a barrier and drives a shock wave into an acceptor HE. The question we ask is: what is the trade-off between the barrier material and the threshold barrier thickness needed to prevent the acceptor from detonating? This can be viewed from the perspective of shock initiation of the acceptor subject to a complex pressure drive condition. Here we consider key factors which affect whether or not the acceptor undergoes a shock-to-detonation transition. These include the following: shock impedance matches for the donor detonation wave into the barrier and then the barrier shock into the acceptor, the pressure gradient behind the donor detonation wave, and the curvature of the detonation front in the donor. Numerical simulations are used to illustrate how these factors affect the reaction in the acceptor.
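
    As a back-of-the-envelope companion to the impedance-match factor mentioned above, the sketch below uses the small-amplitude (linear-acoustic) transmission formula to show how the barrier material changes the pressure passed into the barrier. Real shock-to-detonation problems require Hugoniot and equation-of-state matching; the pressures and impedances here are illustrative only.

    ```python
    def transmitted_pressure(p_incident, z_donor_products, z_barrier):
        """Small-amplitude transmission: P_t = P_i * 2*Z2 / (Z1 + Z2)."""
        return p_incident * 2.0 * z_barrier / (z_donor_products + z_barrier)

    p_cj = 30.0e9                      # ~30 GPa donor detonation pressure (illustrative)
    z_products = 2000.0 * 8000.0       # rho*c of detonation products, kg/(m^2 s)

    for name, z in [("polymer barrier", 1200.0 * 2500.0),
                    ("aluminum barrier", 2700.0 * 5300.0)]:
        p_t = transmitted_pressure(p_cj, z_products, z)
        print(f"{name}: ~{p_t / 1e9:.1f} GPa driven into the barrier")
    ```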

  4. Trends in modeling Biomedical Complex Systems

    PubMed Central

    Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro

    2009-01-01

    In this paper we provide an introduction to techniques for modeling multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts on mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068

  5. Modelling Canopy Flows over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.

    2016-12-01

    Recent studies of flow over forested hills have been motivated by a number of important applications including understanding CO2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland with complex terrain and heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.

  6. "Computational Modeling of Actinide Complexes"

    SciTech Connect

    Balasubramanian, K

    2007-03-07

    We will present our recent studies on computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the results for free energies in solution and the mechanism of deprotonation have been a topic of considerable uncertainty. We have computed deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO{sub 2}(H{sub 2}O){sub 5}{sup 2+}, NpO{sub 2}(H{sub 2}O){sub 6}{sup +}, and PuO{sub 2}(H{sub 2}O){sub 5}{sup 2+} complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution for the first time agrees with the experiment (7.1 kcal

  7. Complex terrain experiments in the New European Wind Atlas.

    PubMed

    Mann, J; Angelou, N; Arnqvist, J; Callies, D; Cantero, E; Arroyo, R Chávez; Courtney, M; Cuxart, J; Dellwik, E; Gottschall, J; Ivanell, S; Kühn, P; Lea, G; Matos, J C; Palma, J M L M; Pauscher, L; Peña, A; Rodrigo, J Sanz; Söderberg, S; Vasiljevic, N; Rodrigues, C Veiga

    2017-04-13

    The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiments of which some are nearly completed while others are in the planning stage. All experiments focus on the flow properties that are relevant for wind turbines, so the main focus is the mean flow and the turbulence at heights between 40 and 300 m. Also extreme winds, wind shear and veer, and diurnal and seasonal variations of the wind are of interest. Common to all the experiments is the use of Doppler lidar systems to supplement and in some cases replace completely meteorological towers. Many of the lidars will be equipped with scan heads that will allow for arbitrary scan patterns by several synchronized systems. Two pilot experiments, one in Portugal and one in Germany, show the value of using multiple synchronized, scanning lidar, both in terms of the accuracy of the measurements and the atmospheric physical processes that can be studied. The experimental data will be used for validation of atmospheric flow models and will by the end of the project be freely available. This article is part of the themed issue 'Wind energy in complex terrains'.

  8. Complex terrain experiments in the New European Wind Atlas

    PubMed Central

    Angelou, N.; Callies, D.; Cantero, E.; Arroyo, R. Chávez; Courtney, M.; Cuxart, J.; Dellwik, E.; Gottschall, J.; Ivanell, S.; Kühn, P.; Lea, G.; Matos, J. C.; Palma, J. M. L. M.; Peña, A.; Rodrigo, J. Sanz; Söderberg, S.; Vasiljevic, N.; Rodrigues, C. Veiga

    2017-01-01

    The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiments of which some are nearly completed while others are in the planning stage. All experiments focus on the flow properties that are relevant for wind turbines, so the main focus is the mean flow and the turbulence at heights between 40 and 300 m. Also extreme winds, wind shear and veer, and diurnal and seasonal variations of the wind are of interest. Common to all the experiments is the use of Doppler lidar systems to supplement and in some cases replace completely meteorological towers. Many of the lidars will be equipped with scan heads that will allow for arbitrary scan patterns by several synchronized systems. Two pilot experiments, one in Portugal and one in Germany, show the value of using multiple synchronized, scanning lidar, both in terms of the accuracy of the measurements and the atmospheric physical processes that can be studied. The experimental data will be used for validation of atmospheric flow models and will by the end of the project be freely available. This article is part of the themed issue ‘Wind energy in complex terrains’. PMID:28265025

  9. Complex terrain experiments in the New European Wind Atlas

    NASA Astrophysics Data System (ADS)

    Mann, J.; Angelou, N.; Arnqvist, J.; Callies, D.; Cantero, E.; Arroyo, R. Chávez; Courtney, M.; Cuxart, J.; Dellwik, E.; Gottschall, J.; Ivanell, S.; Kühn, P.; Lea, G.; Matos, J. C.; Palma, J. M. L. M.; Pauscher, L.; Peña, A.; Rodrigo, J. Sanz; Söderberg, S.; Vasiljevic, N.; Rodrigues, C. Veiga

    2017-03-01

    The New European Wind Atlas project will create a freely accessible wind atlas covering Europe and Turkey, develop the model chain to create the atlas and perform a series of experiments on flow in many different kinds of complex terrain to validate the models. This paper describes the experiments of which some are nearly completed while others are in the planning stage. All experiments focus on the flow properties that are relevant for wind turbines, so the main focus is the mean flow and the turbulence at heights between 40 and 300 m. Also extreme winds, wind shear and veer, and diurnal and seasonal variations of the wind are of interest. Common to all the experiments is the use of Doppler lidar systems to supplement and in some cases replace completely meteorological towers. Many of the lidars will be equipped with scan heads that will allow for arbitrary scan patterns by several synchronized systems. Two pilot experiments, one in Portugal and one in Germany, show the value of using multiple synchronized, scanning lidar, both in terms of the accuracy of the measurements and the atmospheric physical processes that can be studied. The experimental data will be used for validation of atmospheric flow models and will by the end of the project be freely available. This article is part of the themed issue 'Wind energy in complex terrains'.

  10. Atmospheric modeling in complex terrain

    SciTech Connect

    Williams, M. D.; Streit, G. E.

    1990-05-01

    Los Alamos investigators have developed several models which are relevant to modeling Mexico City air quality. The collection of models includes: meteorological models, dispersion models, air chemistry models, and visibility models. The models have been applied in several different contexts. They have been developed primarily to address the complexities posed by complex terrain. HOTMAC is the meteorological model which requires terrain and limited meteorological information. HOTMAC incorporates a relatively complete description of atmospheric physics to give good descriptions of the wind, temperature, and turbulence fields. RAPTAD is a dispersion code which uses random particle transport and kernel representations to efficiently provide accurate pollutant concentration fields. RAPTAD provides a much better description of tracer dispersion than do Gaussian puff models which fail to properly represent the effects of the wind profile near the surface. ATMOS and LAVM treat photochemistry and visibility respectively. ATMOS has been used to describe wintertime chemistry of the Denver brown cloud. Its description provided reasonable agreement with measurements for the high altitude of Denver. LAVM can provide both numerical indices and pictorial representations of visibility effects of pollutants. 15 refs., 74 figs.

  11. BOOK REVIEW: Modeling Complex Systems

    NASA Astrophysics Data System (ADS)

    Schreckenberg, M.

    2004-10-01

    This book by Nino Boccara presents a compilation of model systems commonly termed `complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on dynamics of complex systems. This is the first book on this `wide' topic and I have long awaited such a book (in fact I planned to write it myself but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view and one would have expected more about the famous Domany--Kinzel model (and more accurate citation!). In my opinion this is one of the best textbooks published during the last decade and even experts can learn a lot from it. Hopefully there will be an updated edition after, say, five years since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success!

  12. Modeling phototaxis in complex networks

    NASA Astrophysics Data System (ADS)

    Kuksenok, Olga; Balazs, Anna C.

    2006-03-01

    Phototaxis is the movement of organisms towards or away from light. It is one of the most important photo-biological processes, which in turn are responsible for light reception and the use of photons as a source of information. We briefly review current models of phototaxis of biological organisms and we develop a simple, minimal model for synthetic microscale units that can undergo phototactic motion. We then use this model to simulate the collective motion of such photosensitive artificial objects within a complex network, which is illuminated in a non-uniform manner by an external light.
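
    A minimal sketch of the kind of phototactic unit the abstract describes: a walker whose steps are biased up a smooth light-intensity gradient plus noise. The light field, step sizes, and noise level are invented for illustration and do not reproduce the paper's network model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    source = np.array([5.0, 5.0])              # location of the light source

    def light_gradient(pos):
        """Gradient of I(r) = -0.5*ln(1 + |source - r|^2), peaked at the source."""
        return (source - pos) / (1.0 + np.sum((source - pos) ** 2))

    pos = np.zeros(2)
    for _ in range(500):
        pos += 0.2 * light_gradient(pos) + 0.05 * rng.normal(size=2)

    print("final distance to source:", round(float(np.linalg.norm(pos - source)), 3))
    ```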

  13. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes through a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  14. Modeling wildfire incident complexity dynamics.

    PubMed

    Thompson, Matthew P

    2013-01-01

    Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity as swelling fire management budgets lead to decreases for non-fire programs, and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding factors influencing suppression decisions and in turn their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management.
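
    A small numerical sketch of the modeling framework summarized above: daily incident complexity evolves as a first-order Markov chain with containment as an absorbing state, and expected times to containment follow from the fundamental matrix. The states and transition probabilities below are invented for illustration, not estimates from the paper.

    ```python
    import numpy as np

    # States: 0-2 are transient complexity levels, 3 = contained (absorbing).
    P = np.array([
        [0.70, 0.15, 0.05, 0.10],
        [0.10, 0.65, 0.15, 0.10],
        [0.05, 0.15, 0.72, 0.08],
        [0.00, 0.00, 0.00, 1.00],
    ])

    Q = P[:3, :3]                          # transitions among transient states
    N = np.linalg.inv(np.eye(3) - Q)       # fundamental matrix
    expected_days = N @ np.ones(3)         # expected days until containment

    for level, days in zip(["Type 3", "Type 2", "Type 1"], expected_days):
        print(f"starting at {level} complexity: ~{days:.1f} days to containment")
    ```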

  15. Effect of a Whole-Person Model of Care on Patient Experience in Patients With Complex Chronic Illness in Late Life.

    PubMed

    Shippee, Nathan D; Shippee, Tetyana P; Mobley, Patrick D; Fernstrom, Karl M; Britt, Heather R

    2017-01-01

    Patients with serious chronic illness are at a greater risk of depersonalized, overmedicalized care as they move into later life. Existing intervention research on person-focused care for persons in this transitional period is limited. To test the effects of LifeCourse, a team-based, whole-person intervention emphasizing listening to and knowing patients, on patient experience at 6 months. This is a quasi-experimental study with patients allocated to LifeCourse and comparison groups based on 2 geographic locations. Robust change-score regression models adjusted for baseline differences and confounding. Patients (113 intervention, 99 comparison in analyses) were individuals with heart failure or other serious chronic illness, cancer, or dementia who had visits to hospitals at a large multipractice health system in the United States Midwest. Primary outcome was 6-month change in patient experience measured via a novel, validated 21-item patient experience tool developed specifically for this intervention. Covariates included demographics, comorbidity score, and primary diagnosis. At 6 months, LifeCourse was associated with a moderate improvement in overall patient experience versus usual care. Individual domain subscales for care team, communication, and patient goals were not individually significant but trended positively in the direction of effect. Person-focused, team-based interventions can improve patient experience with care at a stage fraught with overmedicalization and many care needs. Improvement in patient experience in LifeCourse represents the sum effect of small improvements across different domains/aspects of care such as relationships with and work by the care team.

  16. Modeling Wildfire Incident Complexity Dynamics

    PubMed Central

    Thompson, Matthew P.

    2013-01-01

    Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity as swelling fire management budgets lead to decreases for non-fire programs, and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding factors influencing suppression decisions and in turn their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management. PMID:23691014

  17. On the Way to Appropriate Model Complexity

    NASA Astrophysics Data System (ADS)

    Höge, M.

    2016-12-01

    When statistical models are used to represent natural phenomena they are often too simple or too complex - this is known. But what exactly is model complexity? Among many other definitions, the complexity of a model can be conceptualized as a measure of statistical dependence between observations and parameters (Van der Linde, 2014). However, several issues remain when working with model complexity: a unique definition for model complexity is missing. Assuming a definition is accepted, how can model complexity be quantified? How can we use a quantified complexity to improve modeling? Generally defined, "complexity is a measure of the information needed to specify the relationships between the elements of organized systems" (Bawden & Robinson, 2015). The complexity of a system changes as the knowledge about the system changes. For models this means that complexity is not a static concept: with more data or higher spatio-temporal resolution of parameters, the complexity of a model changes. There are essentially three categories into which all commonly used complexity measures can be classified: (1) an explicit representation of model complexity as the "degrees of freedom" of a model, e.g. the effective number of parameters. (2) Model complexity as code length, a.k.a. "Kolmogorov complexity": the longer the shortest model code, the higher its complexity (e.g. in bits). (3) Complexity defined via the information entropy of parametric or predictive uncertainty. Preliminary results show that Bayes' theorem allows for incorporating all parts of the non-static concept of model complexity, such as data quality and quantity or parametric uncertainty. Therefore, we test how different approaches for measuring model complexity perform in comparison to a fully Bayesian model selection procedure. Ultimately, we want to find a measure that helps to assess the most appropriate model.
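
    The snippet below illustrates category (3) above with synthetic numbers: estimating complexity from the information entropy of a parameter sample. With a real model the samples would come from, e.g., MCMC output; the histogram-based estimate is bin-dependent and only meaningful as a relative comparison, and everything here is an assumption for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    posterior_simple = rng.normal(0.0, 0.1, size=(20000, 1))     # tightly constrained
    posterior_flexible = rng.normal(0.0, 1.0, size=(20000, 3))   # more free parameters

    def entropy_bits(samples, bins=30):
        """Discrete (binned) entropy of a parameter sample, in bits."""
        hist, _ = np.histogramdd(samples, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    print("simple model  :", round(entropy_bits(posterior_simple), 2), "bits")
    print("flexible model:", round(entropy_bits(posterior_flexible), 2), "bits")
    ```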

  18. Numerical Modeling Experiments

    DTIC Science & Technology

    1974-09-01

    The presence of clouds is associated with the occurrence of condensation in the atmospheric models. Cloudiness at a particular grid point is introduced when saturation is predicted as a result of either large-scale moisture flux convergence or vertical convective adjustment. In most models such clouds ... cloud top, cloud thickness, and liquid-water content. In some general circulation models the local fractional convective cloud amounts are taken

  19. Diagnosis in Complex Plasmas for Microgravity Experiments (PK-3 plus)

    SciTech Connect

    Takahashi, Kazuo; Hayashi, Yasuaki; Thomas, Hubertus M.; Morfill, Gregor E.; Ivlev, Alexei V.; Adachi, Satoshi

    2008-09-07

    Under microgravity, complex (dusty) plasmas can be produced in which the dust particles are embedded in a completely charge-neutral region of the bulk plasma. The dust clouds, as an uncompressed, strongly coupled Coulomb system, serve as an atomic-scale model exhibiting several physical phenomena, such as crystallization and phase transitions. As these phenomena are tightly connected to the plasma state, it is important to understand plasma parameters such as electron density and temperature. The present work reports the electron density in the setup for microgravity experiments currently on board the International Space Station.

  20. The Hidden Complexities of a "Simple" Experiment.

    ERIC Educational Resources Information Center

    Caplan, Jeremy B.; And Others

    1994-01-01

    Provides two experiments that do not give the expected results. One involves burning a candle in an air-filled beaker under water and the other burns the candle in pure oxygen. Provides methodology, suggestions, and theory. (MVL)

  2. Numerical Experiments In Strongly Coupled Complex (Dusty) Plasmas

    NASA Astrophysics Data System (ADS)

    Hou, L. J.; Ivlev A.; Hubertus M. T.; Morfill, G. E.

    2010-07-01

    Complex (dusty) plasma is a suspension of micron-sized charged dust particles in a weakly ionized plasma with electrons, ions, and neutral atoms or molecules. Therein, dust particles acquire a few thousand electron charges by absorbing surrounding electrons and ions, and consequently interact with each other via a dynamically screened Coulomb potential while undergoing Brownian motion due primarily to frequent collisions with the neutral molecules. When the interaction potential energy between charged dust particles significantly exceeds their kinetic energy, they become strongly coupled and can form ordered structures comprising liquid and solid states. Since the motion of charged dust particles in complex (dusty) plasmas can be directly observed in real time by using a video camera, such systems have been generally regarded as a promising model system to study many phenomena occurring in solids, liquids and other strongly-coupled systems at the kinetic level, such as phase transitions, transport processes, and collective dynamics. Complex plasma physics has now grown into a mature research field with a very broad range of interdisciplinary facets. In addition to the usual experimental and theoretical study, computer simulation in complex plasma plays an important role in bridging experimental observations and theories and in understanding many interesting phenomena observed in the laboratory. The present talk will focus on a class of computer simulations that are usually non-equilibrium ones with external perturbation and that mimic real complex plasma experiments (i.e., numerical experiments). The simulation method, i.e., the so-called Brownian dynamics method, will first be reviewed, and then examples, such as simulations of heat transfer and shock-wave propagation, will be presented.
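
    A minimal Brownian-dynamics sketch, in reduced units, of the kind of numerical experiment described above: overdamped Langevin steps for grains interacting through a screened (Yukawa) potential with a weak confining trap. Charges, screening length, friction, and temperature are illustrative placeholders rather than values from any laboratory complex-plasma experiment.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, kT, gamma, dt, steps, kappa = 50, 0.05, 1.0, 1e-3, 2000, 1.0

    pos = rng.uniform(0.0, 10.0, size=(n, 2))

    def forces(x):
        f = np.zeros_like(x)
        for i in range(n):
            r = x[i] - x                              # separation vectors
            d = np.linalg.norm(r, axis=1)
            mask = d > 1e-9                           # exclude self-interaction
            # Repulsive Yukawa: F_vec = (kappa + 1/d) * exp(-kappa*d) / d^2 * r_vec
            mag = (kappa + 1.0 / d[mask]) * np.exp(-kappa * d[mask]) / d[mask] ** 2
            f[i] = np.sum(mag[:, None] * r[mask], axis=0)
        return f - 0.05 * (x - 5.0)                   # weak harmonic confinement

    for _ in range(steps):
        noise = np.sqrt(2.0 * kT * dt / gamma) * rng.normal(size=pos.shape)
        pos += dt / gamma * forces(pos) + noise       # overdamped Langevin update

    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    print("mean nearest-neighbour distance:", round(float(np.sort(dist, axis=1)[:, 1].mean()), 2))
    ```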

  3. Modelling biological complexity: a physical scientist's perspective.

    PubMed

    Coveney, Peter V; Fowler, Philip W

    2005-09-22

    integration of molecular and more coarse-grained representations of matter. The scope of such integrative approaches to complex systems research is circumscribed by the computational resources available. Computational grids should provide a step jump in the scale of these resources; we describe the tools that RealityGrid, a major UK e-Science project, has developed together with our experience of deploying complex models on nascent grids. We also discuss the prospects for mathematical approaches to reducing the dimensionality of complex networks in the search for universal systems-level properties, illustrating our approach with a description of the origin of life according to the RNA world view.

  4. Modelling biological complexity: a physical scientist's perspective

    PubMed Central

    Coveney, Peter V; Fowler, Philip W

    2005-01-01

    integration of molecular and more coarse-grained representations of matter. The scope of such integrative approaches to complex systems research is circumscribed by the computational resources available. Computational grids should provide a step jump in the scale of these resources; we describe the tools that RealityGrid, a major UK e-Science project, has developed together with our experience of deploying complex models on nascent grids. We also discuss the prospects for mathematical approaches to reducing the dimensionality of complex networks in the search for universal systems-level properties, illustrating our approach with a description of the origin of life according to the RNA world view. PMID:16849185

  5. A Practical Philosophy of Complex Climate Modelling

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin A.; Sherwood, Steven

    2014-01-01

    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
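
    The short sketch below gives one concrete form of "skill against more naive predictions": a mean-square-error skill score relative to a climatological baseline. The observed, modelled, and baseline series are synthetic placeholders, not CMIP output.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    observed = 14.0 + 0.02 * np.arange(100) + rng.normal(0, 0.10, 100)   # e.g. deg C
    model = observed + rng.normal(0, 0.08, 100)       # an imperfect simulation
    naive = np.full(100, observed.mean())             # climatology as the naive forecast

    def mse(a, b):
        return float(np.mean((a - b) ** 2))

    skill = 1.0 - mse(model, observed) / mse(naive, observed)
    print(f"skill vs climatology: {skill:.2f}  (1 = perfect, <= 0 = no better than naive)")
    ```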

  7. Tuberous sclerosis complex; single center experience

    PubMed Central

    Erol, İlknur; Savaş, Tülin; Şekerci, Sevda; Yazıcı, Nalan; Erbay, Ayşe; Demir, Şenay; Saygı, Semra; Alkan, Özlem

    2015-01-01

    Aim: This study was planned with the aim of retrospectively reviewing the clinical and laboratory findings and therapies of our patients diagnosed with tuberous sclerosis and redefining the patients according to the diagnostic criteria revised by the 2012 International Tuberous Sclerosis Complex Consensus Group and comparing them with the literature. Materials and Method: Twenty patients diagnosed with tuberous sclerosis complex in the Pediatric Neurology Clinic were examined retrospectively in terms of clinical findings and therapies. The diagnoses were compared again according to 1998 and 2012 criteria. Results: It was observed that the complaint at presentation was seizure in 17 of 20 patients and hypopigmented spots on the skin in 3 of 20 patients. On the initial physical examination, findings related with the disease were found in the skin in 17 of the patients, in the eye in 5, in the kidneys in 7 and in the brain with imaging in 17. No cardiac involvement was observed in the patients. Infantile spasm was observed in 7 of the patients who presented because of seizure (n=17), partial seizure was observed in 7 and multiple seizure types were observed in 3. It was found that sirolimus treatment was given to 9 of 20 patients because of different reasons, 7 of these 9 patients had epileptic seizures and sirolimus treatment had no effect on epileptic seizures. According to 2012 diagnostic criteria, no marked change occurred in the diagnoses of our patients. Conclusions: It was observed that the signs and symptoms of our patients were compatible with the literature. Molecular genetic examination was planned for the patients who were being followed up because of probable tuberous sclerosis complex. It was observed that sirolimus treatment had no marked effect on the seizure frequency of our patients. PMID:26078697

  8. Extracting Models in Single Molecule Experiments

    NASA Astrophysics Data System (ADS)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of single molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a pre-conceived model.

  9. Teacher Modeling Using Complex Informational Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.

  11. Metallo Complexes: An Experiment for the Undergraduate Laboratory.

    ERIC Educational Resources Information Center

    Kauffman, George B.; And Others

    1984-01-01

    Describes an experiment in which several metallo complexes with different central atoms are prepared. Background information on these compounds is provided, including requirements for their formation, preparation methods, and comments on their general properties and analysis. (JN)

  12. "Long, Boring, and Tedious": Youths' Experiences with Complex, Religious Texts

    ERIC Educational Resources Information Center

    Rackley, Eric D.; Kwok, Michelle

    2016-01-01

    Growing out of the renewed attention to text complexity in the United States and the large population of youth who are deeply committed to reading scripture, this study explores 16 Latter-day Saint and Methodist youths' experiences with complex, religious texts. The study took place in the Midwestern United States. Data consisted of an academic…

  13. "Long, Boring, and Tedious": Youths' Experiences with Complex, Religious Texts

    ERIC Educational Resources Information Center

    Rackley, Eric D.; Kwok, Michelle

    2016-01-01

    Growing out of the renewed attention to text complexity in the United States and the large population of youth who are deeply committed to reading scripture, this study explores 16 Latter-day Saint and Methodist youths' experiences with complex, religious texts. The study took place in the Midwestern United States. Data consisted of an academic…

  14. Iron-Sulfur-Carbonyl and -Nitrosyl Complexes: A Laboratory Experiment.

    ERIC Educational Resources Information Center

    Glidewell, Christopher; And Others

    1985-01-01

    Background information, materials needed, procedures used, and typical results obtained, are provided for an experiment on iron-sulfur-carbonyl and -nitrosyl complexes. The experiment involved (1) use of inert atmospheric techniques and thin-layer and flexible-column chromatography and (2) interpretation of infrared, hydrogen and carbon-13 nuclear…

  16. Wind and Diffusion Modeling for Complex Terrain.

    NASA Astrophysics Data System (ADS)

    Cox, Robert M.; Sontowski, John; Fry, Richard N., Jr.; Dougherty, Catherine M.; Smith, Thomas J.

    1998-10-01

    Atmospheric transport and dispersion over complex terrain were investigated. Meteorological and sulfur hexafluoride (SF6) concentration data were collected and used to evaluate the performance of a transport and diffusion model coupled with a mass consistency wind field model. Meteorological data were collected throughout April 1995. Both meteorological and plume location and concentration data were measured in December 1995. The meteorological data included measurements taken at 11-15 surface stations, one to three upper-air stations, and one mobile profiler. A range of conditions was encountered, including inversion and postinversion breakup, light to strong winds, and a broad distribution of wind directions. The models used were the MINERVE mass consistency wind model and the SCIPUFF (Second-Order Closure Integrated Puff) transport and diffusion model. These models were expected to provide and use high-resolution three-dimensional wind fields. An objective of the experiment was to determine if these models could provide emergency personnel with high-resolution hazardous plume information for quick response operations. Evaluation of the models focused primarily on their effectiveness as a short-term (1-4 h) predictive tool. These studies showed how they could be used to help direct emergency response following a hazardous material release. For purposes of the experiments, the models were used to direct the deployment of mobile sensors intended to intercept and measure tracer clouds. The April test was conducted to evaluate the performance of the MINERVE wind field generation model. It was evaluated during the early morning radiation inversion, inversion dissipation, and afternoon mixed atmosphere. The average deviations in wind speed and wind direction as compared to observations were within 0.4 m s-1 and less than 10° for up to 2 h after data time. These deviations increased as time from data time increased. It was also found that deviations were greatest during

  17. PETN ignition experiments and models.

    PubMed

    Hobbs, Michael L; Wente, William B; Kaneshige, Michael J

    2010-04-29

    Ignition experiments from various sources, including our own laboratory, have been used to develop a simple ignition model for pentaerythritol tetranitrate (PETN). The experiments consist of differential thermal analysis, thermogravimetric analysis, differential scanning calorimetry, beaker tests, one-dimensional time to explosion tests, Sandia's instrumented thermal ignition tests (SITI), and thermal ignition of nonelectrical detonators. The model developed using this data consists of a one-step, first-order, pressure-independent mechanism used to predict pressure, temperature, and time to ignition for various configurations. The model was used to assess the state of the degraded PETN at the onset of ignition. We propose that cookoff violence for PETN can be correlated with the extent of reaction at the onset of ignition. This hypothesis was tested by evaluating metal deformation produced from detonators encased in copper as well as comparing postignition photos of the SITI experiments.
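
    The sketch below integrates a zero-dimensional, adiabatic version of the kind of one-step, first-order, pressure-independent mechanism described above, reporting both the time to ignition and the extent of reaction at that point. The Arrhenius constants, heat of reaction, and runaway criterion are rough illustrative values, not the calibrated PETN parameters from the paper.

    ```python
    import numpy as np

    A = 1.0e19        # pre-exponential factor, 1/s (illustrative)
    E = 190.0e3       # activation energy, J/mol (illustrative)
    R = 8.314         # gas constant, J/(mol K)
    Q = 1.2e6         # heat of reaction, J/kg (illustrative)
    c = 1.0e3         # specific heat, J/(kg K)

    def time_to_ignition(T0, dt=1e-3, t_max=200.0):
        """Adiabatic 0-D cookoff: dalpha/dt = (1-alpha)*A*exp(-E/RT), dT/dt = (Q/c)*dalpha/dt."""
        T, alpha, t = T0, 0.0, 0.0
        while t < t_max:
            rate = (1.0 - alpha) * A * np.exp(-E / (R * T))
            alpha += rate * dt
            T += (Q / c) * rate * dt
            t += dt
            if (Q / c) * rate > 100.0:     # crude thermal-runaway (ignition) criterion
                return t, alpha
        return t_max, alpha

    for T0 in (440.0, 460.0, 480.0):       # initial temperatures, K
        t_ign, alpha_ign = time_to_ignition(T0)
        print(f"T0 = {T0:.0f} K: ignition near {t_ign:.1f} s, extent of reaction {alpha_ign:.3f}")
    ```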

  18. Assessment and Computerized Modeling of the Environmental Deposition of Military Smokes. Characterization of the Atmospheric Boundary Layer in Complex Terrain and Results from the Amadeus Smoke Dispersion Experiments

    DTIC Science & Technology

    1991-12-01

    empirical relation. The appropriate scaling parameters for this region are w* and h (Smith and Blackall, 1979), and several empirical forms using these... Slope Flows in a Tributary Canyon During the 1984 ASCOT Experiment," Journal of Applied Meteorology, 28, 569-577. Smith, F. B. and R. M. Blackall

  19. Capturing Complexity through Maturity Modelling

    ERIC Educational Resources Information Center

    Underwood, Jean; Dillon, Gayle

    2004-01-01

    The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…

  1. A model of clutter for complex, multivariate geospatial displays.

    PubMed

    Lohrenz, Maura C; Trafton, J Gregory; Beck, R Melissa; Gendron, Marlin L

    2009-02-01

    A novel model of measuring clutter in complex geospatial displays was compared with human ratings of subjective clutter as a measure of convergent validity. The new model is called the color-clustering clutter (C3) model. Clutter is a known problem in displays of complex data and has been shown to affect target search performance. Previous clutter models are discussed and compared with the C3 model. Two experiments were performed. In Experiment 1, participants performed subjective clutter ratings on six classes of information visualizations. Empirical results were used to set two free parameters in the model. In Experiment 2, participants performed subjective clutter ratings on aeronautical charts. Both experiments compared and correlated empirical data to model predictions. The first experiment resulted in a .76 correlation between ratings and C3. The second experiment resulted in a .86 correlation, significantly better than results from a model developed by Rosenholtz et al. Outliers to our correlation suggest further improvements to C3. We suggest that (a) the C3 model is a good predictor of subjective impressions of clutter in geospatial displays, (b) geospatial clutter is a function of color density and saliency (primary C3 components), and (c) pattern analysis techniques could further improve C3. The C3 model could be used to improve the design of electronic geospatial displays by suggesting when a display will be too cluttered for its intended audience.
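    The abstract attributes geospatial clutter primarily to color density and saliency. The sketch below is a crude proxy in that spirit, clustering pixel colors with k-means and combining a color-entropy term with a local color-change term; it illustrates the idea only and is not the published C3 algorithm.

```python
# Crude clutter proxy inspired by the idea that clutter grows with the number of
# distinct colours and their spatial mixing. This is NOT the published C3 model.
import numpy as np
from sklearn.cluster import KMeans

def clutter_proxy(image_rgb, n_clusters=8):
    """image_rgb: (H, W, 3) array in [0, 1]. Returns a scalar clutter score."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3)
    km = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(pixels)
    labels = km.labels_.reshape(h, w)
    # Colour-density term: how evenly the colour clusters share the image (entropy).
    counts = np.bincount(labels.ravel(), minlength=n_clusters) / labels.size
    entropy = -np.sum(counts[counts > 0] * np.log(counts[counts > 0]))
    # Saliency-like term: fraction of neighbouring pixel pairs with different clusters.
    edges = np.mean(labels[:, 1:] != labels[:, :-1]) + np.mean(labels[1:, :] != labels[:-1, :])
    return entropy * edges

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 64)[:, None, None], (1, 64, 3))   # gentle gradient
noisy  = rng.random((64, 64, 3))                                      # colour noise
print("smooth :", round(clutter_proxy(smooth), 3))
print("noisy  :", round(clutter_proxy(noisy), 3))
```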

  2. Molecular simulation and modeling of complex I.

    PubMed

    Hummer, Gerhard; Wikström, Mårten

    2016-07-01

    Molecular modeling and molecular dynamics simulations play an important role in the functional characterization of complex I. With its large size and complicated function, linking quinone reduction to proton pumping across a membrane, complex I poses unique modeling challenges. Nonetheless, simulations have already helped in the identification of possible proton transfer pathways. Simulations have also shed light on the coupling between electron and proton transfer, thus pointing the way in the search for the mechanistic principles underlying the proton pump. In addition to reviewing what has already been achieved in complex I modeling, we aim here to identify pressing issues and to provide guidance for future research to harness the power of modeling in the functional characterization of complex I. This article is part of a Special Issue entitled Respiratory complex I, edited by Volker Zickermann and Ulrich Brandt.

  3. Hierarchical Models of the Nearshore Complex System

    DTIC Science & Technology

    2004-01-01

    Hierarchical Models of the Nearshore Complex System: Final Report. Funding number: N00014-02-1-0358. Author: Brad Werner. The long-term goal of this research was to develop and test predictive models for nearshore processes. This grant was termination funding for the

  4. Numerical Modeling of LCROSS experiment

    NASA Astrophysics Data System (ADS)

    Sultanov, V. G.; Kim, V. V.; Matveichev, A. V.; Zhukov, B. G.; Lomonosov, I. V.

    2009-06-01

    The mission objectives of the Lunar Crater Observation and Sensing Satellite (LCROSS) include confirming the presence or absence of water ice in a permanently shadowed crater in the Moon's polar regions. In this research we present results of numerical modeling of the forthcoming LCROSS experiment. The parallel FPIC3D gas dynamic code with implemented realistic equations of state (EOS) and constitutive relations [1] was used. A new wide-range EOS for lunar ground was developed. We carried out calculations of the impact of a model body on the lunar surface at different angles. Impacts on dry and on water-ice-containing lunar ground were also taken into account. Modeling results are given for the crater's shape and size along with the amount of ejecta. [1] V.E. Fortov, V.V. Kim, I.V. Lomonosov, A.V. Matveichev, A.V. Ostrik. Numerical modeling of hypervelocity impacts, Int. J. Impact Engineering, 33, 244-253 (2006)

  5. Scaffolding in Complex Modelling Situations

    ERIC Educational Resources Information Center

    Stender, Peter; Kaiser, Gabriele

    2015-01-01

    The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…

  7. School Experiences of an Adolescent with Medical Complexities Involving Incontinence

    ERIC Educational Resources Information Center

    Filce, Hollie Gabler; Bishop, John B.

    2014-01-01

    The educational implications of chronic illnesses which involve incontinence are not well represented in the literature. The experiences of an adolescent with multiple complex illnesses, including incontinence, were explored via an intrinsic case study. Data were gathered from the adolescent, her mother, and teachers through interviews, email…

  8. The Complex Experience of Learning to Do Research

    ERIC Educational Resources Information Center

    Emo, Kenneth; Emo, Wendy; Kimn, Jung-Han; Gent, Stephen

    2015-01-01

    This article examines how student learning is a product of the experiential interaction between person and environment. We draw from the theoretical perspective of complexity to shed light on the emergent, adaptive, and unpredictable nature of students' learning experiences. To understand the relationship between the environment and the student…

  10. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

  11. Reducing Spatial Data Complexity for Classification Models

    SciTech Connect

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-29

    Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

  12. Facing up to Complexity: Implications for Our Social Experiments.

    PubMed

    Hawkins, Ronnie

    2016-06-01

    Biological systems are highly complex, and for this reason there is a considerable degree of uncertainty as to the consequences of making significant interventions into their workings. Since a number of new technologies are already impinging on living systems, including our bodies, many of us have become participants in large-scale "social experiments". I will discuss biological complexity and its relevance to the technologies that brought us BSE/vCJD and the controversy over GM foods. Then I will consider some of the complexities of our social dynamics, and argue for making a shift from using the precautionary principle to employing the approach of evaluating the introduction of new technologies by conceiving of them as social experiments.

  13. Role models for complex networks

    NASA Astrophysics Data System (ADS)

    Reichardt, J.; White, D. R.

    2007-11-01

    We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.

  14. Complex Parameter Landscape for a Complex Neuron Model

    PubMed Central

    Achard, Pablo; De Schutter, Erik

    2006-01-01

    The electrical activity of a neuron is strongly dependent on the ionic channels present in its membrane. Modifying the maximal conductances from these channels can have a dramatic impact on neuron behavior. But the effect of such modifications can also be cancelled out by compensatory mechanisms among different channels. We used an evolution strategy with a fitness function based on phase-plane analysis to obtain 20 very different computational models of the cerebellar Purkinje cell. All these models produced very similar outputs to current injections, including tiny details of the complex firing pattern. These models were not completely isolated in the parameter space, but neither did they belong to a large continuum of good models that would exist if weak compensations between channels were sufficient. The parameter landscape of good models can best be described as a set of loosely connected hyperplanes. Our method is efficient in finding good models in this complex landscape. Unraveling the landscape is an important step towards the understanding of functional homeostasis of neurons. PMID:16848639

  15. Agent-based modeling of complex infrastructures

    SciTech Connect

    North, M. J.

    2001-06-01

    Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.

  16. Modeling the complex bromate-iodine reaction.

    PubMed

    Machado, Priscilla B; Faria, Roberto B

    2009-05-07

    In this article, it is shown that the FLEK model (ref 5) is able to model the experimental results of the bromate-iodine clock reaction. Five different complex chemical systems, the bromate-iodide clock and oscillating reactions, the bromite-iodide clock and oscillating reactions, and now the bromate-iodine clock reaction, are adequately accounted for by the FLEK model.

  17. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-09-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty of the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and suggest a strategy to alleviate this problem through the localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
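    The Lorenz-95 system used as the test bed above (more commonly written Lorenz-96) is easy to reproduce; a minimal integration that could serve as the "truth" run in a toy assimilation experiment is sketched below, assuming the standard forcing F = 8.

```python
# Minimal Lorenz-96 integration (the chaotic test bed referred to as Lorenz-95 above).
import numpy as np
from scipy.integrate import solve_ivp

N, F = 40, 8.0                      # 40 variables, standard forcing F = 8

def lorenz96(t, x):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F   (cyclic indices)
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

x0 = F * np.ones(N)
x0[0] += 0.01                        # small perturbation to trigger chaotic behaviour
sol = solve_ivp(lorenz96, (0.0, 10.0), x0, max_step=0.01)
print("final state, first five variables:", np.round(sol.y[:5, -1], 3))
```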

  18. Numerical models of complex diapirs

    NASA Astrophysics Data System (ADS)

    Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.

    1993-12-01

    Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which a diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, a moderate erosion rate to "wine glass" diapirs and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat". These structures spread lateral sheets.

  19. Slip complexity in earthquake fault models.

    PubMed

    Rice, J R; Ben-Zion, Y

    1996-04-30

    We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.

  20. Mathematical modeling of complex regulatory networks.

    PubMed

    Stelling, Jörg; Gilles, Ernst Dieter

    2004-09-01

    Cellular regulation comprises overwhelmingly complex interactions between genes and proteins that ultimately will only be rendered understandable by employing formal approaches. Developing large-scale mathematical models of such systems in an efficient and reliable way, however, requires careful evaluation of structuring principles for the models, of the description of the system dynamics, and of the experimental data basis for adjusting the models to reality. We discuss these three aspects of model development using the example of cell cycle regulation in yeast and suggest that capturing complex dynamic networks is feasible despite incomplete (quantitative) biological knowledge.

  1. Preferential urn model and nongrowing complex networks.

    PubMed

    Ohkubo, Jun; Yasuda, Muneki; Tanaka, Kazuyuki

    2005-12-01

    A preferential urn model, which is based on the concept "the rich get richer," is proposed. From a relationship between a nongrowing model for complex networks and the preferential urn model in regard to degree distributions, it is revealed that a fitness parameter in the nongrowing model is interpreted as an inverse local temperature in the preferential urn model. Furthermore, it is clarified that the preferential urn model with randomness generates a fat-tailed occupation distribution; the concept of the local temperature enables us to understand the fat-tailed occupation distribution intuitively. Since the preferential urn model is a simple stochastic model, it can be applied to research on not only the nongrowing complex networks, but also many other fields such as econophysics and social sciences.
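    A generic "rich get richer" urn simulation in the spirit of the abstract is sketched below; the specific update rule (destination urn chosen with probability proportional to its occupation plus one) is an illustrative assumption, not necessarily the exact dynamics of the cited model.

```python
# Generic "rich get richer" urn sketch: at each step a ball is drawn from a random urn
# and dropped into an urn chosen with probability proportional to its occupation + 1.
# This illustrates how preferential reinforcement skews the occupation distribution;
# it is not claimed to be the exact update rule of the preferential urn model above.
import numpy as np

rng = np.random.default_rng(1)
n_urns, n_balls, n_steps = 200, 2000, 100000
occ = np.full(n_urns, n_balls // n_urns)          # start from a uniform filling

for _ in range(n_steps):
    src = rng.integers(n_urns)                    # source urn picked uniformly
    if occ[src] == 0:
        continue
    weights = occ + 1.0                           # preferential ("rich get richer") weights
    dst = rng.choice(n_urns, p=weights / weights.sum())
    occ[src] -= 1
    occ[dst] += 1

occ_sorted = np.sort(occ)[::-1]
print("largest occupations:", occ_sorted[:5])
print("fraction of empty urns:", np.mean(occ == 0))
```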

  2. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1993-01-01

    Modeling plays a vital role in providing physical insights into behavior revealed by experiment. The program at the University of Illinois is designed to improve our understanding of basic combustion phenomena through the analytical and numerical modeling of a variety of configurations undergoing experimental study in NASA's microgravity combustion program. Significant progress has been made in two areas: (1) flame-balls, studied experimentally by Ronney and his co-workers; (2) particle-cloud flames studied by Berlad and his collaborators. Additional work is mentioned below. NASA funding for the U. of Illinois program commenced in February 1991 but work was initiated prior to that date and the program can only be understood with this foundation exposed. Accordingly, we start with a brief description of some key results obtained in the pre-2/91 work.

  3. Complex system modelling for veterinary epidemiology.

    PubMed

    Lanzas, Cristina; Chen, Shi

    2015-02-01

    The use of mathematical models has a long tradition in infectious disease epidemiology. The nonlinear dynamics and complexity of pathogen transmission pose challenges in understanding its key determinants, in identifying critical points, and designing effective mitigation strategies. Mathematical modelling provides tools to explicitly represent the variability, interconnectedness, and complexity of systems, and has contributed to numerous insights and theoretical advances in disease transmission, as well as to changes in public policy, health practice, and management. In recent years, our modelling toolbox has considerably expanded due to the advancements in computing power and the need to model novel data generated by technologies such as proximity loggers and global positioning systems. In this review, we discuss the principles, advantages, and challenges associated with the most recent modelling approaches used in systems science, the interdisciplinary study of complex systems, including agent-based, network and compartmental modelling. Agent-based modelling is a powerful simulation technique that considers the individual behaviours of system components by defining a set of rules that govern how individuals ("agents") within given populations interact with one another and the environment. Agent-based models have become a recent popular choice in epidemiology to model hierarchical systems and address complex spatio-temporal dynamics because of their ability to integrate multiple scales and datasets.
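    The rule-based flavour of agent-based modelling described above can be illustrated with a minimal SIR-type simulation in which each infectious agent contacts a few randomly chosen partners per day; the contact rule and parameters are hypothetical, not taken from the review.

```python
# Minimal agent-based SIR sketch: each agent follows simple contact and recovery rules.
# Contact structure and parameters are hypothetical illustrations, not from the review.
import numpy as np

rng = np.random.default_rng(42)
n_agents, n_days = 500, 120
p_transmit, p_recover, contacts_per_day = 0.04, 0.1, 8

S, I, R = 0, 1, 2
state = np.zeros(n_agents, dtype=int)
state[rng.choice(n_agents, size=3, replace=False)] = I      # three index cases

history = []
for day in range(n_days):
    infected = np.where(state == I)[0]
    for agent in infected:
        partners = rng.integers(n_agents, size=contacts_per_day)   # random mixing rule
        for p in partners:
            if state[p] == S and rng.random() < p_transmit:
                state[p] = I
    # recovery rule applied after the day's contacts
    recover = (state == I) & (rng.random(n_agents) < p_recover)
    state[recover] = R
    history.append((np.sum(state == S), np.sum(state == I), np.sum(state == R)))

peak_day = int(np.argmax([h[1] for h in history]))
print(f"epidemic peak on day {peak_day} with {history[peak_day][1]} infectious agents")
```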

  4. Discrete Element Modeling of Complex Granular Flows

    NASA Astrophysics Data System (ADS)

    Movshovitz, N.; Asphaug, E. I.

    2010-12-01

    Granular materials occur almost everywhere in nature, and are actively studied in many fields of research, from food industry to planetary science. One approach to the study of granular media, the continuum approach, attempts to find a constitutive law that determines the material's flow, or strain, under applied stress. The main difficulty with this approach is that granular systems exhibit different behavior under different conditions, behaving at times as an elastic solid (e.g. pile of sand), at times as a viscous fluid (e.g. when poured), or even as a gas (e.g. when shaken). Even if all these physics are accounted for, numerical implementation is made difficult by the wide and often discontinuous ranges in continuum density and sound speed. A different approach is Discrete Element Modeling (DEM). Here the goal is to directly model every grain in the system as a rigid body subject to various body and surface forces. The advantage of this method is that it treats all of the above regimes in the same way, and can easily deal with a system moving back and forth between regimes. But as a granular system typically contains a multitude of individual grains, the direct integration of the system can be very computationally expensive. For this reason most DEM codes are limited to spherical grains of uniform size. However, spherical grains often cannot replicate the behavior of real world granular systems. A simple pile of spherical grains, for example, relies on static friction alone to keep its shape, while in reality a pile of irregular grains can maintain a much steeper angle by interlocking force chains. In the present study we employ a commercial DEM, nVidia's PhysX Engine, originally designed for the game and animation industry, to simulate complex granular flows with irregular, non-spherical grains. This engine runs as a multi threaded process and can be GPU accelerated. We demonstrate the code's ability to physically model granular materials in the three regimes

  5. A differential model of the complex cell.

    PubMed

    Hansard, Miles; Horaud, Radu

    2011-09-01

    The receptive fields of simple cells in the visual cortex can be understood as linear filters. These filters can be modeled by Gabor functions or gaussian derivatives. Gabor functions can also be combined in an energy model of the complex cell response. This letter proposes an alternative model of the complex cell, based on gaussian derivatives. It is most important to account for the insensitivity of the complex response to small shifts of the image. The new model uses a linear combination of the first few derivative filters, at a single position, to approximate the first derivative filter, at a series of adjacent positions. The maximum response, over all positions, gives a signal that is insensitive to small shifts of the image. This model, unlike previous approaches, is based on the scale space theory of visual processing. In particular, the complex cell is built from filters that respond to the 2D differential structure of the image. The computational aspects of the new model are studied in one and two dimensions, using the steerability of the gaussian derivatives. The response of the model to basic images, such as edges and gratings, is derived formally. The response to natural images is also evaluated, using statistical measures of shift insensitivity. The neural implementation and predictions of the model are discussed.
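    The core idea of the model, approximating the first-derivative (Gaussian derivative) response at nearby positions by a linear, Taylor-style combination of higher-order derivatives taken at a single position, and then maximizing over positions, can be sketched in one dimension as below; the filter scale, shift range, and stimulus are illustrative assumptions rather than the letter's exact parameterization.

```python
# 1-D sketch of the idea described above: higher-order Gaussian-derivative responses at a
# single position are combined (Taylor style) to approximate the first-derivative response
# at shifted positions; the maximum over shifts gives a shift-insensitive output.
# Filter scale, shift range and the test stimulus are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from math import factorial

sigma = 4.0
x = np.zeros(512)
x[250:262] = 1.0                      # a small bar stimulus

# Gaussian derivative responses of orders 1..4 over the whole signal
responses = [gaussian_filter1d(x, sigma, order=k) for k in range(1, 5)]

def complex_like_response(pos, shifts=np.arange(-6, 7)):
    """Max over shifts of the Taylor approximation of the 1st-derivative response."""
    approx = []
    for d in shifts:
        # f'(pos + d) ~ sum_k f^(1+k)(pos) * d^k / k!
        val = sum(responses[k][pos] * d**k / factorial(k) for k in range(len(responses)))
        approx.append(val)
    return np.max(np.abs(approx))

simple_at = lambda pos: abs(responses[0][pos])        # plain 1st-derivative ("simple cell")
for pos in (250, 253, 256):                            # probe positions near the bar edge
    print(pos, round(simple_at(pos), 4), round(complex_like_response(pos), 4))
```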

  6. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  8. Modeling the chemistry of complex petroleum mixtures.

    PubMed

    Quann, R J

    1998-12-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models.

  9. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903

  10. Updating the debate on model complexity

    USGS Publications Warehouse

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  11. Governance of complex systems: results of a sociological simulation experiment.

    PubMed

    Adelt, Fabian; Weyer, Johannes; Fink, Robin D

    2014-01-01

    Social sciences have discussed the governance of complex systems for a long time. The following paper tackles the issue by means of experimental sociology, in order to investigate the performance of different modes of governance empirically. The simulation framework developed is based on Esser's model of sociological explanation as well as on Kroneberg's model of frame selection. The performance of governance has been measured by means of three macro and two micro indicators. Surprisingly, central control mostly performs better than decentralised coordination. However, results not only depend on the mode of governance, but there is also a relation between performance and the composition of actor populations, which has not yet been investigated sufficiently. Practitioner Summary: Practitioners can gain insights into the functioning of complex systems and learn how to better manage them. Additionally, they are provided with indicators to measure the performance of complex systems.

  12. Multifaceted Modelling of Complex Business Enterprises.

    PubMed

    Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.

  13. Multifaceted Modelling of Complex Business Enterprises

    PubMed Central

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  14. Complex quantum network model of energy transfer in photosynthetic complexes.

    PubMed

    Ai, Bao-Quan; Zhu, Shi-Liang

    2012-12-01

    The quantum network model with real variables is usually used to describe the excitation energy transfer (EET) in the Fenna-Matthews-Olson (FMO) complexes. In this paper we add quantum phase factors to the hopping terms and find that the quantum phase factors play an important role in the EET. The quantum phase factors allow us to consider the spatial structure of the pigments. It is found that phase coherence within the complexes would allow quantum interference to affect the dynamics of the EET. There exist some optimal phase regions where the transfer efficiency takes its maxima, which indicates that when the pigments are optimally spaced, the exciton can pass through the FMO with perfect efficiency. Moreover, the optimal phase regions almost do not change with the environments. In addition, we find that the phase factors are useful in the EET only in the case of multiple pathways. Therefore, we demonstrate that the quantum phases may bring the other two factors, the optimal spacing of the pigments and multiple pathways, together to contribute to the EET in photosynthetic complexes with perfect efficiency.
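    A toy version of the phase effect described above is sketched below: a four-site tight-binding network with two pathways from source to target and a tunable phase on one hopping term, propagated with the Schrödinger equation. The couplings and phases are hypothetical, and the dephasing and trapping terms used in realistic FMO models are omitted.

```python
# Toy tight-binding network with a tunable phase on one hopping term, illustrating how
# quantum phases can change population transfer via interference. Couplings are
# hypothetical and the model omits dephasing and the trapping sink used in FMO studies.
import numpy as np
from scipy.linalg import expm

def transfer(phi, t_final=5.0, n_steps=200):
    J = 1.0
    # 4-site network: two pathways from site 0 to site 3, phase phi on the 0-2 link
    H = np.zeros((4, 4), dtype=complex)
    H[0, 1] = H[1, 0] = J
    H[1, 3] = H[3, 1] = J
    H[0, 2] = J * np.exp(1j * phi); H[2, 0] = np.conj(H[0, 2])
    H[2, 3] = H[3, 2] = J
    psi = np.zeros(4, dtype=complex); psi[0] = 1.0
    U = expm(-1j * H * (t_final / n_steps))        # short-time propagator
    peak = 0.0
    for _ in range(n_steps):
        psi = U @ psi
        peak = max(peak, abs(psi[3])**2)
    return peak

for phi in (0.0, np.pi / 2, np.pi):
    print(f"phi = {phi:.2f}: peak population on target site = {transfer(phi):.3f}")
```

    With phi = pi the two pathways interfere destructively and essentially no population reaches the target site, while phi = 0 allows efficient transfer, which is the qualitative point made in the abstract.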

  15. Combustion, Complex Fluids, and Fluid Physics Experiments on the ISS

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Urban, David

    2012-01-01

    multiphase flows, capillary phenomena, and heat pipes. Finally in complex fluids, experiments in rheology and soft condensed materials will be presented.

  16. Polycrystal models to fit experiments

    SciTech Connect

    Kocks, U.F.; Necker, C.T.

    1994-07-01

    Two problems in the modeling of polycrystal plasticity are addressed in which some parameter can best be determined by matching with experiment, although the principles of the underlying mechanisms are presumed known. One of these problems is the transition from "full constraints" (FC) to "relaxed constraints" (RC) with increasing flatness of the grains. Observed qualitative transitions in texture with strain, such as a transient orthotropic symmetry in torsion textures, can help identify the rate at which the FC-to-RC transition takes place. The second problem is that of the material dependence of deformation textures among the FCC metals which, it is argued, can only be due to a change in deformation modes, i.e., in the shape of the single-crystal yield surface. A heuristic assumption of an increasing importance of (111)<211>-slip as the stacking-fault energy decreases explains the qualitative trend. The quantitative parameter needed has been determined for copper from a match of prediction and experiment over a range of strains.

  17. Combustion, Complex Fluids, and Fluid Physics Experiments on the ISS

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Urban, David

    2012-01-01

    From the very early days of human spaceflight, NASA has been conducting experiments in space to understand the effect of weightlessness on physical and chemically reacting systems. NASA Glenn Research Center (GRC) in Cleveland, Ohio has been at the forefront of this research, looking at both fundamental studies in microgravity as well as experiments targeted at reducing the risks to long duration human missions to the moon, Mars, and beyond. In the current International Space Station (ISS) era, we now have an orbiting laboratory that provides the highly desired condition of long-duration microgravity. This allows continuous and interactive research similar to Earth-based laboratories. Because of these capabilities, the ISS is an indispensable laboratory for low gravity research. NASA GRC has been actively involved in developing and operating facilities and experiments on the ISS since the beginning of a permanent human presence on November 2, 2000. As the lead Center for combustion, complex fluids, and fluid physics, GRC has led the successful implementation of the Combustion Integrated Rack (CIR) and the Fluids Integrated Rack (FIR) as well as the continued use of other facilities on the ISS. These facilities have supported combustion experiments in fundamental droplet combustion; fire detection; fire extinguishment; soot phenomena; flame liftoff and stability; and material flammability. The fluids experiments have studied capillary flow; magneto-rheological fluids; colloidal systems; extensional rheology; and pool and nucleate boiling phenomena. In this paper, we provide an overview of the experiments conducted on the ISS over the past 12 years.

  18. Slip complexity in earthquake fault models.

    PubMed Central

    Rice, J R; Ben-Zion, Y

    1996-01-01

    We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size. PMID:11607669

  19. Minimum-complexity helicopter simulation math model

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinement are discussed. Math model computer programs are defined and listed.

  20. Coherent operation of detector systems and their readout electronics in a complex experiment control environment

    NASA Astrophysics Data System (ADS)

    Koestner, Stefan

    2009-09-01

    With the increasing size and degree of complexity of today's experiments in high energy physics, the required amount of work and complexity to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards which are outside the radiation area are accessed via embedded credit-card-sized PCs which are connected to a large local area network. The SPECS protocol is used for control of the front-end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.
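    A generic illustration of hierarchical finite-state-machine control, in which a parent node forwards commands to its children and derives its own state from theirs, is sketched below; this is a toy, not the PVSS II / SMI++ machinery actually used by LHCb.

```python
# Generic finite-state-machine sketch for hierarchical device control: a parent node
# forwards commands to its children and derives its own state from theirs. This is an
# illustration only, not the control software used by LHCb.
TRANSITIONS = {
    ("NOT_READY", "configure"): "READY",
    ("READY", "start"): "RUNNING",
    ("RUNNING", "stop"): "READY",
    ("READY", "reset"): "NOT_READY",
}

class DeviceNode:
    def __init__(self, name, children=None):
        self.name, self.state, self.children = name, "NOT_READY", children or []

    def command(self, cmd):
        for child in self.children:            # propagate the command to the sub-tree first
            child.command(cmd)
        new_state = TRANSITIONS.get((self.state, cmd))
        if new_state:
            self.state = new_state
        if self.children:                      # parent state is derived from the children
            states = {c.state for c in self.children}
            self.state = states.pop() if len(states) == 1 else "ERROR"

boards = [DeviceNode(f"readout_board_{i}") for i in range(3)]
subdetector = DeviceNode("subdetector", boards)
for cmd in ("configure", "start"):
    subdetector.command(cmd)
    print(cmd, "->", subdetector.state, [b.state for b in boards])
```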

  1. Model validation for karst flow using sandbox experiments

    NASA Astrophysics Data System (ADS)

    Ye, M.; Pacheco Castro, R. B.; Tao, X.; Zhao, J.

    2015-12-01

    The study of flow in karst is complex due to the high heterogeneity of the porous media. Several approaches have been proposed in the literature to overcome the natural complexity of karst. Some of those methods are the single continuum, the double continuum, and the discrete network of conduits coupled with the single continuum. Several mathematical and computational models are available in the literature for each approach. In this study one computer model has been selected for each category to validate its usefulness for modeling flow in karst using a sandbox experiment. The models chosen are Modflow 2005, Modflow CFPV1, and Modflow CFPV2. A sandbox experiment was implemented in such a way that all the parameters required for each model can be measured. The sandbox experiment was repeated several times under different conditions. The model validation will be carried out by comparing the results of the model simulations with the real data. This model validation will allow us to compare the accuracy of each model and its applicability in karst. We will also be able to evaluate whether the results of the complex models improve substantially on those of the simple models, especially because some complex models require parameters that are difficult to measure in the real world.

  2. Constructing minimal models for complex system dynamics

    NASA Astrophysics Data System (ADS)

    Barzel, Baruch; Liu, Yang-Yu; Barabási, Albert-László

    2015-05-01

    One of the strengths of statistical physics is the ability to reduce macroscopic observations into microscopic models, offering a mechanistic description of a system's dynamics. This paradigm, rooted in Boltzmann's gas theory, has found applications from magnetic phenomena to subcellular processes and epidemic spreading. Yet, each of these advances were the result of decades of meticulous model building and validation, which are impossible to replicate in most complex biological, social or technological systems that lack accurate microscopic models. Here we develop a method to infer the microscopic dynamics of a complex system from observations of its response to external perturbations, allowing us to construct the most general class of nonlinear pairwise dynamics that are guaranteed to recover the observed behaviour. The result, which we test against both numerical and empirical data, is an effective dynamic model that can predict the system's behaviour and provide crucial insights into its inner workings.

  3. Refiners Switch to RFG Complex Model

    EIA Publications

    1998-01-01

    On January 1, 1998, domestic and foreign refineries and importers must stop using the "simple" model and begin using the "complex" model to calculate emissions of volatile organic compounds (VOC), toxic air pollutants (TAP), and nitrogen oxides (NOx) from motor gasoline. The primary differences between the two models are that some refineries may have to meet stricter standards for the sulfur and olefin content of the reformulated gasoline (RFG) they produce and that all refineries will now be held accountable for NOx emissions. Requirements for calculating emissions from conventional gasoline under the anti-dumping rule similarly change for exhaust TAP and NOx. However, the change to the complex model is not expected to result in an increase in the price premium for RFG or constrain supplies.

  4. Reassessing Geophysical Models of the Bushveld Complex in 3D

    NASA Astrophysics Data System (ADS)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); and 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al., 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri et al., 2001; Webb et al., 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still ongoing debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account the effects of variations in geometry and geophysical properties of lithologies in a full three-dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models when modelling in 3D. The three published geophysical models were remodelled using full 3D potential-field modelling software, including the crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less
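    The sensitivity of calculated gravity to body geometry can be illustrated with a very simple forward model: the on-axis anomaly of a finite dense cylinder compared with the infinite-slab (Bouguer) value for the same thickness. The dimensions and density contrast below are illustrative only, not a model of the Bushveld Complex.

```python
# Simple forward-gravity illustration of why body geometry matters: the on-axis anomaly of
# a finite dense cylinder vs the infinite-slab (Bouguer) value for the same thickness.
# Dimensions and density contrast are illustrative, not a Bushveld model.
import numpy as np

G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
rho = 300.0                       # density contrast, kg/m^3
z1, z2 = 10e3, 18e3               # top and bottom depth of the body, m
thickness = z2 - z1

def cylinder_on_axis(radius):
    """Gravity on the axis of a buried vertical cylinder, observation at the surface."""
    return 2 * np.pi * G * rho * (thickness + np.hypot(z1, radius) - np.hypot(z2, radius))

slab = 2 * np.pi * G * rho * thickness                # infinite Bouguer slab of same thickness
for R in (25e3, 100e3, 400e3):
    print(f"R = {R/1e3:5.0f} km: {cylinder_on_axis(R)*1e5:6.2f} mGal "
          f"(slab limit {slab*1e5:.2f} mGal)")
```

    As the radius grows the finite body converges to the slab value, while a narrow body of the same thickness produces a much smaller anomaly, which is the kind of geometry effect that full 3D modelling captures.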

  5. Modeling acuity for optotypes varying in complexity.

    PubMed

    Watson, Andrew B; Ahumada, Albert J

    2012-09-29

    Watson and Ahumada (2008) described a template model of visual acuity based on an ideal-observer limited by optical filtering, neural filtering, and noise. They computed predictions for selected optotypes and optical aberrations. Here we compare this model's predictions to acuity data for six human observers, each viewing seven different optotype sets, consisting of one set of Sloan letters and six sets of Chinese characters, differing in complexity (Zhang, Zhang, Xue, Liu, & Yu, 2007). Since optical aberrations for the six observers were unknown, we constructed 200 model observers using aberrations collected from 200 normal human eyes (Thibos, Hong, Bradley, & Cheng, 2002). For each condition (observer, optotype set, model observer) we estimated the model noise required to match the data. Expressed as efficiency, performance for Chinese characters was 1.4 to 2.7 times lower than for Sloan letters. Efficiency was weakly and inversely related to perimetric complexity of optotype set. We also compared confusion matrices for human and model observers. Correlations for off-diagonal elements ranged from 0.5 to 0.8 for different sets, and the average correlation for the template model was superior to a geometrical moment model with a comparable number of parameters (Liu, Klein, Xue, Zhang, & Yu, 2009). The template model performed well overall. Estimated psychometric function slopes matched the data, and noise estimates agreed roughly with those obtained independently from contrast sensitivity to Gabor targets. For optotypes of low complexity, the model accurately predicted relative performance. This suggests the model may be used to compare acuities measured with different sets of simple optotypes.

  6. The Kuramoto model in complex networks

    NASA Astrophysics Data System (ADS)

    Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

    2016-01-01

    Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
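    For reference, the networked Kuramoto dynamics reviewed above, dθi/dt = ωi + K Σj Aij sin(θj − θi), together with the order parameter r, can be integrated in a few lines; the random network and coupling strength below are illustrative choices.

```python
# Minimal Kuramoto-on-a-network sketch:
#   dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
# The random network and coupling strength are illustrative choices.
import numpy as np

rng = np.random.default_rng(7)
N, K, dt, steps = 100, 0.6, 0.01, 5000
A = (rng.random((N, N)) < 0.1).astype(float)          # Erdos-Renyi-like adjacency
A = np.triu(A, 1); A = A + A.T                        # symmetric, no self-loops

omega = rng.normal(0.0, 1.0, N)                        # natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)               # random initial phases

def order_parameter(th):
    return abs(np.mean(np.exp(1j * th)))

for step in range(steps):                              # forward-Euler integration
    coupling = np.sum(A * np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta += dt * (omega + K * coupling)

print("order parameter r after integration:", round(order_parameter(theta), 3))
```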

  7. Experiments beyond the standard model

    SciTech Connect

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology; I call these Experimental Needs. 92 references.

  8. Optimizing Complex Kinetics Experiments Using Least-Squares Methods

    PubMed Central

    Fahr, A.; Braun, W.; Kurylo, M. J.

    1993-01-01

    Complex kinetic problems are generally modeled employing numerical integration routines. Our kinetics modeling program, Acuchem, has been modified to fit rate constants and absorption coefficients generically to real or synthesized “laboratory data” via a least-squares iterative procedure written for personal computers. To test the model and method of analysis the self- and cross-combination reactions of HO2 and CH3O2 radicals of importance in atmospheric chemistry are examined. These radicals as well as other species absorb ultraviolet radiation. The resultant absorption signal is measured in the laboratory and compared with a modeled signal to obtain the best-fit to various kinetic parameters. The modified program generates synthetic data with added random noise. An analysis of the synthetic data leads to an optimization of the experimental design and best-values for certain rate constants and absorption coefficients. PMID:28053465
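
    The workflow described (numerically integrating a reaction scheme, generating a synthetic absorbance signal with added noise, and fitting rate constants by iterative least squares) can be sketched in a few lines of Python. This is an illustrative stand-in, not the Acuchem program, and the rate constants, absorption coefficients, and units below are toy values.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import least_squares

        def rates(t, c, k11, k22, k12):
            # c = [HO2, CH3O2]; self- and cross-combination losses (toy units)
            a, b = c
            return [-2*k11*a*a - k12*a*b, -2*k22*b*b - k12*a*b]

        def signal(k, t, c0, sigma):
            # Integrate the scheme and form the summed UV absorbance trace.
            sol = solve_ivp(rates, (t[0], t[-1]), c0, t_eval=t, args=tuple(k), rtol=1e-8)
            return sigma @ sol.y

        t = np.linspace(0.0, 5.0, 200)
        c0 = [1.0, 1.0]
        sigma = np.array([0.8, 0.5])          # illustrative absorption coefficients
        k_true = [0.7, 0.3, 1.1]              # illustrative rate constants

        rng = np.random.default_rng(0)
        data = signal(k_true, t, c0, sigma) + rng.normal(0.0, 0.01, t.size)

        fit = least_squares(lambda k: signal(k, t, c0, sigma) - data, x0=[1.0, 1.0, 1.0])
        print(fit.x)                          # recovered rate constants

    Replacing the synthetic trace with a measured one, and letting the absorption coefficients float as additional parameters, reproduces the kind of fitting problem the abstract describes.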

  9. Modeling of Protein Binary Complexes Using Structural Mass Spectrometry Data

    SciTech Connect

    Amisha Kamal, J.; Chance, M.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints--positive and/or negative--in the docking step and are also used to decide the type of energy filter--electrostatics or desolvation--in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure.

  10. Comparing flood loss models of different complexity

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2013-04-01

    Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate the cost-effectiveness of mitigation measures, to assess vulnerability, and for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both concerning the data basis and the methodological approaches used for the development of flood loss models. Despite that, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split-sample, cross-regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explaining variables, are learned from a set of damage records that was obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006 for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule-based model FLEMOps+r, as well as novel model approaches which are derived using data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explaining variables.
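
    As an illustration of the data-mining end of the model spectrum mentioned above, a regression tree relating relative loss to a handful of explanatory variables can be set up as in the following sketch; the variable names and synthetic data are placeholders, not the surveyed Elbe and Danube damage records.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)
        n = 500
        water_depth = rng.uniform(0.0, 3.0, n)     # metres above ground floor (synthetic)
        duration = rng.uniform(1.0, 200.0, n)      # hours (synthetic)
        precaution = rng.integers(0, 2, n)         # private mitigation in place, yes/no

        # Synthetic relative building loss in [0, 1], loosely increasing with
        # water depth and duration and decreasing with precaution.
        rloss = np.clip(0.2*water_depth + 0.001*duration - 0.1*precaution
                        + rng.normal(0.0, 0.05, n), 0.0, 1.0)

        X = np.column_stack([water_depth, duration, precaution])
        tree = DecisionTreeRegressor(max_depth=3).fit(X, rloss)
        print(tree.predict([[1.5, 48.0, 0]]))      # predicted relative loss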

  11. Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Reddy, C. J.

    2011-01-01

    This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.

  12. Flowgraph Models for Complex Multistate System Reliability.

    SciTech Connect

    Williams, B. J.; Huzurbazar, A. V.

    2005-01-01

    This chapter reviews flowgraph models for complex multistate systems. The focus is on modeling data from semi-Markov processes and constructing likelihoods when different portions of the system data are censored and incomplete. Semi-Markov models play an important role in the analysis of time to event data. However, in practice, data analysis for semi-Markov processes can be quite difficult and many simplifying assumptions are made. Flowgraph models are multistate models that provide a data analytic method for semi-Markov processes. Flowgraphs are useful for estimating Bayes predictive densities, predictive reliability functions, and predictive hazard functions for waiting times of interest in the presence of censored and incomplete data. This chapter reviews data analysis for flowgraph models and then presents methods for constructing likelihoods when portions of the system data are missing.
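
    In outline, the flowgraph machinery works by attaching to each branch i -> j a transmittance that combines the transition probability with the moment generating function (MGF) of the waiting time, for example

        \[
        T_{ij}(s) = p_{ij}\, M_{ij}(s),
        \]

    transmittances along a series path multiply, parallel paths add, and a feedback loop with transmittance L(s) contributes a factor 1/(1 - L(s)); the solved flowgraph then gives the MGF of the total waiting time between any two states of interest, which is inverted (often numerically) to obtain the predictive densities, reliability functions, and hazard functions mentioned above. This summary follows the general flowgraph literature rather than the specific likelihood constructions of the chapter.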

  13. Complexity.

    PubMed

    Gómez-Hernández, J Jaime

    2006-01-01

    It is difficult to define complexity in modeling. Complexity is often associated with uncertainty since modeling uncertainty is an intrinsically difficult task. However, modeling uncertainty does not require, necessarily, complex models, in the sense of a model requiring an unmanageable number of degrees of freedom to characterize the aquifer. The relationship between complexity, uncertainty, heterogeneity, and stochastic modeling is not simple. Aquifer models should be able to quantify the uncertainty of their predictions, which can be done using stochastic models that produce heterogeneous realizations of aquifer parameters. This is the type of complexity addressed in this article.

  14. Human driven transitions in complex model ecosystems

    NASA Astrophysics Data System (ADS)

    Harfoot, Mike; Newbold, Tim; Tittinsor, Derek; Purves, Drew

    2015-04-01

    Human activities have been observed to be impacting ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying the functioning of ecosystems, represent only a small proportion of organisms and do not adequately capture complex non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate boundaries of complex ecosystems to human perturbation. We explore several drivers including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model., PLoS Biol. 12, e1001841 (2014).

  15. Constitutive modeling of pia-arachnoid complex.

    PubMed

    Jin, Xin; Mao, Haojie; Yang, King H; King, Albert I

    2014-04-01

    The pia-arachnoid complex (PAC) covering the brain plays an important role in the mechanical response of the brain during impact or inertial loading. Recent studies have revealed the complicated material behavior of the PAC. In this study, the nonlinear viscoelastic, transversely isotropic material properties of the PAC were modeled as Mooney-Rivlin ground substance with collagen fibers strengthening within the meningeal plane through an exponential model. The material constants needed were determined using experimental data from in-plane tension, normal traction, and shear tests conducted on bovine specimens. Results from this study provide essential information to properly model the PAC membrane, an important component in the skull/brain interface, in a computational brain model. Such an improved representation of the skull/brain interface will enhance the accuracy of finite element models used in brain injury mechanism studies under various loading conditions.
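
    A strain energy function of the type described, i.e. a Mooney-Rivlin ground substance reinforced by an exponential in-plane fiber term, is commonly written as follows; the precise form and the fitted constants used by the authors are not reproduced in this record, so the expression below is only indicative:

        \[
        W = C_{10}(\bar{I}_1 - 3) + C_{01}(\bar{I}_2 - 3)
          + \frac{k_1}{2k_2}\left[\exp\!\left(k_2(\bar{I}_4 - 1)^2\right) - 1\right],
        \]

    where \bar{I}_1 and \bar{I}_2 are the isochoric invariants of the deformation, \bar{I}_4 is the squared stretch along the meningeal fiber direction, and C_{10}, C_{01}, k_1, k_2 are material constants determined from in-plane tension, normal traction, and shear tests.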

  16. BDI-modelling of complex intracellular dynamics.

    PubMed

    Jonker, C M; Snoep, J L; Treur, J; Westerhoff, H V; Wijngaards, W C A

    2008-03-07

    A BDI-based continuous-time modelling approach for intracellular dynamics is presented. It is shown how temporalized BDI-models make it possible to model intracellular biochemical processes as decision processes. By abstracting from some of the details of the biochemical pathways, the model achieves understanding in nearly intuitive terms, without losing veracity: classical intentional state properties such as beliefs, desires and intentions are founded in reality through precise biochemical relations. In an extensive example, the complex regulation of Escherichia coli vis-à-vis lactose, glucose and oxygen is simulated as a discrete-state, continuous-time temporal decision manager. Thus a bridge is introduced between two different scientific areas: the area of BDI-modelling and the area of intracellular dynamics.

  17. Intrinsic Uncertainties in Modeling Complex Systems.

    SciTech Connect

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements: This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  18. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
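
    For context, the response surface models referred to are typically second-order polynomials in the coded factor settings, one per aerodynamic coefficient (a generic form, not the specific regressions reported from this test):

        \[
        y = \beta_0 + \sum_{i=1}^{k}\beta_i x_i + \sum_{i=1}^{k}\beta_{ii} x_i^2
          + \sum_{i<j}\beta_{ij} x_i x_j + \varepsilon,
        \]

    where the x_i are the test factors (for example, angle of attack or control and propulsion settings) and the \beta coefficients are estimated by regression over the randomized design points, with the design chosen to minimize prediction error.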

  19. Different Epidemic Models on Complex Networks

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Feng; Small, Michael; Fu, Xin-Chu

    2009-07-01

    Models for disease spreading are not limited to SIS or SIR. For instance, for the spread of AIDS/HIV, susceptible individuals can be classified into different cases according to their immunity, and similarly, infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that the relations between individuals can be viewed as a complex network. In this paper, in order to better explain the dynamical behavior of epidemics, we therefore consider different epidemic models on complex networks and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
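
    As a concrete example of the thresholds referred to, degree-based mean-field theory for the standard SIS model on an uncorrelated network with degree distribution P(k) gives the classical result (the staged models analyzed in the paper have their own, more involved, expressions):

        \[
        \lambda_c = \frac{\langle k \rangle}{\langle k^2 \rangle},
        \]

    so that for scale-free networks with a diverging second moment the threshold vanishes in the infinite-size limit, which is why the network structure, and not only the disease stages, controls whether an epidemic can take off.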

  20. Study on simulation and experiment of laser micro-Doppler effect for detecting complex vibration

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai; Zhang, Juan; Liu, Mei-juan; Zhang, Jun

    2013-08-01

    The spectrum of a radar signal is modulated by the motion of a target, or by vibrating and rotating parts of the target; this is known as the micro-Doppler effect, and exploiting it offers a new route to feature extraction and target recognition. Because a target undergoing complex vibration has a richer frequency content than a single-frequency vibrator, simulation and experimental study of the laser micro-Doppler effect for detecting complex vibration is particularly valuable. In this paper, such a study of the laser micro-Doppler effect for detecting the complex vibration of a moving target is developed, building on earlier simulation work on the micro-Doppler effect in lidar. First, the detection geometry for a target with complex vibration is established. Second, the simulated and experimental signal sources, and the corresponding returned radar signals, are compared: the two signal sources are very similar, and the returned signals show the same trend in frequency. Third, a joint time-frequency analysis based on the reassigned smoothed pseudo Wigner-Ville distribution (RSPWVD) is introduced to analyze the signals. The RSPWVD recovers the vibration waveform, the vibration period, the Doppler frequency shift, and the target's speed, and the simulated and experimental results have essentially the same frequency characteristics, showing that the RSPWVD captures the micro-Doppler signature of a moving target's complex vibration well. These results validate the simulation model and the time-frequency analysis approach, and lay the foundation for using lidar to classify and identify targets.

  1. Noncommutative complex Grosse-Wulkenhaar model

    SciTech Connect

    Hounkonnou, Mahouton Norbert; Samary, Dine Ousmane

    2008-11-18

    This paper presents an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc 956 (2007) 55-60], to the NC complex Grosse-Wulkenhaar model. It provides an extension of a recent work [Physics Letters B 653 (2007) 343-345]. The local conservation of energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.

  2. Color appearance models and complex visual stimuli.

    PubMed

    Fairchild, Mark D

    2010-01-01

    Teeth in a patient's mouth in a dental office, or in the natural environment, represent very complex stimuli for the human color vision system. Predicting their perceived color is a daunting task at best. Colorimetry is designed mainly for the evaluation of uniform, flat, opaque, materials of fairly large size viewed on a medium-grey background under near-daylight sources of fairly high luminance. On the contrary, in situ teeth vary spatially, are curved and ridged, translucent, relatively small, and viewed against a variable background under nonuniform, and typically nonstandard, illumination. These differences in stimuli and viewing conditions summarize the difficulty in predicting the color appearance of teeth. The field of color science has extended basic colorimetry, as represented by CIE XYZ and CIELAB coordinates, to more complex visual stimuli and viewing environments. The CIECAM02 color appearance model accurately addresses issues of chromatic adaptation, luminance effects and adaptation, background and surround effects, and the higher dimensionality of color appearance. Such models represent a significant advance and are used successfully in a variety of applications. However, many stimuli vary in space and time at scales not addressed by typical color appearance models. For example, high-definition video images would fall into such a category and so would in situ human teeth. More recently, color appearance models and image quality metrics have been combined to create image appearance models for even more complex visual stimuli. This paper provides an overview of fundamental and advanced colorimetry leading up to color appearance and image appearance models and their potential application in dentistry. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. The noisy voter model on complex networks

    PubMed Central

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-01-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing the network structure to be treated as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population-level variables can be measured. PMID:27094773
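
    A minimal Monte Carlo sketch of the noisy voter dynamics on an arbitrary network is given below for orientation; it is a direct simulation, not the annealed-approximation calculation developed in the paper, and the parameter values are illustrative.

        import numpy as np
        import networkx as nx

        def noisy_voter(G, a=0.01, steps=100_000, seed=0):
            """With probability a, a node picks a state at random (noise);
            otherwise it copies the state of a uniformly chosen neighbour."""
            rng = np.random.default_rng(seed)
            nodes = list(G.nodes())
            index = {n: i for i, n in enumerate(nodes)}
            state = rng.integers(0, 2, len(nodes))        # binary states 0/1
            magnetisation = []
            for _ in range(steps):
                i = rng.integers(len(nodes))
                if rng.random() < a:
                    state[i] = rng.integers(0, 2)         # random change of state
                else:
                    nbrs = list(G.neighbors(nodes[i]))
                    if nbrs:
                        state[i] = state[index[nbrs[rng.integers(len(nbrs))]]]
                magnetisation.append(state.mean())
            return np.array(magnetisation)

        G = nx.barabasi_albert_graph(1000, 3, seed=1)     # heterogeneous test network
        m = noisy_voter(G)
        print(m.mean(), m.std())      # aggregate ordering and its fluctuations

    Sweeping the noise parameter a and the degree heterogeneity of G while watching the fluctuations of the magnetisation reproduces, qualitatively, the finite-size transition the paper locates analytically.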

  4. Surface complexation modeling of inositol hexaphosphate sorption onto gibbsite.

    PubMed

    Ruyter-Hooley, Maika; Larsson, Anna-Carin; Johnson, Bruce B; Antzutkin, Oleg N; Angove, Michael J

    2015-02-15

    The sorption of Inositol hexaphosphate (IP6) onto gibbsite was investigated using a combination of adsorption experiments, (31)P solid-state MAS NMR spectroscopy, and surface complexation modeling. Adsorption experiments conducted at four temperatures showed that IP6 sorption decreased with increasing pH. At pH 6, IP6 sorption increased with increasing temperature, while at pH 10 sorption decreased as the temperature was raised. (31)P MAS NMR measurements at pH 3, 6, 9 and 11 produced spectra with broad resonance lines that could be de-convoluted with up to five resonances (+5, 0, -6, -13 and -21ppm). The chemical shifts suggest the sorption process involves a combination of both outer- and inner-sphere complexation and surface precipitation. Relative intensities of the observed resonances indicate that outer-sphere complexation is important in the sorption process at higher pH, while inner-sphere complexation and surface precipitation are dominant at lower pH. Using the adsorption and (31)P MAS NMR data, IP6 sorption to gibbsite was modeled with an extended constant capacitance model (ECCM). The adsorption reactions that best described the sorption of IP6 to gibbsite included two inner-sphere surface complexes and one outer-sphere complex: ≡AlOH + IP₆¹²⁻ + 5H⁺ ↔ ≡Al(IP₆H₄)⁷⁻ + H₂O, ≡3AlOH + IP₆¹²⁻ + 6H⁺ ↔ ≡Al₃(IP₆H₃)⁶⁻ + 3H₂O, ≡2AlOH + IP₆¹²⁻ + 4H⁺ ↔ (≡AlOH₂)₂²⁺(IP₆H₂)¹⁰⁻. The inner-sphere complex involving three surface sites may be considered to be equivalent to a surface precipitate. Thermodynamic parameters were obtained from equilibrium constants derived from surface complexation modeling. Enthalpies for the formation of inner-sphere surface complexes were endothermic, while the enthalpy for the outer-sphere complex was exothermic. The entropies for the proposed sorption reactions were large and positive suggesting that changes in solvation of species play a major role in driving
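
    The enthalpies and entropies quoted above are presumably extracted from the temperature dependence of the fitted equilibrium constants via the standard van't Hoff relation,

        \[
        \ln K = -\frac{\Delta H^{\circ}}{R}\,\frac{1}{T} + \frac{\Delta S^{\circ}}{R},
        \]

    so that, for each surface complexation reaction, a plot of ln K against 1/T over the four experimental temperatures yields \Delta H^{\circ} from the slope and \Delta S^{\circ} from the intercept.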

  5. Complex Constructivism: A Theoretical Model of Complexity and Cognition

    ERIC Educational Resources Information Center

    Doolittle, Peter E.

    2014-01-01

    Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

  6. Analysis of complex vessel experiments using the Hybrid Lagrangian-Eulerian containment code ALICE-II

    SciTech Connect

    Wang, C.Y.; Ku, J.L.; Zeuch, W.R.

    1984-03-01

    This paper describes the ALICE-II analysis of and comparison with complex vessel experiments. Tests SM-2 through SM-5 were performed by SRI International in 1978 in studying the structural response of 1/20 scale models of the Clinch River Breeder Reactor to a simulated hypothetical core-disruptive accident. These experiments provided quality data for validating treatments of the nonlinear fluid-structure interactions and many complex excursion phenomena, such as flow through perforated structures, large material distortions, multi-dimensional sliding interfaces, flow around sharp corners, and highly contorted fluid boundaries. Correlations of the predicted pressures with the test results of all gauges are made. Wave characteristics and arrival times are also compared. Results show that the ALICE-II code predicts the pressure profile well. Despite the complexity, the code gave good results for the SM-5 test.

  7. Magnetic modeling of the Bushveld Igneous Complex

    NASA Astrophysics Data System (ADS)

    Webb, S. J.; Cole, J.; Letts, S. A.; Finn, C.; Torsvik, T. H.; Lee, M. D.

    2009-12-01

    Magnetic modeling of the 2.06 Ga Bushveld Complex presents special challenges due to a variety of magnetic effects. These include strong remanence in the Main Zone and extremely high magnetic susceptibilities in the Upper Zone (UZ), which exhibit self-demagnetization. Recent palaeomagnetic results have resolved a long-standing discrepancy between age data, which constrain the emplacement to within 1 million years, and older palaeomagnetic data which suggested ~50 million years for emplacement. The new palaeomagnetic results agree with the age data and present a single consistent pole, as opposed to a long polar wander path, for the Bushveld for all of the Zones and all of the limbs. These results also pass a fold test, indicating the Bushveld Complex was emplaced horizontally and lending support to arguments for connectivity. The magnetic signature of the Bushveld Complex provides an ideal mapping tool as the UZ has high susceptibility values and is well layered, showing up as distinct anomalies on new high resolution magnetic data. However, this signature is similar to the highly magnetic BIFs found in the Transvaal and in the Witwatersrand Supergroups. Through careful mapping using new high resolution aeromagnetic data, we have been able to map the Bushveld UZ in complicated geological regions and identify a characteristic signature with well defined layers. The Main Zone, which has a more subdued magnetic signature, does have a strong remanent component and exhibits several magnetic reversals. The magnetic layers of the UZ contain layers of magnetitite with as much as 80-90% pure magnetite with large crystals (1-2 cm). While these layers are not strongly remanent, they have extremely high magnetic susceptibilities, and the self-demagnetization effect must be taken into account when modeling these layers. Because the Bushveld Complex is so large, the geometry of the Earth’s magnetic field relative to the layers of the UZ Bushveld Complex changes orientation, creating
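
    For the self-demagnetization effect mentioned, a common first-order correction (a textbook approximation, not a result of this study) replaces the intrinsic susceptibility by an apparent one,

        \[
        \chi_{\mathrm{app}} = \frac{\chi}{1 + N\chi},
        \]

    where N is the shape-dependent demagnetization factor of the layer; for near-pure magnetitite layers with susceptibilities of order 1 SI and above, this correction reduces the induced magnetization and deflects it toward the plane of the layer, which is why it must be included when modeling the UZ anomalies.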

  8. Predictive modelling of complex agronomic and biological systems.

    PubMed

    Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J

    2013-09-01

    Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. © 2013 John Wiley & Sons Ltd.

  9. Structured analysis and modeling of complex systems

    NASA Technical Reports Server (NTRS)

    Strome, David R.; Dalrymple, Mathieu A.

    1992-01-01

    The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

  10. Project trades model for complex space missions

    NASA Technical Reports Server (NTRS)

    Girerd, Andre R.; Shishko, Roberto

    2003-01-01

    A Project Trades Model (PTM) is a collection of tools/simulations linked together to rapidly perform integrated system trade studies of performance, cost, risk, and mission effectiveness. An operating PTM captures the interactions between various targeted systems and subsystems through an exchange of computed variables of the constituent models. Selection and implementation of the order, method of interaction, model type, and envisioned operation of the ensemble of tools represents the key system engineering challenge of the approach. This paper describes an approach to building a PTM and using it to perform top-level system trades for a complex space mission. In particular, the PTM discussed here is for a future Mars mission involving a large rover.

  11. The Intermediate Complexity Atmospheric Research Model

    NASA Astrophysics Data System (ADS)

    Gutmann, Ethan; Clark, Martyn; Rasmussen, Roy; Arnold, Jeffrey; Brekke, Levi

    2015-04-01

    The high-resolution, non-hydrostatic atmospheric models often used for dynamical downscaling are extremely computationally expensive, and, for a certain class of problems, their complexity hinders our ability to ask key scientific questions, particularly those related to hydrology and climate change. For changes in precipitation in particular, an atmospheric model grid spacing capable of resolving the structure of mountain ranges is of critical importance, yet such simulations cannot currently be performed with an advanced regional climate model for long time periods, over large areas, and forced by many climate models. Here we present the newly developed Intermediate Complexity Atmospheric Research model (ICAR) capable of simulating critical atmospheric processes two to three orders of magnitude faster than a state-of-the-art regional climate model. ICAR uses a simplified dynamical formulation based on linear theory, combined with the circulation field from a low-resolution climate model. The resulting three-dimensional wind field is used to advect heat and moisture within the domain, while sub-grid physics (e.g. microphysics) are processed by standard and simplified physics schemes from the Weather Research and Forecasting (WRF) model. ICAR is tested in comparison to WRF by downscaling a climate change scenario over the Colorado Rockies. Both atmospheric models predict increases in precipitation across the domain with a greater increase on the western half. In contrast, statistically downscaled precipitation using multiple common statistical methods predicts decreases in precipitation over the western half of the domain. Finally, we apply ICAR to multiple CMIP5 climate models and scenarios with multiple parameterization options to investigate the importance of uncertainty in sub-grid physics as compared to the uncertainty in the large scale climate scenario. ICAR is a useful tool for climate change and weather forecast downscaling, particularly for orographic

  12. Glass Durability Modeling, Activated Complex Theory (ACT)

    SciTech Connect

    CAROL, JANTZEN

    2005-02-04

    dissolution modeling using simple atomic ratios is shown to represent the structural effects of the glass on the dissolution and the formation of activated complexes in the glass leached layer. This provides two different methods by which a linear glass durability model can be formulated. One based on the quasi-crystalline mineral species in a glass and one based on cation ratios in the glass: both are related to the activated complexes on the surface by the law of mass action. The former would allow a new Thermodynamic Hydration Energy Model to be developed based on the hydration of the quasi-crystalline mineral species if all the pertinent thermodynamic data were available. Since the pertinent thermodynamic data is not available, the quasi-crystalline mineral species and the activated complexes can be related to cation ratios in the glass by the law of mass action. The cation ratio model can, thus, be used by waste form producers to formulate durable glasses based on fundamental structural and activated complex theories. Moreover, glass durability model based on atomic ratios simplifies HLW glass process control in that the measured ratios of only a few waste components and glass formers can be used to predict complex HLW glass performance with a high degree of accuracy, e.g. an R^2 approximately 0.97.

  13. Modeling the human prothrombinase complex components

    NASA Astrophysics Data System (ADS)

    Orban, Tivadar

    Thrombin generation is the culminating stage of the blood coagulation process. Thrombin is obtained from prothrombin (the substrate) in a reaction catalyzed by the prothrombinase complex (the enzyme). The prothrombinase complex is composed of factor Xa (the enzyme) and factor Va (the cofactor), associated in the presence of calcium ions on a negatively charged cell membrane. Factor Xa, alone, can activate prothrombin to thrombin; however, the rate of conversion is not physiologically relevant for survival. Incorporation of factor Va into prothrombinase accelerates the rate of prothrombinase activity by 300,000-fold, and provides the physiological pathway of thrombin generation. The long-term goal of the current proposal is to provide the necessary support for advancing studies to design potential drug candidates that may be used to avoid development of deep venous thrombosis in high-risk patients. The short-term goals of the present proposal are (1) to propose a model of a mixed asymmetric phospholipid bilayer, (2) to expand the incomplete model of human coagulation factor Va and study its interaction with the phospholipid bilayer, (3) to create a homology model of prothrombin, and (4) to study the dynamics of interaction between prothrombin and the phospholipid bilayer.

  14. Rhabdomyomas and tuberous sclerosis complex: our experience in 33 cases.

    PubMed

    Sciacca, Pietro; Giacchi, Valentina; Mattia, Carmine; Greco, Filippo; Smilari, Pierluigi; Betta, Pasqua; Distefano, Giuseppe

    2014-05-09

    Rhabdomyomas are the most common type of cardiac tumors in children. Anatomically, they can be considered as hamartomas. They are usually diagnosed incidentally, antenatally or postnatally, sometimes presenting in the neonatal period with haemodynamic compromise or severe arrhythmias, although most neonatal cases remain asymptomatic. Typically rhabdomyomas are multiple lesions and usually regress spontaneously, but are often associated with tuberous sclerosis complex (TSC), an autosomal dominant multisystem disorder caused by mutations in either of the two genes, TSC1 or TSC2. Diagnosis of tuberous sclerosis is usually made on clinical grounds and eventually confirmed by a genetic test searching for TSC gene mutations. We report our experience of 33 cases of rhabdomyoma diagnosed from January 1989 to December 2012, focusing on the cardiac outcome and on association with the signs of tuberous sclerosis complex. We performed echocardiography initially using a Philips Sonos 2500 with a 7.5/5 probe and, in the last 4 years, a Philips IE33 with an S12-4 probe. We investigated the family history, brain, skin, kidney and retinal lesions, development of seizures, and neuropsychiatric disorders. At diagnosis we detected 205 masses, mostly localized in the interventricular septum, right ventricle and left ventricle. Only in 4 babies (12%) did the presence of a mass cause a significant obstruction. A baby with an enormous septal rhabdomyoma, associated with multiple rhabdomyomas in both the right and left ventricular walls, died just after birth due to severe heart failure. During follow-up we observed a reduction of rhabdomyomas in terms of both number and size in all but one of the 32 surviving patients. Eight patients (24.2%) had an arrhythmia and in 2 of these cases rhabdomyomas led to Wolff-Parkinson-White syndrome. In all these patients the arrhythmia either disappeared spontaneously or was gradually reduced. With regard to the association with tuberous sclerosis, we

  15. On Complexity of the Quantum Ising Model

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Hastings, Matthew

    2017-01-01

    We study the complexity of several problems related to the Transverse field Ising Model (TIM). First, we consider the problem of estimating the ground state energy known as the Local Hamiltonian Problem (LHP). It is shown that the LHP for TIM on degree-3 graphs is equivalent modulo polynomial reductions to the LHP for general k-local 'stoquastic' Hamiltonians with any constant k ≥ 2. This result implies that estimating the ground state energy of TIM on degree-3 graphs is a complete problem for the complexity class StoqMA, an extension of the classical class MA. As a corollary, we complete the complexity classification of 2-local Hamiltonians with a fixed set of interactions proposed recently by Cubitt and Montanaro. Secondly, we study quantum annealing algorithms for finding ground states of classical spin Hamiltonians associated with hard optimization problems. We prove that the quantum annealing with TIM Hamiltonians is equivalent modulo polynomial reductions to the quantum annealing with a certain subclass of k-local stoquastic Hamiltonians. This subclass includes all Hamiltonians representable as a sum of a k-local diagonal Hamiltonian and a 2-local stoquastic Hamiltonian.
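
    For reference, in its simplest common convention the Transverse field Ising Model Hamiltonians discussed take the form

        \[
        H = -\sum_{(u,v)\in E} J_{uv}\, Z_u Z_v \;-\; \sum_{u\in V} h_u\, X_u,
        \]

    with Pauli operators X_u, Z_u acting on the qubit at vertex u of an interaction graph (V, E), of degree at most 3 in the results quoted; a Hamiltonian is 'stoquastic' when all of its off-diagonal matrix elements in the computational basis are real and non-positive, which for the transverse-field term amounts to h_u ≥ 0.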

  16. Lateral organization of complex lipid mixtures from multiscale modeling

    NASA Astrophysics Data System (ADS)

    Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.

    2010-02-01

    The organizational properties of complex lipid mixtures can give rise to functionally important structures in cell membranes. In model membranes, ternary lipid-cholesterol (CHOL) mixtures are often used as representative systems to investigate the formation and stabilization of localized structural domains ("rafts"). In this work, we describe a self-consistent mean-field model that builds on molecular dynamics simulations to incorporate multiple lipid components and to investigate the lateral organization of such mixtures. The model predictions reveal regions of bimodal order on ternary plots that are in good agreement with experiment. Specifically, we have applied the model to ternary mixtures composed of dioleoylphosphatidylcholine:18:0 sphingomyelin:CHOL. This work provides insight into the specific intermolecular interactions that drive the formation of localized domains in these mixtures. The model makes use of molecular dynamics simulations to extract interaction parameters and to provide chain configuration order parameter libraries.

  17. Sandia National Laboratories ASCOT (atmospheric studies in complex terrain) field experiment, September 1980

    NASA Astrophysics Data System (ADS)

    Woods, R. O.

    1982-04-01

    During the period September 8 through September 25, 1980, Sandia National Laboratories, Division 4774, participated in a series of experiments held in the Geysers area of California. These experiments, aimed at providing data on nighttime drainage flow in complex terrain, were intended to provide a reliable basis for mathematical flow modeling. Tracers were released at several points on a valley rim and sampled by a large number of stations at ground level. Sandia's contribution was to make it possible to derive vertical tracer profiles. This was done by taking air samples from a captive balloon at chosen altitudes between the surface and 450 meters above ground.

  18. Numerical experiments in geomagnetic modeling

    NASA Technical Reports Server (NTRS)

    Cain, Joseph C.; Holter, Bill; Sandee, Daan

    1990-01-01

    Numerical tests were made, using least squares fitting of a spherical harmonic model, to a selection of Magsat data to determine the practical limits of this technique with modern computers. The resulting (M102189) model, whose coefficients were adjusted up to n = 50, was compared with M07AV6, a previous model which used least squares (on vector data) for coefficients up to n = 29, and Gauss-Legendre quadrature (on Z residuals) to adjust the coefficients up to n = 63. For the new least squares adjustment to n = 50 a condition number of 115 was obtained for the solution matrix, with a resulting precision of 11 significant figures. The M102189 model shows a lower and more Gaussian residual distribution than did M07AV6, though the Gaussian envelope fits to the residual distributions, even for the scalar field, give "standard deviations" never lower than 6 nT, a factor of three higher than the estimated Magsat observational errors. Ionospheric currents are noted to have a significant effect on the coefficients of the internal potential functions.
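
    The internal field models being compared are spherical harmonic expansions of the standard form

        \[
        V(r,\theta,\phi) = a \sum_{n=1}^{N} \left(\frac{a}{r}\right)^{n+1}
        \sum_{m=0}^{n} \left( g_n^m \cos m\phi + h_n^m \sin m\phi \right) P_n^m(\cos\theta),
        \qquad \mathbf{B} = -\nabla V,
        \]

    with a the reference Earth radius and P_n^m the Schmidt semi-normalized associated Legendre functions; the models cited differ in the truncation degree N (29, 50, or 63) and in how the Gauss coefficients g_n^m and h_n^m are estimated (least squares on vector data versus Gauss-Legendre quadrature on residuals).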

  19. Evaluation of a puff dispersion model in complex terrain

    SciTech Connect

    Thuillier, R.H.

    1992-03-01

    California's Pacific Gas and Electric Company has many power plant operations situated in complex terrain, prominent examples being the Geysers geothermal plant in Lake and Sonoma Counties, and the Diablo Canyon nuclear plant in San Luis Obispo County. Procedures ranging from plant licensing to emergency response require a dispersion modeling capability in a complex terrain environment. This paper describes the performance evaluation of such a capability, the Pacific Gas and Electric Company Modeling System (PGEMS), a fast response Gaussian puff model with a three-dimensional wind field generator. Performance of the model was evaluated for ground level and short stack elevated release on the basis of a special intensive tracer experiment in the complex coastal terrain surrounding the Diablo Canyon Nuclear Power Plant in San Luis Obispo County, California. The model performed well under a variety of meteorological and release conditions within the test region of 20-kilometer radius surrounding the nuclear plant, and turned in a superior performance in the wake of the nuclear plant, using a new wake correction algorithm for ground level and roof-vent releases at that location.

  20. Inexpensive Complex Hand Model Twenty Years Later.

    PubMed

    Frenger, Paul

    2015-01-01

    Twenty years ago the author unveiled his inexpensive complex hand model, which reproduced every motion of the human hand. A control system programmed in the Forth language operated its actuators and sensors. Follow-on papers for this popular project were next presented in Texas, Canada and Germany. From this hand grew the author’s meter-tall robot (nicknamed ANNIE: Android With Neural Networks, Intellect and Emotions). It received machine vision, facial expressiveness, speech synthesis and speech recognition; a simian version also received a dexterous ape foot. New artificial intelligence features included op-amp neurons for OCR and simulated emotions, hormone emulation, endocannabinoid receptors, fear-trust-love mechanisms, a Grandmother Cell recognizer and artificial consciousness. Simulated illnesses included narcotic addiction, autism, PTSD, fibromyalgia and Alzheimer’s disease. The author gave 13 robotics-AI presentations at NASA in Houston since 2006. A meter-tall simian robot was proposed with gripping hand-feet for use with space vehicles and to explore distant planets and moons. Also proposed were: intelligent motorized exoskeletons for astronaut force multiplication; a cognitive prosthesis to detect and alleviate decreased crew mental performance; and a gynoid robot medic to tend astronauts in deep space missions. What began as a complex hand model evolved into an innovative robot-AI within two decades.

  1. Employers' experience of employees with cancer: trajectories of complex communication.

    PubMed

    Tiedtke, C M; Dierckx de Casterlé, B; Frings-Dresen, M H W; De Boer, A G E M; Greidanus, M A; Tamminga, S J; De Rijk, A E

    2017-07-14

    Remaining in paid work is of great importance for cancer survivors, and employers play a crucial role in achieving this. Return to work (RTW) is best seen as a process. This study aims to provide insight into (1) Dutch employers' experiences with RTW of employees with cancer and (2) the employers' needs for support regarding this process. Thirty employer representatives of medium and large for-profit and non-profit organizations were interviewed to investigate their experiences and needs in relation to employees with cancer. A Grounded Theory approach was used. We revealed a trajectory of complex communication and decision-making during different stages, from the moment the employee disclosed that they had been diagnosed to the period after RTW, permanent disability, or the employee's passing away. Employers found this process demanding due to various dilemmas. Dealing with an unfavorable diagnosis and balancing both the employer's and the employee's interests were found to be challenging. Two types of approach to support RTW of employees with cancer were distinguished: (1) a business-oriented approach and (2) a care-oriented approach. Differences in approach were related to differences in organizational structure and employer and employee characteristics. Employers expressed a need for communication skills, information, and decision-making skills to support employees with cancer. The employers interviewed stated that dealing with an employee with cancer is demanding and that the extensive Dutch legislation on RTW did not offer all the support needed. We recommend providing them with easily accessible information on communication and leadership training to better support employees with cancer. • Supporting employers by training communication and decision-making skills and providing information on cancer will contribute to improving RTW support for employees with cancer. • Knowing that the employer will usually be empathic when an employee reveals that they have

  2. Complex Educational Design: A Course Design Model Based on Complexity

    ERIC Educational Resources Information Center

    Freire, Maximina Maria

    2013-01-01

    Purpose: This article aims at presenting a conceptual framework which, theoretically grounded on complexity, provides the basis to conceive of online language courses that intend to respond to the needs of students and society. Design/methodology/approach: This paper is introduced by reflections on distance education and on the paradigmatic view…

  3. An experiment with interactive planning models

    NASA Technical Reports Server (NTRS)

    Beville, J.; Wagner, J. H.; Zannetos, Z. S.

    1970-01-01

    Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investments and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management, and predicted uses of the models are as a central tool, as an educational tool, to improve consistency in decision making, to improve communications, and as a tool for consensus decision making.

  4. Lattice Boltzmann model for the complex Ginzburg-Landau equation.

    PubMed

    Zhang, Jianying; Yan, Guangwu

    2010-06-01

    A lattice Boltzmann model with complex distribution function for the complex Ginzburg-Landau equation (CGLE) is proposed. By using multiscale technique and the Chapman-Enskog expansion on complex variables, we obtain a series of complex partial differential equations. Then, complex equilibrium distribution function and its complex moments are obtained. Based on this model, the rotation and oscillation properties of stable spiral waves and the breaking-up behavior of unstable spiral waves in CGLE are investigated in detail.
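
    For reference, the complex Ginzburg-Landau equation treated by this lattice Boltzmann model is usually written as

        \[
        \partial_t A = A + (1 + i\alpha)\,\nabla^2 A - (1 + i\beta)\,|A|^2 A,
        \]

    where A(x, t) is a complex field and the real parameters \alpha and \beta control the rotation and oscillation of stable spiral waves and the break-up of unstable ones, the regimes examined in the paper.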

  5. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1995-01-01

    This program started in February 1991, and is designed to improve our understanding of basic combustion phenomena by the modeling of various configurations undergoing experimental study by others. Results through 1992 were reported in the second workshop. Work since that time has examined the following topics: Flame-balls; Intrinsic and acoustic instabilities in multiphase mixtures; Radiation effects in premixed combustion; Smouldering, both forward and reverse, as well as two dimensional smoulder.

  6. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of necrosis factor (NF)-κ B , uncovering unidentifiabilities.
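
    The local analysis that the paper argues is insufficient is, concretely, built on the Fisher information matrix; for model outputs y(\theta) observed with additive Gaussian noise of covariance \Sigma it takes the familiar form

        \[
        F(\hat{\theta}) = J^{\top}\,\Sigma^{-1} J,
        \qquad
        J_{ij} = \left.\frac{\partial y_i(\theta)}{\partial \theta_j}\right|_{\theta = \hat{\theta}},
        \]

    with near-zero eigenvalues of F flagging locally sloppy or unidentifiable parameter combinations; the 'multiscale sloppiness' introduced in the paper is the non-local counterpart, tied to the geometry of finite confidence regions when measurement uncertainty is not negligible.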

  7. In Vivo Experiments with Dental Pulp Stem Cells for Pulp-Dentin Complex Regeneration.

    PubMed

    Kim, Sunil; Shin, Su-Jung; Song, Yunjung; Kim, Euiseong

    2015-01-01

    In recent years, many studies have examined pulp-dentin complex regeneration with dental pulp stem cells (DPSCs). While it is important to perform research on cells, scaffolds, and growth factors, it is also critical to develop animal models for preclinical trials. The development of a reproducible animal model of transplantation is essential for obtaining precise and accurate data in vivo. The efficacy of pulp regeneration should be assessed qualitatively and quantitatively using animal models. This review article sought to introduce in vivo experiments that have evaluated the potential of dental pulp stem cells for pulp-dentin complex regeneration. Across the studies reviewed, the majority used subcutaneous transplantation in mice or dog tooth models as the animal model. There is no way to know which animal model will reproduce the clinical environment. If an animal model is developed which is easier to use and is useful in more situations than the currently popular models, it will be a substantial aid to studies examining pulp-dentin complex regeneration.

  9. The database for reaching experiments and models.

    PubMed

    Walker, Ben; Kording, Konrad

    2013-01-01

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis.

  10. Interactive Visualizations of Complex Seismic Data and Models

    NASA Astrophysics Data System (ADS)

    Chai, C.; Ammon, C. J.; Maceira, M.; Herrmann, R. B.

    2016-12-01

    Such easy-to-use interactive displays are essential in teaching environments - user-friendly interactivity allows students to explore large, complex data sets and models at their own pace, enabling a more accessible learning experience.

  11. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Technical Reports Server (NTRS)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  12. Using Perspective to Model Complex Processes

    SciTech Connect

    Kelsey, R.L.; Bisset, K.R.

    1999-04-04

    The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.

  13. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    NASA Technical Reports Server (NTRS)

    Zerkle, Ronald D.; Prakash, Chander

    1995-01-01

    This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its computational economy. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  14. Mathematical modelling of complex contagion on clustered networks

    NASA Astrophysics Data System (ADS)

    O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
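
    To make the threshold mechanism concrete, the following minimal Python sketch (not the analytical approximation developed in the paper) simulates a threshold-2 contagion on a clustered ring lattice and on a random network of the same mean degree, using networkx; the network sizes, seed set and threshold are illustrative choices only.

      # Threshold-2 ("complex") contagion on clustered vs. random networks -- illustrative sketch.
      import networkx as nx

      def spread_fraction(graph, threshold=2, steps=500):
          """Seed node 0 plus its neighbourhood, then let any node with >= threshold adopting neighbours adopt."""
          adopted = {0} | set(graph[0])
          for _ in range(steps):
              new = {node for node in graph if node not in adopted
                     and sum(nb in adopted for nb in graph[node]) >= threshold}
              if not new:
                  break
              adopted |= new
          return len(adopted) / graph.number_of_nodes()

      n, k = 1000, 6
      clustered = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)    # ring lattice: many triangles
      random_net = nx.gnp_random_graph(n, k / (n - 1), seed=1)    # same mean degree, few triangles
      print("clustered lattice:", spread_fraction(clustered))     # spreads around the ring
      print("random network:  ", spread_fraction(random_net))     # typically stalls near the seed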

  15. Multicomponent reactive transport modeling of uranium bioremediation field experiments

    NASA Astrophysics Data System (ADS)

    Fang, Yilin; Yabusaki, Steven B.; Morrison, Stan J.; Amonette, James P.; Long, Philip E.

    2009-10-01

    A reaction network integrating abiotic and microbially mediated reactions has been developed to simulate biostimulation field experiments at a former Uranium Mill Tailings Remedial Action (UMTRA) site in Rifle, Colorado. The reaction network was calibrated using data from the 2002 field experiment, after which it was applied without additional calibration to field experiments performed in 2003 and 2007. The robustness of the model specification is significant in that (1) the 2003 biostimulation field experiment was performed with 3 times higher acetate concentrations than the previous biostimulation in the same field plot (i.e., the 2002 experiment), and (2) the 2007 field experiment was performed in a new unperturbed plot on the same site. The biogeochemical reactive transport simulations accounted for four terminal electron-accepting processes (TEAPs), two distinct functional microbial populations, two pools of bioavailable Fe(III) minerals (iron oxides and phyllosilicate iron), uranium aqueous and surface complexation, mineral precipitation and dissolution. The conceptual model for bioavailable iron reflects recent laboratory studies with sediments from the UMTRA site that demonstrated that the bulk (˜90%) of initial Fe(III) bioreduction is associated with phyllosilicate rather than oxide forms of iron. The uranium reaction network includes a U(VI) surface complexation model based on laboratory studies with Rifle site sediments and aqueous complexation reactions that include ternary complexes (e.g., calcium-uranyl-carbonate). The bioreduced U(IV), Fe(II), and sulfide components produced during the experiments are strongly associated with the solid phases and may play an important role in long-term uranium immobilization.

  16. Surface Complexation Modelling in Metal-Mineral-Bacteria Systems

    NASA Astrophysics Data System (ADS)

    Johnson, K. J.; Fein, J. B.

    2002-12-01

    The reactive surfaces of bacteria and minerals can determine the fate, transport, and bioavailability of aqueous heavy metal cations. Geochemical models are instrumental in accurately accounting for the partitioning of the metals between mineral surfaces and bacteria cell walls. Previous research has shown that surface complexation modelling (SCM) is accurate in two-component systems (metal:mineral and metal:bacteria); however, the ability of SCMs to account for metal distribution in mixed metal-mineral-bacteria systems has not been tested. In this study, we measure aqueous Cd distributions in water-bacteria-mineral systems, and compare these observations with predicted distributions based on a surface complexation modelling approach. We measured Cd adsorption in 2- and 3-component batch adsorption experiments. In the 2-component experiments, we measured the extent of adsorption of 10 ppm aqueous Cd onto either a bacterial or hydrous ferric oxide sorbent. The metal:bacteria experiments contained 1 g/L (wet wt.) of B. subtilis, and were conducted as a function of pH; the metal:mineral experiments were conducted as a function of both pH and HFO content. Two types of 3-component Cd adsorption experiments were also conducted in which both mineral powder and bacteria were present as sorbents: 1) one in which the HFO was physically but not chemically isolated from the system using sealed dialysis tubing, and 2) others where the HFO, Cd and B. subtilis were all in physical contact. The dialysis tubing approach enabled the direct determination of the concentration of Cd on each sorbing surface, after separation and acidification of each sorbent. The experiments indicate that both bacteria and mineral surfaces can dominate adsorption in the system, depending on pH and bacteria:mineral ratio. The stability constants, determined using the data from the 2-component systems, along with those for other surface and aqueous species in the systems, were used with FITEQL to

  17. Clinical Complexity in Medicine: A Measurement Model of Task and Patient Complexity.

    PubMed

    Islam, R; Weir, C; Del Fiol, G

    2016-01-01

    Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Three clinical infectious disease teams were observed, audio-recorded and transcribed. Each team included an infectious diseases expert, one infectious diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare.
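
    For reference, the inter-rater agreement statistic cited above has the standard definition

      \kappa = \frac{p_o - p_e}{1 - p_e},

    where p_o is the observed proportion of agreement between the coders and p_e is the agreement expected by chance; this definition is general and not specific to the study.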

  18. Ants (Formicidae): models for social complexity.

    PubMed

    Smith, Chris R; Dolezal, Adam; Eliyahu, Dorit; Holbrook, C Tate; Gadau, Jürgen

    2009-07-01

    The family Formicidae (ants) is composed of more than 12,000 described species that vary greatly in size, morphology, behavior, life history, ecology, and social organization. Ants occur in most terrestrial habitats and are the dominant animals in many of them. They have been used as models to address fundamental questions in ecology, evolution, behavior, and development. The literature on ants is extensive, and the natural history of many species is known in detail. Phylogenetic relationships for the family, as well as within many subfamilies, are known, enabling comparative studies. Their ease of sampling and ecological variation makes them attractive for studying populations and questions relating to communities. Their sociality and variation in social organization have contributed greatly to an understanding of complex systems, division of labor, and chemical communication. Ants occur in colonies composed of tens to millions of individuals that vary greatly in morphology, physiology, and behavior; this variation has been used to address proximate and ultimate mechanisms generating phenotypic plasticity. Relatedness asymmetries within colonies have been fundamental to the formulation and empirical testing of kin and group selection theories. Genomic resources have been developed for some species, and a whole-genome sequence for several species is likely to follow in the near future; comparative genomics in ants should provide new insights into the evolution of complexity and sociogenomics. Future studies using ants should help establish a more comprehensive understanding of social life, from molecules to colonies.

  19. CALIBRATION OF SUBSURFACE BATCH AND REACTIVE-TRANSPORT MODELS INVOLVING COMPLEX BIOGEOCHEMICAL PROCESSES

    EPA Science Inventory

    In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...

  1. Capturing the experiences of patients across multiple complex interventions: a meta-qualitative approach.

    PubMed

    Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn

    2015-09-08

    The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. We included 62 interviews from 44 patients and 18 non-clinical caregivers. Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. We identified 5 broad themes that capture the patients' experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients' experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach, thus offers an opportunity for cumulative learning at a system level in addition to informing intervention design and

  2. Parallel Scene Generation/Electromagnetic Modeling of Complex Targets in Complex Clutter and Propagation Environments

    DTIC Science & Technology

    2005-10-01

    Report documentation fragment (Black River Systems Company; approved for public release). Authors: Milissa Benincasa, Tapan Sarkar, Christopher Card, Carl Thomas, Eric Mokole, Douglas Taylor, Richard Schneible, Ravi... The recoverable portion of the abstract describes extending today's capability for modeling complex targets to accurate modeling of complex targets in complex clutter and propagation environments with all their associated scattering and propagation...

  3. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is little guidance on how much data is required to justify a certain level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) Effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amount of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
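
    As a minimal illustration of the model-selection machinery (not the authors' implementation, and with placeholder log-evidence values rather than results from the sandbox experiment), posterior model weights and Bayes factors follow directly from the models' log-evidences:

      # Posterior model weights from log-evidences -- sketch with placeholder numbers.
      import numpy as np

      log_evidence = {                      # ln p(data | model); hypothetical values
          "homogeneous": -250.0,
          "zonation": -231.0,
          "pilot points": -228.0,
          "geostatistical": -229.5,
      }
      names = list(log_evidence)
      lnZ = np.array([log_evidence[m] for m in names])
      weights = np.exp(lnZ - lnZ.max())     # subtract the max for numerical stability
      weights /= weights.sum()              # assumes equal prior model probabilities
      for name, w in zip(names, weights):
          print(f"{name:15s} posterior probability = {w:.3f}")
      # Bayes factor of the best model against the homogeneous model:
      print("Bayes factor:", float(np.exp(lnZ.max() - log_evidence["homogeneous"])))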

  4. Microwave scattering models and basic experiments

    NASA Technical Reports Server (NTRS)

    Fung, Adrian K.

    1989-01-01

    Progress is summarized which has been made in four areas of study: (1) scattering model development for sparsely populated media, such as a forested area; (2) scattering model development for dense media, such as a sea ice medium or a snow covered terrain; (3) model development for randomly rough surfaces; and (4) design and conduct of basic scattering and attenuation experiments suitable for the verification of theoretical models.

  5. Troposphere-lower-stratosphere connection in an intermediate complexity model.

    NASA Astrophysics Data System (ADS)

    Ruggieri, Paolo; King, Martin; Kucharski, Fred; Buizza, Roberto; Visconti, Guido

    2016-04-01

    The dynamical coupling between the troposphere and the lower stratosphere has been investigated using a low-top, intermediate complexity model provided by the Abdus Salam International Centre for Theoretical Physics (SPEEDY). The key question that we wanted to address is whether a simple model like SPEEDY can be used to understand troposphere-stratosphere interactions, e.g. forced by changes of sea-ice concentration in polar Arctic regions. Three sets of experiments have been performed. Firstly, a potential vorticity perspective has been applied to understand the wave-like forcing of the troposphere on the stratosphere and to provide quantitative information on the sub-seasonal variability of the coupling. Then, the zonally asymmetric, near-surface response to a lower-stratospheric forcing has been analysed in a set of forced experiments with an artificial heating imposed in the extra-tropical lower stratosphere. Finally, the lower-stratosphere response sensitivity to tropospheric initial conditions has been examined. Results indicate that SPEEDY captures the physics of the troposphere-stratosphere connection but also reveal its lack of stratospheric variability. Results also suggest that intermediate-complexity models such as SPEEDY could be used to investigate the effects that surface forcing (e.g. due to sea-ice concentration changes) have on the troposphere and the lower stratosphere.

  6. An Experiment on Isomerism in Metal-Amino Acid Complexes.

    ERIC Educational Resources Information Center

    Harrison, R. Graeme; Nolan, Kevin B.

    1982-01-01

    Background information, laboratory procedures, and discussion of results are provided for syntheses of cobalt (III) complexes, I-III, illustrating three possible bonding modes of glycine to a metal ion (the complex cations II and III being linkage/geometric isomers). Includes spectrophotometric and potentiometric methods to distinguish among the…

  8. Advanced Combustion Modeling for Complex Turbulent Flows

    NASA Technical Reports Server (NTRS)

    Ham, Frank Stanford

    2005-01-01

    The next generation of aircraft engines will need to pass stricter efficiency and emission tests. NASA's Ultra-Efficient Engine Technology (UEET) program has set an ambitious goal of 70% reduction of NO(x) emissions and a 15% increase in fuel efficiency of aircraft engines. We will demonstrate the state-of-the-art combustion tools developed at Stanford's Center for Turbulence Research (CTR) as part of this program. In the last decade, CTR has spearheaded a multi-physics-based combustion modeling program. Key technologies have been transferred to the aerospace industry and are currently being used for engine simulations. In this demo, we will showcase the next-generation combustion modeling tools that integrate a very high level of detailed physics into advanced flow simulation codes. Combustor flows involve multi-phase physics with liquid fuel jet breakup, evaporation, and eventual combustion. Individual components of the simulation are verified against complex test cases and show excellent agreement with experimental data.

  10. Modeling choice and valuation in decision experiments.

    PubMed

    Loomes, Graham

    2010-07-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for participants to perceive probabilities and payoffs in a particular way. This model organizes more of the data than any other extant model and generates a number of novel testable implications which are examined with new data.

  11. Argonne Bubble Experiment Thermal Model Development

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  12. STELLA experiment: Design and model predictions

    NASA Astrophysics Data System (ADS)

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1999-07-01

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (IFEL) will serve as a prebuncher to generate ˜1-μm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the IFEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented, including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  13. Using Ecosystem Experiments to Improve Vegetation Models

    SciTech Connect

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; Prentice, I. Collin; Thornton, Peter E.; Wang, Shusen; Wang, Yingping; Weng, Ensheng; Iversen, Colleen M.; McCarthy, Heather R.; Warren, Jeffrey; Oren, Ram; Norby, Richard J

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. This approach identified and evaluated the main assumptions that caused differences among models, and produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  14. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGES

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; ...

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. This approach identified and evaluated the main assumptions that caused differences among models, and produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  15. Reduced Complexity Modeling (RCM): toward more use of less

    NASA Astrophysics Data System (ADS)

    Paola, Chris; Voller, Vaughan

    2014-05-01

    Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem like a way to test systems for this property but to our knowledge have not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion - independence of the process in question from details of the underlying physics - underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its

  16. Scale-model rocket experiments (SRE)

    NASA Astrophysics Data System (ADS)

    Wynne, Douglas G.; Barnell, Mark D.

    1998-07-01

    The Scale model Rocket Experiments (SRE) were conducted in August and September 1997 as a part of the Ballistic Missile Defense Organization (BMDO) Advanced Sensor Technology Program (ASTP) and Discriminating Interceptor Technology Program (DITP). Rome Laboratory (RL) work under this effort for ASTP involves the following technology areas: sensor fusion algorithms, high performance processors, and sensor modeling and simulation. In support of the development, test and integration of these areas, Rome Laboratory performed the scale model rocket experiments. This paper details the experiments and results of the scaled rocket experiment as a cost-effective, risk-reduction experiment to test fusion processor algorithms in a real-time environment. The goals of the experiment were to launch, track, fuse, and collect multispectral data from Visible, IR, RADAR and LADAR sensors. The data was collected in real time and was interfaced to the RL-HPC (PARAGON) for real-time processing. In June 1997 RL performed the first tests of the series on static targets. The static firings tested data transfers and safety protocols. The RL (Hanscom) IR cameras were calibrated and the proper gain settings were acquired. The next phase of the SRE testing, August 12/13 1997, involved launching, tracking, and acquiring digital IR data into the HPC. In September, RL implemented the next phase of the experiments by incorporating a LADAR and an additional IR sensor from Phillips Laboratory into the system. This paper discusses the success and future work of the SRE.

  17. Modelling of Rare Earth Elements Complexation With Humic Acid

    NASA Astrophysics Data System (ADS)

    Pourret, O.; Davranche, M.; Gruau, G.; Dia, A.

    2006-12-01

    The binding of rare earth elements (REE) to humic acid (HA) was studied by combining Ultrafiltration and ICP-MS techniques. REE-HA complexation experiments were performed at various pH conditions (ranging from 2 to 10.5) using a standard batch equilibration method. Results show that the amount of REE bound to HA increases strongly with increasing pH. Moreover, a Middle REE (MREE) downward concavity is evidenced by REE distribution patterns at acidic pH. Modelling of the experimental data using Humic Ion Binding Model VI provided a set of log KMA values (i.e. the REE-HA complexation constants specific to Model VI) for the entire REE series. The log KMA pattern obtained displays a MREE downward concavity. Log KMA values range from 2.42 to 2.79. These binding constants are in good agreement with the few existing datasets quantifying the binding of REE with humic substances except for a recently published study which evidences a lanthanide contraction effect (i.e. continuous increase of the constant from La to Lu). The MREE downward concavity displayed by the REE-HA complexation pattern determined in this study compares well with results from REE-fulvic acid (FA) and REE-acetic acid complexation studies. This similarity in the REE complexation pattern shapes suggests that carboxylic groups are the main binding sites of REE in HA. This conclusion is further supported by a detailed review of published studies for natural, organic-rich, river- and ground-waters which show no evidence of a lanthanide contraction effect in REE pattern shape. Finally, application of Model VI using the new, experimentally determined log KMA values to World Average River Water confirms earlier suggestions that REE occur predominantly as organic complexes (> 60 %) in the pH range between 5-5.5 and 7-8.5 (i.e. in circumneutral pH waters). The only significant difference as compared to earlier model predictions made using estimated log KMA values is that the experimentally determined log KMA values
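
    For orientation, a conditional binding constant of the kind quoted above is defined through a mass-action expression of the schematic form (charges on the humic sites, site heterogeneity and the electrostatic terms of Model VI omitted)

      \mathrm{REE}^{3+} + \mathrm{HA} \rightleftharpoons \mathrm{REE{-}HA}, \qquad K_c = \frac{[\mathrm{REE{-}HA}]}{[\mathrm{REE}^{3+}]\,[\mathrm{HA}]}

    whereas, within Humic Ion Binding Model VI, the reported log KMA values are intrinsic constants for binding to the predominantly carboxylic type-A sites of a discrete-site, electrostatically corrected description rather than a single overall constant of this simple form.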

  18. Surface complexation model of uranyl sorption on Georgia kaolinite

    USGS Publications Warehouse

    Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.

    2004-01-01

    The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 μmol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite. © 2004 Elsevier B.V. All rights reserved.
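
    A non-electrostatic surface complexation reaction of the type invoked here can be written schematically as (illustrative form only; not the exact reaction set and constants fitted in the study)

      {>}\mathrm{TiOH} + \mathrm{UO_2^{2+}} \rightleftharpoons {>}\mathrm{TiO{-}UO_2^{+}} + \mathrm{H^{+}}, \qquad K = \frac{[{>}\mathrm{TiO{-}UO_2^{+}}]\,[\mathrm{H^{+}}]}{[{>}\mathrm{TiOH}]\,[\mathrm{UO_2^{2+}}]}

    with an analogous reaction on the >AlOH sites and, at higher pH, a ternary carbonate surface species of the kind described in the abstract.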

  19. Pyroelectric Energy Harvesting: Model and Experiments

    DTIC Science & Technology

    2016-05-01

    ARL-TR-7663, May 2016, US Army Research Laboratory. Only the report cover and documentation-page fields are recoverable from this record: "Pyroelectric Energy Harvesting: Model and Experiments", by Felisa Sze and..., final report covering 07/2015–02/2016.

  20. Trace Metal-Humic Complexes in Natural Waters: Insights From Speciation Experiments

    NASA Astrophysics Data System (ADS)

    Stern, J. C.; Salters, V.; Sonke, J.

    2006-12-01

    The DOM cycle is intimately linked to the cycling and bioavailability of trace metals in aqueous environments. The presence or absence of DOM in the water column can determine whether trace elements will be present in limited quantities as a nutrient, or in surplus quantities as a toxicant. Humic substances (HS), which represent the refractory products of DOM degradation, strongly affect the speciation of trace metals in natural waters. To simulate metal-HS interactions in nature, experiments must be carried out at trace metal concentrations. Sensitive detection systems such as ICP-MS make working with small (nanomolar) concentrations possible. Capillary electrophoresis coupled with ICP-MS (CE-ICP-MS) has recently been identified as a rapid and accurate method to separate metal species and calculate conditional binding constants (log K_c) of metal-humic complexes. CE-ICP-MS was used to measure partitioning of metals between humic substances and a competing ligand (EDTA) and calculate binding constants of rare earth element (REE) and Th, Hf, and Zr-humic complexes at pH 3.5-8 and ionic strength of 0.1. Equilibrium dialysis ligand exchange (EDLE) experiments to validate the CE-ICP-MS method were performed to separate the metal-HS and metal-EDTA species by partitioning due to size exclusion via diffusion through a 1000 Da membrane. CE-ICP-MS experiments were also conducted to compare binding constants of REE with humic substances of various origin, including soil, peat, and aquatic DOM. Results of our experiments show an increase in log K_c with decrease in ionic radius for REE-humic complexes (the lanthanide contraction effect). Conditional binding constants of tetravalent metal-humic complexes were found to be several orders of magnitude higher than REE-humic complexes, indicating that tetravalent metals have a very strong affinity for humic substances. Because thorium is often used as a proxy for the tetravalent actinides, Th-HS binding constants can allow us

  1. Modeling competitive substitution in a polyelectrolyte complex

    SciTech Connect

    Peng, B.; Muthukumar, M.

    2015-12-28

    We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain, by another longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain is required to be sufficiently longer than the chain being displaced for effecting the substitution. Yet, having the invading chain to be longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
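
    The Langevin (here, overdamped/Brownian) dynamics methodology itself can be sketched in a few lines of Python; the bead-spring chain below is a generic stand-in and omits the electrostatics, excluded volume, counterions and united-atom parameters of the actual simulations.

      # Overdamped Langevin (Brownian dynamics) step for a generic bead-spring chain -- illustrative only.
      import numpy as np

      def bond_forces(x, k_bond=100.0, r0=1.0):
          """Harmonic bonds between successive beads: U = 0.5 * k_bond * (|r| - r0)**2."""
          f = np.zeros_like(x)
          d = x[1:] - x[:-1]                                  # bond vectors
          r = np.linalg.norm(d, axis=1, keepdims=True)
          fb = -k_bond * (r - r0) * d / r                     # force on bead i+1 from bond i
          f[1:] += fb
          f[:-1] -= fb
          return f

      def brownian_step(x, rng, dt=1e-4, zeta=1.0, kT=1.0):
          """x(t+dt) = x(t) + F dt / zeta + sqrt(2 kT dt / zeta) * N(0, 1)."""
          noise = rng.normal(size=x.shape) * np.sqrt(2.0 * kT * dt / zeta)
          return x + bond_forces(x) * dt / zeta + noise

      rng = np.random.default_rng(0)
      x = np.outer(np.arange(20), np.ones(3)) * 0.6           # 20 beads along a diagonal, ~1.04 apart
      for _ in range(2000):
          x = brownian_step(x, rng)
      print("end-to-end distance:", float(np.linalg.norm(x[-1] - x[0])))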

  2. Ultrasonic ray models for complex geometries

    NASA Astrophysics Data System (ADS)

    Schumm, A.

    2000-05-01

    Computer Aided Design techniques have become an inherent part of many industrial applications and are also gaining popularity in Nondestructive Testing. In sound field calculations, CAD representations can contribute to one of the generic problems in ultrasonic modeling, the wave propagation in complex geometries. Ray tracing codes were the first to take account of the geometry, providing qualitative information on beam propagation, such as geometrical echoes, multiple sound paths and possible conversions between wave modes. The forward ray tracing approach is intuitive and straightforward and can evolve towards a more quantitative code if transmission, divergence and polarization information is added. If used to evaluate the impulse response of a given geometry, an approximated time-dependent received signal can be obtained after convolution with the excitation signal. The more accurate reconstruction of a sound field after interaction with a geometrical interface according to ray theory requires inverse (or Fermat) ray-tracing to obtain the contribution of each elementary point source to the field at a given observation point. The resulting field of a finite transducer can then be obtained after integration over all point sources. While conceptually close to classical ray tracing, this approach puts more stringent requirements on the CAD representation employed and is more difficult to extend towards multiple interfaces. In this communication we present examples for both approaches. In a prospective step, the link between both ray techniques is shown, and we illustrate how a combination of both approaches contributes to the solution of an industrial problem.

  3. Complex networks repair strategies: Dynamic models

    NASA Astrophysics Data System (ADS)

    Fu, Chaoqi; Wang, Ying; Gao, Yangjun; Wang, Xiaoyang

    2017-09-01

    Network repair strategies are tactical methods that restore the efficiency of damaged networks; however, unreasonable repair strategies not only waste resources, they are also ineffective for network recovery. Most extant research on network repair focuses on static networks, but results and findings on static networks cannot be applied to evolutionary dynamic networks because, in dynamic models, complex network repair has completely different characteristics. For instance, repaired nodes face more severe challenges, and require strategic repair methods in order to have a significant effect. In this study, we propose the Shell Repair Strategy (SRS) to minimize the risk of secondary node failures due to the cascading effect. Our proposed method includes the identification of a set of vital nodes that have a significant impact on network repair and defense. Our identification of these vital nodes reduces the number of switching nodes that face the risk of secondary failures during the dynamic repair process. This is positively correlated with the size of the average degree < k > and enhances network invulnerability.

  4. Modeling competitive substitution in a polyelectrolyte complex

    NASA Astrophysics Data System (ADS)

    Peng, B.; Muthukumar, M.

    2015-12-01

    We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain, by another longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain is required to be sufficiently longer than the chain being displaced for effecting the substitution. Yet, having the invading chain to be longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.

  5. Modeling and minimizing CAPRI round 30 symmetrical protein complexes from CASP-11 structural models.

    PubMed

    El Houasli, Marwa; Maigret, Bernard; Devignes, Marie-Dominique; Ghoorah, Anisah W; Grudinin, Sergei; Ritchie, David W

    2017-03-01

    Many of the modeling targets in the blind CASP-11/CAPRI-30 experiment were protein homo-dimers and homo-tetramers. Here, we perform a retrospective docking-based analysis of the perfectly symmetrical CAPRI Round 30 targets whose crystal structures have been published. Starting from the CASP "stage-2" fold prediction models, we show that using our recently developed "SAM" polar Fourier symmetry docking algorithm combined with NAMD energy minimization often gives acceptable or better 3D models of the target complexes. We also use SAM to analyze the overall quality of all CASP structural models for the selected targets from a docking-based perspective. We demonstrate that docking only CASP "center" structures for the selected targets provides a fruitful and economical docking strategy. Furthermore, our results show that many of the CASP models are dockable in the sense that they can lead to acceptable or better models of symmetrical complexes. Even though SAM is very fast, using docking and NAMD energy minimization to pull out acceptable docking models from a large ensemble of docked CASP models is computationally expensive. Nonetheless, thanks to our SAM docking algorithm, we expect that applying our docking protocol on a modern computer cluster will give us the ability to routinely model 3D structures of symmetrical protein complexes from CASP-quality models. Proteins 2017; 85:463-469. © 2016 Wiley Periodicals, Inc.

  6. Evidence of complex contagion of information in social media: An experiment using Twitter bots

    PubMed Central

    2017-01-01

    It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using ‘social bots’ deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques. PMID:28937984
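
    In schematic form (commonly used in this literature, and not necessarily the exact parameterization of the two Bayesian models in the paper), the competing hypotheses differ in how the probability of adoption depends on the number k of distinct exposure sources:

      P_{\text{simple}}(\text{adopt}\mid k) = 1 - (1 - q)^{k}, \qquad
      P_{\text{complex}}(\text{adopt}\mid k) = \begin{cases} 0, & k < \theta \\ 1 - (1 - q)^{k}, & k \ge \theta \end{cases}

    so that under simple contagion each exposure acts independently with per-exposure probability q, while under complex contagion adoption requires the number of sources to reach a threshold \theta (or, more generally, the per-exposure effect grows with k).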

  7. Recent "Ground Testing" Experiences in the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Zell, Peter; Stich, Phil; Sverdrup, Jacobs; George, M. W. (Technical Monitor)

    2002-01-01

    The large test sections of the National Full-scale Aerodynamics Complex (NFAC) wind tunnels provide ideal controlled wind environments to test ground-based objects and vehicles. Though this facility was designed and provisioned primarily for aeronautical testing requirements, several experiments have been designed to utilize existing model mount structures to support "non-flying" systems. This presentation will discuss some of the ground-based testing capabilities of the facility and provide examples of ground-based tests conducted in the facility to date. It will also address some future work envisioned and solicit input from the SATA membership on ways to improve the service that NASA makes available to customers.

  8. Recent "Ground Testing" Experiences in the National Full-Scale Aerodynamics Complex

    NASA Technical Reports Server (NTRS)

    Zell, Peter; Stich, Phil; Sverdrup, Jacobs; George, M. W. (Technical Monitor)

    2002-01-01

    The large test sections of the National Full-scale Aerodynamics Complex (NFAC) wind tunnels provide ideal controlled wind environments to test ground-based objects and vehicles. Though this facility was designed and provisioned primarily for aeronautical testing requirements, several experiments have been designed to utilize existing model mount structures to support "non-flying" systems. This presentation will discuss some of the ground-based testing capabilities of the facility and provide examples of groundbased tests conducted in the facility to date. It will also address some future work envisioned and solicit input from the SATA membership on ways to improve the service that NASA makes available to customers.

  9. Investigations and experiments of a new multi-layer complex liquid-cooled mirror

    NASA Astrophysics Data System (ADS)

    Lu, Yuling; Cheng, Zuhai; Zhang, Yaoning; Sun, Feng; Yu, Wenfeng

    2004-07-01

    This paper describes a new multi-layer complex liquid-cooled Si mirror with 3 cooling ducts in Archimedes spirals. Utilizing the ANSYS program, the structure of the mirror is optimized and the thermal deformation model of the mirror is simulated. The simulation results show that the mirror has the following advantages: very small amount of surface deformation, uniform distribution of temperature and surface deformation, and fast surface shape restoration. The experimental results for thermal deformation and surface-shape restoration correspond closely to the simulation results.

  10. Industrial processing of complex fluids: Formulation and modeling

    SciTech Connect

    Scovel, J.C.; Bleasdale, S.; Forest, G.M.; Bechtel, S.

    1997-08-01

    The production of many important commercial materials involves the evolution of a complex fluid through a cooling phase into a hardened product. Textile fibers, high-strength fibers (KEVLAR, VECTRAN), plastics, chopped-fiber compounds, and fiber optical cable are such materials. Industry desires to replace experiments with on-line, real-time models of these processes. Solutions to the problems are not just a matter of technology transfer, but require a fundamental description and simulation of the processes. Goals of the project are to develop models that can be used to optimize macroscopic properties of the solid product, to identify sources of undesirable defects, and to seek boundary-temperature and flow-and-material controls to optimize desired properties.

  11. The Meduza experiment: An orbital complex ten weeks in flight

    NASA Technical Reports Server (NTRS)

    Ovcharov, V.

    1979-01-01

    The newspaper article discusses the contribution of space research to understanding the origin of life on Earth. Part of this basic research involves studying amino acids, ribonucleic acid and DNA molecules subjected to cosmic radiation. Not all of the results from the Meduza experiment have been analyzed yet. The article also discusses the psychological changes in cosmonauts as evidenced by their attitude towards biology experiments in space.

  12. Solar models, neutrino experiments, and helioseismology

    NASA Technical Reports Server (NTRS)

    Bahcall, John N.; Ulrich, Roger K.

    1988-01-01

    The event rates and their recognized uncertainties are calculated for 11 solar neutrino experiments using accurate solar models. These models are also used to evaluate the frequency spectrum of the p and g oscillation modes of the Sun. It is shown that the discrepancy between the predicted and observed event rates in the Cl-37 and Kamiokande II experiments cannot be explained by a 'likely' fluctuation in input parameters with the best estimates and uncertainties given in the present study. It is suggested that, whatever the correct solution to the solar neutrino problem, it is unlikely to be a 'trivial' error.

  13. Modeling Complex Chemical Systems: Problems and Solutions

    NASA Astrophysics Data System (ADS)

    van Dijk, Jan

    2016-09-01

    Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRT's). The idea of such CRT's is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT that has been widely used in plasma physics was developed in the 1960s and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques that have previously been developed in other fields of science are adapted so as to be able to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.
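
    The earliest of these reductions can be stated in a single balance equation: once the excited states are assumed to be in quasi-steady state with the ground state and the free electrons, the ion density evolves with only two effective coefficients (the standard collisional-radiative result, written here schematically):

      \frac{\partial n_{+}}{\partial t} + \nabla \cdot \Gamma_{+} = n_{e}\, n_{1}\, S_{\mathrm{CR}} - n_{e}\, n_{+}\, \alpha_{\mathrm{CR}}

    with n_1 the ground-state density, n_e and n_+ the electron and ion densities, \Gamma_+ the ion flux, and S_CR and \alpha_CR the effective ionization and recombination rate coefficients that absorb the detailed excited-state kinetics.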

  14. Multicomponent reactive transport modeling of uranium bioremediation field experiments

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Morrison, Stan J.; Amonette, James E.; Long, Philip E.

    2009-10-15

    Biostimulation field experiments with acetate amendment are being performed at a former uranium mill tailings site in Rifle, Colorado, to investigate subsurface processes controlling in situ bioremediation of uranium-contaminated groundwater. An important part of the research is identifying and quantifying field-scale models of the principal terminal electron-accepting processes (TEAPs) during biostimulation and the consequent biogeochemical impacts to the subsurface receiving environment. Integrating abiotic chemistry with the microbially mediated TEAPs in the reaction network brings into play geochemical observations (e.g., pH, alkalinity, redox potential, major ions, and secondary minerals) that the reactive transport model must recognize. These additional constraints provide for a more systematic and mechanistic interpretation of the field behaviors during biostimulation. The reaction network specification developed for the 2002 biostimulation field experiment was successfully applied without additional calibration to the 2003 and 2007 field experiments. The robustness of the model specification is significant in that 1) the 2003 biostimulation field experiment was performed with 3 times higher acetate concentrations than the previous biostimulation in the same field plot (i.e., the 2002 experiment), and 2) the 2007 field experiment was performed in a new unperturbed plot on the same site. The biogeochemical reactive transport simulations accounted for four TEAPs, two distinct functional microbial populations, two pools of bioavailable Fe(III) minerals (iron oxides and phyllosilicate iron), uranium aqueous and surface complexation, mineral precipitation, and dissolution. The conceptual model for bioavailable iron reflects recent laboratory studies with sediments from the Old Rifle Uranium Mill Tailings Remedial Action (UMTRA) site that demonstrated that the bulk (~90%) of Fe(III) bioreduction is associated with the phyllosilicates rather than the iron oxides

  15. Clinical complexity in medicine: A measurement model of task and patient complexity

    PubMed Central

    Islam, R.; Weir, C.; Fiol, G. Del

    2016-01-01

    Summary Background Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective The objective of this paper is to develop an integrated approach to understanding and measuring clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods Three clinical Infectious Disease teams were observed, audio-recorded, and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant, and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial round of coding and further validated in a consensus-based iterative process that included several meetings and email discussions among three clinical experts from diverse backgrounds in the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen’s kappa. Results The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 complexity-contributing factors and 7 dimensions. The second is a patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion The measurement model for complexity, encompassing both task and patient complexity, will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
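
    Since the study reports inter-rater reliability as Cohen's kappa, a minimal sketch of that computation (with invented coder labels, using scikit-learn) is:

      # Cohen's kappa for two coders labelling the same transcript segments.
      # The labels below are invented for illustration.
      from sklearn.metrics import cohen_kappa_score

      coder_a = ["task", "patient", "task", "task", "patient", "task", "patient", "task"]
      coder_b = ["task", "patient", "task", "patient", "patient", "task", "patient", "task"]
      print(f"Cohen's kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")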

  16. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
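
    A hedged sketch of the kind of comparison described, with synthetic data standing in for the Altamont Pass measurements and only two of the three model families (random forest and Gaussian process), might look like the following; the choice of a second measurement height as the extra predictor is an assumption made for illustration.

      # Sketch: compare a hub-height-only power model with one that also uses wind
      # speed at a second height (a proxy for shear). Data are synthetic; this is
      # not the Altamont Pass data set or the authors' exact model configuration.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(1)
      n = 500
      u_hub = rng.uniform(3, 15, n)                  # hub-height wind speed, m/s
      u_low = u_hub * rng.uniform(0.7, 1.0, n)       # speed lower on the profile
      power = np.clip((0.5 * u_hub + 0.3 * u_low) ** 3 / 400, 0, 2.0)  # toy "power", MW
      power += 0.05 * rng.standard_normal(n)

      X_hub = u_hub.reshape(-1, 1)
      X_all = np.column_stack([u_hub, u_low])
      y = power

      for name, X in [("hub-height only", X_hub), ("hub + second height", X_all)]:
          Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
          rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
          gp = GaussianProcessRegressor(alpha=1e-2, normalize_y=True).fit(Xtr, ytr)
          print(f"{name:22s}  RF R2={r2_score(yte, rf.predict(Xte)):.3f}  "
                f"GP R2={r2_score(yte, gp.predict(Xte)):.3f}")

    The point of the comparison is simply that adding profile information beyond the hub height can raise the held-out R²; the synthetic data are constructed so that it does.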

  17. Modeling the propagation of mobile phone virus under complex network.

    PubMed

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei; Yao, Yu

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modeled to understand how particular factors affect propagation and to design effective containment strategies that suppress mobile phone viruses. Two different propagation models of mobile phone viruses on complex networks are proposed. One is intended to describe the propagation of user-tricking viruses, and the other the propagation of vulnerability-exploiting viruses. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted. Through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, taking the network topology into account, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively.
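
    The paper's two models add network topology and phone-specific infection routes; as a baseline only, the classical SIR compartment model that such epidemic models extend can be integrated in a few lines (rates and population size are arbitrary illustration values):

      # Baseline SIR epidemic model of the kind the phone-virus models extend.
      # beta/gamma values and population size are arbitrary illustration values.
      import numpy as np
      from scipy.integrate import solve_ivp

      beta, gamma, N = 0.4, 0.1, 1e4      # infection rate, recovery rate, phones

      def sir(t, y):
          s, i, r = y
          ds = -beta * s * i / N
          di = beta * s * i / N - gamma * i
          return [ds, di, -ds - di]

      sol = solve_ivp(sir, (0, 100), [N - 10, 10, 0], t_eval=np.linspace(0, 100, 11))
      print("infected over time:", np.round(sol.y[1]).astype(int))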

  18. Modeling the Propagation of Mobile Phone Virus under Complex Network

    PubMed Central

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modeled to understand how particular factors affect propagation and to design effective containment strategies that suppress mobile phone viruses. Two different propagation models of mobile phone viruses on complex networks are proposed. One is intended to describe the propagation of user-tricking viruses, and the other the propagation of vulnerability-exploiting viruses. Based on traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis of the propagation models is conducted. Through this analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, taking the network topology into account, numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively. PMID:25133209

  19. Graduate Social Work Education and Cognitive Complexity: Does Prior Experience Really Matter?

    ERIC Educational Resources Information Center

    Simmons, Chris

    2014-01-01

    This study examined the extent to which age, education, and practice experience among social work graduate students (N = 184) predicted cognitive complexity, an essential aspect of critical thinking. In the regression analysis, education accounted for more of the variance associated with cognitive complexity than age and practice experience. When…
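
    The analysis behind this finding is a regression in which education is compared against age and practice experience as predictors of a cognitive-complexity score. A generic sketch of such a nested-model comparison, on invented data rather than the study's data set, is:

      # Nested OLS comparison: how much variance in a "cognitive complexity" score
      # is added by education beyond age and practice experience? Data are invented.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      n = 184
      age = rng.uniform(22, 55, n)
      experience = rng.uniform(0, 20, n)
      education = rng.integers(0, 3, n)      # hypothetical 3-level education variable
      complexity = 0.1 * age + 0.2 * experience + 2.0 * education + rng.normal(0, 2, n)

      X_base = np.column_stack([age, experience])
      X_full = np.column_stack([age, experience, education])
      r2_base = LinearRegression().fit(X_base, complexity).score(X_base, complexity)
      r2_full = LinearRegression().fit(X_full, complexity).score(X_full, complexity)
      print(f"R2 without education: {r2_base:.2f}, with education: {r2_full:.2f}")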

  1. Metal powder absorptivity: Modeling and experiment

    SciTech Connect

    Boley, C. D.; Mitchell, S. C.; Rubenchik, A. M.; Wu, S. S. Q.

    2016-08-10

    Here, we present results of numerical modeling and direct calorimetric measurements of the powder absorptivity for a number of metals. The modeling results generally correlate well with experiment. We show that the powder absorptivity is determined, to a great extent, by the absorptivity of a flat surface at normal incidence. Our results allow the prediction of the powder absorptivity from normal flat-surface absorptivity measurements.

  3. Complexation Effect on Redox Potential of Iron(III)-Iron(II) Couple: A Simple Potentiometric Experiment

    ERIC Educational Resources Information Center

    Rizvi, Masood Ahmad; Syed, Raashid Maqsood; Khan, Badruddin

    2011-01-01

    A titration curve with multiple inflection points results when a mixture of two or more reducing agents with sufficiently different reduction potentials are titrated. In this experiment iron(II) complexes are combined into a mixture of reducing agents and are oxidized to the corresponding iron(III) complexes. As all of the complexes involve the…

  4. The Effect of Complex Formation upon the Redox Potentials of Metallic Ions. Cyclic Voltammetry Experiments.

    ERIC Educational Resources Information Center

    Ibanez, Jorge G.; And Others

    1988-01-01

    Describes experiments in which students prepare in situ soluble complexes of metal ions with different ligands and observe and estimate the change in formal potential that the ion undergoes upon complexation. Discusses student formation and analysis of soluble complexes of two different metal ions with the same ligand. (CW)
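
    As a numerical companion to these teaching experiments, the shift in formal potential upon complexation can be estimated from the Nernst equation and the stability constants of the Fe(III) and Fe(II) complexes; the log beta values below are placeholders of roughly EDTA-like magnitude, not values from the article.

      # How complexation shifts the Fe(III)/Fe(II) formal potential (Nernst-based
      # estimate). Stability constants below are placeholders, not the article's.
      import numpy as np

      R, T, F, n = 8.314, 298.15, 96485.0, 1
      E0_aqua = 0.771                      # V vs SHE, Fe3+/Fe2+ aqua couple

      def shifted_potential(log_beta_III, log_beta_II):
          """Formal potential when both oxidation states bind the same ligand."""
          return E0_aqua - (R * T / (n * F)) * np.log(10) * (log_beta_III - log_beta_II)

      # A ligand that binds Fe(III) much more strongly than Fe(II) lowers the potential.
      print(f"E0' = {shifted_potential(log_beta_III=25.0, log_beta_II=14.0):.3f} V")

    A ligand that stabilizes Fe(III) more strongly than Fe(II) therefore lowers the formal potential, which is what the cyclic voltammetry experiment visualizes.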

  7. Management of Complex Ovarian Cysts in Newborns – Our Experience

    PubMed Central

    Manjiri, S; Padmalatha, SK; Shetty, J

    2017-01-01

    Aims: To analyse the clinical presentation, clinicopathological correlation, and management of complex ovarian cysts in newborns and infants. Materials and Methods: Over a period of 6 years (2009-2015), 25 newborns diagnosed with an ovarian cyst on antenatal ultrasound were followed up. We collected data on clinical features, radiological findings, pathology, and mode of treatment. Results: Of the 25 fetuses diagnosed with ovarian cysts, fourteen (56%) underwent spontaneous regression by 6-8 months. Eight were operated on in the newborn period, while 3 were operated on in early infancy. Seven had a right-sided cyst and 4 a left-sided cyst. Eight babies underwent laparoscopy and 3 underwent laparotomy. Histopathology showed varied features: hemorrhagic cyst with necrosis and calcification, serous cystadenoma with hemorrhage, benign serous cyst with hemorrhage, and simple serous cyst. Post-operative recovery was uneventful in all. Conclusion: All ovarian cysts detected antenatally in female fetuses need close follow-up after birth. Since spontaneous regression is known to occur, only complex or larger cysts need surgical intervention, preferably by laparoscopy. The majority of complex cysts show atrophic ovarian tissue and hence end in oophorectomy, but simple cysts can be removed while preserving normal ovarian tissue whenever possible. PMID:28083489

  8. Molecular modeling of the neurophysin I/oxytocin complex

    NASA Astrophysics Data System (ADS)

    Kazmierkiewicz, R.; Czaplewski, C.; Lammek, B.; Ciarkowski, J.

    1997-01-01

    Neurophysins I and II (NPI and NPII) act in the neurosecretory granules as carrier proteins for the neurophyseal hormones oxytocin (OT) and vasopressin (VP), respectively. The NPI/OT functional unit, believed to be an (NPI/OT)2 heterotetramer, was modeled using low-resolution structure information, viz. the Cα carbon atom coordinates of the homologous NPII/dipeptide complex (file 1BN2 in the Brookhaven Protein Databank) as a template. Its all-atom representation was obtained using standard modeling tools available within the INSIGHT/Biopolymer modules supplied by Biosym Technologies Inc. A conformation of the NPI-bound OT, similar to that recently proposed in a transfer NOE experiment, was docked into the ligand-binding site by a superposition of its Cys1-Tyr2 fragment onto the equivalent portion of the dipeptide in the template. The starting complex for the initial refinements was prepared by two alternative strategies, termed Model I and Model II, each ending with a ~100 ps molecular dynamics (MD) simulation in water using the AMBER 4.1 force field. The free homodimer NPI2 was obtained by removal of the two OT subunits from their sites, followed by a similar structure refinement. The use of Model I, consisting of a constrained simulated annealing, resulted in a structure remarkably similar to both the NPII/dipeptide complex and a recently published solid-state structure of the NPII/OT complex. Thus, Model I is recommended as the method of choice for the preparation of the starting all-atom data for MD. The MD simulations indicate that, both in the homodimer and in the heterotetramer, the 3₁₀-helices demonstrate an increased mobility relative to the remaining body of the protein. Also, the C-terminal domains in the NPI2 homodimer are more mobile than the N-terminal ones. Finally, a distinct intermonomer interaction is identified, concentrated around its most prominent, although not unique, contribution provided by an H-bond from Ser25 Oγ in one NPI unit to Glu81 Oɛ in the other
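
    The docking step above relies on superposing a Cα fragment onto its template equivalent. A generic least-squares superposition (Kabsch algorithm) with invented coordinates, not the INSIGHT/Biopolymer routine used by the authors, can be sketched as:

      # Generic least-squares superposition (Kabsch algorithm) of one Calpha fragment
      # onto another, as used conceptually when docking by superposing Cys1-Tyr2.
      # Coordinates are invented; this is not the INSIGHT/Biopolymer workflow.
      import numpy as np

      def kabsch(P, Q):
          """Return rotation R and translation t minimizing ||(P @ R.T + t) - Q||."""
          Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
          V, S, Wt = np.linalg.svd(Pc.T @ Qc)
          d = np.sign(np.linalg.det(V @ Wt))
          D = np.diag([1.0, 1.0, d])
          R = (V @ D @ Wt).T
          t = Q.mean(axis=0) - P.mean(axis=0) @ R.T
          return R, t

      P = np.array([[0.0, 0, 0], [1.5, 0, 0], [1.5, 1.5, 0], [0, 1.5, 1.0]])  # mobile Calphas
      theta = np.deg2rad(30)
      Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                     [np.sin(theta),  np.cos(theta), 0.0],
                     [0.0,            0.0,           1.0]])
      Q = P @ Rz.T + np.array([2.0, -1.0, 0.5])                               # target Calphas
      R, t = kabsch(P, Q)
      rmsd = np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
      print(f"RMSD after superposition: {rmsd:.2e} A")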

  9. High precision modeling for fundamental physics experiments

    NASA Astrophysics Data System (ADS)

    Rievers, Benny; Nesemann, Leo; Costea, Adrian; Andres, Michael; Stephan, Ernst P.; Laemmerzahl, Claus

    With growing experimental accuracies and high precision requirements for fundamental physics space missions, the need for accurate numerical modeling techniques is increasing. Motivated by the challenge of length stability in cavities and optical resonators, we propose the development of a high precision modeling tool for the simulation of thermomechanical effects up to a numerical precision of 10⁻²⁰. Exemplary calculations for simplified test cases demonstrate the general feasibility of high precision calculations and point out the high complexity of the task. A tool for high precision analysis of complex geometries will have to use new data types, advanced FE solver routines, and implement new methods for the evaluation of numerical precision.

  10. A Computer Simulated Experiment in Complex Order Kinetics

    ERIC Educational Resources Information Center

    Merrill, J. C.; And Others

    1975-01-01

    Describes a computer simulation experiment in which physical chemistry students can determine all of the kinetic parameters of a reaction, such as order of the reaction with respect to each reagent, forward and reverse rate constants for the overall reaction, and forward and reverse activation energies. (MLH)
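
    One of the quantities the simulated experiment asks students to extract is an activation energy; a generic Arrhenius fit of that kind, on invented rate-constant data, looks like:

      # Arrhenius analysis: extract an activation energy from rate constants k(T).
      # The k values below are invented for illustration.
      import numpy as np

      R = 8.314                                       # J/(mol K)
      T = np.array([298.0, 308.0, 318.0, 328.0])      # K
      k = np.array([1.2e-3, 2.9e-3, 6.6e-3, 1.4e-2])  # 1/s

      slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
      Ea = -slope * R            # J/mol, from ln k = ln A - Ea/(R T)
      A = np.exp(intercept)
      print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {A:.2e} 1/s")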

  12. Assessing the experience in complex hepatopancreatobiliary surgery among graduating chief residents: Is the operative experience enough?

    PubMed Central

    Sachs, Teviah E.; Ejaz, Aslam; Weiss, Matthew; Spolverato, Gaya; Ahuja, Nita; Makary, Martin A.; Wolfgang, Christopher L.; Hirose, Kenzo; Pawlik, Timothy M.

    2015-01-01

    Introduction Resident operative autonomy and case volume are associated with posttraining confidence and practice plans. Accreditation Council for Graduate Medical Education requirements for graduating general surgery residents are four liver and three pancreas cases. We sought to evaluate trends in resident experience and autonomy for complex hepatopancreatobiliary (HPB) surgery over time. Methods We queried the Accreditation Council for Graduate Medical Education General Surgery Case Log (2003–2012) for all cases performed by graduating chief residents (GCR) relating to the liver, pancreas, and biliary tract (HPB); simple cholecystectomy was excluded. Mean (±SD), median [10th–90th percentiles], and maximum case volumes were compared from 2003 to 2012 using R² for all trends. Results A total of 252,977 complex HPB cases (36% liver, 43% pancreas, 21% biliary) were performed by 10,288 GCR during the 10-year period examined (mean = 24.6 per GCR). Of these, 57% were performed during the chief year, whereas 43% were performed as postgraduate year 1–4. Only 52% of liver cases were anatomic resections, whereas 71% of pancreas cases were major resections. The total number of cases increased from 22,516 (mean = 23.0) in 2003 to 27,191 (mean = 24.9) in 2012. During this same time period, the percentage of HPB cases that were performed during the chief year decreased by 7% (liver: 13%, pancreas: 8%, biliary: 4%). There was an increasing trend in the mean number of operations (mean ± SD) logged by GCR on the pancreas (9.1 ± 5.9 to 11.3 ± 4.3; R² = .85) and liver (8.0 ± 5.9 to 9.4 ± 3.4; R² = .91), whereas those for the biliary tract decreased (5.9 ± 2.5 to 3.8 ± 2.1; R² = .96). Although the median number of cases [10th:90th percentile] increased slightly for both pancreas (7.0 [4.0:15] to 8.0 [4:20]) and liver (7.0 [4:13] to 8.0 [5:14]), the maximum number of cases performed by any given GCR remained stable for pancreas (51 to 53; R² = .18), but increased for liver (38
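
    The trends quoted above are simple linear fits summarized by R²; the computation, sketched here on invented yearly means rather than the actual case-log data, is:

      # Linear trend + R^2 for mean cases per graduating chief resident by year,
      # mirroring the kind of fit reported above. The yearly values are invented.
      from scipy import stats

      years = list(range(2003, 2013))
      mean_pancreas_cases = [9.1, 9.3, 9.6, 9.9, 10.1, 10.4, 10.6, 10.9, 11.1, 11.3]

      fit = stats.linregress(years, mean_pancreas_cases)
      print(f"slope = {fit.slope:.2f} cases/year, R^2 = {fit.rvalue**2:.2f}")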

  13. Cooling tower plume - model and experiment

    NASA Astrophysics Data System (ADS)

    Cizek, Jan; Gemperle, Jiri; Strob, Miroslav; Nozicka, Jiri

    The paper presents a simple model of the so-called steam plume that in many cases forms during the operation of the evaporative cooling systems of power plants or other large technological units. The model is based on semi-empirical equations that describe the behaviour of a mixture of two gases in the case of a free jet stream. In the conclusion of the paper, a simple experiment is presented through which the results of the proposed model will be validated in subsequent work.

  14. Cumulative Adverse Childhood Experiences and Sexual Satisfaction in Sex Therapy Patients: What Role for Symptom Complexity?

    PubMed

    Bigras, Noémie; Godbout, Natacha; Hébert, Martine; Sabourin, Stéphane

    2017-03-01

    Patients consulting for sexual difficulties frequently present additional personal or relational disorders and symptoms. This is especially the case when they have experienced cumulative adverse childhood experiences (CACEs), which are associated with symptom complexity. CACEs refer to the extent to which an individual has experienced an accumulation of different types of adverse childhood experiences including sexual, physical, and psychological abuse; neglect; exposure to inter-parental violence; and bullying. However, past studies have not examined how symptom complexity might relate to CACEs and sexual satisfaction and even less so in samples of adults consulting for sex therapy. To document the presence of CACEs in a sample of individuals consulting for sexual difficulties and its potential association with sexual satisfaction through the development of symptom complexity operationalized through well-established clinically significant indicators of individual and relationship distress. Men and women (n = 307) aged 18 years and older consulting for sexual difficulties completed a set of questionnaires during their initial assessment. (i) Global Measure of Sexual Satisfaction Scale, (ii) Dyadic Adjustment Scale-4, (iii) Experiences in Close Relationships-12, (iv) Beck Depression Inventory-13, (v) Trauma Symptom Inventory-2, and (vi) Psychiatric Symptom Inventory-14. Results showed that 58.1% of women and 51.9% of men reported at least four forms of childhood adversity. The average number of CACEs was 4.10 (SD = 2.23) in women and 3.71 (SD = 2.08) in men. Structural equation modeling showed that CACEs contribute directly and indirectly to sexual satisfaction in adults consulting for sex therapy through clinically significant individual and relational symptom complexities. The findings underscore the relevance of addressing clinically significant psychological and relational symptoms that can stem from CACEs when treating sexual difficulties in adults seeking sex

  15. Modelling the evolution of complex conductivity during calcite precipitation on glass beads

    NASA Astrophysics Data System (ADS)

    Leroy, Philippe; Li, Shuai; Jougnot, Damien; Revil, André; Wu, Yuxin

    2017-04-01

    When pH and alkalinity increase, calcite frequently precipitates and hence modifies the petrophysical properties of porous media. The complex conductivity method can be used to directly monitor calcite precipitation in porous media because it is sensitive to the evolution of the mineralogy, pore structure and its connectivity. We have developed a mechanistic grain polarization model considering the electrochemical polarization of the Stern and diffuse layers surrounding calcite particles. Our complex conductivity model depends on the surface charge density of the Stern layer and on the electrical potential at the onset of the diffuse layer, which are computed using a basic Stern model of the calcite/water interface. The complex conductivity measurements of Wu et al. on a column packed with glass beads where calcite precipitation occurs are reproduced by our surface complexation and complex conductivity models. The evolution of the size and shape of calcite particles during the calcite precipitation experiment is estimated by our complex conductivity model. At the early stage of the calcite precipitation experiment, modelled particles sizes increase and calcite particles flatten with time because calcite crystals nucleate at the surface of glass beads and grow into larger calcite grains. At the later stage of the calcite precipitation experiment, modelled sizes and cementation exponents of calcite particles decrease with time because large calcite grains aggregate over multiple glass beads and only small calcite crystals polarize.
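
    The authors' model is mechanistic (Stern-layer polarization); for readers who want a quick feel for complex conductivity spectra, a common phenomenological stand-in is the Pelton Cole-Cole form sketched below, with arbitrary parameter values and no claim to represent the calcite model itself.

      # Phenomenological Cole-Cole (Pelton) spectrum often used to summarize complex
      # conductivity/resistivity data. This is NOT the authors' mechanistic
      # Stern-layer model; parameter values are arbitrary illustration values.
      import numpy as np

      def cole_cole_sigma(freq_hz, rho0=100.0, m=0.05, tau=0.01, c=0.5):
          """Complex conductivity sigma*(omega) = 1 / rho*(omega), Pelton form."""
          omega = 2 * np.pi * np.asarray(freq_hz)
          rho = rho0 * (1 - m * (1 - 1 / (1 + (1j * omega * tau) ** c)))
          return 1.0 / rho

      f = np.logspace(-2, 3, 6)               # Hz
      for fi, si in zip(f, cole_cole_sigma(f)):
          phase_mrad = 1000 * np.angle(si)
          print(f"f = {fi:8.2f} Hz  |sigma| = {abs(si):.5f} S/m  phase = {phase_mrad:6.2f} mrad")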

  16. Modeling the evolution of complex conductivity during calcite precipitation on glass beads

    NASA Astrophysics Data System (ADS)

    Leroy, Philippe; Li, Shuai; Jougnot, Damien; Revil, André; Wu, Yuxin

    2017-01-01

    SUMMARYWhen pH and alkalinity increase, calcite frequently precipitates and hence modifies the petrophysical properties of porous media. The <span class="hlt">complex</span> conductivity method can be used to directly monitor calcite precipitation in porous media because it is sensitive to the evolution of the mineralogy, pore structure and its connectivity. We have developed a mechanistic grain polarization <span class="hlt">model</span> considering the electrochemical polarization of the Stern and diffuse layer surrounding calcite particles. Our <span class="hlt">complex</span> conductivity <span class="hlt">model</span> depends on the surface charge density of the Stern layer and on the electrical potential at the onset of the diffuse layer, which are computed using a basic Stern <span class="hlt">model</span> of the calcite/water interface. The <span class="hlt">complex</span> conductivity measurements of Wu et al. (2010) on a column packed with glass beads where calcite precipitation occurs are reproduced by our surface <span class="hlt">complexation</span> and <span class="hlt">complex</span> conductivity <span class="hlt">models</span>. The evolution of the size and shape of calcite particles during the calcite precipitation <span class="hlt">experiment</span> is estimated by our <span class="hlt">complex</span> conductivity <span class="hlt">model</span>. At the early stage of the calcite precipitation <span class="hlt">experiment</span>, <span class="hlt">modeled</span> particles sizes increase and calcite particles flatten with time because calcite crystals nucleate at the surface of glass beads and grow into larger calcite grains around glass beads. At the later stage of the calcite precipitation <span class="hlt">experiment</span>, <span class="hlt">modeled</span> sizes and cementation exponents of calcite particles decrease with time because large calcite grains aggregate over multiple glass beads, a percolation threshold is achieved, and small and discrete calcite crystals polarize.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/892423','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/892423"><span>Data production <span class="hlt">models</span> for the CDF <span class="hlt">experiment</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Antos, J.; Babik, M.; Benjamin, D.; Cabrera, S.; Chan, A.W.; Chen, Y.C.; Coca, M.; Cooper, B.; Genser, K.; Hatakeyama, K.; Hou, S.; Hsieh, T.L.; Jayatilaka, B.; Kraan, A.C.; Lysak, R.; Mandrichenko, I.V.; Robson, A.; Siket, M.; Stelzer, B.; Syu, J.; Teng, P.K.; /Kosice, IEF /Duke U. /Taiwan, Inst. Phys. /University Coll. London /Fermilab /Rockefeller U. /Michigan U. /Pennsylvania U. /Glasgow U. /UCLA /Tsukuba U. /New Mexico U.</p> <p>2006-06-01</p> <p>The data production for the CDF <span class="hlt">experiment</span> is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production <span class="hlt">models</span> that exploits advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm is migrated to the SAM (Sequential Access to data via Metadata) data handling system. 
The software and hardware of the CDF production farms has been successful in providing large computing and data throughput capacity to the <span class="hlt">experiment</span>.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/19880013325','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19880013325"><span><span class="hlt">Model</span>-scale sound propagation <span class="hlt">experiment</span></span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Willshire, William L., Jr.</p> <p>1988-01-01</p> <p>The results of a scale <span class="hlt">model</span> propagation <span class="hlt">experiment</span> to investigate grazing propagation above a finite impedance boundary are reported. In the <span class="hlt">experiment</span>, a 20 x 25 ft ground plane was installed in an anechoic chamber. Propagation tests were performed over the plywood surface of the ground plane and with the ground plane covered with felt, styrofoam, and fiberboard. Tests were performed with discrete tones in the frequency range of 10 to 15 kHz. The acoustic source and microphones varied in height above the test surface from flush to 6 in. Microphones were located in a linear array up to 18 ft from the source. A preliminary <span class="hlt">experiment</span> using the same ground plane, but only testing the plywood and felt surfaces was performed. The results of this first <span class="hlt">experiment</span> were encouraging, but data variability and repeatability were poor, particularly, for the felt surface, making comparisons with theoretical predictions difficult. In the main <span class="hlt">experiment</span> the sound source, microphones, microphone positioning, data acquisition, quality of the anechoic chamber, and environmental control of the anechoic chamber were improved. High-quality, repeatable acoustic data were measured in the main <span class="hlt">experiment</span> for all four test surfaces. Comparisons with predictions are good, but limited by uncertainties of the impedance values of the test surfaces.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2797718','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2797718"><span><span class="hlt">Modeling</span> <span class="hlt">Complex</span> Workflow in Molecular Diagnostics</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan</p> <p>2010-01-01</p> <p>One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting <span class="hlt">complex</span> molecular tests. The rapidly changing test rosters and <span class="hlt">complex</span> analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. 
These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more <span class="hlt">complex</span> molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the <span class="hlt">complex</span> demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/14872536','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/14872536"><span>CAD-ICAD <span class="hlt">complex</span> structure derived from saturation transfer <span class="hlt">experiment</span> and simulated annealing without using pairwise NOE information.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Matsuda, Tomoki; Nakajima, Nobuyuki; Yamazaki, Toshio; Nakamura, Haruki</p> <p>2004-01-01</p> <p>Saturation transfer <span class="hlt">experiments</span> were performed for the (2)H- and (15)N-labeled mouse CAD domain of the caspase-activated deoxyribonuclease and the CAD domain of its inhibitor to reveal the protein-protein <span class="hlt">complexed</span> conformation. Based on the physical <span class="hlt">model</span> for the spin diffusion, a novel method was developed to reconstruct the <span class="hlt">complexed</span> structure using the simulated annealing calculation. The complementarity in the molecular surface shape and the electrostatic potential distribution provide a good measure for the assessment of the putative <span class="hlt">complexed</span> conformation, despite much less experimental information than the conventional distance geometry calculation. 
Copyright 2004 John Wiley & Sons, Ltd.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_10");'>10</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li class="active"><span>12</span></li> <li><a href="#" onclick='return showDiv("page_13");'>13</a></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_11");'>11</a></li> <li><a href="#" onclick='return showDiv("page_12");'>12</a></li> <li class="active"><span>13</span></li> <li><a href="#" onclick='return showDiv("page_14");'>14</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/60269','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/60269"><span>Design and <span class="hlt">modeling</span> of small scale multiple fracturing <span class="hlt">experiments</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Cuderman, J F</p> <p>1981-12-01</p> <p>Recent <span class="hlt">experiments</span> at the Nevada Test Site (NTS) have demonstrated the existence of three distinct fracture regimes. Depending on the pressure rise time in a borehole, one can obtain hydraulic, multiple, or explosive fracturing behavior. The use of propellants rather than explosives in tamped boreholes permits tailoring of the pressure risetime over a wide range since propellants having a wide range of burn rates are available. This technique of using the combustion gases from a full bore propellant charge to produce controlled borehole pressurization is termed High Energy Gas Fracturing (HEGF). Several series of HEGF, in 0.15 m and 0.2 m diameter boreholes at 12 m depths, have been completed in a tunnel <span class="hlt">complex</span> at NTS where mineback permitted direct observation of fracturing obtained. Because such large <span class="hlt">experiments</span> are costly and time consuming, smaller scale <span class="hlt">experiments</span> are desirable, provided results from small <span class="hlt">experiments</span> can be used to predict fracture behavior in larger boreholes. In order to design small scale gas fracture <span class="hlt">experiments</span>, the available data from previous HEGF <span class="hlt">experiments</span> were carefully reviewed, analytical elastic wave <span class="hlt">modeling</span> was initiated, and semi-empirical <span class="hlt">modeling</span> was conducted which combined predictions for statically pressurized boreholes with experimental data. 
The results of these efforts include (1) the definition of what constitutes small scale <span class="hlt">experiments</span> for emplacement in a tunnel <span class="hlt">complex</span> at the Nevada Test Site, (2) prediction of average crack radius, in ash fall tuff, as a function of borehole size and energy input per unit length, (3) definition of multiple-hydraulic and multiple-explosive fracture boundaries as a function of boreholes size and surface wave velocity, (4) semi-empirical criteria for estimating stress and acceleration, and (5) a proposal that multiple fracture orientations may be governed by in situ stresses.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/897953','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/897953"><span><span class="hlt">Modeling</span> Hemispheric Detonation <span class="hlt">Experiments</span> in 2-Dimensions</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F</p> <p>2006-06-22</p> <p><span class="hlt">Experiments</span> have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We <span class="hlt">model</span> these <span class="hlt">experiments</span> using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow <span class="hlt">models</span> based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures ({approx} few Mbars) and high temperatures (20000K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We <span class="hlt">model</span> the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn <span class="hlt">model</span>. Steinberg-Guinan <span class="hlt">models</span>5 are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. 
For LX-17, the kinetic <span class="hlt">model</span> includes carbon clustering on the nanometer size scale.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JMS...165..139M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JMS...165..139M"><span>Simple parameter estimation for <span class="hlt">complex</span> <span class="hlt">models</span> — Testing evolutionary techniques on 3-dimensional biogeochemical ocean <span class="hlt">models</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mattern, Jann Paul; Edwards, Christopher A.</p> <p>2017-01-01</p> <p>Parameter estimation is an important part of numerical <span class="hlt">modeling</span> and often required when a coupled physical-biogeochemical ocean <span class="hlt">model</span> is first deployed. However, 3-dimensional ocean <span class="hlt">model</span> simulations are computationally expensive and <span class="hlt">models</span> typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical <span class="hlt">models</span> of different <span class="hlt">complexities</span>. Based on a Monte Carlo <span class="hlt">experiment</span>, we first develop a cost function measuring the <span class="hlt">model</span>-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate <span class="hlt">experiments</span>, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the <span class="hlt">complex</span> cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=190311&keyword=urban+AND+plant+AND+pollution&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50','EPA-EIMS'); return false;" href="http://cfpub.epa.gov/si/si_public_record_report.cfm?dirEntryId=190311&keyword=urban+AND+plant+AND+pollution&actType=&TIMSType=+&TIMSSubTypeID=&DEID=&epaNumber=&ntisID=&archiveStatus=Both&ombCat=Any&dateBeginCreated=&dateEndCreated=&dateBeginPublishedPresented=&dateEndPublishedPresented=&dateBeginUpdated=&dateEndUpdated=&dateBeginCompleted=&dateEndCompleted=&personID=&role=Any&journalID=&publisherID=&sortBy=revisionDate&count=50"><span>Dispersion <span class="hlt">Modeling</span> in <span class="hlt">Complex</span> Urban Systems</span></a></p> <p><a target="_blank" href="http://oaspub.epa.gov/eims/query.page">EPA Science Inventory</a></p> <p></p> <p></p> <p><span class="hlt">Models</span> are used to represent real systems in an understandable way. They take many forms. 
A conceptual <span class="hlt">model</span> explains the way a system works. In environmental studies, for example, a conceptual <span class="hlt">model</span> may delineate all the factors and parameters for determining how a particle move...</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://files.eric.ed.gov/fulltext/ED476861.pdf','ERIC'); return false;" href="http://files.eric.ed.gov/fulltext/ED476861.pdf"><span>Specifying and Refining a <span class="hlt">Complex</span> Measurement <span class="hlt">Model</span>.</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Levy, Roy; Mislevy, Robert J.</p> <p></p> <p>This paper aims to describe a Bayesian approach to <span class="hlt">modeling</span> and estimating cognitive <span class="hlt">models</span> both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable <span class="hlt">model</span> and refines…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AdWR..105...29L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AdWR..105...29L"><span><span class="hlt">Modeling</span> variability in porescale multiphase flow <span class="hlt">experiments</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ling, Bowen; Bao, Jie; Oostrom, Mart; Battiato, Ilenia; Tartakovsky, Alexandre M.</p> <p>2017-07-01</p> <p>Microfluidic devices and porescale numerical <span class="hlt">models</span> are commonly used to study multiphase flow in biological, geological, and engineered porous materials. In this work, we perform a set of drainage and imbibition <span class="hlt">experiments</span> in six identical microfluidic cells to study the reproducibility of multiphase flow <span class="hlt">experiments</span>. We observe significant variations in the experimental results, which are smaller during the drainage stage and larger during the imbibition stage. We demonstrate that these variations are due to sub-porescale geometry differences in microcells (because of manufacturing defects) and variations in the boundary condition (i.e., fluctuations in the injection rate inherent to syringe pumps). Computational simulations are conducted using commercial software STAR-CCM+, both with constant and randomly varying injection rates. 
Stochastic simulations are able to capture variability in the <span class="hlt">experiments</span> associated with the varying pump injection rate.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011LNCS.6589....1T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011LNCS.6589....1T"><span>Using <span class="hlt">Models</span> to Inform Policy: Insights from <span class="hlt">Modeling</span> the <span class="hlt">Complexities</span> of Global Polio Eradication</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Thompson, Kimberly M.</p> <p></p> <p>Drawing on over 20 years of <span class="hlt">experience</span> <span class="hlt">modeling</span> risks in <span class="hlt">complex</span> systems, this talk will challenge SBP participants to develop <span class="hlt">models</span> that provide timely and useful answers to critical policy questions when decision makers need them. The talk will include reflections on the opportunities and challenges associated with developing integrated <span class="hlt">models</span> for <span class="hlt">complex</span> problems and communicating their results effectively. Dr. Thompson will focus the talk largely on collaborative <span class="hlt">modeling</span> related to global polio eradication and the application of system dynamics tools. After successful global eradication of wild polioviruses, live polioviruses will still present risks that could potentially lead to paralytic polio cases. This talk will present the insights of efforts to use integrated dynamic, probabilistic risk, decision, and economic <span class="hlt">models</span> to address critical policy questions related to managing global polio risks. Using a dynamic disease transmission <span class="hlt">model</span> combined with probabilistic <span class="hlt">model</span> inputs that characterize uncertainty for a stratified world to account for variability, we find that global health leaders will face some difficult choices, but that they can take actions that will manage the risks effectively. The talk will emphasize the need for true collaboration between <span class="hlt">modelers</span> and subject matter experts, and the importance of working with decision makers as partners to ensure the development of useful <span class="hlt">models</span> that actually get used.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFM.T51C2051S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFM.T51C2051S"><span>Anatomy of a Metamorphic Core <span class="hlt">Complex</span>: Preliminary Results of Ruby Mountains Seismic <span class="hlt">Experiment</span>, Northeastern Nevada</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Schiltz, K. K.; Litherland, M.; Klemperer, S. L.</p> <p>2010-12-01</p> <p>The Ruby Mountains Seismic <span class="hlt">Experiment</span> is a 50-station deployment of Earthscope’s Flexible Array installed in June 2010 to study the Ruby Mountain metamorphic core <span class="hlt">complex</span>, northeastern Nevada. 
Competing theories of metamorphic core <span class="hlt">complexes</span> stress the importance of either (1) low-angle detachment faulting and lateral crustal flow, likely leading to horizontal shearing and anisotropy, or (2) vertical diapirism creating dominantly vertical shearing and anisotropy. Our <span class="hlt">experiment</span> aims to distinguish between these two hypotheses using densely spaced (5 to 10 km) broadband seismometers along two WNW-ESE transects across the Ruby Range and one NNE-SSW transect along the axis of the range. When data acquisition is complete we will image crustal structures and measure velocity and anisotropy with a range of receiver function, shear-wave splitting and surface-wave tomographic methods. In addition to the newly acquired data, existing data can also be used to build understanding of the region. Previous regional studies have interpreted shear-wave splitting in terms of single-layer anisotropy in the mantle, related to a <span class="hlt">complex</span> flow structure, but previous controlled source studies have identified measurable crustal anisotropy. We therefore attempted to fit existing data to a two-layer <span class="hlt">model</span> consisting of a weakly anisotropic crustal layer and a more dominant mantle layer. We used “SplitLab” to measure apparent splitting parameters from ELK (a USGS permanent station) and 3 Earthscope Transportable Array stations. There is a clear variation in the splitting parameters with back-azimuth, but existing data do not provide a stable inversion for a two-layer <span class="hlt">model</span>. Our best forward-<span class="hlt">model</span> solution is a crustal layer with a fast axis orientation of 357° and 0.3 second delay time and a mantle layer with a 282° fast axis and 1.3 s delay time. Though the direction of the fast axis is consistent with previously published regional results, the 1.3 s delay time is larger than</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://eric.ed.gov/?q=evolutionary+AND+algorithms&pg=2&id=EJ691535','ERIC'); return false;" href="https://eric.ed.gov/?q=evolutionary+AND+algorithms&pg=2&id=EJ691535"><span>Acquisition of <span class="hlt">Complex</span> Systemic Thinking: Mental <span class="hlt">Models</span> of Evolution</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>d'Apollonia, Sylvia T.; Charles, Elizabeth S.; Boyd, Gary M.</p> <p>2004-01-01</p> <p>We investigated the impact of introducing college students to <span class="hlt">complex</span> adaptive systems on their subsequent mental <span class="hlt">models</span> of evolution compared to those of students taught in the same manner but with no reference to <span class="hlt">complex</span> systems. 
The students' mental <span class="hlt">models</span> (derived from similarity ratings of 12 evolutionary terms using the pathfinder algorithm)…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://eric.ed.gov/?q=Complex+AND+adaptive+AND+system&pg=5&id=EJ691535','ERIC'); return false;" href="http://eric.ed.gov/?q=Complex+AND+adaptive+AND+system&pg=5&id=EJ691535"><span>Acquisition of <span class="hlt">Complex</span> Systemic Thinking: Mental <span class="hlt">Models</span> of Evolution</span></a></p> <p><a target="_blank" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>d'Apollonia, Sylvia T.; Charles, Elizabeth S.; Boyd, Gary M.</p> <p>2004-01-01</p> <p>We investigated the impact of introducing college students to <span class="hlt">complex</span> adaptive systems on their subsequent mental <span class="hlt">models</span> of evolution compared to those of students taught in the same manner but with no reference to <span class="hlt">complex</span> systems. The students' mental <span class="hlt">models</span> (derived from similarity ratings of 12 evolutionary terms using the pathfinder algorithm)…</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1918478D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1918478D"><span>Socio-Environmental Resilience and <span class="hlt">Complex</span> Urban Systems <span class="hlt">Modeling</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Deal, Brian; Petri, Aaron; Pan, Haozhi; Goldenberg, Romain; Kalantari, Zahra; Cvetkovic, Vladimir</p> <p>2017-04-01</p> <p>The increasing pressure of climate change has inspired two normative agendas; socio-technical transitions and socio-ecological resilience, both sharing a <span class="hlt">complex</span>-systems epistemology (Gillard et al. 2016). Socio-technical solutions include a continuous, massive data gathering exercise now underway in urban places under the guise of developing a 'smart'(er) city. This has led to the creation of data-rich environments where large data sets have become central to monitoring and forming a response to anomalies. Some have argued that these kinds of data sets can help in planning for resilient cities (Norberg and Cumming 2008; Batty 2013). In this paper, we focus on a more nuanced, ecologically based, socio-environmental perspective of resilience planning that is often given less consideration. Here, we broadly discuss (and <span class="hlt">model</span>) the tightly linked, mutually influenced, social and biophysical subsystems that are critical for understanding urban resilience. We argue for the need to incorporate these sub system linkages into the resilience planning lexicon through the integration of systems <span class="hlt">models</span> and planning support systems. We make our case by first providing a context for urban resilience from a socio-ecological and planning perspective. We highlight the data needs for this type of resilient planning and compare it to currently collected data streams in various smart city efforts. This helps to define an approach for operationalizing socio-environmental resilience planning using robust systems <span class="hlt">models</span> and planning support systems. 
For this, we draw from our <span class="hlt">experiences</span> in coupling a spatio-temporal land use <span class="hlt">model</span> (the Landuse Evolution and impact Assessment <span class="hlt">Model</span> (LEAM)) with water quality and quantity <span class="hlt">models</span> in Stockholm Sweden. We describe the coupling of these systems <span class="hlt">models</span> using a robust Planning Support System (PSS) structural framework. We use the coupled <span class="hlt">model</span> simulations and PSS to analyze the connection between urban land use transformation (social) and water</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.osti.gov/scitech/biblio/22218170','SCIGOV-STC'); return false;" href="https://www.osti.gov/scitech/biblio/22218170"><span>Background <span class="hlt">modeling</span> for the GERDA <span class="hlt">experiment</span></span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Becerici-Schmidt, N.; Collaboration: GERDA Collaboration</p> <p>2013-08-08</p> <p>The neutrinoless double beta (0νββ) decay <span class="hlt">experiment</span> GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and <span class="hlt">modeling</span> the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary <span class="hlt">model</span> has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the <span class="hlt">model</span> such as the expected background and its decomposition in the signal region. According to the <span class="hlt">model</span> the main background contributions around Q{sub ββ} come from {sup 214}Bi, {sup 228}Th, {sup 42}K, {sup 60}Co and α emitting isotopes in the {sup 226}Ra decay chain, with a fraction depending on the assumed source positions.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://www.osti.gov/scitech/servlets/purl/1148957','SCIGOV-STC'); return false;" href="http://www.osti.gov/scitech/servlets/purl/1148957"><span>Design of <span class="hlt">Experiments</span>, <span class="hlt">Model</span> Calibration and Data Assimilation</span></a></p> <p><a target="_blank" href="http://www.osti.gov/scitech">SciTech Connect</a></p> <p>Williams, Brian J.</p> <p>2014-07-30</p> <p>This presentation provides an overview of emulation, calibration and <span class="hlt">experiment</span> design for computer <span class="hlt">experiments</span>. Emulation refers to building a statistical surrogate from a carefully selected and limited set of <span class="hlt">model</span> runs to predict unsampled outputs. The standard kriging approach to emulation of <span class="hlt">complex</span> computer <span class="hlt">models</span> is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering <span class="hlt">model</span> inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. 
254. Cell motility: Combining experiments with modeling

    NASA Astrophysics Data System (ADS)

    Rappel, Wouter-Jan

    2013-03-01

    Cell migration and motility is a pervasive process in many biological systems. It involves intra-cellular signal transduction pathways that eventually lead to membrane extension and contraction. Here we describe our efforts to combine quantitative experiments with theoretical and computational modeling to gain fundamental insights into eukaryotic cell motion. In particular, we will focus on the amoeboid motion of Dictyostelium discoideum cells. This work is supported by the National Institutes of Health (P01 GM078586).

255. Impact polymorphs of quartz: experiments and modelling

    NASA Astrophysics Data System (ADS)

    Price, M. C.; Dutta, R.; Burchell, M. J.; Cole, M. J.

    2013-09-01

    We have used the light gas gun at the University of Kent to perform a series of impact experiments firing quartz projectiles onto metal, quartz and sapphire targets. The aim is to quantify the amount of any high-pressure quartz polymorphs produced, and use these data to develop our hydrocode modelling to enable the prediction of the quantity of polymorphs produced during a planetary-scale impact.

256. Modeling Electronic Properties of Complex Oxides

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Karthik

    Complex oxides are a class of materials that have recently emerged as potential candidates for electronic applications owing to their interesting electronic properties.
    The goal of this dissertation is to develop a fundamental understanding of these electronic properties using a combination of first-principles approaches based on density functional theory (DFT) and Schrödinger-Poisson (SP) simulation. (Abstract shortened by ProQuest.)

257. Modeled effects of climate change on hydrologic and cover dynamics in Prairie Wetland Complexes

    USDA-ARS's Scientific Manuscript database

    Hydrologic variability in Prairie Pothole Region (PPR) wetland complexes depends on the year-to-year balance between precipitation and evaporative demand. We present the results of model experiments using a wetland ecosystem model, WETLANDSCAPE, to describe the sensitivity of wetland ecological con...

258. Data Assimilation and Model Evaluation Experiment Datasets.

    NASA Astrophysics Data System (ADS)

    Lai, Chung-Chieng A.; Qian, Wen; Glenn, Scott M.

    1994-05-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: 1) collection of observational data; 2) analysis and interpretation; 3) interpolation using the Optimum Thermal Interpolation System package; 4) quality control and re-analysis; and 5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research.
    Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usage include 1) ocean modeling and data assimilation studies, 2) diagnosis and theoretical studies, and 3) comparisons with locally detailed observations.

259. Data assimilation and model evaluation experiment datasets

    NASA Technical Reports Server (NTRS)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usage include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

260. Complexity reduction in context-dependent DNA substitution models.

    PubMed

    Majoros, William H.; Ohler, Uwe

    2009-01-15

    The modeling of conservation patterns in genomic DNA has become increasingly popular for a number of bioinformatic applications. While several systems developed to date incorporate context-dependence in their substitution models, the impact on computational complexity and generalization ability of the resulting higher-order models invites the question of whether simpler approaches to context modeling might permit appreciable reductions in model complexity and computational cost, without sacrificing prediction accuracy. We formulate several alternative methods for context modeling based on windowed Bayesian networks, and compare their effects on both accuracy and computational complexity for the task of discriminating functionally distinct segments in vertebrate DNA. Our results show that substantial reductions in the complexity of both the model and the associated inference algorithm can be achieved without reducing predictive accuracy.
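As a toy illustration of the kind of context-dependent substitution modeling discussed in the record above (not the authors' windowed Bayesian networks), the sketch below estimates substitution probabilities conditioned on a small window of flanking ancestral bases from a pair of aligned sequences; the sequences and window size are hypothetical.

    from collections import defaultdict

    def context_substitution_probs(ancestor, descendant, flank=1):
        # Count descendant bases conditioned on the ancestral base plus its
        # +/- flank neighbours, then normalise to conditional probabilities.
        counts = defaultdict(lambda: defaultdict(int))
        for i in range(flank, len(ancestor) - flank):
            context = ancestor[i - flank:i + flank + 1]
            counts[context][descendant[i]] += 1
        return {ctx: {base: n / sum(obs.values()) for base, n in obs.items()}
                for ctx, obs in counts.items()}

    # Hypothetical gap-free aligned sequences of equal length.
    anc = "ACGTACGGTACGCGTATGCA"
    des = "ACGTACAGTACGCGCATGCA"
    print(context_substitution_probs(anc, des, flank=1))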
261. Evaluation of a Neuromechanical Walking Control Model Using Disturbance Experiments

    PubMed Central

    Song, Seungmoon; Geyer, Hartmut

    2017-01-01

    Neuromechanical simulations have been used to study the spinal control of human locomotion which involves complex mechanical dynamics. So far, most neuromechanical simulation studies have focused on demonstrating the capability of a proposed control model in generating normal walking. As many of these models with competing control hypotheses can generate human-like normal walking behaviors, a more in-depth evaluation is required.
    Here, we conduct the more in-depth evaluation on a spinal-reflex-based control model using five representative gait disturbances, ranging from electrical stimulation to mechanical perturbation at individual leg joints and at the whole body. The immediate changes in muscle activations of the model are compared to those of humans across different gait phases and disturbance magnitudes. Remarkably similar response trends for the majority of investigated muscles and experimental conditions reinforce the plausibility of the reflex circuits of the model. However, the model's responses lack in amplitude for two experiments with whole-body disturbances, suggesting that in these cases the proposed reflex circuits need to be amplified by additional control structures such as location-specific cutaneous reflexes. A model that captures these selective amplifications would be able to explain both steady and reactive spinal control of human locomotion. Neuromechanical simulations that investigate hypothesized control models are complementary to gait experiments in better understanding the control of human locomotion. PMID:28381996

262. Complex Systems and Human Performance Modeling

    DTIC Science & Technology

    2013-12-01

    …human communication patterns can be implemented in a task network modeling tool. Although queues are a basic feature in many task network modeling… time. MODELING COMMUNICATIVE BEHAVIOR: Barabasi (2010) argues that human communication patterns are "bursty"; that is, the inter-event arrival… Having implemented the methods advocated by Clauset et al. in C3TRACE, we have grown more confident that the human communication data discussed above…

263. Wind Tunnel Modeling Of Wind Flow Over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Banks, D.; Cochran, B.

    2010-12-01

    This presentation will describe the findings of an atmospheric boundary layer (ABL) wind tunnel study conducted as part of the Bolund Experiment. This experiment was sponsored by Risø DTU (National Laboratory for Sustainable Energy, Technical University of Denmark) during the fall of 2009 to enable a blind comparison of various air flow models in an attempt to validate their performance in predicting airflow over complex terrain. Bolund hill sits 12 m above the water level at the end of a narrow isthmus. The island features a steep escarpment on one side, over which the airflow can be expected to separate.
    The island was equipped with several anemometer towers, and the approach flow over the water was well characterized. This study was one of only two physical model studies included in the blind model comparison, the other being a water plume study. The remainder were computational fluid dynamics (CFD) simulations, including both RANS and LES. Physical modeling of air flow over topographical features has been used since the middle of the 20th century, and the methods required are well understood and well documented. Several books have been written describing how to properly perform ABL wind tunnel studies, including ASCE manual of engineering practice 67. Boundary layer wind tunnel tests are the only modelling method deemed acceptable in ASCE 7-10, the most recent edition of the American Society of Civil Engineers standard that provides wind loads for buildings and other structures for building codes across the US. Since the 1970s, most tall structures undergo testing in a boundary layer wind tunnel to accurately determine the wind-induced loading. When compared to CFD, the US EPA considers a properly executed wind tunnel study to be equivalent to a CFD model with infinitesimal grid resolution and near infinite memory. One key reason for this widespread acceptance is that properly executed ABL wind tunnel studies will accurately simulate flow separation…

264. Experience from the ECORS program in regions of complex geology

    NASA Astrophysics Data System (ADS)

    Damotte, B.

    1993-04-01

    The French ECORS program was launched in 1983 by a cooperation agreement between universities and petroleum companies. Crustal surveys have tried to find explanations for the formation of geological features, such as rifts, mountain ranges or subsidence in sedimentary basins. Several seismic surveys were carried out, some across areas with complex geological structures. The seismic techniques and equipment used were those developed by petroleum geophysicists, adapted to the depth aimed at (30-50 km) and to various physical constraints encountered in the field. In France, ECORS has recorded 850 km of deep seismic lines onshore across plains and mountains, on various kinds of geological formations. Different variations of the seismic method (reflection, refraction, long-offset seismic) were used, often simultaneously. Multiple coverage profiling constitutes the essential part of this data acquisition. Vibrators and dynamite shots were employed with a spread generally 15 km long, but sometimes 100 km long. Some typical seismic examples show that obtaining crustal reflections essentially depends on two factors: (1) the type and structure of shallow formations, and (2) the sources used. Thus, when seismic energy is strongly absorbed across the first kilometers in shallow formations, or when these formations are highly structured, standard multiple-coverage profiling is not able to provide results beyond a few seconds.
    In this case, it is recommended to simultaneously carry out long-offset seismic in low multiple coverage. Other more methodological examples show: how the impact on the crust of a surface fault may be evaluated according to the seismic method implemented (VIBROSEIS 96-fold coverage or single dynamite shot); that vibrators make it possible to implement wide-angle seismic surveying with an offset 80 km long; and how to implement the seismic reflection method on complex formations in high mountains. All data were processed using industrial seismic software…

265. Experimente ueber den Einflusse von Metaboliten und Antimetaboliten am Modell von Trichomonas Vaginalis. I. Mitteilung Experimente mit dem Vitamin B2-Komplex (Experiments on the Influence of Metabolites and Antimetabolites on the Model of Trichomonas vaginalis. I. Communication: Experiments with the Vitamin-B2-Complex)

    DTIC Science & Technology

    …the pathogenic protozoa Trichomonas vaginalis have been studied. Material and methods are described in the paper. The efficacy of the individual admixtures from the vitamin-B2-complex is subsequently discussed. (Author)

266. Tank Pressure Control Experiment and Theoretical Modeling

    NASA Astrophysics Data System (ADS)

    Albayyari, Jihad M.

    1995-01-01

    Future space systems, such as Space Station Freedom and space defense systems, will require storage of cryogenic fluids in a low-gravity environment for extended periods of time. Heat leaks to the containment vessel lead to an increase in the temperature and pressure of the fluid. The absence of natural convection results in a non-uniform temperature which exacerbates the pressure increase, so a re-circulating liquid jet is necessary to mix the fluid. The Tank Pressure Control Experiment (TPCE) was therefore designed. This experiment has been flown twice on the Space Shuttle (STS-43 in 1991 and STS-52 in 1992). The experiments used Freon-113 at near-saturation conditions to simulate cryogenic fluids in space, with relatively low ullage volume (84 percent liquid fill). The TPCE results showed that low-velocity mixing is effective for pressure control in a nearly full tank. Multiple-burn missions using a single set of cryogenic propellant tanks, however, will consume 50 to 60 percent of the propellant during the first burn. The University of Cincinnati was awarded an In-Space Technology Experiment (IN-STEP) contract to re-fly the TPCE with a 40 percent liquid fill using Freon-113, and to theoretically model the heating and the mixing process inside the tank.
    Due to the absence of natural convection during the heating phase, a conduction model is needed to determine the temperature increase inside the tank. The heating model determined the time required for the pressure inside the tank to start increasing due to nucleate pool boiling at the heater surface. The mixing model consists of a non-penetrating laminar jet directed toward the liquid-vapor interface in the tank to induce condensation. The mixing model numerically solves the two-dimensional Navier-Stokes and energy equations, and predicts the velocity, pressure, and temperature inside the tank, as well as the condensation rate at the interface, which reduces the pressure in the tank.

267. Design of experiments and springback prediction for AHSS automotive components with complex geometry

    SciTech Connect

    Asgari, A.; Pereira, M.; Rolfe, B.; Dingle, M.; Hodgson, P.

    2005-08-05

    With the drive towards implementing Advanced High Strength Steels (AHSS) in the automotive industry, stamping engineers need to quickly answer questions about forming these strong materials into elaborate shapes. Commercially available codes have been successfully used to accurately predict formability, thickness and strains in complex parts. However, springback and twisting are still challenging subjects in numerical simulations of AHSS components. Design of Experiments (DOE) has been used in this paper to study the sensitivity of the implicit and explicit numerical results with respect to certain arrays of user input parameters in the forming of an AHSS component. Numerical results were compared to experimental measurements of the parts stamped in an industrial production line. The forming predictions of the implicit and explicit codes were in good agreement with the experimental measurements for the conventional steel grade, while lower accuracies were observed for the springback predictions. The forming predictions of the complex component with an AHSS material were also in good correlation with the respective experimental measurements. However, much lower accuracies were observed in its springback predictions.
    The number of integration points through the thickness and the tool offset were found to be of significant importance, while the coefficient of friction and Young's modulus (modeling input parameters) have no significant effect on the accuracy of the predictions for the complex geometry.

268. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems, while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

269. Hydrothermal alteration of granite rock cores: experiments and kinetic modelling

    NASA Astrophysics Data System (ADS)

    Kuesters, T.; Mueller, T.; Renner, J.

    2016-12-01

    The kinetics of water-rock interactions at elevated temperatures is key for understanding the dynamic evolution of porosity and permeability in natural and industrial systems. The implementation of rate data in numerical models simulating reactive transport is crucial to reliably predict subsurface fluid flow. The vast majority of data are constrained by single-phase powder experiments, which have inherently unrealistic, high surface areas and hamper consideration of microstructural effects on reaction progress. We present experimental results of batch experiments conducted at 200 °C for up to 60 days on a quartz-monzodiorite and pure water that bridge the gap between powder experiments and complex natural systems.
    The effect of reactive surface area was modelled by using bulk-rock powders, intact rock cubes, and thermally cracked rock cubes. Fluid composition was monitored (ICP-MS) and solid products were analysed after each experiment (SEM, EMPA). The evolution of fluid and solid compositions was compared to a self-coded geochemical transport model accounting for dissolution, nucleation, growth and reactive surface area. Experimental and modelling results indicate a fast increase of Na, Ca, K and Si in the fluid related to kinetically controlled dissolution of feldspar (plg and kfs) and quartz. Maximum concentrations of Al, Mg, and Fe are reached within two days, followed by a rapid decrease induced by secondary mineral precipitation. The amount and type of secondary phases strongly depend on the host substrate, indicating that local fluid composition and substrate surface are the controlling parameters. Observed reaction rates differ strongly between powder and rock cube experiments due to differences in reactive surface area. Combining kinetic data, gained by modelling the experimental results, with new information on the substrate-precipitate relationship will aid large-scale transport models to realistically predict chemo-mechanical changes and fluid flow in subsurface systems.

270. Observation simulation experiments with regional prediction models

    NASA Technical Reports Server (NTRS)

    Diak, George; Perkey, Donald J.; Kalb, Michael; Robertson, Franklin R.; Jedlovec, Gary

    1990-01-01

    Research efforts in FY 1990 included studies employing regional scale numerical models as aids in evaluating potential contributions of specific satellite observing systems (current and future) to numerical prediction. One study involves Observing System Simulation Experiments (OSSEs) which mimic operational initialization/forecast cycles but incorporate simulated Advanced Microwave Sounding Unit (AMSU) radiances as input data. The objective of this and related studies is to anticipate the potential value of data from these satellite systems, and develop applications of remotely sensed data for the benefit of short range forecasts. Techniques are also being used that rely on numerical model-based synthetic satellite radiances to interpret the information content of various types of remotely sensed image and sounding products. With this approach, evolution of simulated channel radiance image features can be directly interpreted in terms of the atmospheric dynamical processes depicted by a model. Progress is being made in a study using the internal consistency of a regional prediction model to simplify the assessment of forced diabatic heating and moisture initialization in reducing model spinup times.
    Techniques for model initialization are being examined, with focus on implications for potential applications of remote microwave observations, including AMSU and Special Sensor Microwave Imager (SSM/I), in shortening model spinup time for regional prediction.

271. Information, complexity and efficiency: The automobile model

    SciTech Connect

    Allenby, B.

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

272. Modeling Power Systems as Complex Adaptive Systems

    SciTech Connect

    Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J. V.

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood.
    We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

273. A resistive force model for complex intrusion in granular media

    NASA Astrophysics Data System (ADS)

    Zhang, Tingnan; Li, Chen; Goldman, Daniel

    2012-11-01

    Intrusion forces in granular media (GM) are best understood for simple shapes (like disks and rods) undergoing vertical penetration and horizontal drag. Inspired by a resistive force theory for sand-swimming, we develop a new two-dimensional resistive force model for intruders of arbitrary shape and intrusion path into GM in the vertical plane. We divide an intruder of complex geometry into small segments and approximate segmental forces by measuring forces on small flat plates in experiments. Both lift and drag forces on the plates are proportional to penetration depth, and depend sensitively on the angle of attack and the direction of motion. Summation of segmental forces over the intruder predicts the net forces on a c-leg, a flat leg, and a reversed c-leg rotated into GM about a fixed axle. The stress profiles are similar for GM of different particle sizes, densities, coefficients of friction, and volume fractions. We propose a universal scaling law applicable to all tested GM. By combining the new force model with a multi-body simulator, we can also predict the locomotion dynamics of a small legged robot on GM. Our force laws can provide a strict test of hydrodynamic-like approaches to model dense granular flows. Also affiliated to: School of Physics, Georgia Institute of Technology.
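The segment-wise bookkeeping described in the record above can be sketched as follows: the intruder is discretized into segments, each segment is assigned a stress that grows linearly with depth and depends on its orientation, and the per-segment forces are summed. The stress laws and coefficients here are simple placeholders, not the plate stresses measured by the authors.

    import numpy as np

    def net_granular_force(segments, k_lift=0.20, k_drag=0.05):
        # segments: iterable of (depth, area, attack_angle) tuples.
        # Placeholder resistive-force-style stresses: linear in depth, with a
        # simple sin/cos orientation dependence standing in for measured data.
        fx = fz = 0.0
        for depth, area, beta in segments:
            if depth <= 0.0:
                continue  # segments above the free surface feel no force
            fz += k_lift * depth * np.sin(beta) * area
            fx += k_drag * depth * np.cos(beta) * area
        return fx, fz

    # Hypothetical c-leg discretized into three segments (depth, area, angle).
    leg = [(0.01, 2e-4, 0.6), (0.02, 2e-4, 0.9), (0.03, 2e-4, 1.2)]
    print(net_granular_force(leg))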
274. Modelling and experiment of railway ballast vibrations

    NASA Astrophysics Data System (ADS)

    Zhai, W. M.; Wang, K. Y.; Lin, J. H.

    2004-03-01

    The vibration of railway ballast is a key factor causing track geometry change and increased track maintenance costs. So far, methods for analyzing and testing the vibration of the granular ballast have not been well established. In this paper, a five-parameter model for analysis of the ballast vibration is established based upon the hypothesis that the load transmission from a sleeper to the ballast approximately coincides with the cone distribution. The concepts of shear stiffness and shear damping of the ballast are introduced in the model in order to consider the continuity of the interlocking ballast granules. A full-scale field experiment is carried out to measure the ballast acceleration excited by moving trains. Theoretical simulation results agree well with the measured results. Hence the proposed ballast vibration model has been validated.

275. Ballistic Response of Fabrics: Model and Experiments

    NASA Astrophysics Data System (ADS)

    Orphal, Dennis L.; Walker Anderson, James D., Jr.

    2001-06-01

    Walker (1999) developed an analytical model for the dynamic response of fabrics to ballistic impact. From this model, the force F applied to the projectile by the fabric is derived to be F = (8/9)(ET*)h^3/R^2, where E is the Young's modulus of the fabric, T* is the "effective thickness" of the fabric, equal to the ratio of the areal density of the fabric to the fiber density, h is the displacement of the fabric on the axis of impact, and R is the radius of the fabric deformation or "bulge". Ballistic tests against Zylon(TM) fabric have been performed to measure h and R as a function of time. The results of these experiments are presented and analyzed in the context of the Walker model. Walker (1999), Proceedings of the 18th International Symposium on Ballistics, pp. 1231.
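The force law quoted in the fabrics record above, F = (8/9)(ET*)h^3/R^2, can be evaluated directly; the sketch below does so with illustrative parameter values that are assumptions for demonstration, not the Zylon test data.

    def walker_fabric_force(E, T_star, h, R):
        # F = (8/9) * E * T* * h^3 / R^2
        # E: fabric Young's modulus (Pa); T_star: effective thickness,
        # i.e. areal density / fiber density (m); h: displacement on the
        # impact axis (m); R: radius of the deformation "bulge" (m).
        # Returns the force on the projectile in newtons.
        return (8.0 / 9.0) * E * T_star * h**3 / R**2

    # Illustrative (assumed) values only.
    print(walker_fabric_force(E=1.5e11, T_star=3.0e-4, h=5.0e-3, R=2.0e-2))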
276. Integrated Modeling of Complex Optomechanical Systems

    NASA Astrophysics Data System (ADS)

    Andersen, Torben; Enmark, Anita

    2011-09-01

    Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons: first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time, mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top-ranked specialists found their way to Kiruna, and we believe that these proceedings will prove valuable during much future work.

277. Slip complexity in dynamic models of earthquake faults.

    PubMed Central

    Langer, J. S.; Carlson, J. M.; Myers, C. R.; Shaw, B. E.

    1996-01-01

    We summarize recent evidence that models of earthquake faults with dynamically unstable friction laws but no externally imposed heterogeneities can exhibit slip complexity. Two models are described here. The first is a one-dimensional model with velocity-weakening stick-slip friction; the second is a two-dimensional elastodynamic model with slip-weakening friction. Both exhibit small-event complexity and chaotic sequences of large characteristic events. The large events in both models are composed of Heaton pulses. We argue that the key ingredients of these models are reasonably accurate representations of the properties of real faults. PMID:11607671

278. Process modelling for Space Station experiments

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Rosenberger, Franz; Nadarajah, Arunan; Ouazzani, Jalil; Amiroudine, Sakir

    1990-01-01

    Examined here is the sensitivity of a variety of space experiments to residual accelerations. In all the cases discussed, the sensitivity is related to the dynamic response of a fluid. In some cases the sensitivity can be defined by the magnitude of the response of the velocity field. This response may involve motion of the fluid associated with internal density gradients, or the motion of a free liquid surface. For fluids with internal density gradients, the type of acceleration to which the experiment is sensitive will depend on whether buoyancy-driven convection must be small in comparison to other types of fluid motion, or fluid motion must be suppressed or eliminated. In the latter case, the experiments are sensitive to steady and low-frequency accelerations. For experiments such as the directional solidification of melts with two or more components, determination of the velocity response alone is insufficient to assess the sensitivity. The effect of the velocity on the composition and temperature field must be considered, particularly in the vicinity of the melt-crystal interface. As far as the response to transient disturbances is concerned, the sensitivity is determined by both the magnitude and frequency of the acceleration and the characteristic momentum and solute diffusion times.
    The microgravity environment, a numerical analysis of low-gravity tolerance of the Bridgman-Stockbarger technique, and modeling crystal growth by physical vapor transport in closed ampoules are discussed.

279. Modeling complex systems in the geosciences

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2013-03-01

    Many geophysical phenomena can be described as complex systems, involving phenomena such as extreme or "wild" events that often do not follow the Gaussian distribution that would be expected if the events were simply random and uncorrelated. For instance, some geophysical phenomena like earthquakes show a much higher occurrence of relatively large values than would a Gaussian distribution and so are examples of the "Noah effect" (named by Benoit Mandelbrot for the exceptionally heavy rain in the biblical flood). Other geophysical phenomena are examples of the "Joseph effect," in which a state is especially persistent, such as a spell of multiple consecutive hot days (heat waves) or several dry summers in a row. The Joseph effect was named after the biblical story in which Joseph's dream of seven fat cows and seven thin ones predicted 7 years of plenty followed by 7 years of drought.

280. Complex networks and simple models in biology

    PubMed Central

    de Silva, Eric; Stumpf, Michael P. H.

    2005-01-01

    The analysis of molecular networks, such as transcriptional, metabolic and protein interaction networks, has progressed substantially because of the power of models from statistical physics. Increasingly, the data are becoming so detailed (though not always complete or correct) that the simple models are reaching the limits of their usefulness. Here, we will discuss how network information can be described and to some extent quantified. In particular statistics offers a range of tools, such as model selection, which have not yet been widely applied in the analysis of biological networks. We will also outline a number of present challenges posed by biological network data in systems biology, and the extent to which these can be addressed by new developments in statistics, physics and applied mathematics.
    PMID:16849202

281. Research on the optimal selection method of image complexity assessment model index parameter

    NASA Astrophysics Data System (ADS)

    Zhu, Yong; Duan, Jin; Qian, Xiaofei; Xiao, Bo

    2015-10-01

    Target recognition is widely used in the national economy, space technology, national defense and other fields, and there is a great difference between the difficulty of target recognition and that of target extraction. Image complexity evaluates how difficult it is to extract the target from the background, and it can be used as a prior index of a target recognition algorithm's effectiveness. From the perspective of measuring target and background characteristics, the paper describes image complexity metric parameters through quantitative, accurate mathematical relationships. To address collinearity among the measurement parameters, the image complexity metric parameters are clustered with the grey (gray) correlation method. This enables the extraction and selection of metric parameters, improves the reliability and validity of the image complexity description and representation, and optimizes the image complexity assessment model. Experimental results demonstrate that when grey system theory is applied to image complexity analysis, the image complexity of target characteristics can be measured more accurately and effectively.
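The grey (gray) correlation clustering step mentioned in the record above rests on grey relational grades; a minimal sketch of the standard computation is given below, with a conventional distinguishing coefficient of 0.5 and hypothetical metric values, rather than the authors' specific formulation.

    import numpy as np

    def grey_relational_grades(reference, candidates, rho=0.5):
        # Normalise all sequences to [0, 1], form deviation sequences against
        # the reference, and average the grey relational coefficients per candidate.
        data = np.vstack([reference, candidates]).astype(float)
        data = (data - data.min(axis=1, keepdims=True)) / np.ptp(data, axis=1, keepdims=True)
        delta = np.abs(data[1:] - data[0])
        coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
        return coeff.mean(axis=1)  # one grade per candidate metric

    # Hypothetical image-complexity metrics measured on six images.
    ref = np.array([0.9, 0.4, 0.7, 0.2, 0.6, 0.8])          # reference sequence
    metrics = np.array([[0.8, 0.5, 0.6, 0.3, 0.6, 0.9],     # candidate metric 1
                        [0.1, 0.9, 0.2, 0.8, 0.3, 0.1]])    # candidate metric 2
    print(grey_relational_grades(ref, metrics))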
282. Spectroscopic studies of molybdenum complexes as models for nitrogenase

    SciTech Connect

    Walker, T. P.

    1981-05-01

    Because biological nitrogen fixation requires Mo, there is an interest in inorganic Mo complexes which mimic the reactions of nitrogen-fixing enzymes. Two such complexes are the dimer Mo₂O₄(cysteine)₂²⁻ and trans-Mo(N₂)₂(dppe)₂ (dppe = 1,2-bis(diphenylphosphino)ethane). The ¹H and ¹³C NMR of solutions of Mo₂O₄(cys)₂²⁻ are described. It is shown that in aqueous solution the cysteine ligands assume at least three distinct configurations. A step-wise dissociation of the cysteine ligand is proposed to explain the data. The Extended X-ray Absorption Fine Structure (EXAFS) of trans-Mo(N₂)₂(dppe)₂ is described and compared to the EXAFS of MoH₄(dppe)₂. The spectra are fitted to amplitude and phase parameters developed at Bell Laboratories. On the basis of this analysis, one can determine (1) that the dinitrogen complex contains nitrogen and the hydride complex does not, and (2) the correct Mo-N distance. This is significant because the Mo in both complexes is coordinated by four P atoms which dominate the EXAFS. A similar sort of interference is present in nitrogenase due to S coordination of the Mo in the enzyme. This model experiment indicates that, given adequate signal-to-noise ratios, the presence or absence of dinitrogen coordination to Mo in the enzyme may be determined by EXAFS using existing data analysis techniques. A new reaction between Mo₂O₄(cys)₂²⁻ and acetylene is described to the extent it is presently understood. A strong EPR signal is observed, suggesting the production of stable Mo(V) monomers. EXAFS studies support this suggestion. The Mo K-edge is described. The edge data suggest Mo(VI) is also produced in the reaction. Ultraviolet spectra suggest that cysteine is released in the course of the reaction.

283. Argonne Bubble Experiment Thermal Model Development II

    SciTech Connect

    Buechler, Cynthia Eileen

    2016-07-01

    This report describes the continuation of the work reported in "Argonne Bubble Experiment Thermal Model Development". The experiment was performed at Argonne National Laboratory (ANL) in 2014.
    A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels: 6, 12 and 15 kW. Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report describes the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements.

284. Using machine learning tools to model complex toxic interactions with limited sampling regimes.

    PubMed

    Bertin, Matthew J.; Moeller, Peter; Guillette, Louis J.; Chapman, Robert W.

    2013-03-19

    A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by one or a few stressors in natural systems. Thus, linking laboratory experiments, which are limited by practical considerations to a few stressors and a few levels of these stressors, to real-world conditions is constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
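A compact sketch of the two-step approach in the record above: random sampling of an n-dimensional stressor hyperspace, followed by fitting a small neural network to the responses. The toy response function, stressor count, and network settings are assumptions for illustration, with scikit-learn's MLPRegressor standing in for the authors' artificial neural networks.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Step 1: randomly sample the stressor hyperspace to define experimental conditions.
    n_stressors, n_conditions = 4, 120
    X = rng.uniform(0.0, 1.0, size=(n_conditions, n_stressors))

    # Toy biological endpoint with a nonlinear interaction between stressors 0 and 1
    # (a stand-in for responses measured under the sampled conditions).
    y = (1.0 - 0.6 * X[:, 0] * X[:, 1] - 0.3 * X[:, 2] ** 2
         + 0.05 * rng.normal(size=n_conditions))

    # Step 2: extract a model of the interactions with a small neural network.
    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(X, y)

    # Interrogate the fitted surface, e.g. along stressor 0 with the others held fixed.
    probe = np.column_stack([np.linspace(0.0, 1.0, 5),
                             np.full(5, 0.8), np.full(5, 0.2), np.full(5, 0.5)])
    print(model.predict(probe))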
  285. Experiments for foam model development and validation

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum-level finite element model of the blowing process.

  286. Experience with the CMS Event Data Model

    SciTech Connect

    Elmer, P.; Hegner, B.; Sexton-Kennedy, L. (Fermilab)

    2009-06-01

    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006, as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans the trigger farm up to the final physics analysis. Future plans, in particular dealing with schema evolution and scaling, will be discussed briefly.

  287. Reduced-Complexity Models for Network Performance Prediction

    DTIC Science & Technology

    2005-05-01

    The Internet consists of ISPs interconnected in complex ways, with millions of users sending traffic over the network. To understand such a complex system, it is necessary to develop accurate, yet simple, models to describe its performance. (Figure captions preserved from this record: number of downloaders; a network of ISP clouds in which the ISPs are connected via peering points.)

  288. Induced polarization of clay-sand mixtures: experiments and modelling

    NASA Astrophysics Data System (ADS)

    Okay, G.; Leroy, P.

    2012-04-01

    The complex conductivity of saturated unconsolidated sand-clay mixtures was experimentally investigated using two types of clay minerals, kaolinite and smectite (mainly Na-montmorillonite), in the frequency range 1.4 mHz - 12 kHz. The experiments were performed with various clay contents (1, 5, 20, and 100% by volume of the sand-clay mixture) and salinities (distilled water, 0.1 g/L, 1 g/L, and 10 g/L NaCl solution). Induced polarization measurements were performed with a cylindrical four-electrode sample holder associated with a SIP-Fuchs II impedance meter and non-polarizing Cu/CuSO4 electrodes. The results illustrate the strong impact of the CEC of the clay minerals upon the complex conductivity. The quadrature conductivity increases steadily with the clay content. We observe that the frequency dependence of the quadrature conductivity of sand-kaolinite mixtures is more pronounced than for sand-bentonite mixtures. For both types of clay, the quadrature conductivity seems to be fairly independent of the pore fluid salinity except at very low clay contents. The experimental data show good agreement with values predicted by our SIP model. This complex conductivity model considers the electrochemical polarization of the Stern layer coating the clay particles and the Maxwell-Wagner polarization. We use the differential effective medium theory to calculate the complex conductivity of the porous medium constituted of the grains and the electrolyte. The SIP model also includes the effect of the grain size distribution upon the complex conductivity spectra.
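Spectral induced polarization data of the kind described above are often summarized with a phenomenological Cole-Cole dispersion before a mechanistic model (such as the Stern-layer model in this record) is fitted. The sketch below evaluates a generic Pelton-type Cole-Cole complex conductivity; it is an illustration only, and the parameter values (chargeability, relaxation time, exponent) are hypothetical rather than taken from the study.

    # Generic Pelton-type Cole-Cole dispersion, often used as a first summary of
    # spectral induced polarization (SIP) data. All parameter values are hypothetical.
    import numpy as np

    def cole_cole_conductivity(freq_hz, sigma0=0.01, m=0.05, tau=0.1, c=0.5):
        """Complex conductivity (S/m) from the Cole-Cole resistivity form:
        rho(w) = rho0 * (1 - m * (1 - 1 / (1 + (i*w*tau)**c)))."""
        omega = 2.0 * np.pi * np.asarray(freq_hz, dtype=float)
        rho0 = 1.0 / sigma0
        rho = rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * omega * tau) ** c)))
        return 1.0 / rho

    freqs = np.logspace(-3, 4, 50)          # 1 mHz to 10 kHz, similar to the measured range
    sigma = cole_cole_conductivity(freqs)
    quadrature = sigma.imag                 # the "quadrature conductivity" discussed above
    print(quadrature[:5])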
  289. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and the WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits for understanding the networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506

  290. Complex Chebyshev-polynomial-based unified model (CCPBUM) neural networks

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1998-03-01

    In this paper, we propose complex Chebyshev-polynomial-based unified model neural networks for the approximation of complex-valued functions. Based on this approximate transformable technique, we derive the relationship between the single-layered neural network and the multilayered perceptron neural network. It is shown that the complex Chebyshev-polynomial-based unified model neural network can be represented as a functional link network based on Chebyshev polynomials. We also derive a new learning algorithm for the proposed network. It turns out that the complex Chebyshev-polynomial-based unified model neural network not only has the same capability as a universal approximator, but also has faster learning speed than a conventional complex feedforward/recurrent neural network.
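The functional-link idea mentioned in the preceding record can be illustrated compactly: expand the complex input in Chebyshev polynomials and fit the expansion weights by linear least squares. The sketch below is not the authors' CCPBUM algorithm; the target function, polynomial order, and least-squares training rule are illustrative assumptions.

    # Chebyshev functional-link fit of a complex-valued function (illustrative only).
    import numpy as np

    def chebyshev_features(x, order):
        """Chebyshev polynomials T_0..T_order evaluated at (possibly complex) x."""
        feats = [np.ones_like(x), x]
        for _ in range(2, order + 1):
            feats.append(2.0 * x * feats[-1] - feats[-2])   # T_k = 2x T_{k-1} - T_{k-2}
        return np.column_stack(feats[: order + 1])

    # Hypothetical complex-valued target sampled along a line in the complex plane.
    x = np.linspace(-1.0, 1.0, 200) + 0.1j * np.linspace(-1.0, 1.0, 200)
    y = np.exp(1j * np.pi * x) + 0.3 * x**2

    Phi = chebyshev_features(x, order=8)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # linear "learning" of the link weights
    y_hat = Phi @ weights
    print("max abs error:", np.max(np.abs(y - y_hat)))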
  291. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  292. Cellular Potts modeling of complex multicellular behaviors in tissue morphogenesis

    PubMed

    Hirashima, Tsuyoshi; Rens, Elisabeth G; Merks, Roeland M H

    2017-06-01

    Mathematical modeling is an essential approach for the understanding of complex multicellular behaviors in tissue morphogenesis. Here, we review the cellular Potts model (CPM; also known as the Glazier-Graner-Hogeweg model), an effective computational modeling framework. We discuss its usability for modeling complex developmental phenomena by examining four fundamental examples of tissue morphogenesis: (i) cell sorting, (ii) cyst formation, (iii) tube morphogenesis in kidney development, and (iv) blood vessel formation. The review provides an introduction for biologists for starting simulation analysis using the CPM framework. © 2017 Japanese Society of Developmental Biologists.

  293. An elementary method for implementing complex biokinetic models

    PubMed

    Leggett, R W; Eckerman, K F; Williams, L R

    1993-03-01

    Recent efforts to incorporate greater anatomical and physiological realism into biokinetic models have resulted in many cases in mathematically complex formulations that limit routine application of the models. This paper describes an elementary, computer-efficient technique for implementing complex compartmental models, with attention focused primarily on biokinetic models involving time-dependent transfer rates and recycling. The technique applies, in particular, to the physiologically based, age-specific biokinetic models recommended in Publication No. 56 of the International Commission on Radiological Protection, Age-Dependent Doses to Members of the Public from Intake of Radionuclides.
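Compartmental biokinetic models of the kind discussed in the preceding record reduce to a system of first-order ODEs with (possibly time-dependent) transfer coefficients and recycling paths. The fragment below integrates a small, entirely hypothetical two-compartment model with a time-dependent transfer rate and a recycling term; it is a generic illustration, not the ICRP scheme or the authors' technique.

    # Generic two-compartment model with a time-dependent transfer rate and recycling:
    # dA/dt = -k12(t)*A + k21*B,   dB/dt = k12(t)*A - (k21 + k_out)*B
    import numpy as np
    from scipy.integrate import solve_ivp

    def k12(t):
        return 0.5 + 0.3 * np.exp(-0.1 * t)      # hypothetical time-dependent rate (1/day)

    k21, k_out = 0.05, 0.02                      # recycling and excretion rates (1/day), assumed

    def rhs(t, y):
        a, b = y
        return [-k12(t) * a + k21 * b, k12(t) * a - (k21 + k_out) * b]

    sol = solve_ivp(rhs, t_span=(0.0, 100.0), y0=[1.0, 0.0], dense_output=True)
    print("retained activity at t = 100 d:", sol.y[:, -1].sum())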
  294. Realistic modeling of complex oxide materials

    NASA Astrophysics Data System (ADS)

    Solovyev, I. V.

    2011-01-01

    Since the electronic and magnetic properties of many transition-metal oxides can be efficiently controlled by external factors such as temperature, pressure, or electric or magnetic field, they are regarded as promising materials for various applications. From the viewpoint of the electronic structure, these phenomena are frequently related to the behavior of a small group of states located near the Fermi level. The basic idea of this project is to construct a model for the low-energy states, derive all the parameters rigorously on the basis of density functional theory (DFT), and study this model by modern techniques. After a brief review of the method, the abilities of this approach are illustrated on a number of examples, including multiferroic manganites and spin-orbital-lattice coupled phenomena in RVO3 (where R is a trivalent element).

  295. Complex Network Modeling with an Emulab HPC

    DTIC Science & Technology

    2012-09-01

    Actual Joint Tactical Radio System (JTRS) radios, Operations Network (OPNET) emulations, and GNU open-source software-defined-radio software/firmware/hardware emulations can be accommodated. On the other hand, simulation tools such as MATLAB, Optimized Network Engineering Tools (OPNET), NS2, and CORE (a modeling environment from Vitech) are also discussed. Index terms: network emulation, Emulab, OPNET.

  296. Forces between permanent magnets: experiments and model

    NASA Astrophysics Data System (ADS)

    González, Manuel I.

    2017-03-01

    This work describes a very simple, low-cost experimental setup designed for measuring the force between permanent magnets. The experiment consists of placing one of the magnets on a balance, attaching the other magnet to a vertical height gauge, aligning both magnets carefully, and measuring the load on the balance as a function of the gauge reading. A theoretical model is proposed to compute the force, assuming uniform magnetisation and based on laws and techniques accessible to undergraduate students. A comparison between the model and the experimental results is made, and good agreement is found at all distances investigated. In particular, it is also found that the force behaves as r⁻⁴ at large distances, as expected.
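The r⁻⁴ behaviour reported in the preceding record is what the point-dipole approximation predicts: two coaxial magnetic dipoles attract (or repel) with F = 3·μ₀·m₁·m₂/(2π·r⁴). The short sketch below evaluates that standard far-field formula; the magnetic moments used are hypothetical values, not those of the magnets in the experiment.

    # Force between two coaxial point magnetic dipoles: F = 3*mu0*m1*m2 / (2*pi*r^4).
    # Magnetic moments below are hypothetical; the formula is the standard far-field result.
    import numpy as np

    MU0 = 4.0e-7 * np.pi                 # vacuum permeability (T*m/A)

    def coaxial_dipole_force(m1, m2, r):
        return 3.0 * MU0 * m1 * m2 / (2.0 * np.pi * r**4)

    m1 = m2 = 1.2                        # magnetic moments (A*m^2), assumed
    for r in (0.05, 0.10, 0.20):         # separations in metres
        print(f"r = {r:.2f} m  ->  F = {coaxial_dipole_force(m1, m2, r):.4f} N")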
  297. Bucky gel actuator displacement: experiment and model

    NASA Astrophysics Data System (ADS)

    Ghamsari, A. K.; Jin, Y.; Zegeye, E.; Woldesenbet, E.

    2013-02-01

    The bucky gel actuator (BGA) is a dry electroactive nanocomposite which is driven with a few volts. BGA's remarkable features make this tri-layered actuator a potential candidate for morphing applications. However, most of these applications would require a better understanding of the effective parameters that influence BGA displacement. In this study, various sets of experiments were designed to investigate the effect of several parameters on the maximum lateral displacement of BGA. Two input parameters, voltage and frequency, and three material/design parameters, carbon nanotube type, thickness, and weight fraction of constituents, were selected. A new thickness ratio term was also introduced to study the role of individual layers on BGA displacement. A model was established to predict the maximum BGA displacement based on the effect of these parameters. This model showed good agreement with results reported in the literature. In addition, an important factor in the design of BGA-based devices, lifetime, was investigated.

  298. Toward Modeling the Intrinsic Complexity of Test Problems

    ERIC Educational Resources Information Center

    Shoufan, Abdulhadi

    2017-01-01

    The concept of intrinsic complexity explains why different problems of the same type, tackled by the same problem solver, can require different times to solve and yield solutions of different quality. This paper proposes a general four-step approach that can be used to establish a model for the intrinsic complexity of a problem class in terms of…

  299. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  300. Computational Modeling of Uranium Hydriding and Complexes

    SciTech Connect

    Balasubramanian, K; Siekhaus, W J; McLean, W

    2003-02-03

    Uranium hydriding is one of the most important processes that has received considerable attention over many years. Although many experimental and modeling studies have been carried out concerning the thermochemistry, diffusion kinetics and mechanisms of U-hydriding, very little is known about the electronic structure and electronic features that govern the U-hydriding process. Yet it is these electronic features that control the activation barrier and thus the rate of hydriding. Moreover, the role of impurities and the role of the product UH₃ on the hydriding rate are not fully understood. An early study by Condon and Larson concerns the kinetics of the U-hydrogen system and a mathematical model for the U-hydriding process. They proposed that hydrogen diffuses in the reactant phase before nucleation to form the hydride phase, and that the reaction is first order for hydriding and zero order for dehydriding. Condon has also calculated and measured the reaction rates of U-hydriding and proposed a diffusion model for U-hydriding. This model was found to be in excellent agreement with the experimental reaction rates. From the slopes of the Arrhenius plot the activation energy was calculated as 6.35 kcal/mole. In a subsequent study Kirkpatrick formulated a closed-form approximate solution to Condon's equation. Bloch and Mintz have proposed kinetics and mechanisms for the U-H reaction over a wide range of pressures and temperatures. They have discussed their results through two models: one which considers hydrogen diffusion through a protective UH₃ product layer, and a second in which hydride growth occurs at the hydride-metal interface. These authors obtained two-dimensional fits of experimental data to the pressure-temperature reactions. Kirkpatrick and Condon have obtained a linear solution to the hydriding of uranium. These authors showed that the calculated reaction rates compared quite well with the experimental data at a hydrogen pressure of 1 atm. Powell…
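The activation energy quoted in the preceding record (6.35 kcal/mol from the Arrhenius plot) translates directly into a temperature dependence of the hydriding rate constant via k(T) = A·exp(-Ea/RT). The sketch below evaluates that relation; the pre-exponential factor is an arbitrary placeholder, so only the ratios between temperatures are meaningful.

    # Arrhenius temperature dependence, k(T) = A * exp(-Ea / (R*T)), using the
    # activation energy reported above (6.35 kcal/mol). A is an arbitrary placeholder.
    import numpy as np

    R = 1.987204e-3          # gas constant in kcal/(mol*K)
    EA = 6.35                # activation energy, kcal/mol (from the record)
    A = 1.0                  # unknown pre-exponential factor (placeholder)

    def k(T_kelvin):
        return A * np.exp(-EA / (R * T_kelvin))

    for T in (300.0, 400.0, 500.0):
        print(f"T = {T:.0f} K  ->  k/A = {k(T):.3e}")
    print("k(500 K) / k(300 K) =", k(500.0) / k(300.0))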
  301. Woven into the Fabric of Experience: Residential Adventure Education and Complexity

    ERIC Educational Resources Information Center

    Williams, Randall

    2013-01-01

    Residential adventure education is a surprisingly powerful developmental experience. This paper reports on a mixed-methods study focused on English primary school pupils aged 9-11, which used complexity theory to throw light on the synergistic inter-relationships between the different aspects of that experience. Broadly expressed, the research…

  302. Communicating about Loss: Experiences of Older Australian Adults with Cerebral Palsy and Complex Communication Needs

    ERIC Educational Resources Information Center

    Dark, Leigha; Balandin, Susan; Clemson, Lindy

    2011-01-01

    Loss and grief is a universal human experience, yet little is known about how older adults with a lifelong disability, such as cerebral palsy, and complex communication needs (CCN) experience loss and manage the grieving process. In-depth interviews were conducted with 20 Australian participants with cerebral palsy and CCN to determine the types…

  303. Complex Perceptions of Identity: The Experiences of Student Combat Veterans in Community College

    ERIC Educational Resources Information Center

    Hammond, Shane Patrick

    2016-01-01

    This qualitative study illustrates how complex perceptions of identity influence the community college experience for student veterans who have been in combat, creating barriers to their overall persistence. The collective experiences of student combat veterans at two community colleges in northwestern Massachusetts are presented, and a Combat…
  307. Model complexity and performance: How far can we simplify?

    NASA Astrophysics Data System (ADS)

    Raick, C.; Soetaert, K.; Grégoire, M.

    2006-07-01

    Handling model complexity and reliability is a key area of research today. While complex models containing sufficient detail have become possible due to increased computing power, they often lead to too much uncertainty. On the other hand, very simple models often crudely oversimplify the real ecosystem and cannot be used for management purposes. Starting from a complex and validated 1D pelagic ecosystem model of the Ligurian Sea (NW Mediterranean Sea), we derived simplified aggregated models in which either the unbalanced algal growth, the functional group diversity or the explicit description of the microbial loop was sacrificed. To overcome the problem of data availability with adequate spatial and temporal resolution, the outputs of the complex model are used as the baseline of perfect knowledge to calibrate the simplified models. Objective criteria of model performance were used to compare the simplified models' results to the complex model output and to the available data at the DYFAMED station in the central Ligurian Sea. We show that even the simplest (NPZD) model is able to represent the global ecosystem features described by the complex model (e.g. primary and secondary productions, particulate organic matter export flux, etc.). However, a certain degree of sophistication in the formulation of some biogeochemical processes is required to produce realistic behaviors (e.g. the phytoplankton competition, the potential carbon or nitrogen limitation of the zooplankton ingestion, the model trophic closure, etc.). In general, a 9 state-variable model that has the functional group diversity removed, but which retains the bacterial loop and the unbalanced algal growth, performs best.
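The NPZD structure referred to above (nutrient-phytoplankton-zooplankton-detritus) is the simplest of the aggregated models compared in that study. The fragment below integrates a generic NPZD system with Michaelis-Menten nutrient uptake and linear loss terms; the formulation and parameter values are textbook-style assumptions, not the calibrated Ligurian Sea model.

    # Generic NPZD (nutrient-phytoplankton-zooplankton-detritus) box model.
    # Formulation and parameters are illustrative, not the calibrated model from the study.
    import numpy as np
    from scipy.integrate import solve_ivp

    mu_max, k_n = 1.0, 0.5             # phytoplankton growth rate (1/d), half-saturation (mmol N/m3)
    graz, k_p = 0.6, 0.8               # zooplankton grazing rate and half-saturation
    m_p, m_z, remin = 0.05, 0.1, 0.1   # mortality and remineralisation rates (1/d)

    def npzd(t, y):
        n, p, z, d = y
        uptake = mu_max * n / (k_n + n) * p
        grazing = graz * p / (k_p + p) * z
        return [remin * d - uptake,
                uptake - grazing - m_p * p,
                0.7 * grazing - m_z * z,           # 70% assimilation efficiency (assumed)
                0.3 * grazing + m_p * p + m_z * z - remin * d]

    sol = solve_ivp(npzd, (0.0, 120.0), [2.0, 0.2, 0.05, 0.0], max_step=0.5)
    print("final state (N, P, Z, D):", sol.y[:, -1])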
  308. Prequential Analysis of Complex Data with Adaptive Model Reselection

    PubMed

    Clarke, Jennifer; Clarke, Bertrand

    2009-11-01

    In prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data-generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood-weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade-off between model list bias and model list variability in cases where the data are very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e. the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias-variance tradeoff in statistical modeling.
  309. Theoretical Modeling and Electromagnetic Response of Complex Metamaterials

    DTIC Science & Technology

    2017-03-06

    AFRL-AFOSR-VA-TR-2017-0042, final report (Nov 2016), Andrea Alu, University of Texas at Austin. …based on parity-time symmetric metasurfaces, and various advances in electromagnetic and acoustic theory and applications. Our findings have opened…

  310. Model Analysis of Complex Systems Behavior using MADS

    NASA Astrophysics Data System (ADS)

    Vesselinov, V. V.; O'Malley, D.

    2016-12-01

    Evaluation of the robustness (reliability) of model predictions is challenging for models representing complex system behavior. Frequently, in science and engineering applications related to complex systems, several alternative physics models may describe the available data equally well and be physically reasonable based on the available conceptual understanding. However, these alternative models could give very different predictions about the future states of the analyzed system. Furthermore, in the case of complex systems, we often must do modeling with an incomplete understanding of the underlying physical processes and model parameters. The analyses of model predictions representing complex system behavior are particularly challenging when we are quantifying uncertainties of rare events in the model prediction space that can have major consequences (also called "black swans"). These types of analyses are also computationally challenging. Here, we demonstrate the application of a general high-performance computational tool for Model Analysis & Decision Support (MADS; http://mads.lanl.gov), which can be applied to perform analyses using any external physics or systems model. The coupling between MADS and the external model can be performed using different methods. MADS is implemented in Julia, a high-level, high-performance dynamic programming language for technical computing (http://mads.lanl.gov/, https://github.com/madsjulia/Mads.jl, http://mads.readthedocs.org). MADS has been applied to perform analyses for environmental-management and water-energy-food nexus problems. To demonstrate MADS capabilities and functionalities, we analyze a series of synthetic problems consistent with actual real-world problems.
  311. Complex emotions, complex problems: understanding the experiences of perinatal depression among new mothers in urban Indonesia

    PubMed

    Andajani-Sutjahjo, Sari; Manderson, Lenore; Astbury, Jill

    2007-03-01

    In this article, we explore how Javanese women identify and speak of symptoms of depression in late pregnancy and early postpartum, and describe their subjective accounts of mood disorders. The study, conducted in the East Java region of Indonesia in 2000, involved in-depth interviews with a subgroup of women (N = 41) who scored above the cutoff score of 12/13 on the Edinburgh Postnatal Depression Scale (EPDS) during pregnancy, at six weeks postpartum, or on both occasions. This sample was taken from a larger cohort study (N cohort = 488) researching the sociocultural factors that contribute to women's emotional well-being in early motherhood. The women used a variety of Indonesian and Javanese terms to explain their emotional states during pregnancy and in the early postpartum period, some of which coincided with the feelings described on the EPDS and others of which did not. Women attributed their mood variations to multiple causes, including premarital pregnancy, chronic illness in the family, marital problems, lack of support from partners or family networks, their husband's unemployment, and insufficient family income due to giving up their own paid work. We argue for the importance of understanding the context of childbearing in order to interpret the meaning of depression within complex social, cultural, and economic contexts.

  312. Do complex models increase prediction of complex behaviours? Predicting driving ability in people with brain disorders

    PubMed

    Innes, Carrie R H; Lee, Dominic; Chen, Chen; Ponder-Sutton, Agate M; Melzer, Tracy R; Jones, Richard D

    2011-09-01

    Prediction of complex behavioural tasks via relatively simple modelling techniques, such as logistic regression and discriminant analysis, often has limited success. We hypothesized that, to more accurately model complex behaviour, more complex models, such as kernel-based methods, would be needed. To test this hypothesis, we assessed the value of six modelling approaches for predicting driving ability based on performance on computerized sensory-motor and cognitive tests (SMCTests™) in 501 people with brain disorders. The models included three models previously used to predict driving ability (discriminant analysis, DA; binary logistic regression, BLR; and nonlinear causal resource analysis, NCRA) and three kernel methods (support vector machine, SVM; product kernel density, PK; and kernel product density, KP). At the classification level, two kernel methods were substantially more accurate at classifying on-road pass or fail (SVM 99.6%, PK 99.8%) than the other models (DA 76%, BLR 78%, NCRA 74%, KP 81%). However, accuracy decreased substantially for all of the kernel models when cross-validation techniques were used to estimate prediction of on-road pass or fail in an independent referral group (SVM 73-76%, PK 72-73%, KP 71-72%), but decreased only slightly for DA (74-75%) and BLR (75-76%). Cross-validation of NCRA was not possible. In conclusion, while kernel-based models are successful at modelling complex data at a classification level, this is likely to be due to overfitting of the data, which does not lead to an improvement in accuracy in independent data over and above the accuracy of other, less complex modelling techniques.
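The gap reported above between classification accuracy and cross-validated accuracy is the usual signature of overfitting, and it is easy to reproduce on synthetic data. The sketch below compares apparent (training) accuracy with cross-validated accuracy for a flexible RBF-kernel SVM and a plain logistic regression; the data are simulated, and the numbers will not match the study, which used its own SMCTests predictors.

    # Apparent vs. cross-validated accuracy: a flexible kernel model can look nearly
    # perfect on the data it was fitted to, yet generalize no better than a simple model.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                               flip_y=0.15, random_state=0)   # noisy synthetic "pass/fail" data

    models = {
        "logistic regression": LogisticRegression(max_iter=2000),
        "RBF-kernel SVM": SVC(kernel="rbf", gamma=5.0, C=10.0),   # deliberately flexible
    }
    for name, model in models.items():
        apparent = model.fit(X, y).score(X, y)                    # accuracy on the fitting data
        cv = cross_val_score(model, X, y, cv=5).mean()            # estimate for independent data
        print(f"{name:20s}  apparent = {apparent:.2f}   cross-validated = {cv:.2f}")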
The <span class="hlt">models</span> included three <span class="hlt">models</span> previously used to predict driving ability (discriminant analysis, DA; binary logistic regression, BLR; and nonlinear causal resource analysis, NCRA) and three kernel methods (support vector machine, SVM; product kernel density, PK; and kernel product density, KP). At the classification level, two kernel methods were substantially more accurate at classifying on-road pass or fail (SVM 99.6%, PK 99.8%) than the other <span class="hlt">models</span> (DA 76%, BLR 78%, NCRA 74%, KP 81%). However, accuracy decreased substantially for all of the kernel <span class="hlt">models</span> when cross-validation techniques were used to estimate prediction of on-road pass or fail in an independent referral group (SVM 73-76%, PK 72-73%, KP 71-72%) but decreased only slightly for DA (74-75%) and BLR (75-76%). Cross-validation of NCRA was not possible. In conclusion, while kernel-based <span class="hlt">models</span> are successful at <span class="hlt">modelling</span> <span class="hlt">complex</span> data at a classification level, this is likely to be due to overfitting of the data, which does not lead to an improvement in accuracy in independent data over and above the accuracy of other less <span class="hlt">complex</span> <span class="hlt">modelling</span> techniques.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUOSAH33A..07M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUOSAH33A..07M"><span><span class="hlt">Model</span> Analysis of Vertical Carbon Export in a Mesocosm <span class="hlt">Experiment</span></span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mathesius, S.</p> <p>2016-02-01</p> <p>Marine biogeochemical <span class="hlt">models</span> can be improved by developing a deeper understanding of the necessary degree of <span class="hlt">model</span> <span class="hlt">complexity</span>, which includes for example the investigation of non-unique solutions in parameter optimization problems. The examination of uniqueness and uncertainties of optimal parameter estimates might disclose the relevance of individual processes. With our data based <span class="hlt">model</span> analysis we will explore the possibility of explaining similar patterns in observations that could possibly be explained equally well by different parameter settings or even different parameterizations. Here, we show the results of an optimality based plankton ecosystem <span class="hlt">model</span> (Carbon:Nitrogen-Regulated Ecosystem <span class="hlt">Model</span> with Coccolithophores, CN-REcoM&Co). The <span class="hlt">model</span> includes carbonate chemistry (with air-sea flux of carbon dioxide) and its setup was designed to simulate plankton dynamics observed during a mesocosm <span class="hlt">experiment</span> (PeECE III, Bergen, Norway, 2005). A special <span class="hlt">model</span> feature is the explicit consideration of extracellular gels that form from coagulation of algal exudates. These macrogels interact (aggregate) with detritus and become incorporated into sinking particles. In our analysis we focus on those parameters that affect photosynthesis, exudation of polysaccharides, grazing, particle aggregation and sinking. 
We will discuss how variations of these parameter values induce variability in chlorophyll a, dissolved inorganic carbon (DIC), and particulate organic carbon (POC) concentrations, as well as carbon export flux. Our primary concern is to disclose the uniqueness of the <span class="hlt">model</span> solution that explains the observed DIC drawdown as well as the build-up and sinking loss of POC from the upper layers.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5302026','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5302026"><span>Fatigue Damage of Collagenous Tissues: <span class="hlt">Experiment</span>, <span class="hlt">Modeling</span> and Simulation Studies</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Martin, Caitlin; Sun, Wei</p> <p>2017-01-01</p> <p>Mechanical fatigue damage is a critical issue for soft tissues and tissue-derived materials, particularly for musculoskeletal and cardiovascular applications; yet, our understanding of the fatigue damage process is incomplete. Soft tissue fatigue <span class="hlt">experiments</span> are often difficult and time-consuming to perform, which has hindered progress in this area. However, the recent development of soft-tissue fatigue-damage constitutive <span class="hlt">models</span> has enabled simulation-based fatigue analyses of tissues under various conditions. Computational simulations facilitate highly controlled and quantitative analyses to study the distinct effects of various loading conditions and design features on tissue durability; thus, they are advantageous over <span class="hlt">complex</span> fatigue <span class="hlt">experiments</span>. Although significant work to calibrate the constitutive <span class="hlt">models</span> from fatigue <span class="hlt">experiments</span> and to validate predictability remains, further development in these areas will add to our knowledge of soft-tissue fatigue damage and will facilitate the design of durable treatments and devices. In this review, the experimental, <span class="hlt">modeling</span>, and simulation efforts to study collagenous tissue fatigue damage are summarized and critically assessed. PMID:25955007</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23091020','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23091020"><span>Size and <span class="hlt">complexity</span> in <span class="hlt">model</span> financial systems.</span></a></p> <p><a target="_blank" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M</p> <p>2012-11-06</p> <p>The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic <span class="hlt">model</span> of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. 
Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3494937','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3494937"><span>Size and <span class="hlt">complexity</span> in <span class="hlt">model</span> financial systems</span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M.</p> <p>2012-01-01</p> <p>The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic <span class="hlt">model</span> of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in “confidence” in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. 
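Of the three contagion channels listed in the preceding record, the counterparty-default channel is the simplest to sketch: a bank fails, its creditors write down their interbank assets, and any bank whose capital buffer is exhausted fails in turn. The code below runs such a cascade on a random interbank network; the balance-sheet numbers and network density are illustrative assumptions, and the liquidity-hoarding, asset-price, and confidence channels of the paper are not modelled.

    # Minimal counterparty-default cascade on a random interbank lending network.
    # Balance sheets and network parameters are illustrative, not calibrated to any data.
    import numpy as np

    rng = np.random.default_rng(1)
    n_banks, p_link = 50, 0.1
    exposure = 1.0                                  # size of each interbank loan (arbitrary units)

    # loans[i, j] = 1 means bank i has lent `exposure` to bank j (i is exposed to j's default).
    loans = (rng.random((n_banks, n_banks)) < p_link).astype(float)
    np.fill_diagonal(loans, 0.0)

    capital = rng.uniform(0.5, 3.0, size=n_banks)   # capital buffers, hypothetical
    failed = np.zeros(n_banks, dtype=bool)
    failed[rng.integers(n_banks)] = True            # one initial idiosyncratic failure

    # Propagate: creditors of failed banks lose `exposure` per failed counterparty.
    while True:
        losses = loans @ failed.astype(float) * exposure
        newly_failed = (~failed) & (losses >= capital)
        if not newly_failed.any():
            break
        failed |= newly_failed

    print(f"{failed.sum()} of {n_banks} banks fail in the cascade")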
  317. Modeling a Ca2+ Channel/BKCa Channel Complex at the Single-Complex Level

    PubMed Central

    Cox, Daniel H.

    2014-01-01

    BKCa-channel activity often affects the firing properties of neurons, the shapes of neuronal action potentials (APs), and in some cases the extent of neurotransmitter release. It has become clear that BKCa channels often form complexes with voltage-gated Ca2+ channels (CaV channels) such that when a CaV channel is activated, the ensuing influx of Ca2+ activates its closely associated BKCa channel. Thus, in modeling the electrical properties of neurons, it would be useful to have quantitative models of CaV/BKCa complexes. Furthermore, in a population of CaV/BKCa complexes, all BKCa channels are not exposed to the same Ca2+ concentration at the same time. Thus, stochastic rather than deterministic models are required. To date, however, no such models have been described. Here, however, I present a stochastic model of a CaV2.1/BKCa(α-only) complex, as might be found in a central nerve terminal. The CaV2.1/BKCa model is based on kinetic modeling of its two component channels at physiological temperature. Surprisingly, the CaV2.1/BKCa model predicts that although the CaV channel will open nearly every time during a typical cortical AP, its associated BKCa channel is expected to open in only 30% of trials, and this percentage is very sensitive to the duration of the AP, the distance between the two channels in the complex, and the presence of fast internal Ca2+ buffers. Also, the model predicts that the kinetics of the BKCa currents of a population of CaV2.1/BKCa complexes will not be limited by the kinetics of the CaV2.1 channel, and during a train of APs, the current response of the complex is expected to faithfully follow even very rapid trains. Aside from providing insight into how these complexes are likely to behave in vivo, the models presented here could also be of use more generally as components of higher-level models of neural function. PMID:25517147

  318. Micro Wire-Drawing: Experiments and Modelling

    NASA Astrophysics Data System (ADS)

    Berti, G. A.; Monti, M.; Bietresato, M.; D'Angelo, L.

    2007-05-01

    In the paper, the authors propose to adopt micro wire-drawing as a key for investigating models of micro forming processes. The reason for this choice is that the process can be considered quasi-stationary, in which the tribological conditions at the interface between the material and the die can be assumed to be constant during the whole deformation. Two different materials have been investigated: (i) a low-carbon steel and (ii) a non-ferrous metal (copper). The micro hardness and tensile tests performed on each drawn wire show a thin hardened layer (more evident than in macro wires) on the external surface of the wire, and hardening decreases rapidly from the surface layer to the center. For the copper wire this effect is reduced, and a traditional material constitutive model seems adequate to predict the experiments. For the low-carbon steel a modified constitutive material model has been proposed and implemented in an FE code, giving better agreement with the experiments.

  320. Simulation modelling as a tool to diagnose the complex networks of security systems

    NASA Astrophysics Data System (ADS)

    Iskhakov, S. Y.; Shelupanov, A. A.; Meshcheryakov, R. V.

    2017-01-01

    In the article, the questions of modelling of complex security system networks are considered. A simulation model of the operation of such complexes and an approbation of the proposed approach to incident identification are presented. The approach is based on the detection of uncharacteristic alterations in the network operation mode. The results of the experiment allow one to draw a conclusion on the possibility of applying the proposed model to analyse the current status of heterogeneous security systems. It is also confirmed that applying short-term forecasting methods to the analysis of monitoring system data allows one to automate the process of forming the criteria to reveal incidents.
  321. Generalized complex geometry, generalized branes and the Hitchin sigma model

    NASA Astrophysics Data System (ADS)

    Zucchini, Roberto

    2005-03-01

    Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non trivially to a novel cohomology associated with the branes as generalized complex submanifolds.
  322. Experimental porcine model of complex fistula-in-ano

    PubMed Central

    A Ba-Bai-Ke-Re, Ma-Mu-Ti-Jiang; Chen, Hui; Liu, Xue; Wang, Yun-Hai

    2017-01-01

    AIM: To establish and evaluate an experimental porcine model of fistula-in-ano. METHODS: Twelve healthy pigs were randomly divided into two groups. Under general anesthesia, the experimental group underwent rubber band ligation surgery, and the control group underwent an artificial damage technique. Clinical, magnetic resonance imaging (MRI) and histopathological evaluations were performed on the 38th and 48th day after surgery in the two groups, respectively. RESULTS: There were no significant differences between the experimental group and the control group in general characteristics such as body weight, gender, and the number of fistulas (P > 0.05). In the experimental group, 15 fistulas were confirmed clinically, 13 complex fistulas were confirmed by MRI, and 11 complex fistulas were confirmed by histopathology; the success rate of complex fistula model establishment was 83.33%. Among the 18 fistulas in the control group, 5 fistulas were confirmed clinically, 4 complex fistulas were confirmed by MRI, and 3 fistulas were confirmed by histopathology; the success rate of fistula model establishment was 27.78%. Thus, the success rate of the rubber band ligation group was significantly higher than that of the control group (P < 0.05). CONCLUSION: Rubber band ligation is a stable and reliable method to establish complex fistula-in-ano models. Large animal models of complex anal fistulas can be used for the diagnosis and treatment of anal fistulas. PMID:28348488

  323. Finite element analysis to model complex mitral valve repair

    PubMed

    Labrosse, Michel; Mesana, Thierry; Baxter, Ian; Chan, Vincent

    2016-01-01

    Although finite element analysis has been used to model simple mitral repair, it has not been used to model complex repair. A virtual mitral valve model was successful in simulating normal and abnormal valve function. Models were then developed to simulate an edge-to-edge repair and repair employing quadrangular resection. Stress contour plots demonstrated increased stresses along the mitral annulus, corresponding to the annuloplasty. The role of finite element analysis in guiding clinical practice remains undetermined.
<span class="hlt">Models</span> were then developed to simulate an edge-to-edge repair and repair employing quadrangular resection. Stress contour plots demonstrated increased stresses along the mitral annulus, corresponding to the annuloplasty. The role of finite element analysis in guiding clinical practice remains undetermined.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3401896','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3401896"><span>Bladder Cancer: A Simple <span class="hlt">Model</span> Becomes <span class="hlt">Complex</span></span></a></p> <p><a target="_blank" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pierro, Giovanni Battista Di; Gulia, Caterina; Cristini, Cristiano; Fraietta, Giorgio; Marini, Lorenzo; Grande, Pietro; Gentile, Vincenzo; Piergentili, Roberto</p> <p>2012-01-01</p> <p>Bladder cancer is one of the most frequent malignancies in developed countries and it is also characterized by a high number of recurrences. Despite this, several authors in the past reported that only two altered molecular pathways may genetically explain all cases of bladder cancer: one involving the FGFR3 gene, and the other involving the TP53 gene. Mutations in any of these two genes are usually predictive of the malignancy final outcome. This cancer may also be further classified as low-grade tumors, which is always papillary and in most cases superficial, and high-grade tumors, not necessarily papillary and often invasive. This simple way of considering this pathology has strongly changed in the last few years, with the development of genome-wide studies on expression profiling and the discovery of small non-coding RNA affecting gene expression. An easy search in the OMIM (On-line Mendelian Inheritance in Man) database using “bladder cancer” as a query reveals that genes in some way connected to this pathology are approximately 150, and some authors report that altered gene expression (up- or down-regulation) in this disease may involve up to 500 coding sequences for low-grade tumors and up to 2300 for high-grade tumors. In many clinical cases, mutations inside the coding sequences of the above mentioned two genes were not found, but their expression changed; this indicates that also epigenetic modifications may play an important role in its development. Indeed, several reports were published about genome-wide methylation in these neoplastic tissues, and an increasing number of small non-coding RNA are either up- or down-regulated in bladder cancer, indicating that impaired gene expression may also pass through these metabolic pathways. Taken together, these data reveal that bladder cancer is far to be considered a simple <span class="hlt">model</span> of malignancy. 
  325. Modeling of the Edwards pipe experiment

    SciTech Connect

    Tiselj, I.; Petelin, S.

    1995-12-31

    The Edwards pipe experiment is used as one of the basic benchmarks for two-phase flow codes due to its simple geometry and the wide range of phenomena that it covers. Edwards and O'Brien filled a 4-m-long pipe with liquid water at 7 MPa and 502 K and ruptured one end of the tube. They measured pressure and void fraction during the blowdown. Important phenomena observed were the pressure rarefaction wave, flashing onset, critical two-phase flow, and a void fraction wave. The experimental data were used to analyze the capabilities of the RELAP5/MOD3.1 six-equation two-phase flow model and to examine two different numerical schemes: one from the RELAP5/MOD3.1 code and one from our own code, which was based on characteristic upwind discretization.

  326. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M. A.; Rattanapote, M. K.; Heimdahl, E. R.; Erikson, W. E.; Curran, P. O.; Atwood, A. I.

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of the unreacted explosive was recovered in the end-heated experiment and less than 30 percent in the side-heated test.

  327. Nanofluid Drop Evaporation: Experiment, Theory, and Modeling

    NASA Astrophysics Data System (ADS)

    Gerken, William James

    Nanofluids, stable colloidal suspensions of nanoparticles in a base fluid, have potential applications in the heat transfer, combustion and propulsion, manufacturing, and medical fields. Experiments were conducted to determine the evaporation rate of room-temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved, high-resolution drop images were collected for the determination of the early-time evaporation rate (D²/D₀² > 0.75), shown to exhibit D-square law behavior, and of surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at around 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A model was developed to describe the evaporation of the nanofluid pendant drops based on D-square law analysis for the gas domain and a description of the reduction in liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. Model predictions are in relatively good agreement with experiment, within a few percent of the measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via modeling. It was found that the same mechanism for nanofluid evaporation rate reduction used to explain pendant drops could be used for sessile drops, namely a reduction in the ethanol available for evaporation at the drop surface caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present modeling predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J
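    The D-square law invoked in the nanofluid record above has a simple closed form: the squared drop diameter decays linearly in time, D²(t) = D₀² − K·t, so the drop lifetime is D₀²/K. A minimal sketch in Python (the evaporation constant and the ~15% loading-related reduction are illustrative assumptions, not values from the thesis):

        def d_squared(t, d0=1.0e-3, k=1.0e-7):
            """D-square law: squared diameter (m^2) at time t (s) for a drop of initial
            diameter d0 (m) and evaporation constant k (m^2/s)."""
            return max(d0 ** 2 - k * t, 0.0)

        def lifetime(d0=1.0e-3, k=1.0e-7):
            """Time (s) for the drop to evaporate completely under the D-square law."""
            return d0 ** 2 / k

        k_pure = 1.0e-7               # m^2/s, assumed value for pure ethanol
        k_nano = k_pure * (1 - 0.15)  # ~15% lower at a few percent nAl, per the record
        print(f"pure ethanol drop lifetime: {lifetime(k=k_pure):.1f} s")
        print(f"nanofluid drop lifetime   : {lifetime(k=k_nano):.1f} s")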
<span class="hlt">Experiments</span> were conducted to determine the evaporation rate of room temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved high-resolution drop images were collected for the determination of early-time evaporation rate (D2/D 02 > 0.75), shown to exhibit D-square law behavior, and surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at around 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A <span class="hlt">model</span> was developed to describe the evaporation of the nanofluid pendant drops based on D-square law analysis for the gas domain and a description of the reduction in liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. <span class="hlt">Model</span> predictions are in relatively good agreement with <span class="hlt">experiment</span>, within a few percent of measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via <span class="hlt">modeling</span>. It was found that the same mechanism for nanofluid evaporation rate reduction used to explain pendant drops could be used for sessile drops. That mechanism is a reduction in evaporation rate due to a reduction in available ethanol for evaporation at the drop surface caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present <span class="hlt">modeling</span> predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100027331','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100027331"><span>The Use of Behavior <span class="hlt">Models</span> for Predicting <span class="hlt">Complex</span> Operations</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Gore, Brian F.</p> <p>2010-01-01</p> <p><span class="hlt">Modeling</span> and simulation (M&S) plays an important role when <span class="hlt">complex</span> human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path <span class="hlt">models</span>, aircraft <span class="hlt">models</span>, scheduling <span class="hlt">models</span>, human performance <span class="hlt">models</span> (HPMs), and bioinformatics <span class="hlt">models</span> among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. 
  329. STATegra EMS: an Experiment Management System for complex next-generation omics experiments

    PubMed Central

    2014-01-01

    High-throughput sequencing assays are now routinely used to study different aspects of genome organization. As decreasing costs and widespread availability of sequencing enable more laboratories to use sequencing assays in their research projects, the number of samples and replicates in these experiments can quickly grow to several dozens of samples and thus require standardized annotation, storage and management of preprocessing steps. As a part of the STATegra project, we have developed an Experiment Management System (EMS) for high throughput omics data that supports different types of sequencing-based assays such as RNA-seq, ChIP-seq, Methyl-seq, etc., as well as proteomics and metabolomics data. The STATegra EMS provides metadata annotation of experimental design, samples and processing pipelines, as well as storage of different types of data files, from raw data to ready-to-use measurements. The system has been developed to provide research laboratories with a freely-available, integrated system that offers a simple and effective way for experiment annotation and tracking of analysis procedures. PMID:25033091

  330. STATegra EMS: an Experiment Management System for complex next-generation omics experiments

    PubMed

    Hernández-de-Diego, Rafael; Boix-Chova, Noemi; Gómez-Cabrero, David; Tegner, Jesper; Abugessaisa, Imad; Conesa, Ana

    2014-01-01

    High-throughput sequencing assays are now routinely used to study different aspects of genome organization. As decreasing costs and widespread availability of sequencing enable more laboratories to use sequencing assays in their research projects, the number of samples and replicates in these experiments can quickly grow to several dozens of samples and thus require standardized annotation, storage and management of preprocessing steps. As a part of the STATegra project, we have developed an Experiment Management System (EMS) for high throughput omics data that supports different types of sequencing-based assays such as RNA-seq, ChIP-seq, Methyl-seq, etc., as well as proteomics and metabolomics data. The STATegra EMS provides metadata annotation of experimental design, samples and processing pipelines, as well as storage of different types of data files, from raw data to ready-to-use measurements. The system has been developed to provide research laboratories with a freely-available, integrated system that offers a simple and effective way for experiment annotation and tracking of analysis procedures.
  331. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    PubMed Central

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian-triangle meshes are employed for the surface presentation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
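    Two of the mesh quantities named in the record above, surface area and surface-enclosed volume, have compact expressions on a closed, consistently oriented triangle mesh: the area is the sum of the triangle areas, and the enclosed volume follows from the divergence theorem as a sum of signed tetrahedron volumes taken against the origin. A minimal Python sketch on a toy tetrahedron (illustrative only, not the authors' Lagrangian-triangle pipeline):

        import numpy as np

        def area_and_volume(vertices, triangles):
            """Surface area and enclosed volume of a closed, outward-oriented triangle mesh."""
            v = np.asarray(vertices, dtype=float)
            area = 0.0
            volume = 0.0
            for i, j, k in triangles:
                a, b, c = v[i], v[j], v[k]
                cross = np.cross(b - a, c - a)
                area += 0.5 * np.linalg.norm(cross)
                volume += np.dot(a, np.cross(b, c)) / 6.0   # signed tetrahedron volume
            return area, abs(volume)

        # Toy mesh: unit right tetrahedron with outward-facing triangles.
        verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
        print(area_and_volume(verts, tris))   # enclosed volume should be 1/6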
  332. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    PubMed

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2012-12-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multiprotein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian-triangle meshes are employed for the surface presentation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes.

  333. Between complexity of modelling and modelling of complexity: An essay on econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, C.

    2013-09-01

    Econophysics is an emerging field dealing with complex systems and emergent properties. A deeper analysis of themes studied by econophysicists shows that research conducted in this field can be decomposed into two different computational approaches: “statistical econophysics” and “agent-based econophysics”. This methodological scission complicates the definition of the complexity used in econophysics. Therefore, this article aims to clarify what kind of emergences and complexities we can find in econophysics in order to better understand, on one hand, the current scientific modes of reasoning this new field provides; and on the other hand, the future methodological evolution of the field.
  334. Using fMRI to Test Models of Complex Cognition

    ERIC Educational Resources Information Center

    Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam

    2008-01-01

    This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…

  335. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    With the restructuring of the electric power industry, the bilateral power transaction (BPT) mode has become a typical market organization, and a proper model that captures its characteristics is urgently needed. However, such a model has been lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks can provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price-advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is relatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network is topologically robust to random member failures but fragile against deliberate attacks, and that it can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
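    The statistical features reported in the power-market record above (a power-law degree distribution, low clustering, a longer average path length) are straightforward to compute once a transaction graph is available. A minimal Python sketch using networkx on a stand-in scale-free graph (a generic proxy, not the authors' BPT network):

        import networkx as nx

        # Stand-in topology: a Barabasi-Albert scale-free graph as a rough proxy
        # for a market network; a real study would build the graph from transactions.
        G = nx.barabasi_albert_graph(n=500, m=2, seed=42)

        degree_hist = nx.degree_histogram(G)              # node counts for degree 0, 1, 2, ...
        clustering = nx.average_clustering(G)              # low for tree-like scale-free graphs
        path_len = nx.average_shortest_path_length(G)      # defined since the graph is connected

        print("maximum degree     :", len(degree_hist) - 1)
        print("average clustering :", round(clustering, 3))
        print("average path length:", round(path_len, 2))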
  336. Using fMRI to Test Models of Complex Cognition

    ERIC Educational Resources Information Center

    Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam

    2008-01-01

    This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…

  337. Tips on Creating Complex Geometry Using Solid Modeling Software

    ERIC Educational Resources Information Center

    Gow, George

    2008-01-01

    Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…

  338. Tips on Creating Complex Geometry Using Solid Modeling Software

    ERIC Educational Resources Information Center

    Gow, George

    2008-01-01

    Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…
  339. Zebrafish as an emerging model for studying complex brain disorders

    PubMed Central

    Kalueff, Allan V.; Stewart, Adam Michael; Gerlai, Robert

    2014-01-01

    The zebrafish (Danio rerio) is rapidly becoming a popular model organism in pharmacogenetics and neuropharmacology. Both larval and adult zebrafish are currently used to increase our understanding of brain function, dysfunction, and their genetic and pharmacological modulation. Here we review the developing utility of zebrafish in the analysis of complex brain disorders (including, for example, depression, autism, psychoses, drug abuse and cognitive disorders), also covering zebrafish applications towards the goal of modeling major human neuropsychiatric and drug-induced syndromes. We argue that zebrafish models of complex brain disorders and drug-induced conditions have become a rapidly emerging critical field in translational neuropharmacology research. PMID:24412421

  340. Experiment-Driven Modeling of Plasmonic Nanostructures

    NASA Astrophysics Data System (ADS)

    Hryn, Alexander John

    Plasmonic nanostructures can confine light at their surface in the form of surface plasmon polaritons (SPPs) or localized surface plasmons (LSPs) depending on their geometry. SPPs are excited on nano- and micropatterned surfaces, where the typical feature size is on the order of the wavelength of light. LSPs, on the other hand, can be excited on nanoparticles much smaller than the diffraction limit. In both cases, far-field optical measurements are used to infer the excited plasmonic modes, and theoretical models are used to verify those results. Typically, these theoretical models are tailored to match the experimental nanostructures in order to explain observed phenomena. In this thesis, I explore incorporating components of experimental procedures into the models to increase the accuracy of the simulated result, and to inform the design of future experiments. First, I examine SPPs on nanostructured metal films in the form of low-symmetry moire plasmonic crystals. I created a general Bragg model to understand and predict the excited SPP modes in moire plasmonic crystals based on the nanolithography masks used in their fabrication. This model makes use of experimental parameters such as periodicity, azimuthal rotation, and number of sequential exposures to predict the energies of excited SPP modes and the opening of plasmonic band gaps. The model is further expanded to apply to multiscale gratings, which have patterns that contain hierarchical periodicities: a sub-micron primary periodicity, and microscale superperiodicity. A new set of rules was established to determine how superlattice SPPs are excited, and informed development of a new fabrication technique to create superlattices with multiple primary periodicities that absorb light over a wider spectral range than other plasmonic structures. The second half of the thesis is based on development of finite-difference time-domain (FDTD) simulations of plasmonic nanoparticles. I created a new technique to model
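    The Bragg picture behind the grating-coupled SPP modes in the plasmonics record above reduces, for normal incidence on a square lattice of period a, to the momentum-matching condition (2π/λ)·n_spp = (2π/a)·√(i² + j²), i.e. λ_ij ≈ a·n_spp/√(i² + j²) for diffraction order (i, j). A minimal Python sketch (the period, the constant effective SPP index, and the orders are illustrative assumptions, not values from the thesis):

        import math

        def spp_mode_wavelengths(period_nm, n_spp, max_order=2):
            """Approximate grating-coupled SPP wavelengths (nm) for a square lattice at
            normal incidence, assuming a constant effective SPP index n_spp."""
            modes = {}
            for i in range(max_order + 1):
                for j in range(max_order + 1):
                    if i == 0 and j == 0:
                        continue
                    modes[(i, j)] = period_nm * n_spp / math.hypot(i, j)
            return modes

        # Assumed values: 600 nm period, effective index ~1.05 (metal/air interface).
        for (i, j), lam in sorted(spp_mode_wavelengths(600, 1.05).items()):
            print(f"({i},{j}) SPP mode expected near {lam:.0f} nm")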
The <span class="hlt">model</span> is further expanded to apply to multiscale gratings, which have patterns that contain hierarchical periodicities: a sub-micron primary periodicity, and microscale superperiodicity. A new set of rules was established to determine how superlattice SPPs are excited, and informed development of a new fabrication technique to create superlattices with multiple primary periodicities that absorb light over a wider spectral range than other plasmonic structures. The second half of the thesis is based on development of finite-difference time-domain (FDTD) simulations of plasmonic nanoparticles. I created a new technique to <span class="hlt">model</span></p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_15");'>15</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li class="active"><span>17</span></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003Ap%26SS.287...69G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003Ap%26SS.287...69G"><span>MHD <span class="hlt">Models</span> and Laboratory <span class="hlt">Experiments</span> of Jets</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gardiner, T. A.; Frank, A.; Blackman, E. G.; Lebedev, S. V.; Chittenden, J. P.; Ampleford, D.; Bland, S. N.; Ciardi, A.; Sherlock, M.; Haines, M. G.</p> <p></p> <p>Jet research has long relied upon a combination of analytical, observational and numerical studies to elucidate the <span class="hlt">complex</span> phenomena involved. One element missing from these studies (which other physical sciences utilize) is the controlled experimental investigation of such systems. With the advent of high-power lasers and fast Z-pinch machines it is now possible to experimentally study similar systems in a laboratory setting. Such investigations can contribute in two useful ways. They can be used for comparison with numerical simulations as a means to validate simulation codes. More importantly, however, such investigations can also be used to complement other jet research, leading to fundamentally new knowledge. In the first part of this article, we analyze the evolution of magnetized wide-angle winds in a collapsing environment. 
  342. Multi-scale modelling for HEDP experiments on Orion

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high-energy long-pulse lasers with high-intensity short pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high-power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and by fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high-intensity lasers. We describe the approach taken at AWE: to integrate a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed, and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as in understanding effects which are difficult to observe experimentally.
  343. MatOFF: A Tool For Analyzing Behaviorally-Complex Neurophysiological Experiments

    PubMed Central

    Genovesio, Aldo; Mitz, Andrew R.

    2007-01-01

    The simple operant conditioning originally used in behavioral neurophysiology 30 years ago has given way to complex and sophisticated behavioral paradigms; so much so that early general-purpose programs for analyzing neurophysiological data are ill-suited for complex experiments. The trend has been to develop custom software for each class of experiment, but custom software can have serious drawbacks. We describe here a general-purpose software tool for behavioral and electrophysiological studies, called MatOFF, that is especially suited for processing neurophysiological data gathered during the execution of complex behaviors. Written in the MATLAB programming language, MatOFF solves the problem of handling complex analysis requirements in a unique and powerful way. While other neurophysiological programs are either a loose collection of tools or append MATLAB as a post-processing step, MatOFF is an integrated environment that supports MATLAB scripting within the event search engine, safely isolated in a programming sandbox. The results from scripting are stored separately, but in parallel with the raw data, and are thus available to all subsequent MatOFF analysis and display processing. An example from a recently published experiment shows how all the features of MatOFF work together to analyze complex experiments and mine neurophysiological data in efficient ways. PMID:17604115

  344. Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study

    PubMed Central

    Buyel, Johannes Felix; Fischer, Rainer

    2014-01-01

    Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches, thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems. PMID:24514765
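    The "software-guided setup of optimal experiment combinations" in the DoE record above starts, in the simplest case, from a factorial enumeration of the factors under study; fractional or optimal designs then select a subset of these runs. A minimal full-factorial sketch in Python (the factor names and levels are hypothetical placeholders, not the published design):

        from itertools import product

        # Hypothetical factors for a transient-expression experiment (placeholders only).
        factors = {
            "promoter": ["35S", "double 35S"],
            "plant_age_days": [35, 42, 49],
            "incubation_temp_C": [22, 25],
        }

        names = list(factors)
        runs = [dict(zip(names, levels)) for levels in product(*factors.values())]

        print(f"{len(runs)} runs in the full factorial design")
        for run in runs[:3]:
            print(run)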
  345. Characterization of complex systems using the design of experiments approach: transient protein expression in tobacco as a case study

    PubMed

    Buyel, Johannes Felix; Fischer, Rainer

    2014-01-31

    Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems.
  346. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes

    PubMed

    Taha, Mohamed; Khan, Imran; Coutinho, João A. P.

    2016-04-01

    With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against the oxidative damage implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as the primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted using the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). Spectroscopic UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions.
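    The overall stability constants evaluated in the europium record above follow the standard cumulative-formation definition used in potentiometric work; for a ternary complex of a metal M, primary ligand L and amino acid A (charges omitted, generic notation rather than the authors'), a minimal statement in LaTeX is:

        \mathrm{M} + p\,\mathrm{L} + q\,\mathrm{A} \;\rightleftharpoons\; \mathrm{M}\mathrm{L}_p\mathrm{A}_q,
        \qquad
        \beta_{pq} \;=\; \frac{[\mathrm{M}\mathrm{L}_p\mathrm{A}_q]}{[\mathrm{M}]\,[\mathrm{L}]^{p}\,[\mathrm{A}]^{q}}

    The quantities typically reported and refined from titration data are the logarithms log β_pq, together with the ligand protonation constants that enter the same mass-balance equations.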
  347. BIM Automation: Advanced Modeling Generative Process for Complex Structures

    NASA Astrophysics Data System (ADS)

    Banfi, F.; Fai, S.; Brumana, R.

    2017-08-01

    The new paradigm of the complexity of modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). The generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational Basis Splines (NURBS) with multiple levels of detail (Mixed and ReverseLoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of a modern structure for the courtyard of the West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.

  348. Pedigree models for complex human traits involving the mitochondrial genome

    SciTech Connect

    Schork, N. J.; Guo, S. W.

    1993-12-01

    Recent biochemical and molecular-genetic discoveries concerning variations in human mtDNA have suggested a role for mtDNA mutations in a number of human traits and disorders. Although the importance of these discoveries cannot be emphasized enough, the complex natures of mitochondrial biogenesis, mutant mtDNA phenotype expression, and the maternal inheritance pattern exhibited by mtDNA transmission make it difficult to develop models that can be used routinely in pedigree analyses to quantify and test hypotheses about the role of mtDNA in the expression of a trait. In the present paper, the authors describe complexities inherent in mitochondrial biogenesis and genetic transmission and show how these complexities can be incorporated into appropriate mathematical models. The authors offer a variety of likelihood-based models which account for the complexities discussed. The derivation of the models is meant to stimulate the construction of statistical tests for putative mtDNA contributions to a trait. Results of simulation studies which make use of the proposed models are described. The results of the simulation studies suggest that, although pedigree models of mtDNA effects can be reliable, success in mapping chromosomal determinants of a trait does not preclude the possibility that mtDNA determinants exist for the trait as well. Shortcomings inherent in the proposed models are described in an effort to expose areas in need of additional research. 58 refs., 5 figs., 2 tabs.
Multiscale Model for the Assembly Kinetics of Protein Complexes.
PubMed
Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao
2016-02-04

The assembly of proteins into high-order complexes is a general mechanism for these biomolecules to implement their versatile functions in cells. Natural evolution has developed various assembling pathways for specific protein complexes to maintain their stability and proper activities. Previous studies have provided numerous examples of the misassembly of protein complexes leading to severe biological consequences. Although the research focusing on protein complexes has started to move beyond the static representation of quaternary structures to the dynamic aspect of their assembly, the current understanding of the assembly mechanism of protein complexes is still largely limited. To tackle this problem, we developed a new multiscale modeling framework. This framework combines a lower-resolution rigid-body-based simulation with a higher-resolution Cα-based simulation method so that protein complexes can be assembled with both structural details and computational efficiency. We applied this model to a homotrimer and a heterotetramer as simple test systems. Consistent with experimental observations, our simulations indicated very different kinetics between protein oligomerization and dimerization. The formation of protein oligomers is a multistep process that is much slower than dimerization but thermodynamically more stable. Moreover, we showed that even the same protein quaternary structure can have very diverse assembly pathways under different binding constants between subunits, which is important for regulating the functions of protein complexes. Finally, we revealed that the binding between subunits in a complex can be synergistically strengthened during assembly without considering allosteric regulation or conformational changes. Therefore, our model provides a useful tool to understand the general principles of protein complex assembly.

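
The kinetic contrast drawn in the record above (multistep oligomer assembly versus one-step dimerization) can be illustrated with a toy mass-action scheme. The sketch below integrates rate equations for sequential trimer assembly (monomer to dimer to trimer); the rate constants and concentrations are invented for illustration, and this is not the multiscale rigid-body/Cα model described in the record.

    # Toy mass-action kinetics: sequential assembly of a trimer from monomers.
    # Rate constants and the initial concentration are arbitrary illustrative values.
    from scipy.integrate import solve_ivp

    kon1, koff1 = 1.0, 0.01   # monomer + monomer <-> dimer
    kon2, koff2 = 1.0, 0.01   # dimer + monomer  <-> trimer

    def rhs(t, y):
        a, d, tr = y
        v1 = kon1 * a * a - koff1 * d     # net dimer formation rate
        v2 = kon2 * d * a - koff2 * tr    # net trimer formation rate
        return [-2.0 * v1 - v2, v1 - v2, v2]

    y0 = [1.0, 0.0, 0.0]                  # start from monomers only
    sol = solve_ivp(rhs, (0.0, 50.0), y0, dense_output=True, rtol=1e-8)

    for t in (1.0, 10.0, 50.0):
        a, d, tr = sol.sol(t)
        print(f"t={t:5.1f}  monomer={a:.3f}  dimer={d:.3f}  trimer={tr:.3f}")
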
Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling
NASA Technical Reports Server (NTRS)
Mog, Robert A.
1997-01-01

Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

Analogue experiments as benchmarks for models of lava flow emplacement
NASA Astrophysics Data System (ADS)
Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.
2013-12-01

… experimental observations of the effect of wind on the surface thermal structure of a viscous flow, which could be used to benchmark a thermal heat loss model. We will also briefly present more complex analogue experiments using wax material. These experiments present discontinuous advance behavior and a dual surface thermal structure, with low (solidified) vs. high (hot liquid exposed at the surface) surface temperature regions.
Emplacement models should tend to reproduce these two features, also observed on lava flows, to better predict the hazard of lava inundation.

Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach
EPA Science Inventory

Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult…
Modeling of Complex Adaptive Systems in Air Operations
DTIC Science & Technology
2006-09-01

… control of C3 in an increasingly complex military environment. Control theory is a multidisciplinary science associated with dynamic systems and, while … (AFRL-IF-RS-TR-2006-282, In-House Final Technical Report, September 2006).

Improving a regional model using reduced complexity and parameter estimation
USGS Publications Warehouse
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01

The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions.
More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model …

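
The workflow in the record above, a parsimonious forward model coupled to nonlinear parameter estimation with confidence intervals, can be sketched generically. The example below uses a stand-in exponential forward model, synthetic observations and scipy's least-squares routine in place of GFLOW and UCODE; the parameter names and values are placeholders.

    # Generic model-calibration sketch: fit a few parameters of a cheap forward model
    # to observations by weighted nonlinear least squares (stand-in for GFLOW + UCODE).
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(0)
    x_obs = np.linspace(0.0, 10.0, 15)            # e.g. observation locations
    true_params = np.array([2.0, 0.3])            # hypothetical "true" parameter values

    def forward_model(params, x):
        a, b = params
        return a * np.exp(-b * x)                 # placeholder for the real flow model

    observations = forward_model(true_params, x_obs) + rng.normal(0.0, 0.05, x_obs.size)
    weights = np.full(x_obs.size, 1.0 / 0.05)     # inverse of the assumed observation error

    def residuals(params):
        return weights * (forward_model(params, x_obs) - observations)

    fit = least_squares(residuals, x0=[1.0, 0.1])
    covariance = np.linalg.inv(fit.jac.T @ fit.jac)   # linearized parameter covariance
    stderr = np.sqrt(np.diag(covariance))
    for name, val, se in zip(("a", "b"), fit.x, stderr):
        print(f"{name} = {val:.3f} +/- {1.96 * se:.3f} (approx. 95% interval)")
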
Real-data Calibration Experiments On A Distributed Hydrologic Model
NASA Astrophysics Data System (ADS)
Brath, A.; Montanari, A.; Toth, E.

The increasing availability of extended information on the study watersheds does not generally overcome the need for the determination through calibration of at least a part of the parameters of distributed hydrologic models. The complexity of such models, making the computations highly intensive, has often prevented an extensive analysis of calibration issues. The purpose of this study is an evaluation of the validation results of a series of automatic calibration experiments (using the shuffled complex evolution method, Duan et al., 1992) performed with a highly conceptualised, continuously simulating, distributed hydrologic model applied on the real data of a mid-sized Italian watershed. Major flood events that occurred in the 1990-2000 decade are simulated with the parameters obtained by the calibration of the model against discharge data observed at the closure section of the watershed, and the hydrological features (overall agreement, volumes, peaks and times to peak) of the discharges obtained both at the closure section and at an interior stream-gauge are analysed for validation purposes. A first set of calibrations investigates the effect of the variability of the calibration periods, using the data from several single flood events and from longer, continuous periods. Another analysis regards the influence of rainfall input and is carried out by varying the size and distribution of the raingauge network, in order to examine the relation between the spatial pattern of observed rainfall and the variability of modelled runoff. Lastly, a comparison of the hydrographs obtained for the flood events with the model parameterisation resulting when modifying the objective function to be minimised in the automatic calibration procedure is presented.

On explicit algebraic stress models for complex turbulent flows
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Speziale, C. G.
1992-01-01

Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models, as well as anisotropic eddy viscosity models, is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.

Complex groundwater flow systems as traveling agent models
PubMed Central
Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis
2014-01-01

Analyzing field data from pumping tests, we show that as with many other natural phenomena, groundwater flow exhibits complex dynamics described by 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Even more, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow. PMID:25337455

Complex groundwater flow systems as traveling agent models.
PubMed
López Corona, Oliver; Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis
2014-01-01

Analyzing field data from pumping tests, we show that as with many other natural phenomena, groundwater flow exhibits complex dynamics described by 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Even more, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow.

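
A minimal sketch of the spectral analysis both records rely on: estimate the exponent of a power-law spectrum by regressing log power on log frequency. The synthetic record below (a random walk, whose expected exponent is near -2) only stands in for evenly sampled field data; 1/f noise would give an exponent near -1.

    # Estimate the spectral exponent of a time series: fit log(power) vs log(frequency).
    # The synthetic record below stands in for evenly sampled drawdown/flow data.
    import numpy as np

    rng = np.random.default_rng(1)
    n, dt = 4096, 1.0
    signal = np.cumsum(rng.normal(size=n))    # Brownian walk, expected exponent near -2

    freqs = np.fft.rfftfreq(n, d=dt)[1:]      # drop the zero frequency
    power = np.abs(np.fft.rfft(signal - signal.mean()))[1:] ** 2

    slope, intercept = np.polyfit(np.log10(freqs), np.log10(power), 1)
    print(f"estimated spectral exponent: {slope:.2f}  (1/f noise corresponds to -1)")
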
Theoretical Simulations and Ultrafast Pump-probe Spectroscopy Experiments in Pigment-protein Photosynthetic Complexes
SciTech Connect
Buck, D. R.
2000-09-12

Theoretical simulations and ultrafast pump-probe laser spectroscopy experiments were used to study photosynthetic pigment-protein complexes and antennae found in green sulfur bacteria such as Prosthecochloris aestuarii, Chloroflexus aurantiacus, and Chlorobium tepidum. The work focused on understanding structure-function relationships in energy transfer processes in these complexes through experiments and trying to model that data as we tested our theoretical assumptions with calculations. Theoretical exciton calculations on tubular pigment aggregates yield electronic absorption spectra that are superimpositions of linear J-aggregate spectra. The electronic spectroscopy of BChl c/d/e antennae in light harvesting chlorosomes from Chloroflexus aurantiacus differs considerably from J-aggregate spectra. Strong symmetry breaking is needed if we hope to simulate the absorption spectra of the BChl c antenna. The theory for simulating absorption difference spectra in strongly coupled photosynthetic antennae is described, first for a relatively simple heterodimer, then for the general N-pigment system. The theory is applied to the Fenna-Matthews-Olson (FMO) BChl a protein trimers from Prosthecochloris aestuarii and then compared with experimental low-temperature absorption difference spectra of FMO trimers from Chlorobium tepidum. Circular dichroism spectra of the FMO trimer are unusually sensitive to diagonal energy disorder. Substantial differences occur between CD spectra in exciton simulations performed with and without realistic inhomogeneous distribution functions for the input pigment diagonal energies. Anisotropic absorption difference spectroscopy measurements are less consistent with 21-pigment trimer simulations than 7-pigment monomer simulations which assume that the laser-prepared states are localized within a subunit of the trimer. Experimental anisotropies from real samples likely arise from statistical averaging over states with diagonal energies shifted by …

A Complex Systems Model Approach to Quantified Mineral Resource Appraisal
USGS Publications Warehouse
Gettings, M.E.; Bultman, M.W.; Fisher, F.S.
2004-01-01

For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1, 0, -1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.

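
The cognitive-map step described in the record above can be sketched in a few lines: links take values of +1, 0 or -1, and the map is iterated under a scenario until concept activations settle. The four concepts, weight matrix and squashing function below are invented placeholders, not the appraisal model itself.

    # Iterate a small fuzzy cognitive map: state(t+1) = squash(W @ state(t) + state(t)).
    # The 4-concept weight matrix and initial scenario are invented placeholders.
    import numpy as np

    concepts = ["exploration", "deposit evidence", "development", "environmental impact"]
    W = np.array([                 # W[i, j]: influence of concept j on concept i (+1, 0, -1)
        [0,  0,  0,  0],
        [1,  0,  0,  0],
        [0,  1,  0, -1],
        [0,  0,  1,  0],
    ], dtype=float)

    def squash(x):
        return 1.0 / (1.0 + np.exp(-x))       # keep activations in (0, 1)

    state = np.array([1.0, 0.0, 0.0, 0.0])    # scenario: strong exploration push
    for _ in range(25):                       # iterate until the pattern settles
        state = squash(W @ state + state)

    for name, value in zip(concepts, state):
        print(f"{name:22s} {value:.2f}")
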
Cd adsorption onto Anoxybacillus flavithermus: Surface complexation modeling and spectroscopic investigations
NASA Astrophysics Data System (ADS)
Burnett, Peta-Gaye G.; Daughney, Christopher J.; Peak, Derek
2006-11-01

Several recent studies have applied surface complexation theory to model metal adsorption behaviour onto mesophilic bacteria. However, no investigations have used this approach to characterise metal adsorption by thermophilic bacteria. In this study, we perform batch adsorption experiments to quantify cadmium adsorption onto the thermophile Anoxybacillus flavithermus. Surface complexation models (incorporating the Donnan electrostatic model) are developed to determine stability constants corresponding to specific adsorption reactions. Adsorption reactions and stoichiometries are constrained using spectroscopic techniques (XANES, EXAFS, and ATR-FTIR). The results indicate that the Cd adsorption behaviour of A. flavithermus is similar to that of other mesophilic bacteria. At high bacteria-to-Cd ratios, Cd adsorption occurs by formation of a 1:1 complex with deprotonated cell wall carboxyl functional groups. At lower bacteria-to-Cd ratios, a second adsorption mechanism occurs at pH > 7, which may correspond to the formation of a Cd-phosphoryl, CdOH-carboxyl, or CdOH-phosphoryl surface complex. X-ray absorption spectroscopic investigations confirm the formation of the 1:1 Cd-carboxyl surface complex, but due to the bacteria-to-Cd ratio used in these experiments, other complexation mechanism(s) could not be unequivocally resolved by the spectroscopic data.

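
A stripped-down version of the kind of surface complexation calculation described above, ignoring the Donnan electrostatic term: carboxyl sites deprotonate with an acidity constant Ka and bind Cd2+ 1:1 with a constant K. The pKa, log K, site density and Cd concentration below are placeholders rather than the fitted values from the study.

    # Non-electrostatic 1:1 surface complexation: >COOH <=> >COO- + H+ (Ka) and
    # >COO- + Cd2+ <=> >COO-Cd+ (K).  All constants and concentrations are placeholders.
    import numpy as np
    from scipy.optimize import brentq

    pKa, logK = 4.8, 3.5
    site_tot, cd_tot = 1e-3, 1e-5            # mol/L of carboxyl sites and total Cd

    Ka, K = 10.0 ** -pKa, 10.0 ** logK

    def free_cd(pH):
        h = 10.0 ** -pH
        def cd_balance(c):
            coo = site_tot / (1.0 + h / Ka + K * c)   # free deprotonated sites
            return c + K * coo * c - cd_tot           # Cd mass balance residual
        return brentq(cd_balance, 0.0, cd_tot)

    for pH in np.arange(3.0, 9.1, 1.0):
        c = free_cd(pH)
        print(f"pH {pH:3.1f}: fraction of Cd sorbed = {1.0 - c / cd_tot:.2f}")
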
<span class="hlt">Experiments</span> were performed at pH 3.5-7 with solutions containing one tetravalent metal (Th, Hf, or Zr), Elliot soil humic acid (EHA) or Pahokee peat humic acid (PHA), and EDTA. CE-ICP-MS and EDLE <span class="hlt">experiments</span> yielded nearly identical binding constants for the metal- humic acid <span class="hlt">complexes</span>, indicating that both methods are appropriate for examining metal speciation at conditions lower than neutral pH. We find that tetravalent metals form strong <span class="hlt">complexes</span> with humic acids, with Kc, MHA several orders of magnitude above REE-humic <span class="hlt">complexes</span>. <span class="hlt">Experiments</span> were conducted at a range of dissolved HA concentrations to examine the effect of [HA]/[Th] molar ratio on Kc, MHA. At low metal loading conditions (i.e. elevated [HA]/[Th] ratios) the ThHA binding constant reached values that were not affected by the relative abundance of humic acid and thorium. The importance of [HA]/[Th] molar ratios on constraining the equilibrium of MHA <span class="hlt">complexation</span> is apparent when our estimated Kc, MHA values</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20160005732&hterms=salter&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dsalter','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20160005732&hterms=salter&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dsalter"><span>Humic Acid <span class="hlt">Complexation</span> of Th, Hf and Zr in Ligand Competition <span class="hlt">Experiments</span>: Metal Loading and Ph Effects</span></a></p> <p><a target="_blank" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Stern, Jennifer C.; Foustoukos, Dionysis I.; Sonke, Jeroen E.; Salters, Vincent J. M.</p> <p>2014-01-01</p> <p>The mobility of metals in soils and subsurface aquifers is strongly affected by sorption and <span class="hlt">complexation</span> with dissolved organic matter, oxyhydroxides, clay minerals, and inorganic ligands. Humic substances (HS) are organic macromolecules with functional groups that have a strong affinity for binding metals, such as actinides. Thorium, often studied as an analog for tetravalent actinides, has also been shown to strongly associate with dissolved and colloidal HS in natural waters. The effects of HS on the mobilization dynamics of actinides are of particular interest in risk assessment of nuclear waste repositories. Here, we present conditional equilibrium binding constants (Kc, MHA) of thorium, hafnium, and zirconium-humic acid <span class="hlt">complexes</span> from ligand competition <span class="hlt">experiments</span> using capillary electrophoresis coupled with ICP-MS (CE- ICP-MS). Equilibrium dialysis ligand exchange (EDLE) <span class="hlt">experiments</span> using size exclusion via a 1000 Damembrane were also performed to validate the CE-ICP-MS analysis. <span class="hlt">Experiments</span> were performed at pH 3.5-7 with solutions containing one tetravalent metal (Th, Hf, or Zr), Elliot soil humic acid (EHA) or Pahokee peat humic acid (PHA), and EDTA. CE-ICP-MS and EDLE <span class="hlt">experiments</span> yielded nearly identical binding constants for the metal- humic acid <span class="hlt">complexes</span>, indicating that both methods are appropriate for examining metal speciation at conditions lower than neutral pH. 
Implementation of a complex multi-phase equation of state for cerium and its correlation with experiment
SciTech Connect
Cherne, Frank J; Jensen, Brian J; Elkin, Vyacheslav M
2009-01-01

The complexity of cerium combined with its interesting material properties makes it a desirable material to examine dynamically. Characteristics such as the softening of the material before the phase change, the low-pressure solid-solid phase change, the predicted low-pressure melt boundary, and the solid-solid critical point add complexity to the construction of its equation of state. Currently, we are incorporating a feedback loop between a theoretical understanding of the material and an experimental understanding. Using a model equation of state for cerium we compare calculated wave profiles with experimental wave profiles for a number of front surface impact (cerium impacting a plated window) experiments. Using the calculated release isentrope we predict the temperature of the observed rarefaction shock. These experiments showed that the release state occurs at different magnitudes, thus allowing us to infer where the dynamic γ-α phase boundary is.

Multiwell experiment: reservoir modeling analysis, Volume II
SciTech Connect
Horton, A.I.
1985-05-01

This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval.
The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, since a 65% permeability reduction around the wellbore was estimated. The design for this minifracture was from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation, as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. Also, an induced fracture half-length of 100 feet was determined to have occurred, as compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

Braiding DNA: experiments, simulations, and models.
PubMed
Charvin, G; Vologodskii, A; Bensimon, D; Croquette, V
2005-06-01

DNA encounters topological problems in vivo because of its extended double-helical structure. As a consequence, the semiconservative mechanism of DNA replication leads to the formation of DNA braids or catenanes, which have to be removed for the completion of cell division. To get a better understanding of these structures, we have studied the elastic behavior of two braided nicked DNA molecules using a magnetic trap apparatus. The experimental data let us identify and characterize three regimes of braiding: a slightly twisted regime before the formation of the first crossing, followed by genuine braids which, at large braiding number, buckle to form plectonemes. Two different approaches support and quantify this characterization of the data. First, Monte Carlo (MC) simulations of braided DNAs yield a full description of the molecules' behavior and their buckling transition. Second, modeling the braids as a twisted swing provides a good approximation of the elastic response of the molecules as they are intertwined. Comparisons of the experiments and the MC simulations with this analytical model allow for a measurement of the diameter of the braids and its dependence upon entropic and electrostatic repulsive interactions. The MC simulations allow for an estimate of the effective torsional constant of the braids (at a stretching force F = 2 pN): C(b) approximately 48 nm (as compared with C approximately 100 nm for a single unnicked DNA). Finally, at low salt concentrations and for sufficiently large number of braids, the diameter of the braided molecules is observed to collapse to that of double-stranded DNA.
We suggest that this collapse is due to the partial melting and fraying of the two nicked molecules and the subsequent right- or left-handed intertwining of the stretched single strands.

Dockground: A comprehensive data resource for modeling of protein complexes.
PubMed
Kundrotas, Petras J; Anishchenko, Ivan; Dauzhenka, Taras; Kotthoff, Ian; Mnevets, Daniil; Copeland, Matthew M; Vakser, Ilya A
2017-09-10

Characterization of life processes at the molecular level requires structural details of protein interactions. The number of experimentally determined structures of protein-protein complexes accounts only for a fraction of known protein interactions. This gap in structural description of the interactome has to be bridged by modeling. An essential part of the development of structural modeling/docking techniques for protein interactions is databases of protein-protein complexes. They are necessary for studying protein interfaces, providing a knowledge base for docking algorithms, developing intermolecular potentials, search procedures, and scoring functions. Development of protein-protein docking techniques requires thorough benchmarking of different parts of the docking protocols on carefully curated sets of protein-protein complexes. We present a comprehensive description of the Dockground resource (http://dockground.compbio.ku.edu) for structural modeling of protein interactions, including previously unpublished unbound docking benchmark set 4, and the X-ray docking decoy set 2. The resource offers a variety of interconnected datasets of protein-protein complexes and other data for the development and testing of different aspects of protein docking methodologies. Based on protein-protein complexes extracted from the PDB biounit files, Dockground offers sets of X-ray unbound, simulated unbound, model, and docking decoy structures. All datasets are freely available for download, as a whole or selecting specific structures, through a user-friendly interface on one integrated website.

Mathematical approaches for complexity/predictivity trade-offs in complex system models: LDRD final report.
SciTech Connect
Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith
2008-09-01

The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.

SEE Rate Estimation: Model Complexity and Data Requirements
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2008-01-01

Statistical methods outlined in [Ladbury, TNS2007] can be generalized for Monte Carlo rate calculation methods. Two Monte Carlo approaches: (a) a rate based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; (b) a rate calculated based on a model fit to SEE data, with statistical analysis very similar to the case for CREME96. Information theory allows simultaneous consideration of multiple models with different complexities: (a) the model with the lowest AIC usually has the greatest predictive power; (b) model averaging using AIC weights may give better performance if several models have similarly good performance; and (c) rates can be bounded for a given confidence level over multiple models, as well as over the parameter space of a model.

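
The AIC-based ranking and averaging mentioned in the record above use the standard Akaike weights w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), with Δ_i = AIC_i - AIC_min. A small sketch with invented AIC values and per-model rate predictions:

    # Akaike weights and an AIC-weighted average of per-model rate predictions.
    # The AIC values and rates below are invented for illustration.
    import numpy as np

    aic   = np.array([102.3, 104.1, 109.8])       # one value per candidate rate model
    rates = np.array([1.2e-4, 2.0e-4, 6.5e-5])    # events/device-day predicted by each model

    delta = aic - aic.min()
    weights = np.exp(-0.5 * delta)
    weights /= weights.sum()

    print("Akaike weights:", np.round(weights, 3))
    print(f"model-averaged rate: {np.dot(weights, rates):.2e} events/device-day")
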
Multikernel linear mixed models for complex phenotype prediction
PubMed Central
Weissbrod, Omer; Geiger, Dan; Rosset, Saharon
2016-01-01

Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636

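
The core idea of combining kernels for prediction can be sketched outside the full mixed-model machinery: form a weighted sum of a global linear kernel and a local RBF kernel and use it in a ridge (GBLUP-style) predictor. The kernel weights, noise level and toy data below are placeholders; MKLMM itself estimates these variance components and handles case-control phenotypes, which this sketch does not.

    # Prediction from a weighted sum of kernels: K = w1*linear + w2*RBF, used in a
    # ridge (GBLUP-like) predictor.  Weights and data are illustrative only.
    import numpy as np

    rng = np.random.default_rng(2)
    n_train, n_test, p = 80, 20, 50
    X = rng.normal(size=(n_train + n_test, p))          # stand-in genotype matrix
    y = X[:, :3].sum(axis=1) + 0.5 * np.sin(X[:, 0]) + 0.3 * rng.normal(size=n_train + n_test)

    def linear_kernel(A, B):
        return A @ B.T / A.shape[1]

    def rbf_kernel(A, B, gamma=0.05):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    w1, w2, noise = 0.6, 0.4, 0.5                        # assumed variance components
    Xtr, Xte, ytr = X[:n_train], X[n_train:], y[:n_train]

    K_tr = w1 * linear_kernel(Xtr, Xtr) + w2 * rbf_kernel(Xtr, Xtr)
    K_te = w1 * linear_kernel(Xte, Xtr) + w2 * rbf_kernel(Xte, Xtr)

    alpha = np.linalg.solve(K_tr + noise * np.eye(n_train), ytr - ytr.mean())
    y_pred = K_te @ alpha + ytr.mean()
    corr = np.corrcoef(y_pred, y[n_train:])[0, 1]
    print(f"predictive correlation on held-out samples: {corr:.2f}")
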
Historical and idealized climate model experiments: an EMIC intercomparison
NASA Astrophysics Data System (ADS)
Eby, M.; Weaver, A. J.; Alexander, K.; Zickfeld, K.; Abe-Ouchi, A.; Cimatoribus, A. A.; Crespin, E.; Drijfhout, S. S.; Edwards, N. R.; Eliseev, A. V.; Feulner, G.; Fichefet, T.; Forest, C. E.; Goosse, H.; Holden, P. B.; Joos, F.; Kawamiya, M.; Kicklighter, D.; Kienert, H.; Matsumoto, K.; Mokhov, I. I.; Monier, E.; Olsen, S. M.; Pedersen, J. O. P.; Perrette, M.; Philippon-Berthier, G.; Ridgwell, A.; Schlosser, A.; Schneider von Deimling, T.; Shaffer, G.; Smith, R. S.; Spahni, R.; Sokolov, A. P.; Steinacher, M.; Tachiiri, K.; Tokos, K.; Yoshimori, M.; Zeng, N.; Zhao, F.
2012-08-01

Both historical and idealized climate model experiments are performed with a variety of Earth System Models of Intermediate Complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land-use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes seem to be underestimated. It is possible that recent modelled climate trends or climate-carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2x and 4x CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate-carbon feedbacks. The values from EMICs generally fall within the range given by General Circulation Models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows considerable synergy between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to …

Evaluation of soil flushing of complex contaminated soil: an experimental and modeling simulation study.
PubMed
Yun, Sung Mi; Kang, Christina S; Kim, Jonghwa; Kim, Han S
2015-04-28

The removal of heavy metals (Zn and Pb) and heavy petroleum oils (HPOs) from a soil with complex contamination was examined by soil flushing. Desorption and transport behaviors of the complex contaminants were assessed by batch and continuous flow reactor experiments and through modeling simulations.
Flushing a one-dimensional flow column packed with complex contaminated soil sequentially with citric acid then a surfactant resulted in the removal of 85.6% of Zn, 62% of Pb, and 31.6% of HPO. The desorption distribution coefficients, KUbatch and KLbatch, converged to constant values as Ce increased. An equilibrium model (ADR) and nonequilibrium models (TSNE and TRNE) were used to predict the desorption and transport of complex contaminants. The nonequilibrium models demonstrated better fits with the experimental values obtained from the column test than the equilibrium model. The ranges of KUbatch and KLbatch were very close to those of KUfit and KLfit determined from model simulations. The parameters (R, β, ω, α, and f) determined from model simulations were useful for characterizing the transport of contaminants within the soil matrix. The results of this study provide useful information for the operational parameters of the flushing process for soils with complex contamination.

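
The equilibrium ADR model mentioned above is the one-dimensional advection-dispersion equation with a retardation factor, R ∂C/∂t = -v ∂C/∂x + D ∂²C/∂x². A bare-bones explicit finite-difference sketch with illustrative parameter values (not those fitted in the study) and a constant-concentration flushing inlet:

    # Explicit finite-difference solution of the 1-D equilibrium ADR equation
    #   R dC/dt = -v dC/dx + D d2C/dx2
    # for a column with a constant-concentration inlet.  Parameters are illustrative.
    import numpy as np

    L, nx = 0.30, 61                      # column length (m), grid points
    v, D, R = 1.0e-5, 2.0e-7, 2.5         # pore velocity (m/s), dispersion (m2/s), retardation
    dx = L / (nx - 1)
    dt = 0.4 * min(dx / v, dx * dx / (2 * D)) / R   # conservative stability limit
    c = np.zeros(nx)
    c[0] = 1.0                            # normalized inlet concentration

    t, t_end = 0.0, 24.0 * 3600.0         # simulate one day of flushing
    while t < t_end:
        adv = -v * (c[1:-1] - c[:-2]) / dx                  # upwind advection
        dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2    # dispersion
        c[1:-1] += dt * (adv + dif) / R
        c[-1] = c[-2]                                       # zero-gradient outlet
        t += dt

    print(f"relative effluent concentration after {t_end/3600:.0f} h: {c[-1]:.3f}")
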
data availability</span></a></p> <p><a target="_blank" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Capell, Rene; Lausten Hansen, Anne; Donnelly, Chantal; Refsgaard, Jens Christian; Arheimer, Berit</p> <p>2015-04-01</p> <p>In most parts of Europe, macronutrient concentrations and loads in surface water are currently affected by human land use and land management choices. Moreover, current macronutrient concentration and load levels often violate European Water Framework Directive (WFD) targets and effective measures to reduce these levels are sought after by water managers. Identifying such effective measures in specific target catchments should consider the four key processes release, transport, retention, and removal, and thus physical catchment characteristics as e.g. soils and geomorphology, but also management data such as crop distribution and fertilizer application regimes. The BONUS funded research project Soils2Sea evaluates new, differentiated regulation strategies to cost-efficiently reduce nutrient loads to the Baltic Sea based on new knowledge of nutrient transport and retention processes between soils and the coast. Within the Soils2Sea framework, we here examine the capability of two integrated hydrological and nutrient transfer <span class="hlt">models</span>, HYPE and Mike SHE, to <span class="hlt">model</span> runoff and nitrate flux responses in the 100 km2 Norsminde catchment, Denmark, comparing different <span class="hlt">model</span> structures and data bases. We focus on comparing <span class="hlt">modelled</span> nitrate reductions within and below the root zone, and evaluate <span class="hlt">model</span> performances as function of available <span class="hlt">model</span> structures (process representation within the <span class="hlt">model</span>) and available data bases (temporal forcing data and spatial information). This <span class="hlt">model</span> evaluation is performed to aid in the development of <span class="hlt">model</span> tools which will be used to estimate the effect of new nutrient reduction measures on the catchment to regional scale, where available data - both climate forcing and land management - typically are increasingly limited with the targeted spatial scale and may act as a bottleneck for process conceptualizations and thus the value of a <span class="hlt">model</span> as tool to provide decision support for differentiated regulation strategies.</p> </li> <li> <p><a target="_blank" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/53110','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/53110"><span>On the dangers of <span class="hlt">model</span> <span class="hlt">complexity</span> without ecological justification in species distribution <span class="hlt">modeling</span></span></a></p> <p><a target="_blank" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>David M. Bell; Daniel R. Schlaepfer</p> <p>2016-01-01</p> <p>Although biogeographic patterns are the product of <span class="hlt">complex</span> ecological processes, the increasing <span class="hlt">com-plexity</span> of correlative species distribution <span class="hlt">models</span> (SDMs) is not always motivated by ecological theory,but by <span class="hlt">model</span> fit. 

Transferability of a Three-Dimensional Air Quality Model between Two Different Sites in Complex Terrain.

NASA Astrophysics Data System (ADS)

Lange, Rolf

1989-07-01

The three-dimensional, diagnostic, particle-in-cell transport and diffusion model MATHEW/ADPIC is used to test its transferability from one site in complex terrain to another with different characteristics, under stable nighttime drainage flow conditions. The two sites were subject to extensive drainage flow tracer experiments under the multilaboratory Atmospheric Studies in Complex Terrain (ASCOT) program: the first being a valley in the Geysers geothermal region of northern California, and the second a canyon in western Colorado. The domain in each case is approximately 10 × 10 km. The 1980 Geysers model evaluation is only quoted. The 1984 Brush Creek model evaluation is described in detail. Results from comparing computed with measured concentrations from a variety of tracer releases indicate that 52% of the 4531 samples from five experiments in Brush Creek and 50% of the 831 samples from four experiments in the Geysers agreed within a factor of 5. When an angular 10° uncertainty, consistent with anemometer reliability limits in complex terrain, was allowed to be applied to the model results, model performance improved such that 78% of samples compared within a factor of 5 for Brush Creek and 77% for the Geysers. Looking at the range of other factors of concentration ratios, results indicate that the model is satisfactorily transferable without tuning it to a specific site.
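
The evaluation above is based on the fraction of paired samples for which modelled and measured concentrations agree within a factor of 5. A small, generic helper for that kind of factor-of-N statistic (the numbers below are invented, purely for illustration) might look like this:

```python
import numpy as np

def frac_within_factor(pred, obs, factor=5.0):
    """Fraction of paired samples whose prediction/observation ratio lies
    within [1/factor, factor]; both inputs must be positive concentrations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    ratio = pred / obs
    return np.mean((ratio >= 1.0 / factor) & (ratio <= factor))

# toy usage with made-up tracer concentrations
obs = np.array([1.0, 4.0, 10.0, 0.5, 2.0])
pred = np.array([2.0, 30.0, 8.0, 0.05, 1.0])
print(frac_within_factor(pred, obs))   # 0.6 for this toy pair
```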

A Compact Model for the Complex Plant Circadian Clock

PubMed Central

De Caluwé, Joëlle; Xiao, Qiying; Hermans, Christian; Verbruggen, Nathalie; Leloup, Jean-Christophe; Gonze, Didier

2016-01-01

The circadian clock is an endogenous timekeeper that allows organisms to anticipate and adapt to the daily variations of their environment. The plant clock is an intricate network of interlocked feedback loops, in which transcription factors regulate each other to generate oscillations with expression peaks at specific times of the day. Over the last decade, mathematical modeling approaches have been used to understand the inner workings of the clock in the model plant Arabidopsis thaliana. Those efforts have produced a number of models of ever increasing complexity. Here, we present an alternative model that combines a low number of equations and parameters, similar to the very earliest models, with the complex network structure found in more recent ones. This simple model describes the temporal evolution of the abundance of eight clock gene mRNA/protein and captures key features of the clock on a qualitative level, namely the entrained and free-running behaviors of the wild type clock, as well as the defects found in knockout mutants (such as altered free-running periods, lack of entrainment, or changes in the expression of other clock genes). Additionally, our model produces complex responses to various light cues, such as extreme photoperiods and non-24 h environmental cycles, and can describe the control of hypocotyl growth by the clock. Our model constitutes a useful tool to probe dynamical properties of the core clock as well as clock-dependent processes. PMID:26904049
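
The paper above couples eight mRNA/protein variables; the sketch below is not that model, only a minimal one-gene negative-feedback loop driven by a square-wave light input, meant to illustrate the general structure of such ODE clock models. All parameter values are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def light(t, period=24.0, photoperiod=12.0):
    """Square-wave light input: 1 during the photoperiod, 0 at night."""
    return 1.0 if (t % period) < photoperiod else 0.0

def clock(t, y, v=1.0, k=0.5, K=1.0, n=4, l_act=0.5):
    m, p = y                        # mRNA and protein of a single repressor gene
    transcription = (v + l_act * light(t)) * K**n / (K**n + p**n)   # repressed by its own protein, boosted by light
    dm = transcription - k * m
    dp = m - k * p                  # translation and first-order decay
    return [dm, dp]

sol = solve_ivp(clock, (0.0, 240.0), [0.1, 0.1], max_step=0.1)
print(sol.y[:, -1])                 # mRNA and protein levels after ten 24 h cycles
```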

Vacuum structure of the Higgs complex singlet-doublet model

NASA Astrophysics Data System (ADS)

Ferreira, P. M.

2016-11-01

The complex singlet-doublet model is a popular theory to account for dark matter and electroweak baryogenesis, wherein the Standard Model particle content is supplemented by a complex scalar gauge singlet, with certain discrete symmetries imposed. The scalar potential which results thereof can have seven different types of minima at tree level, which may coexist for specific choices of parameters. There is therefore the possibility that a given minimum is not global but rather a local one, and may tunnel to a deeper extremum, thus causing vacuum instability. This rich vacuum structure is explained and discussed in detail.

Further thoughts on simplicity and complexity in population projection models.

PubMed

Smith, S K

1997-12-01

"This article is a review of--and response to--a special issue of Mathematical Population Studies that focused on the relative performance of simpler vs. more complex population projection models. I do not attempt to summarize or comment on each of the articles in the special issue, but rather present an additional perspective on several points: definitions of simplicity and complexity, empirical evidence regarding population forecast accuracy, the costs and benefits of disaggregation, the potential benefits of combining forecasts, criteria for evaluating projection models, and issues of economic efficiency in the production of population projections."

Dynamic crack initiation toughness: experiments and peridynamic modeling.

SciTech Connect

Foster, John T.

2009-10-01

This is a dissertation on research conducted studying the dynamic crack initiation toughness of a 4340 steel. Researchers have been conducting experimental testing of dynamic crack initiation toughness, K_Ic, for many years, using many experimental techniques with vastly different trends in the results when reporting K_Ic as a function of loading rate. The dissertation describes a novel experimental technique for measuring K_Ic in metals using the Kolsky bar. The method borrows from improvements made in recent years in traditional Kolsky bar testing by using pulse shaping techniques to ensure a constant loading rate applied to the sample before crack initiation. Dynamic crack initiation measurements were reported on a 4340 steel at two different loading rates. The steel was shown to exhibit a rate dependence, with the recorded values of K_Ic being much higher at the higher loading rate. Using the knowledge of this rate dependence as a motivation in attempting to model the fracture events, a viscoplastic constitutive model was implemented into a peridynamic computational mechanics code. Peridynamics is a newly developed theory in solid mechanics that replaces the classical partial differential equations of motion with integral-differential equations which do not require the existence of spatial derivatives in the displacement field. This allows for the straightforward modeling of unguided crack initiation and growth. To date, peridynamic implementations have used severely restricted constitutive models. This research represents the first implementation of a complex material model and its validation. After showing results comparing deformations to experimental Taylor anvil impact for the viscoplastic material model, a novel failure criterion is introduced to model the dynamic crack initiation toughness experiments. The failure model is based on an energy criterion and uses the K_Ic values recorded experimentally as an input. The failure model

Introduction to a special section on ecohydrology of semiarid environments: Confronting mathematical models with ecosystem complexity

NASA Astrophysics Data System (ADS)

Svoray, Tal; Assouline, Shmuel; Katul, Gabriel

2015-11-01

Current literature provides large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, a question raised is to what extent these mathematical models are valid when confronted with observed ecosystem complexity? This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to greater understanding of ecosystem complexity through characterization of space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.

A cloud feedback emulator (CFE, version 1.0) for an intermediate complexity model

NASA Astrophysics Data System (ADS)

Ullman, David J.; Schmittner, Andreas

2017-02-01

The dominant source of inter-model differences in comprehensive global climate models (GCMs) are cloud radiative effects on Earth's energy budget. Intermediate complexity models, while able to run more efficiently, often lack cloud feedbacks. Here, we describe and evaluate a method for applying GCM-derived shortwave and longwave cloud feedbacks from 4 × CO2 and Last Glacial Maximum experiments to the University of Victoria Earth System Climate Model. The method generally captures the spread in top-of-the-atmosphere radiative feedbacks between the original GCMs, which impacts the magnitude and spatial distribution of surface temperature changes and climate sensitivity. These results suggest that the method is suitable to incorporate multi-model cloud feedback uncertainties in ensemble simulations with a single intermediate complexity model.
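
A heavily simplified sketch of the idea described above, applying prescribed shortwave and longwave cloud feedback patterns to a local warming field as a linear radiative perturbation; the grids, feedback values and warming pattern below are invented, and this is not the CFE code itself.

```python
import numpy as np

rng = np.random.default_rng(1)
# made-up 4x5 latitude-longitude fields of GCM-derived cloud feedbacks (W m-2 K-1)
lam_sw = rng.normal(0.3, 0.4, size=(4, 5))
lam_lw = rng.normal(0.2, 0.2, size=(4, 5))

def cloud_radiative_perturbation(dT_local):
    """Radiative perturbation handed to the host model's energy budget:
    feedback pattern times local surface warming (a linear emulation)."""
    return (lam_sw + lam_lw) * dT_local

dT = np.full((4, 5), 1.5)           # hypothetical local warming field (K)
print(cloud_radiative_perturbation(dT).mean())
```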

Board Games and Board Game Design as Learning Tools for Complex Scientific Concepts: Some Experiences

ERIC Educational Resources Information Center

Chiarello, Fabio; Castellano, Maria Gabriella

2016-01-01

In this paper the authors report different experiences in the use of board games as learning tools for complex and abstract scientific concepts such as Quantum Mechanics, Relativity or nano-biotechnologies. In particular we describe "Quantum Race," designed for the introduction of Quantum Mechanical principles, "Lab on a chip,"…

Deaf Children with Complex Needs: Parental Experience of Access to Cochlear Implants and Ongoing Support

ERIC Educational Resources Information Center

McCracken, Wendy; Turner, Oliver

2012-01-01

This paper discusses the experiences of parents of deaf children with additional complex needs (ACN) in accessing cochlear implant (CI) services and achieving ongoing support. Of a total study group of fifty-one children with ACN, twelve had been fitted with a CI. The parental accounts provide a rich and varied picture of service access. For some…

Model complexity versus ensemble size: allocating resources for climate prediction.

PubMed

Ferro, Christopher A T; Jupp, Tim E; Lambert, F Hugo; Huntingford, Chris; Cox, Peter M

2012-03-13

A perennial question in modern weather forecasting and climate prediction is whether to invest resources in more complex numerical models or in larger ensembles of simulations. If this question is to be addressed quantitatively, then information is needed about how changes in model complexity and ensemble size will affect predictive performance. Information about the effects of ensemble size is often available, but information about the effects of model complexity is much rarer. An illustration is provided of the sort of analysis that might be conducted for the simplified case in which model complexity is judged in terms of grid resolution and ensemble members are constructed only by perturbing their initial conditions. The effects of resolution and ensemble size on the performance of climate simulations are described with a simple mathematical model, which is then used to define an optimal allocation of computational resources for a range of hypothetical prediction problems. The optimal resolution and ensemble size both increase with available resources, but their respective rates of increase depend on the values of two parameters that can be determined from a small number of simulations. The potential for such analyses to guide future investment decisions in climate prediction is discussed.
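
The trade-off described above, higher resolution versus more ensemble members under a fixed budget, can be made concrete with a toy skill model. The cost scaling and error terms below are assumptions chosen for illustration, not the paper's fitted relationships.

```python
import numpy as np

def expected_error(n_ens, refinement, sigma_internal=1.0, model_error=1.0, p=1.0):
    """Toy error model: sampling error shrinks with ensemble size,
    resolution-related error shrinks with grid refinement."""
    return sigma_internal / np.sqrt(n_ens) + model_error / refinement**p

budget = 1024.0                       # total cost in arbitrary node-hours
best = None
for refinement in (1, 2, 4, 8, 16):
    cost_per_member = refinement**3   # assumed cost scaling for 3-D grid refinement
    n_ens = int(budget // cost_per_member)
    if n_ens < 1:
        continue
    err = expected_error(n_ens, refinement)
    if best is None or err < best[0]:
        best = (err, refinement, n_ens)

print("lowest expected error %.3f at refinement %d with %d members" % best)
```

With these particular toy numbers the optimum lands at an intermediate resolution with a modest ensemble, which is the qualitative behaviour the paper formalizes.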

Optimal parameter and uncertainty estimation of a land surface model: Sensitivity to parameter ranges and model complexities

NASA Astrophysics Data System (ADS)

Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.

2005-01-01

Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges as well as model complexities on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) to the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty eight experiments were conducted and 50 000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. Use of parameter ranges and model complexities have significant impacts on frequency distribution of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.
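
BSI as described above combines Bayes' theorem, importance sampling, and very fast simulated annealing. The sketch below shows only a plain simulated-annealing search on a made-up two-parameter flux misfit, as a rough illustration rather than the actual CHASM/BSI machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

def misfit(params, obs_flux):
    """Toy cost: squared mismatch between a hypothetical 2-parameter flux model and data."""
    albedo, conductance = params
    pred = 300.0 * (1.0 - albedo) * conductance
    return (pred - obs_flux) ** 2

obs = 150.0
bounds = np.array([[0.05, 0.45], [0.3, 1.0]])      # "realistic"-style parameter ranges
x = bounds.mean(axis=1)
best_x, best_e = x.copy(), misfit(x, obs)
T = 1.0
for _ in range(5000):
    cand = np.clip(x + rng.normal(0.0, 0.02, 2), bounds[:, 0], bounds[:, 1])
    e_new, e_old = misfit(cand, obs), misfit(x, obs)
    if e_new < e_old or rng.random() < np.exp(-(e_new - e_old) / T):
        x = cand                                   # accept downhill moves and occasional uphill ones
    if misfit(x, obs) < best_e:
        best_x, best_e = x.copy(), misfit(x, obs)
    T *= 0.999                                     # geometric cooling schedule

print(best_x, best_e)
```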

(Relatively) Simple Models of Flow in Complex Terrain

NASA Astrophysics Data System (ADS)

Taylor, Peter; Weng, Wensong; Salmon, Jim

2013-04-01

The term, "complex terrain" includes both topography and variations in surface roughness and thermal properties. The scales that are affected can differ and there are some advantages to modeling them separately. In studies of flow in complex terrain we have developed 2 D and 3 D models of atmospheric PBL boundary layer flow over roughness changes, appropriate for longer fetches than most existing models. These "internal boundary layers" are especially important for understanding and predicting wind speed variations with distance from shorelines, an important factor for wind farms around, and potentially in, the Great Lakes. The models can also form a base for studying the wakes behind woodlots and wind turbines. Some sample calculations of wind speed evolution over water and the reduced wind speeds behind an isolated woodlot, represented simply in terms of an increase in surface roughness, will be presented. Note that these models can also include thermal effects and non-neutral stratification. We can use the model to deal with 3-D roughness variations and will describe applications to both on-shore and off-shore situations around the Great Lakes. In particular we will show typical results for hub height winds and indicate the length of over-water fetch needed to get the full benefit of siting turbines over water. The linear Mixed Spectral Finite-Difference (MSFD) and non-linear (NLMSFD) models for surface boundary-layer flow over complex terrain have been extended to planetary boundary-layer flow over topography. This allows for their use for larger scale regions and increased heights. The models have been applied to successfully simulate the Askervein hill experimental case and we will show examples of applications to more complex terrain, typical of some Canadian wind farms. Output from the model can be used as an alternative to MS-Micro, WAsP or other CFD calculations of topographic impacts for input to wind farm design software.

Electrostatic Model Applied to ISS Charged Water Droplet Experiment

NASA Technical Reports Server (NTRS)

Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

2015-01-01

The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.
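
The Multi-Sphere Method mentioned above represents a charged conductor by a small set of spheres; a bare-bones point-charge version of the resulting Coulomb acceleration on a droplet is sketched below. All charges, masses and positions are hypothetical.

```python
import numpy as np

K_E = 8.9875517923e9            # Coulomb constant (N m^2 C^-2)

def coulomb_accel(r_droplet, q_droplet, m_droplet, sphere_pos, sphere_q):
    """Acceleration of a charged droplet due to a set of fixed charged spheres,
    each treated as a point charge."""
    a = np.zeros(3)
    for p, q in zip(sphere_pos, sphere_q):
        d = r_droplet - p
        a += K_E * q_droplet * q * d / (m_droplet * np.linalg.norm(d) ** 3)
    return a

# hypothetical setup: a charged needle approximated by three collinear spheres
spheres = np.array([[0.0, 0.0, -0.05], [0.0, 0.0, 0.0], [0.0, 0.0, 0.05]])
charges = np.array([2e-9, 2e-9, 2e-9])                       # C
print(coulomb_accel(np.array([0.02, 0.0, 0.0]), -1e-10, 5e-7, spheres, charges))
```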

Is there Complex Trauma Experience typology for Australian's experiencing extreme social disadvantage and low housing stability?

PubMed

Keane, Carol A; Magee, Christopher A; Kelly, Peter J

2016-11-01

Traumatic childhood experiences predict many adverse outcomes in adulthood including Complex-PTSD. Understanding complex trauma within socially disadvantaged populations has important implications for policy development and intervention implementation. This paper examined the nature of complex trauma experienced by disadvantaged individuals using a latent class analysis (LCA) approach. Data were collected through the large-scale Journeys Home Study (N=1682), utilising a representative sample of individuals experiencing low housing stability. Data on adverse childhood experiences, adulthood interpersonal trauma and relevant covariates were collected through interviews at baseline (Wave 1). Latent class analysis (LCA) was conducted to identify distinct classes of childhood trauma history, which included physical assault, neglect, and sexual abuse. Multinomial logistic regression investigated childhood relevant factors associated with class membership such as biological relationship of primary carer at age 14 years and number of times in foster care. Of the total sample (N=1682), 99% reported traumatic adverse childhood experiences. The most common included witnessing of violence, threat/experience of physical abuse, and sexual assault. LCA identified six distinct childhood trauma history classes including high violence and multiple traumas. Significant covariate differences between classes included: gender, biological relationship of primary carer at age 14 years, and time in foster care. Identification of six distinct childhood trauma history profiles suggests there might be unique treatment implications for individuals living in extreme social disadvantage. Further research is required to examine the relationship between these classes of experience, consequent impact on adulthood engagement, and future transitions though homelessness.

Complexity, accuracy and practical applicability of different biogeochemical model versions

NASA Astrophysics Data System (ADS)

Los, F. J.; Blaas, M.

2010-04-01

The construction of validated biogeochemical model applications as prognostic tools for the marine environment involves a large number of choices particularly with respect to the level of details of the physical, chemical and biological aspects. Generally speaking, enhanced complexity might enhance veracity, accuracy and credibility. However, very complex models are not necessarily effective or efficient forecast tools. In this paper, models of varying degrees of complexity are evaluated with respect to their forecast skills. In total 11 biogeochemical model variants have been considered based on four different horizontal grids. The applications vary in spatial resolution, in vertical resolution (2DH versus 3D), in nature of transport, in turbidity and in the number of phytoplankton species. Included models range from 15 year old applications with relatively simple physics up to present state of the art 3D models. With all applications the same year, 2003, has been simulated. During the model intercomparison it has been noticed that the 'OSPAR' Goodness of Fit cost function (Villars and de Vries, 1998) leads to insufficient discrimination of different models. This results in models obtaining similar scores although closer inspection of the results reveals large differences. In this paper therefore, we have adopted the target diagram by Jolliff et al. (2008) which provides a concise and more contrasting picture of model skill on the entire model domain and for the entire period of the simulations. Correctness in prediction of the mean and the variability are separated and thus enhance insight in model functioning. Using the target diagrams it is demonstrated that recent models are more consistent and have smaller biases. Graphical inspection of time series confirms this, as the level of variability appears more realistic, also given the multi-annual background statistics of the observations. Nevertheless, whether the improvements are all genuine for the particular
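
The target diagram cited above (Jolliff et al., 2008) separates the bias from the centred, unbiased pattern error. A small helper computing those two statistics, normalised by the observed standard deviation and using the usual sign convention (assumed here), might look like this:

```python
import numpy as np

def target_statistics(model, obs):
    """Normalised bias and signed, normalised unbiased RMSD, the two axes of a target diagram."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = model.mean() - obs.mean()
    m_anom, o_anom = model - model.mean(), obs - obs.mean()
    urmsd = np.sqrt(np.mean((m_anom - o_anom) ** 2))
    sign = 1.0 if model.std() >= obs.std() else -1.0   # marks whether the model over- or under-represents variability
    return bias / obs.std(), sign * urmsd / obs.std()

obs = np.array([1.0, 2.0, 3.0, 2.5, 1.5])
mod = np.array([1.2, 2.4, 2.7, 2.9, 1.3])
print(target_statistics(mod, obs))    # points near the origin indicate higher skill
```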

Assessing Complexity in Learning Outcomes--A Comparison between the SOLO Taxonomy and the Model of Hierarchical Complexity

ERIC Educational Resources Information Center

Stålne, Kristian; Kjellström, Sofia; Utriainen, Jukka

2016-01-01

An important aspect of higher education is to educate students who can manage complex relationships and solve complex problems. Teachers need to be able to evaluate course content with regard to complexity, as well as evaluate students' ability to assimilate complex content and express it in the form of a learning outcome. One model for evaluating…

Modeling the propagation of mobile malware on complex networks

NASA Astrophysics Data System (ADS)

Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

2016-08-01

In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follows the power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that guarantees the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below the unity, and the global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity conduce to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
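
For SIS-type spreading on an uncorrelated heterogeneous network, degree-based mean-field theory puts the epidemic threshold at <k>/<k^2>, which is why heavier-tailed degree distributions (lower power-law exponents) favour propagation, as the abstract above concludes. A short sketch, using an assumed Barabasi-Albert graph as a stand-in for a real mobile contact network:

```python
import networkx as nx
import numpy as np

G = nx.barabasi_albert_graph(n=5000, m=3, seed=1)    # synthetic scale-free contact network
deg = np.array([d for _, d in G.degree()], dtype=float)

# degree-based mean-field result for SIS dynamics: spreading takes off when the
# effective infection rate (infection/recovery) exceeds <k> / <k^2>
threshold = deg.mean() / np.mean(deg ** 2)
print(f"<k> = {deg.mean():.2f}, estimated epidemic threshold = {threshold:.4f}")
```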

Computer models of complex multiloop branched pipeline systems

NASA Astrophysics Data System (ADS)

Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

2013-11-01

This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks, and this method is based on the theory of graphs and two Kirchhoff's laws applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks, when the latter are considered as single hydraulic systems. On the basis of multivariant calculations the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and construction of their new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on an example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
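
In the same spirit as the graph-plus-Kirchhoff formulation described above, the sketch below assembles nodal balance equations for a small looped network and solves for nodal pressures; it uses an invented four-node layout and a linearised flow law (real pipe hydraulics are nonlinear), so it illustrates the formulation rather than the authors' code.

```python
import numpy as np

# toy looped network: (node_i, node_j, conductance) with linearised law q = g * (p_i - p_j)
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 2.0), (0, 3, 1.0), (1, 3, 0.5)]
n = 4
A = np.zeros((n, n))
for i, j, g in edges:               # assemble the nodal (Laplacian-like) conductance matrix
    A[i, i] += g; A[j, j] += g
    A[i, j] -= g; A[j, i] -= g

b = np.zeros(n)
b[0], b[2] = +1.0, -1.0             # unit inflow at node 0, withdrawal at node 2 (Kirchhoff's first law)

A[3, :] = 0.0; A[3, 3] = 1.0; b[3] = 0.0    # fix the pressure datum at node 3
p = np.linalg.solve(A, b)

flows = {(i, j): g * (p[i] - p[j]) for i, j, g in edges}    # branch flow rates
print(p, flows)
```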

Singularities in mixture models and upper bounds of stochastic complexity.

PubMed

Yamazaki, Keisuke; Watanabe, Sumio

2003-09-01

A learning machine which is a mixture of several distributions, for example, a gaussian mixture or a mixture of experts, has a wide range of applications. However, such a machine is a non-identifiable statistical model with a lot of singularities in the parameter space, hence its generalization property is left unknown. Recently an algebraic geometrical method has been developed which enables us to treat such learning machines mathematically. Based on this method, this paper rigorously proves that a mixture learning machine has the smaller Bayesian stochastic complexity than regular statistical models. Since the generalization error of a learning machine is equal to the increase of the stochastic complexity, the result of this paper shows that the mixture model can attain the more precise prediction than regular statistical models if Bayesian estimation is applied in statistical inference.

Modeling active memory: Experiment, theory and simulation

NASA Astrophysics Data System (ADS)

Amit, Daniel J.

2001-06-01

Neuro-physiological experiments on cognitively performing primates are described to argue that strong evidence exists for localized, non-ergodic (stimulus specific) attractor dynamics in the cortex. The specific phenomena are delay activity distributions-enhanced spike-rate distributions resulting from training, which we associate with working memory. The anatomy of the relevant cortex region and the physiological characteristics of the participating elements (neural cells) are reviewed to provide a substrate for modeling the observed phenomena. Modeling is based on the properties of the integrate-and-fire neural element in presence of an input current of Gaussian distribution. Theory of stochastic processes provides an expression for the spike emission rate as a function of the mean and the variance of the current distribution. Mean-field theory is then based on the assumption that spike emission processes in different neurons in the network are independent, and hence the input current to a neuron is Gaussian. Consequently, the dynamics of the interacting network is reduced to the computation of the mean and the variance of the current received by a cell of a given population in terms of the constitutive parameters of the network and the emission rates of the neurons in the different populations. Within this logic we analyze the stationary states of an unstructured network, corresponding to spontaneous activity, and show that it can be stable only if locally the net input current of a neuron is inhibitory. This is then tested against simulations and it is found that agreement is excellent down to great detail. A confirmation of the independence hypothesis. On top of stable spontaneous activity, keeping all parameters fixed, training is described by (Hebbian) modification of synapses between neurons responsive to a stimulus and other neurons in the module-synapses are potentiated between two excited neurons and depressed between an excited and a quiescent neuron
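
A minimal numerical counterpart of the integrate-and-fire element with Gaussian input current described above: simulate the membrane voltage and count threshold crossings. The parameters are arbitrary, and the analytical mean-field rate expression referred to in the abstract is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# leaky integrate-and-fire neuron driven by a Gaussian (white-noise) input
tau_m, v_rest, v_thr, v_reset = 0.020, 0.0, 0.020, 0.010   # s, V
mu, sigma = 0.021, 0.004        # mean and dispersion of the input drive (V)
dt, t_total = 1e-4, 50.0        # time step and simulated duration (s)

v, spikes = v_rest, 0
for _ in range(int(t_total / dt)):
    noise = sigma * np.sqrt(2.0 * tau_m / dt) * rng.standard_normal()
    v += dt / tau_m * (-(v - v_rest) + mu + noise)
    if v >= v_thr:
        v = v_reset
        spikes += 1

print("firing rate: %.1f Hz" % (spikes / t_total))
```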

P-wave complexity in normal subjects and computer models.

PubMed

Potse, Mark; Lankveld, Theo A R; Zeemering, Stef; Dagnelie, Pieter C; Stehouwer, Coen D A; Henry, Ronald M; Linnenbank, André C; Kuijpers, Nico H L; Schotten, Ulrich

2016-01-01

P waves reported in electrocardiology literature uniformly appear smooth. Computer simulation and signal analysis studies have shown much more complex shapes. We systematically investigated P-wave complexity in normal volunteers using high-fidelity electrocardiographic techniques without filtering. We recorded 5-min multichannel ECGs in 16 healthy volunteers. Noise and interference were reduced by averaging over 300 beats per recording. In addition, normal P waves were simulated with a realistic model of the human atria. Measured P waves had an average of 4.1 peaks (range 1-10) that were reproducible between recordings. Simulated P waves demonstrated similar complexity, which was related to structural discontinuities in the computer model of the atria. The true shape of the P wave is very irregular and is best seen in ECGs averaged over many beats. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
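
Averaging time-aligned beats is what reduces noise and interference in the recordings described above (roughly a sqrt(N) improvement for N beats with uncorrelated noise). A generic sketch on synthetic data; the waveform, channel count and noise level are made up.

```python
import numpy as np

def average_beats(signal, beat_onsets, window):
    """Average a (channels x samples) recording over beats aligned at the given onsets."""
    segments = [signal[..., s:s + window] for s in beat_onsets
                if s + window <= signal.shape[-1]]
    return np.mean(segments, axis=0)

rng = np.random.default_rng(0)
template = 0.1 * np.sin(np.linspace(0.0, np.pi, 200))        # made-up 200-sample wave (mV)
one_channel = np.tile(template, 300) + rng.normal(0.0, 0.05, 300 * 200)
sig = np.tile(one_channel, (8, 1))                           # pretend 8-channel recording
onsets = np.arange(0, 300 * 200, 200)

avg = average_beats(sig, onsets, 200)
print(avg.shape)    # (8, 200): one low-noise averaged beat per channel
```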

Electron-impact ionization of neon at low projectile energy: an internormalized experiment and theory for a complex target.

PubMed

Pflüger, Thomas; Zatsarinny, Oleg; Bartschat, Klaus; Senftleben, Arne; Ren, Xueguang; Ullrich, Joachim; Dorn, Alexander

2013-04-12

As a fundamental test for state-of-the-art theoretical approaches, we have studied the single ionization (2p) of neon at a projectile energy of 100 eV. The experimental data were acquired using an advanced reaction microscope that benefits from high efficiency and a large solid-angle acceptance of almost 4π. We put special emphasis on the ability to measure internormalized triple-differential cross sections over a large part of the phase space. The data are compared to predictions from a second-order hybrid distorted-wave plus R-matrix model and a fully nonperturbative B-spline R-matrix (BSR) with pseudostates approach. For a target of this complexity and the low-energy regime, unprecedented agreement between experiment and the BSR model is found. This represents a significant step forward in the investigation of complex targets.

Slip complexity and frictional heterogeneities in dynamic fault models

NASA Astrophysics Data System (ADS)

Bizzarri, A.

2005-12-01

The numerical modeling of earthquake rupture requires the specification of the fault system geometry, the mechanical properties of the media surrounding the fault, the initial conditions and the constitutive law for fault friction. The latter accounts for the fault zone properties and allows for the description of processes of nucleation, propagation, healing and arrest of a spontaneous rupture. In this work I solve the fundamental elasto-dynamic equation for a planar fault, adopting different constitutive equations (slip-dependent and rate- and state-dependent friction laws). We show that the slip patterns may be complicated by different causes. The spatial heterogeneities of constitutive parameters are able to cause the healing of slip, like barrier-healing or slip pulses. Our numerical experiments show that the heterogeneities of the parameter L affect the dynamic rupture propagation and weakly modify the dynamic stress drop and the rupture velocity. The heterogeneity of a and b parameters affects the dynamic rupture propagation in a more complex way: a velocity strengthening area (a > b) can arrest a dynamic rupture, but can be driven to an instability if suddenly loaded by the dynamic rupture front. Our simulations provide a picture of the complex interactions between fault patches having different frictional properties. Moreover, the slip distribution on the fault plane is complicated considering the effects of the rake rotation during the propagation: depending on the position on the fault plane, the orientation of instantaneous total dynamic traction can change with time with respect to the imposed initial stress direction. These temporal rake rotations depend on the amplitude of the initial stress and on its distribution. They also depend on the curvature and direction of the rupture front with respect to the imposed initial stress direction: this explains why rake rotations are mostly located near the rupture front and within the cohesive zone, where the
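
The a and b values above are the parameters of rate- and state-dependent friction; at steady state the law reduces to mu_ss = mu0 + (a - b) ln(V/V0), so a > b gives velocity-strengthening (rupture-arresting) behaviour and a < b velocity weakening. A tiny numerical check with illustrative values:

```python
import numpy as np

def steady_state_friction(v, mu0=0.6, a=0.008, b=0.012, v0=1e-6):
    """Steady-state rate-and-state friction coefficient mu0 + (a - b) * ln(v / v0).
    Here a < b, i.e. a velocity-weakening patch; swap a and b for strengthening."""
    return mu0 + (a - b) * np.log(v / v0)

for v in (1e-9, 1e-6, 1e-3, 1.0):
    print(f"v = {v:.0e} m/s  ->  mu_ss = {steady_state_friction(v):.3f}")
```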

Extensive video-game experience alters cortical networks for complex visuomotor transformations.

PubMed

Granek, Joshua A; Gorbet, Diana J; Sergio, Lauren E

2010-10-01

Using event-related functional magnetic resonance imaging (fMRI), we examined the effect of video-game experience on the neural control of increasingly complex visuomotor tasks. Previously, skilled individuals have demonstrated the use of a more efficient movement control brain network, including the prefrontal, premotor, primary sensorimotor and parietal cortices. Our results extend and generalize this finding by documenting additional prefrontal cortex activity in experienced video gamers planning for complex eye-hand coordination tasks that are distinct from actual video-game play. These changes in activation between non-gamers and extensive gamers are putatively related to the increased online control and spatial attention required for complex visually guided reaching. These data suggest that the basic cortical network for processing complex visually guided reaching is altered by extensive video-game play. Crown Copyright © 2009. Published by Elsevier Srl. All rights reserved.

Force-dependent persistence length of DNA-intercalator complexes measured in single molecule stretching experiments.

PubMed

Bazoni, R F; Lima, C H M; Ramos, E B; Rocha, M S

2015-06-07

By using optical tweezers with an adjustable trap stiffness, we have performed systematic single molecule stretching experiments with two types of DNA-intercalator complexes, in order to investigate the effects of the maximum applied forces on the mechanical response of such complexes. We have explicitly shown that even in the low-force entropic regime the persistence length of the DNA-intercalator complexes is strongly force-dependent, although such behavior is not exhibited by bare DNA molecules. We discuss the possible physicochemical effects that can lead to such results. In particular, we propose that the stretching force can promote partial denaturation on the highly distorted double-helix of the DNA-intercalator complexes, which interfere strongly in the measured values of the persistence length.
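
In the low-force entropic regime, the persistence length is commonly extracted by fitting force-extension data with the Marko-Siggia worm-like chain interpolation formula. The sketch below fits synthetic data with made-up contour and persistence lengths; it illustrates that standard procedure, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

KT = 4.11e-21    # thermal energy at room temperature (J)

def wlc_force(x, lp, l0):
    """Marko-Siggia interpolation: entropic force (N) at extension x (m),
    for persistence length lp and contour length l0."""
    r = x / l0
    return (KT / lp) * (0.25 / (1.0 - r) ** 2 - 0.25 + r)

# synthetic low-force stretching data for a DNA-like polymer (invented numbers)
l0_true, lp_true = 16.5e-6, 45e-9
x = np.linspace(2e-6, 14e-6, 40)
f = wlc_force(x, lp_true, l0_true) * (1 + 0.03 * np.random.default_rng(1).standard_normal(40))

popt, _ = curve_fit(wlc_force, x, f, p0=[50e-9, 16e-6],
                    bounds=([10e-9, 14.5e-6], [200e-9, 25e-6]))
print("fitted persistence length: %.1f nm" % (popt[0] * 1e9))
```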
    We have explicitly shown that even in the low-force entropic regime the persistence length of the DNA-intercalator complexes is strongly force-dependent, although such behavior is not exhibited by bare DNA molecules. We discuss the possible physicochemical effects that can lead to such results. In particular, we propose that the stretching force can promote partial denaturation on the highly distorted double-helix of the DNA-intercalator complexes, which interferes strongly in the measured values of the persistence length.

  Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of values of the parameters compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, this reduces the ranges of the values of the parameters (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the indirect entry and exit to the long-pause state model.
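
    For readers who want to experiment with such schemes, the sketch below is a minimal Gillespie-style simulation of a generic three-state kinetic model (an active state plus a short and a long pause state). The state names and rate constants are illustrative placeholders, not the fitted parameters of the Phi29 study, and force dependence is omitted; the point is only to show how dwell-time statistics follow from an assumed kinetic cycle.

        import random

        # Hypothetical rate constants in 1/s; placeholders, not fitted values.
        RATES = {
            "active":      {"short_pause": 0.5, "long_pause": 0.05},
            "short_pause": {"active": 2.0},
            "long_pause":  {"active": 0.1},
        }

        def gillespie(t_max=1000.0, state="active", seed=1):
            """Simulate the jump process and record dwell times per state."""
            rng = random.Random(seed)
            t, dwells = 0.0, {s: [] for s in RATES}
            while t < t_max:
                out = RATES[state]
                total = sum(out.values())
                dt = rng.expovariate(total)            # exponential waiting time
                dwells[state].append(dt)
                r, acc = rng.random() * total, 0.0     # pick next state ~ its rate
                for nxt, k in out.items():
                    acc += k
                    if r <= acc:
                        state = nxt
                        break
                t += dt
            return dwells

        if __name__ == "__main__":
            for s, times in gillespie().items():
                if times:
                    print(s, "mean dwell %.2f s over %d visits"
                          % (sum(times) / len(times), len(times)))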

  Surface Complexation Modeling of Organic Acid Sorption to Goethite.

    PubMed

    Evanko; Dzombak

    1999-06-15

    Surface complexation modeling was performed using the Generalized Two-Layer Model for a series of low molecular weight organic acids. Sorption of these organic acids to goethite was investigated in a previous study to assess the influence of particular structural features on sorption. Here, the ability to describe the observed sorption behavior for compounds with similar structural features using surface complexation modeling was investigated. A set of surface reactions and equilibrium constants yielding optimal data fits was obtained for each organic acid over a range of total sorbate concentrations. Surface complexation modeling successfully described sorption of a number of the simple organic acids, but an additional hydrophobic component was needed to describe sorption behavior of some compounds with significant hydrophobic character. These compounds exhibited sorption behavior that was inconsistent with ligand exchange mechanisms since sorption did not decrease with increasing total sorbate concentration and/or exceeded surface site saturation. Hydrophobic interactions appeared to be most significant for the compound containing a 5-carbon aliphatic chain. Comparison of optimized equilibrium constants for similar surface species showed that model results were consistent with observed sorption behavior: equilibrium constants were highest for compounds having adjacent carboxylic groups, lower for compounds with adjacent phenolic groups, and lowest for compounds with phenolic groups in the ortho position relative to a carboxylic group. Surface complexation modeling was also performed to fit sorption data for Suwannee River fulvic acid. The data could be described well using reactions and constants similar to those for pyromellitic acid. This four-carboxyl group compound may be useful as a model for fulvic acid with respect to sorption. Other simple organic acids having multiple carboxylic and phenolic functional groups were identified as potential models for humic
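
    A minimal sketch of the mass-action bookkeeping behind such fits may help readers unfamiliar with surface complexation modeling. It assumes a single hypothetical 1:1 surface reaction with a conditional equilibrium constant and ignores the electrostatic correction of the Generalized Two-Layer Model; the constants and concentrations are invented for illustration and are not the fitted values reported in the study.

        import math

        def bound_fraction(K, site_total, ligand_total):
            """
            Fraction of ligand bound for a single 1:1 surface reaction
                >S + L <-> >SL,   K = [>SL] / ([>S][L])
            (conditional constant, electrostatics ignored). Concentrations in mol/L.
            """
            a = K
            b = -(K * site_total + K * ligand_total + 1.0)
            c = K * site_total * ligand_total
            # smaller root of the quadratic is the physically meaningful one
            x = (-b - math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
            return x / ligand_total

        if __name__ == "__main__":
            # Illustrative numbers only: 1e-4 M sites, 1e-5 M organic acid.
            for logK in (3.0, 4.0, 5.0):
                f = bound_fraction(10.0 ** logK, 1e-4, 1e-5)
                print("log K = %.1f  ->  %.1f%% of the acid sorbed" % (logK, 100 * f))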

  Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
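
    For readers unfamiliar with the formalism, the sketch below is a minimal place/transition Petri net with a token-based firing rule. The toy net (data tokens competing for a single processor token) is an invented example, not the architecture model of the paper.

        class PetriNet:
            """Minimal place/transition net with integer token counts."""

            def __init__(self, places, transitions):
                self.marking = dict(places)        # place -> token count
                self.transitions = transitions     # name -> (inputs, outputs)

            def enabled(self, name):
                inputs, _ = self.transitions[name]
                return all(self.marking[p] >= n for p, n in inputs.items())

            def fire(self, name):
                if not self.enabled(name):
                    raise ValueError("transition %r is not enabled" % name)
                inputs, outputs = self.transitions[name]
                for p, n in inputs.items():
                    self.marking[p] -= n
                for p, n in outputs.items():
                    self.marking[p] = self.marking.get(p, 0) + n

        if __name__ == "__main__":
            # Toy dataflow graph: two data items share one processor token.
            net = PetriNet(
                places={"data_ready": 2, "processor": 1, "done": 0},
                transitions={"dispatch": ({"data_ready": 1, "processor": 1},
                                          {"done": 1, "processor": 1})},
            )
            while net.enabled("dispatch"):
                net.fire("dispatch")
            print(net.marking)   # {'data_ready': 0, 'processor': 1, 'done': 2}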

  Ant colonies and foraging line dynamics: Modeling, experiments and computations

    NASA Astrophysics Data System (ADS)

    Rossi, Louis

    2005-11-01

    Ants are one of several types of insects that form robust and complex societies, and as such, provide rich theoretical ground for the exploration and understanding of collective dynamics and the behavioral parameters that drive the dynamics. Many species of ants are nearly or completely blind, so they interact locally through behavioral cues with nearby ants, and through pheromone trails left by other ants. Consistent with biological observation, two populations of ants are modeled, those seeking food and those returning to the nest with food. A simple constitutive model relating ant densities to pheromone concentrations yields a system of equations describing two interacting fluids and predicts left- and right-moving traveling waves. All the model parameters can be reduced to two Froude numbers describing the ratio between a chemical potential and the kinetic energy of the traveling ants. Laboratory experiments on Tetramorium caespitum (L) clearly indicate left and right-moving traveling density waves in agreement with the mathematical model. We focus on understanding the evolutionary utility of the traveling waves, and the optimality of the Froude numbers and other parameters.

  Graphical Models as Surrogates for Complex Ground Motion Models

    NASA Astrophysics Data System (ADS)

    Vogel, K.; Riggelsen, C.; Kuehn, N.; Scherbaum, F.

    2012-04-01

    An essential part of the probabilistic seismic hazard analysis (PSHA) is the ground motion model, which estimates the conditional probability of a ground motion parameter, such as (horizontal) peak ground acceleration or spectral acceleration, given earthquake and site related predictor variables. For a reliable seismic hazard estimation the ground motion model has to keep the epistemic uncertainty small, while the aleatory uncertainty of the ground motion is covered by the model. In regions of well recorded seismicity the most popular modeling approach is to fit a regression function to the observed data, where the functional form is determined by expert knowledge. In regions where we lack a sufficient amount of data, it is popular to fit the regression function to a data set generated by a so-called stochastic model, which distorts the shape of a random time series according to physical principles to obtain a time series with properties that match ground-motion characteristics. The stochastic model does not have nice analytical properties, nor does it come in a form amenable to easy analytical handling and evaluation as needed for PSHA. Therefore a surrogate model, which describes the stochastic model in a more abstract sense (e.g. regression), is often used instead. We show how Directed Graphical Models (DGM) may be seen as a viable alternative to the classical regression approach. They describe a joint probability distribution of a set of variables, decomposing it into a product of (local) conditional probability distributions according to a directed acyclic graph. Graphical models have proven to be an all-round pre/descriptive probabilistic framework for many problems. Their transparent nature is attractive from a domain perspective, allowing for a better understanding, and gives direct insight into the relationships and workings of a system.
    DGMs learn the dependency structure of the parameters from the data and do not need, but can include prior expert

  Fischer and Schrock Carbene Complexes: A Molecular Modeling Exercise

    ERIC Educational Resources Information Center

    Montgomery, Craig D.

    2015-01-01

    An exercise in molecular modeling that demonstrates the distinctive features of Fischer and Schrock carbene complexes is presented. Semi-empirical calculations (PM3) demonstrate the singlet ground electronic state, restricted rotation about the C-Y bond, the positive charge on the carbon atom, and hence, the electrophilic nature of the Fischer…

  Soliton solutions for quintic complex Ginzburg-Landau model

    NASA Astrophysics Data System (ADS)

    Nawaz, B.; Ali, K.; Rizvi, S. T. R.; Younis, M.

    2017-10-01

    In this paper, we find the rational function solution, confluent hypergeometric functions solutions and solitary wave solutions for the quintic complex Ginzburg-Landau (CGLQ) model by using the extended trial equation method. We also find some new solitary wave soliton solutions for the CGLQ equation by using the modified extended tanh-function method.

  Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection.
    Several weighting methods have been proposed in the literature for estimating the parameters of…

  Fitting Meta-Analytic Structural Equation Models with Complex Datasets

    ERIC Educational Resources Information Center

    Wilson, Sandra Jo; Polanin, Joshua R.; Lipsey, Mark W.

    2016-01-01

    A modification of the first stage of the standard procedure for two-stage meta-analytic structural equation modeling for use with large complex datasets is presented.
    This modification addresses two common problems that arise in such meta-analyses: (a) primary studies that provide multiple measures of the same construct and (b) the correlation…

  The Complexity of Developmental Predictions from Dual Process Models

    ERIC Educational Resources Information Center

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  Surface complexation modeling of americium sorption onto volcanic tuff.

    PubMed

    Ding, M; Kelkar, S; Meijer, A

    2014-10-01

    Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. It was assumed, in applying the model to volcanic rocks from Yucca Mountain, that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/kg) values were calculated as a function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants derived in this study allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways.
    Published by Elsevier Ltd.

  Conceptual Complexity, Teaching Style and Models of Teaching.

    ERIC Educational Resources Information Center

    Joyce, Bruce; Weil, Marsha

    The focus of this paper is on the relative roles of personality and training in enabling teachers to carry out the kinds of complex learning models which are envisioned by curriculum reformers in the social sciences.
    The paper surveys some of the major research done in this area and concludes that: 1) Most teachers do not manifest the complex…

  The general theory of the Quasi-reproducible experiments: How to describe the measured data of complex systems?

    NASA Astrophysics Data System (ADS)

    Nigmatullin, Raoul R.; Maione, Guido; Lino, Paolo; Saponaro, Fabrizio; Zhang, Wei

    2017-01-01

    In this paper, we suggest a general theory that enables us to describe experiments associated with reproducible or quasi-reproducible data reflecting the dynamical and self-similar properties of a wide class of complex systems. By a complex system we understand a system for which a model based on microscopic principles and suppositions about the nature of the matter is absent. This microscopic model is usually determined as "the best fit" model. The behavior of the complex system relative to a control variable (time, frequency, wavelength, etc.) can be described in terms of the so-called intermediate model (IM). One can prove that the fitting parameters of the IM are associated with the amplitude-frequency response of the segment of the Prony series. The segment of the Prony series, including the set of the decomposition coefficients and the set of the exponential functions (with k = 1,2,…,K), is limited by the final mode K. The exponential functions of this decomposition depend on time and are found by the original algorithm described in the paper. This approach serves as a logical continuation of the results obtained earlier in the paper [Nigmatullin RR, Zhang W and Striccoli D. General theory of experiment containing reproducible data: The reduction to an ideal experiment. Commun Nonlinear Sci Numer Simul, 27, (2015), pp 175-192] for reproducible experiments and includes the previous results as a partial case. In this paper, we consider a more complex case, when the available data can create short samplings or exhibit some instability during the process of measurements. We give some justified evidence and conditions proving the validity of this theory for the description of a wide class of complex systems in terms of the reduced set of the fitting parameters belonging to the segment of the Prony series. The elimination of uncontrollable factors expressed in the form of the apparatus function is discussed. To illustrate how to apply the theory and take advantage of its
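
    The "segment of the Prony series" is, in essence, a finite sum of exponential terms fitted to the measured record. The sketch below assumes the decay exponents are already known (the paper's algorithm for locating them is not reproduced) and recovers the decomposition coefficients by ordinary linear least squares on synthetic data; all numbers are illustrative.

        import numpy as np

        def fit_prony_segment(t, y, lambdas):
            """
            Least-squares amplitudes a_k for y(t) ~ sum_k a_k * exp(-lambda_k * t),
            with the exponents lambda_k assumed known.
            """
            basis = np.exp(-np.outer(t, lambdas))            # shape (len(t), K)
            coeffs, *_ = np.linalg.lstsq(basis, y, rcond=None)
            return coeffs

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t = np.linspace(0.0, 5.0, 200)
            clean = 2.0 * np.exp(-0.8 * t) + 0.5 * np.exp(-3.0 * t)
            y = clean + 0.01 * rng.standard_normal(t.size)   # noisy, quasi-reproducible record
            print(fit_prony_segment(t, y, np.array([0.8, 3.0])))   # ~ [2.0, 0.5]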

  A random interacting network model for complex networks

    PubMed Central

    Goswami, Bedartha; Shekatkar, Snehal M.; Rheinwalt, Aljoscha; Ambika, G.; Kurths, Jürgen

    2015-01-01

    We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms, degree-based preferential node selection and degree-assortative link placement, are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems. PMID:26657032
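
    The two RAIN steps can be sketched with elementary data structures: degree-proportional node selection followed by a degree-assortative linkage probability. The specific fitness and linkage functions below are simple stand-ins chosen for illustration, not the exact functions calibrated to the airline networks in the paper.

        import random

        def degrees(adj):
            return {u: len(vs) for u, vs in adj.items()}

        def rain_links(adj_a, adj_b, n_links=3, seed=2):
            """
            Sketch of the two RAIN steps:
              (i)  pick one node from each network with probability ~ degree
                   (a stand-in for the selection fitness function);
              (ii) accept the inter-network link with probability given by the
                   similarity of the nodes' normalised degrees (assortative linkage).
            """
            rng = random.Random(seed)
            deg_a, deg_b = degrees(adj_a), degrees(adj_b)
            max_a, max_b = max(deg_a.values()), max(deg_b.values())
            links = set()
            while len(links) < n_links:
                u = rng.choices(list(deg_a), weights=list(deg_a.values()))[0]
                v = rng.choices(list(deg_b), weights=list(deg_b.values()))[0]
                similarity = 1.0 - abs(deg_a[u] / max_a - deg_b[v] / max_b)
                if rng.random() < similarity:          # linkage probability
                    links.add((u, v))
            return links

        if __name__ == "__main__":
            star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}      # hub-dominated toy network
            path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
            print(rain_links(star, path))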

  A perspective on modeling and simulation of complex dynamical systems

    NASA Astrophysics Data System (ADS)

    Åström, K. J.

    2011-09-01

    There has been an amazing development of modeling and simulation from its beginning in the 1920s, when the technology was available only at a handful of university groups who had access to a mechanical differential analyzer. Today, tools for modeling and simulation are available for every student and engineer. This paper gives a perspective on the development with particular emphasis on technology and paradigm shifts. Modeling is increasingly important for design and operation of complex natural and man-made systems. Because of the increased use of model based control such as Kalman filters and model predictive control, models are also appearing as components of feedback systems. Modeling and simulation are multidisciplinary; they are used in a wide variety of fields, and their development has been strongly influenced by mathematics, numerics, computer science and computer technology.

  Bayesian Case-deletion Model Complexity and Information Criterion

    PubMed Central

    Zhu, Hongtu; Ibrahim, Joseph G.; Chen, Qingxia

    2015-01-01

    We establish a connection between Bayesian case influence measures for assessing the influence of individual observations and Bayesian predictive methods for evaluating the predictive performance of a model and comparing different models fitted to the same dataset. Based on such a connection, we formally propose a new set of Bayesian case-deletion model complexity (BCMC) measures for quantifying the effective number of parameters in a given statistical model. Its properties in linear models are explored. Adding some functions of BCMC to a conditional deviance function leads to a Bayesian case-deletion information criterion (BCIC) for comparing models. We systematically investigate some properties of BCIC and its connection with other information criteria, such as the Deviance Information Criterion (DIC). We illustrate the proposed methodology on linear mixed models with simulations and a real data example. PMID:26180578

  Complexity vs. simplicity: groundwater model ranking using information criteria.

    PubMed

    Engelhardt, I; De Aguinaga, J G; Mikat, H; Schüth, C; Liedl, R

    2014-01-01

    A groundwater model characterized by a lack of field data about hydraulic model parameters and boundary conditions, combined with many observation data sets for calibration purposes, was investigated concerning model uncertainty. Seven different conceptual models with a stepwise increase from 0 to 30 adjustable parameters were calibrated using PEST. Residuals, sensitivities, the Akaike information criterion (AIC and AICc), Bayesian information criterion (BIC), and Kashyap's information criterion (KIC) were calculated for a set of seven inverse calibrated models with increasing complexity. Finally, the likelihood of each model was computed. Comparing only residuals of the different conceptual models leads to an overparameterization and certainty loss in the conceptual model approach. The model employing only uncalibrated hydraulic parameters, estimated from sedimentological information, obtained the worst AIC, BIC, and KIC values. Using only sedimentological data to derive hydraulic parameters introduces a systematic error into the simulation results and cannot be recommended for generating a valuable model. For numerical investigations with high numbers of calibration data the BIC and KIC select as optimal a simpler model than the AIC. The model with 15 adjusted parameters was evaluated by AIC as the best option and obtained a likelihood of 98%. The AIC disregards the potential model structure error and the selection of the KIC is, therefore, more appropriate. Sensitivities to piezometric heads were highest for the model with only five adjustable parameters, and sensitivity coefficients were directly influenced by the changes in extracted groundwater volumes.
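
    As a pointer to how such rankings are produced, the sketch below computes AIC, AICc and BIC from a model's residuals under the usual Gaussian-likelihood assumption; KIC is omitted because it additionally requires the posterior parameter covariance. The residuals and parameter counts are synthetic, chosen only to show the parsimony penalty at work.

        import math

        def information_criteria(residuals, n_params):
            """AIC, AICc and BIC for a least-squares fit (Gaussian likelihood)."""
            n = len(residuals)
            sse = sum(r * r for r in residuals)
            log_l = -0.5 * n * (math.log(2.0 * math.pi * sse / n) + 1.0)
            k = n_params + 1                      # +1 for the estimated error variance
            aic = 2.0 * k - 2.0 * log_l
            aicc = aic + 2.0 * k * (k + 1) / (n - k - 1)
            bic = k * math.log(n) - 2.0 * log_l
            return aic, aicc, bic

        if __name__ == "__main__":
            # Same synthetic residuals judged under increasing parameter counts.
            residuals = [0.1 * ((-1) ** i) * (1 + i % 5) for i in range(60)]
            for p in (5, 15, 30):
                aic, aicc, bic = information_criteria(residuals, p)
                print("p=%2d  AIC=%7.1f  AICc=%7.1f  BIC=%7.1f" % (p, aic, aicc, bic))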

  Data-driven approach to decomposing complex enzyme kinetics with surrogate models.

    PubMed

    Calderon, Christopher P

    2009-12-01

    The temporal autocorrelation (AC) function associated with monitoring order parameters characterizing conformational fluctuations of an enzyme is analyzed using a collection of surrogate models. The surrogates considered are phenomenological stochastic differential equation (SDE) models. It is demonstrated how an ensemble of such surrogate models, each surrogate being calibrated from a single trajectory, indirectly contains information about unresolved conformational degrees of freedom. This ensemble can be used to construct complex temporal ACs associated with a "non-Markovian" process. The ensemble of surrogates approach allows researchers to consider models more flexible than a mixture of exponentials to describe relaxation times and at the same time gain physical information about the system. The relevance of this type of analysis to matching single-molecule experiments to computer simulations, and how more complex stochastic processes can emerge from a mixture of simpler processes, is also discussed. The ideas are illustrated on a toy SDE model and on molecular-dynamics simulations of the enzyme dihydrofolate reductase.
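
    A minimal sketch of the "ensemble of surrogates" idea: each trajectory is summarized by a fitted linear SDE (here an Ornstein-Uhlenbeck process estimated by lag-1 regression), and averaging the fitted exponential autocorrelations over the ensemble yields an aggregate AC that is no longer a single exponential. The OU choice, the estimator and all numbers are illustrative assumptions, not the surrogate models calibrated in the paper.

        import numpy as np

        def fit_ou(x, dt):
            """Fit dx = -theta*(x - mu) dt + sigma dW by lag-1 linear regression."""
            x0, x1 = x[:-1], x[1:]
            a, b = np.polyfit(x0, x1, 1)                 # x1 ~ a*x0 + b
            theta = -np.log(a) / dt
            mu = b / (1.0 - a)
            resid = x1 - (a * x0 + b)
            sigma = resid.std() * np.sqrt(2.0 * theta / (1.0 - a ** 2))
            return theta, mu, sigma

        def ensemble_autocorrelation(thetas, lags):
            """Aggregate AC of an ensemble of OU surrogates: a mixture of exponentials."""
            return np.mean([np.exp(-th * lags) for th in thetas], axis=0)

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            dt, n = 0.01, 20000
            trajectories = []
            for theta in (0.5, 5.0):                     # two "conformational states"
                x = np.zeros(n)
                for i in range(1, n):
                    x[i] = x[i-1] - theta * x[i-1] * dt \
                           + 0.3 * np.sqrt(dt) * rng.standard_normal()
                trajectories.append(x)
            fitted = [fit_ou(x, dt)[0] for x in trajectories]
            print("fitted relaxation rates:", np.round(fitted, 2))
            print("mixture AC at lag 1.0:", ensemble_autocorrelation(fitted, np.array([1.0])))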

  APG: an Active Protein-Gene network model to quantify regulatory signals in complex biological systems.

    PubMed

    Wang, Jiguang; Sun, Yidan; Zheng, Si; Zhang, Xiang-Sun; Zhou, Huarong; Chen, Luonan

    2013-01-01

    Synergistic interactions among transcription factors (TFs) and their cofactors collectively determine gene expression in complex biological systems. In this work, we develop a novel graphical model, called the Active Protein-Gene (APG) network model, to quantify regulatory signals of transcription in complex biomolecular networks through integrating both TF upstream-regulation and downstream-regulation high-throughput data. Firstly, we theoretically and computationally demonstrate the effectiveness of APG by comparing with the traditional strategy based only on TF downstream-regulation information. We then apply this model to study spontaneous type 2 diabetic Goto-Kakizaki (GK) and Wistar control rats. Our biological experiments validate the theoretical results. In particular, SP1 is found to be a hidden TF with changed regulatory activity, and the loss of SP1 activity contributes to the increased glucose production during diabetes development. The APG model provides a theoretical basis to quantitatively elucidate transcriptional regulation by modelling TF combinatorial interactions and exploiting multilevel high-throughput information.

  On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  Clients with chronic and complex conditions: their experiences of community nursing services.

    PubMed

    Wilkes, Lesley; Cioffi, Jane; Warne, Bronwyn; Harrison, Kathleen; Vonu-Boriceanu, Oana

    2008-04-01

    This qualitative study aimed to explore and describe clients' experiences of receiving care from community nurses.
    Understanding of the experiences of clients with chronic and complex conditions receiving community nursing care can provide insight into their needs. International studies have identified experiences clients have had of receiving care from community nurses. However, no Australian study was found that had specifically explored, with clients who had chronic and complex conditions, their experiences of receiving care from community nurses in an area health service. A qualitative descriptive study conducted during 2005 explored and described clients' experiences of the nursing care provided by community nurses. A purposive sample of 13 volunteer participants with chronic and complex conditions was interviewed and the transcripts analysed. Three main categories were identified that clients used to describe their experiences. These were: the client's relationship with the nurse, the care process, and being able to stay out of hospital. Clients strongly indicated their satisfaction with care provided by experienced community nurses and acknowledged that nurses are playing a key role in fostering their self-management and avoiding their readmission to hospital. Areas that require further attention are the professional development of less-experienced community nurses, services at the weekend, the scope of nursing management of clients with chronic conditions, and the education needs of community nurses to meet the goals of these clients. This study highlights the need for nurses who work in strong autonomous clinical roles in the community to have experience in assessment, education, planning and delivery of client care before they can be competent community nurses. The possibility of adverse occurrences during weekends provides the opportunity for managers to review and plan weekday and weekend workloads and staffing.

  Modeling Stochastic Complexity in Complex Adaptive Systems: Non-Kolmogorov Probability and the Process Algebra Approach.

    PubMed

    Sulis, William H

    2017-10-01

    Walter Freeman III pioneered the application of nonlinear dynamical systems theories and methodologies in his work on mesoscopic brain dynamics. Sadly, mainstream psychology and psychiatry still cling to linear correlation based data analysis techniques, which threaten to subvert the process of experimentation and theory building. In order to progress, it is necessary to develop tools capable of managing the stochastic complexity of complex biopsychosocial systems, which includes multilevel feedback relationships, nonlinear interactions, chaotic dynamics and adaptability. In addition, however, these systems exhibit intrinsic randomness, non-Gaussian probability distributions, non-stationarity, contextuality, and non-Kolmogorov probabilities, as well as the absence of mean and/or variance and conditional probabilities.
    These properties and their implications for statistical analysis are discussed. An alternative approach, the Process Algebra approach, is described. It is a generative model, capable of generating non-Kolmogorov probabilities. It has proven useful in addressing fundamental problems in quantum mechanics and in the modeling of developing psychosocial systems.

  Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    Erikson, William W.; Schmitt, Robert G.; Atwood, A. I.; Curran, P. D.

    2000-11-27

    -dominated failure mode experienced in the tests. High-pressure burning rates are needed for more detailed post-ignition studies. Sub-models for chemistry, mechanical response and burn dynamics need to be validated against data from less complex experiments. The sub-models can then be used in integrated analysis for comparison with experimental data taken during integrated tests.

  Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs.
    In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  Boolean modeling of collective effects in complex networks

    PubMed Central

    Norrell, Johannes; Socolar, Joshua E. S.

    2009-01-01

    Complex systems are often modeled as Boolean networks in attempts to capture their logical structure and reveal its dynamical consequences. Approximating the dynamics of continuous variables by discrete values and Boolean logic gates may, however, introduce dynamical possibilities that are not accessible to the original system. We show that large random networks of variables coupled through continuous transfer functions often fail to exhibit the complex dynamics of corresponding Boolean models in the disordered (chaotic) regime, even when each individual function appears to be a good candidate for Boolean idealization. A suitably modified Boolean theory explains the behavior of systems in which information does not propagate faithfully down certain chains of nodes. Model networks incorporating calculated or directly measured transfer functions reported in the literature on transcriptional regulation of genes are described by the modified theory. PMID:19658525
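
    A minimal sketch of the kind of random Boolean idealization discussed above: each node is assigned a fixed set of inputs and a random truth table, and the whole network is updated synchronously. Network size, in-degree and the random tables are arbitrary choices made for illustration.

        import random

        def random_boolean_network(n, k, seed=0):
            """Each node gets k random inputs and a random truth table over them."""
            rng = random.Random(seed)
            inputs = [rng.sample(range(n), k) for _ in range(n)]
            tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
            return inputs, tables

        def step(state, inputs, tables):
            """Synchronous update: each node reads its inputs' previous values."""
            new = []
            for ins, table in zip(inputs, tables):
                index = sum(state[j] << b for b, j in enumerate(ins))
                new.append(table[index])
            return new

        if __name__ == "__main__":
            n, k = 12, 2
            inputs, tables = random_boolean_network(n, k)
            state = [random.Random(1).randint(0, 1) for _ in range(n)]
            for t in range(5):
                print(t, state)
                state = step(state, inputs, tables)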
Activity-Dependent Neuronal Model on Complex Networks
PubMed Central
de Arcangelis, Lucilla; Herrmann, Hans J.
2012-01-01

Neuronal avalanches are a novel mode of activity in neuronal networks, experimentally found in vitro and in vivo, and exhibit a robust critical behavior: these avalanches are characterized by a power-law distribution for the size and duration, features found in other problems in the context of the physics of complex systems. We present a recent model inspired by self-organized criticality, which consists of an electrical network with threshold firing, refractory period, and activity-dependent synaptic plasticity. The model reproduces the critical behavior of the distribution of avalanche sizes and durations measured experimentally. Moreover, the power spectra of the electrical signal reproduce very robustly the power-law behavior found in human electroencephalogram (EEG) spectra. We implement this model on a variety of complex networks, i.e., regular, small-world, and scale-free, and verify the robustness of the critical behavior. PMID:22470347

Deciphering the complexity of acute inflammation using mathematical models
PubMed
Vodovotz, Yoram
2006-01-01

Various stresses elicit an acute, complex inflammatory response, leading to healing but sometimes also to organ dysfunction and death. We constructed both equation-based models (EBM) and agent-based models (ABM) of various degrees of granularity, which encompass the dynamics of relevant cells, cytokines, and the resulting global tissue dysfunction, in order to begin to unravel these inflammatory interactions. The EBMs describe and predict various features of septic shock and trauma/hemorrhage (including the response to anthrax, preconditioning phenomena, and irreversible hemorrhage) and were used to simulate anti-inflammatory strategies in clinical trials. The ABMs that describe the interrelationship between inflammation and wound healing yielded insights into intestinal healing in necrotizing enterocolitis, vocal fold healing during phonotrauma, and skin healing in the setting of diabetic foot ulcers. Modeling may help in understanding the complex interactions among the components of inflammation and the response to stress, and therefore aid in the development of novel therapies and diagnostics.
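As a generic illustration of the equation-based modeling (EBM) approach mentioned in the inflammation record above, the sketch below integrates a toy two-variable system (pathogen versus inflammatory mediator). The equations, parameter names, and values are invented for illustration and are not the models described in that work.

    # Toy two-variable equation-based model: pathogen P and inflammatory mediator M.
    # All equations and parameters are illustrative assumptions.
    from scipy.integrate import solve_ivp

    def rhs(t, y, growth=1.0, kill=2.0, activate=1.5, decay=0.8):
        P, M = y
        dP = growth * P * (1.0 - P) - kill * M * P        # pathogen grows, is cleared by M
        dM = activate * P * M / (0.5 + P) - decay * M     # mediator recruited by P, decays
        return [dP, dM]

    solution = solve_ivp(rhs, t_span=(0.0, 20.0), y0=[0.1, 0.05], max_step=0.1)
    print("final state (P, M):", solution.y[:, -1])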
Entropy, complexity, and Markov diagrams for random walk cancer models
NASA Astrophysics Data System (ADS)
Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter
2014-12-01

The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer is associated with a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high-entropy cancers, stomach, uterine, pancreatic, and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low-entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.
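The record above pairs a Markov transition matrix with an entropy measure; the sketch below shows that calculation on a made-up three-site transition matrix. The numbers are placeholders, not the autopsy-derived, anatomical-site matrix used in the paper.

    # Steady-state distribution and Shannon entropy of a small Markov chain.
    import numpy as np

    P = np.array([[0.2, 0.5, 0.3],
                  [0.1, 0.6, 0.3],
                  [0.4, 0.1, 0.5]])   # rows: current site, columns: next site (placeholder)

    eigenvalues, eigenvectors = np.linalg.eig(P.T)
    stationary = np.real(eigenvectors[:, np.argmin(np.abs(eigenvalues - 1.0))])
    stationary = stationary / stationary.sum()       # left eigenvector for eigenvalue 1

    entropy = -np.sum(stationary * np.log(stationary))
    print("steady state:", stationary, " entropy:", entropy)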
Simple charge-transfer model for metallic complexes
PubMed
Ramírez-Ramírez, José-Zeferino; Vargas, Rubicelia; Garza, Jorge; Gázquez, José L.
2010-08-05

In the chemistry of metallic complexes, two important concepts have been used to rationalize the recognition and selectivity of a host by a guest: preorganization and complementarity. Both of these concepts stem from geometrical features.
Less explored in the literature has been the interactional complementarity, where mainly the electronic factors in the intermolecular forces are involved. Because the charge transfer between a species rich in electrons (the ligand) and another deficient in them (the cation) is one of the main intermolecular factors that control the binding energies in metallic complexes, we propose for such systems a simple model based on density functional theory. We define an interactional energy in which the geometrical energy changes are subtracted from the binding energies and only the electronic factors are taken into account. The model is tested for the complexation between bidentate and cyclic ligands and Ca, Pb, and Hg metal dications. The charge-transfer energy described by our model fits nicely with the interactional energy. Thus, when the geometrical changes do not contribute in a significant way to the complexation energy, the interactional energy is dominated by charge-transfer effects.

Quantum scattering model of energy transfer in photosynthetic complexes
NASA Astrophysics Data System (ADS)
Ai, Bao-quan; Zhu, Shi-Liang
2015-12-01

We develop a quantum scattering model to describe exciton transport through the Fenna-Matthews-Olson (FMO) complex. It is found that exciton transport involving the optimal quantum coherence is more efficient than transport involving classical behaviour alone. Furthermore, we find that the quantum resonance condition is easier to fulfil in multiple pathways than in a single pathway. We then demonstrate that the optimal distribution of the pigments, the multitude of energy delivery pathways, and the quantum effects together contribute to the perfect energy transport in the FMO complex.

Complex reaction noise in a molecular quasispecies model
NASA Astrophysics Data System (ADS)
Hochberg, David; Zorzano, María-Paz; Morán, Federico
2006-05-01

We have derived exact Langevin equations for a model of quasispecies dynamics. The inherent multiplicative reaction noise is complex and its statistical properties are specified completely. The numerical simulation of the complex Langevin equations is carried out using the Cholesky decomposition for the noise covariance matrix. This internal noise, which is due to diffusion-limited reactions, produces unavoidable spatio-temporal density fluctuations about the mean-field value.
In two dimensions, this noise strictly vanishes only in the perfectly mixed limit, a situation that is difficult to attain in practice.

Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids
SciTech Connect
Miller, Gregory H.; Forest, Gregory
2014-05-01

We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.

Modeling of Carbohydrate Binding Modules Complexed to Cellulose
SciTech Connect
Nimlos, M. R.; Beckham, G. T.; Bu, L.; Himmel, M. E.; Crowley, M. F.; Bomble, Y. J.
2012-01-01

Modeling results are presented for the interaction of two carbohydrate binding modules (CBMs) with cellulose. The family 1 CBM from Trichoderma reesei's Cel7A cellulase was modeled using molecular dynamics to confirm that this protein selectively binds to the hydrophobic (100) surface of cellulose fibrils and to determine the energetics and mechanisms for locating this surface. Modeling was also conducted of the binding of the family 4 CBM from the CbhA complex of Clostridium thermocellum. There is a cleft in this protein that may accommodate a cellulose chain detached from crystalline cellulose. This possibility is explored using molecular dynamics.

Complex Behavior in Simple Models of Biological Coevolution
NASA Astrophysics Data System (ADS)
Rikvold, Per Arne

We explore the complex dynamical behavior of simple predator-prey models of biological coevolution that account for interspecific and intraspecific competition for resources, as well as adaptive foraging behavior. In long kinetic Monte Carlo simulations of these models we find quite robust 1/f-like noise in species diversity and population sizes, as well as power-law distributions for the lifetimes of individual species and the durations of quiet periods of relative evolutionary stasis. In one model, based on the Holling Type II functional response, adaptive foraging produces a metastable low-diversity phase and a stable high-diversity phase.
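For reference, the Holling Type II functional response named in the record above is usually written in the standard saturating form below, with a the attack rate, h the handling time, and R the resource density (the specific parameterization used in the paper is not given in the abstract):

    \[
      f(R) \;=\; \frac{a R}{1 + a h R}
    \]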
Rethinking the Psychogenic Model of Complex Regional Pain Syndrome: Somatoform Disorders and Complex Regional Pain Syndrome
PubMed Central
Hill, Renee J.; Chopra, Pradeep; Richardi, Toni
2012-01-01

Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338
Development of Conceptual Benchmark Models to Evaluate Complex Hydrologic Model Calibration in Managed Basins Using Python
NASA Astrophysics Data System (ADS)
Hughes, J. D.; White, J.
2013-12-01

For many numerical hydrologic models it is a challenge to quantitatively demonstrate that complex models are preferable to simpler models. Typically, a decision is made to develop and calibrate a complex model at the beginning of a study. The value of selecting a complex model over simpler models is commonly inferred from the use of a model with fewer simplifications of the governing equations, because it can be time consuming to develop another numerical code with data processing and parameter estimation functionality. High-level programming languages like Python can greatly reduce the effort required to develop and calibrate simple models that can be used to quantitatively demonstrate the increased value of a complex model. We have developed and calibrated a spatially distributed surface-water/groundwater flow model for managed basins in southeast Florida, USA, to (1) evaluate the effect of municipal groundwater pumpage on surface-water/groundwater exchange, (2) investigate how the study area will respond to sea-level rise, and (3) explore combinations of these forcing functions. To demonstrate the increased value of this complex model, we developed a two-parameter conceptual-benchmark-discharge model for each basin in the study area. The conceptual-benchmark-discharge model includes seasonal scaling and lag parameters and is driven by basin rainfall. The conceptual-benchmark-discharge models were developed in the Python programming language and used weekly rainfall data. Calibration was implemented with the Broyden-Fletcher-Goldfarb-Shanno method available in the Scientific Python (SciPy) library. Normalized benchmark efficiencies calculated using output from the complex model and the corresponding conceptual-benchmark-discharge model indicate that the complex model has more explanatory power than the simple model driven only by rainfall.
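In the spirit of the two-parameter benchmark model described above (seasonal scaling plus lag, calibrated with BFGS from SciPy), the sketch below fits a deliberately simple rainfall-to-discharge model to synthetic weekly data. The model form, the synthetic data, and the parameter names are assumptions for illustration, not the authors' implementation.

    # Two-parameter benchmark discharge model (seasonal scale + first-order lag),
    # calibrated with BFGS via SciPy. Data and model form are synthetic/illustrative.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    weeks = np.arange(156)
    rainfall = 1.0 + 0.5 * np.sin(2.0 * np.pi * weeks / 52.0) + 0.2 * rng.random(weeks.size)

    def benchmark_discharge(params, rain):
        scale, k_raw = params
        k = 1.0 / (1.0 + np.exp(-k_raw))       # keep the lag (recession) parameter in (0, 1)
        seasonal = scale * (1.0 + 0.5 * np.sin(2.0 * np.pi * np.arange(rain.size) / 52.0))
        forcing = seasonal * rain
        q = np.zeros_like(forcing)
        for t in range(1, forcing.size):
            q[t] = q[t - 1] + k * (forcing[t] - q[t - 1])   # first-order lag
        return q

    observed = benchmark_discharge([0.8, -0.85], rainfall) + 0.05 * rng.standard_normal(weeks.size)

    def sse(params):
        return np.sum((benchmark_discharge(params, rainfall) - observed) ** 2)

    fit = minimize(sse, x0=[1.0, 0.0], method="BFGS")
    scale_hat, k_hat = fit.x[0], 1.0 / (1.0 + np.exp(-fit.x[1]))
    print("calibrated scale and lag:", scale_hat, k_hat)

A benchmark of this kind gives a floor against which the explanatory power of a fully distributed model can be judged, which is the comparison the abstract describes.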
Design of Low Complexity Model Reference Adaptive Controllers
NASA Technical Reports Server (NTRS)
Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan
2012-01-01

Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, but only make testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented, along with details of the controllers' implementations.
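The record above concerns adaptive augmentation of a baseline controller. As a generic, heavily simplified illustration of a model-reference adaptive law (not the flight controllers in that work), the sketch below adapts feedback and feedforward gains for a scalar plant with an unknown pole; the plant, reference model, and adaptation gain are all assumptions.

    # Scalar model-reference adaptive control sketch (illustrative assumptions throughout).
    dt, t_final = 0.01, 10.0
    a_plant, b_plant = 1.0, 1.0       # "unknown" plant: x' = a*x + b*u
    gamma = 5.0                       # adaptation gain
    x, x_ref = 0.0, 0.0
    k_x, k_r = 0.0, 0.0               # adaptive feedback / feedforward gains

    for step in range(int(t_final / dt)):
        t = step * dt
        r = 1.0 if t < 5.0 else -1.0              # square-wave command
        u = k_x * x + k_r * r                     # adaptive control law
        e = x - x_ref                             # tracking error
        k_x += dt * (-gamma * e * x)              # gradient (Lyapunov-style) update laws
        k_r += dt * (-gamma * e * r)
        x += dt * (a_plant * x + b_plant * u)     # plant, forward Euler
        x_ref += dt * (-2.0 * x_ref + 2.0 * r)    # reference model: x_ref' = -2*x_ref + 2*r

    # Matching-condition gains for this plant/reference pair are k_x = -3.0, k_r = 2.0.
    print("adapted gains (k_x, k_r):", k_x, k_r)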
Mathematical and Computational Modeling in Complex Biological Systems
PubMed Central
Li, Wenyang; Zhu, Xiaoliang
2017-01-01

The biological processes and molecular functions involved in cancer progression remain difficult to understand for biologists and clinical doctors. Recent developments in high-throughput technologies urge systems biology to achieve more precise models for complex diseases. Computational and mathematical models are gradually being used to help us understand the omics data produced by high-throughput experimental techniques. The use of computational models in systems biology allows us to explore the pathogenesis of complex diseases, improve our understanding of the latent molecular mechanisms, and promote treatment strategy optimization and new drug discovery. Currently, it is urgent to bridge the gap between the developments of high-throughput technologies and systemic modeling of the biological process in cancer research. In this review, we first examine several typical mathematical modeling approaches for biological systems at different scales and analyze their characteristics, advantages, applications, and limitations. Next, three potential research directions in systems modeling are summarized. To conclude, this review provides an update of important solutions using computational modeling approaches in systems biology. PMID:28386558

Bridging Mechanistic and Phenomenological Models of Complex Biological Systems
PubMed Central
Transtrum, Mark K.; Qiu, Peng
2016-01-01

The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for the adaptation behavior exhibited by the EGFR pathway. From a 48-parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and recovery time of the system, which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics, in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior. PMID:27187545
Development of liquid scintillator containing a zirconium complex for neutrinoless double beta decay experiment
NASA Astrophysics Data System (ADS)
Fukuda, Yoshiyuki; Moriyama, Shigetaka; Ogawa, Izumi
2013-12-01

An organic liquid scintillator containing a zirconium complex has been developed for a new neutrinoless double beta decay experiment. In order to produce a detector that has good energy resolution (4% at 2.5 MeV) and low background (0.1 counts/(t·year)) and that can monitor tons of the target isotope, we chose a zirconium β-diketone complex having high solubility (over 10 wt%) in anisole. However, the absorption peak of the diketone ligand overlaps with the luminescence of anisole, so the light yield of the liquid scintillator decreases in proportion to the concentration of the complex. To avoid this problem, we synthesized a β-keto ester complex by introducing -OC3H7 or -OC2H5 substituent groups into the β-diketone ligand, which shifted the absorption peak to around 245 nm, shorter than the emission peak of anisole (275 nm). However, the shift of the absorption peak depends on the polarity of the scintillation solvent, so a low-polarity solvent must be chosen for the liquid scintillator. We also synthesized a Zr-ODZ complex, which has a high quantum yield (30%) and a good emission wavelength (425 nm) with a solubility of 5 wt% in benzonitrile. However, the absorption peak of the Zr-ODZ complex was around 240 nm. It is therefore better to use a scintillation solvent whose luminescence wavelength is shorter than that of the aromatic solvent.

An efficient fluctuating charge model for transition metal complexes
PubMed
Comba, Peter; Martin, Bodo; Sanyal, Avik
2013-07-05

A fluctuating charge model for transition metal complexes, based on the Hirshfeld partitioning scheme, spectroscopic energy data from the NIST Atomic Spectroscopy Database, and the electronegativity equalization approach, has been developed and parameterized for organic ligands and their high- and low-spin Fe(II) and Fe(III), low-spin Co(III), and Cu(II) complexes, using atom types defined in the Momec force field. Based on large training sets comprising a variety of transition metal complexes, a general parameter set has been developed and independently validated, which allows the efficient computation of geometry-dependent charge distributions in the field of transition metal coordination compounds.
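The fluctuating charge record above rests on electronegativity equalization; the sketch below solves the corresponding constrained minimization for a toy three-atom fragment. The electronegativity, hardness, and Coulomb values are placeholders, not the Momec/NIST-derived parameterization described in the abstract.

    # Electronegativity equalization: minimize E(q) = sum_i (chi_i*q_i + 0.5*eta_i*q_i**2)
    # + sum_{i<j} J_ij*q_i*q_j subject to a fixed total charge. Placeholder parameters.
    import numpy as np

    chi = np.array([4.5, 2.7, 2.7])    # atomic electronegativities (placeholder values)
    eta = np.array([7.0, 6.4, 6.4])    # atomic hardnesses (placeholder values)
    J = np.array([[0.0, 1.2, 1.2],
                  [1.2, 0.0, 0.9],
                  [1.2, 0.9, 0.0]])    # interatomic Coulomb couplings (placeholder values)
    q_total = 0.0                      # net charge of the fragment

    n = chi.size
    H = J + np.diag(eta)
    # Stationarity plus the charge constraint form one linear (KKT) system in (q, lambda).
    A = np.block([[H, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([-chi, [q_total]])
    charges = np.linalg.solve(A, rhs)[:n]
    print("equalized charges:", charges, " sum:", charges.sum())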
An Adaptive Complex Network Model for Brain Functional Networks
PubMed Central
Gomez Portillo, Ignacio J.; Gleiser, Pablo M.
2009-01-01

Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small-world topological structure, characterized by highly integrated modules connected sparsely by long-range links. Recent studies showed that other topological properties, such as the degree distribution and the presence (or absence) of a hierarchical structure, are not robust and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure with a truncated power-law degree distribution. PMID:19738902

Decision dynamics of departure times: Experiments and modeling
NASA Astrophysics Data System (ADS)
Sun, Xiaoyan; Han, Xiao; Bao, Jian-Zhang; Jiang, Rui; Jia, Bin; Yan, Xiaoyong; Zhang, Boyu; Wang, Wen-Xu; Gao, Zi-You
2017-10-01

A fundamental problem in traffic science is to understand the user-choice behaviors that account for the emergence of complex traffic phenomena. Despite much effort devoted to theoretically exploring departure time choice behaviors, relatively large-scale and systematic experimental tests of theoretical predictions are still lacking. In this paper, we aim to offer a more comprehensive understanding of departure time choice behaviors through a series of laboratory experiments under different traffic conditions and feedback information provided to commuters. In the experiments, the number of recruited players is much larger than the number of choices, to better mimic the real scenario in which a large number of commuters depart simultaneously within a relatively small time window. Sufficient numbers of rounds are conducted to ensure the convergence of collective behavior. Experimental results demonstrate that collective behavior is close to the user equilibrium, regardless of the different scales and traffic conditions. Moreover, the amount of feedback information has a negligible influence on collective behavior but has a relatively stronger effect on individual choice behaviors. Reinforcement learning and Fermi learning models are built to reproduce the experimental results and uncover the underlying mechanism. Simulation results are in good agreement with the experimentally observed collective behaviors.
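The departure-time record above fits reinforcement learning and Fermi learning models to the experiments; the sketch below implements only the Fermi (pairwise imitation) rule on a toy congestion game. The payoff function, group size, and noise parameter are illustrative assumptions, not the values fitted in the paper.

    # Fermi-rule imitation dynamics for a toy departure-slot choice game.
    import numpy as np

    rng = np.random.default_rng(42)
    n_players, n_slots, noise = 60, 3, 0.5
    choices = rng.integers(0, n_slots, n_players)

    def payoffs(choices):
        counts = np.bincount(choices, minlength=n_slots)
        return 10.0 - counts[choices]          # payoff drops as a slot gets crowded

    for _ in range(2000):
        pay = payoffs(choices)
        i, j = rng.choice(n_players, size=2, replace=False)
        # Player i copies player j with probability given by the Fermi function.
        p_copy = 1.0 / (1.0 + np.exp(-(pay[j] - pay[i]) / noise))
        if rng.random() < p_copy:
            choices[i] = choices[j]

    print("final slot occupancy:", np.bincount(choices, minlength=n_slots))

Under this congestion-type payoff the occupancy tends to stay near an even split across slots, loosely analogous to the user-equilibrium-like collective behavior reported in the record.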
Simplifying complex clinical element models to encourage adoption
PubMed
Freimuth, Robert R.; Zhu, Qian; Pathak, Jyotishman; Chute, Christopher G.
2014-01-01

Clinical Element Models (CEMs) were developed to provide a normalized form for the exchange of clinical data. The CEM specification is quite complex, and specialized knowledge is required to understand and implement the models, which presents a significant barrier to investigators and study designers. To encourage the adoption of CEMs at the time of data collection and reduce the need for retrospective normalization efforts, we developed an approach that provides a simplified view of CEMs for non-experts while retaining the full semantic detail of the underlying logical models. This allows investigators to approach CEMs through generalized representations that are intended to be more intuitive than the native models, and it permits them to think conceptually about their data elements without worrying about details related to the CEM logical models and syntax. We demonstrate our approach using data elements from the Pharmacogenomics Research Network (PGRN).
The semiotics of control and modeling relations in complex systems
PubMed
Joslyn, C.
2001-01-01

We provide a conceptual analysis of ideas and principles from the systems theory discourse which underlie Pattee's semantic or semiotic closure, which is itself foundational for a school of theoretical biology derived from systems theory and cybernetics, and is now being related to biological semiotics and explicated in the relational biological school of Rashevsky and Rosen. Atomic control systems and models are described as the canonical forms of semiotic organization, sharing measurement relations, but differing topologically in that control systems are circularly and models linearly related to their environments. Computation in control systems is introduced, motivating hierarchical decomposition, hybrid modeling and control systems, and anticipatory or model-based control. The semiotic relations in complex control systems are described in terms of relational constraints, and rules and laws are distinguished as contingent and necessary functional entailments, respectively.
Finally, selection as a meta-level of constraint is introduced as the necessary condition for semantic relations in control systems and models.

RHIC injector complex online model status and plans
SciTech Connect
Schoefer, V.; Ahrens, L.; Brown, K.; Morris, J.; Nemesure, S.
2009-05-04

An online modeling system is being developed for the RHIC injector complex, which consists of the Booster, the AGS, and the transfer lines connecting the Booster to the AGS and the AGS to RHIC. Historically, the injectors have been operated using static values from design specifications or offline model runs, but tighter beam optics constraints required by polarized proton operations (e.g., accelerating with near-integer tunes) have necessitated a more dynamic system. An online model server for the AGS has been implemented using MAD-X [1] as the model engine, with plans to extend the system to the Booster and the injector transfer lines and to add the option of calculating optics using the Polymorphic Tracking Code (PTC [2]) as the model engine.

Mechanistic modeling confronts the complexity of molecular cell biology
PubMed
Phair, Robert D.
2014-11-05

Mechanistic modeling has the potential to transform how cell biologists contend with the inescapable complexity of modern biology. I am a physiologist-electrical engineer-systems biologist who has been working at the level of cell biology for the past 24 years. This perspective aims (1) to convey why we build models, (2) to enumerate the major approaches to modeling and their philosophical differences, (3) to address some recurrent concerns raised by experimentalists, and then (4) to imagine a future in which teams of experimentalists and modelers build, and subject to exhaustive experimental tests, models covering the entire spectrum from molecular cell biology to human pathophysiology.
There is, in my view, no technical obstacle to this future, but it will require some plasticity in the biological research mind-set.

Robotic general surgery experience: a gradual progress from simple to more complex procedures
PubMed
Al-Naami, M.; Anjum, M. N.; Aldohayan, A.; Al-Khayal, K.; Alkharji, H.
2013-12-01

Robotic surgery was introduced at our institution in 2003, and we used a progressive approach advancing from simple to more complex procedures. A retrospective chart review was performed. The cases included totalled 129. Set-up and operative times have improved over time and with experience. Conversion rates to standard laparoscopic or open techniques were 4.7% and 1.6%, respectively. Intraoperative complications (6.2%), blood loss, and hospital stay were directly proportional to complexity. There were no mortalities, and the postoperative complication rate (13.2%) was within accepted norms. Our findings suggest that robot technology is presently most useful in cases tailored toward its advantages, i.e., those confined to a single space, those that require performance of complex tasks, and re-do procedures.

Cx-02 Program, workshop on modeling complex systems
USGS Publications Warehouse
Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.
2003-01-01

This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, geology and geophysics work has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking in terms of complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years into the past. Not only do geologists think in the abstract by virtue of the vast time spans; most of the evidence is out of sight.
A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.

Computational and analytical modeling of cationic lipid-DNA complexes
PubMed
Farago, Oded; Grønbech-Jensen, Niels
2007-05-01

We present a theoretical study of the physical properties of cationic lipid-DNA (CL-DNA) complexes, a promising synthetically based nonviral carrier of DNA for gene therapy. The study is based on a coarse-grained molecular model, which is used in Monte Carlo simulations of mesoscopically large systems over timescales long enough to address experimental reality. In the present work, we focus on the statistical-mechanical behavior of lamellar complexes, which in Monte Carlo simulations self-assemble spontaneously from a disordered random initial state. We measure the DNA interaxial spacing, d(DNA), and the local cationic area charge density, sigma(M), for a wide range of values of the parameter (c) representing the fraction of cationic lipids. For weakly charged complexes (low values of (c)), we find that d(DNA) has a linear dependence on (c)^(-1), which is in excellent agreement with x-ray diffraction experimental data. We also observe, in qualitative agreement with previous Poisson-Boltzmann calculations of the system, large fluctuations in the local area charge density with a pronounced minimum of sigma(M) halfway between adjacent DNA molecules. For highly charged complexes (large (c)), we find moderate charge density fluctuations and observe deviations from the linear dependence of d(DNA) on (c)^(-1). This last result, together with other findings such as the decrease in the effective stretching modulus of the complex and the increased rate at which pores are formed in the complex membranes, is indicative of the gradual loss of mechanical stability of the complex, which occurs when (c) becomes large. We suggest that this may be the origin of the recently observed enhanced transfection efficiency of lamellar CL-DNA complexes at high charge densities, because the completion of the transfection process requires the disassembly of the complex and the release of the DNA into the cytoplasm.
Some of the structural properties of the system are also predicted by a continuum

Fog Forecasting using Synergy between Models of different Complexity: Large-Eddy Simulation, Column modelling and Limited Area Modelling
NASA Astrophysics Data System (ADS)
Steeneveld, G. J.; Masbou, M.; van Heerwaarden, C. C.; Mohr, C.; Schneider, W.; Müller, M.; Bott, A.; Holtslag, A. A. M.
2010-07-01

Fog is a hazardous weather phenomenon with a large impact on the environment and human life. The transportation sector in particular is vulnerable to fog, but fog is also important for agriculture, for leaf-wetness duration in particular, and for humans with asthma or related diseases. In addition, fog and low-level clouds govern to a large extent the radiation balance of the polar regions in summer, and as such fog also influences the regional climate. Hence a thorough understanding of the fog-governing processes is essential. However, due to the complexity and small-scale nature of the relevant physical processes, the current understanding is relatively poor, as is our ability to forecast fog. In order to improve our knowledge, and to identify key deficiencies in current state-of-the-art fog forecasting models, we present an experiment in which the synergy between models of different complexity and observations is used to evaluate model skill. An observed case study (Cabauw, The Netherlands) of a well-developed radiation fog will be innovatively run with a large-eddy simulation model, which allows us to evaluate the key issue of turbulent mixing. In addition, operational and research column models (PAFOG; Duynkerke, 1991) will be employed to evaluate their skill on the local scale, while the limited-area models WRF-NMMFOG (Mueller et al., 2010) and COSMO-FOG will be evaluated for their skill on the regional scale. Special focus will be given to the representation of the boundary-layer vertical structure and turbulence in the latter two model types versus the LES results, with its solid physical ground.

U.S. Geological Survey Groundwater Modeling Software: Making Sense of a Complex Natural Resource
USGS Publications Warehouse
Provost, Alden M.; Reilly, Thomas E.; Harbaugh, Arlen W.; Pollock, David W.
2009-01-01

Computer models of groundwater systems simulate the flow of groundwater, including water levels, and the transport of chemical constituents and thermal energy.
Groundwater models afford hydrologists a framework on which to organize their knowledge and understanding of groundwater systems, and they provide insights water-resources managers need to plan effectively for future water demands. Building on decades of experience, the U.S. Geological Survey (USGS) continues to lead in the development and application of computer software that allows groundwater models to address scientific and management questions of increasing complexity.

GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking
PubMed
Heo, Lim; Lee, Hasup; Seok, Chaok
2016-08-18

Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. The method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex.
GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582

Uranium transport experiments at the intermediate scale: Do more heterogeneous systems create more complex behaviors?
NASA Astrophysics Data System (ADS)
Miller, A. W.; Rodriguez, D.; Honeyman, B.
2010-12-01

With respect to complexity, two things occur as experimental scale increases. The first is that as total system size increases, the heterogeneities at smaller scales are explicitly included while simultaneously allowing for a general increase in total complexity. The second is that model-constraining measurements become more difficult to make. Bench-scale systems limit total complexity; field-scale systems are limited in the amount of characterization that can be completed. Intermediate-scale systems can bridge this gap, allowing for increased complexity relative to the bench scale and better characterization ability relative to the field scale. We have completed three intermediate-scale experiments with a uranium-contaminated sediment from a former uranium mill site near Naturita in southwestern Colorado, USA. Three tanks were packed with various particle size distributions of this sediment. The first two tanks were 2-D in nature and had dimensions of 2.44 m x 1.22 m x 7.62 cm (tank #1, LxHxW) and 2.44 m x 0.61 m x 7.62 cm (tank #2, LxHxW). Tank #3 was 3-D in nature, with dimensions of 2.44 m x 0.61 m x 0.61 m (LxHxW). Tank #1 was packed in a homogeneous manner with only the <2 mm size fraction of sediment. For tank #2 the <2 mm fraction was split into <0.250 mm and >0.250 mm fractions, and these two fractions allowed for a physically heterogeneous packing. Using all three of the previously mentioned size fractions as well as a 0.125-0.250 mm and a 4-12 mm fraction, tank #3 was also packed in a heterogeneous fashion. The masses of sediment used in the three tanks are: tank #1 ~280 kg, tank #2 ~163 kg, and tank #3 ~1160 kg. Flow through all three systems was comparable, and controlled by constant-head boundaries. Three different artificial ground waters (AGW) were used with ionic compositions similar to that found at the field site.
The major distinctions are that AGW #1 was in equilibrium with atmospheric CO2 and had no Si; AGW #2 was in equilibrium with 2% CO2 and had no Si; AGW #3

Model estimation and identification of manual controller objectives in complex tracking tasks
NASA Technical Reports Server (NTRS)
Schmidt, D. K.; Yuan, P. J.
1984-01-01

A methodology is presented for estimating the parameters in an optimal control structural model of the manual controller from experimental data on complex, multi-input/multi-output tracking tasks. Special attention is devoted to estimating the appropriate objective function for the task, as this is considered key to understanding the objectives and strategy of the manual controller. The technique is applied to data from single-input/single-output as well as multi-input/multi-output experiments, and the results are discussed.
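In optimal-control models of the human operator, the objective function referred to in this record is usually taken to be a quadratic cost on tracking error and control activity. A generic form is given below for orientation only; the specific weights and terms used by Schmidt and Yuan are not stated in the abstract:

\[
J \;=\; E\left\{ \lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\Big( \mathbf{y}^{\mathsf T}\mathbf{Q}\,\mathbf{y} \;+\; \mathbf{u}^{\mathsf T}\mathbf{R}\,\mathbf{u} \;+\; \dot{\mathbf{u}}^{\mathsf T}\mathbf{G}\,\dot{\mathbf{u}} \Big)\,dt \right\},
\]

where y collects the displayed tracking errors, u the control inputs, and the weighting matrices Q, R and G are exactly the quantities such an identification procedure tries to estimate from experimental data.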
A Hardware Model Validation Tool for Use in Complex Space Systems
NASA Technical Reports Server (NTRS)
Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.
2010-01-01

One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

Nuclear reaction modeling, verification experiments, and applications
SciTech Connect
Dietrich, F. S.
1995-10-01

This presentation summarized the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scatter Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of the spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds
NASA Astrophysics Data System (ADS)
Schnaiter, M.; Järvinen, E.; Vochezer, P.; Abdelmonem, A.; Wagner, R.; Jourdan, O.; Mioche, G.; Shcherbakov, V. N.; Schmitt, C. G.; Tricoli, U.; Ulanowski, Z.; Heymsfield, A. J.
2015-11-01

This study reports on the origin of ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the -40 to -60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Ice crystal complexity was deduced from measurements of spatially resolved single-particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high ice crystal complexity dominates the microphysics of the simulated clouds and that the degree of this complexity depends on the available water vapour during crystal growth. Indications were found that the crystal complexity is influenced by unfrozen H2SO4/H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the Polar Nephelometer (PN) probe of LaMP and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side- and backward-scattering directions, resulting in low asymmetry parameters g around 0.78. It was found that these functions have a rather low sensitivity to the crystal complexity for ice clouds that were grown under typical atmospheric conditions.
These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds
NASA Astrophysics Data System (ADS)
Schnaiter, Martin; Järvinen, Emma; Vochezer, Paul; Abdelmonem, Ahmed; Wagner, Robert; Jourdan, Olivier; Mioche, Guillaume; Shcherbakov, Valery N.; Schmitt, Carl G.; Tricoli, Ugo; Ulanowski, Zbigniew; Heymsfield, Andrew J.
2016-04-01

This study reports on the origin of small-scale ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the -40 to -60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Small-scale ice crystal complexity was deduced from measurements of spatially resolved single-particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high crystal complexity dominates the microphysics of the simulated clouds and that the degree of this complexity depends on the available water vapor during crystal growth. Indications were found that the small-scale crystal complexity is influenced by unfrozen H2SO4/H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the polar nephelometer (PN) probe of the Laboratoire de Météorologie Physique (LaMP) and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side- and backward-scattering directions. It was found that these functions have a rather low sensitivity to the small-scale crystal complexity for ice clouds that were grown under typical atmospheric conditions.
These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

Using a complex adaptive system lens to understand family caregiving experiences navigating the stroke rehabilitation system.
PubMed
Ghazzawi, Andrea; Kuziemsky, Craig; O'Sullivan, Tracey
2016-10-01

Family caregivers provide the stroke survivor with social support and continuity during the transition home from a rehabilitation facility. In this exploratory study we examined family caregivers' perceptions and experiences navigating the stroke rehabilitation system. The theories of continuity of care and complex adaptive systems were integrated to examine the transition from a stroke rehabilitation facility to the patient's home. This study provides an understanding of the interacting complexities at the macro and micro levels. A convenience sample of family caregivers (n = 14) who provide care for a stroke survivor were recruited 4-12 weeks following the patient's discharge from a stroke rehabilitation facility in Ontario, Canada. Interviews were conducted with family caregivers to examine their perceptions and experiences navigating the stroke rehabilitation system. Directed and inductive content analysis and the theory of Complex Adaptive Systems were used to interpret the perceptions of family caregivers. Health system policies and procedures at the macro level determined the types and timing of information being provided to caregivers, and impacted continuity of care and access to supports and services at the micro level. Supports and services in the community, such as outpatient physiotherapy services, were limited or did not meet the specific needs of the stroke survivors or family caregivers. Relationships with health providers, informational support, and continuity in case management all influence the family caregiving experience and ultimately the quality of care for the stroke survivor during the transition home from a rehabilitation facility.

Mathematic modeling of complex aquifer: Evian Natural Mineral Water case study considering lumped and distributed models.
NASA Astrophysics Data System (ADS)
Henriot, Abel; Blavoux, Bernard; Travi, Yves; Lachassagne, Patrick; Beon, Olivier; Dewandel, Benoit; Ladouche, Bernard
2013-04-01

The Evian Natural Mineral Water (NMW) aquifer is a highly heterogeneous complex of Quaternary glacial deposits composed of three main units, from bottom to top: - The "Inferior Complex", mainly composed of basal and impermeable till lying on the Alpine rocks.
It outcrops only at the higher altitudes but is known at depth through drilled holes. - The "Gavot Plateau Complex", an interstratified complex of mainly basal and lateral till up to 400 m thick. It outcrops at altitudes above approximately 850 m a.m.s.l. and up to 1200 m a.m.s.l. over a 30 km² area, and is the main known recharge area of the hydromineral system. - The "Terminal Complex", from which the Evian NMW emerges at 410 m a.m.s.l. It is composed of sand and gravel kame terraces that allow water to flow from the deep permeable layers of the "Gavot Plateau Complex" to the "Terminal Complex". A thick and impermeable terminal till caps and seals the system, so the aquifer is confined in its downstream area. Because of the heterogeneity and complexity of this hydrosystem, distributed modelling tools are difficult to implement at the whole-system scale: important hypotheses would have to be made about geometry, hydraulic properties and boundary conditions, for example, and extrapolation would no doubt lead to unacceptable errors. Consequently, a modelling strategy is being developed that also helps improve the conceptual model of the hydrosystem. Lumped models, mainly based on tritium time series, allow the whole hydrosystem to be modelled by combining in series an exponential model (superficial aquifers of the "Gavot Plateau Complex"), a dispersive model (Gavot Plateau interstratified complex) and a piston-flow model (sand and gravel of the kame terraces), with mean transit times of 8, 60 and 2.5 years respectively. These models provide insight into the governing parameters of the whole mineral aquifer. They help improve the current conceptual model and are to be refined with other environmental tracers such as CFC and SF6.
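The lumped approach described in this record amounts to convolving a tracer input with transit-time distributions placed in series. The sketch below illustrates that idea only; it is not the authors' code, and the tracer input, time step and dispersion parameter are placeholders:

# Illustrative sketch: lumped-parameter transit-time distributions (TTDs) combined in series.
import numpy as np

dt = 0.1                      # years per time step
t = np.arange(dt, 200, dt)    # support of the TTDs (years)

def exponential_ttd(tau):
    return np.exp(-t / tau) / tau

def dispersion_ttd(tau, pd=0.1):
    # One common form of the lumped "dispersion model" TTD (inverse-Gaussian-like);
    # the dispersion parameter pd is assumed here, not taken from the record.
    return (1.0 / (t * np.sqrt(4 * np.pi * pd * t / tau))) * np.exp(-(1 - t / tau) ** 2 * tau / (4 * pd * t))

def piston_flow_ttd(tau):
    g = np.zeros_like(t)
    g[np.argmin(np.abs(t - tau))] = 1.0 / dt   # delta function centred at t = tau
    return g

# Mean transit times quoted in the record: 8, 60 and 2.5 years.
ttds = [exponential_ttd(8.0), dispersion_ttd(60.0), piston_flow_ttd(2.5)]

# Units in series: the overall TTD is the convolution of the individual TTDs.
g_total = ttds[0]
for g in ttds[1:]:
    g_total = np.convolve(g_total, g)[: len(t)] * dt

# Convolve an (entirely assumed) tritium input chronicle with the overall TTD.
tritium_input = np.ones_like(t)   # placeholder input series, tritium units (TU)
output = np.convolve(tritium_input, g_total)[: len(t)] * dt
print("approximate mean transit time of the combined system (yr):", np.sum(t * g_total) * dt)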
A Corticothalamic Circuit Model for Sound Identification in Complex Scenes
PubMed Central
Otazu, Gonzalo H.; Leibold, Christian
2011-01-01

The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal. PMID:21931668
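One way to write the stated model assumption (a paraphrase for illustration, not the authors' exact notation): if the incoming sound representation at time t is s(t) and the circuit's internal estimate is assembled from a dictionary of source templates d_k with activations a_k(t), the error-coding cortical units carry

\[
e(t) \;=\; s(t) - \hat{s}(t), \qquad \hat{s}(t) \;=\; \sum_{k} a_k(t)\, d_k ,
\]

and recognition corresponds to finding activations that drive this residual toward zero.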
An Ontology for Modeling Complex Inter-relational Organizations
NASA Astrophysics Data System (ADS)
Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market ecosystem that play several roles simultaneously. In such a context, traditional approaches focus on the macro-analytic level of transactions; this is supplemented here with a micro-analytic study of the actors' rationale. First, the paper reviews the enterprise-ontology literature to position our proposal and to expose its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

A new macrocyclic terbium(III) complex for use in RNA footprinting experiments
PubMed Central
Belousoff, Matthew J.; Ung, Phuc; Forsyth, Craig M.; Tor, Yitzhak; Spiccia, Leone; Graham, Bim
2009-01-01

Reaction of terbium triflate with a heptadentate ligand derivative of cyclen, L1 = 2-[7-ethyl-4,10-bis(isopropylcarbamoylmethyl)-1,4,7,10-tetraazacyclododec-1-yl]-N-isopropylacetamide, produced a new synthetic ribonuclease, [Tb(L1)(OTf)(OH2)](OTf)2·MeCN (C1). X-ray crystal structure analysis indicates that the terbium(III) centre in C1 is 9-coordinate, with a capped square-antiprism geometry. Whilst the terbium(III) centre is tightly bound by the L1 ligand, two of the coordination sites are occupied by labile water and triflate ligands. In water, the triflate ligand is likely to be displaced, forming [Tb(L1)(OH2)2]3+, which is able to effectively promote RNA cleavage. This complex greatly accelerates the rate of intramolecular transesterification of an activated model RNA phosphodiester, uridine-3′-p-nitrophenylphosphate (UpNP), with kobs = 5.5(1) × 10^-2 s^-1 at 21 °C and pH 7.5, corresponding to an apparent second-order rate constant of 277(5) M^-1 s^-1. By contrast, the analogous complex of an octadentate derivative of cyclen featuring only a single labile coordination site, [Tb(L2)(OH2)](OTf)3 (C2), where L2 = 2-[4,7,10-tris(isopropylcarbamoylmethyl)-1,4,7,10-tetraazacyclododec-1-yl]-N-isopropylacetamide, is inactive.
[Tb(L1)(OH2)2]3+ is also capable of hydrolyzing short transcripts of the HIV-1 transactivation response (TAR) element, the HIV-1 dimerization initiation site (DIS) and the ribosomal A-site, as well as formyl methionine transfer RNA (tRNAfMet), albeit at a considerably slower rate than UpNP transesterification (kobs = 2.78(8) × 10^-5 s^-1 for TAR cleavage at 37 °C, pH 6.5, corresponding to an apparent second-order rate constant of 0.56(2) M^-1 s^-1). Cleavage is concentrated at the single-stranded "bulge" regions of these RNA motifs. Exploiting this selectivity, [Tb(L1)(OH2)2]3+ was successfully employed in footprinting experiments, in which binding of the Tat peptide and neomycin B to the bulge region of the TAR stem-loop was confirmed. PMID:19119812
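As a quick consistency check on the kinetics quoted above (my own back-of-the-envelope arithmetic, assuming pseudo-first-order conditions with the complex in excess over the substrate), the ratio of the observed first-order constant to the apparent second-order constant recovers the complex concentration used:

\[
[\mathrm{complex}] \;\approx\; \frac{k_{\mathrm{obs}}}{k_2} \;=\; \frac{5.5\times10^{-2}\ \mathrm{s^{-1}}}{277\ \mathrm{M^{-1}\,s^{-1}}} \;\approx\; 2\times10^{-4}\ \mathrm{M},
\]

i.e. roughly 0.2 mM for the UpNP experiments; the TAR numbers give 2.78×10^-5 / 0.56 ≈ 5×10^-5 M, implying a lower complex concentration in the RNA-cleavage experiments.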
New macrocyclic terbium(III) complex for use in RNA footprinting experiments.
PubMed
Belousoff, Matthew J.; Ung, Phuc; Forsyth, Craig M.; Tor, Yitzhak; Spiccia, Leone; Graham, Bim
2009-01-28

Reaction of terbium triflate with a heptadentate ligand derivative of cyclen, L1 = 2-[7-ethyl-4,10-bis(isopropylcarbamoylmethyl)-1,4,7,10-tetraazacyclododec-1-yl]-N-isopropylacetamide, produced a new synthetic ribonuclease, [Tb(L1)(OTf)(OH2)](OTf)2·MeCN (C1). X-ray crystal structure analysis indicates that the terbium(III) center in C1 is 9-coordinate, with a capped square-antiprism geometry. While the terbium(III) center is tightly bound by the L1 ligand, two of the coordination sites are occupied by labile water and triflate ligands. In water, the triflate ligand is likely to be displaced, forming [Tb(L1)(OH2)2]3+, which is able to effectively promote RNA cleavage. This complex greatly accelerates the rate of intramolecular transesterification of an activated model RNA phosphodiester, uridine-3'-p-nitrophenylphosphate (UpNP), with kobs = 5.5(1) × 10^-2 s^-1 at 21 °C and pH 7.5, corresponding to an apparent second-order rate constant of 277(5) M^-1 s^-1. By contrast, the analogous complex of an octadentate derivative of cyclen featuring only a single labile coordination site, [Tb(L2)(OH2)](OTf)3 (C2), where L2 = 2-[4,7,10-tris(isopropylcarbamoylmethyl)-1,4,7,10-tetraazacyclododec-1-yl]-N-isopropylacetamide, is inactive. [Tb(L1)(OH2)2]3+ is also capable of hydrolyzing short transcripts of the HIV-1 transactivation response (TAR) element, HIV-1 dimerization initiation site (DIS) and ribosomal A-site, as well as formyl methionine tRNA (tRNAfMet), albeit at a considerably slower rate than UpNP transesterification (kobs = 2.78(8) × 10^-5 s^-1 for TAR cleavage at 37 °C, pH 6.5, corresponding to an apparent second-order rate constant of 0.56(2) M^-1 s^-1). Cleavage is concentrated at the single-stranded "bulge" regions of these RNA motifs. Exploiting this selectivity, [Tb(L1)(OH2)2]3+ was successfully employed in footprinting experiments, in which binding of the Tat peptide and neomycin B to the bulge region of the

Polygonal Shapes Detection in 3D Models of Complex Architectures
NASA Astrophysics Data System (ADS)
Benciolini, G. B.; Vitti, A.
2015-02-01

A sequential application of two global models defined in a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) first-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation, derived by a specific processing of the auxiliary function, are then processed using the Blake and Zisserman (1987) second-order variational model in dimension one (edges are processed in the plane). This second step permits the edges of an object to be described by a piecewise almost-linear approximation of the input edges themselves, and detects sharp changes in the first derivative of the edges so that corners are detected. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension for the refinement of the description of the edges. The selection, among all the boundaries detected by the Mumford-Shah model, of those that present a shape close to a polygon is performed by considering only those boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The output of the procedure are hence shapes, coming from 3D geometric data, that can be considered as polygons. The application of the procedure is suitable for, but not limited to, the detection of objects such as the footprints of polygonal buildings, building facade boundaries or window contours. The procedure is applied to a height model of the building of the Engineering
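For reference, the functional minimized in the first step of the procedure above is usually written in the following standard textbook form (this is the generic formulation, not copied from the paper): the Mumford-Shah energy of a piecewise-smooth approximation u of the height data g with discontinuity set K is

\[
E(u,K) \;=\; \int_{\Omega\setminus K} (u-g)^2 \, dx \;+\; \lambda \int_{\Omega\setminus K} |\nabla u|^2 \, dx \;+\; \alpha\, \mathcal{H}^1(K),
\]

where the last term penalizes the total length of the discontinuity set and λ, α are weights; in the variational approximations used in practice, K is replaced by the auxiliary function mentioned in the abstract.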
A radio-frequency sheath model for complex waveforms
SciTech Connect
Turner, M. M.; Chabert, P.
2014-04-21

Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single-frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.

Modeling the Classic Meselson and Stahl Experiment.
ERIC Educational Resources Information Center
D'Agostino, JoBeth
2001-01-01

Points out the importance of molecular models in biology and chemistry. Presents a laboratory activity on DNA. Uses different colored wax strips to represent "heavy" and "light" DNA, cesium chloride for identification of small density differences, and three different liquids with varying densities to model gradient…
Mathematical Modeling: Are Prior Experiences Important?
ERIC Educational Resources Information Center
Czocher, Jennifer A.; Moss, Diana L.
2017-01-01

Why are math modeling problems the source of such frustration for students and teachers? The conceptual understanding that students have when engaging with a math modeling problem varies greatly. They need opportunities to make their own assumptions and design the mathematics to fit these assumptions (CCSSI 2010). Making these assumptions is part…

A computational framework for modeling targets as complex adaptive systems
NASA Astrophysics Data System (ADS)
Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh
2017-05-01

Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and, more importantly, the various emergent behaviors displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real-world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well-defined operations.
Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets, with a focus on methodologies for quantifying NCO performance metrics.

A Hybridization Model for the Plasmon Response of Complex Nanostructures
NASA Astrophysics Data System (ADS)
Prodan, E.; Radloff, C.; Halas, N. J.; Nordlander, P.
2003-10-01

We present a simple and intuitive picture, an electromagnetic analog of molecular orbital theory, that describes the plasmon response of complex nanostructures of arbitrary shape. Our model can be understood as the interaction or "hybridization" of elementary plasmons supported by nanostructures of elementary geometries. As an example, the approach is applied to the important case of a four-layer concentric nanoshell, where the hybridization of the plasmons of the inner and outer nanoshells determines the resonant frequencies of the multilayer nanostructure.

A hybridization model for the plasmon response of complex nanostructures.
PubMed
Prodan, E.; Radloff, C.; Halas, N. J.; Nordlander, P.
2003-10-17

We present a simple and intuitive picture, an electromagnetic analog of molecular orbital theory, that describes the plasmon response of complex nanostructures of arbitrary shape. Our model can be understood as the interaction or "hybridization" of elementary plasmons supported by nanostructures of elementary geometries.
As an example, the approach is applied to the important case of a four-layer concentric nanoshell, where the hybridization of the plasmons of the inner and outer nanoshells determines the resonant frequencies of the multilayer nanostructure.
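To make the hybridization picture concrete: for the simplest case of a single metallic nanoshell (inner radius a, outer radius b), the sphere and cavity plasmons mix into bonding and antibonding modes. The frequencies are commonly quoted in the form below; this is given from memory as an illustration of the idea, not taken from the abstracts above, and should be checked against the original nanoshell literature:

\[
\omega_{l\pm}^{2} \;=\; \frac{\omega_{B}^{2}}{2}\left[\,1 \pm \frac{1}{2l+1}\sqrt{\,1+4\,l(l+1)\left(\frac{a}{b}\right)^{2l+1}}\,\right],
\]

where ω_B is the bulk plasmon frequency and l the multipolar order; the four-layer structure discussed in these records repeats the same mixing between the inner and outer shell plasmons.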
CFD Modeling of Local Scour under Complex Free Surface Flow
NASA Astrophysics Data System (ADS)
Bihs, Hans; Ahmad, Nadeem; Kamath, Arun; Arntsen, Øivind A.
2017-04-01

In the present study the open-source three-dimensional numerical model REEF3D is used to calculate the complex free surface flow over a spillway, the corresponding hydraulic jump downstream of the spillway, and the resulting local scour. The numerical results are compared with experimental data. The transcritical flow changes from supercritical to subcritical after the hydraulic structure, which results in the hydraulic jump. The flow of the hydraulic jump is characterised by its violent nature and the large amount of turbulence production. While the downstream area of a spillway is typically protected by a concrete apron, scour can still occur downstream of this protection. REEF3D has advanced interface-capturing capabilities, with which it is possible to simulate the complex free surface dynamics. With the level set method, the free surface is modeled as the zero level set of a scalar signed distance function. The flow velocities are calculated together with the pressure on a staggered grid, ensuring a tight velocity-pressure coupling. Complex geometries are modeled with a ghost cell immersed boundary method. The convective terms of the momentum equations, the level set function and the equations of the k-ω turbulence model are discretized with the fifth-order finite difference WENO scheme. Parallelization of the numerical scheme is achieved by using the domain decomposition framework together with the MPI library. The topography of the sediment bed is implicitly described by a level set function. Based on bedload and suspended load transport formulations, the sediment continuity defect in the bed cells is converted into the rate of change of the vertical bed elevation. This strategy has two major advantages: the topography is a well-defined surface when calculating the incipient motion on the sloping bed and the sand avalanche. In addition, the numerically error-prone re-meshing can be avoided, because the complex boundary surface is accounted for by the immersed
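The level-set representation used here for both the free surface and the sediment bed follows the standard formulation; the notation below is generic and given for orientation, not taken from REEF3D's documentation:

\[
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0, \qquad |\nabla\phi| = 1, \qquad
\phi(\mathbf{x},t)\;
\begin{cases}
> 0 & \text{on one side of the interface,}\\
= 0 & \text{on the interface,}\\
< 0 & \text{on the other side,}
\end{cases}
\]

so the interface is transported with the local flow velocity while the signed-distance property is maintained by periodic reinitialization.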
Modeling Dynamic Perceptual Attention in Complex Virtual Environments
DTIC Science & Technology
Kim, Youngjun; van Velsen, Martin; Hill, Randall W., Jr.
2005-01-01

Spatial cognition, and especially spatial attention, has allowed humans to make ... At any point in time, the virtual human must recognize which object is the most salient among those ...

Experience With Bayesian Image Based Surface Modeling
NASA Technical Reports Server (NTRS)
Stutz, John C.
2005-01-01

Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.

Accelerating the connection between experiments and models: The FACE-MDS experience
NASA Astrophysics Data System (ADS)
Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.
2014-12-01

The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were that measurement protocols were not consistent across different experiments, data were not well organized for model input, and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project was very successful in using data from the Duke and ORNL FACE experiments to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia, the AmazonFACE experiment in a primary tropical forest in Brazil, and a mature oak woodland in England. Cross-site science questions are being developed that will have a
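For context on the stomatal conductance result mentioned above: the abstract does not name the specific formulation, but a widely used choice in land surface models is a Ball-Berry-type relation, shown here only as an illustration of the kind of model being tested:

\[
g_s \;=\; g_0 \;+\; g_1\,\frac{A\,h_s}{c_s},
\]

where g_s is stomatal conductance, A net assimilation, h_s relative humidity and c_s the CO2 concentration at the leaf surface, with g_0 and g_1 as empirically fitted parameters.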
Determination of Equilibrium Constants of Metal Complexes from Spectrophotometric Measurements. An Undergraduate Laboratory Experiment
NASA Astrophysics Data System (ADS)
Ibañez, Gabriela A.; Olivieri, Alejandro C.; Escandar, Graciela M.
1999-09-01

We describe an undergraduate laboratory exercise involving the determination of complex equilibrium constants by spectrophotometric techniques. The results are obtained through model fitting using a computer program. As an example of these determinations, salicylic acid was selected and evaluated in the presence of copper(II) ion. The experimental conditions, general procedures, and computational stratagem are discussed.
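A minimal sketch of the kind of model fitting such an experiment relies on, here for a hypothetical 1:1 metal-ligand complex with synthetic single-wavelength absorbance data (the data and parameter values are invented for illustration and are not from the article):

# Fit a 1:1 complex formation constant K and the complex's molar absorptivity
# from absorbance measurements at a single wavelength (synthetic example).
import numpy as np
from scipy.optimize import curve_fit

L_total = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 8.0]) * 1e-3   # total ligand (M), placeholder titration points
M_total = 1.0e-4                                             # total metal (M), assumed constant

def absorbance(L, K, eps_ML, path=1.0):
    # Equilibrium M + L <=> ML with K = [ML] / ([M][L]); solve the quadratic
    # for [ML] from the mass balances (no excess-ligand approximation).
    a = K
    b = -(K * (M_total + L) + 1.0)
    c = K * M_total * L
    ML = (-b - np.sqrt(b ** 2 - 4 * a * c)) / (2 * a)
    return eps_ML * ML * path

# Synthetic "measured" data generated with K = 5e3 M^-1 and eps = 1.2e3 M^-1 cm^-1.
rng = np.random.default_rng(0)
A_obs = absorbance(L_total, 5e3, 1.2e3) + rng.normal(0.0, 2e-4, L_total.size)

(K_fit, eps_fit), _ = curve_fit(absorbance, L_total, A_obs, p0=[1e3, 1e3])
print(f"fitted K = {K_fit:.3g} M^-1, eps = {eps_fit:.3g} M^-1 cm^-1")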
Termination of Multipartite Graph Series Arising from Complex Network Modelling
NASA Astrophysics Data System (ADS)
Latapy, Matthieu; Phan, Thi Ha Duong; Crespelle, Christophe; Nguyen, Thanh Qui

An intense activity is nowadays devoted to the definition of models capturing the properties of complex networks. Among the most promising approaches, it has been proposed to model these graphs via their clique incidence bipartite graphs. However, this approach has, until now, severe limitations resulting from its incapacity to reproduce a key property of this object: the overlapping nature of cliques in complex networks. In order to get rid of these limitations, we propose to encode the structure of clique overlaps in a network through a process consisting in iteratively factorising the maximal bicliques between the upper level and the other levels of a multipartite graph. We show that the most natural definition of this factorising process leads to infinite series for some instances. Our main result is to design a restriction of this process that terminates for any arbitrary graph. Moreover, we show that the resulting multipartite graph has remarkable combinatorial properties and is closely related to another fundamental combinatorial object. Finally, we show that, in practice, this multipartite graph is computationally tractable and has a size that makes it suitable for complex network modelling.

Model Experiment of Two-Dimensional Brownian Motion by Microcomputer
ERIC Educational Resources Information Center
Mishima, Nobuhiko; And Others
1980-01-01

Describes the use of a microcomputer in studying a model experiment (Brownian particles colliding with thermal particles). A flow chart and program for the experiment are provided. Suggests that this experiment may foster a deepened understanding through mutual dialog between the student and computer. (SK)

PeTTSy: a computational tool for perturbation analysis of complex systems biology models.
PubMed
Domijan, Mirela; Brown, Paul E.; Shulgin, Boris V.; Rand, David A.
2016-03-10

Over the last decade, sensitivity analysis techniques have been shown to be very useful for analysing complex and high-dimensional systems biology models. However, many of the currently available toolboxes have either used parameter sampling, been focused on a restricted set of model observables of interest, studied optimisation of an objective function, or have not dealt with multiple simultaneous model parameter changes where the changes can be permanent or temporary. Here we introduce our new, freely downloadable toolbox, PeTTSy (Perturbation Theory Toolbox for Systems). PeTTSy is a package for MATLAB which implements a wide array of techniques for the perturbation theory and sensitivity analysis of large and complex ordinary differential equation (ODE) based models. PeTTSy is a comprehensive modelling framework that introduces a number of new approaches and that fully addresses analysis of oscillatory systems. It examines sensitivity analysis of the models to perturbations of parameters, where the perturbation timing, strength, length and overall shape can be controlled by the user. This can be done in a system-global setting, namely, the user can determine how many parameters to perturb, by how much and for how long. PeTTSy also offers the user the ability to explore the effect of the parameter perturbations on many different types of outputs: period, phase (timing of peak) and model solutions. PeTTSy can be employed on a wide range of mathematical models, including free-running and forced oscillators and signalling systems. To enable experimental optimisation using the Fisher Information Matrix, it efficiently allows one to combine multiple variants of a model (i.e. a model with multiple experimental conditions) in order to determine the value of new experiments. It is especially useful in the analysis of large and complex models involving many variables and parameters. PeTTSy is a comprehensive tool for analysing large and complex models of regulatory and
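PeTTSy itself is a MATLAB package; the short Python sketch below is not its API but illustrates the simplest version of the underlying idea, a finite-difference sensitivity of an oscillator's period to a parameter perturbation (the model, parameter values and perturbation size are arbitrary choices for illustration):

# Finite-difference sensitivity of a limit-cycle oscillator's period to a parameter.
# Illustrative only; PeTTSy implements far more general (and efficient) machinery.
import numpy as np
from scipy.integrate import solve_ivp

def van_der_pol(t, y, mu):
    return [y[1], mu * (1 - y[0] ** 2) * y[1] - y[0]]

def period(mu):
    # Integrate long enough to reach the limit cycle, then estimate the period
    # from upward zero crossings of the first state variable.
    sol = solve_ivp(van_der_pol, (0, 200), [2.0, 0.0], args=(mu,),
                    dense_output=True, rtol=1e-8, atol=1e-10)
    t = np.linspace(100, 200, 20000)            # discard the transient before t = 100
    x = sol.sol(t)[0]
    crossings = t[1:][(x[:-1] < 0) & (x[1:] >= 0)]
    return np.mean(np.diff(crossings))

mu0, d_mu = 1.0, 1e-3
dT_dmu = (period(mu0 + d_mu) - period(mu0 - d_mu)) / (2 * d_mu)
print(f"period at mu={mu0}: {period(mu0):.4f}, dT/dmu ~ {dT_dmu:.4f}")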
A Simple Model for Complex Dynamical Transitions in Epidemics
NASA Astrophysics Data System (ADS)
Earn, David J. D.; Rohani, Pejman; Bolker, Benjamin M.; Grenfell, Bryan T.
2000-01-01

Dramatic changes in patterns of epidemics have been observed throughout this century. For childhood infectious diseases such as measles, the major transitions are between regular cycles and irregular, possibly chaotic epidemics, and from regionally synchronized oscillations to complex, spatially incoherent epidemics. A simple model can explain both kinds of transitions as the consequences of changes in birth and vaccination rates. Measles is a natural ecological system that exhibits different dynamical transitions at different times and places, yet all of these transitions can be predicted as bifurcations of a single nonlinear model.
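The "single nonlinear model" in this record is a standard seasonally forced compartmental model; the sketch below shows the generic ingredients (a seasonally forced SIR model in which the birth rate and vaccination uptake are the control parameters) and is a schematic illustration, not the authors' parameterization:

# Seasonally forced SIR model; lowering the birth rate or raising vaccination uptake
# changes the effective recruitment of susceptibles, the mechanism the record invokes.
import numpy as np
from scipy.integrate import solve_ivp

beta0, beta1 = 500.0, 0.08    # mean and seasonal amplitude of transmission (1/yr), assumed
gamma = 365.0 / 13            # recovery rate (1/yr), ~13-day infectious period, assumed
mu = 0.02                     # per-capita birth and death rate (1/yr), assumed
p_vacc = 0.0                  # fraction of newborns vaccinated

def sir(t, y):
    S, I = y                                             # susceptible and infective fractions
    beta = beta0 * (1 + beta1 * np.cos(2 * np.pi * t))   # school-term-like seasonal forcing
    births = mu * (1 - p_vacc)                           # vaccinated newborns never enter S
    dS = births - beta * S * I - mu * S
    dI = beta * S * I - (gamma + mu) * I
    return [dS, dI]

sol = solve_ivp(sir, (0, 100), [0.056, 5e-4], rtol=1e-8, atol=1e-10)
# Re-running with different mu or p_vacc shifts the attractor between annual, biennial
# and irregular epidemics, i.e. the bifurcations referred to in the abstract.
print("final susceptible and infective fractions:", sol.y[:, -1])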
The evaluative imaging of mental models - Visual representations of complexity
NASA Technical Reports Server (NTRS)
Dede, Christopher
1989-01-01

The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem-solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can best be managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.