Science.gov

Sample records for complexes model experiments

  1. Synchronization Experiments With A Global Coupled Model of Intermediate Complexity

    NASA Astrophysics Data System (ADS)

    Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin

    2013-04-01

    In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solutions of all other models in the ensemble. The goal is to obtain, through a proper choice of connection strengths, a synchronized state that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted-average surface fluxes. In particular, we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble-mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
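
    As an aside for readers unfamiliar with the nudging construction, the sketch below connects two deliberately imperfect Lorenz-63 models through linear nudging terms so that they synchronize with each other; a "true" trajectory is integrated alongside for comparison. All parameter values and the connection strength C are illustrative assumptions, not the settings used in the study.

      # Minimal sketch of the super-modeling nudging idea with two imperfect
      # Lorenz-63 models (illustrative parameters only).
      import numpy as np

      def lorenz(state, sigma, rho, beta):
          x, y, z = state
          return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

      def step(state, rhs, dt=0.01):
          # classical fourth-order Runge-Kutta step
          k1 = rhs(state)
          k2 = rhs(state + 0.5 * dt * k1)
          k3 = rhs(state + 0.5 * dt * k2)
          k4 = rhs(state + dt * k3)
          return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

      truth = np.array([1.0, 1.0, 1.0])
      m1, m2 = truth.copy(), truth.copy()
      C = 1.0  # assumed connection (nudging) strength

      for _ in range(5000):
          truth = step(truth, lambda s: lorenz(s, 10.0, 28.0, 8.0 / 3.0))
          # each imperfect model is nudged toward the other model's state
          m1 = step(m1, lambda s: lorenz(s, 13.0, 28.0, 8.0 / 3.0) + C * (m2 - s))
          m2 = step(m2, lambda s: lorenz(s, 10.0, 24.0, 8.0 / 3.0) + C * (m1 - s))

      print("truth:", truth, "ensemble mean:", 0.5 * (m1 + m2))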

  2. Historical and idealized climate model experiments: an intercomparison of Earth system models of intermediate complexity

    NASA Astrophysics Data System (ADS)

    Eby, M.; Weaver, A. J.; Alexander, K.; Zickfeld, K.; Abe-Ouchi, A.; Cimatoribus, A. A.; Crespin, E.; Drijfhout, S. S.; Edwards, N. R.; Eliseev, A. V.; Feulner, G.; Fichefet, T.; Forest, C. E.; Goosse, H.; Holden, P. B.; Joos, F.; Kawamiya, M.; Kicklighter, D.; Kienert, H.; Matsumoto, K.; Mokhov, I. I.; Monier, E.; Olsen, S. M.; Pedersen, J. O. P.; Perrette, M.; Philippon-Berthier, G.; Ridgwell, A.; Schlosser, A.; Schneider von Deimling, T.; Shaffer, G.; Smith, R. S.; Spahni, R.; Sokolov, A. P.; Steinacher, M.; Tachiiri, K.; Tokos, K.; Yoshimori, M.; Zeng, N.; Zhao, F.

    2013-05-01

    Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate-carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several thousand-year-long, idealized 2× and 4× CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities and climate-carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the

  3. Model complexity in carbon sequestration: A design of experiment and response surface uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, S.

    2014-12-01

    Geologic carbon sequestration (GCS) is proposed for the Nugget Sandstone in Moxa Arch, a regional saline aquifer with a large storage potential. For a proposed storage site, this study builds a suite of increasingly complex conceptual "geologic" model families, using subsets of the site characterization data: a homogeneous model family, a stationary petrophysical model family, a stationary facies model family with sub-facies petrophysical variability, and a non-stationary facies model family (with sub-facies variability) conditioned to soft data. These families, representing alternative conceptual site models built with increasing data, were simulated with the same CO2 injection test (50 years at 1/10 Mt per year), followed by 2950 years of monitoring. Using the Design of Experiment, an efficient sensitivity analysis (SA) is conducted for all families, systematically varying uncertain input parameters. Results are compared among the families to identify parameters that have a first-order impact on predicting the CO2 storage ratio (SR) at both the end of injection and the end of monitoring. At this site, geologic modeling factors do not significantly influence the short-term prediction of the storage ratio, although they become important over monitoring time, but only for those families where such factors are accounted for. Based on the SA, a response surface analysis is conducted to generate prediction envelopes of the storage ratio, which are compared among the families at both times. Results suggest a large uncertainty in the predicted storage ratio given the uncertainties in model parameters and modeling choices: SR varies from 5-60% (end of injection) to 18-100% (end of monitoring), although its variation among the model families is relatively minor. Moreover, long-term leakage risk is considered small at the proposed site. In the lowest-SR scenarios, all families predict gravity-stable supercritical CO2 migrating toward the bottom of the aquifer. In the highest
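
    For orientation, the sketch below runs a small full-factorial design through a stand-in response function and fits a quadratic response surface, mirroring the Design of Experiment and response-surface workflow described above. The factor names, ranges and the toy response function are assumptions for illustration, not the site models of the study.

      # Toy design-of-experiment screen plus quadratic response-surface fit.
      import itertools
      import numpy as np

      levels = [-1.0, 0.0, 1.0]
      design = np.array(list(itertools.product(levels, repeat=3)))  # 3^3 = 27 runs

      def toy_storage_ratio(x):
          # stand-in for the reservoir simulator output (a "storage ratio")
          k, a, p = x
          return 0.5 + 0.15 * k + 0.05 * a - 0.02 * p + 0.04 * k * a - 0.03 * k ** 2

      y = np.array([toy_storage_ratio(x) for x in design])

      def features(x):
          # intercept, linear, two-factor interaction and quadratic terms
          k, a, p = x
          return [1.0, k, a, p, k * a, k * p, a * p, k * k, a * a, p * p]

      X = np.array([features(x) for x in design])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)

      # coefficient magnitudes indicate which factors have first-order impact
      names = ["1", "k", "a", "p", "k*a", "k*p", "a*p", "k^2", "a^2", "p^2"]
      for name, c in sorted(zip(names, coef), key=lambda t: -abs(t[1])):
          print(f"{name:>4s}: {c:+.3f}")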

  4. Photochemistry of iron(III)-carboxylato complexes in aqueous atmospheric particles - Laboratory experiments and modeling studies

    NASA Astrophysics Data System (ADS)

    Weller, C.; Tilgner, A.; Herrmann, H.

    2010-12-01

    Iron is always present in the atmosphere in concentrations from ~10⁻⁹ M (clouds, rain) up to ~10⁻³ M (fog, particles). Sources are mainly mineral dust emissions. Iron complexes are very good absorbers in the UV-VIS actinic region and therefore photochemically reactive. Iron complex photolysis leads to radical production and can initiate radical chain reactions, which is related to the oxidizing capacity of the atmosphere. These radical chain reactions are involved in the decomposition and transformation of a variety of chemical compounds in cloud droplets and deliquescent particles. Additionally, the photochemical reaction itself can be a degradation pathway for organic compounds with the ability to bind iron. Iron complexes of atmospherically relevant coordination compounds like oxalate, malonate, succinate, glutarate, tartronate, gluconate, pyruvate and glyoxylate have been investigated in laboratory experiments. Iron speciation depends on the iron-ligand ratio and the pH. The most suitable experimental conditions were calculated with a speciation program (Visual Minteq). The solutions were prepared accordingly, transferred to a 1 cm quartz cuvette and flash-photolyzed with an excimer laser at wavelengths of 308 or 351 nm. Photochemically produced Fe²⁺ has been measured by spectrometry at 510 nm as Fe(phenanthroline)₃²⁺. Fe²⁺ overall effective quantum yields have been calculated from the concentration of photochemically produced Fe²⁺ and the measured energy of the excimer laser pulse. The laser pulse energy was measured with a pyroelectric sensor. For some iron-carboxylate systems the experimental parameters like the oxygen content of the solution, the initial iron concentration and the incident laser energy were systematically altered to observe an effect on the overall quantum yield. The dependence of some quantum yields on these parameters allows in some cases an interpretation of the underlying photochemical reaction mechanism. Quantum yields of malonate
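
    For orientation, an overall effective quantum yield of this kind is normally evaluated as the ratio of photoproduced Fe(II) to the photons delivered by the laser pulse; a generic form (an assumption here, since the abstract does not spell out the exact expression) is

      \Phi_{\mathrm{Fe(II)}} \;=\; \frac{n_{\mathrm{Fe(II)}}}{n_{h\nu}},
      \qquad
      n_{h\nu} \;=\; \frac{E_{\mathrm{pulse}}\,\lambda}{h\,c\,N_{A}},

    where E_pulse is the measured pulse energy, λ the excitation wavelength (308 or 351 nm), h Planck's constant, c the speed of light, and N_A Avogadro's number (so that n_hν is in moles of photons); a correction for the fraction of light actually absorbed, 1 - 10^(-A), may additionally be applied.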

  5. A business process modeling experience in a complex information system re-engineering.

    PubMed

    Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis

    2013-01-01

    This article aims to share a business process modeling experience in a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results and the feedback experience. PMID:23920743

  6. A consistent model for surface complexation on birnessite (δ-MnO2) and its application to a column experiment

    NASA Astrophysics Data System (ADS)

    Appelo, C. A. J.; Postma, D.

    1999-10-01

    Available surface complexation models for birnessite required the inclusion of bidentate bonds or the adsorption of cation-hydroxy complexes to account for experimentally observed H⁺/Mᵐ⁺ exchange. These models contain inconsistencies, and therefore surface complexation on birnessite was re-examined. Structural data on birnessite indicate that sorption sites are located on three oxygens around a vacancy in the octahedral layer. The three oxygens together carry a charge of -2, i.e., constitute a doubly charged sorption site. Therefore a new surface complexation model was formulated using a doubly charged, diprotic sorption site where divalent cations adsorbing via inner-sphere complexes bind to the three oxygens. Using the diprotic site concept we have remodeled the experimental data for sorption on birnessite by Murray (1975) using the surface complexation model of Dzombak and Morel (1990). Intrinsic constants for the surface complexation model were obtained with the non-linear optimization program PEST in combination with a modified version of PHREEQC (Parkhurst, 1995). The optimized model was subsequently tested against independent data sets for synthetic birnessite by Balistrieri and Murray (1982) and Wang et al. (1996). It was found to describe the experimental data well. Finally the model was tested against the results of column experiments where cations adsorbed onto natural MnO2-coated sand. In this case as well, the diprotic surface complexation model gave an excellent description of the experimental results.
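
    As background, Dzombak and Morel-type surface complexation models relate apparent and intrinsic constants through a Boltzmann factor in the surface potential. For a doubly charged, diprotic site, written generically here as ≡X²⁻ (an illustrative notation, not the paper's), the protonation steps and the electrostatic correction take the form

      \equiv\!X^{2-} + \mathrm{H}^{+} \rightleftharpoons\; \equiv\!X\mathrm{H}^{-},
      \qquad
      \equiv\!X\mathrm{H}^{-} + \mathrm{H}^{+} \rightleftharpoons\; \equiv\!X\mathrm{H}_{2}^{0},
      \qquad
      K^{\mathrm{int}} = K^{\mathrm{app}}\,\exp\!\left(\frac{\Delta Z\,F\,\psi_{0}}{R\,T}\right),

    where ΔZ is the change in surface charge for the reaction, ψ₀ the surface potential, F the Faraday constant, R the gas constant and T the temperature; divalent cations are treated analogously as inner-sphere complexes on the same site.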

  7. Complexity in forest fires: From simple experiments to nonlinear networked models

    NASA Astrophysics Data System (ADS)

    Buscarino, Arturo; Famoso, Carlo; Fortuna, Luigi; Frasca, Mattia; Xibilia, Maria Gabriella

    2015-05-01

    The evolution of natural phenomena in real environments often involves complex nonlinear interactions. Modeling them implies the characterization not only of a map of interactions and a dynamical process, but also of the peculiarity of the space in which the phenomena occur. The model presented in this paper encompasses all these aspects to formalize an innovative methodology to simulate the propagation of forest fires. It is based on a networked multilayer structure, allowing a flexible definition of the spatial properties of the medium and of the dynamical laws regulating fire propagation. The dynamical core of each node in the network is represented by a hyperbolic reaction-diffusion equation in which the intrinsic characteristics of tree ignition are considered. Furthermore, to define the simulation scenarios, an experimental setup in which the propagation of a fire wave in a small-scale medium can be observed has been realized. A number of simulations are then reported to illustrate the wide spectrum of scenarios which can be reproduced by the model.
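
    For context, a hyperbolic (telegraph-type) reaction-diffusion equation for a scalar field u can be written generically as below; the paper's exact coefficients, ignition terms and inter-layer coupling are not reproduced here.

      \tau\,\frac{\partial^{2} u}{\partial t^{2}} + \frac{\partial u}{\partial t}
      \;=\; D\,\nabla^{2} u + f(u),

    where the relaxation time τ gives the equation a finite propagation speed (unlike the parabolic case), D is a diffusion coefficient and f(u) is a reaction (ignition) term.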

  8. Cadmium sorption onto Natural Red Earth - An assessment using batch experiments and surface complexation modeling

    NASA Astrophysics Data System (ADS)

    Mahatantila, K.; Minoru, O.; Seike, Y.; Vithanage, M. S.

    2010-12-01

    Natural red earth (NRE), an iron-coated sand found in the northwestern part of Sri Lanka, was used to examine its retention behavior for cadmium, a heavy metal postulated as a factor in chronic kidney disease in Sri Lanka. Adsorption was examined in batch experiments as a function of pH, ionic strength and initial cadmium loading. Proton binding sites on NRE were characterized by potentiometric titration, yielding a pHzpc around 6.6. Cadmium adsorption increased from 6% to 99% as pH increased from 4 to 8.5, and maximum adsorption was observed at pH greater than 7.5. The ionic-strength dependency of cadmium adsorption over a 100-fold variation in NaNO3 evidences the dominance of an inner-sphere bonding mechanism for a 10-fold variation in initial cadmium loading (4.44 and 44.4 µmol/L). Adsorption edges were quantified with a 2pK generalized diffuse double layer model considering two site types, >FeOH and >AlOH, for Cd²⁺ binding. From the modeling, we infer a monodentate chemical bonding mechanism for cadmium binding onto NRE, and this finding was further verified with FTIR spectroscopy. The intrinsic constants determined were log KFeOCd = 8.543 and log KAlOCd = 13.917. Isotherm data imply heterogeneity of the NRE surface, with sorption maxima of 9.418 × 10⁻⁶ mol/g and 1.3 × 10⁻⁴ mol/g for the Langmuir and Freundlich isotherm models, respectively. The study suggests the potential of NRE as a material for decontaminating environmental water polluted with cadmium.
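
    For reference, the two isotherms fitted in the study have the standard forms

      q_{e} = \frac{q_{\max}\,K_{L}\,C_{e}}{1 + K_{L}\,C_{e}}
      \quad\text{(Langmuir)},
      \qquad
      q_{e} = K_{F}\,C_{e}^{1/n}
      \quad\text{(Freundlich)},

    where q_e is the amount sorbed at equilibrium, C_e the equilibrium solution concentration, and q_max, K_L, K_F and n are fitted parameters; the sorption maxima quoted above correspond to these fits.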

  9. Reduction of U(VI) Complexes by Anthraquinone Disulfonate: Experiment and Molecular Modeling

    SciTech Connect

    Ainsworth, C.C.; Wang, Z.; Rosso, K.M.; Wagnon, K.; Fredrickson, J.K.

    2004-03-17

    Past studies demonstrate that complexation will limit abiotic and biotic U(VI) reduction rates and the overall extent of reduction. However, the underlying basis for this behavior is not understood and presently unpredictable across species and ligand structure. The central tenets of these investigations are: (1) reduction of U(VI) follows the electron-transfer (ET) mechanism developed by Marcus; (2) the ET rate is the rate-limiting step in U(VI) reduction and is the step that is most affected by complexation; and (3) Marcus theory can be used to unify the apparently disparate U(VI) reduction rate data and as a computational tool to construct a predictive relationship.
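
    For readers unfamiliar with the framework invoked in tenet (1), the classical Marcus expression for the electron-transfer rate constant is

      k_{ET} \;=\; A\,\exp\!\left[-\,\frac{\left(\lambda + \Delta G^{\circ}\right)^{2}}{4\,\lambda\,k_{B}\,T}\right],

    where λ is the reorganization energy, ΔG° the driving force of the transfer and A a prefactor containing the electronic coupling; complexation of U(VI) is expected to shift ΔG° and λ, and hence the predicted reduction rate.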

  10. The Mentoring Relationship as a Complex Adaptive System: Finding a Model for Our Experience

    ERIC Educational Resources Information Center

    Jones, Rachel; Brown, Dot

    2011-01-01

    Mentoring theory and practice has evolved significantly during the past 40 years. Early mentoring models were characterized by the top-down flow of information and benefits to the protege. This framework was reconceptualized as a reciprocal model when scholars realized mentoring was a mutually beneficial process. Recently, in response to rapidly…

  11. Experiments on Cryogenic Complex Plasma

    SciTech Connect

    Ishihara, O.; Sekine, W.; Kubota, J.; Uotani, N.; Chikasue, M.; Shindo, M.

    2009-11-10

    Experiments on a cryogenic complex plasma have been performed. Preliminary experiments include production of a plasma in a liquid helium or in a cryogenic helium gas by a pulsed discharge. The extended production of a plasma has been realized in a vapor of liquid helium or in a cryogenic helium gas by rf discharge. The charge of dust particles injected in such a plasma has been studied in detail.

  12. Concept model of the formation process of humic acid-kaolin complexes deduced by trichloroethylene sorption experiments and various characterizations.

    PubMed

    Zhu, Xiaojing; He, Jiangtao; Su, Sihui; Zhang, Xiaoliang; Wang, Fei

    2016-05-01

    To explore the interactions between soil organic matter and minerals, humic acid (HA, as organic matter), kaolin (as a mineral component) and Ca(2+) (as metal ions) were used to prepare HA-kaolin and Ca-HA-kaolin complexes. These complexes were used in trichloroethylene (TCE) sorption experiments and various characterizations. Interactions between HA and kaolin during the formation of their complexes were confirmed by the obvious differences between the Qe (experimental sorbed TCE) and Qe_p (predicted sorbed TCE) values of all detected samples. The partition coefficient kd obtained for the different samples indicated that both the organic content (fom) and Ca(2+) could significantly impact the interactions. Based on the experimental results and various characterizations, a concept model was developed. In the absence of Ca(2+), HA molecules first patched onto charged sites of the kaolin surfaces, filling the pores. Subsequently, as the HA content increased and the first HA layer reached saturation, an outer layer of HA began to form, compressing the inner HA layer. As HA loading continued, the second layer reached saturation, such that a third, outer layer began to form, compressing the inner layers. In the presence of Ca(2+), which not only can promote kaolin self-aggregation but can also boost HA attachment to kaolin, HA molecules were first surrounded by kaolin. Subsequently, first and second layers formed (with inner layer compression) via the same process as described above in the absence of Ca(2+), except that the second layer continued to load rather than reach saturation within the investigated conditions, because of enhanced HA aggregation caused by Ca(2+). PMID:26933902
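
    For reference, the partition coefficient quoted in the abstract is the usual batch-sorption quantity

      K_{d} \;=\; \frac{q_{e}}{C_{e}} \;=\; \frac{(C_{0} - C_{e})\,V}{m\,C_{e}},

    where C_0 and C_e are the initial and equilibrium TCE solution concentrations, V the solution volume, m the sorbent mass and q_e the sorbed amount per unit mass; comparing the measured Qe with the additivity prediction Qe_p is what reveals the HA-kaolin interaction.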

  13. Complex matrix model duality

    SciTech Connect

    Brown, T. W.

    2011-04-15

    The same complex matrix model calculates both tachyon scattering for the c=1 noncritical string at the self-dual radius and certain correlation functions of operators which preserve half the supersymmetry in N=4 super-Yang-Mills theory. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces.

  14. Surface complexation modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  15. Numerical Modeling of Complex Targets for High-Energy-Density Experiments with Ion Beams and other Drivers

    NASA Astrophysics Data System (ADS)

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; Schenkel, Thomas; Barnard, John; Friedman, Alex; Eder, David; Fisher, Aaron; Masters, Nathan

    2016-03-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion-beam-induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.

  17. Modeling Complex Calorimeters

    NASA Technical Reports Server (NTRS)

    Figueroa-Feliciano, Enectali

    2004-01-01

    We have developed a software suite that models complex calorimeters in the time and frequency domain. These models can reproduce all measurements that we currently do in a lab setting, like IV curves, impedance measurements, noise measurements, and pulse generation. Since all these measurements are modeled from one set of parameters, we can fully describe a detector and characterize its behavior. This leads to a model that can be used effectively for engineering and design of detectors for particular applications.

  18. Computer Experiment on the Complex Behavior of a Two-Dimensional Cellular Automaton as a Phenomenological Model for an Ecosystem

    NASA Astrophysics Data System (ADS)

    Satoh, Kazuhiro

    1989-10-01

    Numerical studies are made on the complex behavior of a cellular automaton which serves as a phenomenological model for an ecosystem. The ecosystem is assumed to contain only three populations, i.e., a population of plants, of herbivores, and of carnivores. A two-dimensional region where organisms live is divided into square cells and the population density in each cell is regarded as a discrete variable. The influence of the physical environment and the interactions between organisms are reduced to a simple rule of cellular automaton evolution. It is found that the time-dependent spatial distribution of organisms is, in general, very random and complex. However, under certain conditions, the self-organization of ordered patterns such as rotating spirals or concentric circles takes place. The relevance of the cellular automaton as a model for the ecosystem is discussed.
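
    To illustrate the kind of update rule involved (the specific rule of the paper is not reproduced here), the sketch below evolves three interacting populations on a square lattice with a generic, made-up neighbourhood rule.

      # Toy three-trophic-level cellular automaton on a periodic square lattice.
      import numpy as np

      rng = np.random.default_rng(0)
      N = 64
      plants = rng.integers(0, 2, (N, N))       # 0/1 occupancy per cell
      herbivores = rng.integers(0, 2, (N, N))
      carnivores = rng.integers(0, 2, (N, N))

      def neighbour_sum(a):
          # occupied cells in the 4-neighbourhood, periodic boundaries
          return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                  np.roll(a, 1, 1) + np.roll(a, -1, 1))

      for _ in range(100):
          # plants regrow next to plants and are removed where herbivores sit
          plants = ((neighbour_sum(plants) > 0) & (herbivores == 0)).astype(int)
          # herbivores persist where plants are available and no carnivore is present
          herbivores = ((neighbour_sum(herbivores) > 0) & (plants == 1) &
                        (carnivores == 0)).astype(int)
          # carnivores persist where herbivores can be found nearby
          carnivores = ((neighbour_sum(carnivores) > 0) &
                        (neighbour_sum(herbivores) > 0)).astype(int)

      print("plants:", int(plants.sum()), "herbivores:", int(herbivores.sum()),
            "carnivores:", int(carnivores.sum()))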

  19. Network Enrichment Analysis in Complex Experiments*

    PubMed Central

    Shojaie, Ali; Michailidis, George

    2010-01-01

    Cellular functions of living organisms are carried out through complex systems of interacting components. Including such interactions in the analysis, and considering sub-systems defined by biological pathways instead of individual components (e.g. genes), can lead to new findings about complex biological mechanisms. Networks are often used to capture such interactions and can be incorporated in models to improve the efficiency in estimation and inference. In this paper, we propose a model for incorporating external information about interactions among genes (proteins/metabolites) in differential analysis of gene sets. We exploit the framework of mixed linear models and propose a flexible inference procedure for analysis of changes in biological pathways. The proposed method facilitates the analysis of complex experiments, including multiple experimental conditions and temporal correlations among observations. We propose an efficient iterative algorithm for estimation of the model parameters and show that the proposed framework is asymptotically robust to the presence of noise in the network information. The performance of the proposed model is illustrated through the analysis of gene expression data for environmental stress response (ESR) in yeast, as well as simulated data sets. PMID:20597848
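
    As a schematic of the mixed-linear-model framework mentioned above (the construction of the network-based design matrices is given in the paper, not here), the gene-set expression vector y is modelled as

      \mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\boldsymbol{\gamma} + \boldsymbol{\varepsilon},
      \qquad
      \boldsymbol{\gamma} \sim N(\mathbf{0},\,\sigma_{\gamma}^{2}\mathbf{I}),
      \quad
      \boldsymbol{\varepsilon} \sim N(\mathbf{0},\,\sigma_{\varepsilon}^{2}\mathbf{I}),

    with fixed effects β capturing condition-specific changes and random effects γ absorbing correlation among observations; the network information enters through how X and Z are constructed, and inference on pathway changes reduces to tests on contrasts of β.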

  20. Modelling of Complex Plasmas

    NASA Astrophysics Data System (ADS)

    Akdim, Mohamed Reda

    2003-09-01

    Nowadays plasmas are used for various applications such as the fabrication of silicon solar cells, integrated circuits, coatings and dental cleaning. In the case of a processing plasma, e.g. for the fabrication of amorphous silicon solar cells, a mixture of silane and hydrogen gas is injected into a reactor. These gases are decomposed by making a plasma. A plasma with a low degree of ionization (typically 10⁻⁵) is usually made in a reactor containing two electrodes driven by a radio-frequency (RF) power source in the megahertz range. Under the right circumstances the radicals, neutrals and ions can react further to produce nanometer-sized dust particles. The particles can stick to the surface and thereby contribute to a higher deposition rate. Another possibility is that the nanometer-sized particles coagulate and form larger, micron-sized particles. These particles obtain a high negative charge due to their large radius and are usually trapped in a radio-frequency plasma. The electric field present in the discharge sheaths causes the entrapment. Such plasmas are called dusty or complex plasmas. In this thesis numerical models are presented which describe dusty plasmas in reactive and nonreactive plasmas. We first developed a simple one-dimensional silane fluid model in which a dusty radio-frequency silane/hydrogen discharge is simulated. In the model, discharge quantities like the fluxes, densities and electric field are calculated self-consistently. A radius and an initial density profile for the spherical dust particles are given, and the charge and the density of the dust are calculated with an iterative method. During the transport of the dust, its charge is kept constant in time. The dust influences the electric field distribution through its charge and the density of the plasma through recombination of positive ions and electrons at its surface. In the model this process gives an extra production of silane radicals, since the growth of dust is
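
    The iterative charge calculation mentioned above is not detailed in the abstract; a commonly used closure (an assumption here) is the orbital-motion-limited (OML) balance of electron and ion currents to a grain of radius a at floating potential φ_d < 0,

      I_{e} = -\pi a^{2} e\,n_{e}\sqrt{\frac{8 k_{B} T_{e}}{\pi m_{e}}}\,
      \exp\!\left(\frac{e\phi_{d}}{k_{B} T_{e}}\right),
      \qquad
      I_{i} = \pi a^{2} e\,n_{i}\sqrt{\frac{8 k_{B} T_{i}}{\pi m_{i}}}
      \left(1 - \frac{e\phi_{d}}{k_{B} T_{i}}\right),

    with φ_d obtained from I_e + I_i = 0 and the grain charge then estimated as Q_d ≈ 4πε₀ a φ_d.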

  1. Simulation of characteristics of thermal and hydrologic soil regimes in equilibrium numerical experiments with a climate model of intermediate complexity

    NASA Astrophysics Data System (ADS)

    Arzhanov, M. M.; Demchenko, P. F.; Eliseev, A. V.; Mokhov, I. I.

    2008-10-01

    The IAP RAS CM (Institute of Atmospheric Physics, Russian Academy of Sciences, climate model) has been extended to include a comprehensive scheme of thermal and hydrologic soil processes. In equilibrium numerical experiments with specified preindustrial and current concentrations of atmospheric carbon dioxide, the coupled model successfully reproduces thermal characteristics of soil, including the temperature of its surface, and seasonal thawing and freezing characteristics. On the whole, the model also reproduces soil hydrology, including the winter snow water equivalent and river runoff from large watersheds. Evapotranspiration from the soil surface and soil moisture are simulated somewhat worse. The equilibrium response of the model to a doubling of atmospheric carbon dioxide shows a considerable warming of the soil surface, a reduction in the extent of permanently frozen soils, and the general growth of evaporation from continents. River runoff increases at high latitudes and decreases in the subtropics. The results are in qualitative agreement with observational data for the 20th century and with climate model simulations for the 21st century.

  2. Debating complexity in modeling

    USGS Publications Warehouse

    Hunt, Randall J.; Zheng, Chunmiao

    1999-01-01

    As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.

  3. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort previously started in the first Workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present day atmosphere was selected. The intent was that successful simulation of the set of measurements should become the prerequisite for the acceptance of these models as having a reliable prediction of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiments part, participants were charged with designing a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. Part two of this section gives brief descriptions of each model as provided by the individual modeling groups.

  4. Analysis of designed experiments with complex aliasing

    SciTech Connect

    Hamada, M.; Wu, C.F.J.

    1992-07-01

    Traditionally, Plackett-Burman (PB) designs have been used in screening experiments for identifying important main effects. The PB designs whose run sizes are not a power of two have been criticized for their complex aliasing patterns, which according to conventional wisdom give confusing results. This paper goes beyond the traditional approach by proposing an analysis strategy that entertains interactions in addition to main effects. Based on the precepts of effect sparsity and effect heredity, the proposed procedure exploits the designs' complex aliasing patterns, thereby turning their 'liability' into an advantage. Demonstration of the procedure on three real experiments shows the potential for extracting important information available in the data that has, until now, been missed. Some limitations are discussed, and extensions to overcome them are given. The proposed procedure also applies to the more general mixed-level designs that have become increasingly popular. 16 refs.

  5. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

    SciTech Connect

    Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

    2012-10-07

    Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied

  6. Turbulence modeling and experiments

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir

    1992-01-01

    The best way of verifying turbulence models is to make a direct comparison between the various terms and their models. The success of this approach depends upon the availability of data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this, it is sometimes difficult to find the cause of a poor performance by a model. However, such a calculation is still meaningful in other ways, as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using second-order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This, therefore, includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. We examine which of the three models performs better, along with their weaknesses, if any. The other work reported deals with the experimental balances of the second-moment equations for a buoyant plume. Despite the tremendous amount of activity toward second-order closure modeling of turbulence, very little experimental information is available about the budgets of the second-moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for the turbulence modelers. The purpose of this part of the work was to provide such balances of the Reynolds stress and heat

  7. Modeling complexity in biology

    NASA Astrophysics Data System (ADS)

    Louzoun, Yoram; Solomon, Sorin; Atlan, Henri; Cohen, Irun. R.

    2001-08-01

    Biological systems, unlike physical or chemical systems, are characterized by the very inhomogeneous distribution of their components. The immune system, in particular, is notable for self-organizing its structure. Classically, the dynamics of natural systems have been described using differential equations. But, differential equation models fail to account for the emergence of large-scale inhomogeneities and for the influence of inhomogeneity on the overall dynamics of biological systems. Here, we show that a microscopic simulation methodology enables us to model the emergence of large-scale objects and to extend the scope of mathematical modeling in biology. We take a simple example from immunology and illustrate that the methods of classical differential equations and microscopic simulation generate contradictory results. Microscopic simulations generate a more faithful approximation of the reality of the immune system.

  8. Hydraulic Fracturing Mineback Experiment in Complex Media

    NASA Astrophysics Data System (ADS)

    Green, S. J.; McLennan, J. D.

    2012-12-01

    Hydraulic fracturing (or "fracking") for the recovery of gas and liquids from tight shale formations has gained much attention. This operation, which involves horizontal well drilling and massive hydraulic fracturing, has been developed over the last decade to produce fluids from extremely low permeability mudstone and siltstone rocks with high organic content. Nearly thirteen thousand wells and about one hundred and fifty thousand stages within the wells were fractured in the US in 2011. This operation has proven to be successful, causing hundreds of billions of dollars to be invested, and has produced an abundance of natural gas and is making billions of barrels of hydrocarbon liquids available for the US. But even with this commercial success, relatively little is clearly known about the complexity--or lack of complexity--of the hydraulic fracture, the extent to which the newly created surface area contacts the high Reservoir Quality rock, or the connectivity and conductivity of the hydraulic fractures created. To better understand these phenomena and improve efficiency, a large-scale mine-back experiment is in progress. The mine-back experiment is a full-scale hydraulic fracture carried out in a well-characterized environment, with comprehensive instrumentation deployed to measure fracture growth. A tight shale mudstone rock geologic setting is selected, near the edge of a formation where one to two thousand feet difference in elevation occurs. From the top of the formation, drilling, well logging, and hydraulic fracture pumping will occur. From the bottom of the formation a horizontal tunnel will be mined using conventional mining techniques into the rock formation towards the drilled well. Certain instrumentation will be located within this tunnel for observations during the hydraulic fracturing. After the hydraulic fracturing, the tunnel will be extended toward the well, with careful mapping of the created hydraulic fracture. Fracturing fluid will be

  9. Surface complexation modeling of groundwater arsenic mobility: Results of a forced gradient experiment in a Red River flood plain aquifer, Vietnam

    NASA Astrophysics Data System (ADS)

    Jessen, Søren; Postma, Dieke; Larsen, Flemming; Nhan, Pham Quy; Hoa, Le Quynh; Trang, Pham Thi Kim; Long, Tran Vu; Viet, Pham Hung; Jakobsen, Rasmus

    2012-12-01

    Three surface complexation models (SCMs) developed for, respectively, ferrihydrite, goethite and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer along the Red River, Vietnam. The SCMs for ferrihydrite and goethite yielded very different results. The ferrihydrite SCM favors As(III) over As(V) and has carbonate and silica species as the main competitors for surface sites. In contrast, the goethite SCM has a greater affinity for As(V) over As(III), while PO₄³⁻ and Fe(II) form the predominant surface species. The SCM for the Pleistocene aquifer sediment most resembles the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment were also well described by the SCM determined for the Pleistocene aquifer sediment, suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel to the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed. The concentrations of As (<0.013 μM) and major ions in the channel water are low compared to those in the pristine groundwater in the adjacent bank aquifer, which had an As concentration of ~3 μM. Calculations for conservative mixing of channel and groundwater could explain the observed variation in concentration for most elements. However, the mixed waters did contain an excess of As(III), PO₄³⁻ and Si, which is attributed to desorption from the aquifer sediment. The three SCMs were tested on their ability to model the desorption of As(III), PO₄³⁻ and Si. Qualitatively, the ferrihydrite SCM correctly predicts desorption for As(III), but for Si and PO₄³⁻ it predicts an increased adsorption instead of desorption. The goethite SCM correctly predicts desorption of both As(III) and PO₄³⁻
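
    The conservative-mixing baseline used to identify the desorption excess is a simple two-end-member balance,

      C_{\mathrm{mix}} \;=\; f\,C_{\mathrm{channel}} + (1 - f)\,C_{\mathrm{groundwater}},

    where f is the channel-water fraction in the mixed groundwater (up to 0.74 in this experiment); measured As(III), PO₄³⁻ or Si in excess of C_mix is attributed to desorption from the aquifer sediment.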

  10. Percolation experiments in complex fractal media

    NASA Astrophysics Data System (ADS)

    Redondo, Jose Manuel; Tarquis, Ana Maria; Cherubini, Claudia; Lopez Gonzalez-Nieto, Pilar; Vila, Teresa

    2013-04-01

    Series of flow percolation experiments under gravity were performed in different glass model and real karstic media samples. We present a multifractal characterization of the experiments in several parametric non-dimensional flow descriptors, using the maximum local multifractal dimension as an additional flow indicator. Experiments on non-laminar flow and transport conditions in fractured and karstified media were also performed at Bari. The investigation of the hypothesis of non-linear flow and non-Fickian transport in fractured aquifers led to a distinction between the different roles of channels and microchannels and of the presence of vortices and eddy trapping. The dominance of the elongated channels produced early arrival times, with the solute traveling along the high-velocity channel network. On the other hand, in a lumped structured karstic medium, the percolation flow produced long tails with local eddy mixing, entrapment in eddies, and slow flow out of the eddies. In the laboratory experiments performed in Madrid and at DAMTP Cambridge, the role of the initial pressure produced fractal pathway structures even in initially uniform ballotini substrates.

  11. Debris flows: Experiments and modelling

    NASA Astrophysics Data System (ADS)

    Turnbull, Barbara; Bowman, Elisabeth T.; McElwaine, Jim N.

    2015-01-01

    Debris flows and debris avalanches are complex, gravity-driven currents of rock, water and sediments that can be highly mobile. This combination of component materials leads to a rich morphology and unusual dynamics, exhibiting features of both granular materials and viscous gravity currents. Although extreme events such as those at Kolka Karmadon in North Ossetia (2002) [1] and Huascarán (1970) [2] strongly motivate us to understand how such high levels of mobility can occur, smaller events are ubiquitous and capable of endangering infrastructure and life, requiring mitigation. Recent progress in modelling debris flows has seen the development of multiphase models that can start to provide clues of the origins of the unique phenomenology of debris flows. However, the spatial and temporal variations that debris flows exhibit make this task challenging and laboratory experiments, where boundary and initial conditions can be controlled and reproduced, are crucial both to validate models and to inspire new modelling approaches. This paper discusses recent laboratory experiments on debris flows and the state of the art in numerical models.

  12. Modelling Canopy Flows over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.

    2016-06-01

    Recent studies of flow over forested hills have been motivated by a number of important applications, including understanding CO2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland, with complex terrain and a heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness-length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.
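
    In explicit canopy models of this kind, the vegetation typically enters the momentum equations as a quadratic drag sink; the standard form is given here for orientation rather than as the paper's exact formulation,

      F_{i} \;=\; -\,c_{d}\,a(z)\,|\bar{\mathbf{u}}|\,\bar{u}_{i},

    where c_d is a canopy drag coefficient and a(z) the leaf area density profile, with corresponding canopy source and sink terms appearing in the 1.5-order turbulence closure.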

  13. "Computational Modeling of Actinide Complexes"

    SciTech Connect

    Balasubramanian, K

    2007-03-07

    We will present our recent studies on the computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the free energies in solution and the mechanism of deprotonation have been topics of considerable uncertainty. We have computed the deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO₂(H₂O)₅²⁺, NpO₂(H₂O)₆⁺, and PuO₂(H₂O)₅²⁺ complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution for the first time agrees with the experiment (7.1 kcal

  14. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  15. X-ray Absorption Spectroscopy and Coherent X-ray Diffraction Imaging for Time-Resolved Investigation of the Biological Complexes: Computer Modelling towards the XFEL Experiment

    NASA Astrophysics Data System (ADS)

    Bugaev, A. L.; Guda, A. A.; Yefanov, O. M.; Lorenz, U.; Soldatov, A. V.; Vartanyants, I. A.

    2016-05-01

    The development of the next generation of synchrotron radiation sources, free-electron lasers, is approaching the point where they become an effective tool for time-resolved experiments aimed at solving actual problems in various fields such as chemistry, biology, medicine, etc. In order to demonstrate how these experiments may be performed on real systems to obtain information at the atomic and macromolecular levels, we have performed a molecular dynamics computer simulation combined with quantum chemistry calculations for the human phosphoglycerate kinase enzyme with an Mg-containing substrate. The simulated structures were used to calculate coherent X-ray diffraction patterns, reflecting the conformational state of the enzyme, and Mg K-edge X-ray absorption spectra, which depend on the local structure of the substrate. These two techniques give complementary information, making such an approach highly effective for time-resolved investigation of various biological complexes, such as metalloproteins or enzymes with metal-containing substrates, to obtain information about both the metal-containing active site or substrate and the atomic structure of each conformation.

  16. Modeling Wildfire Incident Complexity Dynamics

    PubMed Central

    Thompson, Matthew P.

    2013-01-01

    Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs, and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding factors influencing suppression decisions and in turn their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management. PMID:23691014
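
    The absorbing Markov-chain bookkeeping described above can be made concrete in a few lines of code: the expected number of days until containment follows from the fundamental matrix of the transient states. The transition probabilities below are made-up placeholders, not the Forest Service estimates of the paper.

      # Expected time to absorption ("containment") for a toy complexity chain.
      import numpy as np

      # States: complexity levels 1-3 (transient) plus "contained" (absorbing).
      P = np.array([
          [0.70, 0.15, 0.05, 0.10],   # from complexity 1
          [0.10, 0.65, 0.15, 0.10],   # from complexity 2
          [0.05, 0.15, 0.75, 0.05],   # from complexity 3
          [0.00, 0.00, 0.00, 1.00],   # contained (absorbing)
      ])

      Q = P[:3, :3]                       # transitions among transient states
      N = np.linalg.inv(np.eye(3) - Q)    # fundamental matrix
      expected_days = N.sum(axis=1)       # expected days to absorption per start state

      for state, days in zip(["complexity 1", "complexity 2", "complexity 3"], expected_days):
          print(f"start at {state}: {days:.1f} expected days until containment")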

  17. Diagnosis in Complex Plasmas for Microgravity Experiments (PK-3 plus)

    SciTech Connect

    Takahashi, Kazuo; Hayashi, Yasuaki; Thomas, Hubertus M.; Morfill, Gregor E.; Ivlev, Alexei V.; Adachi, Satoshi

    2008-09-07

    Microgravity gives complex (dusty) plasmas in which dust particles are embedded in the completely charge-neutral region of the bulk plasma. The dust clouds, as an uncompressed strongly coupled Coulomb system, correspond to an atomic model exhibiting several physical phenomena, such as crystallization and phase transitions. As these phenomena are tightly connected to the plasma state, it is important to understand plasma parameters such as electron density and temperature. The present work shows the electron density in the setup for microgravity experiments currently onboard the International Space Station.

  18. The Hidden Complexities of a "Simple" Experiment.

    ERIC Educational Resources Information Center

    Caplan, Jeremy B.; And Others

    1994-01-01

    Provides two experiments that do not give the expected results. One involves burning a candle in an air-filled beaker under water and the other burns the candle in pure oxygen. Provides methodology, suggestions, and theory. (MVL)

  19. Numerical Experiments In Strongly Coupled Complex (Dusty) Plasmas

    NASA Astrophysics Data System (ADS)

    Hou, L. J.; Ivlev, A.; Thomas, H. M.; Morfill, G. E.

    2010-07-01

    Complex (dusty) plasma is a suspension of micron-sized charged dust particles in a weakly ionized plasma with electrons, ions, and neutral atoms or molecules. Therein, dust particles acquire a few thousand electron charges by absorbing surrounding electrons and ions, and consequently interact with each other via a dynamically screened Coulomb potential while undergoing Brownian motion due primarily to frequent collisions with the neutral molecules. When the interaction potential energy between charged dust particles significantly exceeds their kinetic energy, they become strongly coupled and can form ordered structures comprising liquid and solid states. Since the motion of charged dust particles in complex (dusty) plasmas can be directly observed in real time by using a video camera, such systems have been generally regarded as a promising model system for studying many phenomena occurring in solids, liquids and other strongly coupled systems at the kinetic level, such as phase transitions, transport processes, and collective dynamics. Complex plasma physics has now grown into a mature research field with a very broad range of interdisciplinary facets. In addition to the usual experimental and theoretical studies, computer simulation in complex plasmas plays an important role in bridging experimental observations and theories and in understanding many interesting phenomena observed in the laboratory. The present talk will focus on a class of computer simulations that are usually non-equilibrium ones with external perturbation and that mimic real complex plasma experiments (i.e., numerical experiments). The simulation method, i.e., the so-called Brownian dynamics method, will first be reviewed, and then examples, such as simulations of heat transfer and shock-wave propagation, will be presented.
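
    A minimal Brownian-dynamics sketch of the kind of numerical experiment described above is given below: charged grains interacting through a screened Coulomb (Yukawa) potential, advanced with an overdamped Langevin update in reduced units. All parameter values are illustrative assumptions; real simulations use physical charges, neutral drag rates and proper periodic force evaluation.

      # Brownian (overdamped Langevin) dynamics of Yukawa particles, reduced units.
      import numpy as np

      rng = np.random.default_rng(1)
      n, dim, box = 49, 2, 10.0
      kappa = 1.0    # screening parameter (spacing / Debye length), assumed
      gamma = 1.0    # neutral-gas friction coefficient, assumed
      kT = 0.01      # thermal energy driving the Brownian kicks, assumed
      dt = 1e-3

      # start from a slightly perturbed square lattice to avoid particle overlaps
      side = int(np.ceil(np.sqrt(n)))
      lattice = np.array([(i, j) for i in range(side) for j in range(side)][:n], dtype=float)
      pos = lattice * (box / side) + rng.normal(0.0, 0.05, (n, dim))

      def yukawa_forces(pos):
          # pairwise force from U(r) = exp(-kappa * r) / r
          # (note: no minimum-image convention; forces ignore the periodic wrap below)
          f = np.zeros_like(pos)
          for i in range(n):
              d = pos[i] - pos                      # vectors from every particle to i
              r = np.linalg.norm(d, axis=1)
              r[i] = 1.0                            # placeholder to avoid divide-by-zero
              mag = np.exp(-kappa * r) * (1.0 + kappa * r) / r**3
              mag[i] = 0.0                          # no self-interaction
              f[i] = (mag[:, None] * d).sum(axis=0)
          return f

      for _ in range(2000):
          noise = rng.normal(0.0, np.sqrt(2.0 * kT * dt / gamma), (n, dim))
          pos += dt * yukawa_forces(pos) / gamma + noise
          pos %= box                                # keep particles inside the box

      print("final position spread (std per axis):", pos.std(axis=0))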

  20. Explosion modelling for complex geometries

    NASA Astrophysics Data System (ADS)

    Nehzat, Naser

    A literature review suggested that the combined effects of fuel reactivity, obstacle density, ignition strength, and confinement result in flame acceleration and subsequent pressure build-up during a vapour cloud explosion (VCE). Models for the prediction of propagating flames in hazardous areas, such as coal mines, oil platforms, storage and chemical process areas, etc., fall into two classes. One class involves the use of Computational Fluid Dynamics (CFD). This approach has been utilised by several researchers. The other approach relies upon a lumped-parameter approach as developed by Baker (1983). The former approach is restricted by the appropriateness of sub-models and the numerical stability requirements inherent in the computational solution. The latter approach raises significant questions regarding the validity of the simplifications involved in representing the complexities of a propagating explosion. This study was conducted to investigate and improve the Computational Fluid Dynamics (CFD) code EXPLODE, which was developed by Green et al. (1993) for use in practical gas explosion hazard assessments. The code employs a numerical method for solving partial differential equations using finite volume techniques. Verification exercises, involving comparison with analytical solutions for the classical shock-tube problem and with experimental (small-, medium- and large-scale) results, demonstrate the accuracy of the code and the new combustion models but also identify differences between predictions and the experimental results. The project has resulted in a developed version of the code (EXPLODE2) with new combustion models for simulating gas explosions. Additional features of this program include the physical models necessary to simulate the combustion process using alternative combustion models, improvements to the numerical accuracy and robustness of the code, and special input for the simulation of different gas explosions. The present code has the capability of

  1. Modelling biological complexity: a physical scientist's perspective

    PubMed Central

    Coveney, Peter V; Fowler, Philip W

    2005-01-01

    integration of molecular and more coarse-grained representations of matter. The scope of such integrative approaches to complex systems research is circumscribed by the computational resources available. Computational grids should provide a step jump in the scale of these resources; we describe the tools that RealityGrid, a major UK e-Science project, has developed together with our experience of deploying complex models on nascent grids. We also discuss the prospects for mathematical approaches to reducing the dimensionality of complex networks in the search for universal systems-level properties, illustrating our approach with a description of the origin of life according to the RNA world view. PMID:16849185

  2. Modelling biological complexity: a physical scientist's perspective.

    PubMed

    Coveney, Peter V; Fowler, Philip W

    2005-09-22

    integration of molecular and more coarse-grained representations of matter. The scope of such integrative approaches to complex systems research is circumscribed by the computational resources available. Computational grids should provide a step jump in the scale of these resources; we describe the tools that RealityGrid, a major UK e-Science project, has developed together with our experience of deploying complex models on nascent grids. We also discuss the prospects for mathematical approaches to reducing the dimensionality of complex networks in the search for universal systems-level properties, illustrating our approach with a description of the origin of life according to the RNA world view. PMID:16849185

  3. A Practical Philosophy of Complex Climate Modelling

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin A.; Sherwood, Steven

    2014-01-01

    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.

  4. A physical interpretation of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Moayeri, MohamadMehdi; Pande, Saket

    2015-04-01

    It is intuitive that the instability of a hydrological system representation, in the sense of how perturbations in input forcings translate into perturbations in the hydrologic response, may depend on its hydrological characteristics. Responses of unstable systems are thus complex to model. We interpret complexity in this context and define complexity as a measure of instability in hydrological system representation. We provide algorithms to quantify model complexity in this context. We use the Sacramento Soil Moisture Accounting model (SAC-SMA), parameterized for MOPEX basins, and quantify the complexities of the corresponding models. Relationships between hydrologic characteristics of MOPEX basins, such as location, precipitation seasonality index, slope, hydrologic ratios, saturated hydraulic conductivity and NDVI, and the respective model complexities are then investigated. We hypothesize that the complexities of basin-specific SAC-SMA models correspond to the aforementioned hydrologic characteristics, thereby suggesting that model complexity, in the context presented here, may have a physical interpretation.

  5. Impacts of snow and glaciers over Tibetan Plateau on Holocene climate change: Sensitivity experiments with a coupled model of intermediate complexity

    NASA Astrophysics Data System (ADS)

    Jin, Liya; Ganopolski, Andrey; Chen, Fahu; Claussen, Martin; Wang, Huijun

    2005-09-01

    An Earth system model of intermediate complexity has been used to investigate the sensitivity of simulated global climate to gradually increased snow and glacier cover over the Tibetan Plateau for the last 9000 years (9 kyr). The simulations show that in the mid-Holocene, at about 6 kyr before present (BP), the imposed ice sheets over the Tibetan Plateau induce a strong decrease in summer precipitation in North Africa and South Asia, and an increase in Southeast Asia. The response of vegetation cover to the imposed ice sheets over the Tibetan Plateau is not synchronous in South Asia and in North Africa, showing an earlier and, hence, more rapid decrease in vegetation cover in North Africa from 9 to 6 kyr BP, while there is almost no influence on that in South Asia until 5 kyr BP. The simulation results suggest that the snow and glacier environment over the Tibetan Plateau is an important factor for Holocene climate variability in North Africa, South Asia and Southeast Asia.

  6. Analyzing Complex Metabolomic Networks: Experiments and Simulation

    NASA Astrophysics Data System (ADS)

    Steuer, R.; Kurths, J.; Fiehn, O.; Weckwerth, W.

    2002-03-01

    In recent years, remarkable advances in molecular biology have enabled us to measure the behavior of the complex regulatory networks underlying biological systems. In particular, high-throughput techniques, such as gene expression arrays, allow a fast acquisition of a large number of simultaneously measured variables. Similar to gene expression, the analysis of metabolomic datasets results in a huge number of metabolite co-regulations: metabolites are the end products of cellular regulatory processes, and their levels can be regarded as the ultimate response to genetic or environmental changes. In this presentation we focus on the topological description of such networks, using both experimental data and simulations. In particular, we discuss the possibility to deduce novel links between metabolites, using concepts from (nonlinear) time series analysis and information theory.
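
    To illustrate the information-theoretic side of deducing links between metabolites, the sketch below estimates mutual information between two synthetic concentration series with a simple histogram estimator; the data, bin count and co-regulation strength are placeholders.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
# Synthetic "metabolite" series: B is partially co-regulated with A, C is independent.
a = rng.normal(size=5000)
b = 0.7 * a + 0.3 * rng.normal(size=5000)
c = rng.normal(size=5000)

print("I(A;B) =", round(mutual_information(a, b), 3), "bits")
print("I(A;C) =", round(mutual_information(a, c), 3), "bits")
```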

  7. Extracting Models in Single Molecule Experiments

    NASA Astrophysics Data System (ADS)

    Presse, Steve

    2013-03-01

    Single molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally all single molecule data should be self-explanatory. However, data originating from single molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single molecule dynamics which emerges from this analysis is often more textured and complex than one obtained by fitting the data to a preconceived model.

  8. Teacher Modeling Using Complex Informational Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.

  9. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot and field scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study is that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  10. Complex Aerosol Experiment in Western Siberia (April - October 2013)

    NASA Astrophysics Data System (ADS)

    Matvienko, G. G.; Belan, B. D.; Panchenko, M. V.; Romanovskii, O. A.; Sakerin, S. M.; Kabanov, D. M.; Turchinovich, S. A.; Turchinovich, Yu. S.; Eremina, T. A.; Kozlov, V. S.; Terpugova, S. A.; Pol'kin, V. V.; Yausheva, E. P.; Chernov, D. G.; Zuravleva, T. B.; Bedareva, T. V.; Odintsov, S. L.; Burlakov, V. D.; Arshinov, M. Yu.; Ivlev, G. A.; Savkin, D. E.; Fofonov, A. V.; Gladkikh, V. A.; Kamardin, A. P.; Balin, Yu. S.; Kokhanenko, G. P.; Penner, I. E.; Samoilova, S. V.; Antokhin, P. N.; Arshinova, V. G.; Davydov, D. K.; Kozlov, A. V.; Pestunov, D. A.; Rasskazchikova, T. M.; Simonenkov, D. V.; Sklyadneva, T. K.; Tolmachev, G. N.; Belan, S. B.; Shmargunov, V. P.

    2016-06-01

    The primary project objective was to accomplish the Complex Aerosol Experiment, during which aerosol properties were measured in the near-ground layer and the free atmosphere. Three measurement cycles were performed during the project implementation: in the spring period (April), when the maximum of aerosol generation is observed; in summer (July), when the atmospheric boundary layer height and mixing layer height are maximal; and in late summer - early autumn (October), when the secondary particle nucleation period is recorded. Numerical calculations were compared with measurements of fluxes of downward solar radiation. It was shown that the relative differences between model and experimental values of the fluxes of direct and total radiation, on average, do not exceed 1% and 3%, respectively.

  11. Iron-Sulfur-Carbonyl and -Nitrosyl Complexes: A Laboratory Experiment.

    ERIC Educational Resources Information Center

    Glidewell, Christopher; And Others

    1985-01-01

    Background information, materials needed, procedures used, and typical results obtained, are provided for an experiment on iron-sulfur-carbonyl and -nitrosyl complexes. The experiment involved (1) use of inert atmospheric techniques and thin-layer and flexible-column chromatography and (2) interpretation of infrared, hydrogen and carbon-13 nuclear…

  12. Membrane associated complexes in calcium dynamics modelling

    NASA Astrophysics Data System (ADS)

    Szopa, Piotr; Dyzma, Michał; Kaźmierczak, Bogdan

    2013-06-01

    Mitochondria not only govern energy production, but are also involved in crucial cellular signalling processes. They are one of the most important organelles determining the Ca2+ regulatory pathway in the cell. Several mathematical models explaining these mechanisms have been constructed, but only a few of them describe the interplay between calcium concentrations in the endoplasmic reticulum (ER), cytoplasm and mitochondria. Experiments measuring calcium concentrations in mitochondria and the ER suggested the existence of cytosolic microdomains with locally elevated calcium concentration in the immediate vicinity of the outer mitochondrial membrane. These intermediate physical connections between the ER and mitochondria are called MAM (mitochondria-associated ER membrane) complexes. We propose a model with a direct calcium flow from the ER to mitochondria, which may be justified by the existence of MAMs, and perform a detailed numerical analysis of the effect of this flow on the type and shape of calcium oscillations. The model is partially based on the model of Marhl et al. We have found numerically that stable oscillations exist for a considerable set of parameter values. However, for some parameter sets the oscillations disappear and the trajectories of the model tend to a steady state with a very high calcium level in mitochondria. This can be interpreted as an early step in an apoptotic pathway.

  13. Capturing Complexity through Maturity Modelling

    ERIC Educational Resources Information Center

    Underwood, Jean; Dillon, Gayle

    2004-01-01

    The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…

  14. Does increased hydrochemical model complexity decrease robustness?

    NASA Astrophysics Data System (ADS)

    Medici, C.; Wade, A. J.; Francés, F.

    2012-05-01

    The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment or whether the additional parameters cause over-fitting. Three nitrogen-models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained considering a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods of inconsistent good and poor fits between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases over-parameterisation occurs, but the converse is true for a water quality model where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models. Increased complexity was justifiable for modelling river-system hydrochemistry.
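
    A minimal sketch of the GLUE procedure on a toy model follows: parameters are sampled by Monte Carlo, a likelihood measure (here Nash-Sutcliffe efficiency) is computed, non-behavioural runs are discarded, and likelihood-weighted 5% and 95% bounds are formed. The toy exponential-recession model, threshold and sample size are illustrative and are not those of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "catchment": observed response generated from an exponential recession.
t = np.linspace(0, 10, 200)
obs = 10.0 * np.exp(-0.5 * t) + rng.normal(0, 0.3, size=t.size)

# Step 1: Monte Carlo sampling of the two parameters.
n = 20000
k = rng.uniform(0.05, 2.0, n)
s0 = rng.uniform(5.0, 15.0, n)
sims = s0[:, None] * np.exp(-k[:, None] * t)

# Step 2: Nash-Sutcliffe efficiency as an informal likelihood measure.
nse = 1.0 - np.sum((sims - obs)**2, axis=1) / np.sum((obs - obs.mean())**2)

# Step 3: keep "behavioural" runs above a threshold and weight by likelihood.
behavioural = nse > 0.7
w = nse[behavioural] - 0.7
w /= w.sum()

def weighted_quantile(values, weights, q):
    order = np.argsort(values)
    cdf = np.cumsum(weights[order])
    return values[order][np.searchsorted(cdf, q)]

lower = np.array([weighted_quantile(sims[behavioural, i], w, 0.05) for i in range(t.size)])
upper = np.array([weighted_quantile(sims[behavioural, i], w, 0.95) for i in range(t.size)])
print("coverage of observations by 5-95% GLUE bounds:",
      np.mean((obs >= lower) & (obs <= upper)))
```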

  15. Complexity and Uncertainty in Soil Nitrogen Modeling

    NASA Astrophysics Data System (ADS)

    Ajami, N. K.; Gu, C.

    2009-12-01

    Model uncertainty is rarely considered in the field of biogeochemical modeling. The standard biogeochemical modeling approach is to proceed with one selected model of the "right" complexity level, chosen according to data availability. However, other plausible models can give dissimilar answers to the scientific question at hand using the same set of data. Relying on a single model can lead to underestimation of the uncertainty associated with the results and therefore to unreliable conclusions. A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models with multiple levels of complexity. The aim of this study is twofold: first, to explore the impact of a model's complexity level on the accuracy of the end results and, second, to introduce a probabilistic multi-model strategy in the context of a process-based biogeochemical model. We developed three different versions of a biogeochemical model, TOUGHREACT-N, with various complexity levels. Each of these models was calibrated against the observed data from a tomato field in Western Sacramento County, California, considering two different weighting sets on the objective function. In this way we created a set of six ensemble members. The Bayesian Model Averaging (BMA) approach was then used to combine these ensemble members, weighting each by the likelihood that the individual model is correct given the observations. The results clearly indicate the need to consider a multi-model ensemble strategy over a single model selection in biogeochemical modeling.
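
    The BMA combination step can be sketched in a few lines: each ensemble member's prediction is weighted by its posterior probability given the observations, here computed from Gaussian likelihoods with equal priors. The member predictions and error variance below are hypothetical.

```python
import numpy as np

# Hypothetical predictions of a soil-nitrogen quantity from three model variants of
# different complexity, plus observations at the same times/locations.
obs = np.array([2.1, 2.4, 3.0, 2.8, 3.5])
preds = np.array([
    [2.0, 2.5, 2.9, 2.9, 3.4],   # simple model
    [2.3, 2.2, 3.2, 2.6, 3.8],   # intermediate model
    [1.5, 2.9, 2.4, 3.3, 4.1],   # complex model
])
sigma = 0.3   # assumed observation/model error standard deviation

# Posterior model weights proportional to prior * likelihood (equal priors assumed).
log_lik = -0.5 * np.sum((preds - obs)**2, axis=1) / sigma**2
log_lik -= log_lik.max()                     # avoid numerical underflow
weights = np.exp(log_lik) / np.exp(log_lik).sum()

bma_prediction = weights @ preds
print("BMA weights:", np.round(weights, 3))
print("BMA combined prediction:", np.round(bma_prediction, 2))
```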

  16. Fock spaces for modeling macromolecular complexes

    NASA Astrophysics Data System (ADS)

    Kinney, Justin

    Large macromolecular complexes play a fundamental role in how cells function. Here I describe a Fock space formalism for mathematically modeling these complexes. Specifically, this formalism allows ensembles of complexes to be defined in terms of elementary molecular ``building blocks'' and ``assembly rules.'' Such definitions avoid the massive redundancy inherent in standard representations, in which all possible complexes are manually enumerated. Methods for systematically computing ensembles of complexes from a list of components and interaction rules are described. I also show how this formalism readily accommodates coarse-graining. Finally, I introduce diagrammatic techniques that greatly facilitate the application of this formalism to both equilibrium and non-equilibrium biochemical systems.

  17. Molecular simulation and modeling of complex I.

    PubMed

    Hummer, Gerhard; Wikström, Mårten

    2016-07-01

    Molecular modeling and molecular dynamics simulations play an important role in the functional characterization of complex I. With its large size and complicated function, linking quinone reduction to proton pumping across a membrane, complex I poses unique modeling challenges. Nonetheless, simulations have already helped in the identification of possible proton transfer pathways. Simulations have also shed light on the coupling between electron and proton transfer, thus pointing the way in the search for the mechanistic principles underlying the proton pump. In addition to reviewing what has already been achieved in complex I modeling, we aim here to identify pressing issues and to provide guidance for future research to harness the power of modeling in the functional characterization of complex I. This article is part of a Special Issue entitled Respiratory complex I, edited by Volker Zickermann and Ulrich Brandt. PMID:26780586

  18. Selecting model complexity in learning problems

    SciTech Connect

    Buescher, K.L.; Kumar, P.R.

    1993-10-01

    To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of "simple" candidate models using part of the data and then selects from among these by using the rest of the data.
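
    A toy version of this split-data idea is sketched below: candidate models of increasing order are fitted on one part of noisy data, and the held-out remainder is used to select among them, with no absolute complexity penalty. The polynomial family simply stands in for the pool of "simple" candidate models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Noisy data from an underlying quadratic process.
x = np.linspace(-1, 1, 60)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.2, size=x.size)

# The first part of the data builds the pool of candidate models ...
half = x.size // 2
fit_idx = rng.permutation(x.size)[:half]
sel_idx = np.setdiff1d(np.arange(x.size), fit_idx)

candidates = {deg: np.polyfit(x[fit_idx], y[fit_idx], deg) for deg in range(1, 9)}

# ... and the rest of the data selects among them by held-out error alone.
def mse(coeffs, idx):
    return np.mean((np.polyval(coeffs, x[idx]) - y[idx])**2)

best = min(candidates, key=lambda d: mse(candidates[d], sel_idx))
print("selected polynomial degree:", best)
```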

  19. The Complex Experience of Learning to Do Research

    ERIC Educational Resources Information Center

    Emo, Kenneth; Emo, Wendy; Kimn, Jung-Han; Gent, Stephen

    2015-01-01

    This article examines how student learning is a product of the experiential interaction between person and environment. We draw from the theoretical perspective of complexity to shed light on the emergent, adaptive, and unpredictable nature of students' learning experiences. To understand the relationship between the environment and the student…

  20. School Experiences of an Adolescent with Medical Complexities Involving Incontinence

    ERIC Educational Resources Information Center

    Filce, Hollie Gabler; Bishop, John B.

    2014-01-01

    The educational implications of chronic illnesses which involve incontinence are not well represented in the literature. The experiences of an adolescent with multiple complex illnesses, including incontinence, were explored via an intrinsic case study. Data were gathered from the adolescent, her mother, and teachers through interviews, email…

  1. Facing up to Complexity: Implications for Our Social Experiments.

    PubMed

    Hawkins, Ronnie

    2016-06-01

    Biological systems are highly complex, and for this reason there is a considerable degree of uncertainty as to the consequences of making significant interventions into their workings. Since a number of new technologies are already impinging on living systems, including our bodies, many of us have become participants in large-scale "social experiments". I will discuss biological complexity and its relevance to the technologies that brought us BSE/vCJD and the controversy over GM foods. Then I will consider some of the complexities of our social dynamics, and argue for making a shift from using the precautionary principle to employing the approach of evaluating the introduction of new technologies by conceiving of them as social experiments. PMID:26062747

  2. Reducing Spatial Data Complexity for Classification Models

    SciTech Connect

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-29

    Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. In response, we propose a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the

  3. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. In response, we propose a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulated data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the

  4. Scaffolding in Complex Modelling Situations

    ERIC Educational Resources Information Center

    Stender, Peter; Kaiser, Gabriele

    2015-01-01

    The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…

  5. Role models for complex networks

    NASA Astrophysics Data System (ADS)

    Reichardt, J.; White, D. R.

    2007-11-01

    We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.

  6. Agent-based modeling of complex infrastructures

    SciTech Connect

    North, M. J.

    2001-06-01

    Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.

  7. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-08-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty of the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy to alleviate this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
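
    For reference, the chaotic Lorenz-95 (also written Lorenz-96) system used in such twin experiments can be integrated in a few lines; the conventional forcing F = 8 and 40 variables are assumed, and the emulation and assimilation machinery built on top of it is not shown.

```python
import numpy as np

def lorenz95_rhs(x, forcing=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, dt):
    k1 = lorenz95_rhs(x)
    k2 = lorenz95_rhs(x + 0.5 * dt * k1)
    k3 = lorenz95_rhs(x + 0.5 * dt * k2)
    k4 = lorenz95_rhs(x + dt * k3)
    return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

n, dt = 40, 0.05
x = 8.0 * np.ones(n)
x[19] += 0.01          # small perturbation to trigger chaotic behaviour
for _ in range(1000):
    x = rk4_step(x, dt)
print("state mean and spread after spin-up:", x.mean().round(2), x.std().round(2))
```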

  8. Numerical models of complex diapirs

    NASA Astrophysics Data System (ADS)

    Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.

    1993-12-01

    Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which the diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, a moderate erosion rate to "wine glass" diapirs, and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat"; these structures spread lateral sheets.

  9. Complex system modelling for veterinary epidemiology.

    PubMed

    Lanzas, Cristina; Chen, Shi

    2015-02-01

    The use of mathematical models has a long tradition in infectious disease epidemiology. The nonlinear dynamics and complexity of pathogen transmission pose challenges in understanding its key determinants, in identifying critical points, and designing effective mitigation strategies. Mathematical modelling provides tools to explicitly represent the variability, interconnectedness, and complexity of systems, and has contributed to numerous insights and theoretical advances in disease transmission, as well as to changes in public policy, health practice, and management. In recent years, our modelling toolbox has considerably expanded due to the advancements in computing power and the need to model novel data generated by technologies such as proximity loggers and global positioning systems. In this review, we discuss the principles, advantages, and challenges associated with the most recent modelling approaches used in systems science, the interdisciplinary study of complex systems, including agent-based, network and compartmental modelling. Agent-based modelling is a powerful simulation technique that considers the individual behaviours of system components by defining a set of rules that govern how individuals ("agents") within given populations interact with one another and the environment. Agent-based models have become a recent popular choice in epidemiology to model hierarchical systems and address complex spatio-temporal dynamics because of their ability to integrate multiple scales and datasets. PMID:25449734
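
    As a minimal example of the compartmental approach mentioned alongside agent-based and network models, the sketch below integrates a basic SIR model with explicit Euler steps; the transmission and recovery rates are arbitrary illustrative values.

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One explicit-Euler step of the classic SIR compartmental model (fractions)."""
    new_inf = beta * s * i * dt
    new_rec = gamma * i * dt
    return s - new_inf, i + new_inf - new_rec, r + new_rec

s, i, r = 0.99, 0.01, 0.0
beta, gamma, dt = 0.4, 0.1, 0.1
trajectory = []
for _ in range(1000):
    s, i, r = sir_step(s, i, r, beta, gamma, dt)
    trajectory.append(i)
print("peak infected fraction:", round(max(trajectory), 3))
```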

  10. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1993-01-01

    Modeling plays a vital role in providing physical insights into behavior revealed by experiment. The program at the University of Illinois is designed to improve our understanding of basic combustion phenomena through the analytical and numerical modeling of a variety of configurations undergoing experimental study in NASA's microgravity combustion program. Significant progress has been made in two areas: (1) flame-balls, studied experimentally by Ronney and his co-workers; (2) particle-cloud flames, studied by Berlad and his collaborators. Additional work is mentioned below. NASA funding for the U. of Illinois program commenced in February 1991, but work was initiated prior to that date and the program can only be understood with this foundation exposed. Accordingly, we start with a brief description of some key results obtained in the pre-2/91 work.

  11. Discrete Element Modeling of Complex Granular Flows

    NASA Astrophysics Data System (ADS)

    Movshovitz, N.; Asphaug, E. I.

    2010-12-01

    Granular materials occur almost everywhere in nature, and are actively studied in many fields of research, from food industry to planetary science. One approach to the study of granular media, the continuum approach, attempts to find a constitutive law that determines the material's flow, or strain, under applied stress. The main difficulty with this approach is that granular systems exhibit different behavior under different conditions, behaving at times as an elastic solid (e.g. pile of sand), at times as a viscous fluid (e.g. when poured), or even as a gas (e.g. when shaken). Even if all these physics are accounted for, numerical implementation is made difficult by the wide and often discontinuous ranges in continuum density and sound speed. A different approach is Discrete Element Modeling (DEM). Here the goal is to directly model every grain in the system as a rigid body subject to various body and surface forces. The advantage of this method is that it treats all of the above regimes in the same way, and can easily deal with a system moving back and forth between regimes. But as a granular system typically contains a multitude of individual grains, the direct integration of the system can be very computationally expensive. For this reason most DEM codes are limited to spherical grains of uniform size. However, spherical grains often cannot replicate the behavior of real world granular systems. A simple pile of spherical grains, for example, relies on static friction alone to keep its shape, while in reality a pile of irregular grains can maintain a much steeper angle by interlocking force chains. In the present study we employ a commercial DEM, nVidia's PhysX Engine, originally designed for the game and animation industry, to simulate complex granular flows with irregular, non-spherical grains. This engine runs as a multi threaded process and can be GPU accelerated. We demonstrate the code's ability to physically model granular materials in the three regimes

  12. Explicit stress integration of complex soil models

    NASA Astrophysics Data System (ADS)

    Zhao, Jidong; Sheng, Daichao; Rouainia, M.; Sloan, Scott W.

    2005-10-01

    In this paper, two complex critical-state models are implemented in a displacement finite element code. The two models are used for structured clays and sands, and are characterized by multiple yield surfaces, plastic yielding within the yield surface, and complex kinematic and isotropic hardening laws. The consistent tangent operators - which lead to quadratic convergence when used in a fully implicit algorithm - are difficult to derive or may not even exist. The stress integration scheme used in this paper is based on the explicit Euler method with automatic substepping and error control. This scheme employs the classical elastoplastic stiffness matrix and requires only the first derivatives of the yield function and plastic potential. This explicit scheme is used to integrate the two complex critical-state models - the sub/super-loading surfaces model (SSLSM) and the kinematic hardening structure model (KHSM). Various boundary-value problems are then analysed. The results for the two models are compared with each other, as well as with those from standard Cam-clay models. The accuracy and efficiency of the scheme used for the complex models are also investigated.
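
    A generic sketch of the substepping idea (modified-Euler error control in the spirit of Sloan-type explicit schemes) is given below for an arbitrary rate equation; the relaxation-type "constitutive law" is a placeholder and does not represent the SSLSM or KHSM models.

```python
import numpy as np

def integrate_with_substepping(rate, y0, d_strain, tol=1e-4, dt_min=1e-6):
    """
    Explicit modified-Euler integration of dy/d(strain) = rate(y) over a unit
    pseudo-time increment, with automatic substep-size control based on the
    local error estimate (difference between first- and second-order updates).
    """
    y, t, dt = np.asarray(y0, float).copy(), 0.0, 1.0
    while t < 1.0:
        dt = min(dt, 1.0 - t)
        k1 = rate(y) * dt * d_strain
        k2 = rate(y + k1) * dt * d_strain
        err = 0.5 * np.linalg.norm(k2 - k1) / max(np.linalg.norm(y + k1), 1e-12)
        if err <= tol or dt <= dt_min:
            y += 0.5 * (k1 + k2)          # accept the second-order update
            t += dt
        # Adapt the substep from the error estimate (safety factor 0.9, growth capped at 2x).
        dt = max(dt_min, min(2.0, 0.9 * np.sqrt(tol / max(err, 1e-16))) * dt)
    return y

# Placeholder "constitutive law": linear relaxation toward a target stress state.
target = np.array([100.0, 50.0])
rate = lambda s: 5.0 * (target - s)
print(integrate_with_substepping(rate, y0=np.zeros(2), d_strain=1.0))
```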

  13. SUMMARY OF COMPLEX TERRAIN MODEL EVALUATION

    EPA Science Inventory

    The Environmental Protection Agency conducted a scientific review of a set of eight complex terrain dispersion models. TRC Environmental Consultants, Inc. calculated and tabulated a uniform set of performance statistics for the models using the Cinder Cone Butte and Westvaco Luke...

  14. Building phenomenological models of complex biological processes

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan; Nemenman, Ilya

    2009-11-01

    A central goal of any modeling effort is to make predictions regarding experimental conditions that have not yet been observed. Overly simple models will not be able to fit the original data well, but overly complex models are likely to overfit the data and thus produce bad predictions. Modern quantitative biology modeling efforts often err on the complexity side of this balance, using myriads of microscopic biochemical reaction processes with a priori unknown kinetic parameters to model relatively simple biological phenomena. In this work, we show how Bayesian model selection (which is mathematically similar to a low-temperature expansion in statistical physics) can be used to build coarse-grained, phenomenological models of complex dynamical biological processes, which have better predictive power than microscopically correct, but poorly constrained, mechanistic molecular models. We illustrate this on the example of a multiply-modifiable protein molecule, which is a simplified description of multiple biological systems, such as immune receptors and the RNA polymerase complex. Our approach is similar in spirit to the phenomenological Landau expansion for the free energy in the theory of critical phenomena.

  15. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…

  16. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule, or a set of closely related isomers, as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
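
    A toy illustration of the structure-oriented lumping idea follows: each molecule (or isomer set) is a vector of counts of structural increments, and a reaction rule is a transformation on those counts. The group set and the dealkylation rule below are invented for illustration and are not the increments used in the paper.

```python
import numpy as np

# Structural increments (columns): aromatic ring, naphthenic ring, -CH2- chain unit, sulfur.
GROUPS = ["A6", "N6", "CH2", "S"]

# Each entry is one molecular component expressed as counts of the increments.
molecules = {
    "alkylbenzene": np.array([1, 0, 6, 0]),
    "alkyl thiophene-like": np.array([1, 0, 4, 1]),
    "paraffin": np.array([0, 0, 12, 0]),
}

def dealkylation(vec, removed_ch2=2):
    """Example reaction rule: strip CH2 units off a side chain, producing a lighter
    component plus a small paraffin fragment (counts must stay non-negative)."""
    ch2 = GROUPS.index("CH2")
    if vec[ch2] < removed_ch2:
        return None
    product = vec.copy()
    product[ch2] -= removed_ch2
    fragment = np.zeros_like(vec)
    fragment[ch2] = removed_ch2
    return product, fragment

for name, vec in molecules.items():
    result = dealkylation(vec)
    if result is not None:
        print(name, "->", result[0], "+", result[1])
```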

  17. Governance of complex systems: results of a sociological simulation experiment.

    PubMed

    Adelt, Fabian; Weyer, Johannes; Fink, Robin D

    2014-01-01

    Social sciences have discussed the governance of complex systems for a long time. The following paper tackles the issue by means of experimental sociology, in order to investigate the performance of different modes of governance empirically. The simulation framework developed is based on Esser's model of sociological explanation as well as on Kroneberg's model of frame selection. The performance of governance has been measured by means of three macro and two micro indicators. Surprisingly, central control mostly performs better than decentralised coordination. However, results not only depend on the mode of governance, but there is also a relation between performance and the composition of actor populations, which has not yet been investigated sufficiently. Practitioner Summary: Practitioners can gain insights into the functioning of complex systems and learn how to better manage them. Additionally, they are provided with indicators to measure the performance of complex systems. PMID:24456093

  18. Updating the debate on model complexity

    USGS Publications Warehouse

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  19. Balancing model complexity and measurements in hydrology

    NASA Astrophysics Data System (ADS)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

    The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model
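
    The Akaike Information Criterion mentioned above can be computed directly from a model's residuals; the toy comparison below (polynomial fits of increasing order to synthetic data) is only meant to show the complexity penalty at work and makes the usual Gaussian-error assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.size)

def aic(n_params, residuals):
    # Gaussian-error form: AIC = n ln(RSS/n) + 2k
    n = residuals.size
    return n * np.log(np.sum(residuals**2) / n) + 2 * n_params

for degree in (1, 3, 5, 7, 9):
    coeffs = np.polyfit(x, y, degree)
    res = y - np.polyval(coeffs, x)
    print(f"degree {degree}: AIC = {aic(degree + 1, res):7.1f}")
```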

  20. Multifaceted Modelling of Complex Business Enterprises.

    PubMed

    Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  1. Multifaceted Modelling of Complex Business Enterprises

    PubMed Central

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  2. Combustion, Complex Fluids, and Fluid Physics Experiments on the ISS

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Urban, David

    2012-01-01

    multiphase flows, capillary phenomena, and heat pipes. Finally, in complex fluids, experiments in rheology and soft condensed materials will be presented.

  3. Combustion, Complex Fluids, and Fluid Physics Experiments on the ISS

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Urban, David

    2012-01-01

    From the very early days of human spaceflight, NASA has been conducting experiments in space to understand the effect of weightlessness on physical and chemically reacting systems. NASA Glenn Research Center (GRC) in Cleveland, Ohio has been at the forefront of this research, looking at both fundamental studies in microgravity and experiments targeted at reducing the risks to long-duration human missions to the moon, Mars, and beyond. In the current International Space Station (ISS) era, we now have an orbiting laboratory that provides the highly desired condition of long-duration microgravity. This allows continuous and interactive research similar to Earth-based laboratories. Because of these capabilities, the ISS is an indispensable laboratory for low-gravity research. NASA GRC has been actively involved in developing and operating facilities and experiments on the ISS since the beginning of a permanent human presence on November 2, 2000. As the lead Center for combustion, complex fluids, and fluid physics, GRC has led the successful implementation of the Combustion Integrated Rack (CIR) and the Fluids Integrated Rack (FIR) as well as the continued use of other facilities on the ISS. These facilities have supported combustion experiments in fundamental droplet combustion; fire detection; fire extinguishment; soot phenomena; flame liftoff and stability; and material flammability. The fluids experiments have studied capillary flow; magneto-rheological fluids; colloidal systems; extensional rheology; and pool and nucleate boiling phenomena. In this paper, we provide an overview of the experiments conducted on the ISS over the past 12 years.

  4. Slip complexity in earthquake fault models.

    PubMed Central

    Rice, J R; Ben-Zion, Y

    1996-01-01

    We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size. PMID:11607669

  5. A mechanistic model of the cysteine synthase complex.

    PubMed

    Feldman-Salit, Anna; Wirtz, Markus; Hell, Ruediger; Wade, Rebecca C

    2009-02-13

    Plants and bacteria assimilate and incorporate inorganic sulfur into organic compounds such as the amino acid cysteine. Cysteine biosynthesis involves a bienzyme complex, the cysteine synthase (CS) complex. The CS complex is composed of the enzymes serine acetyl transferase (SAT) and O-acetyl-serine-(thiol)-lyase (OAS-TL). Although it is experimentally known that formation of the CS complex influences cysteine production, the exact biological function of the CS complex, the mechanism of reciprocal regulation of the constituent enzymes and the structure of the complex are still poorly understood. Here, we used docking techniques to construct a model of the CS complex from mitochondrial Arabidopsis thaliana. The three-dimensional structures of the enzymes were modeled by comparative techniques. The C-termini of SAT, missing in the template structures but crucial for CS formation, were modeled de novo. Diffusional encounter complexes of SAT and OAS-TL were generated by rigid-body Brownian dynamics simulation. By incorporating experimental constraints during Brownian dynamics simulation, we identified complexes consistent with experiments. Selected encounter complexes were refined by molecular dynamics simulation to generate structures of bound complexes. We found that although a stoichiometric ratio of six OAS-TL dimers to one SAT hexamer in the CS complex is geometrically possible, binding energy calculations suggest that, consistent with experiments, a ratio of only two OAS-TL dimers to one SAT hexamer is more likely. Computational mutagenesis of residues in OAS-TL that are experimentally significant for CS formation hindered the association of the enzymes due to a less-favorable electrostatic binding free energy. Since the enzymes from A. thaliana were expressed in Escherichia coli, the cross-species binding of SAT and OAS-TL from E. coli and A. thaliana was explored. The results showed that reduced cysteine production might be due to a cross-binding of A. thaliana

  6. Minimum-complexity helicopter simulation math model

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  7. Coherent operation of detector systems and their readout electronics in a complex experiment control environment

    NASA Astrophysics Data System (ADS)

    Koestner, Stefan

    2009-09-01

    With the increasing size and degree of complexity of today's experiments in high energy physics the required amount of work and complexity to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards which are outside the radiation area are accessed via embedded credit card sized PCs which are connected to a large local area network. The SPECS protocol is used for control of the front end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.
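
    A minimal sketch of the hierarchical finite-state-machine idea described above: each device carries a small state machine, and a parent control node fans commands out to its children and summarizes their states. The states and commands are generic placeholders, not the actual LHCb FSM definitions.

```python
# Minimal hierarchical FSM sketch (generic states/commands, not the LHCb definitions).
TRANSITIONS = {
    ("NOT_READY", "configure"): "READY",
    ("READY", "start"): "RUNNING",
    ("RUNNING", "stop"): "READY",
}

class DeviceNode:
    def __init__(self, name):
        self.name, self.state = name, "NOT_READY"

    def command(self, action):
        # Ignore commands that are not allowed in the current state.
        self.state = TRANSITIONS.get((self.state, action), self.state)

class ControlNode:
    """Parent node: fans commands out to children and derives its own state from theirs."""
    def __init__(self, children):
        self.children = children

    def command(self, action):
        for child in self.children:
            child.command(action)

    @property
    def state(self):
        states = {c.state for c in self.children}
        return states.pop() if len(states) == 1 else "MIXED"

boards = ControlNode([DeviceNode(f"board{i}") for i in range(4)])
boards.command("configure")
boards.command("start")
print(boards.state)  # RUNNING
```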

  8. Constructing minimal models for complex system dynamics

    NASA Astrophysics Data System (ADS)

    Barzel, Baruch; Liu, Yang-Yu; Barabási, Albert-László

    2015-05-01

    One of the strengths of statistical physics is the ability to reduce macroscopic observations into microscopic models, offering a mechanistic description of a system's dynamics. This paradigm, rooted in Boltzmann's gas theory, has found applications from magnetic phenomena to subcellular processes and epidemic spreading. Yet, each of these advances were the result of decades of meticulous model building and validation, which are impossible to replicate in most complex biological, social or technological systems that lack accurate microscopic models. Here we develop a method to infer the microscopic dynamics of a complex system from observations of its response to external perturbations, allowing us to construct the most general class of nonlinear pairwise dynamics that are guaranteed to recover the observed behaviour. The result, which we test against both numerical and empirical data, is an effective dynamic model that can predict the system's behaviour and provide crucial insights into its inner workings.

  9. Reassessing Geophysical Models of the Bushveld Complex in 3D

    NASA Astrophysics Data System (ADS)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) Separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959) 2) Separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987) 3) A single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998) Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account effects of variations in geometry and geophysical properties of lithologies in a full three dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models, when modelling in 3D. The three published geophysical models were remodelled using full 3D potential field modelling software, and including crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less

  10. Modeling acuity for optotypes varying in complexity.

    PubMed

    Watson, Andrew B; Ahumada, Albert J

    2012-01-01

    Watson and Ahumada (2008) described a template model of visual acuity based on an ideal-observer limited by optical filtering, neural filtering, and noise. They computed predictions for selected optotypes and optical aberrations. Here we compare this model's predictions to acuity data for six human observers, each viewing seven different optotype sets, consisting of one set of Sloan letters and six sets of Chinese characters, differing in complexity (Zhang, Zhang, Xue, Liu, & Yu, 2007). Since optical aberrations for the six observers were unknown, we constructed 200 model observers using aberrations collected from 200 normal human eyes (Thibos, Hong, Bradley, & Cheng, 2002). For each condition (observer, optotype set, model observer) we estimated the model noise required to match the data. Expressed as efficiency, performance for Chinese characters was 1.4 to 2.7 times lower than for Sloan letters. Efficiency was weakly and inversely related to perimetric complexity of optotype set. We also compared confusion matrices for human and model observers. Correlations for off-diagonal elements ranged from 0.5 to 0.8 for different sets, and the average correlation for the template model was superior to a geometrical moment model with a comparable number of parameters (Liu, Klein, Xue, Zhang, & Yu, 2009). The template model performed well overall. Estimated psychometric function slopes matched the data, and noise estimates agreed roughly with those obtained independently from contrast sensitivity to Gabor targets. For optotypes of low complexity, the model accurately predicted relative performance. This suggests the model may be used to compare acuities measured with different sets of simple optotypes. PMID:23024356
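
    One of the comparisons described above, correlating the off-diagonal entries of human and model confusion matrices, can be sketched as follows. The matrices here are invented for illustration and are not the experimental data.

```python
import numpy as np

def offdiag_correlation(cm_a: np.ndarray, cm_b: np.ndarray) -> float:
    """Pearson correlation of the off-diagonal entries of two confusion matrices."""
    mask = ~np.eye(cm_a.shape[0], dtype=bool)  # drop the correct-response (diagonal) cells
    return float(np.corrcoef(cm_a[mask], cm_b[mask])[0, 1])

# Hypothetical 10-optotype confusion matrices (rows: presented, columns: reported).
rng = np.random.default_rng(1)
human = rng.poisson(3, size=(10, 10)) + np.eye(10, dtype=int) * 40
model = human + rng.poisson(1, size=(10, 10))  # a model observer resembling the human

print(f"off-diagonal correlation: {offdiag_correlation(human, model):.2f}")
```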

  11. The Kuramoto model in complex networks

    NASA Astrophysics Data System (ADS)

    Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

    2016-01-01

    Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
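
    The networked Kuramoto dynamics reviewed here are dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i), with coherence measured by the order parameter r = |<exp(i*theta)>|. Below is a minimal forward-Euler sketch on a random graph; the coupling, network and frequency choices are illustrative, not taken from the review.

```python
import numpy as np

rng = np.random.default_rng(42)
n, p, coupling, dt, steps = 200, 0.05, 0.5, 0.01, 5000

# Erdos-Renyi adjacency matrix and Gaussian natural frequencies (illustrative choices).
adj = (rng.random((n, n)) < p).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T
omega = rng.normal(0.0, 1.0, n)
theta = rng.uniform(0.0, 2.0 * np.pi, n)

for _ in range(steps):
    # dtheta_i/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
    phase_diff = theta[None, :] - theta[:, None]
    theta += dt * (omega + coupling * (adj * np.sin(phase_diff)).sum(axis=1))

# Global order parameter r in [0, 1]: 1 means full phase coherence.
r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.2f}")
```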

  12. Complexity of precipitation patterns: Comparison of simulation with experiment.

    PubMed

    Polezhaev, A. A.; Muller, S. C.

    1994-12-01

    Numerical simulations show that a simple model for the formation of Liesegang precipitation patterns, which takes into account the dependence of nucleation and particle growth kinetics on supersaturation, can explain not only simple patterns like parallel bands in a test tube or concentric rings in a petri dish, but also more complex structural features, such as dislocations, helices, "Saturn rings," or patterns formed in the case of equal initial concentrations of the source substances. The limits of application of the model are discussed. (c) 1994 American Institute of Physics. PMID:12780140

  13. How useful are complex flood damage models?

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2014-04-01

    We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured in terms of the number of explanatory variables, which varies from 1 up to 10 variables singled out from 28 candidate variables. Model validation is based on empirical damage data, and observation uncertainty is taken into consideration. The comparison of model predictive performance shows that additional explanatory variables besides the water depth improve the predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, the reliability of Bayesian network-based predictions in space-time transfer is larger than for the remaining models, and the uncertainties associated with damage predictions are reflected more completely.

  14. Experiments beyond the standard model

    SciTech Connect

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology, I call these Experimental Needs. 92 references.

  15. Modeling of protein binary complexes using structural mass spectrometry data

    PubMed Central

    Kamal, J.K. Amisha; Chance, Mark R.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684

  16. Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Reddy, C. J.

    2011-01-01

    This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.

  17. Synthetic seismograms for a complex crustal model

    NASA Astrophysics Data System (ADS)

    Sandmeier, K.-J.; Wenzel, F.

    1986-01-01

    The algorithm of the original Reflectivity Method has been vectorized and implemented on a CDC CYBER 205 computer. Calculation times are shortened by a factor of 20 to 30 compared with a general purpose computer with a capacity of several million floating point operations per second (MFLOP). The rapid calculation of synthetic seismograms for complex models, high frequency sources and all offset ranges is a provision for modeling not only particular phases but the whole observed wavefield. As an example we model refraction data of the Black Forest, Southwest Germany and are able to derive rather tight constraints on the physical properties of the lower crust.

  18. Human driven transitions in complex model ecosystems

    NASA Astrophysics Data System (ADS)

    Harfoot, Mike; Newbold, Tim; Tittensor, Derek; Purves, Drew

    2015-04-01

    Human activities have been observed to be impacting ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying the functioning of ecosystems, represent only a small proportion of organisms and do not adequately capture complex non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate boundaries of complex ecosystems to human perturbation. We explore several drivers including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model., PLoS Biol. 12, e1001841 (2014).

  19. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
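
    Response Surface Methods of the kind invoked above fit low-order polynomial regressions, including interaction terms, to data from a designed experiment. The sketch below is a minimal least-squares fit of a hypothetical two-factor quadratic response surface; the factors, data and coefficients are invented and do not represent the reported wind tunnel models.

```python
import numpy as np

# Hypothetical factors: angle of attack (deg) and wing tilt (deg); response: a lift-like coefficient.
rng = np.random.default_rng(3)
alpha = rng.uniform(-5, 15, 40)
tilt = rng.uniform(0, 90, 40)
cl = (0.1 + 0.08 * alpha + 0.002 * tilt - 0.001 * alpha**2
      + 0.0005 * alpha * tilt + rng.normal(0, 0.02, 40))

# Full quadratic response-surface model: 1, a, t, a^2, t^2, a*t.
X = np.column_stack([np.ones_like(alpha), alpha, tilt, alpha**2, tilt**2, alpha * tilt])
beta, *_ = np.linalg.lstsq(X, cl, rcond=None)
print("fitted coefficients:", np.round(beta, 4))
```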

  20. Intrinsic Uncertainties in Modeling Complex Systems.

    SciTech Connect

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturba- tion whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing in- herently discrete quantities. We point out potentially worrisome uncertainties, arising from math- ematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantify- ing Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who con- tributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for provid- ing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  1. Simulating Complex Modulated Phases Through Spin Models

    NASA Astrophysics Data System (ADS)

    Selinger, Jonathan V.; Lopatina, Lena M.; Geng, Jun; Selinger, Robin L. B.

    2009-03-01

    We extend the computational approach for studying striped phases on curved surfaces, presented in the previous talk, to two new problems involving complex modulated phases. First, we simulate a smectic liquid crystal on an arbitrary mesh by mapping the director field onto a vector spin and the density wave onto an Ising spin. We can thereby determine how the smectic phase responds to any geometrical constraints, including hybrid boundary conditions, patterned substrates, and disordered substrates. This method may provide a useful tool for designing ferroelectric liquid crystal cells. Second, we explore a model of vector spins on a flat two-dimensional (2D) lattice with long-range antiferromagnetic interactions. This model generates modulated phases with surprisingly complex structures, including 1D stripes and 2D periodic cells, which are independent of the underlying lattice. We speculate on the physical significance of these structures.

  2. Industrial Source Complex (ISC) dispersion model. Software

    SciTech Connect

    Schewe, G.; Sieurin, E.

    1980-01-01

    The model updates various EPA dispersion model algorithms and combines them in two computer programs that can be used to assess the air quality impact of emissions from the wide variety of source types associated with an industrial source complex. The ISC Model short-term program ISCST, an updated version of the EPA Single Source (CRSTER) Model, uses sequential hourly meteorological data to calculate values of average concentration or total dry deposition for time periods of 1, 2, 3, 4, 6, 8, 12 and 24 hours. Additionally, ISCST may be used to calculate 'N'-day averages, where 'N' may be as large as 366 days. The ISC Model long-term computer program ISCLT, a sector-averaged model that updates and combines basic features of the EPA Air Quality Display Model (AQDM) and the EPA Climatological Dispersion Model (CDM), uses STAR Summaries to calculate seasonal and/or annual average concentration or total deposition values. Both the ISCST and ISCLT programs make the same basic dispersion-model assumptions. Additionally, both the ISCST and ISCLT programs use either a polar or a Cartesian receptor grid... Software Description: The programs are written in the FORTRAN IV programming language for implementation on a UNIVAC 1110 computer and also on medium-to-large IBM or CDC systems. 65K words of core storage are required to operate the model.
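
    The ISC programs are Gaussian plume models. The sketch below shows a minimal ground-level form of the steady-state plume equation; the dispersion-coefficient curves, source strength and stack height are generic placeholders for illustration, not the ISC rural/urban coefficients.

```python
import numpy as np

def gaussian_plume(q, u, x, y, stack_height, a=0.08, b=0.0001, c=0.06, d=0.0015):
    """Ground-level concentration from a steady point source (illustrative sketch).

    C = Q / (pi * u * sigma_y * sigma_z) * exp(-y^2 / (2 sigma_y^2))
                                         * exp(-H^2 / (2 sigma_z^2))
    The sigma curves below are generic power-law placeholders, not the ISC coefficients.
    """
    sigma_y = a * x / np.sqrt(1.0 + b * x)
    sigma_z = c * x / np.sqrt(1.0 + d * x)
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2.0 * sigma_y**2))
            * np.exp(-stack_height**2 / (2.0 * sigma_z**2)))

# Hypothetical case: 10 g/s source, 4 m/s wind, 35 m stack, receptor 800 m downwind on the axis.
print(f"{gaussian_plume(10.0, 4.0, 800.0, 0.0, 35.0):.2e} g/m^3")
```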

  3. Noncommutative complex Grosse-Wulkenhaar model

    SciTech Connect

    Hounkonnou, Mahouton Norbert; Samary, Dine Ousmane

    2008-11-18

    This paper stands for an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc 956(2007) 55-60], for the NC complex Grosse-Wulkenhaar model. It provides with an extension of a recent work [Physics Letters B 653(2007) 343-345]. The local conservation of energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.

  4. Surface complexation modeling of inositol hexaphosphate sorption onto gibbsite.

    PubMed

    Ruyter-Hooley, Maika; Larsson, Anna-Carin; Johnson, Bruce B; Antzutkin, Oleg N; Angove, Michael J

    2015-02-15

    The sorption of Inositol hexaphosphate (IP6) onto gibbsite was investigated using a combination of adsorption experiments, (31)P solid-state MAS NMR spectroscopy, and surface complexation modeling. Adsorption experiments conducted at four temperatures showed that IP6 sorption decreased with increasing pH. At pH 6, IP6 sorption increased with increasing temperature, while at pH 10 sorption decreased as the temperature was raised. (31)P MAS NMR measurements at pH 3, 6, 9 and 11 produced spectra with broad resonance lines that could be de-convoluted with up to five resonances (+5, 0, -6, -13 and -21ppm). The chemical shifts suggest the sorption process involves a combination of both outer- and inner-sphere complexation and surface precipitation. Relative intensities of the observed resonances indicate that outer-sphere complexation is important in the sorption process at higher pH, while inner-sphere complexation and surface precipitation are dominant at lower pH. Using the adsorption and (31)P MAS NMR data, IP6 sorption to gibbsite was modeled with an extended constant capacitance model (ECCM). The adsorption reactions that best described the sorption of IP6 to gibbsite included two inner-sphere surface complexes and one outer-sphere complex: ≡AlOH + IP₆¹²⁻ + 5H⁺ ↔ ≡Al(IP₆H₄)⁷⁻ + H₂O, ≡3AlOH + IP₆¹²⁻ + 6H⁺ ↔ ≡Al₃(IP₆H₃)⁶⁻ + 3H₂O, ≡2AlOH + IP₆¹²⁻ + 4H⁺ ↔ (≡AlOH₂)₂²⁺(IP₆H₂)¹⁰⁻. The inner-sphere complex involving three surface sites may be considered to be equivalent to a surface precipitate. Thermodynamic parameters were obtained from equilibrium constants derived from surface complexation modeling. Enthalpies for the formation of inner-sphere surface complexes were endothermic, while the enthalpy for the outer-sphere complex was exothermic. The entropies for the proposed sorption reactions were large and positive suggesting that changes in solvation of species play a major role in driving
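
    The thermodynamic parameters mentioned above are conventionally obtained from the temperature dependence of the fitted equilibrium constants via the van't Hoff relation ln K = -ΔH/(RT) + ΔS/R. A minimal sketch follows, using invented log K values at the four experimental temperatures; the numbers are placeholders, not the fitted constants from the study.

```python
import numpy as np

R = 8.314  # J mol^-1 K^-1

# Hypothetical equilibrium constants for one surface complex at four temperatures.
temps_K = np.array([283.15, 293.15, 303.15, 313.15])
log10_K = np.array([10.2, 10.6, 11.0, 11.3])

# van't Hoff: ln K = -dH/(R*T) + dS/R, i.e. linear in 1/T.
ln_K = log10_K * np.log(10.0)
slope, intercept = np.polyfit(1.0 / temps_K, ln_K, 1)
dH = -slope * R      # J/mol; positive here, i.e. endothermic, as for the inner-sphere complexes
dS = intercept * R   # J/(mol K)
print(f"dH = {dH / 1000:.1f} kJ/mol, dS = {dS:.0f} J/(mol K)")
```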

  5. The noisy voter model on complex networks

    NASA Astrophysics Data System (ADS)

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-04-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured.
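
    A minimal agent-based sketch of the noisy voter dynamics studied here: at each update an agent either adopts a random state (noise) or copies a randomly chosen neighbour. The network, noise rate and system size below are illustrative choices, not the parameters analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n, mean_degree, noise, steps = 500, 8, 0.01, 200_000

# Random (Erdos-Renyi-like) network stored as a neighbour list.
adj = rng.random((n, n)) < mean_degree / n
adj = np.triu(adj, 1)
adj = adj | adj.T
neighbours = [np.flatnonzero(adj[i]) for i in range(n)]

state = rng.integers(0, 2, n)  # binary opinions
for _ in range(steps):
    i = rng.integers(n)
    if rng.random() < noise or len(neighbours[i]) == 0:
        state[i] = rng.integers(0, 2)                     # noisy (random) update
    else:
        state[i] = state[rng.choice(neighbours[i])]       # copy a random neighbour

print(f"final fraction in state 1: {state.mean():.2f}")
```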

  6. The noisy voter model on complex networks

    PubMed Central

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-01-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured. PMID:27094773

  7. The noisy voter model on complex networks.

    PubMed

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-01-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity-variance of the underlying degree distribution-has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured. PMID:27094773

  8. Complexity of groundwater models in catchment hydrological models

    NASA Astrophysics Data System (ADS)

    Attinger, Sabine; Herold, Christian; Kumar, Rohini; Mai, Juliane; Ross, Katharina; Samaniego, Luis; Zink, Matthias

    2015-04-01

    In catchment hydrological models, groundwater is usually modeled very simply: it is conceptualized as a linear reservoir that receives water from the upper unsaturated zone reservoir and releases water to the river system as baseflow. Baseflow is only a minor component of the total river flow, and groundwater reservoir parameters are therefore difficult to estimate inversely from river flow data alone. In addition, the modelled values of the absolute height of the water filling the groundwater reservoir - in other words the groundwater levels - are of limited meaning due to coarse or no spatial resolution of groundwater and due to the fact that only river flow data are used for the calibration. The talk focuses on the question: What degree of model complexity and spatial resolution is necessary to characterize groundwater processes and groundwater responses adequately in distributed catchment hydrological models? Starting from a spatially distributed catchment hydrological model with a groundwater compartment that is conceptualized as a linear reservoir, we stepwise increase the groundwater model complexity and its spatial resolution to investigate which resolution, which complexity and which data are needed to reproduce baseflow and groundwater level data adequately.
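
    As a reference point for the complexity discussion, the baseline groundwater representation described above is a single linear reservoir: baseflow Q = kS, with dS/dt = R - Q. A minimal daily-time-step sketch follows; the recharge series and reservoir constant are invented for illustration.

```python
import numpy as np

k = 0.05         # reservoir constant, 1/day (illustrative)
dt = 1.0         # time step, days
storage = 100.0  # initial storage, mm
rng = np.random.default_rng(11)
recharge = rng.exponential(1.0, 365)  # hypothetical daily recharge, mm/day

baseflow = np.empty_like(recharge)
for t, r in enumerate(recharge):
    q = k * storage            # linear reservoir: Q = k * S
    storage += dt * (r - q)    # water balance: dS/dt = recharge - baseflow
    baseflow[t] = q

print(f"mean baseflow: {baseflow.mean():.2f} mm/day")
```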

  9. Complex Constructivism: A Theoretical Model of Complexity and Cognition

    ERIC Educational Resources Information Center

    Doolittle, Peter E.

    2014-01-01

    Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

  10. Predictive modelling of complex agronomic and biological systems.

    PubMed

    Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J

    2013-09-01

    Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternating scenarios. The ever growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. PMID:23777295

  11. Magnetic modeling of the Bushveld Igneous Complex

    NASA Astrophysics Data System (ADS)

    Webb, S. J.; Cole, J.; Letts, S. A.; Finn, C.; Torsvik, T. H.; Lee, M. D.

    2009-12-01

    Magnetic modeling of the 2.06 Ga Bushveld Complex presents special challenges due to a variety of magnetic effects. These include strong remanence in the Main Zone and extremely high magnetic susceptibilities in the Upper Zone, which exhibit self-demagnetization. Recent palaeomagnetic results have resolved a long-standing discrepancy between age data, which constrain the emplacement to within 1 million years, and older palaeomagnetic data which suggested ~50 million years for emplacement. The new palaeomagnetic results agree with the age data and present a single consistent pole, as opposed to a long polar wander path, for the Bushveld for all of the Zones and all of the limbs. These results also pass a fold test indicating the Bushveld Complex was emplaced horizontally lending support to arguments for connectivity. The magnetic signature of the Bushveld Complex provides an ideal mapping tool as the UZ has high susceptibility values and is well layered showing up as distinct anomalies on new high resolution magnetic data. However, this signature is similar to the highly magnetic BIFs found in the Transvaal and in the Witwatersrand Supergroups. Through careful mapping using new high resolution aeromagnetic data, we have been able to map the Bushveld UZ in complicated geological regions and identify a characteristic signature with well defined layers. The Main Zone, which has a more subdued magnetic signature, does have a strong remanent component and exhibits several magnetic reversals. The magnetic layers of the UZ contain layers of magnetitite with as much as 80-90% pure magnetite with large crystals (1-2 cm). While these layers are not strongly remanent, they have extremely high magnetic susceptibilities, and the self demagnetization effect must be taken into account when modeling these layers. Because the Bushveld Complex is so large, the geometry of the Earth’s magnetic field relative to the layers of the UZ Bushveld Complex changes orientation, creating
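
    For bodies with susceptibilities as high as the magnetitite layers, the induced magnetization is usually corrected for self-demagnetization; for a uniformly magnetized body the apparent susceptibility along a principal axis is k_app = k / (1 + N k) in SI units, with demagnetization factor N. The sketch below is a minimal illustration with invented values, not those used in the Bushveld modelling.

```python
# Self-demagnetization correction (SI units), minimal illustrative sketch.
def apparent_susceptibility(k: float, n_factor: float) -> float:
    """k_app = k / (1 + N * k) for demagnetization factor N (between 0 and 1 in SI)."""
    return k / (1.0 + n_factor * k)

k_true = 5.0  # hypothetical intrinsic susceptibility of a magnetitite layer (SI)
for n_factor in (0.0, 1.0 / 3.0, 1.0):  # sphere ~ 1/3; thin sheet magnetized normal to it ~ 1
    print(f"N = {n_factor:.2f}: apparent k = {apparent_susceptibility(k_true, n_factor):.2f}")
```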

  12. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
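
    The Scheffé part of the blending-model family referred to above is the canonical polynomial without an intercept, e.g. the quadratic form y = Σ_i β_i x_i + Σ_{i<j} β_ij x_i x_j for component proportions summing to one. Below is a minimal least-squares fit to invented three-component data; the Becker-type extensions discussed in the article are not shown.

```python
import numpy as np

# Hypothetical three-component mixture data (proportions sum to 1) and a response.
rng = np.random.default_rng(5)
x = rng.dirichlet(np.ones(3), size=30)
y = (4 * x[:, 0] + 6 * x[:, 1] + 3 * x[:, 2]
     + 8 * x[:, 0] * x[:, 1] - 5 * x[:, 1] * x[:, 2]
     + rng.normal(0, 0.1, 30))

# Scheffe quadratic model: y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j (no intercept term).
X = np.column_stack([x[:, 0], x[:, 1], x[:, 2],
                     x[:, 0] * x[:, 1], x[:, 0] * x[:, 2], x[:, 1] * x[:, 2]])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("Scheffe coefficients:", np.round(coef, 2))
```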

  13. Sonoluminescence: Experiments and models (Review)

    NASA Astrophysics Data System (ADS)

    Borisenok, V. A.

    2015-05-01

    Three models of the sonoluminescence source formation are considered: the shock-free compression model, the shock wave model, and the polarization model. Each of them is tested against experimental data on the size of the radiating region and the angular radiation pattern; the shape and duration of the radiation pulse; the influence of the type of liquid, gas composition, surfactants, sound frequency, and temperature of the liquid on the radiation intensity; the characteristics of the shock wave in the liquid; and the radiation spectra. It is shown that the most adequate qualitative explanation of the entire set of experimental data is given by the polarization model. Methods for verifying the model are proposed. Publications devoted to studying the possibility of a thermonuclear fusion reaction in a cavitation system are reviewed.

  14. Experiments on a Model Eye

    ERIC Educational Resources Information Center

    Arell, Antti; Kolari, Samuli

    1978-01-01

    Explains a laboratory experiment dealing with the optical features of the human eye. Shows how to measure the magnification of the retina and how the refractive anomaly of the eye could be used to measure the refractive power of the observer's eye. (GA)

  15. Structured analysis and modeling of complex systems

    NASA Technical Reports Server (NTRS)

    Strome, David R.; Dalrymple, Mathieu A.

    1992-01-01

    The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

  16. The Intermediate Complexity Atmospheric Research Model

    NASA Astrophysics Data System (ADS)

    Gutmann, Ethan; Clark, Martyn; Rasmussen, Roy; Arnold, Jeffrey; Brekke, Levi

    2015-04-01

    The high-resolution, non-hydrostatic atmospheric models often used for dynamical downscaling are extremely computationally expensive, and, for a certain class of problems, their complexity hinders our ability to ask key scientific questions, particularly those related to hydrology and climate change. For changes in precipitation in particular, an atmospheric model grid spacing capable of resolving the structure of mountain ranges is of critical importance, yet such simulations can not currently be performed with an advanced regional climate model for long time periods, over large areas, and forced by many climate models. Here we present the newly developed Intermediate Complexity Atmospheric Research model (ICAR) capable of simulating critical atmospheric processes two to three orders of magnitude faster than a state of the art regional climate model. ICAR uses a simplified dynamical formulation based off of linear theory, combined with the circulation field from a low-resolution climate model. The resulting three-dimensional wind field is used to advect heat and moisture within the domain, while sub-grid physics (e.g. microphysics) are processed by standard and simplified physics schemes from the Weather Research and Forecasting (WRF) model. ICAR is tested in comparison to WRF by downscaling a climate change scenario over the Colorado Rockies. Both atmospheric models predict increases in precipitation across the domain with a greater increase on the western half. In contrast, statistically downscaled precipitation using multiple common statistical methods predict decreases in precipitation over the western half of the domain. Finally, we apply ICAR to multiple CMIP5 climate models and scenarios with multiple parameterization options to investigate the importance of uncertainty in sub-grid physics as compared to the uncertainty in the large scale climate scenario. ICAR is a useful tool for climate change and weather forecast downscaling, particularly for orographic

  17. Modeling the human prothrombinase complex components

    NASA Astrophysics Data System (ADS)

    Orban, Tivadar

    Thrombin generation is the culminating stage of the blood coagulation process. Thrombin is obtained from prothrombin (the substrate) in a reaction catalyzed by the prothrombinase complex (the enzyme). The prothrombinase complex is composed of factor Xa (the enzyme) and factor Va (the cofactor), associated in the presence of calcium ions on a negatively charged cell membrane. Factor Xa, alone, can activate prothrombin to thrombin; however, the rate of conversion is not physiologically relevant for survival. Incorporation of factor Va into prothrombinase accelerates the rate of prothrombinase activity by 300,000-fold, and provides the physiological pathway of thrombin generation. The long-term goal of the current proposal is to provide the necessary support for advancing studies to design potential drug candidates that may be used to avoid development of deep venous thrombosis in high-risk patients. The short-term goals of the present proposal are (1) to propose a model of a mixed asymmetric phospholipid bilayer, (2) to expand the incomplete model of human coagulation factor Va and study its interaction with the phospholipid bilayer, (3) to create a homology model of prothrombin, and (4) to study the dynamics of interaction between prothrombin and the phospholipid bilayer.

  18. Rhabdomyomas and Tuberous sclerosis complex: our experience in 33 cases

    PubMed Central

    2014-01-01

    Background: Rhabdomyomas are the most common type of cardiac tumors in children. Anatomically, they can be considered as hamartomas. They are usually diagnosed incidentally, antenatally or postnatally, sometimes presenting in the neonatal period with haemodynamic compromise or severe arrhythmias, although most neonatal cases remain asymptomatic. Typically rhabdomyomas are multiple lesions and usually regress spontaneously, but they are often associated with tuberous sclerosis complex (TSC), an autosomal dominant multisystem disorder caused by mutations in either of the two genes, TSC1 or TSC2. Diagnosis of tuberous sclerosis is usually made on clinical grounds and eventually confirmed by a genetic test searching for TSC gene mutations. Methods: We report our experience on 33 cases affected with rhabdomyomas and diagnosed from January 1989 to December 2012, focusing on the cardiac outcome and on the association with the signs of tuberous sclerosis complex. We performed echocardiography using initially a Philips Sonos 2500 with a 7.5/5 probe and in the last 4 years a Philips IE33 with an S12-4 probe. We investigated the family history, brain, skin, kidney and retinal lesions, development of seizures, and neuropsychiatric disorders. Results: At diagnosis we detected 205 masses, mostly localized in the interventricular septum, right ventricle and left ventricle. Only in 4 babies (12%) did the presence of a mass cause a significant obstruction. A baby with an enormous septal rhabdomyoma, associated with multiple rhabdomyomas in both right and left ventricular walls, died just after birth due to severe heart failure. During follow-up we observed a reduction of rhabdomyomas in both number and size in all but one of the 32 surviving patients. Eight patients (24.2%) had an arrhythmia, and in 2 of these cases rhabdomyomas led to Wolff-Parkinson-White syndrome. In all patients the arrhythmia either disappeared spontaneously or was gradually reduced. With regard to the association with

  19. Lateral organization of complex lipid mixtures from multiscale modeling

    NASA Astrophysics Data System (ADS)

    Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.

    2010-02-01

    The organizational properties of complex lipid mixtures can give rise to functionally important structures in cell membranes. In model membranes, ternary lipid-cholesterol (CHOL) mixtures are often used as representative systems to investigate the formation and stabilization of localized structural domains ("rafts"). In this work, we describe a self-consistent mean-field model that builds on molecular dynamics simulations to incorporate multiple lipid components and to investigate the lateral organization of such mixtures. The model predictions reveal regions of bimodal order on ternary plots that are in good agreement with experiment. Specifically, we have applied the model to ternary mixtures composed of dioleoylphosphatidylcholine:18:0 sphingomyelin:CHOL. This work provides insight into the specific intermolecular interactions that drive the formation of localized domains in these mixtures. The model makes use of molecular dynamics simulations to extract interaction parameters and to provide chain configuration order parameter libraries.

  20. Structuring temporal sequences: comparison of models and factors of complexity.

    PubMed

    Essens, P

    1995-05-01

    Two stages for structuring tone sequences have been distinguished by Povel and Essens (1985). In the first, a mental clock segments a sequence into equal time units (clock model); in the second, intervals are specified in terms of subdivisions of these units. The present findings support the clock model in that it predicts human performance better than three other algorithmic models. Two further experiments in which clock and subdivision characteristics were varied did not support the hypothesized effect of the nature of the subdivisions on complexity. A model focusing on the variations in the beat-anchored envelopes of the tone clusters was proposed. Errors in reproduction suggest a dual-code representation comprising temporal and figural characteristics. The temporal part of the representation is based on the clock model but specifies, in addition, the metric of the level below the clock. The beat-tone-cluster envelope concept was proposed to specify the figural part. PMID:7596749

  1. Wind modelling over complex terrain using CFD

    NASA Astrophysics Data System (ADS)

    Avila, Matias; Owen, Herbert; Folch, Arnau; Prieto, Luis; Cosculluela, Luis

    2015-04-01

    The present work deals with the numerical CFD modelling of onshore wind farms in the context of High Performance Computing (HPC). The CFD model involves the numerical solution of the Reynolds-Averaged Navier-Stokes (RANS) equations together with a κ-ɛ turbulence model and the energy equation, specially designed for Atmospheric Boundary Layer (ABL) flows. The aim is to predict the wind velocity distribution over complex terrain, using a model that includes meteorological data assimilation, thermal coupling, forested canopy and Coriolis effects. The modelling strategy involves automatic mesh generation, terrain data assimilation and generation of boundary conditions for the inflow wind flow distribution up to the geostrophic height. The CFD model has been implemented in Alya, an HPC multi-physics parallel solver developed at the Barcelona Supercomputing Center, able to run on thousands of processors with optimal scalability. The implemented thermal stability and canopy physical model was developed by Sogachev in 2012. The k-ɛ equations are of nonlinear convection-diffusion-reaction type. The implemented numerical scheme consists of a stabilized finite element formulation based on the variational multiscale method, which is known to be stable for this kind of turbulence equation. We present a numerical formulation that stresses the robustness of the solution method, tackling common problems that produce instability. The iterative strategy and linearization scheme are discussed. The approach intends to avoid the possibility of having negative values of diffusion during the iterative process, which may lead to divergence of the scheme. These problems are addressed by acting on the coefficients of the reaction and diffusion terms and on the turbulent variables themselves. The k-ɛ equations are highly nonlinear. Complex terrain induces transient flow instabilities that may preclude the convergence of computer flow simulations based on steady state formulation of the
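
    Inflow boundary conditions for ABL RANS of this kind are commonly prescribed from neutral log-law profiles, with k and ε chosen consistently with the k-ε model constants (Richards-and-Hoxey-style expressions). The sketch below is a generic illustration of such an inflow specification, not the specific formulation implemented in Alya; the friction velocity and roughness length are invented.

```python
import numpy as np

KAPPA, C_MU = 0.41, 0.09  # von Karman constant and standard k-epsilon model constant

def abl_inflow(z, u_star=0.5, z0=0.1):
    """Neutral log-law inflow profiles for ABL RANS (illustrative, Richards-Hoxey style)."""
    u = (u_star / KAPPA) * np.log((z + z0) / z0)   # mean wind speed
    k = u_star**2 / np.sqrt(C_MU)                  # turbulent kinetic energy (constant with height)
    eps = u_star**3 / (KAPPA * (z + z0))           # dissipation rate
    return u, np.full_like(z, k), eps

heights = np.array([10.0, 50.0, 100.0, 500.0])
u, k, eps = abl_inflow(heights)
print(np.round(u, 2), np.round(k, 3), np.round(eps, 5))
```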

  2. Modeling the relational complexities of symptoms.

    PubMed

    Dolin, R H

    1994-12-01

    Realization of the value of reliable codified medical data is growing at a rapid rate. Symptom data in particular have been shown to be useful in decision analysis and in the determination of patient outcomes. Electronic medical record systems are emerging, and attempts are underway to define the structure and content of these systems to support the storage of all medical data. The underlying models upon which these systems are being built continue to be strengthened by a deeper understanding of the complex information they are to store. This report analyzes symptoms as they might be recorded in free text notes and presents a high-level conceptual data model representation of this domain. PMID:7869941

  3. In Vivo Experiments with Dental Pulp Stem Cells for Pulp-Dentin Complex Regeneration

    PubMed Central

    Kim, Sunil; Shin, Su-Jung; Song, Yunjung; Kim, Euiseong

    2015-01-01

    In recent years, many studies have examined the pulp-dentin complex regeneration with DPSCs. While it is important to perform research on cells, scaffolds, and growth factors, it is also critical to develop animal models for preclinical trials. The development of a reproducible animal model of transplantation is essential for obtaining precise and accurate data in vivo. The efficacy of pulp regeneration should be assessed qualitatively and quantitatively using animal models. This review article sought to introduce in vivo experiments that have evaluated the potential of dental pulp stem cells for pulp-dentin complex regeneration. According to a review of various researches about DPSCs, the majority of studies have used subcutaneous mouse and dog teeth for animal models. There is no way to know which animal model will reproduce the clinical environment. If an animal model is developed which is easier to use and is useful in more situations than the currently popular models, it will be a substantial aid to studies examining pulp-dentin complex regeneration. PMID:26688616

  4. Inexpensive Complex Hand Model Twenty Years Later.

    PubMed

    Frenger, Paul

    2015-01-01

    Twenty years ago the author unveiled his inexpensive complex hand model, which reproduced every motion of the human hand. A control system programmed in the Forth language operated its actuators and sensors. Follow-on papers for this popular project were next presented in Texas, Canada and Germany. From this hand grew the author’s meter-tall robot (nicknamed ANNIE: Android With Neural Networks, Intellect and Emotions). It received machine vision, facial expressiveness, speech synthesis and speech recognition; a simian version also received a dexterous ape foot. New artificial intelligence features included op-amp neurons for OCR and simulated emotions, hormone emulation, endocannabinoid receptors, fear-trust-love mechanisms, a Grandmother Cell recognizer and artificial consciousness. Simulated illnesses included narcotic addiction, autism, PTSD, fibromyalgia and Alzheimer’s disease. The author gave 13 robotics-AI presentations at NASA in Houston since 2006. A meter-tall simian robot was proposed with gripping hand-feet for use with space vehicles and to explore distant planets and moons. Also proposed were: intelligent motorized exoskeletons for astronaut force multiplication; a cognitive prosthesis to detect and alleviate decreased crew mental performance; and a gynoid robot medic to tend astronauts in deep space missions. What began as a complex hand model evolved into an innovative robot-AI within two decades. PMID:25996742

  5. Complex Educational Design: A Course Design Model Based on Complexity

    ERIC Educational Resources Information Center

    Freire, Maximina Maria

    2013-01-01

    Purpose: This article aims at presenting a conceptual framework which, theoretically grounded on complexity, provides the basis to conceive of online language courses that intend to respond to the needs of students and society. Design/methodology/approach: This paper is introduced by reflections on distance education and on the paradigmatic view…

  6. An experiment with interactive planning models

    NASA Technical Reports Server (NTRS)

    Beville, J.; Wagner, J. H.; Zannetos, Z. S.

    1970-01-01

    Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investments and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management, and predicted uses of the models are as a central tool, as an educational tool, to improve consistency in decision making, to improve communications, and as a tool for consensus decision making.

  7. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Astrophysics Data System (ADS)

    Befrui, Bizhan A.

    1995-03-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  8. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Technical Reports Server (NTRS)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  9. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1995-01-01

    This program started in February 1991, and is designed to improve our understanding of basic combustion phenomena by the modeling of various configurations undergoing experimental study by others. Results through 1992 were reported in the second workshop. Work since that time has examined the following topics: Flame-balls; Intrinsic and acoustic instabilities in multiphase mixtures; Radiation effects in premixed combustion; Smouldering, both forward and reverse, as well as two-dimensional smoulder.

  10. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    NASA Astrophysics Data System (ADS)

    Zerkle, Ronald D.; Prakash, Chander

    1995-03-01

    This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  11. The Database for Reaching Experiments and Models

    PubMed Central

    Walker, Ben; Kording, Konrad

    2013-01-01

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis. PMID:24244351

  12. Using Perspective to Model Complex Processes

    SciTech Connect

    Kelsey, R.L.; Bisset, K.R.

    1999-04-04

    The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.

  13. Modeling the respiratory chain complexes with biothermokinetic equations - the case of complex I.

    PubMed

    Heiske, Margit; Nazaret, Christine; Mazat, Jean-Pierre

    2014-10-01

    The mitochondrial respiratory chain plays a crucial role in energy metabolism and its dysfunction is implicated in a wide range of human diseases. In order to understand the global expression of local mutations in the rate of oxygen consumption or in the production of adenosine triphosphate (ATP), it is useful to have a mathematical model in which the changes in a given respiratory complex are properly modeled. Our aim in this paper is to provide thermodynamics-respecting and structurally simple equations to represent the kinetics of each isolated complex which can, assembled in a dynamical system, also simulate the behavior of the respiratory chain, as a whole, under a large set of different physiological and pathological conditions. Using the example of the reduced nicotinamide adenine dinucleotide (NADH)-ubiquinol-oxidoreductase (complex I), we analyze the suitability of different types of rate equations. Based on our kinetic experiments we show that very simple rate laws, such as those often used in many respiratory chain models, fail to describe the kinetic behavior when applied to a wide concentration range. This led us to adapt rate equations containing the essential parameters of enzyme kinetics, maximal velocities and Henri-Michaelis-Menten-like constants (KM and KI), to satisfactorily simulate these data. PMID:25064016
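
    As an illustration of the kind of rate law discussed in this abstract (a reversible expression built from maximal velocities and Henri-Michaelis-Menten-like constants), the sketch below evaluates a generic two-substrate reversible rate equation for a complex I-like reaction. The functional form and all parameter values are illustrative assumptions, not the equations or data fitted in the paper.

```python
import numpy as np

def reversible_mm_rate(nadh, q, nad, qh2,
                       vmax_f=1.0, vmax_r=0.05,
                       km_nadh=0.1, km_q=0.02,
                       km_nad=0.6, km_qh2=0.03):
    """Generic reversible Henri-Michaelis-Menten-type rate law for a
    two-substrate / two-product reaction such as NADH + Q -> NAD+ + QH2.

    All constants are illustrative placeholders (mM, arbitrary time unit),
    not values from the cited study.
    """
    num = vmax_f * (nadh / km_nadh) * (q / km_q) \
        - vmax_r * (nad / km_nad) * (qh2 / km_qh2)
    den = (1 + nadh / km_nadh + nad / km_nad) * (1 + q / km_q + qh2 / km_qh2)
    return num / den

# Example: evaluate the rate across a wide NADH concentration range
for nadh in [0.001, 0.01, 0.1, 1.0]:
    print(nadh, reversible_mm_rate(nadh, q=0.1, nad=0.5, qh2=0.01))
```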

  14. Mathematical modelling of complex contagion on clustered networks

    NASA Astrophysics Data System (ADS)

    O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
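
    A minimal simulation helps make the clustered-versus-random contrast concrete. The sketch below implements a threshold contagion (adopt once at least two neighbours have adopted) on a ring lattice and on a heavily rewired variant of the same graph, using networkx. The graph sizes, seeds, step limit and threshold are arbitrary choices for illustration, not the networks or the analytical method of the paper.

```python
import networkx as nx

def complex_contagion(G, seeds, threshold=2, steps=200):
    """Threshold ('complex contagion') spread: a node adopts once at least
    `threshold` of its neighbours have adopted. Returns the final adopter set."""
    adopted = set(seeds)
    for _ in range(steps):
        new = {n for n in G.nodes
               if n not in adopted
               and sum(nb in adopted for nb in G.neighbors(n)) >= threshold}
        if not new:
            break
        adopted |= new
    return adopted

# Same size and degree; p=0 keeps the clustered ring lattice, p=0.9 rewires it
clustered = nx.watts_strogatz_graph(1000, 6, 0.0, seed=1)
randomish = nx.watts_strogatz_graph(1000, 6, 0.9, seed=1)

seeds = list(range(6))  # a small contiguous group of initial adopters
print("clustered lattice:", len(complex_contagion(clustered, seeds)))
print("rewired (random-like):", len(complex_contagion(randomish, seeds)))
```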

  15. CALIBRATION OF SUBSURFACE BATCH AND REACTIVE-TRANSPORT MODELS INVOLVING COMPLEX BIOGEOCHEMICAL PROCESSES

    EPA Science Inventory

    In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...

  16. Capturing the experiences of patients across multiple complex interventions: a meta-qualitative approach

    PubMed Central

    Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn

    2015-01-01

    Objectives The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives drawing from the qualitative evaluation of 5 complex healthcare interventions. Setting Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. Participants We included 62 interviews from 44 patients and 18 non-clinical caregivers. Intervention Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. Outcome measures This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. Results We identified 5 broad themes that capture the patients’ experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients’ experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Conclusions Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach thus offers an opportunity for cumulative

  17. Geomorphological experiments for understanding cross-scale complexity of earth surface processes

    NASA Astrophysics Data System (ADS)

    Seeger, Manuel

    2016-04-01

    The shape of the earth's surface is the result of a complex interaction of different processes at different spatial and temporal scales. The challenge is that direct process observation is rarely possible across these different scales, and the resulting landform often does not match the scale at which processes can be observed. Yet identifying and understanding the processes involved, and their interactions, is indispensable for developing concepts of landform formation, and developing models further requires that the processes and their relevant parameters be quantified. Experiments can bridge these observational constraints: they make it possible to observe and quantify individual processes as well as complex process combinations, up to the development of geomorphological units. This contribution aims to show, based on soil erosion research, how experimental methods can contribute to the understanding of geomorphological processes. Special emphasis is placed on the linkage between the conceptual understanding of processes, their measurement, and the subsequent development of models. The development of experiments to quantify relevant parameters is presented, as well as the steps taken to bring them into the field, taking into account the resulting increase of uncertainty in system parameters and results. It is shown that experiments are nevertheless able to produce precise measurements of individual processes as well as of complex combinations of parameters and processes, and to identify their influence on the overall geomorphological dynamics. Experiments are therefore a methodological package for examining complex soil erosion processes at different levels of conceptualization and for generating data for their quantification, and thus a methodological concept that deserves wider use and further development in geomorphological science.

  18. Ants (Formicidae): models for social complexity.

    PubMed

    Smith, Chris R; Dolezal, Adam; Eliyahu, Dorit; Holbrook, C Tate; Gadau, Jürgen

    2009-07-01

    The family Formicidae (ants) is composed of more than 12,000 described species that vary greatly in size, morphology, behavior, life history, ecology, and social organization. Ants occur in most terrestrial habitats and are the dominant animals in many of them. They have been used as models to address fundamental questions in ecology, evolution, behavior, and development. The literature on ants is extensive, and the natural history of many species is known in detail. Phylogenetic relationships for the family, as well as within many subfamilies, are known, enabling comparative studies. Their ease of sampling and ecological variation makes them attractive for studying populations and questions relating to communities. Their sociality and variation in social organization have contributed greatly to an understanding of complex systems, division of labor, and chemical communication. Ants occur in colonies composed of tens to millions of individuals that vary greatly in morphology, physiology, and behavior; this variation has been used to address proximate and ultimate mechanisms generating phenotypic plasticity. Relatedness asymmetries within colonies have been fundamental to the formulation and empirical testing of kin and group selection theories. Genomic resources have been developed for some species, and a whole-genome sequence for several species is likely to follow in the near future; comparative genomics in ants should provide new insights into the evolution of complexity and sociogenomics. Future studies using ants should help establish a more comprehensive understanding of social life, from molecules to colonies. PMID:20147200

  19. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  20. 40 CFR 80.45 - Complex emissions model.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    40 CFR 80.45 (Protection of Environment; Regulation of Fuels and Fuel Additives, Reformulated Gasoline) sets out the complex emissions model, beginning with definitions of the terms (e.g., OXY) used for a fuel being evaluated for its emissions performance under the complex model.

  1. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is little guidance on how much data is required to justify a certain level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) Effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amount of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
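
    To illustrate the Bayesian model selection machinery referred to here, the sketch below converts hypothetical log model evidences for four competing parameterizations into posterior model weights. The evidence values are invented for illustration; computing the evidences themselves (e.g. by Monte Carlo integration over each model's parameter space) is the expensive step and is not shown.

```python
import numpy as np

def model_weights_from_log_evidence(log_evidences, prior=None):
    """Posterior model weights from log Bayesian model evidence values,
    computed in a numerically stable way (log-sum-exp style)."""
    log_ev = np.asarray(log_evidences, dtype=float)
    if prior is None:
        prior = np.full(log_ev.size, 1.0 / log_ev.size)  # uniform model prior
    log_post = log_ev + np.log(prior)
    log_post -= log_post.max()        # stabilise the exponentials
    w = np.exp(log_post)
    return w / w.sum()

# Hypothetical log-evidences for the four parameterizations
# (homogeneous, zonation, pilot points, geostatistical) -- made-up numbers.
print(model_weights_from_log_evidence([-420.0, -395.0, -398.0, -401.0]))
```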

  2. Modeling Choice and Valuation in Decision Experiments

    ERIC Educational Resources Information Center

    Loomes, Graham

    2010-01-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for…

  3. Experiments with models committees for flow forecasting

    NASA Astrophysics Data System (ADS)

    Ye, J.; Kayastha, N.; van Andel, S. J.; Fenicia, F.; Solomatine, D. P.

    2012-04-01

    In hydrological modelling, a single model accounting for all possible hydrological loads, seasons and regimes is typically used. We argue, however, that if a model is not complex enough (as is the case for conceptual or semi-distributed models), a single model can hardly capture all facets of a complex process, and hence more flexible modelling architectures are required. One possibility is to build several specialized models and make each responsible for a particular sub-process; the output is then a combination of the outputs of the individual models. In machine learning this approach is widely applied: several learning models are combined in a committee (where each model has a "voting" right with a particular weight). In this presentation we concentrate on optimising this process of building a model committee, considering (a) various ways of building the individual specialized models (mainly by calibrating them on subsets of data and regimes corresponding to hydrological sub-processes), and (b) various ways of combining their outputs (using the ideas of a fuzzy committee with various parameterisations). In doing so, we extend the approaches developed in [1, 2] and present new results. We consider this problem in a multi-objective optimization setting (where the objective functions correspond to different hydrological regimes), leading to a number of Pareto-optimal model combinations from which the most appropriate for a given task can be chosen. Applications of the presented approach to flow forecasting are presented.
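
    As a toy illustration of combining specialized models in a committee with soft weights, the sketch below blends a "low-flow" and a "high-flow" model using a logistic membership function of the current flow. The weighting scheme, threshold and data are invented for illustration and are not the fuzzy-committee parameterisations of the cited work.

```python
import numpy as np

def fuzzy_committee(q_low, q_high, regime_indicator, threshold=50.0, sharpness=0.1):
    """Combine two specialized model outputs with soft, regime-dependent weights.

    q_low, q_high: outputs of the low-flow and high-flow specialized models.
    regime_indicator: a variable (e.g. recent flow) used to judge the regime.
    """
    # logistic membership of the 'high flow' regime
    w_high = 1.0 / (1.0 + np.exp(-sharpness * (regime_indicator - threshold)))
    return (1.0 - w_high) * q_low + w_high * q_high

# Hypothetical outputs of two specialized models over 5 time steps
q_low_model  = np.array([10.0, 12.0, 30.0,  80.0, 120.0])
q_high_model = np.array([ 8.0, 11.0, 35.0,  95.0, 140.0])
recent_flow  = np.array([ 9.0, 11.0, 33.0,  90.0, 130.0])

print(fuzzy_committee(q_low_model, q_high_model, recent_flow))
```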

  4. Comparison of Two Pasture Growth Models of Differing Complexity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two pasture growth models that share many common features but differ in model complexity have been developed for incorporation into the Integrated Farm System Model (IFSM). Major differences between models include the explicit representation of roots in the more complex model, and their effects on c...

  5. Troposphere-lower-stratosphere connection in an intermediate complexity model.

    NASA Astrophysics Data System (ADS)

    Ruggieri, Paolo; King, Martin; Kucharski, Fred; Buizza, Roberto; Visconti, Guido

    2016-04-01

    The dynamical coupling between the troposphere and the lower stratosphere has been investigated using a low-top, intermediate complexity model provided by the Abdus Salam International Centre for Theoretical Physics (SPEEDY). The key question that we wanted to address is whether a simple model like SPEEDY can be used to understand troposphere-stratosphere interactions, e.g. forced by changes of sea-ice concentration in polar arctic regions. Three sets of experiments have been performed. Firstly, a potential vorticity perspective has been applied to understand the wave-like forcing of the troposphere on the stratosphere and to provide quantitative information on the sub-seasonal variability of the coupling. Then, the zonally asymmetric, near-surface response to a lower-stratospheric forcing has been analysed in a set of forced experiments with an artificial heating imposed in the extra-tropical lower stratosphere. Finally, the lower-stratosphere response sensitivity to tropospheric initial conditions has been examined. Results indicate how SPEEDY captures the physics of the troposphere-stratosphere connection but also show the lack of stratospheric variability. Results also suggest that intermediate-complexity models such as SPEEDY could be used to investigate the effects that surface forcing (e.g. due to sea-ice concentration changes) have on the troposphere and the lower stratosphere.

  6. An Experiment on Isomerism in Metal-Amino Acid Complexes.

    ERIC Educational Resources Information Center

    Harrison, R. Graeme; Nolan, Kevin B.

    1982-01-01

    Background information, laboratory procedures, and discussion of results are provided for syntheses of cobalt (III) complexes, I-III, illustrating three possible bonding modes of glycine to a metal ion (the complex cations II and III being linkage/geometric isomers). Includes spectrophotometric and potentiometric methods to distinguish among the…

  7. Microwave scattering models and basic experiments

    NASA Technical Reports Server (NTRS)

    Fung, Adrian K.

    1989-01-01

    Progress is summarized which has been made in four areas of study: (1) scattering model development for sparsely populated media, such as a forested area; (2) scattering model development for dense media, such as a sea ice medium or a snow covered terrain; (3) model development for randomly rough surfaces; and (4) design and conduct of basic scattering and attenuation experiments suitable for the verification of theoretical models.

  8. Analytical models for complex swirling flows

    NASA Astrophysics Data System (ADS)

    Borissov, A.; Hussain, V.

    1996-11-01

    We develop a new class of analytical solutions of the Navier-Stokes equations for swirling flows, and suggest ways to predict and control such flows occurring in various technological applications. We view momentum accumulation on the axis as a key feature of swirling flows and consider vortex-sink flows on curved axisymmetric surfaces with an axial flow. We show that these solutions model swirling flows in a cylindrical can, whirlpools, tornadoes, and cosmic swirling jets. The singularity of these solutions on the flow axis is removed by matching them with near-axis Schlichting and Long's swirling jets. The matched solutions model flows with very complex patterns, consisting of up to seven separation regions with recirculatory 'bubbles' and vortex rings. We apply the matched solutions to compute flows in the Ranque-Hilsch tube, in the meniscus of electrosprays, in vortex breakdown, and in an industrial vortex burner. The simple analytical solutions allow a clear understanding of how different control parameters affect the flow and guide selection of optimal parameter values for desired flow features. These solutions permit extension to other problems (such as heat transfer and chemical reaction) and have the potential of being significantly useful for further detailed investigation by direct or large-eddy numerical simulations as well as laboratory experimentation.

  9. Using ecosystem experiments to improve vegetation models

    NASA Astrophysics Data System (ADS)

    Medlyn, Belinda E.; Zaehle, Sönke; de Kauwe, Martin G.; Walker, Anthony P.; Dietze, Michael C.; Hanson, Paul J.; Hickler, Thomas; Jain, Atul K.; Luo, Yiqi; Parton, William; Prentice, I. Colin; Thornton, Peter E.; Wang, Shusen; Wang, Ying-Ping; Weng, Ensheng; Iversen, Colleen M.; McCarthy, Heather R.; Warren, Jeffrey M.; Oren, Ram; Norby, Richard J.

    2015-06-01

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model-Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions causing differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. Here, we explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  10. A Model Study of Complex Behavior in the Belousov-Zhabotinskii Reaction.

    NASA Astrophysics Data System (ADS)

    Lindberg, David Mark

    1988-12-01

    We have studied the complex oscillatory behavior in a model of the Belousov-Zhabotinskii (BZ) reaction in a continuously-fed stirred tank reactor (CSTR). The model consisted of a set of nonlinear ordinary differential equations derived from a reduced mechanism of the chemical system. These equations were integrated numerically on a computer, which yielded the concentrations of the constituent chemicals as functions of time. In addition, solutions were tracked as functions of a single parameter, the stability of the solutions was determined, and bifurcations of the solutions were located and studied. The intent of this study was to use this BZ model to explore further a region of complex oscillatory behavior found in experimental investigations, the most thorough of which revealed an alternating periodic-chaotic (P-C) sequence of states. A P-C sequence was discovered in the model which showed the same qualitative features as the experimental sequence. In order to better understand the P-C sequence, a detailed study was conducted in the vicinity of the P-C sequence, with two experimentally accessible parameters as control variables. This study mapped out the bifurcation sets, and included examination of the dynamics of the stable periodic, unstable periodic, and chaotic oscillatory motion. Observations made from the model results revealed a rough symmetry which suggests a new way of looking at the P-C sequence. Other nonlinear phenomena uncovered in the model were boundary and interior crises, several codimension-two bifurcations, and similarities in the shapes of areas of stability for periodic orbits in two-parameter space. Each earlier model study of this complex region involved only a limited one-parameter scan and had limited success in producing agreement with experiments. In contrast, for those regions of complex behavior that have been studied experimentally, the observations agree qualitatively with our model results. Several new predictions of the model
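
    The reduced BZ mechanism used in this study is not reproduced here, but the classic three-variable Oregonator with a simple inflow/outflow term is a common stand-in for BZ dynamics in a CSTR. The sketch below integrates such a system; the scaling, feed composition and parameter values are illustrative assumptions only, not those of the cited model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def oregonator_cstr(t, c, eps=0.1, q=8e-4, f=1.0, k0=0.02, feed=(1.0, 0.0, 0.0)):
    """Scaled three-variable Oregonator with a simple CSTR in/outflow term.
    Variables x, y, z are dimensionless surrogates for HBrO2, Br-, and Ce(IV)."""
    x, y, z = c
    dx = (q * y - x * y + x * (1.0 - x)) / eps
    dy = -q * y - x * y + f * z
    dz = x - z
    inflow = k0 * (np.array(feed) - c)   # continuous feed and washout
    return np.array([dx, dy, dz]) + inflow

sol = solve_ivp(oregonator_cstr, (0.0, 200.0), [0.1, 0.1, 0.1],
                method="LSODA", max_step=0.05)
print(sol.y[:, -1])   # state after the initial transient
```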

  11. A Hadronization Model for the MINOS Experiment

    NASA Astrophysics Data System (ADS)

    Yang, T.; Andreopoulos, C.; Gallagher, H.; Kehayias, P.

    2007-12-01

    We present a detailed description of the Andreopoulos-Gallagher-Kehayias-Yang (AGKY) hadronic multiparticle production model. This model was developed within the context of the MINOS experiment [4]. Its validity spans a wide invariant mass range starting from as low as the pion production threshold. It exhibits satisfactory agreement with a wide variety of experimental data.

  12. Epigenetics of complex diseases: from general theory to laboratory experiments.

    PubMed

    Schumacher, A; Petronis, A

    2006-01-01

    Despite significant effort, understanding the causes and mechanisms of complex non-Mendelian diseases remains a key challenge. Although numerous molecular genetic linkage and association studies have been conducted in order to explain the heritable predisposition to complex diseases, the resulting data are quite often inconsistent and even controversial. In a similar way, identification of environmental factors causal to a disease is difficult. In this article, a new interpretation of the paradigm of "genes plus environment" is presented in which the emphasis is shifted to epigenetic misregulation as a major etiopathogenic factor. Epigenetic mechanisms are consistent with various non-Mendelian irregularities of complex diseases, such as the existence of clinically indistinguishable sporadic and familial cases, sexual dimorphism, relatively late age of onset and peaks of susceptibility to some diseases, discordance of monozygotic twins and major fluctuations on the course of disease severity. It is also suggested that a substantial portion of phenotypic variance that traditionally has been attributed to environmental effects may result from stochastic epigenetic events in the cell. It is argued that epigenetic strategies, when applied in parallel with the traditional genetic ones, may significantly advance the discovery of etiopathogenic mechanisms of complex diseases. The second part of this chapter is dedicated to a review of laboratory methods for DNA methylation analysis, which may be useful in the study of complex diseases. In this context, epigenetic microarray technologies are emphasized, as it is evident that such technologies will significantly advance epigenetic analyses in complex diseases. PMID:16909908

  13. Reduced Complexity Modeling (RCM): toward more use of less

    NASA Astrophysics Data System (ADS)

    Paola, Chris; Voller, Vaughan

    2014-05-01

    Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem like a way to test systems for this property but to our knowledge this has not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion - independence of the process in question from details of the underlying physics - underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its

  14. Argonne Bubble Experiment Thermal Model Development

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  15. STELLA Experiment: Design and Model Predictions

    SciTech Connect

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1998-07-05

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (IFEL) will serve as a prebuncher to generate ~1 µm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the IFEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented, including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  16. Surface complexation model of uranyl sorption on Georgia kaolinite

    USGS Publications Warehouse

    Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.

    2004-01-01

    The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 µmol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite. © 2004 Elsevier B.V. All rights reserved.

  17. Using Ecosystem Experiments to Improve Vegetation Models

    SciTech Connect

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; Prentice, I. Collin; Thornton, Peter E.; Wang, Shusen; Wang, Yingping; Weng, Ensheng; Iversen, Colleen M.; McCarthy, Heather R.; Warren, Jeffrey; Oren, Ram; Norby, Richard J

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions causing differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  18. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGESBeta

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; et al

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions causing differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  19. Experiences with two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Singhal, Ashok K.; Lai, Yong G.; Avva, Ram K.

    1995-01-01

    This viewgraph presentation discusses the following: introduction to CFD Research Corporation; experiences with two-equation models - models used, numerical difficulties, validation and applications, and strengths and weaknesses; and answers to three questions posed by the workshop organizing committee - what are your customers telling you, what are you doing in-house, and how can NASA-CMOTT (Center for Modeling of Turbulence and Transition) help.

  20. Modeling competitive substitution in a polyelectrolyte complex

    SciTech Connect

    Peng, B.; Muthukumar, M.

    2015-12-28

    We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain, by another longer polyanion chain, using the coarse-grained united atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain is required to be sufficiently longer than the chain being displaced for effecting the substitution. Yet, having the invading chain to be longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
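
    For readers unfamiliar with the methodology, the sketch below shows a minimal overdamped Langevin (Brownian dynamics) update applied to a simple bead-spring chain. The harmonic bond force field, parameters and chain length are placeholders; this is not the coarse-grained united atom model, electrostatics or Langevin thermostat settings used in the paper.

```python
import numpy as np

def langevin_step(pos, forces, dt=0.005, friction=1.0, kT=1.0, rng=None):
    """One overdamped Langevin (Brownian dynamics) update for bead positions.
    pos, forces: (N, 3) arrays; a minimal sketch, not the paper's force field."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(scale=np.sqrt(2.0 * kT * dt / friction), size=pos.shape)
    return pos + dt * forces / friction + noise

def harmonic_bond_forces(pos, k=100.0, r0=1.0):
    """Forces from harmonic bonds along a linear bead-spring chain."""
    forces = np.zeros_like(pos)
    bond = pos[1:] - pos[:-1]
    dist = np.linalg.norm(bond, axis=1, keepdims=True)
    f = k * (dist - r0) * bond / dist     # force on the 'left' bead of each bond
    forces[:-1] += f
    forces[1:] -= f
    return forces

rng = np.random.default_rng(0)
pos = np.cumsum(rng.normal(size=(20, 3)) * 0.5, axis=0)   # a random 20-bead chain
for _ in range(1000):
    pos = langevin_step(pos, harmonic_bond_forces(pos), rng=rng)
print("end-to-end distance:", np.linalg.norm(pos[-1] - pos[0]))
```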

  1. Newton and Colour: The Complex Interplay of Theory and Experiment.

    ERIC Educational Resources Information Center

    Martins, Roberto De Andrade; Silva, Cibelle Celestino

    2001-01-01

    Elucidates some aspects of Newton's theory of light and colors, specifically as presented in his first optical paper in 1672. Analyzes Newton's main experiments intended to show that light is a mixture of rays with different refrangibilities. (SAH)

  2. Management of complex immunogenetics information using an enhanced relational model.

    PubMed

    Barsalou, T; Sujansky, W; Herzenberg, L A; Wiederhold, G

    1991-10-01

    Flow cytometry has become a technique of paramount importance in the armamentarium of the scientist in such domains as immunogenetics. In the PENGUIN project, we are currently developing the architecture for an expert database system to facilitate the design of flow-cytometry experiments. This paper describes the core of this architecture--a methodology for managing complex biomedical information in an extended relational framework. More specifically, we exploit a semantic data model to enhance relational databases with structuring and manipulation tools that take more domain information into account and provide the user with an appropriate level of abstraction. We present specific applications of the structural model to database schema management, data retrieval and browsing, and integrity maintenance. PMID:1743006

  3. Industrial processing of complex fluids: Formulation and modeling

    SciTech Connect

    Scovel, J.C.; Bleasdale, S.; Forest, G.M.; Bechtel, S.

    1997-08-01

    The production of many important commercial materials involves the evolution of a complex fluid through a cooling phase into a hardened product. Textile fibers, high-strength fibers (KEVLAR, VECTRAN), plastics, chopped-fiber compounds, and fiber optical cable are such materials. Industry desires to replace experiments with on-line, real time models of these processes. Solutions to the problems are not just a matter of technology transfer, but require a fundamental description and simulation of the processes. Goals of the project are to develop models that can be used to optimize macroscopic properties of the solid product, to identify sources of undesirable defects, and to seek boundary-temperature and flow-and-material controls to optimize desired properties.

  4. The Teaching-Upbringing Complex: Experience, Problems, Prospects.

    ERIC Educational Resources Information Center

    Vul'fov, B. Z.; And Others

    1990-01-01

    Describes the teaching-upbringing complex (UVK), a new type of Soviet school that attempts to deal with raising and educating children in an integrated manner. Stresses combining required subjects with students' special interests to encourage student achievement and teacher involvement. Concentrates on the development of self-expression and…

  5. Clinical complexity in medicine: A measurement model of task and patient complexity

    PubMed Central

    Islam, R.; Weir, C.; Fiol, G. Del

    2016-01-01

    Summary Background Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective The objective of this paper is to develop an integrated approach to understand and measure clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods Three clinical Infectious Disease teams were observed, audio-recorded and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial round of coding and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen’s kappa. Results The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
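
    The abstract reports inter-rater reliability via Cohen's kappa; a minimal sketch of that calculation is shown below, using invented rater labels rather than the study's actual codebook or transcripts.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical complexity-attribute codes assigned by two raters to the same
# transcript segments (labels are illustrative, not the study's codebook).
rater_a = ["task", "patient", "task", "task", "patient", "none", "task", "patient"]
rater_b = ["task", "patient", "task", "none", "patient", "none", "task", "task"]

print("Cohen's kappa:", cohen_kappa_score(rater_a, rater_b))
```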

  6. Multicomponent reactive transport modeling of uranium bioremediation field experiments

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Morrison, Stan J.; Amonette, James E.; Long, Philip E.

    2009-10-15

    Biostimulation field experiments with acetate amendment are being performed at a former uranium mill tailings site in Rifle, Colorado, to investigate subsurface processes controlling in situ bioremediation of uranium-contaminated groundwater. An important part of the research is identifying and quantifying field-scale models of the principal terminal electron-accepting processes (TEAPs) during biostimulation and the consequent biogeochemical impacts to the subsurface receiving environment. Integrating abiotic chemistry with the microbially mediated TEAPs in the reaction network brings into play geochemical observations (e.g., pH, alkalinity, redox potential, major ions, and secondary minerals) that the reactive transport model must recognize. These additional constraints provide for a more systematic and mechanistic interpretation of the field behaviors during biostimulation. The reaction network specification developed for the 2002 biostimulation field experiment was successfully applied without additional calibration to the 2003 and 2007 field experiments. The robustness of the model specification is significant in that 1) the 2003 biostimulation field experiment was performed with 3 times higher acetate concentrations than the previous biostimulation in the same field plot (i.e., the 2002 experiment), and 2) the 2007 field experiment was performed in a new unperturbed plot on the same site. The biogeochemical reactive transport simulations accounted for four TEAPs, two distinct functional microbial populations, two pools of bioavailable Fe(III) minerals (iron oxides and phyllosilicate iron), uranium aqueous and surface complexation, mineral precipitation, and dissolution. The conceptual model for bioavailable iron reflects recent laboratory studies with sediments from the Old Rifle Uranium Mill Tailings Remedial Action (UMTRA) site that demonstrated that the bulk (~90%) of Fe(III) bioreduction is associated with the phyllosilicates rather than the iron oxides

  7. Graduate Social Work Education and Cognitive Complexity: Does Prior Experience Really Matter?

    ERIC Educational Resources Information Center

    Simmons, Chris

    2014-01-01

    This study examined the extent to which age, education, and practice experience among social work graduate students (N = 184) predicted cognitive complexity, an essential aspect of critical thinking. In the regression analysis, education accounted for more of the variance associated with cognitive complexity than age and practice experience. When…

  8. IDMS: inert dark matter model with a complex singlet

    NASA Astrophysics Data System (ADS)

    Bonilla, Cesar; Sokolowska, Dorota; Darvishi, Neda; Diaz-Cruz, J. Lorenzo; Krawczyk, Maria

    2016-06-01

    We study an extension of the inert doublet model (IDM) that includes an extra complex scalar singlet, which we call the IDMS. In this model there are three Higgs particles, among them a SM-like Higgs particle, and the lightest neutral scalar, from the inert sector, remains a viable dark matter (DM) candidate. We assume a non-zero complex vacuum expectation value for the singlet, so that the visible sector can introduce extra sources of CP violation. We construct the scalar potential of the IDMS, assuming an exact Z2 symmetry, with the new singlet being Z2-even, as well as a softly broken U(1) symmetry, which allows a reduced number of free parameters in the potential. In this paper we explore the foundations of the model, in particular the masses and interactions of scalar particles for a few benchmark scenarios. Constraints from collider physics, in particular from the Higgs signal observed at the Large Hadron Collider with Mh ≈ 125 GeV, as well as constraints from the DM experiments, such as relic density measurements and direct detection limits, are included in the analysis. We observe significant differences with respect to the IDM in relic density values from additional annihilation channels, interference and resonance effects due to the extended Higgs sector.

  9. Modeling the Propagation of Mobile Phone Virus under Complex Network

    PubMed Central

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper the propagation of mobile phone viruses is modeled to understand how particular factors can affect propagation and to design effective containment strategies to suppress mobile phone viruses. Two different propagation models of mobile phone viruses under the complex network are proposed in this paper. One is intended to describe the propagation of user-tricking viruses, and the other is to describe the propagation of vulnerability-exploiting viruses. Based on the traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis is conducted to analyze the propagation models. Through analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, the numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of two different mobile phone viruses, respectively. PMID:25133209
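
    As a baseline for the kind of epidemic dynamics these propagation models build on, the sketch below integrates a simple homogeneous-mixing susceptible-infected model with a patching (removal) term. It ignores network topology entirely, which is precisely the ingredient the paper adds; the rates and initial conditions are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import odeint

def si_with_patching(y, t, beta=0.3, delta=0.05):
    """SIS-like baseline: infection at rate beta*S*I, removal (patching or
    virus cleanup) at rate delta*I. Homogeneous mixing only."""
    s, i = y
    new_inf = beta * s * i
    return [-new_inf + delta * i, new_inf - delta * i]

t = np.linspace(0.0, 100.0, 500)
s0, i0 = 0.999, 0.001          # fractions of susceptible / infected phones
traj = odeint(si_with_patching, [s0, i0], t)
print("infected fraction at t=100:", traj[-1, 1])
```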

  10. The Effect of Complex Formation upon the Redox Potentials of Metallic Ions. Cyclic Voltammetry Experiments.

    ERIC Educational Resources Information Center

    Ibanez, Jorge G.; And Others

    1988-01-01

    Describes experiments in which students prepare in situ soluble complexes of metal ions with different ligands and observe and estimate the change in formal potential that the ion undergoes upon complexation. Discusses student formation and analysis of soluble complexes of two different metal ions with the same ligand. (CW)

  11. Complexation Effect on Redox Potential of Iron(III)-Iron(II) Couple: A Simple Potentiometric Experiment

    ERIC Educational Resources Information Center

    Rizvi, Masood Ahmad; Syed, Raashid Maqsood; Khan, Badruddin

    2011-01-01

    A titration curve with multiple inflection points results when a mixture of two or more reducing agents with sufficiently different reduction potentials are titrated. In this experiment iron(II) complexes are combined into a mixture of reducing agents and are oxidized to the corresponding iron(III) complexes. As all of the complexes involve the…

  12. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
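
    A minimal sketch of the statistical-modeling idea follows: train a random forest on hub-height wind speed alone and on an extended predictor set, then compare predictive skill. The data are synthetic stand-ins generated for illustration, not the Altamont Pass measurements, and the random forest is only one of the three model classes compared in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: hub-height wind speed, wind speed aloft, and a
# stability proxy predicting normalized power output.
rng = np.random.default_rng(0)
n = 2000
ws_hub = rng.uniform(2, 20, n)                    # m/s at hub height
ws_upper = ws_hub * rng.normal(1.1, 0.05, n)      # m/s higher on the profile
stability = rng.normal(0.0, 1.0, n)               # e.g. a bulk stability proxy
power = np.clip((ws_hub / 12.0) ** 3, 0, 1) * (1 + 0.05 * stability) \
        + rng.normal(0, 0.02, n)

X = np.column_stack([ws_hub, ws_upper, stability])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)

hub_only = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr[:, :1], y_tr)
full = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2, hub-height wind speed only:", hub_only.score(X_te[:, :1], y_te))
print("R^2, full predictor set:        ", full.score(X_te, y_te))
```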

  13. Assessing the experience in complex hepatopancreatobiliary surgery among graduating chief residents: Is the operative experience enough?

    PubMed Central

    Sachs, Teviah E.; Ejaz, Aslam; Weiss, Matthew; Spolverato, Gaya; Ahuja, Nita; Makary, Martin A.; Wolfgang, Christopher L.; Hirose, Kenzo; Pawlik, Timothy M.

    2015-01-01

    Introduction Resident operative autonomy and case volume are associated with posttraining confidence and practice plans. Accreditation Council for Graduate Medical Education requirements for graduating general surgery residents are four liver and three pancreas cases. We sought to evaluate trends in resident experience and autonomy for complex hepatopancreatobiliary (HPB) surgery over time. Methods We queried the Accreditation Council for Graduate Medical Education General Surgery Case Log (2003–2012) for all cases performed by graduating chief residents (GCR) relating to liver, pancreas, and the biliary tract (HPB); simple cholecystectomy was excluded. Mean (±SD), median [10th–90th percentiles] and maximum case volumes were compared from 2003 to 2012 using R2 for all trends. Results A total of 252,977 complex HPB cases (36% liver, 43% pancreas, 21% biliary) were performed by 10,288 GCR during the 10-year period examined (Mean = 24.6 per GCR). Of these, 57% were performed during the chief year, whereas 43% were performed as postgraduate year 1–4. Only 52% of liver cases were anatomic resections, whereas 71% of pancreas cases were major resections. Total number of cases increased from 22,516 (mean = 23.0) in 2003 to 27,191 (mean = 24.9) in 2012. During this same time period, the percentage of HPB cases that were performed during the chief year decreased by 7% (liver: 13%, pancreas 8%, biliary 4%). There was an increasing trend in the mean number of operations (mean ± SD) logged by GCR on the pancreas (9.1 ± 5.9 to 11.3 ± 4.3; R2 = .85) and liver (8.0 ± 5.9 to 9.4 ± 3.4; R2 = .91), whereas those for the biliary tract decreased (5.9 ± 2.5 to 3.8 ± 2.1; R2 = .96). Although the median number of cases [10th:90th percentile] increased slightly for both pancreas (7.0 [4.0:15] to 8.0 [4:20]) and liver (7.0 [4:13] to 8.0 [5:14]), the maximum number of cases performed by any given GCR remained stable for pancreas (51 to 53; R2 = .18), but increased for liver (38

  14. A Computer Simulated Experiment in Complex Order Kinetics

    ERIC Educational Resources Information Center

    Merrill, J. C.; And Others

    1975-01-01

    Describes a computer simulation experiment in which physical chemistry students can determine all of the kinetic parameters of a reaction, such as order of the reaction with respect to each reagent, forward and reverse rate constants for the overall reaction, and forward and reverse activation energies. (MLH)

  15. Projection- vs. selection-based model reduction of complex hydro-ecological models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Giuliani, M.; Castelletti, A.; Alsahaf, A.

    2014-12-01

    Projection-based model reduction is one of the most popular approaches used for the identification of reduced-order models (emulators). It is based on the idea of sampling from the original model various values, or snapshots, of the state variables, and then using these snapshots in a projection scheme to find a lower-dimensional subspace that captures the majority of the variation of the original model. The model is then projected onto this subspace and solved, yielding a computationally efficient emulator. Yet, this approach may unnecessarily increase the complexity of the emulator, especially when only a few state variables of the original model are relevant with respect to the output of interest. This is the case of complex hydro-ecological models, which typically account for a variety of water quality processes. On the other hand, selection-based model reduction uses the information contained in the snapshots to select the state variables of the original model that are relevant with respect to the emulator's output, thus allowing for model reduction. This provides a better trade-off between fidelity and model complexity, since the irrelevant and redundant state variables are excluded from the model reduction process. In this work we address these issues by presenting an exhaustive experimental comparison between two popular projection- and selection-based methods, namely Proper Orthogonal Decomposition (POD) and Dynamic Emulation Modelling (DEMo). The comparison is performed on the reduction of DYRESM-CAEDYM, a 1D hydro-ecological model used to describe the in-reservoir water quality conditions of Tono Dam, an artificial reservoir located in western Japan. Experiments on two different output variables (i.e. chlorophyll-a concentration and release water temperature) show that DEMo allows obtaining the same fidelity as POD while reducing the number of state variables in the emulator.
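
    To make the projection-based side concrete, the sketch below computes a POD basis from a snapshot matrix via the singular value decomposition and projects the snapshots onto it. The snapshot data are synthetic placeholders, not DYRESM-CAEDYM output, and the selection-based DEMo step compared in the abstract is not shown.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Proper Orthogonal Decomposition of a snapshot matrix
    (n_states x n_snapshots). Returns the reduced basis, the mean state,
    and the number of modes needed to capture the requested energy fraction."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r], mean, r

# Synthetic snapshots standing in for the original model's state vectors
rng = np.random.default_rng(0)
latent = rng.normal(size=(3, 200))               # 3 underlying modes
mixing = rng.normal(size=(500, 3))               # 500 state variables
snapshots = mixing @ latent + 0.01 * rng.normal(size=(500, 200))

basis, mean, r = pod_basis(snapshots)
reduced = basis.T @ (snapshots - mean)           # project onto the reduced subspace
print("retained modes:", r, "reduced shape:", reduced.shape)
```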

  16. Data production models for the CDF experiment

    SciTech Connect

    Antos, J.; Babik, M.; Benjamin, D.; Cabrera, S.; Chan, A.W.; Chen, Y.C.; Coca, M.; Cooper, B.; Genser, K.; Hatakeyama, K.; Hou, S.; Hsieh, T.L.; Jayatilaka, B.; Kraan, A.C.; Lysak, R.; Mandrichenko, I.V.; Robson, A.; Siket, M.; Stelzer, B.; Syu, J.; Teng, P.K.; /Kosice, IEF /Duke U. /Taiwan, Inst. Phys. /University Coll. London /Fermilab /Rockefeller U. /Michigan U. /Pennsylvania U. /Glasgow U. /UCLA /Tsukuba U. /New Mexico U.

    2006-06-01

    The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.

  17. Model-scale sound propagation experiment

    NASA Technical Reports Server (NTRS)

    Willshire, William L., Jr.

    1988-01-01

    The results of a scale model propagation experiment to investigate grazing propagation above a finite impedance boundary are reported. In the experiment, a 20 x 25 ft ground plane was installed in an anechoic chamber. Propagation tests were performed over the plywood surface of the ground plane and with the ground plane covered with felt, styrofoam, and fiberboard. Tests were performed with discrete tones in the frequency range of 10 to 15 kHz. The acoustic source and microphones varied in height above the test surface from flush to 6 in. Microphones were located in a linear array up to 18 ft from the source. A preliminary experiment using the same ground plane, but only testing the plywood and felt surfaces was performed. The results of this first experiment were encouraging, but data variability and repeatability were poor, particularly, for the felt surface, making comparisons with theoretical predictions difficult. In the main experiment the sound source, microphones, microphone positioning, data acquisition, quality of the anechoic chamber, and environmental control of the anechoic chamber were improved. High-quality, repeatable acoustic data were measured in the main experiment for all four test surfaces. Comparisons with predictions are good, but limited by uncertainties of the impedance values of the test surfaces.

  18. APPLICATION OF SURFACE COMPLEXATION MODELS TO SOIL SYSTEMS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Chemical surface complexation models were developed to describe potentiometric titration and ion adsorption data on oxide minerals. These models provide molecular descriptions of adsorption using an equilibrium approach that defines surface species, chemical reactions, mass and charge balances and ...

  19. Modeling Complex Workflow in Molecular Diagnostics

    PubMed Central

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  20. Design and modeling of small scale multiple fracturing experiments

    SciTech Connect

    Cuderman, J F

    1981-12-01

    Recent experiments at the Nevada Test Site (NTS) have demonstrated the existence of three distinct fracture regimes. Depending on the pressure rise time in a borehole, one can obtain hydraulic, multiple, or explosive fracturing behavior. The use of propellants rather than explosives in tamped boreholes permits tailoring of the pressure rise time over a wide range since propellants having a wide range of burn rates are available. This technique of using the combustion gases from a full bore propellant charge to produce controlled borehole pressurization is termed High Energy Gas Fracturing (HEGF). Several series of HEGF, in 0.15 m and 0.2 m diameter boreholes at 12 m depths, have been completed in a tunnel complex at NTS where mineback permitted direct observation of fracturing obtained. Because such large experiments are costly and time consuming, smaller scale experiments are desirable, provided results from small experiments can be used to predict fracture behavior in larger boreholes. In order to design small scale gas fracture experiments, the available data from previous HEGF experiments were carefully reviewed, analytical elastic wave modeling was initiated, and semi-empirical modeling was conducted which combined predictions for statically pressurized boreholes with experimental data. The results of these efforts include (1) the definition of what constitutes small scale experiments for emplacement in a tunnel complex at the Nevada Test Site, (2) prediction of average crack radius, in ash fall tuff, as a function of borehole size and energy input per unit length, (3) definition of multiple-hydraulic and multiple-explosive fracture boundaries as a function of borehole size and surface wave velocity, (4) semi-empirical criteria for estimating stress and acceleration, and (5) a proposal that multiple fracture orientations may be governed by in situ stresses.

  1. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    SciTech Connect

    Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code, Cheetah. The thermo-chemical code Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (approximately a few Mbar) and high temperatures (20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.

  2. Analysis of Complex Intervention Effects in Time-Series Experiments.

    ERIC Educational Resources Information Center

    Bower, Cathleen

    An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…

  3. Specifying and Refining a Complex Measurement Model.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…

  4. Dispersion Modeling in Complex Urban Systems

    EPA Science Inventory

    Models are used to represent real systems in an understandable way. They take many forms. A conceptual model explains the way a system works. In environmental studies, for example, a conceptual model may delineate all the factors and parameters for determining how a particle move...

  5. Turbulence modeling for complex hypersonic flows

    NASA Technical Reports Server (NTRS)

    Huang, P. G.; Coakley, T. J.

    1993-01-01

    The paper presents results of calculations for a range of 2D turbulent hypersonic flows using two-equation models. The baseline models and the model corrections required for good hypersonic-flow predictions will be illustrated. Three experimental data sets were chosen for comparison. They are: (1) the hypersonic flare flows of Kussoy and Horstman, (2) a 2D hypersonic compression corner flow of Coleman and Stollery, and (3) the ogive-cylinder impinging shock-expansion flows of Kussoy and Horstman. Comparisons with the experimental data have shown that baseline models under-predict the extent of flow separation but over-predict the heat transfer rate near flow reattachment. Modifications to the models are described which remove the above-mentioned deficiencies. Although we have restricted the discussion only to the selected baseline models in this paper, the modifications proposed are universal and can in principle be transferred to any existing two-equation model formulation.

  6. Simulation model for the closed plant experiment facility of CEEF.

    PubMed

    Abe, Koichi; Ishikawa, Yoshio; Kibe, Seishiro; Nitta, Keiji

    2005-01-01

    The Closed Ecology Experiment Facilities (CEEF) is a testbed for Controlled Ecological Life Support Systems (CELSS) investigations. CEEF, including the physico-chemical material regenerative system, has been constructed for experiments on material circulation among the plants, breeding animals and crew of CEEF. Because CEEF is a complex system, an appropriate schedule for its operation must be prepared in advance. The CEEF behavioral Prediction System (CPS), which will help confirm the operation schedule, is under development. CPS will simulate CEEF's behavior using CEEF data (condition of equipment, quantity of materials in tanks, etc.) and the operation schedule prepared by the operation team each day, before the schedule is carried out. The result of the simulation will show whether the operation schedule is appropriate or not. In order to realize CPS, the models in the simulation program installed in CPS must mirror the real facilities of CEEF. As a first step of development, a flexible algorithm for the simulation program was investigated. The next step was development of a replicate simulation model of the material circulation system for the Closed Plant Experiment Facility (CPEF), which is a part of CEEF. All the parts of the real material circulation system for CPEF are connected together and work as a complex mechanism. In the simulation model, the system was separated into 38 units according to its operational segmentation. In order to develop each model for its corresponding unit, specifications for the model were fixed based on the specifications of the real part. These models were put into a simulation model for the system. PMID:16175692

  7. Simulation model for the closed plant experiment facility of CEEF

    NASA Astrophysics Data System (ADS)

    Abe, Koichi; Ishikawa, Yoshio; Kibe, Seishiro; Nitta, Keiji

    The Closed Ecology Experiment Facilities (CEEF) is a testbed for Controlled Ecological Life Support Systems (CELSS) investigations. CEEF, including the physico-chemical material regenerative system, has been constructed for experiments on material circulation among the plants, breeding animals and crew of CEEF. Because CEEF is a complex system, an appropriate schedule for its operation must be prepared in advance. The CEEF behavioral Prediction System (CPS), which will help confirm the operation schedule, is under development. CPS will simulate CEEF's behavior using CEEF data (condition of equipment, quantity of materials in tanks, etc.) and the operation schedule prepared by the operation team each day, before the schedule is carried out. The result of the simulation will show whether the operation schedule is appropriate or not. In order to realize CPS, the models in the simulation program installed in CPS must mirror the real facilities of CEEF. As a first step of development, a flexible algorithm for the simulation program was investigated. The next step was development of a replicate simulation model of the material circulation system for the Closed Plant Experiment Facility (CPEF), which is a part of CEEF. All the parts of the real material circulation system for CPEF are connected together and work as a complex mechanism. In the simulation model, the system was separated into 38 units according to its operational segmentation. In order to develop each model for its corresponding unit, specifications for the model were fixed based on the specifications of the real part. These models were put into a simulation model for the system.

  8. Design of Experiments, Model Calibration and Data Assimilation

    SciTech Connect

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
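
    As a hedged sketch of one ingredient mentioned above, the function below constructs a space-filling Latin Hypercube Design for selecting computer-model runs; the stratified-sampling construction is standard, but the function name and the example values are assumptions, not the presenter's code.

```python
# Latin Hypercube Design (LHD) sketch: one sample per equal-probability bin
# for every input dimension, with random pairing across dimensions.
import numpy as np

def latin_hypercube(n_runs, n_inputs, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_runs, n_inputs))        # jitter inside each bin
    design = np.empty_like(u)
    for j in range(n_inputs):
        perm = rng.permutation(n_runs)              # random bin order per input
        design[:, j] = (perm + u[:, j]) / n_runs    # values in [0, 1)
    return design

X = latin_hypercube(20, 3, seed=42)   # 20 model runs over 3 uncertain inputs
# Each column of X places exactly one run in each of the 20 equal bins.
```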

  9. Background modeling for the GERDA experiment

    NASA Astrophysics Data System (ADS)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  10. Background modeling for the GERDA experiment

    SciTech Connect

    Becerici-Schmidt, N.; Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  11. Impact polymorphs of quartz: experiments and modelling

    NASA Astrophysics Data System (ADS)

    Price, M. C.; Dutta, R.; Burchell, M. J.; Cole, M. J.

    2013-09-01

    We have used the light gas gun at the University of Kent to perform a series of impact experiments firing quartz projectiles onto metal, quartz and sapphire targets. The aim is to quantify the amount of any high pressure quartz polymorphs produced, and use these data to develop our hydrocode modelling to enable the prediction of the quantity of polymorphs produced during a planetary scale impact.

  12. Data assimilation and model evaluation experiment datasets

    NASA Technical Reports Server (NTRS)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need of data for the four phases of experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested this data was incorporated into its refinement. Suggestions for DAMEE data usages include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  13. Experience from the ECORS program in regions of complex geology

    NASA Astrophysics Data System (ADS)

    Damotte, B.

    1993-04-01

    The French ECORS program was launched in 1983 by a cooperation agreement between universities and petroleum companies. Crustal surveys have tried to find explanations for the formation of geological features, such as rifts, mountain ranges or subsidence in sedimentary basins. Several seismic surveys were carried out, some across areas with complex geological structures. The seismic techniques and equipment used were those developed by petroleum geophysicists, adapted to the depth aimed at (30-50 km) and to various physical constraints encountered in the field. In France, ECORS has recorded 850 km of deep seismic lines onshore across plains and mountains, on various kinds of geological formations. Different variations of the seismic method (reflection, refraction, long-offset seismic) were used, often simultaneously. Multiple coverage profiling constitutes the essential part of this data acquisition. Vibrators and dynamite shots were employed with a spread generally 15 km long, but sometimes 100 km long. Some typical seismic examples show that obtaining crustal reflections essentially depends on two factors: (1) the type and structure of shallow formations, and (2) the sources used. Thus, when seismic energy is strongly absorbed across the first kilometers in shallow formations, or when these formations are highly structured, standard multiple-coverage profiling is not able to provide results beyond a few seconds. In this case, it is recommended to simultaneously carry out long-offset seismic in low multiple coverage. Other more methodological examples show: how the impact on the crust of a surface fault may be evaluated according to the seismic method implemented (VIBROSEIS 96-fold coverage or single dynamite shot); that vibrators make it possible to implement wide-angle seismic surveying with an offset 80 km long; how to implement the seismic reflection method on complex formations in high mountains. All data were processed using industrial seismic software.

  14. Sequential Development of Interfering Metamorphic Core Complexes: Numerical Experiments and Comparison to the Cyclades, Greece

    NASA Astrophysics Data System (ADS)

    Tirel, C.; Gautier, P.; van Hinsbergen, D.; Wortel, R.

    2007-12-01

    The Cycladic extensional province (Greece) contains classical examples of metamorphic core complexes (MCCs), where exhumation was accommodated along multiple interfering and/or sequentially developed syn- and antithetic extensional detachment zones. Previous studies on the development of MCCs did not take into account the possible interference between multiple and closely spaced MCCs. In the present study, we have performed new lithosphere-scale experiments in which the deformation is not a priori localized so as to explore the conditions of the development of several MCCs in a direction parallel to extension. In a narrow range of conditions, MCCs are closely spaced, interfere with each other, and develop in sequence. From a comparison between numerical results and geological observations, we find that the Cyclades metamorphic core complexes are in good agreement with the model in terms of Moho geometry and depth, kinematic and structural history, timing and duration of core complex formation and metamorphic history. We infer that, for Cycladic-type MCCs to develop, an initial crustal thickness prior to the onset of post-orogenic extension between 40 and 44 km, a boundary velocity close to 2 cm/yr and an initial thermal lithospheric thickness of about 60 km are required. The latter may be explained by significant heating due to delamination of subducting continental crust or vigorous small-scale thermal convection.

  15. Studying complex chemistries using PLASIMO's global model

    NASA Astrophysics Data System (ADS)

    Koelman, PMJ; Tadayon Mousavi, S.; Perillo, R.; Graef, WAAD; Mihailova, DB; van Dijk, J.

    2016-02-01

    The Plasimo simulation software is used to construct a Global Model of a CO2 plasma. A DBD plasma between two coaxial cylinders is considered, which is driven by a triangular input power pulse. The plasma chemistry is studied during this power pulse and in the afterglow. The model consists of 71 species that interact in 3500 reactions. Preliminary results from the model are presented. The model has been validated by comparing its results with those presented in Kozák et al. (Plasma Sources Science and Technology 23(4) p. 045004, 2014). A good qualitative agreement has been reached; potential sources of remaining discrepancies are extensively discussed.

  16. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

  17. Information, complexity and efficiency: The automobile model

    SciTech Connect

    Allenby, B.

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

  18. Observation simulation experiments with regional prediction models

    NASA Technical Reports Server (NTRS)

    Diak, George; Perkey, Donald J.; Kalb, Michael; Robertson, Franklin R.; Jedlovec, Gary

    1990-01-01

    Research efforts in FY 1990 included studies employing regional scale numerical models as aids in evaluating potential contributions of specific satellite observing systems (current and future) to numerical prediction. One study involves Observing System Simulation Experiments (OSSEs) which mimic operational initialization/forecast cycles but incorporate simulated Advanced Microwave Sounding Unit (AMSU) radiances as input data. The objective of this and related studies is to anticipate the potential value of data from these satellite systems, and develop applications of remotely sensed data for the benefit of short range forecasts. Techniques are also being used that rely on numerical model-based synthetic satellite radiances to interpret the information content of various types of remotely sensed image and sounding products. With this approach, evolution of simulated channel radiance image features can be directly interpreted in terms of the atmospheric dynamical processes depicted by a model. Progress is being made in a study using the internal consistency of a regional prediction model to simplify the assessment of forced diabatic heating and moisture initialization in reducing model spinup times. Techniques for model initialization are being examined, with focus on implications for potential applications of remote microwave observations, including AMSU and Special Sensor Microwave Imager (SSM/I), in shortening model spinup time for regional prediction.

  19. Information-driven modeling of protein-peptide complexes.

    PubMed

    Trellet, Mikael; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2015-01-01

    Despite their biological importance in many regulatory processes, protein-peptide recognition mechanisms are difficult to study experimentally at the structural level because of the inherent flexibility of peptides and the often transient interactions on which they rely. Complementary methods like biomolecular docking are therefore required. The prediction of the three-dimensional structure of protein-peptide complexes raises unique challenges for computational algorithms, as exemplified by the recent introduction of protein-peptide targets in the blind international experiment CAPRI (Critical Assessment of PRedicted Interactions). Conventional protein-protein docking approaches often struggle with the high flexibility of peptides, whose short sizes impede protocols and scoring functions developed for larger interfaces. On the other hand, protein-small ligand docking methods are unable to cope with the larger number of degrees of freedom in peptides compared to small molecules and the typically reduced available information to define the binding site. In this chapter, we describe a protocol to model protein-peptide complexes using the HADDOCK web server, working through a test case to illustrate every step. The flexibility challenge that peptides represent is dealt with by combining elements of conformational selection and induced fit molecular recognition theories. PMID:25555727

  20. Sensitivity Analysis in Complex Plasma Chemistry Models

    NASA Astrophysics Data System (ADS)

    Turner, Miles

    2015-09-01

    The purpose of a plasma chemistry model is prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large--a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: Which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure--the so-called Morris method--to identify sensitive rate constants. We investigate the topical example of the helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 "Biomedical Applications of Atmospheric Pressure Plasmas."
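
    The screening idea can be illustrated with a simplified one-at-a-time variant of Morris-style elementary effects; the toy model, unit-cube parameter ranges, and settings below are assumptions for illustration, not the helium-oxygen chemistry model itself.

```python
# Simplified one-at-a-time elementary-effects screening (Morris-style).
import numpy as np

def elementary_effects(model, n_inputs, n_repeats=20, delta=0.1, seed=None):
    """Return mu* (mean absolute elementary effect) per input on the unit cube."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_repeats, n_inputs))
    for t in range(n_repeats):
        x = rng.uniform(0.0, 1.0 - delta, size=n_inputs)   # random base point
        f0 = model(x)
        for i in range(n_inputs):
            x_pert = x.copy()
            x_pert[i] += delta                              # perturb one input
            effects[t, i] = (model(x_pert) - f0) / delta
    return np.abs(effects).mean(axis=0)

# Toy "chemistry": only the first two of ten rate constants matter.
toy = lambda x: 5.0 * x[0] + 2.0 * x[1] ** 2 + 0.01 * x[2:].sum()
print(elementary_effects(toy, n_inputs=10, seed=1))
```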

  1. Experiences in evaluating regional air quality models

    NASA Astrophysics Data System (ADS)

    Liu, Mei-Kao; Greenfield, Stanley M.

    Any area of the world concerned with the health and welfare of its people and the viability of its ecological system must eventually address the question of the control of air pollution. This is true in developed countries as well as countries that are undergoing a considerable degree of industrialization. The control or limitation of the emissions of a pollutant can be very costly. To avoid ineffective or unnecessary control, the nature of the problem must be fully understood and the relationship between source emissions and ambient concentrations must be established. Mathematical models, while admittedly containing large uncertainties, can be used to examine alternatives of emission restrictions for achieving safe ambient concentrations. The focus of this paper is to summarize our experiences with modeling regional air quality in the United States and Western Europe. The following modeling experiences have been used: future SO2 and sulfate distributions and projected acidic deposition as related to coal development in the northern Great Plains in the U.S.; analysis of regional ozone and sulfate episodes in the northeastern U.S.; analysis of the regional ozone problem in western Europe in support of alternative emission control strategies; analysis of distributions of toxic chemicals in the Southeast Ohio River Valley in support of the design of a monitoring network for human exposure. Collectively, these prior modeling analyses can be invaluable in examining a similar problem in other parts of the world as well, such as the Pacific rim in Asia.

  2. Modeling Power Systems as Complex Adaptive Systems

    SciTech Connect

    Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  3. Uniform surface complexation approaches to radionuclide sorption modeling

    SciTech Connect

    Turner, D.R.; Pabalan, R.T.; Muller, P.; Bertetti, F.P.

    1995-12-01

    Simplified surface complexation models, based on a uniform set of model parameters, have been developed to address complex radionuclide sorption behavior. Existing data have been examined and interpreted using numerical nonlinear least-squares optimization techniques to determine the necessary binding constants. Simplified modeling approaches have generally proven successful at simulating and predicting radionuclide sorption on (hydr)oxides and aluminosilicates over a wide range of physical and chemical conditions.
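
    As a hedged illustration of extracting a binding constant by nonlinear least squares, the sketch below fits a toy one-site mass-action sorption edge with SciPy's curve_fit; the sorption expression, data, and parameter names are hypothetical and far simpler than a full surface complexation model.

```python
# Toy nonlinear least-squares fit of a sorption "binding constant" (illustrative).
import numpy as np
from scipy.optimize import curve_fit

def sorbed_fraction(c_h, log_k):
    """Hypothetical one-site mass-action expression vs. proton concentration."""
    K = 10.0 ** log_k
    return K / (K + c_h)

pH = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
c_h = 10.0 ** (-pH)
observed = np.array([0.02, 0.15, 0.60, 0.92, 0.99, 1.00])   # hypothetical edge

popt, pcov = curve_fit(sorbed_fraction, c_h, observed, p0=[-5.0])
print(f"fitted log K = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f}")
```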

  4. Ballistic Response of Fabrics: Model and Experiments

    NASA Astrophysics Data System (ADS)

    Orphal, Dennis L.; Walker Anderson, James D., Jr.

    2001-06-01

    Walker (1999) developed an analytical model for the dynamic response of fabrics to ballistic impact. From this model the force, F, applied to the projectile by the fabric is derived to be F = (8/9)(ET*)h^3/R^2, where E is the Young's modulus of the fabric, T* is the "effective thickness" of the fabric and equal to the ratio of the areal density of the fabric to the fiber density, h is the displacement of the fabric on the axis of impact and R is the radius of the fabric deformation or "bulge". Ballistic tests against Zylon^TM fabric have been performed to measure h and R as a function of time. The results of these experiments are presented and analyzed in the context of the Walker model. Walker (1999), Proceedings of the 18th International Symposium on Ballistics, pp. 1231.
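
    The quoted expression is easy to evaluate numerically. The sketch below computes F = (8/9)(ET*)h^3/R^2 for assumed, illustrative fabric properties and deformation values; none of the numbers are measured values from the Zylon tests.

```python
# Numerical evaluation of the Walker (1999) fabric force expression quoted above.
# All material and deformation numbers are illustrative placeholders.
E = 180e9                  # Pa, assumed fiber Young's modulus
areal_density = 0.15       # kg/m^2, assumed fabric areal density
fiber_density = 1540.0     # kg/m^3, assumed fiber density
T_star = areal_density / fiber_density    # "effective thickness" T*, in m

h = 0.01                   # m, fabric displacement on the impact axis
R = 0.05                   # m, radius of the fabric "bulge"
F = (8.0 / 9.0) * E * T_star * h**3 / R**2
print(f"T* = {T_star * 1e3:.3f} mm, F = {F:.0f} N")
```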

  5. Integrated Modeling of Complex Optomechanical Systems

    NASA Astrophysics Data System (ADS)

    Andersen, Torben; Enmark, Anita

    2011-09-01

    Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons; first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of groundbased optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top ranked specialists found their way to Kiruna and we believe that these proceedings will prove valuable during much future work.

  6. A tracer experiment study to evaluate the CALPUFF real time application in a near-field complex terrain setting

    NASA Astrophysics Data System (ADS)

    cui, Huiling; Yao, Rentai; Xu, Xiangjun; Xin, Cuntian; Yang, jinming

    2011-12-01

    CALPUFF is an atmospheric source-receptor model recommended by the US Environmental Protection Agency (EPA) for use on a case-by-case basis in complex terrain and wind conditions. Because the bulk of CALPUFF validation has focused on long-range, or short-range but long-term, dispersion, the reliability of the model for predicting short-term emissions in the near field, especially over complex terrain, is not well established, and this situation can be important for emergency releases. To validate CALPUFF's application in such conditions, we carried out a tracer experiment in a near-field complex terrain setting and used the CALPUFF atmospheric dispersion model to simulate the tracer experiment under real conditions. The comparison of predicted and measured centroid trajectories shows that the model can correctly predict the centroid trajectory and shape of the tracer cloud, and the results also indicate that sufficient observed weather data alone can produce a good near-field wind field. The concentration comparison on each arc shows that the model underestimates the horizontal extent of the tracer puff and cannot reproduce the irregular features seen in the measurements. The global analysis yields a FOEX of -25.91%, FA2 of 27.06%, and FA5 of 61.41%. The simulations show that CALPUFF can reproduce the position and direction of the tracer cloud in near-field complex terrain but underestimates the measurements, especially the peak concentrations.
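
    The global statistics quoted above can be recomputed from paired predictions and observations once their definitions are fixed. The sketch below uses commonly assumed definitions of FOEX (factor of exceedance) and FA2/FA5 (fractions of pairs within a factor of 2 and 5); the paired values are hypothetical.

```python
# Assumed definitions of the FOEX and FA2/FA5 evaluation statistics (illustrative).
import numpy as np

def foex(pred, obs):
    """Percentage tendency to over-predict: 0% means half the pairs exceed."""
    return 100.0 * (np.mean(pred > obs) - 0.5)

def within_factor(pred, obs, factor):
    ratio = pred / obs
    return 100.0 * np.mean((ratio >= 1.0 / factor) & (ratio <= factor))

pred = np.array([1.2, 0.4, 3.0, 0.9, 0.1])   # hypothetical paired concentrations
obs = np.array([1.0, 1.0, 2.5, 1.1, 0.8])
print(foex(pred, obs), within_factor(pred, obs, 2), within_factor(pred, obs, 5))
```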

  7. Process modelling for Space Station experiments

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Rosenberger, Franz; Nadarajah, Arunan; Ouazzani, Jalil; Amiroudine, Sakir

    1990-01-01

    Examined here is the sensitivity of a variety of space experiments to residual accelerations. In all the cases discussed the sensitivity is related to the dynamic response of a fluid. In some cases the sensitivity can be defined by the magnitude of the response of the velocity field. This response may involve motion of the fluid associated with internal density gradients, or the motion of a free liquid surface. For fluids with internal density gradients, the type of acceleration to which the experiment is sensitive will depend on whether buoyancy driven convection must be small in comparison to other types of fluid motion, or fluid motion must be suppressed or eliminated. In the latter case, the experiments are sensitive to steady and low frequency accelerations. For experiments such as the directional solidification of melts with two or more components, determination of the velocity response alone is insufficient to assess the sensitivity. The effect of the velocity on the composition and temperature field must be considered, particularly in the vicinity of the melt-crystal interface. As far as the response to transient disturbances is concerned, the sensitivity is determined by both the magnitude and frequency of the acceleration and the characteristic momentum and solute diffusion times. The microgravity environment, a numerical analysis of low gravity tolerance of the Bridgman-Stockbarger technique, and modeling crystal growth by physical vapor transport in closed ampoules are discussed.

  8. Spectroscopic studies of molybdenum complexes as models for nitrogenase

    SciTech Connect

    Walker, T.P.

    1981-05-01

    Because biological nitrogen fixation requires Mo, there is an interest in inorganic Mo complexes which mimic the reactions of nitrogen-fixing enzymes. Two such complexes are the dimer Mo2O4(cysteine)2^2- and trans-Mo(N2)2(dppe)2 (dppe = 1,2-bis(diphenylphosphino)ethane). The 1H and 13C NMR of solutions of Mo2O4(cys)2^2- are described. It is shown that in aqueous solution the cysteine ligands assume at least three distinct configurations. A step-wise dissociation of the cysteine ligand is proposed to explain the data. The Extended X-ray Absorption Fine Structure (EXAFS) of trans-Mo(N2)2(dppe)2 is described and compared to the EXAFS of MoH4(dppe)2. The spectra are fitted to amplitude and phase parameters developed at Bell Laboratories. On the basis of this analysis, one can determine (1) that the dinitrogen complex contains nitrogen and the hydride complex does not and (2) the correct Mo-N distance. This is significant because the Mo in both complexes is coordinated by four P atoms which dominate the EXAFS. A similar sort of interference is present in nitrogenase due to S coordination of the Mo in the enzyme. This model experiment indicates that, given adequate signal-to-noise ratios, the presence or absence of dinitrogen coordination to Mo in the enzyme may be determined by EXAFS using existing data analysis techniques. A new reaction between Mo2O4(cys)2^2- and acetylene is described to the extent it is presently understood. A strong EPR signal is observed, suggesting the production of stable Mo(V) monomers. EXAFS studies support this suggestion. The Mo K-edge is described. The edge data suggest Mo(VI) is also produced in the reaction. Ultraviolet spectra suggest that cysteine is released in the course of the reaction.

  9. Formation rates of complex organics in UV irradiated CH_3OH-rich ices. I. Experiments

    NASA Astrophysics Data System (ADS)

    Öberg, K. I.; Garrod, R. T.; van Dishoeck, E. F.; Linnartz, H.

    2009-09-01

    Context: Gas-phase complex organic molecules are commonly detected in the warm inner regions of protostellar envelopes, so-called hot cores. Recent models show that photochemistry in ices followed by desorption may explain the observed abundances. There is, however, a general lack of quantitative data on UV-induced complex chemistry in ices. Aims: This study aims to experimentally quantify the UV-induced production rates of complex organics in CH3OH-rich ices under a variety of astrophysically relevant conditions. Methods: The ices are irradiated with a broad-band UV hydrogen microwave-discharge lamp under ultra-high vacuum conditions, at 20-70 K, and then heated to 200 K. The reaction products are identified by reflection-absorption infrared spectroscopy (RAIRS) and temperature programmed desorption (TPD), through comparison with RAIRS and TPD curves of pure complex species, and through the observed effects of isotopic substitution and enhancement of specific functional groups, such as CH3, in the ice. Results: Complex organics are readily formed in all experiments, both during irradiation and during the slow warm-up of the ices after the UV lamp is turned off. The relative abundances of photoproducts depend on the UV fluence, the ice temperature, and whether pure CH3OH ice or CH3OH:CH4/CO ice mixtures are used. C2H6, CH3CHO, CH3CH2OH, CH3OCH3, HCOOCH3, HOCH2CHO and (CH2OH)2 are all detected in at least one experiment. Varying the ice thickness and the UV flux does not affect the chemistry. The derived product-formation yields and their dependences on different experimental parameters, such as the initial ice composition, are used to estimate the CH3OH photodissociation branching ratios in ice and the relative diffusion barriers of the formed radicals. At 20 K, the pure CH3OH photodesorption yield is 2.1(±1.0)×10^-3 per incident UV photon, the photo-destruction cross section 2.6(±0.9)×10^-18 cm^2. Conclusions: Photochemistry in CH3OH ices is efficient enough to

  10. Smoothed Particle Hydrodynamics simulation and laboratory-scale experiments of complex flow dynamics in unsaturated fractures

    NASA Astrophysics Data System (ADS)

    Kordilla, J.; Tartakovsky, A. M.; Pan, W.; Shigorina, E.; Noffz, T.; Geyer, T.

    2015-12-01

    Unsaturated flow in fractured porous media exhibits highly complex flow dynamics and a wide range of intermittent flow processes. Especially in wide-aperture fractures, flow processes may be dominated by gravitational rather than capillary forces, leading to a deviation from the classical volume-effective approaches (Richards equation, van Genuchten-type relationships). The existence of various flow modes such as droplets, rivulets, turbulent and adsorbed films is well known; however, their spatial and temporal distribution within fracture networks is still an open question, partially due to the lack of appropriate modeling tools. With our work we want to gain a deeper understanding of the underlying flow and transport dynamics in unsaturated fractured media in order to support the development of more refined upscaled methods, applicable on catchment scales. We present fracture-scale flow simulations obtained with a parallelized Smoothed Particle Hydrodynamics (SPH) model. The model allows us to simulate free-surface flow dynamics including the effect of surface tension for a wide range of wetting conditions in smooth and rough fractures. Due to the highly efficient generation of surface tension via particle-particle interaction forces, the dynamic wetting of surfaces can readily be obtained. We validated the model via empirical and semi-analytical solutions and conducted laboratory-scale percolation experiments of unsaturated flow through synthetic fracture systems. The setup allows us to obtain travel time distributions and identify characteristic flow mode distributions on wide-aperture fractures intercepted by horizontal fracture elements.

  11. The sigma model on complex projective superspaces

    NASA Astrophysics Data System (ADS)

    Candu, Constantin; Mitev, Vladimir; Quella, Thomas; Saleur, Hubert; Schomerus, Volker

    2010-02-01

    The sigma model on complex projective superspaces CP^(S-1|S) gives rise to a continuous family of interacting 2D conformal field theories which are parametrized by the curvature radius R and the theta angle θ. Our main goal is to determine the spectrum of the model, non-perturbatively as a function of both parameters. We succeed in doing so for all open boundary conditions preserving the full global symmetry of the model. In string theory parlance, these correspond to volume-filling branes that are equipped with a monopole line bundle and connection. The paper consists of two parts. In the first part, we approach the problem within the continuum formulation. Combining combinatorial arguments with perturbative studies and some simple free field calculations, we determine a closed formula for the partition function of the theory. This is then tested numerically in the second part. There we extend the proposal of [arXiv:0908.1081] for a spin chain regularization of the CP^(S-1|S) model with open boundary conditions and use it to determine the spectrum at the conformal fixed point. The numerical results are in remarkable agreement with the continuum analysis.

  12. A simple model clarifies the complicated relationships of complex networks

    NASA Astrophysics Data System (ADS)

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-08-01

    Real-world networks such as the Internet and WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits for understanding the networks. Because different models used very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation.

  13. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and WWW have many common traits. Until now, hundreds of models have been proposed to characterize these traits for understanding the networks. Because different models used very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506

  14. Improving phylogenetic regression under complex evolutionary models.

    PubMed

    Mazel, Florent; Davies, T Jonathan; Georges, Damien; Lavergne, Sébastien; Thuiller, Wilfried; Peres-Neto, Pedro R

    2016-02-01

    Phylogenetic Generalized Least Square (PGLS) is the tool of choice among phylogenetic comparative methods to measure the correlation between species features such as morphological and life-history traits or niche characteristics. In its usual form, it assumes that the residual variation follows a homogenous model of evolution across the branches of the phylogenetic tree. Since a homogenous model of evolution is unlikely to be realistic in nature, we explored the robustness of the phylogenetic regression when this assumption is violated. We did so by simulating a set of traits under various heterogeneous models of evolution, and evaluating the statistical performance (type I error [the percentage of tests based on samples that incorrectly rejected a true null hypothesis] and power [the percentage of tests that correctly rejected a false null hypothesis]) of classical phylogenetic regression. We found that PGLS has good power but unacceptable type I error rates. This finding is important since this method has been increasingly used in comparative analyses over the last decade. To address this issue, we propose a simple solution based on transforming the underlying variance-covariance matrix to adjust for model heterogeneity within PGLS. We suggest that heterogeneous rates of evolution might be particularly prevalent in large phylogenetic trees, while most current approaches assume a homogenous rate of evolution. Our analysis demonstrates that overlooking rate heterogeneity can result in inflated type I errors, thus misleading comparative analyses. We show that it is possible to correct for this bias even when the underlying model of evolution is not known a priori. PMID:27145604
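
    At its core, PGLS is a generalized least-squares fit whose error covariance is derived from the phylogeny. The minimal sketch below shows that estimator on a hypothetical four-species covariance matrix; the authors' heterogeneity correction would amount to transforming this matrix before the fit and is not shown here.

```python
# Minimal generalized least-squares estimator with a phylogenetic-style covariance.
import numpy as np

def gls_fit(X, y, C):
    """beta_hat = (X' C^-1 X)^-1 X' C^-1 y, with C the error covariance matrix."""
    Ci = np.linalg.inv(C)
    return np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)

# Four species in two clades; shared branch length gives the 0.5 covariances.
C = np.array([[1.0, 0.5, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.5],
              [0.0, 0.0, 0.5, 1.0]])
trait_x = np.array([0.2, 0.4, 1.1, 1.3])
trait_y = np.array([0.5, 0.7, 2.0, 2.3])
X = np.column_stack([np.ones(4), trait_x])   # intercept + slope design matrix
print(gls_fit(X, trait_y, C))                # [intercept, slope] estimates
```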

  15. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  16. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  17. Multiaxial behavior of foams - Experiments and modeling

    NASA Astrophysics Data System (ADS)

    Maheo, Laurent; Guérard, Sandra; Rio, Gérard; Donnard, Adrien; Viot, Philippe

    2015-09-01

    The behavior of cellular materials is strongly related to the pressure level inside the material. It is therefore important to use experiments which can highlight (i) the pressure-volume behavior and (ii) the shear-shape behavior for different pressure levels. The authors propose to use hydrostatic compression, shear, and combined pressure-shear tests to characterize cellular material behavior. Finite element modeling must take these behavioral specificities into account. The authors chose a behavior law with hyperelastic, viscous, and hysteretic contributions. Specific developments have been made to the hyperelastic contribution by separating the spherical and deviatoric parts to account for the volume-change and shape-change characteristics of cellular materials.

  18. Experience with the CMS Event Data Model

    SciTech Connect

    Elmer, P.; Hegner, B.; Sexton-Kennedy, L.; /Fermilab

    2009-06-01

    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006 as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans the trigger farm up to the final physics analysis. Future plans, in particular dealing with schema evolution and scaling, will be discussed briefly.

  19. Complex Perceptions of Identity: The Experiences of Student Combat Veterans in Community College

    ERIC Educational Resources Information Center

    Hammond, Shane Patrick

    2016-01-01

    This qualitative study illustrates how complex perceptions of identity influence the community college experience for student veterans who have been in combat, creating barriers to their overall persistence. The collective experiences of student combat veterans at two community colleges in northwestern Massachusetts are presented, and a Combat…

  20. Communicating about Loss: Experiences of Older Australian Adults with Cerebral Palsy and Complex Communication Needs

    ERIC Educational Resources Information Center

    Dark, Leigha; Balandin, Susan; Clemson, Lindy

    2011-01-01

    Loss and grief is a universal human experience, yet little is known about how older adults with a lifelong disability, such as cerebral palsy, and complex communication needs (CCN) experience loss and manage the grieving process. In-depth interviews were conducted with 20 Australian participants with cerebral palsy and CCN to determine the types…

  1. Optimal Complexity of Nonlinear Rainfall-Runoff Models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J.; van de Giesen, N.; Fenicia, F.

    2008-12-01

    Identification of an appropriate level of model complexity to accurately translate rainfall into runoff remains an unresolved issue. The model has to be complex enough to generate accurate predictions, but not too complex such that its parameters cannot be reliably estimated from the data. Earlier work with linear models (Jakeman and Hornberger, 1993) concluded that a model with 4 to 5 parameters is sufficient. However, more recent results with a nonlinear model (Vrugt et al., 2006) suggest that 10 or more parameters may be identified from daily rainfall-runoff time-series. The goal here is to systematically investigate optimal complexity of nonlinear rainfall-runoff models, yielding accurate models with identifiable parameters. Our methodology consists of four steps: (i) a priori specification of a family of model structures from which to pick an optimal one, (ii) parameter optimization of each model structure to estimate empirical or calibration error, (iii) estimation of parameter uncertainty of each calibrated model structure, and (iv) estimation of prediction error of each calibrated model structure. For the first step we formulate a flexible model structure that allows us to systematically vary the complexity with which physical processes are simulated. The second and third steps are achieved using a recently developed Markov chain Monte Carlo algorithm (DREAM), which minimizes calibration error yielding optimal parameter values and their underlying posterior probability density function. Finally, we compare several methods for estimating prediction error of each model structure, including statistical methods based on information criteria and split-sample calibration-validation. Estimates of parameter uncertainty and prediction error are then used to identify optimal complexity for rainfall-runoff modeling, using data from dry and wet MOPEX catchments as case studies.
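
    As a rough illustration of the last step, model structures of different complexity can be ranked by an information criterion computed from their validation residuals; the Python sketch below uses hypothetical residuals and a simple AIC, not the DREAM-based procedure of the study.

```python
import numpy as np

def aic(residuals, n_params):
    """Akaike information criterion for a Gaussian least-squares fit."""
    n = len(residuals)
    sse = float(np.sum(np.square(residuals)))
    return n * np.log(sse / n) + 2 * n_params

# Hypothetical validation residuals from two calibrated model structures.
rng = np.random.default_rng(1)
resid_4p = rng.normal(0.0, 1.0, 365)    # simpler structure, larger errors
resid_10p = rng.normal(0.0, 0.8, 365)   # more complex structure, smaller errors

print("4-parameter model  AIC:", round(aic(resid_4p, 4), 1))
print("10-parameter model AIC:", round(aic(resid_10p, 10), 1))
```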

  2. Blueprints for Complex Learning: The 4C/ID-Model.

    ERIC Educational Resources Information Center

    van Merrienboer, Jeroen J. G.; Clark, Richard E.; de Croock, Marcel B. M.

    2002-01-01

    Describes the four-component instructional design system (4C/ID-model) developed for the design of training programs for complex skills. Discusses the structure of training blueprints for complex learning and associated instructional methods, focusing on learning tasks, supportive information, just-in-time information, and part-task practice.…

  3. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  4. Phytoavailability of thallium - A model soil experiment

    NASA Astrophysics Data System (ADS)

    Vanek, Ales; Mihaljevic, Martin; Galuskova, Ivana; Komarek, Michael

    2013-04-01

    The study deals with the environmental stability of Tl-modified phases (ferrihydrite, goethite, birnessite, calcite and illite) and phytoavailability of Tl in synthetically prepared soils used in a model vegetation experiment. The obtained data clearly demonstrate a strong relationship between the mineralogical position of Tl in the model soil and its uptake by the plant (Sinapis alba L.). The maximum rate of Tl uptake was observed for plants grown on soil containing Tl-modified illite. In contrast, soil enriched in Ksat-birnessite had the lowest potential for Tl release and phytoaccumulation. Root-induced dissolution of synthetic calcite and ferrihydrite in the rhizosphere followed by Tl mobilization was detected. Highly crystalline goethite was more stable in the rhizosphere, compared to ferrihydrite, leading to reduced biological uptake of Tl. Based on the results, the mineralogical aspect must be taken into account prior to general environmental recommendations in areas affected by Tl.

  5. Prequential Analysis of Complex Data with Adaptive Model Reselection†

    PubMed Central

    Clarke, Jennifer; Clarke, Bertrand

    2010-01-01

    In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e. the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias–variance tradeoff in statistical modeling. PMID:20617104
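
    The core idea of convexly combining two predictors with weights that adapt to past predictive performance can be sketched as follows; this is a simplified illustration under an assumed exponential weighting scheme, not the authors' ACAP implementation.

```python
import numpy as np

def combined_forecast(y, pred_a, pred_b, eta=0.5):
    """Convexly combine two predictor streams, re-weighting at each time step
    from their cumulative squared prediction errors (exponential weighting)."""
    loss_a = loss_b = 0.0
    combined = np.empty_like(y, dtype=float)
    for t in range(len(y)):
        w_a = np.exp(-eta * loss_a)
        w_b = np.exp(-eta * loss_b)
        w = w_a / (w_a + w_b)                     # convex weight on predictor A
        combined[t] = w * pred_a[t] + (1 - w) * pred_b[t]
        loss_a += (y[t] - pred_a[t]) ** 2         # update losses after observing y_t
        loss_b += (y[t] - pred_b[t]) ** 2
    return combined

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 6, 200)) + rng.normal(0, 0.1, 200)
print(combined_forecast(y, np.zeros(200), np.ones(200))[:5])
```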

  6. Size and complexity in model financial systems.

    PubMed

    Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M

    2012-11-01

    The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020
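
    As a toy illustration of one of the three contagion channels (counterparty credit losses), the sketch below propagates a default cascade on a randomly generated interbank network; the parameters and network are hypothetical, and the model described above is far richer.

```python
import random

def default_cascade(n_banks=50, n_links=200, capital=1.0, exposure=0.35, seed=1):
    """Seed one failure and propagate counterparty losses until no new defaults."""
    rng = random.Random(seed)
    # exposures[i] = banks that i has lent to (i loses `exposure` per failed borrower)
    exposures = {i: set() for i in range(n_banks)}
    for _ in range(n_links):
        lender, borrower = rng.sample(range(n_banks), 2)
        exposures[lender].add(borrower)

    buffers = {i: capital for i in range(n_banks)}
    defaulted = {0}                     # initial failure
    frontier = {0}
    while frontier:
        new = set()
        for lender, borrowers in exposures.items():
            if lender in defaulted:
                continue
            buffers[lender] = capital - exposure * len(borrowers & defaulted)
            if buffers[lender] <= 0:
                new.add(lender)
        frontier = new - defaulted
        defaulted |= new
    return len(defaulted)

print("banks defaulted:", default_cascade())
```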

  7. Size and complexity in model financial systems

    PubMed Central

    Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M.

    2012-01-01

    The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in “confidence” in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020

  8. Micro Wire-Drawing: Experiments And Modelling

    SciTech Connect

    Berti, G. A.; Monti, M.; Bietresato, M.; D'Angelo, L.

    2007-05-17

    In the paper, the authors propose to adopt micro wire-drawing as a key process for investigating models of micro forming. The reason for this choice is that wire-drawing can be considered a quasi-stationary process, in which the tribological conditions at the interface between the material and the die can be assumed constant during the whole deformation. Two different materials have been investigated: i) a low-carbon steel and ii) a nonferrous metal (copper). The micro hardness and tensile tests performed on each drawn wire show a thin hardened layer (more evident than in macro wires) on the external surface of the wire, with hardening decreasing rapidly from the surface layer to the center. For the copper wire this effect is reduced, and a traditional constitutive material model seems adequate to predict the experiments. For the low-carbon steel, a modified constitutive material model has been proposed and implemented in a FE code, giving better agreement with the experiments.

  9. Comparative Assessment of Complex Stabilities of Radiocopper Chelating Agents by a Combination of Complex Challenge and in vivo Experiments.

    PubMed

    Litau, Shanna; Seibold, Uwe; Vall-Sagarra, Alicia; Fricker, Gert; Wängler, Björn; Wängler, Carmen

    2015-07-01

    For 64Cu radiolabeling of biomolecules to be used as in vivo positron emission tomography (PET) imaging agents, various chelators are commonly applied. It has not yet been determined which of the most potent chelators--NODA-GA ((1,4,7-triazacyclononane-4,7-diyl)diacetic acid-1-glutaric acid), CB-TE2A (2,2'-(1,4,8,11-tetraazabicyclo[6.6.2]hexadecane-4,11-diyl)diacetic acid), or CB-TE1A-GA (1,4,8,11-tetraazabicyclo[6.6.2]hexadecane-4,11-diyl-8-acetic acid-1-glutaric acid)--forms the most stable complexes resulting in PET images of highest quality. We determined the 64Cu complex stabilities for these three chelators by a combination of complex challenge and an in vivo approach. For this purpose, bioconjugates of the chelating agents with the gastrin-releasing peptide receptor (GRPR)-affine peptide PESIN and an integrin αvβ3-affine c(RGDfC) tetramer were synthesized and radiolabeled with 64Cu in excellent yields and specific activities. The 64Cu-labeled biomolecules were evaluated for their complex stabilities in vitro by conducting a challenge experiment with the respective other chelators as challengers. The in vivo stabilities of the complexes were also determined, showing the highest stability for the 64Cu-CB-TE1A-GA complex in both experimental setups. Therefore, CB-TE1A-GA is the most appropriate chelating agent for *Cu-labeled radiotracers and in vivo imaging applications. PMID:26011290

  10. STATegra EMS: an Experiment Management System for complex next-generation omics experiments

    PubMed Central

    2014-01-01

    High-throughput sequencing assays are now routinely used to study different aspects of genome organization. As decreasing costs and widespread availability of sequencing enable more laboratories to use sequencing assays in their research projects, the number of samples and replicates in these experiments can quickly grow to several dozens of samples and thus require standardized annotation, storage and management of preprocessing steps. As a part of the STATegra project, we have developed an Experiment Management System (EMS) for high throughput omics data that supports different types of sequencing-based assays such as RNA-seq, ChIP-seq, Methyl-seq, etc, as well as proteomics and metabolomics data. The STATegra EMS provides metadata annotation of experimental design, samples and processing pipelines, as well as storage of different types of data files, from raw data to ready-to-use measurements. The system has been developed to provide research laboratories with a freely-available, integrated system that offers a simple and effective way for experiment annotation and tracking of analysis procedures. PMID:25033091

  11. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of unreacted explosive was recovered in the end heated experiment and less than 30 percent recovered in the side heated test.

  12. Modeling PBX 9501 overdriven release experiments

    SciTech Connect

    Tang, P.K.

    1997-11-01

    A high explosive (HE) performs work by the expansion of its detonation products. Along with the propagation of the detonation wave, the equation of state (EOS) of the products determines the HE performance in an engineering system. The authors show the failure of the standard Jones-Wilkins-Lee (JWL) EOS in modeling the overdriven release experiments on PBX 9501. The deficiency can be traced back to the inability of the same EOS to match the shock pressure and the sound speed on the Hugoniot in the hydrodynamic regime above the Chapman-Jouguet pressure. After adding correction terms to the principal isentrope of the standard JWL EOS, the authors are able to remedy this shortcoming, and the simulation was successful.
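
    For reference, the standard JWL form discussed here writes the product pressure in terms of the relative volume V and internal energy E (generic textbook form; the author's correction terms for PBX 9501 are not reproduced):

```latex
p(V,E) = A\left(1 - \frac{\omega}{R_1 V}\right) e^{-R_1 V}
       + B\left(1 - \frac{\omega}{R_2 V}\right) e^{-R_2 V}
       + \frac{\omega E}{V},
\qquad
p_s(V) = A\, e^{-R_1 V} + B\, e^{-R_2 V} + C\, V^{-(1+\omega)}
```

    Here p_s denotes the principal isentrope and A, B, C, R_1, R_2, and ω are fitted constants.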

  13. The Use of Behavior Models for Predicting Complex Operations

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2010-01-01

    Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.

  14. Nanofluid Drop Evaporation: Experiment, Theory, and Modeling

    NASA Astrophysics Data System (ADS)

    Gerken, William James

    Nanofluids, stable colloidal suspensions of nanoparticles in a base fluid, have potential applications in the heat transfer, combustion and propulsion, manufacturing, and medical fields. Experiments were conducted to determine the evaporation rate of room temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved high-resolution drop images were collected for the determination of early-time evaporation rate (D²/D₀² > 0.75), shown to exhibit D-square law behavior, and surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at around 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A model was developed to describe the evaporation of the nanofluid pendant drops based on D-square law analysis for the gas domain and a description of the reduction in liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. Model predictions are in relatively good agreement with experiment, within a few percent of measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via modeling. It was found that the same mechanism for nanofluid evaporation rate reduction used to explain pendant drops could be used for sessile drops. That mechanism is a reduction in evaporation rate due to a reduction in available ethanol for evaporation at the drop surface caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present modeling predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J
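
    The D-square law referred to above has the standard form

```latex
D^2(t) = D_0^2 - K\,t
```

    where D_0 is the initial drop diameter and K the evaporation rate constant; the early-time criterion D²/D₀² > 0.75 restricts the fit to the initial, linear portion of this decay.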

  15. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    PubMed Central

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    SUMMARY Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian-triangle meshes are employed for the surface presentation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797
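
    Two of the geometric measurements mentioned, surface area and surface-enclosed volume, can be computed for a closed, consistently oriented triangle mesh with a few lines of vectorized code; the sketch below is a generic illustration (checked on a unit cube), not the paper's Lagrangian-triangle pipeline.

```python
import numpy as np

def mesh_area_volume(vertices, faces):
    """Surface area and enclosed volume of a closed, consistently oriented
    triangle mesh (vertices: (n,3) floats, faces: (m,3) vertex indices)."""
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum() / 6.0  # divergence theorem
    return area, abs(volume)

# Unit cube as a quick check (expected area 6, volume 1).
verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
faces = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5], [0, 4, 5], [0, 5, 1],
                  [2, 3, 7], [2, 7, 6], [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
print(mesh_area_volume(verts, faces))
```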

  16. Between complexity of modelling and modelling of complexity: An essay on econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, C.

    2013-09-01

    Econophysics is an emerging field dealing with complex systems and emergent properties. A deeper analysis of themes studied by econophysicists shows that research conducted in this field can be decomposed into two different computational approaches: “statistical econophysics” and “agent-based econophysics”. This methodological scission complicates the definition of the complexity used in econophysics. Therefore, this article aims to clarify what kind of emergences and complexities we can find in econophysics in order to better understand, on one hand, the current scientific modes of reasoning this new field provides; and on the other hand, the future methodological evolution of the field.

  17. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that captures its characteristics is urgently needed. However, such a model is lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks can provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price advantage mechanism is taken as a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is relatively long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random failures of market members while being fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
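
    The cited statistics (degree distribution, clustering coefficient, average path length) and a crude attack-robustness check can be computed for any graph with networkx; the sketch below uses a generic scale-free graph as a stand-in, since the BPT network model itself is not reproduced here.

```python
import networkx as nx

# Generic scale-free stand-in for the BPT market graph (not the paper's model).
G = nx.barabasi_albert_graph(n=500, m=2, seed=42)

degrees = [d for _, d in G.degree()]
print("max degree:", max(degrees), " mean degree:", sum(degrees) / len(degrees))
print("average clustering:", round(nx.average_clustering(G), 3))
print("average path length:", round(nx.average_shortest_path_length(G), 2))

# Crude robustness check: remove the 5% highest-degree nodes (deliberate attack)
# and measure how much of the network stays connected.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[: len(G) // 20]
G.remove_nodes_from(n for n, _ in hubs)
giant = max(nx.connected_components(G), key=len)
print("largest component after attack:", len(giant), "of", G.number_of_nodes())
```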

  18. Evapotranspiration model of different complexity for multiple land cover types

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A comparison between half-hourly and daily measured and computed evapotranspiration (ET) using three models of different complexity, namely the Priestley-Taylor (P-T), reference Penman-Monteith (P-M), and Common Land Model (CLM) was conducted using three AmeriFlux sites under different land cover an...
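
    For context, the Priestley-Taylor formulation mentioned above estimates latent heat flux from available energy alone; in its standard form (not the site-specific calibration of the study),

```latex
\lambda E = \alpha \,\frac{\Delta}{\Delta + \gamma}\,(R_n - G), \qquad \alpha \approx 1.26
```

    where λE is the latent heat flux, Δ the slope of the saturation vapour pressure curve, γ the psychrometric constant, R_n the net radiation, and G the soil heat flux.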

  19. Using fMRI to Test Models of Complex Cognition

    ERIC Educational Resources Information Center

    Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam

    2008-01-01

    This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…

  20. Tips on Creating Complex Geometry Using Solid Modeling Software

    ERIC Educational Resources Information Center

    Gow, George

    2008-01-01

    Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…

  1. Zebrafish as an emerging model for studying complex brain disorders

    PubMed Central

    Kalueff, Allan V.; Stewart, Adam Michael; Gerlai, Robert

    2014-01-01

    The zebrafish (Danio rerio) is rapidly becoming a popular model organism in pharmacogenetics and neuropharmacology. Both larval and adult zebrafish are currently used to increase our understanding of brain function, dysfunction, and their genetic and pharmacological modulation. Here we review the developing utility of zebrafish in the analysis of complex brain disorders (including, for example, depression, autism, psychoses, drug abuse and cognitive disorders), also covering zebrafish applications towards the goal of modeling major human neuropsychiatric and drug-induced syndromes. We argue that zebrafish models of complex brain disorders and drug-induced conditions have become a rapidly emerging critical field in translational neuropharmacology research. PMID:24412421

  2. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation. PMID:15302205
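
    In the object-oriented style described, domain entities are represented as interacting objects and population-level behaviour emerges from the simulation loop; the toy receptor-ligand sketch below (hypothetical rate constants, not the authors' software) illustrates the pattern.

```python
import random

class Receptor:
    def __init__(self):
        self.bound = False

    def interact(self, free_ligands, k_on=0.02, k_off=0.005):
        """Stochastic binding/unbinding in one time step; returns the change
        in the number of free ligand molecules."""
        if not self.bound and free_ligands > 0 and random.random() < k_on:
            self.bound = True
            return -1
        if self.bound and random.random() < k_off:
            self.bound = False
            return +1
        return 0

random.seed(0)
receptors = [Receptor() for _ in range(200)]
ligands = 150
for step in range(500):
    for r in receptors:
        ligands += r.interact(ligands)
print("bound receptors at end of run:", sum(r.bound for r in receptors))
```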

  3. Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study

    PubMed Central

    Buyel, Johannes Felix; Fischer, Rainer

    2014-01-01

    Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems. PMID:24514765
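
    The software-guided part of such a DoE workflow can be as simple as enumerating a factorial design and fitting a main-effects model; the sketch below uses hypothetical two-level factors and placeholder responses, not the factors or data of the study.

```python
import itertools
import numpy as np

# Hypothetical two-level factors (coded -1/+1): incubation temperature,
# plant age at infiltration, and promoter variant.
factors = ["temperature", "plant_age", "promoter"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))), float)

# Placeholder responses (e.g. mg product per kg biomass) for the 8 runs.
response = np.array([1.1, 1.4, 0.9, 1.3, 1.8, 2.4, 1.6, 2.3])

# Fit intercept + main effects by ordinary least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)
for name, c in zip(["intercept"] + factors, coef):
    print(f"{name:>11}: {c:+.3f}")
```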

  4. Multi-scale modelling for HEDP experiments on Orion

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge, the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE; to integrate a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.

  5. MatOFF: A Tool For Analyzing Behaviorally-Complex Neurophysiological Experiments

    PubMed Central

    Genovesio, Aldo; Mitz, Andrew R.

    2007-01-01

    The simple operant conditioning originally used in behavioral neurophysiology 30 years ago has given way to complex and sophisticated behavioral paradigms; so much so that early general-purpose programs for analyzing neurophysiological data are ill-suited for complex experiments. The trend has been to develop custom software for each class of experiment, but custom software can have serious drawbacks. We describe here a general-purpose software tool for behavioral and electrophysiological studies, called MatOFF, that is especially suited for processing neurophysiological data gathered during the execution of complex behaviors. Written in the MATLAB programming language, MatOFF solves the problem of handling complex analysis requirements in a unique and powerful way. While other neurophysiological programs are either a loose collection of tools or append MATLAB as a post-processing step, MatOFF is an integrated environment that supports MATLAB scripting within the event search engine, safely isolated in a programming sandbox. The results from scripting are stored separately, but in parallel with the raw data, and are thus available to all subsequent MatOFF analysis and display processing. An example from a recently published experiment shows how all the features of MatOFF work together to analyze complex experiments and mine neurophysiological data in efficient ways. PMID:17604115

  6. Ringing load models verified against experiments

    SciTech Connect

    Krokstad, J.R.; Stansberg, C.T.

    1995-12-31

    What is believed to be the main reason for discrepancies between measured and simulated loads in previous studies has been assessed. The focus has been on the balance between second- and third-order load components in relation to what is called the "fat body" load correction. It is important to understand that the use of Morison strip theory in combination with second-order wave theory gives rise to second- as well as third-order components in the horizontal force. A proper balance between second- and third-order components in the horizontal force is regarded as the most central requirement for a sufficiently accurate ringing load model in irregular seas. It is also verified that simulated second-order components are largely overpredicted in both regular and irregular seas. Non-slender diffraction effects are important to incorporate in the FNV formulation in order to reduce the simulated second-order component and to match experiments more closely. A sufficiently accurate ringing simulation model using simplified methods is shown to be within close reach. Some further development and experimental verification must, however, be performed in order to take non-slender effects into account.
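
    For reference, Morison strip theory expresses the in-line force per unit length on a slender vertical cylinder in the standard form

```latex
f(z,t) = \rho\, C_M \,\frac{\pi D^2}{4}\, \dot{u} \;+\; \tfrac{1}{2}\,\rho\, C_D\, D\, u\,|u|
```

    where u is the horizontal water particle velocity, D the cylinder diameter, and C_M, C_D the inertia and drag coefficients; nonlinear wave kinematics inserted into this expression generate the higher-order force components discussed above.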

  7. Numerical modeling of Deep Impact experiment

    NASA Astrophysics Data System (ADS)

    Sultanov, V. G.; Kim, V. V.; Lomonosov, I. V.; Shutov, A. V.; Fortov, V. E.

    2007-06-01

    The Deep Impact active space experiment was performed [1,2] to study a hypervelocity collision of a metal impactor with the comet 9P/Tempel 1. Modeling of the impact on solid or porous ice leads to the following conclusions: the form and size of the crater depend strongly on the density of the comet material; the copper impactor does not melt and remains in the solid state; and the temperature of the ejecta varies from 5000 K for solid ice to 15000 K for porous ice. The impact on moist, water-saturated sand gave different results. In this case, the copper impactor practically does not penetrate the comet surface; it melts, is destroyed, and a ricochet process takes place. In the case of moist porous sand, the produced crater is stretched in the direction of impact. The analysis of the modeling results indicates the presence of volatile, easily vaporized chemical compounds in the cometary surface. The hypothesis that the cometary surface consists of only ice does not agree with experimental and computational data on the formation and spreading of impact ejecta. [1] http://deepimpact.jpl.nasa.gov/home/index.html [2] M. F. A'Hearn et al, Deep Impact: Excavating Comet Tempel 1 // Science, 2005, v.310, pp. 258-264

  8. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.

    PubMed

    Taha, Mohamed; Khan, Imran; Coutinho, João A P

    2016-04-01

    With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against the oxidative damage implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid as the primary ligand and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2±0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted using the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). Spectroscopic UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions. PMID:26827296

  9. Multiscale Model for the Assembly Kinetics of Protein Complexes.

    PubMed

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2016-02-01

    The assembly of proteins into high-order complexes is a general mechanism for these biomolecules to implement their versatile functions in cells. Natural evolution has developed various assembling pathways for specific protein complexes to maintain their stability and proper activities. Previous studies have provided numerous examples of the misassembly of protein complexes leading to severe biological consequences. Although the research focusing on protein complexes has started to move beyond the static representation of quaternary structures to the dynamic aspect of their assembly, the current understanding of the assembly mechanism of protein complexes is still largely limited. To tackle this problem, we developed a new multiscale modeling framework. This framework combines a lower-resolution rigid-body-based simulation with a higher-resolution Cα-based simulation method so that protein complexes can be assembled with both structural details and computational efficiency. We applied this model to a homotrimer and a heterotetramer as simple test systems. Consistent with experimental observations, our simulations indicated very different kinetics between protein oligomerization and dimerization. The formation of protein oligomers is a multistep process that is much slower than dimerization but thermodynamically more stable. Moreover, we showed that even the same protein quaternary structure can have very diverse assembly pathways under different binding constants between subunits, which is important for regulating the functions of protein complexes. Finally, we revealed that the binding between subunits in a complex can be synergistically strengthened during assembly without considering allosteric regulation or conformational changes. Therefore, our model provides a useful tool to understand the general principles of protein complex assembly. PMID:26738810
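
    The kinetic contrast between dimerization and multistep oligomer assembly can be illustrated with a minimal mass-action sketch (sequential pathway A + A <-> A2, A2 + A <-> A3, assumed rate constants), integrated with a simple Euler loop; this is not the authors' multiscale framework. At early times the trimer lags the dimer, consistent with multistep assembly being slower.

```python
def assemble(k_on=1.0, k_off=0.1, a0=1.0, dt=1e-3, steps=20000):
    """Euler integration of mass-action kinetics for a sequential pathway:
    A + A <-> A2 (dimerization), A2 + A <-> A3 (trimer completion)."""
    a, a2, a3 = a0, 0.0, 0.0
    for _ in range(steps):
        r1 = k_on * a * a - k_off * a2      # net rate of A + A <-> A2
        r2 = k_on * a2 * a - k_off * a3     # net rate of A2 + A <-> A3
        a += (-2 * r1 - r2) * dt
        a2 += (r1 - r2) * dt
        a3 += r2 * dt
    return a2, a3

dimer, trimer = assemble()
print(f"after integration: [A2] = {dimer:.3f}, [A3] = {trimer:.3f}")
```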

  10. Submarine sand volcanos: experiments and numerical modelling

    NASA Astrophysics Data System (ADS)

    Philippe, P.; Ngoma, J.; Delenne, J.

    2012-12-01

    Fluid overpressure at the bottom of a soil layer may generate fracturing along preferential paths in a cohesive material. The case of sandy soils is rather different: a significant internal flow is allowed within the material and can potentially induce hydro-mechanical instabilities, the most common example of which is fluidization. Many works have been devoted to fluidization, but very few have addressed the issue of the initiation and development of a fluidized zone inside a granular bed, prior to entire fluidization of the medium. In this contribution, we report experimental results and numerical simulations on a model system of immersed sand volcanos generated by a localized upward spring of liquid, injected at constant flow-rate at the bottom of a granular layer. Such a localized state of fluidization is relevant for some industrial processes (spouted beds, maintenance of navigable waterways, …) and for several geological issues (kimberlite volcano conduits, fluid venting, oil recovery in sandy soil, …). More precisely, what is presented here is a comparison between experiments, carried out by direct visualization throughout the medium, and numerical simulations, based on DEM modelling of the grains coupled to a resolution of the Navier-Stokes equations in the liquid phase (LBM). There is a very good agreement between the experimental phenomenology and the simulation results. When the flow-rate is increased, three regimes are successively observed: static bed, fluidized cavity that does not extend to the top of the layer, and finally fluidization over the entire height of the layer, which creates a fluidized chimney. A very strong hysteretic effect is present here, with an extended range of stability for fluidized cavities when the flow-rate is decreased back. This can be interpreted in terms of force chains and arches. The influences of grain diameter, layer height and injection width are studied and interpreted using a model previously developed by Zoueshtiagh [1]. Finally, growing rate of the fluidized zone and