Science.gov

Sample records for complexes model experiments

  1. Synchronization Experiments With A Global Coupled Model of Intermediate Complexity

    NASA Astrophysics Data System (ADS)

    Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin

    2013-04-01

    In the super modeling approach, an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solutions of all the other models in the ensemble. The goal is to obtain, through a proper choice of connection strengths, a synchronized state that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted averages of the surface fluxes. In particular, we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.
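
    The nudging construction described above can be illustrated with a toy example (not the authors' model): two imperfect Lorenz-63 systems, with perturbed parameters standing in for model error, are coupled by mutual nudging terms and their ensemble mean is compared against a "true" trajectory. All parameter values below are hypothetical.

```python
import numpy as np

def lorenz(state, sigma, rho, beta):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

TRUTH = (10.0, 28.0, 8.0 / 3.0)    # "true system"
MODEL_A = (13.0, 26.0, 8.0 / 3.0)  # imperfect model A (illustrative errors)
MODEL_B = (7.0, 30.0, 8.0 / 3.0)   # imperfect model B
C = 1.0                            # connection (nudging) strength
dt = 0.001

xt = np.array([1.0, 1.0, 1.0])
xa, xb = xt + 0.1, xt - 0.1

for _ in range(200_000):
    xt = xt + dt * lorenz(xt, *TRUTH)
    # Each imperfect model is nudged toward the state of the other one.
    xa = xa + dt * (lorenz(xa, *MODEL_A) + C * (xb - xa))
    xb = xb + dt * (lorenz(xb, *MODEL_B) + C * (xa - xb))

print("synchronization error |xa - xb|:", np.linalg.norm(xa - xb))
print("ensemble-mean error:", np.linalg.norm(0.5 * (xa + xb) - xt))
```

    With C = 0 the two imperfect models drift apart on their own attractors; a sufficiently strong C keeps them synchronized, which is the regime the super modeling approach relies on.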

  2. Historical and idealized climate model experiments: an intercomparison of Earth system models of intermediate complexity

    NASA Astrophysics Data System (ADS)

    Eby, M.; Weaver, A. J.; Alexander, K.; Zickfeld, K.; Abe-Ouchi, A.; Cimatoribus, A. A.; Crespin, E.; Drijfhout, S. S.; Edwards, N. R.; Eliseev, A. V.; Feulner, G.; Fichefet, T.; Forest, C. E.; Goosse, H.; Holden, P. B.; Joos, F.; Kawamiya, M.; Kicklighter, D.; Kienert, H.; Matsumoto, K.; Mokhov, I. I.; Monier, E.; Olsen, S. M.; Pedersen, J. O. P.; Perrette, M.; Philippon-Berthier, G.; Ridgwell, A.; Schlosser, A.; Schneider von Deimling, T.; Shaffer, G.; Smith, R. S.; Spahni, R.; Sokolov, A. P.; Steinacher, M.; Tachiiri, K.; Tokos, K.; Yoshimori, M.; Zeng, N.; Zhao, F.

    2013-05-01

    Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate-carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate-carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the

  3. Model complexity in carbon sequestration: A design of experiment and response surface uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, S.

    2014-12-01

    Geologic carbon sequestration (GCS) is proposed for the Nugget Sandstone in the Moxa Arch, a regional saline aquifer with a large storage potential. For the proposed storage site, this study builds a suite of increasingly complex conceptual "geologic" model families, using subsets of the site characterization data: a homogeneous model family, a stationary petrophysical model family, a stationary facies model family with sub-facies petrophysical variability, and a non-stationary facies model family (with sub-facies variability) conditioned to soft data. These families, representing alternative conceptual site models built with increasing data, were simulated with the same CO2 injection test (50 years at 1/10 Mt per year), followed by 2950 years of monitoring. Using the Design of Experiment, an efficient sensitivity analysis (SA) is conducted for all families, systematically varying uncertain input parameters. Results are compared among the families to identify parameters that have a first-order impact on predicting the CO2 storage ratio (SR) at both the end of injection and the end of monitoring. At this site, geologic modeling factors do not significantly influence the short-term prediction of the storage ratio, although they become important over the monitoring period, but only for those families where such factors are accounted for. Based on the SA, a response surface analysis is conducted to generate prediction envelopes of the storage ratio, which are compared among the families at both times. Results suggest a large uncertainty in the predicted storage ratio given the uncertainties in model parameters and modeling choices: SR varies from 5-60% (end of injection) to 18-100% (end of monitoring), although its variation among the model families is relatively minor. Moreover, long-term leakage risk is considered small at the proposed site. In the lowest-SR scenarios, all families predict gravity-stable supercritical CO2 migrating toward the bottom of the aquifer. In the highest
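
    The workflow this abstract names, a designed experiment over uncertain inputs followed by a fitted response surface used as a cheap surrogate for prediction envelopes, can be sketched as follows. The response function, the three factors and the design are hypothetical stand-ins for the reservoir simulator.

```python
import itertools
import numpy as np

def storage_ratio(x):
    """Toy stand-in for the reservoir simulator (hypothetical response)."""
    k, phi, aniso = x
    return 0.4 + 0.15 * k - 0.08 * phi + 0.05 * k * aniso + 0.02 * phi ** 2

# Two-level full factorial design in scaled units, plus a center point.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)) + [[0.0] * 3])
y = np.array([storage_ratio(x) for x in design])

def features(x):
    """Main effects plus two-factor interactions."""
    k, phi, aniso = x
    return [1.0, k, phi, aniso, k * phi, k * aniso, phi * aniso]

X = np.array([features(x) for x in design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# The fitted surface is cheap to evaluate, so a prediction envelope can be
# generated by sampling the uncertain inputs densely.
samples = np.random.uniform(-1.0, 1.0, size=(10_000, 3))
preds = np.array([features(s) for s in samples]) @ coef
print("storage-ratio envelope: %.3f to %.3f" % (preds.min(), preds.max()))
```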

  4. Photochemistry of iron(III)-carboxylato complexes in aqueous atmospheric particles - Laboratory experiments and modeling studies

    NASA Astrophysics Data System (ADS)

    Weller, C.; Tilgner, A.; Herrmann, H.

    2010-12-01

    Iron is always present in the atmosphere, in concentrations from ~10^-9 M (clouds, rain) up to ~10^-3 M (fog, particles). Sources are mainly mineral dust emissions. Iron complexes are very good absorbers in the UV-VIS actinic region and are therefore photochemically reactive. Iron complex photolysis leads to radical production and can initiate radical chain reactions, which is related to the oxidizing capacity of the atmosphere. These radical chain reactions are involved in the decomposition and transformation of a variety of chemical compounds in cloud droplets and deliquescent particles. Additionally, the photochemical reaction itself can be a degradation pathway for organic compounds with the ability to bind iron. Iron complexes of atmospherically relevant coordination compounds like oxalate, malonate, succinate, glutarate, tartronate, gluconate, pyruvate and glyoxylate have been investigated in laboratory experiments. Iron speciation depends on the iron-ligand ratio and the pH. The most suitable experimental conditions were calculated with a speciation program (Visual Minteq). The solutions were prepared accordingly, transferred to a 1 cm quartz cuvette and flash-photolyzed with an excimer laser at wavelengths of 308 or 351 nm. Photochemically produced Fe2+ was measured by spectrometry at 510 nm as Fe(phenanthroline)3(2+). Overall effective quantum yields for Fe2+ were calculated from the concentration of photochemically produced Fe2+ and the measured energy of the excimer laser pulse. The laser pulse energy was measured with a pyroelectric sensor. For some iron-carboxylate systems, experimental parameters like the oxygen content of the solution, the initial iron concentration and the incident laser energy were systematically varied to observe their effect on the overall quantum yield. The dependence of some quantum yields on these parameters allows in some cases an interpretation of the underlying photochemical reaction mechanism. Quantum yields of malonate
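
    The quantum-yield arithmetic described here (moles of Fe2+ produced per mole of photons absorbed, with the photon count derived from the measured pulse energy and wavelength) is sketched below. The numerical inputs are purely illustrative, not values from these experiments.

```python
H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro constant, 1/mol

def photons_mol(pulse_energy_j, wavelength_m):
    """Moles of photons in a laser pulse of the given energy."""
    return pulse_energy_j / (H * C / wavelength_m) / N_A

def effective_quantum_yield(fe2_mol, pulse_energy_j, wavelength_m, f_abs):
    """Fe(II) formed per photon absorbed; f_abs = absorbed fraction."""
    return fe2_mol / (photons_mol(pulse_energy_j, wavelength_m) * f_abs)

# Illustrative numbers only: a 100 mJ pulse at 308 nm, half of it absorbed,
# producing 5 nmol of Fe(II) in the cuvette.
print(effective_quantum_yield(5e-9, 0.1, 308e-9, 0.5))
```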

  5. A business process modeling experience in a complex information system re-engineering.

    PubMed

    Bernonville, Stéphanie; Vantourout, Corinne; Fendeler, Geneviève; Beuscart, Régis

    2013-01-01

    This article aims to share a business process modeling experience in a re-engineering project of a medical records department in a 2,965-bed hospital. It presents the modeling strategy, an extract of the results and the feedback experience. PMID:23920743

  6. A consistent model for surface complexation on birnessite (δ-MnO2) and its application to a column experiment

    NASA Astrophysics Data System (ADS)

    Appelo, C. A. J.; Postma, D.

    1999-10-01

    Available surface complexation models for birnessite required the inclusion of bidentate bonds or the adsorption of cation-hydroxy complexes to account for experimentally observed H+/Mm+ exchange. These models contain inconsistencies and therefore the surface complexation on birnessite was re-examined. Structural data on birnessite indicate that sorption sites are located on three oxygens around a vacancy in the octahedral layer. The three oxygens together carry a charge of -2, i.e., constitute a doubly charged sorption site. Therefore a new surface complexation model was formulated using a doubly charged, diprotic, sorption site where divalent cations adsorbing via inner-sphere complexes bind to the three oxygens. Using the diprotic site concept we have remodeled the experimental data for sorption on birnessite by Murray (1975) using the surface complexation model of Dzombak and Morel (1990). Intrinsic constants for the surface complexation model were obtained with the non-linear optimization program PEST in combination with a modified version of PHREEQC (Parkhurst, 1995). The optimized model was subsequently tested against independent data sets for synthetic birnessite by Balistrieri and Murray (1982) and Wang et al. (1996). It was found to describe the experimental data well. Finally the model was tested against the results of column experiments where cations adsorbed on natural MnO2 coated sand. In this case as well, the diprotic surface complexation model gave an excellent description of the experimental results.
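
    As a minimal sketch of what a doubly charged, diprotic sorption site implies, the fractions of the three protonation states of a site >XO2 can be written with standard diprotic mass-action expressions. The intrinsic constants below are hypothetical placeholders, and the electrostatic correction of the full model is omitted.

```python
import numpy as np

# Hypothetical intrinsic constants for the diprotic site (illustrative only):
#   >XO2H2  <=>  >XO2H- + H+    (Ka1)
#   >XO2H-  <=>  >XO2(2-) + H+  (Ka2)
PKA1, PKA2 = 2.0, 6.0

def site_fractions(pH):
    """Fractions of the fully protonated, singly protonated and
    fully deprotonated forms of the diprotic site."""
    h = 10.0 ** -pH
    ka1, ka2 = 10.0 ** -PKA1, 10.0 ** -PKA2
    denom = h * h + ka1 * h + ka1 * ka2
    return {">XO2H2": h * h / denom,
            ">XO2H-": ka1 * h / denom,
            ">XO2(2-)": ka1 * ka2 / denom}

for pH in (3.0, 5.0, 7.0, 9.0):
    print(pH, {k: round(v, 3) for k, v in site_fractions(pH).items()})
```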

  7. Complexity in forest fires: From simple experiments to nonlinear networked models

    NASA Astrophysics Data System (ADS)

    Buscarino, Arturo; Famoso, Carlo; Fortuna, Luigi; Frasca, Mattia; Xibilia, Maria Gabriella

    2015-05-01

    The evolution of natural phenomena in real environments often involves complex nonlinear interactions. Modeling them implies the characterization not only of a map of interactions and a dynamical process, but also of the peculiarity of the space in which the phenomena occur. The model presented in this paper encompasses all these aspects to formalize an innovative methodology to simulate the propagation of forest fires. It is based on a networked multilayer structure, allowing a flexible definition of the spatial properties of the medium and of the dynamical laws regulating fire propagation. The dynamical core of each node in the network is represented by a hyperbolic reaction-diffusion equation in which the intrinsic characteristics of tree ignition are considered. Furthermore, to define the simulation scenarios, an experimental setup has been realized in which the propagation of a fire wave in a small-scale medium can be observed. A number of simulations are then reported to illustrate the wide spectrum of scenarios which can be reproduced by the model.

  8. Cadmium sorption onto Natural Red Earth - An assessment using batch experiments and surface complexation modeling

    NASA Astrophysics Data System (ADS)

    Mahatantila, K.; Minoru, O.; Seike, Y.; Vithanage, M. S.

    2010-12-01

    Natural red earth (NRE), an iron-coated sand found in the northwestern part of Sri Lanka, was used to examine its retention behavior toward cadmium, a heavy metal postulated as a factor in chronic kidney disease in Sri Lanka. Adsorption was examined in batch experiments as a function of pH, ionic strength and initial cadmium loading. Proton binding sites on NRE were characterized by potentiometric titration, yielding a pH_zpc around 6.6. Cadmium adsorption increased from 6% to 99% as pH increased from 4 to 8.5, with maximum adsorption observed at pH greater than 7.5. The ionic strength dependence of cadmium adsorption over a 100-fold variation in NaNO3 indicates the dominance of an inner-sphere bonding mechanism for a 10-fold variation in initial cadmium loading (4.44 and 44.4 µmol/L). Adsorption edges were quantified with a 2-pK generalized diffuse double layer model considering two site types, >FeOH and >AlOH, for Cd2+ binding. From the modeling, we infer a monodentate chemical bonding mechanism for cadmium binding onto NRE; this finding was further verified with FTIR spectroscopy. The intrinsic constants determined were log K(FeOCd) = 8.543 and log K(AlOCd) = 13.917. Isotherm data imply the heterogeneity of the NRE surface, with sorption maxima of 9.418 x 10^-6 mol/g and 1.3 x 10^-4 mol/g for the Langmuir and Freundlich isotherm models. The study suggests the potential of NRE as a material for decontaminating environmental water polluted with cadmium.
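
    The two isotherms named at the end of this record have standard closed forms, q = q_max K C / (1 + K C) (Langmuir) and q = K_F C^(1/n) (Freundlich). The sketch below shows the fitting mechanics on synthetic data; it does not reproduce the NRE measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, qmax, k):
    return qmax * k * c / (1.0 + k * c)

def freundlich(c, kf, n):
    return kf * c ** (1.0 / n)

# Synthetic equilibrium data (concentration in mol/L, sorbed amount in mol/g).
c = np.array([1e-6, 5e-6, 1e-5, 5e-5, 1e-4])
q = np.array([1.1e-6, 3.8e-6, 5.6e-6, 8.4e-6, 9.1e-6])

(qmax, k), _ = curve_fit(langmuir, c, q, p0=[1e-5, 1e5])
(kf, n), _ = curve_fit(freundlich, c, q, p0=[1e-3, 2.0])
print("Langmuir:   qmax = %.3e mol/g, K = %.3e L/mol" % (qmax, k))
print("Freundlich: KF = %.3e, n = %.2f" % (kf, n))
```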

  9. Reduction of U(VI) Complexes by Anthraquinone Disulfonate: Experiment and Molecular Modeling

    SciTech Connect

    Ainsworth, C.C.; Wang, Z.; Rosso, K.M.; Wagnon, K.; Fredrickson, J.K.

    2004-03-17

    Past studies demonstrate that complexation will limit abiotic and biotic U(VI) reduction rates and the overall extent of reduction. However, the underlying basis for this behavior is not understood and presently unpredictable across species and ligand structure. The central tenets of these investigations are: (1) reduction of U(VI) follows the electron-transfer (ET) mechanism developed by Marcus; (2) the ET rate is the rate-limiting step in U(VI) reduction and is the step that is most affected by complexation; and (3) Marcus theory can be used to unify the apparently disparate U(VI) reduction rate data and as a computational tool to construct a predictive relationship.

  10. The Mentoring Relationship as a Complex Adaptive System: Finding a Model for Our Experience

    ERIC Educational Resources Information Center

    Jones, Rachel; Brown, Dot

    2011-01-01

    Mentoring theory and practice have evolved significantly during the past 40 years. Early mentoring models were characterized by the top-down flow of information and benefits to the protégé. This framework was reconceptualized as a reciprocal model when scholars realized mentoring was a mutually beneficial process. Recently, in response to rapidly…

  11. Experiments on Cryogenic Complex Plasma

    SciTech Connect

    Ishihara, O.; Sekine, W.; Kubota, J.; Uotani, N.; Chikasue, M.; Shindo, M.

    2009-11-10

    Experiments on a cryogenic complex plasma have been performed. Preliminary experiments include the production of a plasma in liquid helium or in cryogenic helium gas by a pulsed discharge. Extended production of a plasma has been realized in the vapor of liquid helium or in cryogenic helium gas by rf discharge. The charge of dust particles injected into such a plasma has been studied in detail.

  12. Concept model of the formation process of humic acid-kaolin complexes deduced by trichloroethylene sorption experiments and various characterizations.

    PubMed

    Zhu, Xiaojing; He, Jiangtao; Su, Sihui; Zhang, Xiaoliang; Wang, Fei

    2016-05-01

    To explore the interactions between soil organic matter and minerals, humic acid (HA, as organic matter), kaolin (as a mineral component) and Ca(2+) (as metal ions) were used to prepare HA-kaolin and Ca-HA-kaolin complexes. These complexes were used in trichloroethylene (TCE) sorption experiments and various characterizations. Interactions between HA and kaolin during the formation of their complexes were confirmed by the obvious differences between the Qe (experimental sorbed TCE) and Qe_p (predicted sorbed TCE) values of all detected samples. The partition coefficient kd obtained for the different samples indicated that both the organic content (f_om) and Ca(2+) could significantly impact the interactions. Based on the experimental results and various characterizations, a concept model was developed. In the absence of Ca(2+), HA molecules first patched onto charged sites of the kaolin surfaces, filling the pores. Subsequently, as the HA content increased and the first HA layer reached saturation, an outer layer of HA began to form, compressing the inner HA layer. As HA loading continued, the second layer reached saturation, such that an outer third layer began to form, compressing the inner layers. In the presence of Ca(2+), which not only promotes kaolin self-aggregation but also boosts HA attachment to kaolin, HA molecules were first surrounded by kaolin. Subsequently, first and second layers formed (with inner layer compression) via the same process as described above in the absence of Ca(2+), except that the second layer continued to load rather than reach saturation within the investigated conditions, because of enhanced HA aggregation caused by Ca(2+). PMID:26933902

  13. Complex matrix model duality

    SciTech Connect

    Brown, T. W.

    2011-04-15

    The same complex matrix model calculates both tachyon scattering for the c=1 noncritical string at the self-dual radius and certain correlation functions of operators which preserve half the supersymmetry in N=4 super-Yang-Mills theory. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich-Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces.

  14. Surface complexation modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...

  15. Numerical Modeling of Complex Targets for High-Energy-Density Experiments with Ion Beams and other Drivers

    DOE PAGES

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; Schenkel, Thomas; Barnard, John; Friedman, Alex; Eder, David; Fisher, Aaron; Masters, Nathan

    2016-03-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion beam induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.

  16. Numerical Modeling of Complex Targets for High-Energy-Density Experiments with Ion Beams and other Drivers

    NASA Astrophysics Data System (ADS)

    Koniges, Alice; Liu, Wangyi; Lidia, Steven; Schenkel, Thomas; Barnard, John; Friedman, Alex; Eder, David; Fisher, Aaron; Masters, Nathan

    2016-03-01

    We explore the simulation challenges and requirements for experiments planned on facilities such as the NDCX-II ion accelerator at LBNL, currently undergoing commissioning. Hydrodynamic modeling of NDCX-II experiments includes certain lower-temperature effects, e.g., surface tension and target fragmentation, that are not generally present in extreme high-energy laser facility experiments, where targets are completely vaporized in an extremely short period of time. Target designs proposed for NDCX-II range from metal foils of order one micron thick (thin targets) to metallic foam targets several tens of microns thick (thick targets). These high-energy-density experiments allow for the study of fracture as well as the process of bubble and droplet formation. We incorporate these physics effects into a code called ALE-AMR that uses a combination of Arbitrary Lagrangian Eulerian hydrodynamics and Adaptive Mesh Refinement. Inclusion of certain effects becomes tricky as we must deal with non-orthogonal meshes of various levels of refinement in three dimensions. A surface tension model used for droplet dynamics is implemented in ALE-AMR using curvature calculated from volume fractions. Thick foam target experiments provide information on how ion beam induced shock waves couple into kinetic energy of fluid flow. Although NDCX-II is not fully commissioned, experiments are being conducted that explore material defect production and dynamics.

  17. Modeling Complex Calorimeters

    NASA Technical Reports Server (NTRS)

    Figueroa-Feliciano, Enectali

    2004-01-01

    We have developed a software suite that models complex calorimeters in the time and frequency domains. These models can reproduce all the measurements that we currently make in a lab setting, like IV curves, impedance measurements, noise measurements, and pulse generation. Since all these measurements are modeled from one set of parameters, we can fully describe a detector and characterize its behavior. This leads to a model that can be used effectively for the engineering and design of detectors for particular applications.

  18. Computer Experiment on the Complex Behavior of a Two-Dimensional Cellular Automaton as a Phenomenological Model for an Ecosystem

    NASA Astrophysics Data System (ADS)

    Satoh, Kazuhiro

    1989-10-01

    Numerical studies are made on the complex behavior of a cellular automaton which serves as a phenomenological model for an ecosystem. The ecosystem is assumed to contain only three populations, i.e., a population of plants, of herbivores, and of carnivores. A two-dimensional region where organisms live is divided into square cells and the population density in each cell is regarded as a discrete variable. The influence of the physical environment and the interactions between organisms are reduced to a simple rule of cellular automaton evolution. It is found that the time dependent spatial distribution of organisms is, in general, very random and complex. However, under certain conditions, the self-organization of ordered patterns such as rotating spirals or concentric circles takes place. The relevance of the cellular automaton as a model for the ecosystem is discussed.
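
    The construction described, a 2D grid of cells each holding discrete plant, herbivore and carnivore densities updated by a simple local rule, can be sketched as follows. The update rule here is an illustrative invention, not the rule used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, LEVELS = 64, 4  # grid size; density levels are the integers 0..3

plants = rng.integers(0, LEVELS, (N, N))
herb = rng.integers(0, LEVELS, (N, N))
carn = rng.integers(0, LEVELS, (N, N))

def neighbor_sum(a):
    """Sum over the four von Neumann neighbours, periodic boundaries."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1))

for _ in range(100):
    # Plants regrow and are grazed; herbivores grow where plants are
    # plentiful and are eaten; carnivores grow where herbivores are
    # plentiful and die back otherwise.
    new_plants = np.clip(plants + 1 - (herb > 0), 0, LEVELS - 1)
    new_herb = np.clip(herb + (neighbor_sum(plants) > 4) - (carn > 0),
                       0, LEVELS - 1)
    new_carn = np.clip(carn + (neighbor_sum(herb) > 4) - 1, 0, LEVELS - 1)
    plants, herb, carn = new_plants, new_herb, new_carn

print("mean densities:", plants.mean(), herb.mean(), carn.mean())
```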

  19. Network Enrichment Analysis in Complex Experiments

    PubMed Central

    Shojaie, Ali; Michailidis, George

    2010-01-01

    Cellular functions of living organisms are carried out through complex systems of interacting components. Including such interactions in the analysis, and considering sub-systems defined by biological pathways instead of individual components (e.g. genes), can lead to new findings about complex biological mechanisms. Networks are often used to capture such interactions and can be incorporated in models to improve the efficiency in estimation and inference. In this paper, we propose a model for incorporating external information about interactions among genes (proteins/metabolites) in differential analysis of gene sets. We exploit the framework of mixed linear models and propose a flexible inference procedure for analysis of changes in biological pathways. The proposed method facilitates the analysis of complex experiments, including multiple experimental conditions and temporal correlations among observations. We propose an efficient iterative algorithm for estimation of the model parameters and show that the proposed framework is asymptotically robust to the presence of noise in the network information. The performance of the proposed model is illustrated through the analysis of gene expression data for environmental stress response (ESR) in yeast, as well as simulated data sets. PMID:20597848

  20. Modelling of Complex Plasmas

    NASA Astrophysics Data System (ADS)

    Akdim, Mohamed Reda

    2003-09-01

    Nowadays plasmas are used for various applications such as the fabrication of silicon solar cells, integrated circuits, coatings and dental cleaning. In the case of a processing plasma, e.g. for the fabrication of amorphous silicon solar cells, a mixture of silane and hydrogen gas is injected into a reactor. These gases are decomposed by making a plasma. A plasma with a low degree of ionization (typically 10^-5) is usually made in a reactor containing two electrodes driven by a radio-frequency (RF) power source in the megahertz range. Under the right circumstances the radicals, neutrals and ions can react further to produce nanometer-sized dust particles. The particles can stick to the surface and thereby contribute to a higher deposition rate. Another possibility is that the nanometer-sized particles coagulate and form larger, micron-sized particles. These particles obtain a high negative charge, due to their large radius, and are usually trapped in a radio-frequency plasma; the electric field present in the discharge sheaths causes the entrapment. Such plasmas are called dusty or complex plasmas. In this thesis numerical models are presented which describe dusty plasmas in reactive and nonreactive plasmas. We started with the development of a simple one-dimensional silane fluid model in which a dusty radio-frequency silane/hydrogen discharge is simulated. In the model, discharge quantities like the fluxes, densities and electric field are calculated self-consistently. A radius and an initial density profile for the spherical dust particles are given, and the charge and the density of the dust are calculated with an iterative method. During the transport of the dust, its charge is kept constant in time. The dust influences the electric field distribution through its charge and the density of the plasma through recombination of positive ions and electrons at its surface. In the model this process gives an extra production of silane radicals, since the growth of dust is

  1. Simulation of characteristics of thermal and hydrologic soil regimes in equilibrium numerical experiments with a climate model of intermediate complexity

    NASA Astrophysics Data System (ADS)

    Arzhanov, M. M.; Demchenko, P. F.; Eliseev, A. V.; Mokhov, I. I.

    2008-10-01

    The IAP RAS CM (Institute of Atmospheric Physics, Russian Academy of Sciences, climate model) has been extended to include a comprehensive scheme of thermal and hydrologic soil processes. In equilibrium numerical experiments with specified preindustrial and current concentrations of atmospheric carbon dioxide, the coupled model successfully reproduces thermal characteristics of soil, including the temperature of its surface, and seasonal thawing and freezing characteristics. On the whole, the model also reproduces soil hydrology, including the winter snow water equivalent and river runoff from large watersheds. Evapotranspiration from the soil surface and soil moisture are simulated somewhat worse. The equilibrium response of the model to a doubling of atmospheric carbon dioxide shows a considerable warming of the soil surface, a reduction in the extent of permanently frozen soils, and the general growth of evaporation from continents. River runoff increases at high latitudes and decreases in the subtropics. The results are in qualitative agreement with observational data for the 20th century and with climate model simulations for the 21st century.

  2. Debating complexity in modeling

    USGS Publications Warehouse

    Hunt, Randall J.; Zheng, Chunmiao

    1999-01-01

    As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.

  3. Model Experiments and Model Descriptions

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort previously started in the first workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present-day atmosphere was selected. The intent was that successful simulation of this set of measurements should become the prerequisite for accepting these models as giving reliable predictions of future ozone behavior. This section is divided into two parts: model experiments and model descriptions. For the model experiments, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part of this section gives brief descriptions of each model as provided by the individual modeling groups.

  4. Analysis of designed experiments with complex aliasing

    SciTech Connect

    Hamada, M.; Wu, C.F.J. )

    1992-07-01

    Traditionally, Plackett-Burman (PB) designs have been used in screening experiments for identifying important main effects. The PB designs whose run sizes are not a power of two have been criticized for their complex aliasing patterns, which according to conventional wisdom give confusing results. This paper goes beyond the traditional approach by proposing an analysis strategy that entertains interactions in addition to main effects. Based on the precepts of effect sparsity and effect heredity, the proposed procedure exploits the designs' complex aliasing patterns, thereby turning their 'liability' into an advantage. Demonstration of the procedure on three real experiments shows the potential for extracting important information available in the data that has, until now, been missed. Some limitations are discussed, and extensions to overcome them are given. The proposed procedure also applies to the more general mixed-level designs that have become increasingly popular.
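
    What "complex aliasing" means can be seen directly by computing column correlations in a 12-run Plackett-Burman design: every two-factor interaction column is partially correlated with the main-effect columns of all factors not in the interaction. The demonstration below is ours, not the paper's analysis.

```python
import numpy as np

# 12-run Plackett-Burman design: cyclic shifts of the standard generating
# row, plus a final row of -1s (11 two-level factors in 12 runs).
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
D = np.array([np.roll(gen, k) for k in range(11)] + [-np.ones(11, dtype=int)])

# Correlate the A*B interaction column with every main-effect column.
ab = D[:, 0] * D[:, 1]
corr = D.T @ ab / len(D)
print("correlation of A*B with each main effect:", corr)
# A and B themselves are orthogonal to A*B (correlation 0), but every
# other main effect is partially aliased with it (correlation +-1/3).
```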

  5. CFD [computational fluid dynamics] And Safety Factors. Computer modeling of complex processes needs old-fashioned experiments to stay in touch with reality.

    SciTech Connect

    Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.

    2012-10-07

    Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged, centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed. Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied

  6. Turbulence modeling and experiments

    NASA Technical Reports Server (NTRS)

    Shabbir, Aamir

    1992-01-01

    The best way of verifying turbulence models is to make a direct comparison between the various terms and their models. The success of this approach depends upon the availability of data for the exact correlations (both experimental and DNS). The other approach involves numerically solving the differential equations and then comparing the results with the data. The results of such a computation will depend upon the accuracy of all the modeled terms and constants. Because of this it is sometimes difficult to find the cause of a poor performance by a model. However, such a calculation is still meaningful in other ways, as it shows how a complete Reynolds stress model performs. Thirteen homogeneous flows are numerically computed using second-order closure models. We concentrate only on those models which use a linear (or quasi-linear) model for the rapid term. This therefore includes the Launder, Reece and Rodi (LRR) model; the isotropization of production (IP) model; and the Speziale, Sarkar, and Gatski (SSG) model. We examine which of the three models performs better, along with what their weaknesses are, if any. The other work reported deals with the experimental balances of the second-moment equations for a buoyant plume. Despite the tremendous amount of activity toward second-order closure modeling of turbulence, very little experimental information is available about the budgets of the second-moment equations. Part of the problem stems from our inability to measure the pressure correlations. However, if everything else appearing in these equations is known from the experiment, the pressure correlations can be obtained as the closing terms. This is the closest we can come to obtaining these terms from experiment, and despite the measurement errors which might be present in such balances, the resulting information will be extremely useful for turbulence modelers. The purpose of this part of the work was to provide such balances of the Reynolds stress and heat

  7. Modeling complexity in biology

    NASA Astrophysics Data System (ADS)

    Louzoun, Yoram; Solomon, Sorin; Atlan, Henri; Cohen, Irun. R.

    2001-08-01

    Biological systems, unlike physical or chemical systems, are characterized by the very inhomogeneous distribution of their components. The immune system, in particular, is notable for self-organizing its structure. Classically, the dynamics of natural systems have been described using differential equations. But, differential equation models fail to account for the emergence of large-scale inhomogeneities and for the influence of inhomogeneity on the overall dynamics of biological systems. Here, we show that a microscopic simulation methodology enables us to model the emergence of large-scale objects and to extend the scope of mathematical modeling in biology. We take a simple example from immunology and illustrate that the methods of classical differential equations and microscopic simulation generate contradictory results. Microscopic simulations generate a more faithful approximation of the reality of the immune system.

  8. Hydraulic Fracturing Mineback Experiment in Complex Media

    NASA Astrophysics Data System (ADS)

    Green, S. J.; McLennan, J. D.

    2012-12-01

    Hydraulic fracturing (or "fracking") for the recovery of gas and liquids from tight shale formations has gained much attention. This operation, which involves horizontal well drilling and massive hydraulic fracturing, has been developed over the last decade to produce fluids from extremely low permeability mudstone and siltstone rocks with high organic content. Nearly thirteen thousand wells, and about one hundred and fifty thousand stages within those wells, were fractured in the US in 2011. This operation has proven to be successful, attracting hundreds of billions of dollars of investment, and has produced an abundance of natural gas and is making billions of barrels of hydrocarbon liquids available for the US. But even with this commercial success, relatively little is clearly known about the complexity--or lack of complexity--of the hydraulic fracture, the extent to which the newly created surface area contacts the high Reservoir Quality rock, or the connectivity and conductivity of the hydraulic fractures created. To better understand these phenomena in order to improve efficiency, a large-scale mine-back experiment is in progress. The mine-back experiment is a full-scale hydraulic fracture carried out in a well-characterized environment, with comprehensive instrumentation deployed to measure fracture growth. A tight shale mudstone setting was selected, near the edge of a formation where the elevation changes by one to two thousand feet. From the top of the formation, drilling, well logging, and hydraulic fracture pumping will occur. From the bottom of the formation, a horizontal tunnel will be mined into the rock formation towards the drilled well using conventional mining techniques. Certain instrumentation will be located within this tunnel for observations during the hydraulic fracturing. After the hydraulic fracturing, the tunnel will be extended toward the well, with careful mapping of the created hydraulic fracture. Fracturing fluid will be

  9. Surface complexation modeling of groundwater arsenic mobility: Results of a forced gradient experiment in a Red River flood plain aquifer, Vietnam

    NASA Astrophysics Data System (ADS)

    Jessen, Søren; Postma, Dieke; Larsen, Flemming; Nhan, Pham Quy; Hoa, Le Quynh; Trang, Pham Thi Kim; Long, Tran Vu; Viet, Pham Hung; Jakobsen, Rasmus

    2012-12-01

    Three surface complexation models (SCMs) developed for, respectively, ferrihydrite, goethite and sorption data for a Pleistocene oxidized aquifer sediment from Bangladesh were used to explore the effect of multicomponent adsorption processes on As mobility in a reduced Holocene floodplain aquifer along the Red River, Vietnam. The SCMs for ferrihydrite and goethite yielded very different results. The ferrihydrite SCM favors As(III) over As(V) and has carbonate and silica species as the main competitors for surface sites. In contrast, the goethite SCM has a greater affinity for As(V) over As(III), while PO4(3-) and Fe(II) form the predominant surface species. The SCM for Pleistocene aquifer sediment resembles most the goethite SCM but shows more Si sorption. Compiled As(III) adsorption data for Holocene sediment was also well described by the SCM determined for Pleistocene aquifer sediment, suggesting a comparable As(III) affinity of Holocene and Pleistocene aquifer sediments. A forced gradient field experiment was conducted in a bank aquifer adjacent to a tributary channel to the Red River, and the passage in the aquifer of mixed groundwater containing up to 74% channel water was observed. The concentrations of As (<0.013 μM) and major ions in the channel water are low compared to those in the pristine groundwater in the adjacent bank aquifer, which had an As concentration of ~3 μM. Calculations for conservative mixing of channel and groundwater could explain the observed variation in concentration for most elements. However, the mixed waters did contain an excess of As(III), PO4(3-) and Si, which is attributed to desorption from the aquifer sediment. The three SCMs were tested on their ability to model the desorption of As(III), PO4(3-) and Si. Qualitatively, the ferrihydrite SCM correctly predicts desorption for As(III), but for Si and PO4(3-) it predicts an increased adsorption instead of desorption. The goethite SCM correctly predicts desorption of both As(III) and PO4(3-)

  10. Percolation experiments in complex fractal media

    NASA Astrophysics Data System (ADS)

    Redondo, Jose Manuel; Tarquis, Ana Maria; Cherubini, Claudia; Lopez Gzlez-Nieto, Pilar; Vila, Teresa

    2013-04-01

    A series of flow percolation experiments under gravity was performed on different glass model and real karstic media samples. We present a multifractal characterization of the experiments in terms of several non-dimensional parametric flow descriptors, using the maximum local multifractal dimension as an additional flow indicator. Experiments on non-laminar flow and transport conditions in fractured and karstified media were also performed at Bari. The investigation of the hypotheses of non-linear flow and non-Fickian transport in fractured aquifers led to a distinction between the different roles of channels and microchannels and of the presence of vortices and eddy trapping. The dominance of the elongated channels produced early arrival times, with the solute traveling along the high-velocity channel network. On the other hand, in a lumped, structured karstic medium, the percolation flow produced long tails, with local eddy mixing, entrapment in eddies, and slow flow out of the eddies. In the laboratory experiments performed in Madrid and in DAMTP Cambridge, the role of the initial pressure produced fractal pathway structures even in initially uniform ballotini substrates.
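
    A minimal sketch of the kind of multifractal descriptor used here: generalized dimensions D_q estimated from the box-counting scaling of a normalized 2D measure. The field below is synthetic; the experimental flow fields are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
field = rng.random((N, N)) ** 4   # synthetic, intermittent "flow" measure
field /= field.sum()

def generalized_dimension(mu, q, sizes=(2, 4, 8, 16, 32)):
    """D_q from the scaling of the partition sum over box sizes (q != 1)."""
    log_eps, log_z = [], []
    for s in sizes:
        # Coarse-grain the measure into boxes of s x s pixels.
        p = mu.reshape(mu.shape[0] // s, s, mu.shape[1] // s, s).sum(axis=(1, 3))
        p = p[p > 0]
        log_eps.append(np.log(s / mu.shape[0]))  # box size relative to domain
        log_z.append(np.log(np.sum(p ** q)))
    tau = np.polyfit(log_eps, log_z, 1)[0]       # mass exponent tau(q)
    return tau / (q - 1.0)

for q in (0.5, 2.0, 3.0):
    print("D_%.1f ~ %.3f" % (q, generalized_dimension(field, q)))
```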

  11. Debris flows: Experiments and modelling

    NASA Astrophysics Data System (ADS)

    Turnbull, Barbara; Bowman, Elisabeth T.; McElwaine, Jim N.

    2015-01-01

    Debris flows and debris avalanches are complex, gravity-driven currents of rock, water and sediments that can be highly mobile. This combination of component materials leads to a rich morphology and unusual dynamics, exhibiting features of both granular materials and viscous gravity currents. Although extreme events such as those at Kolka Karmadon in North Ossetia (2002) [1] and Huascarán (1970) [2] strongly motivate us to understand how such high levels of mobility can occur, smaller events are ubiquitous and capable of endangering infrastructure and life, requiring mitigation. Recent progress in modelling debris flows has seen the development of multiphase models that can start to provide clues of the origins of the unique phenomenology of debris flows. However, the spatial and temporal variations that debris flows exhibit make this task challenging and laboratory experiments, where boundary and initial conditions can be controlled and reproduced, are crucial both to validate models and to inspire new modelling approaches. This paper discusses recent laboratory experiments on debris flows and the state of the art in numerical models.

  12. Modelling Canopy Flows over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.

    2016-06-01

    Recent studies of flow over forested hills have been motivated by a number of important applications including understanding CO_2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland with complex terrain and heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.

  13. "Computational Modeling of Actinide Complexes"

    SciTech Connect

    Balasubramanian, K

    2007-03-07

    We will present our recent studies on the computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to the environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the results for free energies in solution and the mechanism of deprotonation have been topics of considerable uncertainty. We have computed the deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO2(H2O)5(2+), NpO2(H2O)6(+), and PuO2(H2O)5(2+) complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution for the first time agrees with the experiment (7.1 kcal

  14. Complex Networks in Psychological Models

    NASA Astrophysics Data System (ADS)

    Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.

    We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes through a neurocomputational substrate. These models are examples of real-world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain the development of cortical map structure and the dynamics of memory access, and to unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns in the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.

  15. X-ray Absorption Spectroscopy and Coherent X-ray Diffraction Imaging for Time-Resolved Investigation of the Biological Complexes: Computer Modelling towards the XFEL Experiment

    NASA Astrophysics Data System (ADS)

    Bugaev, A. L.; Guda, A. A.; Yefanov, O. M.; Lorenz, U.; Soldatov, A. V.; Vartanyants, I. A.

    2016-05-01

    The development of the next generation of synchrotron radiation sources - free electron lasers - is approaching the point of becoming an effective tool for time-resolved experiments aimed at solving actual problems in various fields such as chemistry, biology, medicine, etc. In order to demonstrate how these experiments may be performed for real systems to obtain information at the atomic and macromolecular levels, we have performed a molecular dynamics computer simulation combined with quantum chemistry calculations for the human phosphoglycerate kinase enzyme with an Mg-containing substrate. The simulated structures were used to calculate coherent X-ray diffraction patterns, reflecting the conformational state of the enzyme, and Mg K-edge X-ray absorption spectra, which depend on the local structure of the substrate. These two techniques give complementary information, making such an approach highly effective for the time-resolved investigation of various biological complexes, such as metalloproteins or enzymes with metal-containing substrates, to obtain information about both the metal-containing active site or substrate and the atomic structure of each conformation.

  16. Modeling Wildfire Incident Complexity Dynamics

    PubMed Central

    Thompson, Matthew P.

    2013-01-01

    Wildfire management in the United States and elsewhere is challenged by substantial uncertainty regarding the location and timing of fire events, the socioeconomic and ecological consequences of these events, and the costs of suppression. Escalating U.S. Forest Service suppression expenditures are of particular concern at a time of fiscal austerity, as swelling fire management budgets lead to decreases for non-fire programs, and as the likelihood of disruptive within-season borrowing potentially increases. Thus there is a strong interest in better understanding the factors influencing suppression decisions and, in turn, their influence on suppression costs. As a step in that direction, this paper presents a probabilistic analysis of geographic and temporal variation in incident management team response to wildfires. The specific focus is incident complexity dynamics through time for fires managed by the U.S. Forest Service. The modeling framework is based on the recognition that large wildfire management entails recurrent decisions across time in response to changing conditions, which can be represented as a stochastic dynamic system. Daily incident complexity dynamics are modeled according to a first-order Markov chain, with containment represented as an absorbing state. A statistically significant difference in complexity dynamics between Forest Service Regions is demonstrated. Incident complexity probability transition matrices and expected times until containment are presented at national and regional levels. Results of this analysis can help improve understanding of geographic variation in incident management and associated cost structures, and can be incorporated into future analyses examining the economic efficiency of wildfire management. PMID:23691014
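
    The absorbing-Markov-chain machinery named here has a standard closed form: with Q the sub-matrix of daily transitions among the transient complexity states, the fundamental matrix N = (I - Q)^-1 gives the expected number of days until containment as the row sums of N. The transition probabilities below are hypothetical placeholders, not the paper's estimates.

```python
import numpy as np

# States 0-2: increasing incident complexity; state 3: contained (absorbing).
# Daily transition probabilities are illustrative only.
P = np.array([
    [0.80, 0.10, 0.02, 0.08],
    [0.05, 0.80, 0.10, 0.05],
    [0.02, 0.08, 0.87, 0.03],
    [0.00, 0.00, 0.00, 1.00],
])

Q = P[:3, :3]                      # transitions among transient states
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
t = N @ np.ones(3)                 # expected days until containment
for i, days in enumerate(t):
    print("starting in complexity state %d: %.1f days to containment" % (i, days))
```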

  17. Diagnosis in Complex Plasmas for Microgravity Experiments (PK-3 plus)

    SciTech Connect

    Takahashi, Kazuo; Hayashi, Yasuaki; Thomas, Hubertus M.; Morfill, Gregor E.; Ivlev, Alexei V.; Adachi, Satoshi

    2008-09-07

    Microgravity gives complex (dusty) plasmas in which dust particles are embedded in the completely charge-neutral region of the bulk plasma. The dust clouds, as an uncompressed, strongly coupled Coulomb system, correspond to an atomic model exhibiting several physical phenomena, such as crystallization and phase transitions. As these phenomena are tightly connected to the plasma state, it is important to understand plasma parameters such as electron density and temperature. The present work shows the electron density in the setup for microgravity experiments currently onboard the International Space Station.

  18. The Hidden Complexities of a "Simple" Experiment.

    ERIC Educational Resources Information Center

    Caplan, Jeremy B.; And Others

    1994-01-01

    Provides two experiments that do not give the expected results. One involves burning a candle in an air-filled beaker under water and the other burns the candle in pure oxygen. Provides methodology, suggestions, and theory. (MVL)

  19. Numerical Experiments In Strongly Coupled Complex (Dusty) Plasmas

    NASA Astrophysics Data System (ADS)

    Hou, L. J.; Ivlev, A.; Thomas, H. M.; Morfill, G. E.

    2010-07-01

    Complex (dusty) plasma is a suspension of micron-sized charged dust particles in a weakly ionized plasma with electrons, ions, and neutral atoms or molecules. Therein, dust particles acquire a few thousand electron charges by absorbing surrounding electrons and ions, and consequently interact with each other via a dynamically screened Coulomb potential while undergoing Brownian motion due primarily to frequent collisions with the neutral molecules. When the interaction potential energy between charged dust particles significantly exceeds their kinetic energy, they become strongly coupled and can form ordered structures comprising liquid and solid states. Since the motion of charged dust particles in complex (dusty) plasmas can be directly observed in real time by using a video camera, such systems have generally been regarded as promising model systems for studying, at the kinetic level, many phenomena occurring in solids, liquids and other strongly coupled systems, such as phase transitions, transport processes, and collective dynamics. Complex plasma physics has now grown into a mature research field with a very broad range of interdisciplinary facets. In addition to the usual experimental and theoretical study, computer simulation in complex plasma plays an important role in bridging experimental observations and theories and in understanding many interesting phenomena observed in the laboratory. The present talk will focus on a class of computer simulations that are usually non-equilibrium ones with external perturbation and that mimic real complex plasma experiments (i.e., numerical experiments). The simulation method, the so-called Brownian dynamics method, will first be reviewed, and then examples, such as simulations of heat transfer and shock wave propagation, will be presented.
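
    A minimal sketch of the Brownian dynamics update referred to above: an overdamped Langevin step for particles interacting through a screened Coulomb (Yukawa) potential, with friction against the neutral gas and thermal noise. All units and parameter values are dimensionless placeholders.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim = 100, 2
KAPPA = 1.0   # screening parameter
GAMMA = 1.0   # neutral-gas friction coefficient
KT = 0.01     # thermal energy of the neutral background
DT = 0.01

pos = rng.random((n, dim)) * 20.0

def yukawa_forces(pos, kappa):
    """Pairwise repulsive Yukawa forces, O(n^2) for clarity."""
    f = np.zeros_like(pos)
    for i in range(len(pos)):
        d = pos[i] - pos                    # displacements from all particles
        r = np.linalg.norm(d, axis=1)
        mask = r > 1e-9                     # exclude self-interaction
        # |F| = exp(-kappa r)(1 + kappa r)/r^2, directed along d/r.
        mag = np.exp(-kappa * r[mask]) * (1 + kappa * r[mask]) / r[mask] ** 3
        f[i] = np.sum(d[mask] * mag[:, None], axis=0)
    return f

# Overdamped Langevin step: dx = F/gamma dt + sqrt(2 kT dt / gamma) xi.
for _ in range(100):
    noise = rng.normal(size=pos.shape)
    pos += yukawa_forces(pos, KAPPA) / GAMMA * DT \
           + np.sqrt(2 * KT * DT / GAMMA) * noise

print("final mean position:", pos.mean(axis=0))
```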

  20. Explosion modelling for complex geometries

    NASA Astrophysics Data System (ADS)

    Nehzat, Naser

    A literature review suggested that the combined effects of fuel reactivity, obstacle density, ignition strength, and confinement result in flame acceleration and subsequent pressure build-up during a vapour cloud explosion (VCE). Models for the prediction of propagating flames in hazardous areas, such as coal mines, oil platforms, chemical storage and processing areas, etc., fall into two classes. One class involves the use of Computational Fluid Dynamics (CFD). This approach has been utilised by several researchers. The other approach relies upon a lumped-parameter approach as developed by Baker (1983). The former approach is restricted by the appropriateness of sub-models and the numerical stability requirements inherent in the computational solution. The latter approach raises significant questions regarding the validity of the simplifications involved in representing the complexities of a propagating explosion. This study was conducted to investigate and improve the Computational Fluid Dynamics (CFD) code EXPLODE, which was developed by Green et al. (1993) for use in practical gas explosion hazard assessments. The code employs a numerical method for solving partial differential equations using finite volume techniques. Verification exercises, involving comparison with analytical solutions for the classical shock tube and with experimental (small-, medium- and large-scale) results, demonstrate the accuracy of the code and the new combustion models but also identify differences between predictions and the experimental results. The project has resulted in a developed version of the code (EXPLODE2) with new combustion models for simulating gas explosions. Additional features of this program include the physical models necessary to simulate the combustion process using alternative combustion models, improvements to the numerical accuracy and robustness of the code, and special input for simulation of different gas explosions. The present code has the capability of
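
    The abstract does not give the numerics, but the finite volume idea it rests on is compact; a toy 1D first-order upwind scheme for linear advection (not the EXPLODE code) illustrates the conservative flux-difference update that such solvers generalise:

```python
# Toy finite-volume update: 1D linear advection with upwind fluxes and
# periodic boundaries. Parameters are illustrative.
import numpy as np

nx, a, cfl = 200, 1.0, 0.5
dx = 1.0 / nx
dt = cfl * dx / a
xc = (np.arange(nx) + 0.5) * dx
u = np.where(np.abs(xc - 0.3) < 0.1, 1.0, 0.0)   # square pulse

for _ in range(200):
    flux = a * u                                  # upwind flux for a > 0
    u -= dt / dx * (flux - np.roll(flux, 1))      # conservative cell update
```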

  1. Modelling biological complexity: a physical scientist's perspective

    PubMed Central

    Coveney, Peter V; Fowler, Philip W

    2005-01-01

    integration of molecular and more coarse-grained representations of matter. The scope of such integrative approaches to complex systems research is circumscribed by the computational resources available. Computational grids should provide a step jump in the scale of these resources; we describe the tools that RealityGrid, a major UK e-Science project, has developed together with our experience of deploying complex models on nascent grids. We also discuss the prospects for mathematical approaches to reducing the dimensionality of complex networks in the search for universal systems-level properties, illustrating our approach with a description of the origin of life according to the RNA world view. PMID:16849185

  2. Modelling biological complexity: a physical scientist's perspective.

    PubMed

    Coveney, Peter V; Fowler, Philip W

    2005-09-22

    integration of molecular and more coarse-grained representations of matter. The scope of such integrative approaches to complex systems research is circumscribed by the computational resources available. Computational grids should provide a step jump in the scale of these resources; we describe the tools that RealityGrid, a major UK e-Science project, has developed together with our experience of deploying complex models on nascent grids. We also discuss the prospects for mathematical approaches to reducing the dimensionality of complex networks in the search for universal systems-level properties, illustrating our approach with a description of the origin of life according to the RNA world view. PMID:16849185

  3. A Practical Philosophy of Complex Climate Modelling

    NASA Technical Reports Server (NTRS)

    Schmidt, Gavin A.; Sherwood, Steven

    2014-01-01

    We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.

  4. A physical interpretation of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Moayeri, MohamadMehdi; Pande, Saket

    2015-04-01

    It is intuitive that the instability of a hydrological system representation, in the sense of how perturbations in input forcings translate into perturbations in the hydrologic response, may depend on its hydrological characteristics. Responses of unstable systems are thus complex to model. We interpret complexity in this context and define complexity as a measure of instability in hydrological system representation. We provide algorithms to quantify model complexity in this context. We use the Sacramento soil moisture accounting model (SAC-SMA) parameterized for MOPEX basins and quantify the complexities of the corresponding models. Relationships between hydrologic characteristics of MOPEX basins, such as location, precipitation seasonality index, slope, hydrologic ratios, saturated hydraulic conductivity and NDVI, and the respective model complexities are then investigated. We hypothesize that the complexities of basin-specific SAC-SMA models correspond to the aforementioned hydrologic characteristics, thereby suggesting that model complexity, in the context presented here, may have a physical interpretation.

  5. Impacts of snow and glaciers over Tibetan Plateau on Holocene climate change: Sensitivity experiments with a coupled model of intermediate complexity

    NASA Astrophysics Data System (ADS)

    Jin, Liya; Ganopolski, Andrey; Chen, Fahu; Claussen, Martin; Wang, Huijun

    2005-09-01

    An Earth system model of intermediate complexity has been used to investigate the sensitivity of simulated global climate to gradually increased snow and glacier cover over the Tibetan Plateau for the last 9000 years (9 kyr). The simulations show that in the mid-Holocene, at about 6 kyr before present (BP), the imposed ice sheets over the Tibetan Plateau induce a strong decrease in summer precipitation in North Africa and South Asia, and an increase in Southeast Asia. The response of vegetation cover to the imposed ice sheets over the Tibetan Plateau is not synchronous in South Asia and in North Africa, showing an earlier and, hence, more rapid decrease in vegetation cover in North Africa from 9 to 6 kyr BP, while it has almost no influence on that in South Asia until 5 kyr BP. The simulation results suggest that the snow and glacier environment over the Tibetan Plateau is an important factor for Holocene climate variability in North Africa, South Asia and Southeast Asia.

  6. Analyzing Complex Metabolomic Networks: Experiments and Simulation

    NASA Astrophysics Data System (ADS)

    Steuer, R.; Kurths, J.; Fiehn, O.; Weckwerth, W.

    2002-03-01

    In recent years, remarkable advances in molecular biology have enabled us to measure the behavior of the complex regulatory networks underlying biological systems. In particular, high-throughput techniques, such as gene expression arrays, allow fast acquisition of a large number of simultaneously measured variables. Similar to gene expression, the analysis of metabolomic datasets results in a huge number of metabolite co-regulations: metabolites are the end products of cellular regulatory processes, and their levels can be regarded as the ultimate response to genetic or environmental changes. In this presentation we focus on the topological description of such networks, using both experimental data and simulations. In particular, we discuss the possibility of deducing novel links between metabolites, using concepts from (nonlinear) time series analysis and information theory.
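
    One simple way to score candidate links of this kind is a histogram estimate of the mutual information between two time series; the sketch below uses synthetic data and is only meant to illustrate the information-theoretic ingredient mentioned above:

```python
# Histogram-based mutual information between two series (in nats).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2000)                      # "metabolite A"
y = 0.7 * x + 0.3 * rng.normal(size=2000)      # correlated "metabolite B"

def mutual_information(x, y, bins=16):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

print(mutual_information(x, y))                # ~0 for independent series
```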

  7. Extracting Models in Single Molecule Experiments

    NASA Astrophysics Data System (ADS)

    Presse, Steve

    2013-03-01

    Single-molecule experiments can now monitor the journey of a protein from its assembly near a ribosome to its proteolytic demise. Ideally, all single-molecule data should be self-explanatory. However, data originating from single-molecule experiments are particularly challenging to interpret on account of fluctuations and noise at such small scales. Realistically, basic understanding comes from models carefully extracted from the noisy data. Statistical mechanics, and maximum entropy in particular, provide a powerful framework for accomplishing this task in a principled fashion. Here I will discuss our work in extracting conformational memory from single-molecule force spectroscopy experiments on large biomolecules. One clear advantage of this method is that we let the data tend towards the correct model; we do not fit the data. I will show that the dynamical model of the single-molecule dynamics which emerges from this analysis is often more textured and complex than could otherwise come from fitting the data to a preconceived model.

  8. Teacher Modeling Using Complex Informational Texts

    ERIC Educational Resources Information Center

    Fisher, Douglas; Frey, Nancy

    2015-01-01

    Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word-solving and comprehension strategies. Included is a planning guide for think-alouds.

  9. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high-quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot- and field-scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The premise of our study was that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy-driven land management practices and the value of sustainable resources to all stakeholders.

  10. Complex Aerosol Experiment in Western Siberia (April - October 2013)

    NASA Astrophysics Data System (ADS)

    Matvienko, G. G.; Belan, B. D.; Panchenko, M. V.; Romanovskii, O. A.; Sakerin, S. M.; Kabanov, D. M.; Turchinovich, S. A.; Turchinovich, Yu. S.; Eremina, T. A.; Kozlov, V. S.; Terpugova, S. A.; Pol'kin, V. V.; Yausheva, E. P.; Chernov, D. G.; Zuravleva, T. B.; Bedareva, T. V.; Odintsov, S. L.; Burlakov, V. D.; Arshinov, M. Yu.; Ivlev, G. A.; Savkin, D. E.; Fofonov, A. V.; Gladkikh, V. A.; Kamardin, A. P.; Balin, Yu. S.; Kokhanenko, G. P.; Penner, I. E.; Samoilova, S. V.; Antokhin, P. N.; Arshinova, V. G.; Davydov, D. K.; Kozlov, A. V.; Pestunov, D. A.; Rasskazchikova, T. M.; Simonenkov, D. V.; Sklyadneva, T. K.; Tolmachev, G. N.; Belan, S. B.; Shmargunov, V. P.

    2016-06-01

    The primary project objective was to carry out the Complex Aerosol Experiment, in which aerosol properties were measured in the near-ground layer and the free atmosphere. Three measurement cycles were performed during the project implementation: in the spring period (April), when the maximum of aerosol generation is observed; in summer (July), when the atmospheric boundary layer height and mixing layer height are maximal; and in late summer - early autumn (October), when the secondary particle nucleation period is recorded. Numerical calculations were compared with measurements of fluxes of downward solar radiation. It was shown that the relative differences between model and experimental values of the fluxes of direct and total radiation, on average, do not exceed 1% and 3%, respectively.

  11. Iron-Sulfur-Carbonyl and -Nitrosyl Complexes: A Laboratory Experiment.

    ERIC Educational Resources Information Center

    Glidewell, Christopher; And Others

    1985-01-01

    Background information, materials needed, procedures used, and typical results obtained, are provided for an experiment on iron-sulfur-carbonyl and -nitrosyl complexes. The experiment involved (1) use of inert atmospheric techniques and thin-layer and flexible-column chromatography and (2) interpretation of infrared, hydrogen and carbon-13 nuclear…

  12. Membrane associated complexes in calcium dynamics modelling

    NASA Astrophysics Data System (ADS)

    Szopa, Piotr; Dyzma, Michał; Kaźmierczak, Bogdan

    2013-06-01

    Mitochondria not only govern energy production, but are also involved in crucial cellular signalling processes. They are one of the most important organelles determining the Ca2+ regulatory pathway in the cell. Several mathematical models explaining these mechanisms have been constructed, but only a few of them describe the interplay between calcium concentrations in the endoplasmic reticulum (ER), cytoplasm and mitochondria. Experiments measuring calcium concentrations in mitochondria and the ER suggested the existence of cytosolic microdomains with locally elevated calcium concentration in the nearest vicinity of the outer mitochondrial membrane. These physical connections between the ER and mitochondria are called MAM (mitochondria-associated ER membrane) complexes. We propose a model with a direct calcium flow from the ER to mitochondria, which may be justified by the existence of MAMs, and perform a detailed numerical analysis of the effect of this flow on the type and shape of calcium oscillations. The model is partially based on the Marhl et al. model. We have found numerically that stable oscillations exist for a considerable set of parameter values. However, for some parameter sets the oscillations disappear and the trajectories of the model tend to a steady state with a very high calcium level in the mitochondria. This can be interpreted as an early step in an apoptotic pathway.
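
    The paper's equations are not reproduced above, but the compartment structure it describes can be sketched as a toy three-pool ODE system with an extra ER-to-mitochondria term standing in for the MAM flux (rates and linear form purely illustrative, far simpler than the actual model):

```python
# Linear three-pool exchange of Ca2+ between cytosol, ER and mitochondria,
# with a direct ER->mitochondria (MAM-like) flux. Rates are made up.
import numpy as np
from scipy.integrate import solve_ivp

k_pump, k_rel, k_in, k_out, k_mam = 1.0, 0.6, 0.4, 0.3, 0.2

def rhs(t, c):
    cyt, er, mito = c
    j_pump = k_pump * cyt      # uptake from cytosol into ER
    j_rel = k_rel * er         # ER release to cytosol
    j_in = k_in * cyt          # mitochondrial uptake from cytosol
    j_out = k_out * mito       # mitochondrial efflux to cytosol
    j_mam = k_mam * er         # direct ER -> mitochondria transfer
    return [j_rel + j_out - j_pump - j_in,
            j_pump - j_rel - j_mam,
            j_in + j_mam - j_out]          # total Ca2+ is conserved

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 1.0, 0.05])
print(sol.y[:, -1])            # late-time pool levels
```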

  13. Capturing Complexity through Maturity Modelling

    ERIC Educational Resources Information Center

    Underwood, Jean; Dillon, Gayle

    2004-01-01

    The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…

  14. Does increased hydrochemical model complexity decrease robustness?

    NASA Astrophysics Data System (ADS)

    Medici, C.; Wade, A. J.; Francés, F.

    2012-05-01

    The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained considering a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods of inconsistent good and poor fits between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases, over-parameterisation occurs; the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models, and increased complexity was therefore justifiable for modelling river-system hydrochemistry.
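
    The GLUE procedure itself is easy to sketch: sample parameters, keep the "behavioural" runs whose informal likelihood clears a threshold, and take percentile bounds across that ensemble. The toy below uses a one-parameter decay model rather than the nitrogen models of the study:

```python
# Schematic GLUE loop on a toy exponential-decay model.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
obs = np.exp(-2.0 * t) + rng.normal(0.0, 0.02, t.size)   # synthetic data

ks = rng.uniform(0.5, 4.0, 10_000)                   # Monte Carlo parameters
sims = np.exp(-np.outer(ks, t))                      # one run per sample
likelihood = 1.0 / ((sims - obs) ** 2).mean(axis=1)  # informal GLUE likelihood

behavioural = likelihood >= np.quantile(likelihood, 0.9)  # keep the best 10%
lower = np.percentile(sims[behavioural], 5, axis=0)       # 5% GLUE bound
upper = np.percentile(sims[behavioural], 95, axis=0)      # 95% GLUE bound
```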

  15. Complexity and Uncertainty in Soil Nitrogen Modeling

    NASA Astrophysics Data System (ADS)

    Ajami, N. K.; Gu, C.

    2009-12-01

    Model uncertainty is rarely considered in the field of biogeochemical modeling. The standard biogeochemical modeling approach is to proceed based on one selected model with the “right” complexity level chosen based on data availability. However, other plausible models can give dissimilar answers to the scientific question at hand using the same set of data. Relying on a single model can lead to underestimation of the uncertainty associated with the results and therefore to unreliable conclusions. A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models with multiple levels of complexity. The aim of this study is twofold: first, to explore the impact of a model's complexity level on the accuracy of the end results, and second, to introduce a probabilistic multi-model strategy in the context of a process-based biogeochemical model. We developed three different versions of a biogeochemical model, TOUGHREACT-N, with various complexity levels. Each of these models was calibrated against the observed data from a tomato field in Western Sacramento County, California, considering two different weighting sets on the objective function. In this way we created a set of six ensemble members. The Bayesian Model Averaging (BMA) approach was then used to combine these ensemble members, weighting each by the likelihood that the individual model is correct given the observations. The results clearly indicate the need to consider a multi-model ensemble strategy over single-model selection in biogeochemical modeling.
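
    Stripped of the calibration machinery, the BMA combination step reduces to likelihood-based weighting of member predictions; a minimal sketch (synthetic numbers, Gaussian errors, equal priors) follows:

```python
# Weight ensemble members by their Gaussian likelihood given observations,
# then form the weighted-average prediction. All numbers are illustrative.
import numpy as np

obs = np.array([1.0, 1.4, 0.9, 1.2])
preds = np.array([                 # three hypothetical model versions
    [1.1, 1.3, 1.0, 1.1],
    [0.8, 1.6, 0.7, 1.4],
    [1.0, 1.5, 0.9, 1.2],
])
sigma = 0.2                        # assumed observation-error scale

loglik = -0.5 * ((preds - obs) ** 2).sum(axis=1) / sigma**2
w = np.exp(loglik - loglik.max())
w /= w.sum()                       # posterior model weights (equal priors)
print(w, w @ preds)                # weights and the BMA mean prediction
```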

  16. Fock spaces for modeling macromolecular complexes

    NASA Astrophysics Data System (ADS)

    Kinney, Justin

    Large macromolecular complexes play a fundamental role in how cells function. Here I describe a Fock space formalism for mathematically modeling these complexes. Specifically, this formalism allows ensembles of complexes to be defined in terms of elementary molecular ``building blocks'' and ``assembly rules.'' Such definitions avoid the massive redundancy inherent in standard representations, in which all possible complexes are manually enumerated. Methods for systematically computing ensembles of complexes from a list of components and interaction rules are described. I also show how this formalism readily accommodates coarse-graining. Finally, I introduce diagrammatic techniques that greatly facilitate the application of this formalism to both equilibrium and non-equilibrium biochemical systems.

  17. Molecular simulation and modeling of complex I.

    PubMed

    Hummer, Gerhard; Wikström, Mårten

    2016-07-01

    Molecular modeling and molecular dynamics simulations play an important role in the functional characterization of complex I. With its large size and complicated function, linking quinone reduction to proton pumping across a membrane, complex I poses unique modeling challenges. Nonetheless, simulations have already helped in the identification of possible proton transfer pathways. Simulations have also shed light on the coupling between electron and proton transfer, thus pointing the way in the search for the mechanistic principles underlying the proton pump. In addition to reviewing what has already been achieved in complex I modeling, we aim here to identify pressing issues and to provide guidance for future research to harness the power of modeling in the functional characterization of complex I. This article is part of a Special Issue entitled Respiratory complex I, edited by Volker Zickermann and Ulrich Brandt. PMID:26780586

  18. Selecting model complexity in learning problems

    SciTech Connect

    Buescher, K.L.; Kumar, P.R.

    1993-10-01

    To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of "simple" candidate models using part of the data and then selects from among these by using the rest of the data.
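
    In modern terms this is close to plain holdout selection; a minimal sketch (polynomial fits standing in for the paper's candidate pool) shows the build-then-select split:

```python
# Build candidate models of increasing order on half the data, select the
# one that predicts the held-out half best.
import numpy as np

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-1.0, 1.0, 60))
y = np.sin(2.0 * x) + rng.normal(0.0, 0.1, x.size)
x_fit, y_fit = x[::2], y[::2]          # data used to build candidates
x_val, y_val = x[1::2], y[1::2]        # data used to select among them

best_deg, best_err = None, np.inf
for deg in range(1, 10):
    coef = np.polyfit(x_fit, y_fit, deg)
    err = ((np.polyval(coef, x_val) - y_val) ** 2).mean()
    if err < best_err:
        best_deg, best_err = deg, err
print(best_deg, best_err)
```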

  19. The Complex Experience of Learning to Do Research

    ERIC Educational Resources Information Center

    Emo, Kenneth; Emo, Wendy; Kimn, Jung-Han; Gent, Stephen

    2015-01-01

    This article examines how student learning is a product of the experiential interaction between person and environment. We draw from the theoretical perspective of complexity to shed light on the emergent, adaptive, and unpredictable nature of students' learning experiences. To understand the relationship between the environment and the student…

  20. School Experiences of an Adolescent with Medical Complexities Involving Incontinence

    ERIC Educational Resources Information Center

    Filce, Hollie Gabler; Bishop, John B.

    2014-01-01

    The educational implications of chronic illnesses which involve incontinence are not well represented in the literature. The experiences of an adolescent with multiple complex illnesses, including incontinence, were explored via an intrinsic case study. Data were gathered from the adolescent, her mother, and teachers through interviews, email…

  1. Facing up to Complexity: Implications for Our Social Experiments.

    PubMed

    Hawkins, Ronnie

    2016-06-01

    Biological systems are highly complex, and for this reason there is a considerable degree of uncertainty as to the consequences of making significant interventions into their workings. Since a number of new technologies are already impinging on living systems, including our bodies, many of us have become participants in large-scale "social experiments". I will discuss biological complexity and its relevance to the technologies that brought us BSE/vCJD and the controversy over GM foods. Then I will consider some of the complexities of our social dynamics, and argue for making a shift from using the precautionary principle to employing the approach of evaluating the introduction of new technologies by conceiving of them as social experiments. PMID:26062747

  2. Reducing Spatial Data Complexity for Classification Models

    SciTech Connect

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-29

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. In response, we propose a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC), we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approach, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

  3. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. In response, we propose a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC), we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approach, yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with soft accumulative class weights allowing for efficient online updates and, as shown in a series of experiments, if coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the
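
    The PLDC algorithm itself is not spelled out in the abstract, but the Parzen-window classifier it feeds is standard; a minimal version (Gaussian kernels, synthetic data) is sketched below:

```python
# Minimal Parzen-window classifier: pick the class whose summed Gaussian
# kernel response at the query point is largest.
import numpy as np

def parzen_predict(X_train, y_train, X_test, h=0.5):
    classes = np.unique(y_train)
    scores = []
    for c in classes:
        Xc = X_train[y_train == c]
        d2 = ((X_test[:, None, :] - Xc[None, :, :]) ** 2).sum(-1)
        scores.append(np.exp(-d2 / (2.0 * h * h)).sum(axis=1))  # ~ p(x|c)p(c)
    return classes[np.argmax(scores, axis=0)]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(parzen_predict(X, y, np.array([[0.0, 0.0], [3.0, 3.0]])))  # [0 1]
```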

  4. Scaffolding in Complex Modelling Situations

    ERIC Educational Resources Information Center

    Stender, Peter; Kaiser, Gabriele

    2015-01-01

    The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…

  5. Role models for complex networks

    NASA Astrophysics Data System (ADS)

    Reichardt, J.; White, D. R.

    2007-11-01

    We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
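
    As a toy version of the fit-measure idea (not the paper's first-principles derivation), one can score a candidate role assignment by the fraction of node pairs whose links agree with the image graph:

```python
# Fraction of (non-)links consistent with an image graph under a given
# role assignment; network, roles and image graph are all illustrative.
import numpy as np

A = np.array([[0, 1, 1, 0],        # adjacency of a 4-node network
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])
roles = np.array([0, 1, 1, 0])     # candidate role assignment
B = np.array([[0, 1],              # image graph: links only across roles
              [1, 0]])

expected = B[roles][:, roles]      # adjacency implied by the block model
mask = ~np.eye(len(A), dtype=bool) # ignore self-pairs
print((A[mask] == expected[mask]).mean())   # 1.0 = perfect fit
```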

  6. Agent-based modeling of complex infrastructures

    SciTech Connect

    North, M. J.

    2001-06-01

    Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.

  7. Emulator-assisted data assimilation in complex models

    NASA Astrophysics Data System (ADS)

    Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas

    2016-08-01

    Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for data assimilation into ocean models is still not well understood. The high complexity of ocean models translates into high uncertainty of the corresponding emulators, which may undermine the quality of assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and to suggest a strategy to alleviate this problem through localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement a data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
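
    The Lorenz-95 (often written Lorenz-96) testbed named above is itself only a few lines; a basic RK4 integration of it, with the standard chaotic forcing F = 8, might look like:

```python
# Lorenz-96: dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, cyclic i.
import numpy as np

def l96(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.01):
    k1 = l96(x)
    k2 = l96(x + 0.5 * dt * k1)
    k3 = l96(x + 0.5 * dt * k2)
    k4 = l96(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 + 0.01 * np.random.default_rng(5).normal(size=40)  # perturbed rest state
for _ in range(1000):
    x = rk4_step(x)
```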

  8. Numerical models of complex diapirs

    NASA Astrophysics Data System (ADS)

    Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.

    1993-12-01

    Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which a diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, moderate erosion rate to "wine glass" diapirs and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat". These spread lateral sheets.

  9. Complex system modelling for veterinary epidemiology.

    PubMed

    Lanzas, Cristina; Chen, Shi

    2015-02-01

    The use of mathematical models has a long tradition in infectious disease epidemiology. The nonlinear dynamics and complexity of pathogen transmission pose challenges in understanding its key determinants, in identifying critical points, and designing effective mitigation strategies. Mathematical modelling provides tools to explicitly represent the variability, interconnectedness, and complexity of systems, and has contributed to numerous insights and theoretical advances in disease transmission, as well as to changes in public policy, health practice, and management. In recent years, our modelling toolbox has considerably expanded due to the advancements in computing power and the need to model novel data generated by technologies such as proximity loggers and global positioning systems. In this review, we discuss the principles, advantages, and challenges associated with the most recent modelling approaches used in systems science, the interdisciplinary study of complex systems, including agent-based, network and compartmental modelling. Agent-based modelling is a powerful simulation technique that considers the individual behaviours of system components by defining a set of rules that govern how individuals ("agents") within given populations interact with one another and the environment. Agent-based models have become a recent popular choice in epidemiology to model hierarchical systems and address complex spatio-temporal dynamics because of their ability to integrate multiple scales and datasets. PMID:25449734
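
    Of the families reviewed, the compartmental one is the quickest to sketch; a basic SIR system (illustrative rates, not taken from the review) reads:

```python
# SIR compartmental model: susceptible -> infected -> recovered.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.3, 0.1                 # transmission and recovery rates

def sir(t, y):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

sol = solve_ivp(sir, (0.0, 160.0), [0.99, 0.01, 0.0], max_step=1.0)
print(sol.t[np.argmax(sol.y[1])])      # time of peak prevalence
```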

  10. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1993-01-01

    Modeling plays a vital role in providing physical insights into behavior revealed by experiment. The program at the University of Illinois is designed to improve our understanding of basic combustion phenomena through the analytical and numerical modeling of a variety of configurations undergoing experimental study in NASA's microgravity combustion program. Significant progress has been made in two areas: (1) flame-balls, studied experimentally by Ronney and his co-workers; (2) particle-cloud flames, studied by Berlad and his collaborators. Additional work is mentioned below. NASA funding for the U. of Illinois program commenced in February 1991, but work was initiated prior to that date and the program can only be understood with this foundation exposed. Accordingly, we start with a brief description of some key results obtained in the pre-2/91 work.

  11. Discrete Element Modeling of Complex Granular Flows

    NASA Astrophysics Data System (ADS)

    Movshovitz, N.; Asphaug, E. I.

    2010-12-01

    Granular materials occur almost everywhere in nature, and are actively studied in many fields of research, from the food industry to planetary science. One approach to the study of granular media, the continuum approach, attempts to find a constitutive law that determines the material's flow, or strain, under applied stress. The main difficulty with this approach is that granular systems exhibit different behavior under different conditions, behaving at times as an elastic solid (e.g. a pile of sand), at times as a viscous fluid (e.g. when poured), or even as a gas (e.g. when shaken). Even if all these physics are accounted for, numerical implementation is made difficult by the wide and often discontinuous ranges in continuum density and sound speed. A different approach is Discrete Element Modeling (DEM). Here the goal is to directly model every grain in the system as a rigid body subject to various body and surface forces. The advantage of this method is that it treats all of the above regimes in the same way, and can easily deal with a system moving back and forth between regimes. But as a granular system typically contains a multitude of individual grains, the direct integration of the system can be very computationally expensive. For this reason most DEM codes are limited to spherical grains of uniform size. However, spherical grains often cannot replicate the behavior of real-world granular systems. A simple pile of spherical grains, for example, relies on static friction alone to keep its shape, while in reality a pile of irregular grains can maintain a much steeper angle by interlocking force chains. In the present study we employ a commercial DEM, nVidia's PhysX Engine, originally designed for the game and animation industry, to simulate complex granular flows with irregular, non-spherical grains. This engine runs as a multithreaded process and can be GPU accelerated. We demonstrate the code's ability to physically model granular materials in the three regimes
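
    PhysX internals are not shown in the abstract; the flavour of a DEM contact update can instead be sketched with the textbook linear spring-dashpot normal force between two spheres (all parameters invented):

```python
# Two spheres on a line with a linear spring-dashpot contact force,
# integrated by explicit Euler. Values are illustrative.
import numpy as np

k, c, radius, m, dt = 1e4, 5.0, 0.05, 0.1, 1e-5
x = np.array([0.0, 0.12])          # centre positions
v = np.array([1.0, -1.0])          # approaching velocities

for _ in range(2000):
    gap = (x[1] - x[0]) - 2.0 * radius
    if gap < 0.0:                  # overlap -> repulsive contact force
        f = -k * gap - c * (v[1] - v[0])   # spring + damping on particle 1
        v[0] -= f / m * dt
        v[1] += f / m * dt
    x += v * dt
```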

  12. Building phenomenological models of complex biological processes

    NASA Astrophysics Data System (ADS)

    Daniels, Bryan; Nemenman, Ilya

    2009-11-01

    A central goal of any modeling effort is to make predictions regarding experimental conditions that have not yet been observed. Overly simple models will not be able to fit the original data well, but overly complex models are likely to overfit the data and thus produce bad predictions. Modern quantitative biology modeling efforts often err on the complexity side of this balance, using myriads of microscopic biochemical reaction processes with a priori unknown kinetic parameters to model relatively simple biological phenomena. In this work, we show how Bayesian model selection (which is mathematically similar to a low-temperature expansion in statistical physics) can be used to build coarse-grained, phenomenological models of complex dynamical biological processes, which have better predictive power than microscopically correct, but poorly constrained, mechanistic molecular models. We illustrate this on the example of a multiply-modifiable protein molecule, which is a simplified description of multiple biological systems, such as immune receptors and the RNA polymerase complex. Our approach is similar in spirit to the phenomenological Landau expansion for the free energy in the theory of critical phenomena.

  13. SUMMARY OF COMPLEX TERRAIN MODEL EVALUATION

    EPA Science Inventory

    The Environmental Protection Agency conducted a scientific review of a set of eight complex terrain dispersion models. TRC Environmental Consultants, Inc. calculated and tabulated a uniform set of performance statistics for the models using the Cinder Cone Butte and Westvaco Luke...

  14. Explicit stress integration of complex soil models

    NASA Astrophysics Data System (ADS)

    Zhao, Jidong; Sheng, Daichao; Rouainia, M.; Sloan, Scott W.

    2005-10-01

    In this paper, two complex critical-state models are implemented in a displacement finite element code. The two models are used for structured clays and sands, and are characterized by multiple yield surfaces, plastic yielding within the yield surface, and complex kinematic and isotropic hardening laws. The consistent tangent operators - which lead to quadratic convergence when used in a fully implicit algorithm - are difficult to derive or may not even exist. The stress integration scheme used in this paper is based on the explicit Euler method with automatic substepping and error control. This scheme employs the classical elastoplastic stiffness matrix and requires only the first derivatives of the yield function and plastic potential. This explicit scheme is used to integrate the two complex critical-state models - the sub/super-loading surfaces model (SSLSM) and the kinematic hardening structure model (KHSM). Various boundary-value problems are then analysed. The results for the two models are compared with each other, as well as with those from standard Cam-clay models. The accuracy and efficiency of the scheme used for the complex models are also investigated.
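
    The substepping idea generalises beyond stress integration; a scalar sketch (Euler with step halving driven by a local error estimate, loosely in the spirit of such schemes) is:

```python
# Explicit Euler with recursive step halving: compare one full step with
# two half steps and subdivide when the difference exceeds a tolerance.
def euler_substep(rhs, y, h, tol=1e-6):
    y_full = y + h * rhs(y)                      # one full step
    y_mid = y + 0.5 * h * rhs(y)
    y_half = y_mid + 0.5 * h * rhs(y_mid)        # two half steps
    if abs(y_half - y_full) > tol and h > 1e-12:
        y = euler_substep(rhs, y, 0.5 * h, tol)  # redo first half, refined
        return euler_substep(rhs, y, 0.5 * h, tol)
    return y_half

# Integrate dy/dt = -2y over a step of 0.1; exact answer is exp(-0.2).
print(euler_substep(lambda y: -2.0 * y, 1.0, 0.1))
```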

  15. From Complex to Simple: Interdisciplinary Stochastic Models

    ERIC Educational Resources Information Center

    Mazilu, D. A.; Zamora, G.; Mazilu, I.

    2012-01-01

    We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
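
    The random-walk machinery behind the first model is standard; a quick numerical check (synthetic walkers, not the paper's analytics) confirms that the mean-squared displacement grows linearly in the number of steps:

```python
# Unbiased 1D random walks: MSD after n steps is ~ n.
import numpy as np

rng = np.random.default_rng(6)
steps = rng.choice([-1, 1], size=(2000, 1000))    # 2000 walkers, 1000 steps
paths = steps.cumsum(axis=1)
msd = (paths.astype(float) ** 2).mean(axis=0)
print(msd[99], msd[999])                           # close to 100 and 1000
```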

  16. Modeling the chemistry of complex petroleum mixtures.

    PubMed Central

    Quann, R J

    1998-01-01

    Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule, or a set of closely related isomers, as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models. PMID:9860903
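
    The bookkeeping behind structure-oriented lumping can be hinted at with a toy vector-of-groups representation; the group names and the single rule below are invented for illustration, not taken from the method:

```python
# Each species is a tuple of structural-group counts; a reaction rule is a
# transformation on those counts (here: saturating one aromatic ring).
from dataclasses import dataclass

GROUPS = ("aromatic_ring", "naphthenic_ring", "CH2_chain", "S")

@dataclass
class Species:
    name: str
    groups: tuple   # counts aligned with GROUPS

def ring_saturation(sp):
    g = list(sp.groups)
    if g[0] > 0:                  # rule applies only if an aromatic ring exists
        g[0] -= 1
        g[1] += 1
        return Species(sp.name + "_saturated", tuple(g))
    return None

print(ring_saturation(Species("alkyl_benzene", (1, 0, 4, 0))))
```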

  17. Governance of complex systems: results of a sociological simulation experiment.

    PubMed

    Adelt, Fabian; Weyer, Johannes; Fink, Robin D

    2014-01-01

    Social sciences have discussed the governance of complex systems for a long time. The following paper tackles the issue by means of experimental sociology, in order to investigate the performance of different modes of governance empirically. The simulation framework developed is based on Esser's model of sociological explanation as well as on Kroneberg's model of frame selection. The performance of governance has been measured by means of three macro and two micro indicators. Surprisingly, central control mostly performs better than decentralised coordination. However, results not only depend on the mode of governance; there is also a relation between performance and the composition of actor populations, which has not yet been investigated sufficiently. Practitioner Summary: Practitioners can gain insights into the functioning of complex systems and learn how to better manage them. Additionally, they are provided with indicators to measure the performance of complex systems. PMID:24456093

  18. Updating the debate on model complexity

    USGS Publications Warehouse

    Simmons, Craig T.; Hunt, Randall J.

    2012-01-01

    As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”

  19. Balancing model complexity and measurements in hydrology

    NASA Astrophysics Data System (ADS)

    Van De Giesen, N.; Schoups, G.; Weijs, S. V.

    2012-12-01

    The Data Processing Inequality implies that hydrological modeling can only reduce, and never increase, the amount of information available in the original data used to formulate and calibrate hydrological models: I(X;Z(Y)) ≤ I(X;Y). Still, hydrologists around the world seem quite content building models for "their" watersheds to move our discipline forward. Hydrological models tend to have a hybrid character with respect to underlying physics. Most models make use of some well established physical principles, such as mass and energy balances. One could argue that such principles are based on many observations, and therefore add data. These physical principles, however, are applied to hydrological models that often contain concepts that have no direct counterpart in the observable physical universe, such as "buckets" or "reservoirs" that fill up and empty out over time. These not-so-physical concepts are more like the Artificial Neural Networks and Support Vector Machines of the Artificial Intelligence (AI) community. Within AI, one quickly came to the realization that by increasing model complexity, one could basically fit any dataset but that complexity should be controlled in order to be able to predict unseen events. The more data are available to train or calibrate the model, the more complex it can be. Many complexity control approaches exist in AI, with Solomonoff inductive inference being one of the first formal approaches, the Akaike Information Criterion the most popular, and Statistical Learning Theory arguably being the most comprehensive practical approach. In hydrology, complexity control has hardly been used so far. There are a number of reasons for that lack of interest, the more valid ones of which will be presented during the presentation. For starters, there are no readily available complexity measures for our models. Second, some unrealistic simplifications of the underlying complex physics tend to have a smoothing effect on possible model
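
    Of the complexity-control approaches named, the Akaike Information Criterion is the easiest to demonstrate; a toy polynomial-order selection (synthetic data) shows the penalty at work:

```python
# AIC = n*ln(RSS/n) + 2k for least-squares fits; lower is better.
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(0.0, 0.1, x.size)

def aic(deg):
    resid = np.polyval(np.polyfit(x, y, deg), x) - y
    return x.size * np.log((resid ** 2).mean()) + 2 * (deg + 1)

print(min(range(1, 10), key=aic))   # typically 2: the true quadratic
```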

  20. Multifaceted Modelling of Complex Business Enterprises.

    PubMed

    Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  1. Multifaceted Modelling of Complex Business Enterprises

    PubMed Central

    2015-01-01

    We formalise and present a new generic multifaceted complex system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach to converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, and management planning data, expert knowledge and incomplete data. The structural complexities of the complex system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control. PMID:26247591

  2. Combustion, Complex Fluids, and Fluid Physics Experiments on the ISS

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Urban, David

    2012-01-01

    multiphase flows, capillary phenomena, and heat pipes. Finally, in complex fluids, experiments in rheology and soft condensed materials will be presented.

  3. Combustion, Complex Fluids, and Fluid Physics Experiments on the ISS

    NASA Technical Reports Server (NTRS)

    Motil, Brian; Urban, David

    2012-01-01

    From the very early days of human spaceflight, NASA has been conducting experiments in space to understand the effect of weightlessness on physical and chemically reacting systems. NASA Glenn Research Center (GRC) in Cleveland, Ohio has been at the forefront of this research, looking at both fundamental studies in microgravity and experiments targeted at reducing the risks to long-duration human missions to the moon, Mars, and beyond. In the current International Space Station (ISS) era, we now have an orbiting laboratory that provides the highly desired condition of long-duration microgravity. This allows continuous and interactive research similar to Earth-based laboratories. Because of these capabilities, the ISS is an indispensable laboratory for low-gravity research. NASA GRC has been actively involved in developing and operating facilities and experiments on the ISS since the beginning of a permanent human presence on November 2, 2000. As the lead Center for combustion, complex fluids, and fluid physics, GRC has led the successful implementation of the Combustion Integrated Rack (CIR) and the Fluids Integrated Rack (FIR), as well as the continued use of other facilities on the ISS. These facilities have supported combustion experiments in fundamental droplet combustion; fire detection; fire extinguishment; soot phenomena; flame liftoff and stability; and material flammability. The fluids experiments have studied capillary flow; magneto-rheological fluids; colloidal systems; extensional rheology; and pool and nucleate boiling phenomena. In this paper, we provide an overview of the experiments conducted on the ISS over the past 12 years.

  4. Slip complexity in earthquake fault models.

    PubMed Central

    Rice, J R; Ben-Zion, Y

    1996-01-01

    We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size. PMID:11607669
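
    For readers wanting to connect such simulations to the Gutenberg-Richter statistics mentioned above, the b-value of a catalogue can be estimated with Aki's maximum-likelihood formula; the catalogue below is synthetic, not simulated fault data:

```python
# Aki (1965) estimator: b = log10(e) / (mean(M) - Mc) for magnitudes M >= Mc.
import numpy as np

rng = np.random.default_rng(8)
Mc, b_true = 2.0, 1.0                  # completeness magnitude, true b-value
# Gutenberg-Richter magnitudes above Mc are exponential with rate b*ln(10):
mags = Mc + rng.exponential(1.0 / (b_true * np.log(10.0)), size=5000)
print(np.log10(np.e) / (mags.mean() - Mc))   # ~1.0
```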

  5. Minimum-complexity helicopter simulation math model

    NASA Technical Reports Server (NTRS)

    Heffley, Robert K.; Mnich, Marc A.

    1988-01-01

    An example of a minimal-complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with the specification of features which are to be modeled, followed by a build-up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.

  6. A mechanistic model of the cysteine synthase complex.

    PubMed

    Feldman-Salit, Anna; Wirtz, Markus; Hell, Ruediger; Wade, Rebecca C

    2009-02-13

    Plants and bacteria assimilate and incorporate inorganic sulfur into organic compounds such as the amino acid cysteine. Cysteine biosynthesis involves a bienzyme complex, the cysteine synthase (CS) complex. The CS complex is composed of the enzymes serine acetyl transferase (SAT) and O-acetyl-serine-(thiol)-lyase (OAS-TL). Although it is experimentally known that formation of the CS complex influences cysteine production, the exact biological function of the CS complex, the mechanism of reciprocal regulation of the constituent enzymes and the structure of the complex are still poorly understood. Here, we used docking techniques to construct a model of the CS complex from mitochondrial Arabidopsis thaliana. The three-dimensional structures of the enzymes were modeled by comparative techniques. The C-termini of SAT, missing in the template structures but crucial for CS formation, were modeled de novo. Diffusional encounter complexes of SAT and OAS-TL were generated by rigid-body Brownian dynamics simulation. By incorporating experimental constraints during Brownian dynamics simulation, we identified complexes consistent with experiments. Selected encounter complexes were refined by molecular dynamics simulation to generate structures of bound complexes. We found that although a stoichiometric ratio of six OAS-TL dimers to one SAT hexamer in the CS complex is geometrically possible, binding energy calculations suggest that, consistent with experiments, a ratio of only two OAS-TL dimers to one SAT hexamer is more likely. Computational mutagenesis of residues in OAS-TL that are experimentally significant for CS formation hindered the association of the enzymes due to a less-favorable electrostatic binding free energy. Since the enzymes from A. thaliana were expressed in Escherichia coli, the cross-species binding of SAT and OAS-TL from E. coli and A. thaliana was explored. The results showed that reduced cysteine production might be due to a cross-binding of A. thaliana

  7. Coherent operation of detector systems and their readout electronics in a complex experiment control environment

    NASA Astrophysics Data System (ADS)

    Koestner, Stefan

    2009-09-01

    With the increasing size and degree of complexity of today's experiments in high energy physics, the required amount of work and complexity to integrate a complete subdetector into an experiment control system is often underestimated. We report here on the layered software structure and protocols used by the LHCb experiment to control its detectors and readout boards. The experiment control system of LHCb is based on the commercial SCADA system PVSS II. Readout boards which are outside the radiation area are accessed via embedded credit-card-sized PCs which are connected to a large local area network. The SPECS protocol is used for control of the front-end electronics. Finite state machines are introduced to facilitate the control of a large number of electronic devices and to model the whole experiment at the level of an expert system.
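
    The finite-state-machine layer is the easiest part to illustrate. The toy sketch below is not the PVSS II/SMI++ framework the experiment actually uses; the state names, commands, and ReadoutBoardFSM class are hypothetical stand-ins for the idea of commanding many devices through a common state model:

```python
# Toy finite-state machine for a readout board; states and commands are
# hypothetical, and the real system is built on PVSS II / SMI++.

ALLOWED = {
    ("NOT_READY", "configure"): "READY",
    ("READY", "start"): "RUNNING",
    ("RUNNING", "stop"): "READY",
    ("READY", "reset"): "NOT_READY",
    ("RUNNING", "error"): "ERROR",
    ("ERROR", "recover"): "NOT_READY",
}

class ReadoutBoardFSM:
    def __init__(self, name):
        self.name, self.state = name, "NOT_READY"

    def command(self, cmd):
        new = ALLOWED.get((self.state, cmd))
        if new is None:
            raise ValueError(f"{self.name}: '{cmd}' not allowed in {self.state}")
        self.state = new

# A control node broadcasts one command to many devices and summarizes:
boards = [ReadoutBoardFSM(f"board{i}") for i in range(4)]
for b in boards:
    b.command("configure")
print({b.name: b.state for b in boards})   # all READY
```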

  8. Constructing minimal models for complex system dynamics

    NASA Astrophysics Data System (ADS)

    Barzel, Baruch; Liu, Yang-Yu; Barabási, Albert-László

    2015-05-01

    One of the strengths of statistical physics is the ability to reduce macroscopic observations into microscopic models, offering a mechanistic description of a system's dynamics. This paradigm, rooted in Boltzmann's gas theory, has found applications from magnetic phenomena to subcellular processes and epidemic spreading. Yet each of these advances was the result of decades of meticulous model building and validation, which are impossible to replicate in most complex biological, social or technological systems that lack accurate microscopic models. Here we develop a method to infer the microscopic dynamics of a complex system from observations of its response to external perturbations, allowing us to construct the most general class of nonlinear pairwise dynamics that are guaranteed to recover the observed behaviour. The result, which we test against both numerical and empirical data, is an effective dynamic model that can predict the system's behaviour and provide crucial insights into its inner workings.

  9. Reassessing Geophysical Models of the Bushveld Complex in 3D

    NASA Astrophysics Data System (ADS)

    Cole, J.; Webb, S. J.; Finn, C.

    2012-12-01

    Conceptual geophysical models of the Bushveld Igneous Complex show three possible geometries for its mafic component: 1) separate intrusions with vertical feeders for the eastern and western lobes (Cousins, 1959); 2) separate dipping sheets for the two lobes (Du Plessis and Kleywegt, 1987); 3) a single saucer-shaped unit connected at depth in the central part between the two lobes (Cawthorn et al, 1998). Model three incorporates isostatic adjustment of the crust in response to the weight of the dense mafic material. The model was corroborated by results of a broadband seismic array over southern Africa, known as the Southern African Seismic Experiment (SASE) (Nguuri, et al, 2001; Webb et al, 2004). This new information about the crustal thickness only became available in the last decade and could not be considered in the earlier models. Nevertheless, there is still on-going debate as to which model is correct. All of the models published up to now have been done in 2 or 2.5 dimensions. This is not well suited to modelling the complex geometry of the Bushveld intrusion. 3D modelling takes into account effects of variations in geometry and geophysical properties of lithologies in a full three-dimensional sense and therefore affects the shape and amplitude of calculated fields. The main question is how the new knowledge of the increased crustal thickness, as well as the complexity of the Bushveld Complex, will impact on the gravity fields calculated for the existing conceptual models, when modelling in 3D. The three published geophysical models were remodelled using full 3D potential field modelling software, and including crustal thickness obtained from the SASE. The aim was not to construct very detailed models, but to test the existing conceptual models in an equally conceptual way. Firstly, a specific 2D model was recreated in 3D, without crustal thickening, to establish the difference between 2D and 3D results. Then the thicker crust was added. Including the less

  10. Modeling acuity for optotypes varying in complexity.

    PubMed

    Watson, Andrew B; Ahumada, Albert J

    2012-01-01

    Watson and Ahumada (2008) described a template model of visual acuity based on an ideal observer limited by optical filtering, neural filtering, and noise. They computed predictions for selected optotypes and optical aberrations. Here we compare this model's predictions to acuity data for six human observers, each viewing seven different optotype sets, consisting of one set of Sloan letters and six sets of Chinese characters, differing in complexity (Zhang, Zhang, Xue, Liu, & Yu, 2007). Since optical aberrations for the six observers were unknown, we constructed 200 model observers using aberrations collected from 200 normal human eyes (Thibos, Hong, Bradley, & Cheng, 2002). For each condition (observer, optotype set, model observer) we estimated the model noise required to match the data. Expressed as efficiency, performance for Chinese characters was 1.4 to 2.7 times lower than for Sloan letters. Efficiency was weakly and inversely related to the perimetric complexity of the optotype set. We also compared confusion matrices for human and model observers. Correlations for off-diagonal elements ranged from 0.5 to 0.8 for different sets, and the average correlation for the template model was superior to that of a geometrical moment model with a comparable number of parameters (Liu, Klein, Xue, Zhang, & Yu, 2009). The template model performed well overall. Estimated psychometric function slopes matched the data, and noise estimates agreed roughly with those obtained independently from contrast sensitivity to Gabor targets. For optotypes of low complexity, the model accurately predicted relative performance. This suggests the model may be used to compare acuities measured with different sets of simple optotypes. PMID:23024356
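
    The template-matching core of such an observer is compact. The sketch below classifies a noisy stimulus by cross-correlation against stored templates; in the actual model the optotypes are first passed through optical and neural filters, whereas here random binary 8x8 "letters" and the noise level are stand-ins:

```python
import numpy as np

# Toy template-matching observer: classify a noisy stimulus as the
# template with maximum (mean-subtracted) cross-correlation.

rng = np.random.default_rng(5)
templates = rng.integers(0, 2, (10, 8, 8)).astype(float)  # 10 "optotypes"

def classify(stimulus):
    scores = [(t - t.mean()).ravel() @ (stimulus - stimulus.mean()).ravel()
              for t in templates]
    return int(np.argmax(scores))

noise_sd, trials, correct = 1.5, 2000, 0
for _ in range(trials):
    true = rng.integers(10)
    stim = templates[true] + rng.normal(0.0, noise_sd, (8, 8))
    correct += classify(stim) == true
print(f"proportion correct at noise sd {noise_sd}: {correct / trials:.2f}")
```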

  11. The Kuramoto model in complex networks

    NASA Astrophysics Data System (ADS)

    Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen

    2016-01-01

    Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to reviewing the main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
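
    The model itself is short enough to state in code. A minimal Euler integration of the Kuramoto dynamics on an arbitrary network; the random graph, coupling strength, and frequency distribution below are illustrative choices:

```python
import numpy as np

# Minimal Kuramoto integration on a network (explicit Euler scheme).

rng = np.random.default_rng(0)
N = 100
A = (rng.random((N, N)) < 0.1).astype(float)   # Erdos-Renyi-like graph
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops

omega = rng.normal(0.0, 1.0, N)     # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)
K, dt = 0.5, 0.01

for _ in range(5000):
    # dtheta_i/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)

r = abs(np.exp(1j * theta).mean())  # global order parameter in [0, 1]
print(f"order parameter r = {r:.2f}")
```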

  12. Complexity of precipitation patterns: Comparison of simulation with experiment.

    PubMed

    Polezhaev, A. A.; Muller, S. C.

    1994-12-01

    Numerical simulations show that a simple model for the formation of Liesegang precipitation patterns, which takes into account the dependence of nucleation and particle growth kinetics on supersaturation, can explain not only simple patterns like parallel bands in a test tube or concentric rings in a petri dish, but also more complex structural features, such as dislocations, helices, "Saturn rings," or patterns formed in the case of equal initial concentrations of the source substances. The limits of application of the model are discussed. (c) 1994 American Institute of Physics. PMID:12780140

  13. How useful are complex flood damage models?

    NASA Astrophysics Data System (ADS)

    Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno

    2014-04-01

    We investigate the usefulness of complex flood damage models for predicting relative damage to residential buildings in a spatial and temporal transfer context. We apply eight different flood damage models to predict relative building damage for five historic flood events in two different regions of Germany. Model complexity is measured in terms of the number of explanatory variables, which varies from 1 variable up to 10 variables singled out from 28 candidate variables. Model validation is based on empirical damage data, with observation uncertainty taken into consideration. The comparison of model predictive performance shows that additional explanatory variables besides the water depth improve the predictive capability in a spatial and temporal transfer context, i.e., when the models are transferred to different regions and different flood events. Concerning the trade-off between predictive capability and reliability, the model structure seems more important than the number of explanatory variables. Among the models considered, the reliability of Bayesian network-based predictions in space-time transfer is larger than for the remaining models, and the uncertainties associated with damage predictions are reflected more completely.

  14. Experiments beyond the standard model

    SciTech Connect

    Perl, M.L.

    1984-09-01

    This paper is based upon lectures in which I have described and explored the ways in which experimenters can try to find answers, or at least clues toward answers, to some of the fundamental questions of elementary particle physics. All of these experimental techniques and directions have been discussed fully in other papers, for example: searches for heavy charged leptons, tests of quantum chromodynamics, searches for Higgs particles, searches for particles predicted by supersymmetric theories, searches for particles predicted by technicolor theories, searches for proton decay, searches for neutrino oscillations, monopole searches, studies of low transfer momentum hadron physics at very high energies, and elementary particle studies using cosmic rays. Each of these subjects requires several lectures by itself to do justice to the large amount of experimental work and theoretical thought which has been devoted to these subjects. My approach in these tutorial lectures is to describe general ways to experiment beyond the standard model. I will use some of the topics listed to illustrate these general ways. Also, in these lectures I present some dreams and challenges about new techniques in experimental particle physics and accelerator technology, I call these Experimental Needs. 92 references.

  15. Modeling of protein binary complexes using structural mass spectrometry data

    PubMed Central

    Kamal, J.K. Amisha; Chance, Mark R.

    2008-01-01

    In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex using this approach which does not have a crystal or NMR structure. PMID:18042684

  16. Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects

    NASA Technical Reports Server (NTRS)

    Deshpande, Manohar; Reddy, C. J.

    2011-01-01

    This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.

  17. Synthetic seismograms for a complex crustal model

    NASA Astrophysics Data System (ADS)

    Sandmeier, K.-J.; Wenzel, F.

    1986-01-01

    The algorithm of the original Reflectivity Method has been vectorized and implemented on a CDC CYBER 205 computer. Calculation times are shortened by a factor of 20 to 30 compared with a general-purpose computer with a capacity of several million floating point operations per second (MFLOP). The rapid calculation of synthetic seismograms for complex models, high frequency sources and all offset ranges is a prerequisite for modeling not only particular phases but the whole observed wavefield. As an example we model refraction data of the Black Forest, Southwest Germany, and are able to derive rather tight constraints on the physical properties of the lower crust.

  18. Human driven transitions in complex model ecosystems

    NASA Astrophysics Data System (ADS)

    Harfoot, Mike; Newbold, Tim; Tittinsor, Derek; Purves, Drew

    2015-04-01

    Human activities have been observed to be impacting ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying the functioning of ecosystems, represent only a small proportion of organisms and do not adequately capture complex non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate the boundaries of complex ecosystems under human perturbation. We explore several drivers including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model, PLoS Biol. 12, e1001841 (2014).

  19. Experiment Design for Complex VTOL Aircraft with Distributed Propulsion and Tilt Wing

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C.; Landman, Drew

    2015-01-01

    Selected experimental results from a wind tunnel study of a subscale VTOL concept with distributed propulsion and tilt lifting surfaces are presented. The vehicle complexity and automated test facility were ideal for use with a randomized designed experiment. Design of Experiments and Response Surface Methods were invoked to produce run-efficient, statistically rigorous regression models with minimized prediction error. Static tests were conducted at the NASA Langley 12-Foot Low-Speed Tunnel to model all six aerodynamic coefficients over a large flight envelope. This work supports investigations at NASA Langley in developing advanced configurations, simulations, and advanced control systems.
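
    The end product of such a designed experiment is typically a polynomial response-surface fit. A sketch of a full quadratic fit in two factors by least squares; the factors, ranges, and synthetic response below are hypothetical stand-ins for the wind tunnel data:

```python
import numpy as np

# Quadratic response-surface fit of the kind produced by DOE/RSM:
# a lift-like coefficient as a function of two factors. Data are synthetic.

rng = np.random.default_rng(1)
alpha = rng.uniform(-10, 10, 40)   # factor 1, e.g. angle of attack (deg)
tilt = rng.uniform(0, 90, 40)      # factor 2, e.g. wing tilt angle (deg)
y = 0.08 * alpha + 0.01 * tilt - 0.0001 * tilt**2 + rng.normal(0, 0.02, 40)

# Full quadratic model: y = b0 + b1*a + b2*t + b11*a^2 + b22*t^2 + b12*a*t
X = np.column_stack([np.ones_like(alpha), alpha, tilt,
                     alpha**2, tilt**2, alpha * tilt])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 4))   # regression coefficients minimizing SSE
```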

  20. Intrinsic Uncertainties in Modeling Complex Systems.

    SciTech Connect

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  1. Simulating Complex Modulated Phases Through Spin Models

    NASA Astrophysics Data System (ADS)

    Selinger, Jonathan V.; Lopatina, Lena M.; Geng, Jun; Selinger, Robin L. B.

    2009-03-01

    We extend the computational approach for studying striped phases on curved surfaces, presented in the previous talk, to two new problems involving complex modulated phases. First, we simulate a smectic liquid crystal on an arbitrary mesh by mapping the director field onto a vector spin and the density wave onto an Ising spin. We can thereby determine how the smectic phase responds to any geometrical constraints, including hybrid boundary conditions, patterned substrates, and disordered substrates. This method may provide a useful tool for designing ferroelectric liquid crystal cells. Second, we explore a model of vector spins on a flat two-dimensional (2D) lattice with long-range antiferromagnetic interactions. This model generates modulated phases with surprisingly complex structures, including 1D stripes and 2D periodic cells, which are independent of the underlying lattice. We speculate on the physical significance of these structures.

  2. Industrial Source Complex (ISC) dispersion model. Software

    SciTech Connect

    Schewe, G.; Sieurin, E.

    1980-01-01

    The model updates various EPA dispersion model algorithms and combines them in two computer programs that can be used to assess the air quality impact of emissions from the wide variety of source types associated with an industrial source complex. The ISC Model short-term program ISCST, an updated version of the EPA Single Source (CRSTER) Model, uses sequential hourly meteorological data to calculate values of average concentration or total dry deposition for time periods of 1, 2, 3, 4, 6, 8, 12 and 24 hours. Additionally, ISCST may be used to calculate concentration averages over periods of up to 'N' days, where 'N' may be as large as 366. The ISC Model long-term computer program ISCLT, a sector-averaged model that updates and combines basic features of the EPA Air Quality Display Model (AQDM) and the EPA Climatological Dispersion Model (CDM), uses STAR Summaries to calculate seasonal and/or annual average concentration or total deposition values. Both the ISCST and ISCLT programs make the same basic dispersion-model assumptions. Additionally, both the ISCST and ISCLT programs use either a polar or a Cartesian receptor grid... Software Description: The programs are written in the FORTRAN IV programming language for implementation on a UNIVAC 1110 computer and also on medium-to-large IBM or CDC systems. 65,000k words of core storage are required to operate the model.
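
    The dispersion kernel underlying ISC-type models is the steady-state Gaussian plume with ground reflection. A sketch of that kernel; the power-law dispersion coefficients below are assumed for illustration and are not the ISC rural/urban curves:

```python
import numpy as np

# Steady-state Gaussian plume, the kernel an ISC-type model evaluates for
# each source/receptor/hour. Dispersion coefficients sigma_y and sigma_z
# use simple assumed power laws, not the actual ISC parameterizations.

def plume_concentration(Q, u, x, y, z, H):
    """Concentration (g/m^3) at receptor (x, y, z) downwind of a stack of
    effective height H (m) releasing Q (g/s) into wind speed u (m/s)."""
    sigma_y = 0.08 * x**0.9    # assumed power-law fit (m)
    sigma_z = 0.06 * x**0.85   # assumed power-law fit (m)
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    # reflection at the ground: add an image source at height -H
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + H)**2 / (2 * sigma_z**2)))
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

print(plume_concentration(Q=100.0, u=5.0, x=1000.0, y=0.0, z=1.5, H=50.0))
```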

  3. Noncommutative complex Grosse-Wulkenhaar model

    SciTech Connect

    Hounkonnou, Mahouton Norbert; Samary, Dine Ousmane

    2008-11-18

    This paper presents an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc 956(2007) 55-60], to the NC complex Grosse-Wulkenhaar model. It provides an extension of a recent work [Physics Letters B 653(2007) 343-345]. The local conservation of energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.

  4. Surface complexation modeling of inositol hexaphosphate sorption onto gibbsite.

    PubMed

    Ruyter-Hooley, Maika; Larsson, Anna-Carin; Johnson, Bruce B; Antzutkin, Oleg N; Angove, Michael J

    2015-02-15

    The sorption of Inositol hexaphosphate (IP6) onto gibbsite was investigated using a combination of adsorption experiments, (31)P solid-state MAS NMR spectroscopy, and surface complexation modeling. Adsorption experiments conducted at four temperatures showed that IP6 sorption decreased with increasing pH. At pH 6, IP6 sorption increased with increasing temperature, while at pH 10 sorption decreased as the temperature was raised. (31)P MAS NMR measurements at pH 3, 6, 9 and 11 produced spectra with broad resonance lines that could be de-convoluted with up to five resonances (+5, 0, -6, -13 and -21ppm). The chemical shifts suggest the sorption process involves a combination of both outer- and inner-sphere complexation and surface precipitation. Relative intensities of the observed resonances indicate that outer-sphere complexation is important in the sorption process at higher pH, while inner-sphere complexation and surface precipitation are dominant at lower pH. Using the adsorption and (31)P MAS NMR data, IP6 sorption to gibbsite was modeled with an extended constant capacitance model (ECCM). The adsorption reactions that best described the sorption of IP6 to gibbsite included two inner-sphere surface complexes and one outer-sphere complex: ≡AlOH + IP₆¹²⁻ + 5H⁺ ↔ ≡Al(IP₆H₄)⁷⁻ + H₂O, ≡3AlOH + IP₆¹²⁻ + 6H⁺ ↔ ≡Al₃(IP₆H₃)⁶⁻ + 3H₂O, ≡2AlOH + IP₆¹²⁻ + 4H⁺ ↔ (≡AlOH₂)₂²⁺(IP₆H₂)¹⁰⁻. The inner-sphere complex involving three surface sites may be considered to be equivalent to a surface precipitate. Thermodynamic parameters were obtained from equilibrium constants derived from surface complexation modeling. Enthalpies for the formation of inner-sphere surface complexes were endothermic, while the enthalpy for the outer-sphere complex was exothermic. The entropies for the proposed sorption reactions were large and positive suggesting that changes in solvation of species play a major role in driving

  5. The noisy voter model on complex networks

    NASA Astrophysics Data System (ADS)

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-04-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing us to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured.
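
    The dynamics are straightforward to simulate directly. A discrete-time sketch of the noisy voter model on an illustrative random graph; the noise parameter and graph density are made-up values:

```python
import numpy as np

# Discrete-time noisy voter model: with probability a, a node picks a
# random state (noise); otherwise it copies a random neighbour.

rng = np.random.default_rng(2)
N, a, steps = 200, 0.01, 100_000
A = rng.random((N, N)) < 0.05
A = np.triu(A, 1); A = A | A.T                  # undirected random graph
neighbors = [np.flatnonzero(A[i]) for i in range(N)]

state = rng.integers(0, 2, N)
for _ in range(steps):
    i = rng.integers(N)
    if rng.random() < a or len(neighbors[i]) == 0:
        state[i] = rng.integers(0, 2)               # noisy random flip
    else:
        state[i] = state[rng.choice(neighbors[i])]  # imitation

print("fraction in state 1:", state.mean())
```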

  6. The noisy voter model on complex networks

    PubMed Central

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-01-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing us to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity—variance of the underlying degree distribution—has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured. PMID:27094773

  7. The noisy voter model on complex networks.

    PubMed

    Carro, Adrián; Toral, Raúl; San Miguel, Maxi

    2016-01-01

    We propose a new analytical method to study stochastic, binary-state models on complex networks. Moving beyond the usual mean-field theories, this alternative approach is based on the introduction of an annealed approximation for uncorrelated networks, allowing us to deal with the network structure as parametric heterogeneity. As an illustration, we study the noisy voter model, a modification of the original voter model including random changes of state. The proposed method is able to unfold the dependence of the model not only on the mean degree (the mean-field prediction) but also on more complex averages over the degree distribution. In particular, we find that the degree heterogeneity (variance of the underlying degree distribution) has a strong influence on the location of the critical point of a noise-induced, finite-size transition occurring in the model, on the local ordering of the system, and on the functional form of its temporal correlations. Finally, we show how this latter point opens the possibility of inferring the degree heterogeneity of the underlying network by observing only the aggregate behavior of the system as a whole, an issue of interest for systems where only macroscopic, population level variables can be measured. PMID:27094773

  8. Complexity of groundwater models in catchment hydrological models

    NASA Astrophysics Data System (ADS)

    Attinger, Sabine; Herold, Christian; Kumar, Rohini; Mai, Juliane; Ross, Katharina; Samaniego, Luis; Zink, Matthias

    2015-04-01

    In catchment hydrological models, groundwater is usually modeled very simply: it is conceptualized as a linear reservoir that receives water from the upper unsaturated zone reservoir and releases water to the river system as baseflow. The baseflow is only a minor component of the total river flow, and groundwater reservoir parameters are therefore difficult to estimate inversely from river flow data alone. In addition, the modelled values of the absolute height of the water filling the groundwater reservoir - in other words, the groundwater levels - are of limited meaning due to the coarse or absent spatial resolution of the groundwater compartment and due to the fact that only river flow data are used for the calibration. The talk focuses on the question: Which complexity in terms of model complexity and model resolution is necessary to characterize groundwater processes and groundwater responses adequately in distributed catchment hydrological models? Starting from a spatially distributed catchment hydrological model with a groundwater compartment that is conceptualized as a linear reservoir, we stepwise increase the groundwater model complexity and its spatial resolution to investigate which resolution, which complexity and which data are needed to reproduce baseflow and groundwater level data adequately.
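
    The baseline conceptualization described above, a single linear reservoir, takes only a few lines; the recharge series, recession constant, and initial storage below are invented for illustration:

```python
import numpy as np

# Single linear groundwater reservoir: dS/dt = R(t) - k*S, baseflow Q = k*S.
# All values are made up for illustration.

dt = 1.0                  # time step (day)
k = 0.02                  # recession constant (1/day), assumed
recharge = np.zeros(365)
recharge[90:120] = 2.0    # mm/day recharge pulse in spring, assumed

S, baseflow = 50.0, []    # initial storage (mm), assumed
for R in recharge:
    Q = k * S             # linear-reservoir baseflow
    S += dt * (R - Q)     # explicit Euler water balance
    baseflow.append(Q)

print(f"peak baseflow {max(baseflow):.2f} mm/day, final storage {S:.1f} mm")
```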

  9. Complex Constructivism: A Theoretical Model of Complexity and Cognition

    ERIC Educational Resources Information Center

    Doolittle, Peter E.

    2014-01-01

    Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…

  10. Predictive modelling of complex agronomic and biological systems.

    PubMed

    Keurentjes, Joost J B; Molenaar, Jaap; Zwaan, Bas J

    2013-09-01

    Biological systems are tremendously complex in their functioning and regulation. Studying the multifaceted behaviour and describing the performance of such complexity has challenged the scientific community for years. The reduction of real-world intricacy into simple descriptive models has therefore convinced many researchers of the usefulness of introducing mathematics into biological sciences. Predictive modelling takes such an approach another step further in that it takes advantage of existing knowledge to project the performance of a system in alternative scenarios. The ever-growing amounts of available data generated by assessing biological systems at increasingly higher detail provide unique opportunities for future modelling and experiment design. Here we aim to provide an overview of the progress made in modelling over time and the currently prevalent approaches for iterative modelling cycles in modern biology. We will further argue for the importance of versatility in modelling approaches, including parameter estimation, model reduction and network reconstruction. Finally, we will discuss the difficulties in overcoming the mathematical interpretation of in vivo complexity and address some of the future challenges lying ahead. PMID:23777295

  11. Magnetic modeling of the Bushveld Igneous Complex

    NASA Astrophysics Data System (ADS)

    Webb, S. J.; Cole, J.; Letts, S. A.; Finn, C.; Torsvik, T. H.; Lee, M. D.

    2009-12-01

    Magnetic modeling of the 2.06 Ga Bushveld Complex presents special challenges due to a variety of magnetic effects. These include strong remanence in the Main Zone and extremely high magnetic susceptibilities in the Upper Zone, which exhibit self-demagnetization. Recent palaeomagnetic results have resolved a long-standing discrepancy between age data, which constrain the emplacement to within 1 million years, and older palaeomagnetic data which suggested ~50 million years for emplacement. The new palaeomagnetic results agree with the age data and present a single consistent pole, as opposed to a long polar wander path, for the Bushveld for all of the Zones and all of the limbs. These results also pass a fold test indicating the Bushveld Complex was emplaced horizontally, lending support to arguments for connectivity. The magnetic signature of the Bushveld Complex provides an ideal mapping tool as the UZ has high susceptibility values and is well layered, showing up as distinct anomalies on new high-resolution magnetic data. However, this signature is similar to the highly magnetic BIFs found in the Transvaal and in the Witwatersrand Supergroups. Through careful mapping using new high-resolution aeromagnetic data, we have been able to map the Bushveld UZ in complicated geological regions and identify a characteristic signature with well-defined layers. The Main Zone, which has a more subdued magnetic signature, does have a strong remanent component and exhibits several magnetic reversals. The magnetic layers of the UZ contain layers of magnetitite with as much as 80-90% pure magnetite with large crystals (1-2 cm). While these layers are not strongly remanent, they have extremely high magnetic susceptibilities, and the self-demagnetization effect must be taken into account when modeling these layers. Because the Bushveld Complex is so large, the geometry of the Earth’s magnetic field relative to the layers of the UZ Bushveld Complex changes orientation, creating
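
    The self-demagnetization correction mentioned above has a simple closed form for uniformly magnetized bodies: the apparent susceptibility is chi/(1 + N*chi) for demagnetization factor N. A sketch with an illustrative susceptibility typical of massive magnetitite:

```python
# Apparent susceptibility of a strongly magnetic body is reduced by
# self-demagnetization: chi_app = chi / (1 + N * chi). The value of chi
# is illustrative, not a measured Bushveld property.

chi = 5.0   # true volume susceptibility (SI), massive-magnetitite scale
for N, geometry in [(0.0, "field parallel to an extensive layer"),
                    (1.0 / 3.0, "equidimensional body (sphere)"),
                    (1.0, "field normal to a thin layer")]:
    chi_app = chi / (1 + N * chi)
    print(f"{geometry:38s} chi_app = {chi_app:.2f}")
```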

  12. General Blending Models for Data From Mixture Experiments

    PubMed Central

    Brown, L.; Donev, A. N.; Bissett, A. C.

    2015-01-01

    We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
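
    The Scheffé part of such blending models fits by ordinary least squares once the constrained design is expanded into linear and cross-product terms. A sketch on synthetic three-component data; the coefficients and noise level are invented:

```python
import numpy as np

# Scheffe quadratic mixture model for a 3-component mixture:
# y = sum_i b_i x_i + sum_{i<j} b_ij x_i x_j, with x1 + x2 + x3 = 1
# (no intercept: the constraint absorbs it). Data are synthetic.

rng = np.random.default_rng(3)
x = rng.dirichlet(np.ones(3), size=30)   # design points on the simplex
y = 2 * x[:, 0] + 1 * x[:, 1] + 3 * x[:, 2] \
    + 4 * x[:, 0] * x[:, 1] + rng.normal(0, 0.05, 30)

X = np.column_stack([x[:, 0], x[:, 1], x[:, 2],
                     x[:, 0] * x[:, 1], x[:, 0] * x[:, 2], x[:, 1] * x[:, 2]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))   # recovers the blending coefficients
```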

  13. Sonoluminescence: Experiments and models (Review)

    NASA Astrophysics Data System (ADS)

    Borisenok, V. A.

    2015-05-01

    Three models of the sonoluminescence source formation are considered: the shock-free compression model, the shock wave model, and the polarization model. Each of them is tested by experimental data on the size of the radiating region and the angular radiation pattern; the shape and duration of the radiation pulse; the influence of the type of liquid, gas composition, surfactants, sound frequency, and temperature of the liquid on the radiation intensity; the characteristics of the shock wave in the liquid; and the radiation spectra. It is shown that the most adequate qualitative explanation of the entire set of experimental data is given by the polarization model. Methods for verifying the model are proposed. Publications devoted to studying the possibility of a thermonuclear fusion reaction in a cavitation system are reviewed.

  14. Experiments on a Model Eye

    ERIC Educational Resources Information Center

    Arell, Antti; Kolari, Samuli

    1978-01-01

    Explains a laboratory experiment dealing with the optical features of the human eye. Shows how to measure the magnification of the retina and how the refractive anomaly of the eye could be used to measure the refractive power of the observer's eye. (GA)

  15. Structured analysis and modeling of complex systems

    NASA Technical Reports Server (NTRS)

    Strome, David R.; Dalrymple, Mathieu A.

    1992-01-01

    The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.

  16. The Intermediate Complexity Atmospheric Research Model

    NASA Astrophysics Data System (ADS)

    Gutmann, Ethan; Clark, Martyn; Rasmussen, Roy; Arnold, Jeffrey; Brekke, Levi

    2015-04-01

    The high-resolution, non-hydrostatic atmospheric models often used for dynamical downscaling are extremely computationally expensive, and, for a certain class of problems, their complexity hinders our ability to ask key scientific questions, particularly those related to hydrology and climate change. For changes in precipitation in particular, an atmospheric model grid spacing capable of resolving the structure of mountain ranges is of critical importance, yet such simulations cannot currently be performed with an advanced regional climate model for long time periods, over large areas, and forced by many climate models. Here we present the newly developed Intermediate Complexity Atmospheric Research model (ICAR), capable of simulating critical atmospheric processes two to three orders of magnitude faster than a state-of-the-art regional climate model. ICAR uses a simplified dynamical formulation based on linear theory, combined with the circulation field from a low-resolution climate model. The resulting three-dimensional wind field is used to advect heat and moisture within the domain, while sub-grid physics (e.g. microphysics) are processed by standard and simplified physics schemes from the Weather Research and Forecasting (WRF) model. ICAR is tested in comparison to WRF by downscaling a climate change scenario over the Colorado Rockies. Both atmospheric models predict increases in precipitation across the domain with a greater increase on the western half. In contrast, statistically downscaled precipitation using multiple common statistical methods predicts decreases in precipitation over the western half of the domain. Finally, we apply ICAR to multiple CMIP5 climate models and scenarios with multiple parameterization options to investigate the importance of uncertainty in sub-grid physics as compared to the uncertainty in the large-scale climate scenario. ICAR is a useful tool for climate change and weather forecast downscaling, particularly for orographic
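
    The linear-theory core that such a model builds on can be sketched in Fourier space: each terrain mode forces a vertically propagating or evanescent wave. The steady 2-D setting, wind, stability, and ridge below are illustrative simplifications, not ICAR's actual implementation:

```python
import numpy as np

# Linear mountain-wave sketch: each terrain mode h_hat(k) forces
# w_hat(k, z) = i*U*k*h_hat(k)*exp(i*m*z), with m^2 = N^2/U^2 - k^2.
# Background wind U, stability N_bv and the hill are assumed values.

U, N_bv = 10.0, 0.01                  # wind (m/s), Brunt-Vaisala freq (1/s)
L, n = 200e3, 1024                    # domain length (m), grid points
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
h = 500.0 * np.exp(-(x / 10e3) ** 2)  # 500 m Gaussian ridge, assumed

k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
m2 = (N_bv / U) ** 2 - k ** 2
m = np.zeros(n, dtype=complex)
prop = m2 > 0
m[prop] = np.sign(k[prop]) * np.sqrt(m2[prop])   # upward radiation condition
m[~prop] = 1j * np.sqrt(-m2[~prop])              # evanescent decay aloft

z = 3000.0                            # evaluation height (m)
w_hat = 1j * U * k * np.fft.fft(h) * np.exp(1j * m * z)
w = np.real(np.fft.ifft(w_hat))       # vertical velocity used for advection
print(f"max updraft at z = 3 km: {w.max():.2f} m/s")
```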

  17. Modeling the human prothrombinase complex components

    NASA Astrophysics Data System (ADS)

    Orban, Tivadar

    Thrombin generation is the culminating stage of the blood coagulation process. Thrombin is obtained from prothrombin (the substrate) in a reaction catalyzed by the prothrombinase complex (the enzyme). The prothrombinase complex is composed of factor Xa (the enzyme) and factor Va (the cofactor), associated in the presence of calcium ions on a negatively charged cell membrane. Factor Xa, alone, can activate prothrombin to thrombin; however, the rate of conversion is not physiologically relevant for survival. Incorporation of factor Va into prothrombinase accelerates the rate of prothrombinase activity by 300,000-fold, and provides the physiological pathway of thrombin generation. The long-term goal of the current proposal is to provide the necessary support for advancing studies to design potential drug candidates that may be used to avoid development of deep venous thrombosis in high-risk patients. The short-term goals of the present proposal are (1) to propose a model of a mixed asymmetric phospholipid bilayer, (2) to expand the incomplete model of human coagulation factor Va and study its interaction with the phospholipid bilayer, (3) to create a homology model of prothrombin, and (4) to study the dynamics of interaction between prothrombin and the phospholipid bilayer.

  18. Rhabdomyomas and Tuberous sclerosis complex: our experience in 33 cases

    PubMed Central

    2014-01-01

    Background Rhabdomyomas are the most common type of cardiac tumors in children. Anatomically, they can be considered as hamartomas. They are usually diagnosed incidentally, antenatally or postnatally, sometimes presenting in the neonatal period with haemodynamic compromise or severe arrhythmias, although most neonatal cases remain asymptomatic. Typically rhabdomyomas are multiple lesions and usually regress spontaneously, but are often associated with tuberous sclerosis complex (TSC), an autosomal dominant multisystem disorder caused by mutations in either of the two genes, TSC1 or TSC2. Diagnosis of tuberous sclerosis is usually made on clinical grounds and eventually confirmed by a genetic test searching for TSC gene mutations. Methods We report our experience with 33 cases of rhabdomyomas diagnosed from January 1989 to December 2012, focusing on the cardiac outcome and on the association with signs of tuberous sclerosis complex. We performed echocardiography using initially a Philips Sonos 2500 with a 7.5/5 probe and in the last 4 years a Philips IE33 with a S12-4 probe. We investigated the family history, brain, skin, kidney and retinal lesions, development of seizures, and neuropsychiatric disorders. Results At diagnosis we detected 205 masses, mostly localized in the interventricular septum, right ventricle and left ventricle. Only in 4 babies (12%) did the presence of a mass cause a significant obstruction. A baby with an enormous septal rhabdomyoma, associated with multiple rhabdomyomas in both right and left ventricular walls, died just after birth due to severe heart failure. During follow-up we observed a reduction of rhabdomyomas in terms of both number and size in all 32 surviving patients except in one child. Eight patients (24.2%) had an arrhythmia, and in 2 of these cases rhabdomyomas led to Wolff-Parkinson-White Syndrome. In all patients the arrhythmia either disappeared spontaneously or was gradually reduced. With regard to the association with

  19. Lateral organization of complex lipid mixtures from multiscale modeling

    NASA Astrophysics Data System (ADS)

    Tumaneng, Paul W.; Pandit, Sagar A.; Zhao, Guijun; Scott, H. L.

    2010-02-01

    The organizational properties of complex lipid mixtures can give rise to functionally important structures in cell membranes. In model membranes, ternary lipid-cholesterol (CHOL) mixtures are often used as representative systems to investigate the formation and stabilization of localized structural domains ("rafts"). In this work, we describe a self-consistent mean-field model that builds on molecular dynamics simulations to incorporate multiple lipid components and to investigate the lateral organization of such mixtures. The model predictions reveal regions of bimodal order on ternary plots that are in good agreement with experiment. Specifically, we have applied the model to ternary mixtures composed of dioleoylphosphatidylcholine:18:0 sphingomyelin:CHOL. This work provides insight into the specific intermolecular interactions that drive the formation of localized domains in these mixtures. The model makes use of molecular dynamics simulations to extract interaction parameters and to provide chain configuration order parameter libraries.

  20. Structuring temporal sequences: comparison of models and factors of complexity.

    PubMed

    Essens, P

    1995-05-01

    Two stages for structuring tone sequences have been distinguished by Povel and Essens (1985). In the first, a mental clock segments a sequence into equal time units (clock model); in the second, intervals are specified in terms of subdivisions of these units. The present findings support the clock model in that it predicts human performance better than three other algorithmic models. Two further experiments in which clock and subdivision characteristics were varied did not support the hypothesized effect of the nature of the subdivisions on complexity. A model focusing on the variations in the beat-anchored envelopes of the tone clusters was proposed. Errors in reproduction suggest a dual-code representation comprising temporal and figural characteristics. The temporal part of the representation is based on the clock model but specifies, in addition, the metric of the level below the clock. The beat-tone-cluster envelope concept was proposed to specify the figural part. PMID:7596749
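
    The clock-induction step can be sketched as a scoring of candidate (unit, phase) clocks. This is a simplified rendering: the published model derives the accent pattern from grouping rules, whereas here the accents and the counter-evidence weight W are simply taken as given:

```python
# Simplified clock scoring: count counter-evidence where clock ticks fall
# on silence (weight W) or on unaccented events (weight 1); lower = better.

def clock_score(onsets, accents, unit, phase, length, W=4):
    ticks = list(range(phase, length, unit))
    silent = sum(1 for t in ticks if t not in onsets)
    unaccented = sum(1 for t in ticks if t in onsets and t not in accents)
    return W * silent + unaccented

# 16-slot pattern; the onsets and accented subset are assumed for the demo
onsets = {0, 2, 4, 7, 9, 12, 14}
accents = {0, 4, 12}
best = min((clock_score(onsets, accents, u, p, 16), u, p)
           for u in (2, 3, 4) for p in range(u))
print("best (score, unit, phase):", best)
```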

  1. Wind modelling over complex terrain using CFD

    NASA Astrophysics Data System (ADS)

    Avila, Matias; Owen, Herbert; Folch, Arnau; Prieto, Luis; Cosculluela, Luis

    2015-04-01

    The present work deals with the numerical CFD modelling of onshore wind farms in the context of High Performance Computing (HPC). The CFD model involves the numerical solution of the Reynolds-Averaged Navier-Stokes (RANS) equations together with a κ-ɛ turbulence model and the energy equation, specially designed for Atmospheric Boundary Layer (ABL) flows. The aim is to predict the wind velocity distribution over complex terrain, using a model that includes meteorological data assimilation, thermal coupling, forested canopy and Coriolis effects. The modelling strategy involves automatic mesh generation, terrain data assimilation and generation of boundary conditions for the inflow wind flow distribution up to the geostrophic height. The CFD model has been implemented in Alya, a HPC multi-physics parallel solver able to run with thousands of processors with optimal scalability, developed at the Barcelona Supercomputing Center. The implemented thermal stability and canopy physical model was developed by Sogachev in 2012. The k-ɛ equations are of non-linear convection-diffusion-reaction type. The implemented numerical scheme consists of a stabilized finite element formulation based on the variational multiscale method, which is known to be stable for this kind of turbulence equations. We present a numerical formulation that stresses the robustness of the solution method, tackling common problems that produce instability. The iterative strategy and linearization scheme are discussed. The aim is to avoid the possibility of having negative values of diffusion during the iterative process, which may lead to divergence of the scheme. These problems are addressed by acting on the coefficients of the reaction and diffusion terms and on the turbulent variables themselves. The k-ɛ equations are highly nonlinear. Complex terrain induces transient flow instabilities that may preclude the convergence of computer flow simulations based on steady state formulation of the
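
    The positivity safeguards described can be reduced to a small reusable pattern: floor the turbulent variables and under-relax each update so the eddy viscosity never turns negative mid-iteration. The raw "solver steps" below are invented stand-ins, not Alya's actual scheme:

```python
# Safeguards for nonlinear k-eps iterations: floor the turbulent variables
# and under-relax updates so nu_t = C_mu * k^2 / eps stays positive.
# The proposed raw updates are invented stand-ins for solver output.

C_MU, FLOOR, RELAX = 0.09, 1e-10, 0.5

def safe_update(old, proposed):
    """Under-relaxed, floored update for a positive turbulent variable."""
    return max(FLOOR, (1.0 - RELAX) * old + RELAX * proposed)

k, eps = 1e-3, 1e-4
for proposed_k, proposed_eps in [(2e-3, 5e-5), (-1e-3, 2e-4), (4e-3, 1e-4)]:
    k = safe_update(k, proposed_k)        # a raw Picard/Newton step may be
    eps = safe_update(eps, proposed_eps)  # negative; flooring keeps it valid
    nu_t = C_MU * k * k / eps             # eddy viscosity remains positive
    print(f"k={k:.2e}  eps={eps:.2e}  nu_t={nu_t:.2e}")
```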

  2. Modeling the relational complexities of symptoms.

    PubMed

    Dolin, R H

    1994-12-01

    Realization of the value of reliable codified medical data is growing at a rapid rate. Symptom data in particular have been shown to be useful in decision analysis and in the determination of patient outcomes. Electronic medical record systems are emerging, and attempts are underway to define the structure and content of these systems to support the storage of all medical data. The underlying models upon which these systems are being built continue to be strengthened by a deeper understanding of the complex information they are to store. This report analyzes symptoms as they might be recorded in free text notes and presents a high-level conceptual data model representation of this domain. PMID:7869941

  3. In Vivo Experiments with Dental Pulp Stem Cells for Pulp-Dentin Complex Regeneration

    PubMed Central

    Kim, Sunil; Shin, Su-Jung; Song, Yunjung; Kim, Euiseong

    2015-01-01

    In recent years, many studies have examined pulp-dentin complex regeneration with dental pulp stem cells (DPSCs). While it is important to perform research on cells, scaffolds, and growth factors, it is also critical to develop animal models for preclinical trials. The development of a reproducible animal model of transplantation is essential for obtaining precise and accurate data in vivo. The efficacy of pulp regeneration should be assessed qualitatively and quantitatively using animal models. This review article sought to introduce in vivo experiments that have evaluated the potential of dental pulp stem cells for pulp-dentin complex regeneration. According to a review of various studies of DPSCs, the majority of studies have used subcutaneous mouse and dog teeth for animal models. There is no way to know which animal model will best reproduce the clinical environment. If an animal model is developed which is easier to use and is useful in more situations than the currently popular models, it will be a substantial aid to studies examining pulp-dentin complex regeneration. PMID:26688616

  4. Inexpensive Complex Hand Model Twenty Years Later.

    PubMed

    Frenger, Paul

    2015-01-01

    Twenty years ago the author unveiled his inexpensive complex hand model, which reproduced every motion of the human hand. A control system programmed in the Forth language operated its actuators and sensors. Follow-on papers for this popular project were next presented in Texas, Canada and Germany. From this hand grew the author’s meter-tall robot (nicknamed ANNIE: Android With Neural Networks, Intellect and Emotions). It received machine vision, facial expressiveness, speech synthesis and speech recognition; a simian version also received a dexterous ape foot. New artificial intelligence features included op-amp neurons for OCR and simulated emotions, hormone emulation, endocannabinoid receptors, fear-trust-love mechanisms, a Grandmother Cell recognizer and artificial consciousness. Simulated illnesses included narcotic addiction, autism, PTSD, fibromyalgia and Alzheimer’s disease. The author gave 13 robotics-AI presentations at NASA in Houston since 2006. A meter-tall simian robot was proposed with gripping hand-feet for use with space vehicles and to explore distant planets and moons. Also proposed were: intelligent motorized exoskeletons for astronaut force multiplication; a cognitive prosthesis to detect and alleviate decreased crew mental performance; and a gynoid robot medic to tend astronauts in deep space missions. What began as a complex hand model evolved into an innovative robot-AI within two decades. PMID:25996742

  5. Complex Educational Design: A Course Design Model Based on Complexity

    ERIC Educational Resources Information Center

    Freire, Maximina Maria

    2013-01-01

    Purpose: This article aims at presenting a conceptual framework which, theoretically grounded on complexity, provides the basis to conceive of online language courses that intend to respond to the needs of students and society. Design/methodology/approach: This paper is introduced by reflections on distance education and on the paradigmatic view…

  6. An experiment with interactive planning models

    NASA Technical Reports Server (NTRS)

    Beville, J.; Wagner, J. H.; Zannetos, Z. S.

    1970-01-01

    Experiments on decision making in planning problems are described. Executives were tested in dealing with capital investments and competitive pricing decisions under conditions of uncertainty. A software package, the interactive risk analysis model system, was developed, and two controlled experiments were conducted. It is concluded that planning models can aid management, and predicted uses of the models are as a central tool, as an educational tool, to improve consistency in decision making, to improve communications, and as a tool for consensus decision making.

  7. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Technical Reports Server (NTRS)

    Befrui, Bizhan A.

    1995-01-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  8. Turbulence modeling needs of commercial CFD codes: Complex flows in the aerospace and automotive industries

    NASA Astrophysics Data System (ADS)

    Befrui, Bizhan A.

    1995-03-01

    This viewgraph presentation discusses the following: STAR-CD computational features; STAR-CD turbulence models; common features of industrial complex flows; industry-specific CFD development requirements; applications and experiences of industrial complex flows, including flow in rotating disc cavities, diffusion hole film cooling, internal blade cooling, and external car aerodynamics; and conclusions on turbulence modeling needs.

  9. Modeling of microgravity combustion experiments

    NASA Technical Reports Server (NTRS)

    Buckmaster, John

    1995-01-01

    This program started in February 1991, and is designed to improve our understanding of basic combustion phenomena by the modeling of various configurations undergoing experimental study by others. Results through 1992 were reported in the second workshop. Work since that time has examined the following topics: Flame-balls; Intrinsic and acoustic instabilities in multiphase mixtures; Radiation effects in premixed combustion; Smouldering, both forward and reverse, as well as two dimensional smoulder.

  10. A summary of computational experience at GE Aircraft Engines for complex turbulent flows in gas turbines

    NASA Astrophysics Data System (ADS)

    Zerkle, Ronald D.; Prakash, Chander

    1995-03-01

    This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its computational complexity. In addition, improved CFD simulation of film cooling and turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence modeling effort between NASA centers would be beneficial to the gas turbine industry.

  11. The Database for Reaching Experiments and Models

    PubMed Central

    Walker, Ben; Kording, Konrad

    2013-01-01

    Reaching is one of the central experimental paradigms in the field of motor control, and many computational models of reaching have been published. While most of these models try to explain subject data (such as movement kinematics, reaching performance, forces, etc.) from only a single experiment, distinct experiments often share experimental conditions and record similar kinematics. This suggests that reaching models could be applied to (and falsified by) multiple experiments. However, using multiple datasets is difficult because experimental data formats vary widely. Standardizing data formats promises to enable scientists to test model predictions against many experiments and to compare experimental results across labs. Here we report on the development of a new resource available to scientists: a database of reaching called the Database for Reaching Experiments And Models (DREAM). DREAM collects both experimental datasets and models and facilitates their comparison by standardizing formats. The DREAM project promises to be useful for experimentalists who want to understand how their data relates to models, for modelers who want to test their theories, and for educators who want to help students better understand reaching experiments, models, and data analysis. PMID:24244351

  12. Using Perspective to Model Complex Processes

    SciTech Connect

    Kelsey, R.L.; Bisset, K.R.

    1999-04-04

    The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.

  13. Modeling the respiratory chain complexes with biothermokinetic equations - the case of complex I.

    PubMed

    Heiske, Margit; Nazaret, Christine; Mazat, Jean-Pierre

    2014-10-01

    The mitochondrial respiratory chain plays a crucial role in energy metabolism, and its dysfunction is implicated in a wide range of human diseases. In order to understand the global expression of local mutations in the rate of oxygen consumption or in the production of adenosine triphosphate (ATP), it is useful to have a mathematical model in which the changes in a given respiratory complex are properly represented. Our aim in this paper is to provide thermodynamically consistent and structurally simple equations to represent the kinetics of each isolated complex, which can, when assembled into a dynamical system, also simulate the behavior of the respiratory chain as a whole under a large set of different physiological and pathological conditions. Using the example of reduced nicotinamide adenine dinucleotide (NADH)-ubiquinol oxidoreductase (complex I), we analyze the suitability of different types of rate equations. Based on our kinetic experiments, we show that very simple rate laws, such as those often used in many respiratory chain models, fail to describe the kinetic behavior when applied to a wide concentration range. This led us to adapt rate equations containing the essential parameters of enzyme kinetics, maximal velocities and Henri-Michaelis-Menten-like constants (KM and KI), to satisfactorily simulate these data. PMID:25064016
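
    As an illustration of the kind of rate law the abstract argues for, the following sketch implements a reversible, Henri-Michaelis-Menten-like equation for complex I. The functional form is a generic rapid-equilibrium bi-substrate law, and all parameter values are hypothetical placeholders rather than the fitted constants of the paper.

```python
# Reversible, rapid-equilibrium bi-substrate rate law of the
# Henri-Michaelis-Menten type for complex I (NADH + Q -> NAD+ + QH2).
# All kinetic constants are hypothetical placeholders.

def complex_i_rate(nadh, q, nad, qh2,
                   vf=1.0, vr=0.1,           # forward/reverse maximal velocities
                   km_nadh=0.1, km_q=0.1,    # Michaelis constants (substrates)
                   km_nad=1.0, km_qh2=1.0):  # Michaelis constants (products)
    """Net rate of NADH:ubiquinone oxidoreductase (arbitrary units)."""
    drive = vf * (nadh / km_nadh) * (q / km_q) - vr * (nad / km_nad) * (qh2 / km_qh2)
    binding = (1 + nadh / km_nadh + nad / km_nad) * (1 + q / km_q + qh2 / km_qh2)
    return drive / binding

# Example: rate across a wide NADH concentration range at fixed Q, NAD+, QH2
for nadh in (0.01, 0.1, 1.0, 10.0):
    print(nadh, complex_i_rate(nadh, q=0.5, nad=0.5, qh2=0.1))
```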

  14. Mathematical modelling of complex contagion on clustered networks

    NASA Astrophysics Data System (ADS)

    O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James

    2015-09-01

    The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of the social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hebert-Dufresne et al. (Phys. Rev. E, 2010) to study disease spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
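
    The paper's contribution is analytical, but the underlying contrast is easy to reproduce by direct simulation. The sketch below runs a complex contagion, in which the adoption probability jumps once two or more neighbours have adopted, on a clustered ring lattice versus a fully rewired, tree-like network; the probabilities p1 and p2 are illustrative values, not parameters from the paper.

```python
# Monte Carlo sketch of complex contagion, in the spirit of Centola's
# comparison of clustered and random networks. p1 applies with one
# adopting neighbour; p2 (social reinforcement) with two or more.
import random
import networkx as nx

def spread(g, p1=0.05, p2=0.5, seeds=5, steps=50, rng=random.Random(1)):
    adopted = set(rng.sample(list(g.nodes), seeds))
    for _ in range(steps):
        new = set()
        for node in g.nodes:
            if node in adopted:
                continue
            k = sum(1 for nb in g.neighbors(node) if nb in adopted)
            p = p2 if k >= 2 else (p1 if k == 1 else 0.0)
            if rng.random() < p:
                new.add(node)
        adopted |= new
    return len(adopted)

n, k = 1000, 6
clustered = nx.watts_strogatz_graph(n, k, p=0.0, seed=1)  # ring lattice: many triangles
rewired = nx.watts_strogatz_graph(n, k, p=1.0, seed=1)    # fully rewired: tree-like
print("clustered:", spread(clustered), "rewired:", spread(rewired))
```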

  15. Capturing the experiences of patients across multiple complex interventions: a meta-qualitative approach

    PubMed Central

    Webster, Fiona; Christian, Jennifer; Mansfield, Elizabeth; Bhattacharyya, Onil; Hawker, Gillian; Levinson, Wendy; Naglie, Gary; Pham, Thuy-Nga; Rose, Louise; Schull, Michael; Sinha, Samir; Stergiopoulos, Vicky; Upshur, Ross; Wilson, Lynn

    2015-01-01

    Objectives The perspectives, needs and preferences of individuals with complex health and social needs can be overlooked in the design of healthcare interventions. This study was designed to provide new insights on patient perspectives, drawing from the qualitative evaluation of 5 complex healthcare interventions. Setting Patients and their caregivers were recruited from 5 interventions based in primary, hospital and community care in Ontario, Canada. Participants We included 62 interviews from 44 patients and 18 non-clinical caregivers. Intervention Our team analysed the transcripts from 5 distinct projects. This approach to qualitative meta-evaluation identifies common issues described by a diverse group of patients, therefore providing potential insights into systems issues. Outcome measures This study is a secondary analysis of qualitative data; therefore, no outcome measures were identified. Results We identified 5 broad themes that capture the patients’ experience and highlight issues that might not be adequately addressed in complex interventions. In our study, we found that: (1) the emergency department is the unavoidable point of care; (2) patients and caregivers are part of complex and variable family systems; (3) non-medical issues mediate patients’ experiences of health and healthcare delivery; (4) the unanticipated consequences of complex healthcare interventions are often the most valuable; and (5) patient experiences are shaped by the healthcare discourses on medically complex patients. Conclusions Our findings suggest that key assumptions about patients that inform intervention design need to be made explicit in order to build capacity to better understand and support patients with multiple chronic diseases. Across many health systems internationally, multiple models are being implemented simultaneously that may have shared features and target similar patients, and a qualitative meta-evaluation approach thus offers an opportunity for cumulative

  16. CALIBRATION OF SUBSURFACE BATCH AND REACTIVE-TRANSPORT MODELS INVOLVING COMPLEX BIOGEOCHEMICAL PROCESSES

    EPA Science Inventory

    In this study, the calibration of subsurface batch and reactive-transport models involving complex biogeochemical processes was systematically evaluated. Two hypothetical nitrate biodegradation scenarios were developed and simulated in numerical experiments to evaluate the perfor...

  17. Geomorphological experiments for understanding cross-scale complexity of earth surface processes

    NASA Astrophysics Data System (ADS)

    Seeger, Manuel

    2016-04-01

    The shape of the earth's surface is the result of a complex interaction of different processes at different spatial and temporal scales. The challenge is that process observation is rarely possible across these different scales, and the resulting landform often does not match the scale at which processes can be observed. Yet identifying and understanding the processes involved and their interactions is indispensable for developing concepts of landform formation, and developing models requires quantifying these processes and their relevant parameters. Experiments can bridge the constraints on process observation mentioned above: they make it possible to observe and quantify individual processes as well as complex process combinations, up to the development of geomorphological units. Drawing on soil erosion research, this contribution aims to show how experimental methods can contribute to the understanding of geomorphological processes. Special emphasis is put on the linkage between the conceptual understanding of processes, their measurement, and the subsequent development of models. The development of experiments to quantify relevant parameters is presented, as well as the steps undertaken to bring them into the field, taking into account the resulting increase of uncertainty in system parameters and results. It is shown that experiments can nevertheless produce precise measurements of individual processes as well as of complex combinations of parameters and processes, and can identify their influence on the overall geomorphological dynamics. Experiments are therefore a methodological package for examining complex soil erosion processes at different levels of conceptualization and for generating data for their quantification, and thus a methodological concept that deserves wider use and further development in geomorphological science.

  18. Ants (Formicidae): models for social complexity.

    PubMed

    Smith, Chris R; Dolezal, Adam; Eliyahu, Dorit; Holbrook, C Tate; Gadau, Jürgen

    2009-07-01

    The family Formicidae (ants) is composed of more than 12,000 described species that vary greatly in size, morphology, behavior, life history, ecology, and social organization. Ants occur in most terrestrial habitats and are the dominant animals in many of them. They have been used as models to address fundamental questions in ecology, evolution, behavior, and development. The literature on ants is extensive, and the natural history of many species is known in detail. Phylogenetic relationships for the family, as well as within many subfamilies, are known, enabling comparative studies. Their ease of sampling and ecological variation makes them attractive for studying populations and questions relating to communities. Their sociality and variation in social organization have contributed greatly to an understanding of complex systems, division of labor, and chemical communication. Ants occur in colonies composed of tens to millions of individuals that vary greatly in morphology, physiology, and behavior; this variation has been used to address proximate and ultimate mechanisms generating phenotypic plasticity. Relatedness asymmetries within colonies have been fundamental to the formulation and empirical testing of kin and group selection theories. Genomic resources have been developed for some species, and a whole-genome sequence for several species is likely to follow in the near future; comparative genomics in ants should provide new insights into the evolution of complexity and sociogenomics. Future studies using ants should help establish a more comprehensive understanding of social life, from molecules to colonies. PMID:20147200

  19. Physical modelling of the nuclear pore complex

    PubMed Central

    Fassati, Ariberto; Ford, Ian J.; Hoogenboom, Bart W.

    2013-01-01

    Physically interesting behaviour can arise when soft matter is confined to nanoscale dimensions. A highly relevant biological example of such a phenomenon is the Nuclear Pore Complex (NPC) found perforating the nuclear envelope of eukaryotic cells. In the central conduit of the NPC, of ∼30–60 nm diameter, a disordered network of proteins regulates all macromolecular transport between the nucleus and the cytoplasm. In spite of a wealth of experimental data, the selectivity barrier of the NPC has yet to be explained fully. Experimental and theoretical approaches are complicated by the disordered and heterogeneous nature of the NPC conduit. Modelling approaches have focused on the behaviour of the partially unfolded protein domains in the confined geometry of the NPC conduit, and have demonstrated that within the range of parameters thought relevant for the NPC, widely varying behaviour can be observed. In this review, we summarise recent efforts to physically model the NPC barrier and function. We illustrate how attempts to understand NPC barrier function have employed many different modelling techniques, each of which have contributed to our understanding of the NPC.

  20. 40 CFR 80.45 - Complex emissions model.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Complex emissions model. 80.45 Section...) REGULATION OF FUELS AND FUEL ADDITIVES Reformulated Gasoline § 80.45 Complex emissions model. (a) Definition... fuel which is being evaluated for its emissions performance using the complex model OXY =...

  1. Finding the right balance between groundwater model complexity and experimental effort via Bayesian model selection

    NASA Astrophysics Data System (ADS)

    Schöniger, Anneli; Illman, Walter A.; Wöhling, Thomas; Nowak, Wolfgang

    2015-12-01

    Groundwater modelers face the challenge of how to assign representative parameter values to the studied aquifer. Several approaches are available to parameterize spatial heterogeneity in aquifer parameters. They differ in their conceptualization and complexity, ranging from homogeneous models to heterogeneous random fields. While it is common practice to invest more effort into data collection for models with a finer resolution of heterogeneities, there is little guidance on how much data is required to justify a certain level of model complexity. In this study, we propose to use concepts related to Bayesian model selection to identify this balance. We demonstrate our approach on the characterization of a heterogeneous aquifer via hydraulic tomography in a sandbox experiment (Illman et al., 2010). We consider four increasingly complex parameterizations of hydraulic conductivity: (1) effective homogeneous medium, (2) geology-based zonation, (3) interpolation by pilot points, and (4) geostatistical random fields. First, we investigate the shift in justified complexity with increasing amount of available data by constructing a model confusion matrix. This matrix indicates the maximum level of complexity that can be justified given a specific experimental setup. Second, we determine which parameterization is most adequate given the observed drawdown data. Third, we test how the different parameterizations perform in a validation setup. The results of our test case indicate that aquifer characterization via hydraulic tomography does not necessarily require (or justify) a geostatistical description. Instead, a zonation-based model might be a more robust choice, but only if the zonation is geologically adequate.
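
    A minimal sketch of the machinery behind such a comparison: Bayesian model weights follow from Monte Carlo estimates of each model's evidence p(D|M), averaged over prior samples. The likelihood, priors, and "drawdown" data below are synthetic stand-ins, not the study's setup.

```python
# Sketch of Bayesian model selection via Monte Carlo estimates of the
# evidence p(D|M) for competing parameterizations. Everything here
# (likelihood, priors, data) is a synthetic stand-in for the drawdown
# data and conductivity models of the study.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 0.2, size=20)   # synthetic "observed drawdowns"
sigma = 0.2                            # assumed measurement error

def log_likelihood(pred):
    return (-0.5 * np.sum((data - pred) ** 2) / sigma**2
            - data.size * np.log(sigma * np.sqrt(2 * np.pi)))

def evidence(prior_sampler, predictor, n=10000):
    # p(D|M) is approximated by the prior-averaged likelihood
    logls = np.array([log_likelihood(predictor(prior_sampler())) for _ in range(n)])
    m = logls.max()
    return np.exp(m) * np.mean(np.exp(logls - m))  # numerically stable average

# Model 1: homogeneous medium (one parameter); Model 2: two zones (two parameters)
ev1 = evidence(lambda: rng.normal(1, 1), lambda K: np.full(20, K))
ev2 = evidence(lambda: rng.normal(1, 1, size=2), lambda K: np.repeat(K, 10))
weights = np.array([ev1, ev2]) / (ev1 + ev2)
print("posterior model weights:", weights)
```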

  2. Modeling Choice and Valuation in Decision Experiments

    ERIC Educational Resources Information Center

    Loomes, Graham

    2010-01-01

    This article develops a parsimonious descriptive model of individual choice and valuation in the kinds of experiments that constitute a substantial part of the literature relating to decision making under risk and uncertainty. It suggests that many of the best known "regularities" observed in those experiments may arise from a tendency for…

  3. Experiments with models committees for flow forecasting

    NASA Astrophysics Data System (ADS)

    Ye, J.; Kayastha, N.; van Andel, S. J.; Fenicia, F.; Solomatine, D. P.

    2012-04-01

    In hydrological modelling, typically a single model is used to account for all possible hydrological loads, seasons and regimes. We argue, however, that if a model is not complex enough (as is the case for conceptual or semi-distributed models), a single model can hardly capture all facets of a complex process, and more flexible modelling architectures are therefore required. One possibility is to build several specialized models and make each responsible for a particular sub-process; the output is then a combination of the outputs of the individual models. In machine learning this approach is widely applied: several learning models are combined in a committee (where each model has a "voting" right with a particular weight). In this presentation we concentrate on optimising the above-mentioned process of building a model committee, and on various ways of (a) building individual specialized models (mainly by calibrating them on subsets of data and regimes corresponding to hydrological sub-processes), and (b) combining their outputs (using the ideas of a fuzzy committee with various parameterisations). In doing so, we extend the approaches developed in [1, 2] and present new results. We consider this problem in a multi-objective optimization setting (where objective functions correspond to different hydrological regimes), leading to a number of Pareto-optimal model combinations from which the most appropriate for a given task can be chosen. Applications of the presented approach to flow forecasting are presented.
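
    A minimal sketch of the committee idea, assuming a sigmoidal fuzzy membership that hands control from a low-flow specialist to a high-flow specialist as discharge grows; the membership function, its parameters, and the two toy models are illustrative, not the parameterisations studied by the authors.

```python
# Fuzzy committee of two specialized hydrological models: membership
# weights depend smoothly on the flow regime. All parameters and the
# two toy "models" are illustrative placeholders.
import numpy as np

def committee_forecast(q_previous, model_low, model_high, q_split=50.0, s=10.0):
    """Weight the high-flow model by a sigmoidal membership in [0, 1]."""
    w_high = 1.0 / (1.0 + np.exp(-(q_previous - q_split) / s))
    return w_high * model_high(q_previous) + (1.0 - w_high) * model_low(q_previous)

# Toy specialized models (placeholders for separately calibrated models)
model_low = lambda q: 0.9 * q
model_high = lambda q: 1.2 * q - 5.0

for q in (10.0, 50.0, 120.0):
    print(q, committee_forecast(q, model_low, model_high))
```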

  4. Comparison of Two Pasture Growth Models of Differing Complexity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two pasture growth models that share many common features but differ in model complexity have been developed for incorporation into the Integrated Farm System Model (IFSM). Major differences between models include the explicit representation of roots in the more complex model, and their effects on c...

  5. Troposphere-lower-stratosphere connection in an intermediate complexity model.

    NASA Astrophysics Data System (ADS)

    Ruggieri, Paolo; King, Martin; Kucharski, Fred; Buizza, Roberto; Visconti, Guido

    2016-04-01

    The dynamical coupling between the troposphere and the lower stratosphere has been investigated using a low-top, intermediate-complexity model provided by the Abdus Salam International Centre for Theoretical Physics (SPEEDY). The key question that we wanted to address is whether a simple model like SPEEDY can be used to understand troposphere-stratosphere interactions, e.g. forced by changes of sea-ice concentration in polar Arctic regions. Three sets of experiments have been performed. Firstly, a potential vorticity perspective has been applied to understand the wave-like forcing of the troposphere on the stratosphere and to provide quantitative information on the sub-seasonal variability of the coupling. Then, the zonally asymmetric, near-surface response to a lower-stratospheric forcing has been analysed in a set of forced experiments with an artificial heating imposed in the extra-tropical lower stratosphere. Finally, the lower-stratosphere response sensitivity to tropospheric initial conditions has been examined. Results indicate that SPEEDY captures the physics of the troposphere-stratosphere connection but also reveal its lack of stratospheric variability. Results also suggest that intermediate-complexity models such as SPEEDY could be used to investigate the effects that surface forcing (e.g. due to sea-ice concentration changes) has on the troposphere and the lower stratosphere.

  6. An Experiment on Isomerism in Metal-Amino Acid Complexes.

    ERIC Educational Resources Information Center

    Harrison, R. Graeme; Nolan, Kevin B.

    1982-01-01

    Background information, laboratory procedures, and discussion of results are provided for syntheses of cobalt (III) complexes, I-III, illustrating three possible bonding modes of glycine to a metal ion (the complex cations II and III being linkage/geometric isomers). Includes spectrophotometric and potentiometric methods to distinguish among the…

  7. Microwave scattering models and basic experiments

    NASA Technical Reports Server (NTRS)

    Fung, Adrian K.

    1989-01-01

    Progress is summarized which has been made in four areas of study: (1) scattering model development for sparsely populated media, such as a forested area; (2) scattering model development for dense media, such as a sea ice medium or a snow covered terrain; (3) model development for randomly rough surfaces; and (4) design and conduct of basic scattering and attenuation experiments suitable for the verification of theoretical models.

  8. Analytical models for complex swirling flows

    NASA Astrophysics Data System (ADS)

    Borissov, A.; Hussain, V.

    1996-11-01

    We develop a new class of analytical solutions of the Navier-Stokes equations for swirling flows and suggest ways to predict and control such flows occurring in various technological applications. We view momentum accumulation on the axis as a key feature of swirling flows and consider vortex-sink flows on curved axisymmetric surfaces with an axial flow. We show that these solutions model swirling flows in a cylindrical can, whirlpools, tornadoes, and cosmic swirling jets. The singularity of these solutions on the flow axis is removed by matching them with near-axis Schlichting and Long's swirling jets. The matched solutions model flows with very complex patterns, consisting of up to seven separation regions with recirculatory 'bubbles' and vortex rings. We apply the matched solutions to compute flows in the Ranque-Hilsch tube, in the meniscus of electrosprays, in vortex breakdown, and in an industrial vortex burner. The simple analytical solutions allow a clear understanding of how different control parameters affect the flow and guide selection of optimal parameter values for desired flow features. These solutions permit extension to other problems (such as heat transfer and chemical reaction) and have the potential of being significantly useful for further detailed investigation by direct or large-eddy numerical simulations as well as laboratory experimentation.

  9. Using ecosystem experiments to improve vegetation models

    NASA Astrophysics Data System (ADS)

    Medlyn, Belinda E.; Zaehle, Sönke; de Kauwe, Martin G.; Walker, Anthony P.; Dietze, Michael C.; Hanson, Paul J.; Hickler, Thomas; Jain, Atul K.; Luo, Yiqi; Parton, William; Prentice, I. Colin; Thornton, Peter E.; Wang, Shusen; Wang, Ying-Ping; Weng, Ensheng; Iversen, Colleen M.; McCarthy, Heather R.; Warren, Jeffrey M.; Oren, Ram; Norby, Richard J.

    2015-06-01

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model-Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions causing differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. Here, we explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  10. A Model Study of Complex Behavior in the Belousov-Zhabotinskii Reaction.

    NASA Astrophysics Data System (ADS)

    Lindberg, David Mark

    1988-12-01

    We have studied the complex oscillatory behavior in a model of the Belousov-Zhabotinskii (BZ) reaction in a continuously-fed stirred tank reactor (CSTR). The model consisted of a set of nonlinear ordinary differential equations derived from a reduced mechanism of the chemical system. These equations were integrated numerically on a computer, which yielded the concentrations of the constituent chemicals as functions of time. In addition, solutions were tracked as functions of a single parameter, the stability of the solutions was determined, and bifurcations of the solutions were located and studied. The intent of this study was to use this BZ model to explore further a region of complex oscillatory behavior found in experimental investigations, the most thorough of which revealed an alternating periodic-chaotic (P-C) sequence of states. A P-C sequence was discovered in the model which showed the same qualitative features as the experimental sequence. In order to better understand the P-C sequence, a detailed study was conducted in the vicinity of the P-C sequence, with two experimentally accessible parameters as control variables. This study mapped out the bifurcation sets, and included examination of the dynamics of the stable periodic, unstable periodic, and chaotic oscillatory motion. Observations made from the model results revealed a rough symmetry which suggests a new way of looking at the P-C sequence. Other nonlinear phenomena uncovered in the model were boundary and interior crises, several codimension-two bifurcations, and similarities in the shapes of areas of stability for periodic orbits in two-parameter space. Each earlier model study of this complex region involved only a limited one-parameter scan and had limited success in producing agreement with experiments. In contrast, for those regions of complex behavior that have been studied experimentally, the observations agree qualitatively with our model results. Several new predictions of the model
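
    The abstract does not reproduce the reduced mechanism, so as a stand-in the sketch below integrates the classic two-variable Oregonator, the standard reduced model of the BZ reaction, with a stiff solver; parameter values are illustrative.

```python
# Sketch: integrating a reduced BZ mechanism as a stiff ODE system.
# The paper's own mechanism is not given in the abstract, so the
# two-variable Oregonator stands in; parameters are illustrative
# values in the oscillatory regime.
import numpy as np
from scipy.integrate import solve_ivp

eps, q, f = 0.04, 8e-4, 2.0 / 3.0

def oregonator(t, y):
    x, z = y                     # x ~ HBrO2, z ~ oxidized catalyst
    dx = (x * (1 - x) - f * z * (x - q) / (x + q)) / eps
    dz = x - z
    return [dx, dz]

sol = solve_ivp(oregonator, (0, 50), [0.1, 0.1], method="LSODA",
                rtol=1e-8, atol=1e-10)
print("final state:", sol.y[:, -1])  # sustained relaxation oscillations
```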

  11. A Hadronization Model for the MINOS Experiment

    NASA Astrophysics Data System (ADS)

    Yang, T.; Andreopoulos, C.; Gallagher, H.; Kehayias, P.

    2007-12-01

    We present a detailed description of the Andreopoulos-Gallagher-Kehayias-Yang (AGKY) hadronic multiparticle production model. This model was developed within the context of the MINOS experiment [4]. Its validity spans a wide invariant mass range starting from as low as the pion production threshold. It exhibits satisfactory agreement with a wide variety of experimental data.

  12. Epigenetics of complex diseases: from general theory to laboratory experiments.

    PubMed

    Schumacher, A; Petronis, A

    2006-01-01

    Despite significant effort, understanding the causes and mechanisms of complex non-Mendelian diseases remains a key challenge. Although numerous molecular genetic linkage and association studies have been conducted in order to explain the heritable predisposition to complex diseases, the resulting data are quite often inconsistent and even controversial. In a similar way, identification of environmental factors causal to a disease is difficult. In this article, a new interpretation of the paradigm of "genes plus environment" is presented in which the emphasis is shifted to epigenetic misregulation as a major etiopathogenic factor. Epigenetic mechanisms are consistent with various non-Mendelian irregularities of complex diseases, such as the existence of clinically indistinguishable sporadic and familial cases, sexual dimorphism, relatively late age of onset and peaks of susceptibility to some diseases, discordance of monozygotic twins and major fluctuations on the course of disease severity. It is also suggested that a substantial portion of phenotypic variance that traditionally has been attributed to environmental effects may result from stochastic epigenetic events in the cell. It is argued that epigenetic strategies, when applied in parallel with the traditional genetic ones, may significantly advance the discovery of etiopathogenic mechanisms of complex diseases. The second part of this chapter is dedicated to a review of laboratory methods for DNA methylation analysis, which may be useful in the study of complex diseases. In this context, epigenetic microarray technologies are emphasized, as it is evident that such technologies will significantly advance epigenetic analyses in complex diseases. PMID:16909908

  13. Reduced Complexity Modeling (RCM): toward more use of less

    NASA Astrophysics Data System (ADS)

    Paola, Chris; Voller, Vaughan

    2014-05-01

    Although not exact, there is a general correspondence between reductionism and detailed, high-fidelity models, while 'synthesism' is often associated with reduced-complexity modeling. There is no question that high-fidelity, reduction-based computational models are extremely useful in simulating the behaviour of complex natural systems. In skilled hands they are also a source of insight and understanding. We focus here on the case for the other side (reduced-complexity models), not because we think they are 'better' but because their value is more subtle, and their natural constituency less clear. What kinds of problems and systems lend themselves to the reduced-complexity approach? RCM is predicated on the idea that the mechanism of the system or phenomenon in question is, for whatever reason, insensitive to the full details of the underlying physics. There are multiple ways in which this can happen. B.T. Werner argued for the importance of process hierarchies in which processes at larger scales depend on only a small subset of everything going on at smaller scales. Clear scale breaks would seem like a way to test systems for this property but to our knowledge have not been used in this way. We argue that scale-independent physics, as for example exhibited by natural fractals, is another. We also note that the same basic criterion - independence of the process in question from details of the underlying physics - underpins 'unreasonably effective' laboratory experiments. There is thus a link between suitability for experimentation at reduced scale and suitability for RCM. Examples from RCM approaches to erosional landscapes, braided rivers, and deltas illustrate these ideas, and suggest that they are insufficient. There is something of a 'wild west' nature to RCM that puts some researchers off by suggesting a departure from traditional methods that have served science well for centuries. We offer two thoughts: first, that in the end the measure of a model is its

  14. Argonne Bubble Experiment Thermal Model Development

    SciTech Connect

    Buechler, Cynthia Eileen

    2015-12-03

    This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium. The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model.

  15. STELLA Experiment: Design and Model Predictions

    SciTech Connect

    Kimura, W. D.; Babzien, M.; Ben-Zvi, I.; Campbell, L. P.; Cline, D. B.; Fiorito, R. B.; Gallardo, J. C.; Gottschalk, S. C.; He, P.; Kusche, K. P.; Liu, Y.; Pantell, R. H.; Pogorelsky, I. V.; Quimby, D. C.; Robinson, K. E.; Rule, D. W.; Sandweiss, J.; Skaritka, J.; van Steenbergen, A.; Steinhauer, L. C.; Yakimenko, V.

    1998-07-05

    The STaged ELectron Laser Acceleration (STELLA) experiment will be one of the first to examine the critical issue of staging the laser acceleration process. The BNL inverse free electron laser (EEL) will serve as a prebuncher to generate ~1 μm long microbunches. These microbunches will be accelerated by an inverse Cerenkov acceleration (ICA) stage. A comprehensive model of the STELLA experiment is described. This model includes the EEL prebunching, drift and focusing of the microbunches into the ICA stage, and their subsequent acceleration. The model predictions will be presented including the results of a system error study to determine the sensitivity to uncertainties in various system parameters.

  16. Surface complexation model of uranyl sorption on Georgia kaolinite

    USGS Publications Warehouse

    Payne, T.E.; Davis, J.A.; Lumpkin, G.R.; Chisari, R.; Waite, T.D.

    2004-01-01

    The adsorption of uranyl on standard Georgia kaolinites (KGa-1 and KGa-1B) was studied as a function of pH (3-10), total U (1 and 10 μmol/l), and mass loading of clay (4 and 40 g/l). The uptake of uranyl in air-equilibrated systems increased with pH and reached a maximum in the near-neutral pH range. At higher pH values, the sorption decreased due to the presence of aqueous uranyl carbonate complexes. One kaolinite sample was examined after the uranyl uptake experiments by transmission electron microscopy (TEM), using energy dispersive X-ray spectroscopy (EDS) to determine the U content. It was found that uranium was preferentially adsorbed by Ti-rich impurity phases (predominantly anatase), which are present in the kaolinite samples. Uranyl sorption on the Georgia kaolinites was simulated with U sorption reactions on both titanol and aluminol sites, using a simple non-electrostatic surface complexation model (SCM). The relative amounts of U-binding >TiOH and >AlOH sites were estimated from the TEM/EDS results. A ternary uranyl carbonate complex on the titanol site improved the fit to the experimental data in the higher pH range. The final model contained only three optimised log K values, and was able to simulate adsorption data across a wide range of experimental conditions. The >TiOH (anatase) sites appear to play an important role in retaining U at low uranyl concentrations. As kaolinite often contains trace TiO2, its presence may need to be taken into account when modelling the results of sorption experiments with radionuclides or trace metals on kaolinite. © 2004 Elsevier B.V. All rights reserved.

  17. Using Ecosystem Experiments to Improve Vegetation Models

    SciTech Connect

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; Prentice, I. Collin; Thornton, Peter E.; Wang, Shusen; Wang, Yingping; Weng, Ensheng; Iversen, Colleen M.; McCarthy, Heather R.; Warren, Jeffrey; Oren, Ram; Norby, Richard J

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions causing differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  18. Using Ecosystem Experiments to Improve Vegetation Models

    DOE PAGES Beta

    Medlyn, Belinda; Zaehle, S; DeKauwe, Martin G.; Walker, Anthony P.; Dietze, Michael; Hanson, Paul J.; Hickler, Thomas; Jain, Atul; Luo, Yiqi; Parton, William; et al

    2015-05-21

    Ecosystem responses to rising CO2 concentrations are a major source of uncertainty in climate change projections. Data from ecosystem-scale Free-Air CO2 Enrichment (FACE) experiments provide a unique opportunity to reduce this uncertainty. The recent FACE Model–Data Synthesis project aimed to use the information gathered in two forest FACE experiments to assess and improve land ecosystem models. A new 'assumption-centred' model intercomparison approach was used, in which participating models were evaluated against experimental data based on the ways in which they represent key ecological processes. By identifying and evaluating the main assumptions causing differences among models, the assumption-centred approach produced a clear roadmap for reducing model uncertainty. We explain this approach and summarize the resulting research agenda. We encourage the application of this approach in other model intercomparison projects to fundamentally improve predictive understanding of the Earth system.

  19. Experiences with two-equation turbulence models

    NASA Technical Reports Server (NTRS)

    Singhal, Ashok K.; Lai, Yong G.; Avva, Ram K.

    1995-01-01

    This viewgraph presentation discusses the following: introduction to CFD Research Corporation; experiences with two-equation models - models used, numerical difficulties, validation and applications, and strengths and weaknesses; and answers to three questions posed by the workshop organizing committee - what are your customers telling you, what are you doing in-house, and how can NASA-CMOTT (Center for Modeling of Turbulence and Transition) help.

  20. Modeling competitive substitution in a polyelectrolyte complex

    SciTech Connect

    Peng, B.; Muthukumar, M.

    2015-12-28

    We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain by another, longer polyanion chain, using a coarse-grained united-atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain must be sufficiently longer than the chain being displaced to effect the substitution. Yet making the invading chain longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out under salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
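
    For readers unfamiliar with the methodology, the following is a minimal sketch of an overdamped Langevin (Brownian dynamics) update for charged beads with harmonic bonds and screened electrostatics. It is a toy stand-in for the authors' united-atom model: reduced units, illustrative parameters, and a simple Debye-Hückel pair force with a soft-core regularization.

```python
# Minimal overdamped Langevin (Brownian dynamics) sketch for charged
# bead chains: harmonic bonds plus screened Coulomb pair forces.
# All parameters are in reduced units and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
kT, gamma, dt = 1.0, 1.0, 1e-4
k_bond, r0 = 100.0, 1.0   # bond stiffness and rest length
lb, kappa = 1.0, 1.0      # Bjerrum length, inverse screening length

def forces(x, charges, bonds):
    f = np.zeros_like(x)
    for i, j in bonds:                 # harmonic bonds along each chain
        d = x[j] - x[i]
        r = np.linalg.norm(d)
        f_bond = k_bond * (r - r0) * d / (r + 1e-12)
        f[i] += f_bond
        f[j] -= f_bond
    n = len(x)
    for i in range(n):                 # screened electrostatics, all pairs
        for j in range(i + 1, n):
            d = x[j] - x[i]
            r = np.linalg.norm(d)
            u = d / (r + 1e-12)
            re = r + 0.5               # soft-core offset avoids the r -> 0 singularity
            mag = lb * charges[i] * charges[j] * np.exp(-kappa * re) * (1 + kappa * re) / re**2
            f[i] -= mag * u            # like charges repel, opposite charges attract
            f[j] += mag * u
    return f

def brownian_step(x, charges, bonds):
    noise = rng.normal(0.0, np.sqrt(2 * kT * dt / gamma), size=x.shape)
    return x + forces(x, charges, bonds) * dt / gamma + noise

# Toy system: one 4-bead polycation and one 4-bead polyanion
x = rng.normal(scale=2.0, size=(8, 3))
charges = np.array([+1.0] * 4 + [-1.0] * 4)
bonds = [(0, 1), (1, 2), (2, 3), (4, 5), (5, 6), (6, 7)]
for _ in range(1000):
    x = brownian_step(x, charges, bonds)
print("spatial extent after relaxation:", np.ptp(x, axis=0))
```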

  1. Newton and Colour: The Complex Interplay of Theory and Experiment.

    ERIC Educational Resources Information Center

    Martins, Roberto De Andrade; Silva, Cibelle Celestino

    2001-01-01

    Elucidates some aspects of Newton's theory of light and colors, specifically as presented in his first optical paper in 1672. Analyzes Newton's main experiments intended to show that light is a mixture of rays with different refrangibilities. (SAH)

  2. Industrial processing of complex fluids: Formulation and modeling

    SciTech Connect

    Scovel, J.C.; Bleasdale, S.; Forest, G.M.; Bechtel, S.

    1997-08-01

    The production of many important commercial materials involves the evolution of a complex fluid through a cooling phase into a hardened product. Textile fibers, high-strength fibers (KEVLAR, VECTRAN), plastics, chopped-fiber compounds, and fiber-optic cable are such materials. Industry desires to replace experiments with on-line, real-time models of these processes. Solutions to the problems are not just a matter of technology transfer, but require a fundamental description and simulation of the processes. Goals of the project are to develop models that can be used to optimize macroscopic properties of the solid product, to identify sources of undesirable defects, and to seek boundary-temperature and flow-and-material controls to optimize desired properties.

  3. Management of complex immunogenetics information using an enhanced relational model.

    PubMed

    Barsalou, T; Sujansky, W; Herzenberg, L A; Wiederhold, G

    1991-10-01

    Flow cytometry has become a technique of paramount importance in the armamentarium of the scientist in such domains as immunogenetics. In the PENGUIN project, we are currently developing the architecture for an expert database system to facilitate the design of flow-cytometry experiments. This paper describes the core of this architecture--a methodology for managing complex biomedical information in an extended relational framework. More specifically, we exploit a semantic data model to enhance relational databases with structuring and manipulation tools that take more domain information into account and provide the user with an appropriate level of abstraction. We present specific applications of the structural model to database schema management, data retrieval and browsing, and integrity maintenance. PMID:1743006

  4. The Teaching-Upbringing Complex: Experience, Problems, Prospects.

    ERIC Educational Resources Information Center

    Vul'fov, B. Z.; And Others

    1990-01-01

    Describes the teaching-upbringing complex (UVK), a new type of Soviet school that attempts to deal with raising and educating children in an integrated manner. Stresses combining required subjects with students' special interests to encourage student achievement and teacher involvement. Concentrates on the development of self-expression and…

  5. Clinical complexity in medicine: A measurement model of task and patient complexity

    PubMed Central

    Islam, R.; Weir, C.; Fiol, G. Del

    2016-01-01

    Summary Background Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective The objective of this paper is to develop an integrated approach to understanding and measuring clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods Three clinical Infectious Disease teams were observed, audio-recorded and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial round of coding and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen’s kappa. Results The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
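
    The abstract reports inter-rater reliability via Cohen's kappa; for reference, a minimal computation (the rater labels below are made up):

```python
# Cohen's kappa for two raters, as used in the abstract to report
# inter-rater reliability of the complexity coding.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n     # observed agreement
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

a = ["task", "patient", "task", "task", "patient", "task"]
b = ["task", "patient", "patient", "task", "patient", "task"]
print(round(cohens_kappa(a, b), 3))  # 0.667 for these made-up labels
```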

  6. Graduate Social Work Education and Cognitive Complexity: Does Prior Experience Really Matter?

    ERIC Educational Resources Information Center

    Simmons, Chris

    2014-01-01

    This study examined the extent to which age, education, and practice experience among social work graduate students (N = 184) predicted cognitive complexity, an essential aspect of critical thinking. In the regression analysis, education accounted for more of the variance associated with cognitive complexity than age and practice experience. When…

  7. Multicomponent reactive transport modeling of uranium bioremediation field experiments

    SciTech Connect

    Fang, Yilin; Yabusaki, Steven B.; Morrison, Stan J.; Amonette, James E.; Long, Philip E.

    2009-10-15

    Biostimulation field experiments with acetate amendment are being performed at a former uranium mill tailings site in Rifle, Colorado, to investigate subsurface processes controlling in situ bioremediation of uranium-contaminated groundwater. An important part of the research is identifying and quantifying field-scale models of the principal terminal electron-accepting processes (TEAPs) during biostimulation and the consequent biogeochemical impacts to the subsurface receiving environment. Integrating abiotic chemistry with the microbially mediated TEAPs in the reaction network brings into play geochemical observations (e.g., pH, alkalinity, redox potential, major ions, and secondary minerals) that the reactive transport model must recognize. These additional constraints provide for a more systematic and mechanistic interpretation of the field behaviors during biostimulation. The reaction network specification developed for the 2002 biostimulation field experiment was successfully applied without additional calibration to the 2003 and 2007 field experiments. The robustness of the model specification is significant in that 1) the 2003 biostimulation field experiment was performed with 3 times higher acetate concentrations than the previous biostimulation in the same field plot (i.e., the 2002 experiment), and 2) the 2007 field experiment was performed in a new unperturbed plot on the same site. The biogeochemical reactive transport simulations accounted for four TEAPs, two distinct functional microbial populations, two pools of bioavailable Fe(III) minerals (iron oxides and phyllosilicate iron), uranium aqueous and surface complexation, mineral precipitation, and dissolution. The conceptual model for bioavailable iron reflects recent laboratory studies with sediments from the Old Rifle Uranium Mill Tailings Remedial Action (UMTRA) site that demonstrated that the bulk (~90%) of Fe(III) bioreduction is associated with the phyllosilicates rather than the iron oxides
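
    Field-scale TEAPs of this kind are typically written as dual-Monod rate laws; a minimal sketch for acetate oxidation coupled to Fe(III) reduction follows, with all parameter values as hypothetical placeholders rather than the calibrated Rifle values.

```python
# Dual-Monod rate law of the kind commonly used for field-scale TEAPs,
# here acetate oxidation coupled to Fe(III) reduction by iron reducers.
# All parameter values are hypothetical placeholders.

def feiii_reduction_rate(acetate, fe3, biomass,
                         mu_max=1e-5,   # max specific rate (mol/g/s)
                         k_ac=1e-4,     # half-saturation, acetate (mol/L)
                         k_fe3=5e-3):   # half-saturation, bioavailable Fe(III) (mol/g)
    """Rate of microbially mediated Fe(III) reduction."""
    return mu_max * biomass * (acetate / (k_ac + acetate)) * (fe3 / (k_fe3 + fe3))

print(feiii_reduction_rate(acetate=3e-3, fe3=1e-2, biomass=0.05))
```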

  8. IDMS: inert dark matter model with a complex singlet

    NASA Astrophysics Data System (ADS)

    Bonilla, Cesar; Sokolowska, Dorota; Darvishi, Neda; Diaz-Cruz, J. Lorenzo; Krawczyk, Maria

    2016-06-01

    We study an extension of the inert doublet model (IDM) that includes an extra complex singlet of scalar fields, which we call the IDMS. In this model there are three Higgs particles, among them an SM-like Higgs particle, and the lightest neutral scalar, from the inert sector, remains a viable dark matter (DM) candidate. We assume a non-zero complex vacuum expectation value for the singlet, so that the visible sector can introduce extra sources of CP violation. We construct the scalar potential of the IDMS, assuming an exact Z_2 symmetry, with the new singlet being Z_2-even, as well as a softly broken U(1) symmetry, which allows a reduced number of free parameters in the potential. In this paper we explore the foundations of the model, in particular the masses and interactions of scalar particles for a few benchmark scenarios. Constraints from collider physics, in particular from the Higgs signal observed at the Large Hadron Collider with M_h ≈ 125 GeV, as well as constraints from the DM experiments, such as relic density measurements and direct detection limits, are included in the analysis. We observe significant differences with respect to the IDM in relic density values from additional annihilation channels, interference and resonance effects due to the extended Higgs sector.
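
    The abstract fixes the symmetry structure of the potential but not its explicit form. A schematic potential with the stated ingredients (exact Z_2 with an inert doublet Φ2, a Z_2-even complex singlet S, and a softly broken U(1)) might read as follows; the precise soft-breaking terms and coefficient conventions are the authors' and are only guessed at here.

```latex
% Schematic IDMS scalar potential: Z2-symmetric inert-doublet part plus a
% Z2-even complex singlet S; the S^2 term softly breaks the global U(1)
% under which S is charged. The exact set of soft terms is an assumption.
V = \mu_1^2|\Phi_1|^2 + \mu_2^2|\Phi_2|^2
  + \lambda_1|\Phi_1|^4 + \lambda_2|\Phi_2|^4
  + \lambda_3|\Phi_1|^2|\Phi_2|^2
  + \lambda_4|\Phi_1^\dagger\Phi_2|^2
  + \tfrac{\lambda_5}{2}\left[(\Phi_1^\dagger\Phi_2)^2 + \text{h.c.}\right]
  + \mu_S^2|S|^2 + \lambda_S|S|^4
  + \lambda_{1S}|S|^2|\Phi_1|^2 + \lambda_{2S}|S|^2|\Phi_2|^2
  + \left[\mu_{sb}^2\,S^2 + \text{h.c.}\right]
```

    A complex vacuum expectation value for the singlet, as assumed in the abstract, then feeds extra CP violation into the visible sector through the singlet-doublet couplings.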

  9. The Effect of Complex Formation upon the Redox Potentials of Metallic Ions. Cyclic Voltammetry Experiments.

    ERIC Educational Resources Information Center

    Ibanez, Jorge G.; And Others

    1988-01-01

    Describes experiments in which students prepare in situ soluble complexes of metal ions with different ligands and observe and estimate the change in formal potential that the ion undergoes upon complexation. Discusses student formation and analysis of soluble complexes of two different metal ions with the same ligand. (CW)

  10. Complexation Effect on Redox Potential of Iron(III)-Iron(II) Couple: A Simple Potentiometric Experiment

    ERIC Educational Resources Information Center

    Rizvi, Masood Ahmad; Syed, Raashid Maqsood; Khan, Badruddin

    2011-01-01

    A titration curve with multiple inflection points results when a mixture of two or more reducing agents with sufficiently different reduction potentials are titrated. In this experiment iron(II) complexes are combined into a mixture of reducing agents and are oxidized to the corresponding iron(III) complexes. As all of the complexes involve the…
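
    Both this experiment and the cyclic voltammetry experiment above rest on the same thermodynamic cycle: the formal potential of a couple shifts with the ratio of the stability constants of its oxidized and reduced complexes,

```latex
% Shift of the formal potential of a metal couple upon complexation;
% beta_ox and beta_red are the overall stability constants of the
% oxidized and reduced complexes.
E^{\circ\prime}_{\text{complex}}
  = E^{\circ}_{\text{aqua}} - \frac{RT}{nF}\,\ln\frac{\beta_{\text{ox}}}{\beta_{\text{red}}}
```

    so a ligand that binds Fe(III) more strongly than Fe(II), such as cyanide, lowers the couple from about +0.77 V for Fe³⁺/Fe²⁺(aq) to about +0.36 V for hexacyanoferrate, while one favouring Fe(II), such as 1,10-phenanthroline, raises it to about +1.06 V.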

  11. Modeling the Propagation of Mobile Phone Virus under Complex Network

    PubMed Central

    Yang, Wei; Wei, Xi-liang; Guo, Hao; An, Gang; Guo, Lei

    2014-01-01

    A mobile phone virus is a rogue program written to propagate from one phone to another, which can take control of a mobile device by exploiting its vulnerabilities. In this paper, the propagation of mobile phone viruses is modeled to understand how particular factors can affect propagation and to design effective containment strategies to suppress such viruses. Two different propagation models of mobile phone viruses under complex networks are proposed in this paper. One is intended to describe the propagation of user-tricking viruses, and the other describes the propagation of vulnerability-exploiting viruses. Based on the traditional epidemic models, the characteristics of mobile phone viruses and the network topology structure are incorporated into our models. A detailed analysis is conducted to analyze the propagation models. Through analysis, the stable infection-free equilibrium point and the stability condition are derived. Finally, considering the network topology, the numerical and simulation experiments are carried out. Results indicate that both models are correct and suitable for describing the spread of the two different mobile phone viruses, respectively. PMID:25133209
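
    Both proposed models build on classical epidemic dynamics. As a baseline reference, the sketch below integrates the simple SIS dynamics that such models extend with network topology; the infection rate (beta) and recovery/patching rate (delta) are illustrative values.

```python
# Baseline SIS epidemic dynamics of the kind the paper's propagation
# models extend with mobile-network topology; beta (infection) and
# delta (recovery/patching) rates are illustrative.

def simulate_sis(beta=0.3, delta=0.1, i0=0.01, t_end=100.0, dt=0.01):
    i = i0
    for _ in range(int(t_end / dt)):              # forward-Euler integration
        i += dt * (beta * i * (1 - i) - delta * i)
    return i

# The endemic level approaches 1 - delta/beta when beta > delta
print(simulate_sis())           # ~0.667
print(simulate_sis(beta=0.05))  # below threshold: the infection dies out
```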

  12. Power Curve Modeling in Complex Terrain Using Statistical Models

    NASA Astrophysics Data System (ADS)

    Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.

    2014-12-01

    Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
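
    A minimal sketch of the modelling exercise with two of the statistical models named in the abstract, using scikit-learn; the data below are synthetic stand-ins for the Altamont Pass measurements, and the feature set (multi-height wind speeds plus a shear proxy) is only an assumed example.

```python
# Sketch: predicting turbine power from multi-height wind speeds with
# two of the statistical models named in the abstract. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
hub = rng.uniform(3, 15, n)    # hub-height wind speed (m/s)
low = rng.uniform(2, 13, n)    # lower-level wind speed (m/s)
high = rng.uniform(4, 17, n)   # upper-level wind speed (m/s)
shear = (high - low) / high    # crude shear proxy
X = np.column_stack([hub, low, high, shear])
power = np.clip(hub, 0, 12) ** 3 / 12**3 + 0.05 * shear + rng.normal(0, 0.02, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, power, random_state=0)
for model in (RandomForestRegressor(random_state=0), GaussianProcessRegressor()):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__, mean_squared_error(y_te, pred) ** 0.5)  # RMSE
```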

  13. Assessing the experience in complex hepatopancreatobiliary surgery among graduating chief residents: Is the operative experience enough?

    PubMed Central

    Sachs, Teviah E.; Ejaz, Aslam; Weiss, Matthew; Spolverato, Gaya; Ahuja, Nita; Makary, Martin A.; Wolfgang, Christopher L.; Hirose, Kenzo; Pawlik, Timothy M.

    2015-01-01

    Introduction Resident operative autonomy and case volume are associated with posttraining confidence and practice plans. Accreditation Council for Graduate Medical Education requirements for graduating general surgery residents are four liver and three pancreas cases. We sought to evaluate trends in resident experience and autonomy for complex hepatopancreatobiliary (HPB) surgery over time. Methods We queried the Accreditation Council for Graduate Medical Education General Surgery Case Log (2003–2012) for all cases performed by graduating chief residents (GCR) relating to liver, pancreas, and the biliary tract (HPB); simple cholecystectomy was excluded. Mean (±SD), median [10th–90th percentiles], and maximum case volumes were compared from 2003 to 2012 using R2 for all trends. Results A total of 252,977 complex HPB cases (36% liver, 43% pancreas, 21% biliary) were performed by 10,288 GCR during the 10-year period examined (mean = 24.6 per GCR). Of these, 57% were performed during the chief year, whereas 43% were performed as postgraduate year 1–4. Only 52% of liver cases were anatomic resections, whereas 71% of pancreas cases were major resections. The total number of cases increased from 22,516 (mean = 23.0) in 2003 to 27,191 (mean = 24.9) in 2012. During this same time period, the percentage of HPB cases that were performed during the chief year decreased by 7% (liver: 13%, pancreas 8%, biliary 4%). There was an increasing trend in the mean number of operations (mean ± SD) logged by GCR on the pancreas (9.1 ± 5.9 to 11.3 ± 4.3; R2 = .85) and liver (8.0 ± 5.9 to 9.4 ± 3.4; R2 = .91), whereas those for the biliary tract decreased (5.9 ± 2.5 to 3.8 ± 2.1; R2 = .96). Although the median number of cases [10th:90th percentile] increased slightly for both pancreas (7.0 [4.0:15] to 8.0 [4:20]) and liver (7.0 [4:13] to 8.0 [5:14]), the maximum number of cases performed by any given GCR remained stable for pancreas (51 to 53; R2 = .18), but increased for liver (38

  14. A Computer Simulated Experiment in Complex Order Kinetics

    ERIC Educational Resources Information Center

    Merrill, J. C.; And Others

    1975-01-01

    Describes a computer simulation experiment in which physical chemistry students can determine all of the kinetic parameters of a reaction, such as order of the reaction with respect to each reagent, forward and reverse rate constants for the overall reaction, and forward and reverse activation energies. (MLH)

  15. Projection- vs. selection-based model reduction of complex hydro-ecological models

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Giuliani, M.; Castelletti, A.; Alsahaf, A.

    2014-12-01

    Projection-based model reduction is one of the most popular approaches used for the identification of reduced-order models (emulators). It is based on the idea of sampling from the original model various values, or snapshots, of the state variables, and then using these snapshots in a projection scheme to find a lower-dimensional subspace that captures the majority of the variation of the original model. The model is then projected onto this subspace and solved, yielding a computationally efficient emulator. Yet, this approach may unnecessarily increase the complexity of the emulator, especially when only a few state variables of the original model are relevant with respect to the output of interest. This is the case of complex hydro-ecological models, which typically account for a variety of water quality processes. On the other hand, selection-based model reduction uses the information contained in the snapshots to select the state variables of the original model that are relevant with respect to the emulator's output, thus allowing for model reduction. This provides a better trade-off between fidelity and model complexity, since the irrelevant and redundant state variables are excluded from the model reduction process. In this work we address these issues by presenting an exhaustive experimental comparison between two popular projection- and selection-based methods, namely Proper Orthogonal Decomposition (POD) and Dynamic Emulation Modelling (DEMo). The comparison is performed on the reduction of DYRESM-CAEDYM, a 1D hydro-ecological model used to describe the in-reservoir water quality conditions of Tono Dam, an artificial reservoir located in western Japan. Experiments on two different output variables (i.e. chlorophyll-a concentration and release water temperature) show that DEMo allows obtaining the same fidelity as POD while reducing the number of state variables in the emulator.
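
    For reference, the projection-based half of the comparison can be sketched in a few lines: snapshot POD takes an SVD of sampled states and keeps the leading modes as the reduced basis. The snapshot matrix below is random; in the study it would hold DYRESM-CAEDYM state trajectories.

```python
# Sketch of snapshot-based POD: collect snapshots of the state, take an
# SVD, and keep the leading modes as the reduced basis.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.normal(size=(200, 50))      # n_state x n_snapshots (placeholder data)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99) + 1)  # modes capturing 99% of the variance
basis = U[:, :r]                            # reduced basis (n_state x r)

x = snapshots[:, 0]
x_reduced = basis.T @ x                     # project a full state onto the basis
x_approx = basis @ x_reduced                # lift back to the full space
print(r, np.linalg.norm(x - x_approx) / np.linalg.norm(x))
```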

  16. Data production models for the CDF experiment

    SciTech Connect

    Antos, J.; Babik, M.; Benjamin, D.; Cabrera, S.; Chan, A.W.; Chen, Y.C.; Coca, M.; Cooper, B.; Genser, K.; Hatakeyama, K.; Hou, S.; Hsieh, T.L.; Jayatilaka, B.; Kraan, A.C.; Lysak, R.; Mandrichenko, I.V.; Robson, A.; Siket, M.; Stelzer, B.; Syu, J.; Teng, P.K.; /Kosice, IEF /Duke U. /Taiwan, Inst. Phys. /University Coll. London /Fermilab /Rockefeller U. /Michigan U. /Pennsylvania U. /Glasgow U. /UCLA /Tsukuba U. /New Mexico U.

    2006-06-01

    The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.

  17. Model-scale sound propagation experiment

    NASA Technical Reports Server (NTRS)

    Willshire, William L., Jr.

    1988-01-01

    The results of a scale model propagation experiment to investigate grazing propagation above a finite impedance boundary are reported. In the experiment, a 20 x 25 ft ground plane was installed in an anechoic chamber. Propagation tests were performed over the plywood surface of the ground plane and with the ground plane covered with felt, styrofoam, and fiberboard. Tests were performed with discrete tones in the frequency range of 10 to 15 kHz. The acoustic source and microphones varied in height above the test surface from flush to 6 in. Microphones were located in a linear array up to 18 ft from the source. A preliminary experiment using the same ground plane, but only testing the plywood and felt surfaces, was performed. The results of this first experiment were encouraging, but data variability and repeatability were poor, particularly for the felt surface, making comparisons with theoretical predictions difficult. In the main experiment the sound source, microphones, microphone positioning, data acquisition, quality of the anechoic chamber, and environmental control of the anechoic chamber were improved. High-quality, repeatable acoustic data were measured in the main experiment for all four test surfaces. Comparisons with predictions are good, but limited by uncertainties in the impedance values of the test surfaces.

  18. Modeling Complex Workflow in Molecular Diagnostics

    PubMed Central

    Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan

    2010-01-01

    One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844

  19. APPLICATION OF SURFACE COMPLEXATION MODELS TO SOIL SYSTEMS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Chemical surface complexation models were developed to describe potentiometric titration and ion adsorption data on oxide minerals. These models provide molecular descriptions of adsorption using an equilibrium approach that defines surface species, chemical reactions, mass and charge balances and ...

  20. Design and modeling of small scale multiple fracturing experiments

    SciTech Connect

    Cuderman, J F

    1981-12-01

    Recent experiments at the Nevada Test Site (NTS) have demonstrated the existence of three distinct fracture regimes. Depending on the pressure rise time in a borehole, one can obtain hydraulic, multiple, or explosive fracturing behavior. The use of propellants rather than explosives in tamped boreholes permits tailoring of the pressure rise time over a wide range, since propellants having a wide range of burn rates are available. This technique of using the combustion gases from a full bore propellant charge to produce controlled borehole pressurization is termed High Energy Gas Fracturing (HEGF). Several series of HEGF, in 0.15 m and 0.2 m diameter boreholes at 12 m depths, have been completed in a tunnel complex at NTS where mineback permitted direct observation of the fracturing obtained. Because such large experiments are costly and time consuming, smaller scale experiments are desirable, provided results from small experiments can be used to predict fracture behavior in larger boreholes. In order to design small scale gas fracture experiments, the available data from previous HEGF experiments were carefully reviewed, analytical elastic wave modeling was initiated, and semi-empirical modeling was conducted which combined predictions for statically pressurized boreholes with experimental data. The results of these efforts include (1) the definition of what constitutes small scale experiments for emplacement in a tunnel complex at the Nevada Test Site, (2) prediction of average crack radius, in ash fall tuff, as a function of borehole size and energy input per unit length, (3) definition of multiple-hydraulic and multiple-explosive fracture boundaries as a function of borehole size and surface wave velocity, (4) semi-empirical criteria for estimating stress and acceleration, and (5) a proposal that multiple fracture orientations may be governed by in situ stresses.

  1. Analysis of Complex Intervention Effects in Time-Series Experiments.

    ERIC Educational Resources Information Center

    Bower, Cathleen

    An iterative least squares procedure for analyzing the effect of various kinds of intervention in time-series data is described. There are numerous applications of this design in economics, education, and psychology, although until recently, no appropriate analysis techniques had been developed to deal with the model adequately. This paper…
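
    The abstract is truncated, but the general idea of intervention analysis can be sketched with a much simpler estimator than the paper's iterative procedure: an ordinary least squares fit with a level-shift dummy at the intervention point. The example below uses synthetic data and is purely illustrative.

      # Minimal sketch of an intervention (interrupted time-series) analysis:
      # OLS with a level-shift dummy at the intervention time. Illustrative
      # only; the paper's iterative least squares procedure is richer.
      import numpy as np

      rng = np.random.default_rng(1)
      n, t0 = 100, 60                       # series length, intervention time
      t = np.arange(n)
      y = 2.0 + 0.03 * t + rng.normal(0, 0.5, n)
      y[t0:] += 1.5                         # true intervention effect (level shift)

      step = (t >= t0).astype(float)        # 0 before intervention, 1 after
      X = np.column_stack([np.ones(n), t, step])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print(f"estimated level shift: {beta[2]:.2f}")   # should be near 1.5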

  2. Modeling Hemispheric Detonation Experiments in 2-Dimensions

    SciTech Connect

    Howard, W M; Fried, L E; Vitello, P A; Druce, R L; Phillips, D; Lee, R; Mudge, S; Roeske, F

    2006-06-22

    Experiments have been performed with LX-17 (92.5% TATB and 7.5% Kel-F 800 binder) to study scaling of detonation waves using a dimensional scaling in a hemispherical divergent geometry. We model these experiments using an arbitrary Lagrange-Eulerian (ALE3D) hydrodynamics code, with reactive flow models based on the thermo-chemical code Cheetah. Cheetah provides a pressure-dependent kinetic rate law, along with an equation of state based on exponential-6 fluid potentials for individual detonation product species, calibrated to high pressures (~a few Mbar) and high temperatures (20,000 K). The parameters for these potentials are fit to a wide variety of experimental data, including shock, compression, and sound speed data. For the un-reacted high explosive equation of state we use a modified Murnaghan form. We model the detonator (including the flyer plate) and initiation system in detail. The detonator is composed of LX-16, for which we use a program burn model. Steinberg-Guinan models are used for the metal components of the detonator. The booster and high explosive are LX-10 and LX-17, respectively. For both the LX-10 and LX-17, we use a pressure dependent rate law, coupled with a chemical equilibrium equation of state based on Cheetah. For LX-17, the kinetic model includes carbon clustering on the nanometer size scale.
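
    The abstract names a modified Murnaghan form for the unreacted equation of state. As a point of reference, the standard Murnaghan relation is sketched below with placeholder constants; the calibrated, modified form used in the paper is not reproduced here.

      # Standard Murnaghan equation of state, P(V) = (B0/B0')[(V0/V)**B0' - 1],
      # as a sketch of the functional family mentioned above. The paper uses a
      # *modified* form with calibrated constants; values below are placeholders.
      def murnaghan_pressure(v, v0=1.0, b0=12.0, b0_prime=6.5):
          """Pressure (GPa) at volume v, given reference volume v0,
          bulk modulus b0 (GPa) and its pressure derivative b0_prime."""
          return (b0 / b0_prime) * ((v0 / v) ** b0_prime - 1.0)

      print(murnaghan_pressure(0.8))  # compression to 80% of reference volume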

  3. Dispersion Modeling in Complex Urban Systems

    EPA Science Inventory

    Models are used to represent real systems in an understandable way. They take many forms. A conceptual model explains the way a system works. In environmental studies, for example, a conceptual model may delineate all the factors and parameters for determining how a particle move...

  4. Specifying and Refining a Complex Measurement Model.

    ERIC Educational Resources Information Center

    Levy, Roy; Mislevy, Robert J.

    This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…

  5. Turbulence modeling for complex hypersonic flows

    NASA Technical Reports Server (NTRS)

    Huang, P. G.; Coakley, T. J.

    1993-01-01

    The paper presents results of calculations for a range of 2D turbulent hypersonic flows using two-equation models. The baseline models and the model corrections required for good hypersonic-flow predictions will be illustrated. Three experimental data sets were chosen for comparison. They are: (1) the hypersonic flare flows of Kussoy and Horstman, (2) a 2D hypersonic compression corner flow of Coleman and Stollery, and (3) the ogive-cylinder impinging shock-expansion flows of Kussoy and Horstman. Comparisons with the experimental data have shown that baseline models under-predict the extent of flow separation but over-predict the heat transfer rate near flow reattachment. Modifications to the models are described which remove the above-mentioned deficiencies. Although we have restricted the discussion only to the selected baseline models in this paper, the modifications proposed are universal and can in principle be transferred to any existing two-equation model formulation.

  6. Simulation model for the closed plant experiment facility of CEEF.

    PubMed

    Abe, Koichi; Ishikawa, Yoshio; Kibe, Seishiro; Nitta, Keiji

    2005-01-01

    The Closed Ecology Experiment Facilities (CEEF) is a testbed for Controlled Ecological Life Support Systems (CELSS) investigations. CEEF, including the physico-chemical material regenerative system, has been constructed for experiments on material circulation among the plants, breeding animals, and crew of CEEF. Because CEEF is a complex system, an appropriate schedule for its operation must be prepared in advance. The CEEF behavioral Prediction System (CPS), which will help to confirm the operation schedule, is under development. CPS will simulate CEEF's behavior using data from CEEF (conditions of equipment, quantity of materials in tanks, etc.) and an operation schedule drawn up daily by the operation team, before the schedule is carried out. The result of the simulation will show whether the operation schedule is appropriate or not. In order to realize CPS, the models in the simulation program installed in CPS must mirror the real facilities of CEEF. As a first step of development, a flexible algorithm for the simulation program was investigated. The next step was the development of a replicate simulation model of the material circulation system for the Closed Plant Experiment Facility (CPEF), which is a part of CEEF. All the parts of the real material circulation system for CPEF are connected together and work as a complex mechanism. In the simulation model, the system was separated into 38 units according to its operational segmentation. In order to develop each model for its corresponding unit, specifications for the model were fixed based on the specifications of the real part. These models were put into a simulation model for the system. PMID:16175692

  7. Simulation model for the closed plant experiment facility of CEEF

    NASA Astrophysics Data System (ADS)

    Abe, Koichi; Ishikawa, Yoshio; Kibe, Seishiro; Nitta, Keiji

    The Closed Ecology Experiment Facilities (CEEF) is a testbed for Controlled Ecological Life Support Systems (CELSS) investigations. CEEF, including the physico-chemical material regenerative system, has been constructed for experiments on material circulation among the plants, breeding animals, and crew of CEEF. Because CEEF is a complex system, an appropriate schedule for its operation must be prepared in advance. The CEEF behavioral Prediction System (CPS), which will help to confirm the operation schedule, is under development. CPS will simulate CEEF's behavior using data from CEEF (conditions of equipment, quantity of materials in tanks, etc.) and an operation schedule drawn up daily by the operation team, before the schedule is carried out. The result of the simulation will show whether the operation schedule is appropriate or not. In order to realize CPS, the models in the simulation program installed in CPS must mirror the real facilities of CEEF. As a first step of development, a flexible algorithm for the simulation program was investigated. The next step was the development of a replicate simulation model of the material circulation system for the Closed Plant Experiment Facility (CPEF), which is a part of CEEF. All the parts of the real material circulation system for CPEF are connected together and work as a complex mechanism. In the simulation model, the system was separated into 38 units according to its operational segmentation. In order to develop each model for its corresponding unit, specifications for the model were fixed based on the specifications of the real part. These models were put into a simulation model for the system.

  8. Design of Experiments, Model Calibration and Data Assimilation

    SciTech Connect

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of emulation, calibration and experiment design for computer experiments. Emulation refers to building a statistical surrogate from a carefully selected and limited set of model runs to predict unsampled outputs. The standard kriging approach to emulation of complex computer models is presented. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Markov chain Monte Carlo (MCMC) algorithms are often used to sample the calibrated parameter distribution. Several MCMC algorithms commonly employed in practice are presented, along with a popular diagnostic for evaluating chain behavior. Space-filling approaches to experiment design for selecting model runs to build effective emulators are discussed, including Latin Hypercube Design and extensions based on orthogonal array skeleton designs and imposed symmetry requirements. Optimization criteria that further enforce space-filling, possibly in projections of the input space, are mentioned. Designs to screen for important input variations are summarized and used for variable selection in a nuclear fuels performance application. This is followed by illustration of sequential experiment design strategies for optimization, global prediction, and rare event inference.
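
    As a minimal illustration of the calibration step, the following sketch implements a random-walk Metropolis sampler that constrains a single uncertain model input against synthetic observations. The toy model, data, and tuning constants are assumptions for illustration, not the presentation's nuclear-fuels application.

      # Minimal random-walk Metropolis sketch for the calibration step described
      # above: update an uncertain model input theta to be consistent with
      # observed data. Toy model and synthetic data only.
      import numpy as np

      rng = np.random.default_rng(2)
      y_obs = rng.normal(3.0, 0.5, size=20)      # synthetic observations
      model = lambda theta: theta                # trivial "computer model"

      def log_post(theta):
          # Gaussian likelihood (sigma = 0.5), flat prior on [0, 10]
          if not 0.0 <= theta <= 10.0:
              return -np.inf
          return -0.5 * np.sum((y_obs - model(theta)) ** 2) / 0.5**2

      theta, samples = 5.0, []
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.3)      # random-walk proposal
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop                       # accept
          samples.append(theta)

      print(np.mean(samples[1000:]))             # posterior mean, near 3.0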

  9. Experience from the ECORS program in regions of complex geology

    NASA Astrophysics Data System (ADS)

    Damotte, B.

    1993-04-01

    The French ECORS program was launched in 1983 by a cooperation agreement between universities and petroleum companies. Crustal surveys have tried to find explanations for the formation of geological features, such as rifts, mountain ranges or subsidence in sedimentary basins. Several seismic surveys were carried out, some across areas with complex geological structures. The seismic techniques and equipment used were those developed by petroleum geophysicists, adapted to the depth aimed at (30-50 km) and to various physical constraints encountered in the field. In France, ECORS has recorded 850 km of deep seismic lines onshore across plains and mountains, on various kinds of geological formations. Different variations of the seismic method (reflection, refraction, long-offset seismic) were used, often simultaneously. Multiple coverage profiling constitutes the essential part of this data acquisition. Vibrators and dynamite shots were employed with a spread generally 15 km long, but sometimes 100 km long. Some typical seismic examples show that obtaining crustal reflections essentially depends on two factors: (1) the type and structure of shallow formations, and (2) the sources used. Thus, when seismic energy is strongly absorbed across the first kilometers in shallow formations, or when these formations are highly structured, standard multiple-coverage profiling is not able to provide results beyond a few seconds. In this case, it is recommended to simultaneously carry out long-offset seismic in low multiple coverage. Other more methodological examples show: how the impact on the crust of a surface fault may be evaluated according to the seismic method implemented (VIBROSEIS 96-fold coverage or single dynamite shot); that vibrators make it possible to implement wide-angle seismic surveying with an offset 80 km long; how to implement the seismic reflection method on complex formations in high mountains. All data were processed using industrial seismic software.

  10. Background modeling for the GERDA experiment

    NASA Astrophysics Data System (ADS)

    Becerici-Schmidt, N.; Gerda Collaboration

    2013-08-01

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  11. Background modeling for the GERDA experiment

    SciTech Connect

    Becerici-Schmidt, N.; Collaboration: GERDA Collaboration

    2013-08-08

    The neutrinoless double beta (0νββ) decay experiment GERDA at the LNGS of INFN has started physics data taking in November 2011. This paper presents an analysis aimed at understanding and modeling the observed background energy spectrum, which plays an essential role in searches for a rare signal like 0νββ decay. A very promising preliminary model has been obtained, with the systematic uncertainties still under study. Important information can be deduced from the model such as the expected background and its decomposition in the signal region. According to the model the main background contributions around Qββ come from 214Bi, 228Th, 42K, 60Co and α emitting isotopes in the 226Ra decay chain, with a fraction depending on the assumed source positions.

  12. Impact polymorphs of quartz: experiments and modelling

    NASA Astrophysics Data System (ADS)

    Price, M. C.; Dutta, R.; Burchell, M. J.; Cole, M. J.

    2013-09-01

    We have used the light gas gun at the University of Kent to perform a series of impact experiments firing quartz projectiles onto metal, quartz and sapphire targets. The aim is to quantify the amount of any high pressure quartz polymorphs produced, and use these data to develop our hydrocode modelling to enable the prediction of the quantity of polymorphs produced during a planetary scale impact.

  13. Sequential Development of Interfering Metamorphic Core Complexes: Numerical Experiments and Comparison to the Cyclades, Greece

    NASA Astrophysics Data System (ADS)

    Tirel, C.; Gautier, P.; van Hinsbergen, D.; Wortel, R.

    2007-12-01

    The Cycladic extensional province (Greece) contains classical examples of metamorphic core complexes (MCCs), where exhumation was accommodated along multiple interfering and/or sequentially developed syn- and antithetic extensional detachment zones. Previous studies on the development of MCCs did not take into account the possible interference between multiple and closely spaced MCCs. In the present study, we have performed new lithosphere-scale experiments in which the deformation is not a priori localized, so as to explore the conditions under which several MCCs develop in a direction parallel to extension. In a narrow range of conditions, MCCs are closely spaced, interfere with each other, and develop in sequence. From a comparison between numerical results and geological observations, we find that the Cyclades metamorphic core complexes are in good agreement with the model in terms of Moho geometry and depth, kinematic and structural history, timing and duration of core complex formation, and metamorphic history. We infer that, for Cycladic-type MCCs to develop, an initial crustal thickness prior to the onset of post-orogenic extension between 40 and 44 km, a boundary velocity close to 2 cm/yr, and an initial thermal lithospheric thickness of about 60 km are required. The latter may be explained by significant heating due to delamination of subducting continental crust or vigorous small-scale thermal convection.

  14. Data assimilation and model evaluation experiment datasets

    NASA Technical Reports Server (NTRS)

    Lai, Chung-Cheng A.; Qian, Wen; Glenn, Scott M.

    1994-01-01

    The Institute for Naval Oceanography, in cooperation with Naval Research Laboratories and universities, executed the Data Assimilation and Model Evaluation Experiment (DAMEE) for the Gulf Stream region during fiscal years 1991-1993. Enormous effort has gone into the preparation of several high-quality and consistent datasets for model initialization and verification. This paper describes the preparation process, the temporal and spatial scopes, the contents, the structure, etc., of these datasets. The goal of DAMEE and the need for data in the four phases of the experiment are briefly stated. The preparation of DAMEE datasets consisted of a series of processes: (1) collection of observational data; (2) analysis and interpretation; (3) interpolation using the Optimum Thermal Interpolation System package; (4) quality control and re-analysis; and (5) data archiving and software documentation. The data products from these processes included a time series of 3D fields of temperature and salinity, 2D fields of surface dynamic height and mixed-layer depth, analysis of the Gulf Stream and rings system, and bathythermograph profiles. To date, these are the most detailed and high-quality data for mesoscale ocean modeling, data assimilation, and forecasting research. Feedback from ocean modeling groups who tested these data was incorporated into their refinement. Suggestions for DAMEE data usage include (1) ocean modeling and data assimilation studies, (2) diagnosis and theoretical studies, and (3) comparisons with locally detailed observations.

  15. Studying complex chemistries using PLASIMO's global model

    NASA Astrophysics Data System (ADS)

    Koelman, PMJ; Tadayon Mousavi, S.; Perillo, R.; Graef, WAAD; Mihailova, DB; van Dijk, J.

    2016-02-01

    The Plasimo simulation software is used to construct a global model of a CO2 plasma. A DBD plasma between two coaxial cylinders, driven by a triangular input power pulse, is considered. The plasma chemistry is studied during this power pulse and in the afterglow. The model consists of 71 species that interact in 3500 reactions. Preliminary results from the model are presented. The model has been validated by comparing its results with those presented in Kozák et al. (Plasma Sources Science and Technology 23(4), p. 045004, 2014). A good qualitative agreement has been reached; potential sources of remaining discrepancies are extensively discussed.
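
    A global (volume-averaged) model of the kind described reduces the chemistry to a system of rate equations. The sketch below integrates a two-reaction toy CO2 chemistry rather than the 71-species/3500-reaction set; the rate coefficients and densities are placeholders chosen only to make the example run.

      # Toy 0D ("global") chemistry sketch in the spirit of the model above:
      # CO2 + e -> CO + O + e (dissociation), CO + O + M -> CO2 + M (recombination).
      # Two reactions instead of 3500; all constants are placeholders.
      import numpy as np
      from scipy.integrate import solve_ivp

      k_diss, k_rec = 1e-15, 1e-32      # placeholder rate coefficients (cm^3/s, cm^6/s)
      n_e, n_M = 1e10, 2.5e19           # fixed electron and background densities (cm^-3)

      def rhs(t, n):
          n_co2, n_co, n_o = n
          r1 = k_diss * n_e * n_co2     # electron-impact dissociation
          r2 = k_rec * n_co * n_o * n_M # three-body recombination
          return [r2 - r1, r1 - r2, r1 - r2]

      sol = solve_ivp(rhs, (0.0, 1e-3), [2.5e19, 0.0, 0.0], method="LSODA")
      print(sol.y[:, -1])               # densities at the end of the pulse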

  16. Multiscale Computational Models of Complex Biological Systems

    PubMed Central

    Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.

    2014-01-01

    Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247

  17. Information, complexity and efficiency: The automobile model

    SciTech Connect

    Allenby, B. |

    1996-08-08

    The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.

  18. Information-driven modeling of protein-peptide complexes.

    PubMed

    Trellet, Mikael; Melquiond, Adrien S J; Bonvin, Alexandre M J J

    2015-01-01

    Despite their biological importance in many regulatory processes, protein-peptide recognition mechanisms are difficult to study experimentally at the structural level because of the inherent flexibility of peptides and the often transient interactions on which they rely. Complementary methods like biomolecular docking are therefore required. The prediction of the three-dimensional structure of protein-peptide complexes raises unique challenges for computational algorithms, as exemplified by the recent introduction of protein-peptide targets in the blind international experiment CAPRI (Critical Assessment of PRedicted Interactions). Conventional protein-protein docking approaches often struggle with the high flexibility of peptides, whose short sizes impede protocols and scoring functions developed for larger interfaces. On the other hand, protein-small ligand docking methods are unable to cope with the larger number of degrees of freedom in peptides compared to small molecules and the typically reduced available information to define the binding site. In this chapter, we describe a protocol to model protein-peptide complexes using the HADDOCK web server, working through a test case to illustrate every step. The flexibility challenge that peptides represent is dealt with by combining elements of conformational selection and induced fit molecular recognition theories. PMID:25555727

  19. Observation simulation experiments with regional prediction models

    NASA Technical Reports Server (NTRS)

    Diak, George; Perkey, Donald J.; Kalb, Michael; Robertson, Franklin R.; Jedlovec, Gary

    1990-01-01

    Research efforts in FY 1990 included studies employing regional scale numerical models as aids in evaluating potential contributions of specific satellite observing systems (current and future) to numerical prediction. One study involves Observing System Simulation Experiments (OSSEs) which mimic operational initialization/forecast cycles but incorporate simulated Advanced Microwave Sounding Unit (AMSU) radiances as input data. The objective of this and related studies is to anticipate the potential value of data from these satellite systems, and develop applications of remotely sensed data for the benefit of short range forecasts. Techniques are also being used that rely on numerical model-based synthetic satellite radiances to interpret the information content of various types of remotely sensed image and sounding products. With this approach, evolution of simulated channel radiance image features can be directly interpreted in terms of the atmospheric dynamical processes depicted by a model. Progress is being made in a study using the internal consistency of a regional prediction model to simplify the assessment of forced diabatic heating and moisture initialization in reducing model spinup times. Techniques for model initialization are being examined, with focus on implications for potential applications of remote microwave observations, including AMSU and Special Sensor Microwave Imager (SSM/I), in shortening model spinup time for regional prediction.

  20. Sensitivity Analysis in Complex Plasma Chemistry Models

    NASA Astrophysics Data System (ADS)

    Turner, Miles

    2015-09-01

    The purpose of a plasma chemistry model is prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large: a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure, the so-called Morris method, to identify sensitive rate constants. We investigate the topical example of the helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 "Biomedical Applications of Atmospheric Pressure Plasmas".
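
    A hand-rolled sketch of the Morris screening idea follows: one-at-a-time trajectories through the input space yield elementary effects per parameter, and the mean absolute effect (mu*) ranks their influence. The three-input toy model is a stand-in for the helium-oxygen chemistry.

      # Hand-rolled sketch of the Morris screening method named above.
      # Toy model; not the He/O2 chemistry itself.
      import numpy as np

      rng = np.random.default_rng(3)

      def model(x):
          # toy output: strongly sensitive to x0 and x1, barely to x2
          return 10 * x[0] + 5 * x[0] * x[1] + 0.1 * x[2]

      k, r, delta = 3, 20, 0.25          # inputs, trajectories, step size
      effects = [[] for _ in range(k)]
      for _ in range(r):
          x = rng.uniform(0, 1 - delta, size=k)
          for i in rng.permutation(k):   # perturb inputs one at a time
              x_step = x.copy()
              x_step[i] += delta
              effects[i].append((model(x_step) - model(x)) / delta)
              x = x_step                 # continue the trajectory

      mu_star = [np.mean(np.abs(e)) for e in effects]
      print(mu_star)                     # large values flag influential inputs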

  1. Experiences in evaluating regional air quality models

    NASA Astrophysics Data System (ADS)

    Liu, Mei-Kao; Greenfield, Stanley M.

    Any area of the world concerned with the health and welfare of its people and the viability of its ecological system must eventually address the question of the control of air pollution. This is true in developed countries as well as countries that are undergoing a considerable degree of industrialization. The control or limitation of the emissions of a pollutant can be very costly. To avoid ineffective or unnecessary control, the nature of the problem must be fully understood and the relationship between source emissions and ambient concentrations must be established. Mathematical models, while admittedly containing large uncertainties, can be used to examine alternatives of emission restrictions for achieving safe ambient concentrations. The focus of this paper is to summarize our experiences with modeling regional air quality in the United States and Western Europe. The following modeling experiences have been used: future SO2 and sulfate distributions and projected acidic deposition as related to coal development in the northern Great Plains in the U.S.; analysis of regional ozone and sulfate episodes in the northeastern U.S.; analysis of the regional ozone problem in western Europe in support of alternative emission control strategies; analysis of distributions of toxic chemicals in the Southeast Ohio River Valley in support of the design of a monitoring network for human exposure. Collectively, these prior modeling analyses can be invaluable in examining similar problems in other parts of the world as well, such as the Pacific rim in Asia.

  2. Modeling Power Systems as Complex Adaptive Systems

    SciTech Connect

    Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.

    2004-12-30

    Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that the price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores the state-of-the-art physical analogs for understanding the behavior of some econophysical systems and deriving stable and robust control strategies for using them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.

  3. Uniform surface complexation approaches to radionuclide sorption modeling

    SciTech Connect

    Turner, D.R.; Pabalan, R.T.; Muller, P.; Bertetti, F.P.

    1995-12-01

    Simplified surface complexation models, based on a uniform set of model parameters, have been developed to address complex radionuclide sorption behavior. Existing data have been examined and interpreted using numerical nonlinear least-squares optimization techniques to determine the necessary binding constants. Simplified modeling approaches have generally proven successful at simulating and predicting radionuclide sorption on (hydr)oxides and aluminosilicates over a wide range of physical and chemical conditions.
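
    The nonlinear least-squares step can be illustrated with a deliberately simplified example: fitting a binding constant from synthetic sorption data using a Langmuir-type isotherm. A genuine surface complexation fit solves coupled mass-action and charge-balance equations; this sketch only shows the optimization mechanics.

      # Sketch of the nonlinear least-squares step described above: fitting a
      # binding constant K and capacity q_max from synthetic sorption data.
      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(c, K, q_max):
          """Sorbed amount as a function of solution concentration c."""
          return q_max * K * c / (1.0 + K * c)

      c = np.linspace(0.01, 5.0, 25)                  # solution concentrations
      rng = np.random.default_rng(4)
      q_obs = langmuir(c, K=2.0, q_max=1.2) + rng.normal(0, 0.02, c.size)

      (K_fit, q_fit), _ = curve_fit(langmuir, c, q_obs, p0=[1.0, 1.0])
      print(f"K = {K_fit:.2f}, q_max = {q_fit:.2f}")  # near the true 2.0 and 1.2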

  4. Integrated Modeling of Complex Optomechanical Systems

    NASA Astrophysics Data System (ADS)

    Andersen, Torben; Enmark, Anita

    2011-09-01

    Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons: first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top-ranked specialists found their way to Kiruna and we believe that these proceedings will prove valuable during much future work.

  5. Ballistic Response of Fabrics: Model and Experiments

    NASA Astrophysics Data System (ADS)

    Orphal, Dennis L.; Walker Anderson, James D., Jr.

    2001-06-01

    Walker (1999) developed an analytical model for the dynamic response of fabrics to ballistic impact. From this model the force, F, applied to the projectile by the fabric is derived to be F = 8/9 (ET*)h^3/R^2, where E is the Young's modulus of the fabric, T* is the "effective thickness" of the fabric, equal to the ratio of the areal density of the fabric to the fiber density, h is the displacement of the fabric on the axis of impact, and R is the radius of the fabric deformation or "bulge". Ballistic tests against Zylon^TM fabric have been performed to measure h and R as a function of time. The results of these experiments are presented and analyzed in the context of the Walker model. Walker (1999), Proceedings of the 18th International Symposium on Ballistics, pp. 1231.
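
    Because the abstract states the force law explicitly, it can be transcribed directly; the sketch below evaluates F = 8/9 (ET*)h^3/R^2 for illustrative inputs (the material values are placeholders, not measured Zylon properties).

      # Direct transcription of the fabric force model quoted above,
      # F = (8/9) * E * T_star * h**3 / R**2, with illustrative inputs.
      def fabric_force(E, T_star, h, R):
          """Force on the projectile: E = fabric Young's modulus (Pa),
          T_star = areal density / fiber density (m), h = on-axis
          displacement (m), R = bulge radius (m)."""
          return (8.0 / 9.0) * E * T_star * h**3 / R**2

      # placeholder numbers, roughly in the range of a high-modulus fabric
      print(fabric_force(E=270e9, T_star=1e-4, h=5e-3, R=5e-2))  # newtons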

  6. A tracer experiment study to evaluate the CALPUFF real time application in a near-field complex terrain setting

    NASA Astrophysics Data System (ADS)

    cui, Huiling; Yao, Rentai; Xu, Xiangjun; Xin, Cuntian; Yang, jinming

    2011-12-01

    CALPUFF is an atmospheric source-receptor model recommended by the US Environmental Protection Agency (EPA) for use on a case-by-case basis in complex terrain and wind conditions. Because the bulk of CALPUFF validation has focused on long-range, or short-range but long-term, dispersion, the reliability of the model for predicting short-term emissions in the near field, especially over complex terrain, cannot be gauged, and this situation is important for emergency releases. To validate CALPUFF's application under such conditions, we carried out a tracer experiment in a near-field complex terrain setting and used the CALPUFF atmospheric dispersion model to simulate the tracer experiment under real conditions. The comparison of predicted and measured centroid trajectories shows that the model can correctly predict the centroid trajectory and shape of the tracer cloud, and the results also indicate that sufficient observed weather data alone can produce a good near-field wind field. The concentration comparison at each arc shows that the model underestimates the horizontal extent of the tracer puff and cannot reflect the irregular features shown in the measurements. The global analysis gives a FOEX of -25.91%, FA2 of 27.06%, and FA5 of 61.41%. The simulations show that CALPUFF can simulate the position and direction of the tracer cloud in near-field complex terrain but underestimates the measurements, especially the peak concentrations.

  7. Process modelling for Space Station experiments

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Rosenberger, Franz; Nadarajah, Arunan; Ouazzani, Jalil; Amiroudine, Sakir

    1990-01-01

    Examined here is the sensitivity of a variety of space experiments to residual accelerations. In all the cases discussed the sensitivity is related to the dynamic response of a fluid. In some cases the sensitivity can be defined by the magnitude of the response of the velocity field. This response may involve motion of the fluid associated with internal density gradients, or the motion of a free liquid surface. For fluids with internal density gradients, the type of acceleration to which the experiment is sensitive will depend on whether buoyancy driven convection must be small in comparison to other types of fluid motion, or fluid motion must be suppressed or eliminated. In the latter case, the experiments are sensitive to steady and low frequency accelerations. For experiments such as the directional solidification of melts with two or more components, determination of the velocity response alone is insufficient to assess the sensitivity. The effect of the velocity on the composition and temperature field must be considered, particularly in the vicinity of the melt-crystal interface. As far as the response to transient disturbances is concerned, the sensitivity is determined by both the magnitude and frequency of the acceleration and the characteristic momentum and solute diffusion times. The microgravity environment, a numerical analysis of low gravity tolerance of the Bridgman-Stockbarger technique, and modeling crystal growth by physical vapor transport in closed ampoules are discussed.

  8. Spectroscopic studies of molybdenum complexes as models for nitrogenase

    SciTech Connect

    Walker, T.P.

    1981-05-01

    Because biological nitrogen fixation requires Mo, there is an interest in inorganic Mo complexes which mimic the reactions of nitrogen-fixing enzymes. Two such complexes are the dimer Mo2O4(cysteine)2^2- and trans-Mo(N2)2(dppe)2 (dppe = 1,2-bis(diphenylphosphino)ethane). The 1H and 13C NMR of solutions of Mo2O4(cys)2^2- are described. It is shown that in aqueous solution the cysteine ligands assume at least three distinct configurations. A step-wise dissociation of the cysteine ligand is proposed to explain the data. The Extended X-ray Absorption Fine Structure (EXAFS) of trans-Mo(N2)2(dppe)2 is described and compared to the EXAFS of MoH4(dppe)2. The spectra are fitted to amplitude and phase parameters developed at Bell Laboratories. On the basis of this analysis, one can determine (1) that the dinitrogen complex contains nitrogen and the hydride complex does not and (2) the correct Mo-N distance. This is significant because the Mo in both complexes is coordinated by four P atoms which dominate the EXAFS. A similar sort of interference is present in nitrogenase due to S coordination of the Mo in the enzyme. This model experiment indicates that, given adequate signal to noise ratios, the presence or absence of dinitrogen coordination to Mo in the enzyme may be determined by EXAFS using existing data analysis techniques. A new reaction between Mo2O4(cys)2^2- and acetylene is described to the extent it is presently understood. A strong EPR signal is observed, suggesting the production of stable Mo(V) monomers. EXAFS studies support this suggestion. The Mo K-edge is described. The edge data suggest Mo(VI) is also produced in the reaction. Ultraviolet spectra suggest that cysteine is released in the course of the reaction.

  9. Formation rates of complex organics in UV irradiated CH3OH-rich ices. I. Experiments

    NASA Astrophysics Data System (ADS)

    Öberg, K. I.; Garrod, R. T.; van Dishoeck, E. F.; Linnartz, H.

    2009-09-01

    Context: Gas-phase complex organic molecules are commonly detected in the warm inner regions of protostellar envelopes, so-called hot cores. Recent models show that photochemistry in ices followed by desorption may explain the observed abundances. There is, however, a general lack of quantitative data on UV-induced complex chemistry in ices. Aims: This study aims to experimentally quantify the UV-induced production rates of complex organics in CH3OH-rich ices under a variety of astrophysically relevant conditions. Methods: The ices are irradiated with a broad-band UV hydrogen microwave-discharge lamp under ultra-high vacuum conditions, at 20-70 K, and then heated to 200 K. The reaction products are identified by reflection-absorption infrared spectroscopy (RAIRS) and temperature programmed desorption (TPD), through comparison with RAIRS and TPD curves of pure complex species, and through the observed effects of isotopic substitution and enhancement of specific functional groups, such as CH3, in the ice. Results: Complex organics are readily formed in all experiments, both during irradiation and during the slow warm-up of the ices after the UV lamp is turned off. The relative abundances of photoproducts depend on the UV fluence, the ice temperature, and whether pure CH3OH ice or CH3OH:CH4/CO ice mixtures are used. C2H6, CH3CHO, CH3CH2OH, CH3OCH3, HCOOCH3, HOCH2CHO and (CH2OH)2 are all detected in at least one experiment. Varying the ice thickness and the UV flux does not affect the chemistry. The derived product-formation yields and their dependences on different experimental parameters, such as the initial ice composition, are used to estimate the CH3OH photodissociation branching ratios in ice and the relative diffusion barriers of the formed radicals. At 20 K, the pure CH3OH photodesorption yield is 2.1(±1.0)×10^-3 per incident UV photon, the photo-destruction cross section 2.6(±0.9)×10^-18 cm^2. Conclusions: Photochemistry in CH3OH ices is efficient enough to

  10. Smoothed Particle Hydrodynamics simulation and laboratory-scale experiments of complex flow dynamics in unsaturated fractures

    NASA Astrophysics Data System (ADS)

    Kordilla, J.; Tartakovsky, A. M.; Pan, W.; Shigorina, E.; Noffz, T.; Geyer, T.

    2015-12-01

    Unsaturated flow in fractured porous media exhibits highly complex flow dynamics and a wide range of intermittent flow processes. Especially in wide aperture fractures, flow processes may be dominated by gravitational instead of capillary forces, leading to a deviation from the classical volume-effective approaches (Richards equation, van Genuchten-type relationships). The existence of various flow modes such as droplets, rivulets, turbulent and adsorbed films is well known; however, their spatial and temporal distribution within fracture networks is still an open question, partially due to the lack of appropriate modeling tools. With our work we want to gain a deeper understanding of the underlying flow and transport dynamics in unsaturated fractured media in order to support the development of more refined upscaled methods applicable on catchment scales. We present fracture-scale flow simulations obtained with a parallelized Smoothed Particle Hydrodynamics (SPH) model. The model allows us to simulate free-surface flow dynamics, including the effect of surface tension, for a wide range of wetting conditions in smooth and rough fractures. Because surface tension is generated efficiently via particle-particle interaction forces, the dynamic wetting of surfaces can readily be obtained. We validated the model against empirical and semi-analytical solutions and conducted laboratory-scale percolation experiments of unsaturated flow through synthetic fracture systems. The setup allows us to obtain travel time distributions and identify characteristic flow mode distributions on wide aperture fractures intercepted by horizontal fracture elements.
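
    Two of the SPH ingredients mentioned above can be sketched compactly: the standard cubic-spline smoothing kernel, and a pairwise particle-particle force of the short-range-repulsive/longer-range-attractive kind that gives SPH fluids an effective surface tension. The constants and the specific force profile are illustrative assumptions, not the parallelized model's actual implementation.

      # Minimal SPH ingredients sketch: cubic-spline kernel plus a toy pairwise
      # cohesion force of the kind used to generate surface tension from
      # particle-particle interactions. Illustrative constants only.
      import numpy as np

      def cubic_spline_kernel(r, h):
          """Standard SPH cubic-spline kernel W(r, h) in 2D."""
          q = r / h
          sigma = 10.0 / (7.0 * np.pi * h**2)          # 2D normalization
          if q < 1.0:
              return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
          elif q < 2.0:
              return sigma * 0.25 * (2.0 - q) ** 3
          return 0.0

      def cohesion_force(r_vec, r_c, s=1.0):
          """Toy pair force: repulsive at short range, attractive at longer
          range, zero beyond the cutoff r_c. Forces of this flavor give SPH
          fluids an effective surface tension."""
          r = np.linalg.norm(r_vec)
          if r == 0.0 or r >= r_c:
              return np.zeros_like(r_vec)
          magnitude = s * np.cos(3.0 * np.pi * r / (2.0 * r_c))
          return magnitude * r_vec / r   # + = repulsion, - = attraction

      print(cubic_spline_kernel(0.5, h=1.0),
            cohesion_force(np.array([0.5, 0.0]), r_c=2.0))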

  11. The sigma model on complex projective superspaces

    NASA Astrophysics Data System (ADS)

    Candu, Constantin; Mitev, Vladimir; Quella, Thomas; Saleur, Hubert; Schomerus, Volker

    2010-02-01

    The sigma model on projective superspaces CP^(S-1|S) gives rise to a continuous family of interacting 2D conformal field theories which are parametrized by the curvature radius R and the theta angle θ. Our main goal is to determine the spectrum of the model, non-perturbatively, as a function of both parameters. We succeed in doing so for all open boundary conditions preserving the full global symmetry of the model. In string theory parlance, these correspond to volume-filling branes that are equipped with a monopole line bundle and connection. The paper consists of two parts. In the first part, we approach the problem within the continuum formulation. Combining combinatorial arguments with perturbative studies and some simple free field calculations, we determine a closed formula for the partition function of the theory. This is then tested numerically in the second part. There we extend the proposal of [arXiv:0908.1081] for a spin chain regularization of the CP^(S-1|S) model with open boundary conditions and use it to determine the spectrum at the conformal fixed point. The numerical results are in remarkable agreement with the continuum analysis.

  12. A simple model clarifies the complicated relationships of complex networks

    NASA Astrophysics Data System (ADS)

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-08-01

    Real-world networks such as the Internet and the WWW share many common traits. Until now, hundreds of models have been proposed to characterize these traits and understand the networks. Because different models used very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation.

  13. A simple model clarifies the complicated relationships of complex networks

    PubMed Central

    Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi

    2014-01-01

    Real-world networks such as the Internet and the WWW share many common traits. Until now, hundreds of models have been proposed to characterize these traits and understand the networks. Because different models used very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method to model complex networks from the viewpoint of optimisation. PMID:25160506

  14. Improving phylogenetic regression under complex evolutionary models.

    PubMed

    Mazel, Florent; Davies, T Jonathan; Georges, Damien; Lavergne, Sébastien; Thuiller, Wilfried; Peres-Neto, Pedro R

    2016-02-01

    Phylogenetic Generalized Least Square (PGLS) is the tool of choice among phylogenetic comparative methods to measure the correlation between species features such as morphological and life-history traits or niche characteristics. In its usual form, it assumes that the residual variation follows a homogenous model of evolution across the branches of the phylogenetic tree. Since a homogenous model of evolution is unlikely to be realistic in nature, we explored the robustness of the phylogenetic regression when this assumption is violated. We did so by simulating a set of traits under various heterogeneous models of evolution, and evaluating the statistical performance (type I error [the percentage of tests based on samples that incorrectly rejected a true null hypothesis] and power [the percentage of tests that correctly rejected a false null hypothesis]) of classical phylogenetic regression. We found that PGLS has good power but unacceptable type I error rates. This finding is important since this method has been increasingly used in comparative analyses over the last decade. To address this issue, we propose a simple solution based on transforming the underlying variance-covariance matrix to adjust for model heterogeneity within PGLS. We suggest that heterogeneous rates of evolution might be particularly prevalent in large phylogenetic trees, while most current approaches assume a homogenous rate of evolution. Our analysis demonstrates that overlooking rate heterogeneity can result in inflated type I errors, thus misleading comparative analyses. We show that it is possible to correct for this bias even when the underlying model of evolution is not known a priori. PMID:27145604
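
    The mechanics of PGLS, and of the proposed fix, reduce to generalized least squares with a (possibly transformed) phylogenetic variance-covariance matrix C. The sketch below computes the GLS estimator on synthetic data with a stand-in C rather than a covariance matrix derived from a real tree.

      # Sketch of the PGLS estimator: generalized least squares with a
      # phylogenetic covariance matrix C among species residuals,
      #   beta_hat = (X' C^-1 X)^-1 X' C^-1 y.
      # C here is an arbitrary positive-definite stand-in, not a real tree.
      import numpy as np

      rng = np.random.default_rng(5)
      n = 30                                   # species
      A = rng.standard_normal((n, n))
      C = A @ A.T + n * np.eye(n)              # stand-in phylogenetic covariance

      x = rng.standard_normal(n)
      X = np.column_stack([np.ones(n), x])     # intercept + one trait
      L = np.linalg.cholesky(C)
      y = X @ np.array([1.0, 0.5]) + 0.1 * (L @ rng.standard_normal(n))

      Ci = np.linalg.inv(C)
      beta_hat = np.linalg.solve(X.T @ Ci @ X, X.T @ Ci @ y)
      print(beta_hat)                          # roughly recovers [1.0, 0.5]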

  15. A musculoskeletal model of the elbow joint complex

    NASA Technical Reports Server (NTRS)

    Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.

    1993-01-01

    This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.

  16. Complex Perceptions of Identity: The Experiences of Student Combat Veterans in Community College

    ERIC Educational Resources Information Center

    Hammond, Shane Patrick

    2016-01-01

    This qualitative study illustrates how complex perceptions of identity influence the community college experience for student veterans who have been in combat, creating barriers to their overall persistence. The collective experiences of student combat veterans at two community colleges in northwestern Massachusetts are presented, and a Combat…

  17. Communicating about Loss: Experiences of Older Australian Adults with Cerebral Palsy and Complex Communication Needs

    ERIC Educational Resources Information Center

    Dark, Leigha; Balandin, Susan; Clemson, Lindy

    2011-01-01

    Loss and grief is a universal human experience, yet little is known about how older adults with a lifelong disability, such as cerebral palsy, and complex communication needs (CCN) experience loss and manage the grieving process. In-depth interviews were conducted with 20 Australian participants with cerebral palsy and CCN to determine the types…

  18. Experience with the CMS Event Data Model

    SciTech Connect

    Elmer, P.; Hegner, B.; Sexton-Kennedy, L.; /Fermilab

    2009-06-01

    The re-engineered CMS EDM was presented at CHEP in 2006. Since that time we have gained a lot of operational experience with the chosen model. We will present some of our findings, and attempt to evaluate how well it is meeting its goals. We will discuss some of the new features that have been added since 2006 as well as some of the problems that have been addressed. Also discussed is the level of adoption throughout CMS, which spans the trigger farm up to the final physics analysis. Future plans, in particular dealing with schema evolution and scaling, will be discussed briefly.

  19. Experiments for foam model development and validation.

    SciTech Connect

    Bourdon, Christopher Jay; Cote, Raymond O.; Moffat, Harry K.; Grillet, Anne Mary; Mahoney, James F.; Russick, Edward Mark; Adolf, Douglas Brian; Rao, Rekha Ranjana; Thompson, Kyle Richard; Kraynik, Andrew Michael; Castaneda, Jaime N.; Brotherton, Christopher M.; Mondy, Lisa Ann; Gorby, Allen D.

    2008-09-01

    A series of experiments has been performed to allow observation of the foaming process and the collection of temperature, rise rate, and microstructural data. Microfocus video is used in conjunction with particle image velocimetry (PIV) to elucidate the boundary condition at the wall. Rheology, reaction kinetics and density measurements complement the flow visualization. X-ray computed tomography (CT) is used to examine the cured foams to determine density gradients. These data provide input to a continuum level finite element model of the blowing process.

  20. Multiaxial behavior of foams - Experiments and modeling

    NASA Astrophysics Data System (ADS)

    Maheo, Laurent; Guérard, Sandra; Rio, Gérard; Donnard, Adrien; Viot, Philippe

    2015-09-01

    The behavior of cellular materials is strongly related to the pressure level inside the material. It is therefore important to use experiments that can highlight (i) the pressure-volume behavior and (ii) the shear-shape behavior at different pressure levels. The authors propose to use hydrostatic compression, shear, and combined pressure-shear tests to determine cellular material behavior. Finite element modeling must take these behavioral specificities into account. The authors chose a behavior law with hyperelastic, viscous, and hysteretic contributions. Specific developments have been performed on the hyperelastic contribution by separating the spherical and deviatoric parts, to take into account the volume-change and shape-change characteristics of cellular materials.

  1. Optimal Complexity of Nonlinear Rainfall-Runoff Models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J.; van de Giesen, N.; Fenicia, F.

    2008-12-01

    Identification of an appropriate level of model complexity to accurately translate rainfall into runoff remains an unresolved issue. The model has to be complex enough to generate accurate predictions, but not so complex that its parameters cannot be reliably estimated from the data. Earlier work with linear models (Jakeman and Hornberger, 1993) concluded that a model with 4 to 5 parameters is sufficient. However, more recent results with a nonlinear model (Vrugt et al., 2006) suggest that 10 or more parameters may be identified from daily rainfall-runoff time series. The goal here is to systematically investigate the optimal complexity of nonlinear rainfall-runoff models, yielding accurate models with identifiable parameters. Our methodology consists of four steps: (i) a priori specification of a family of model structures from which to pick an optimal one, (ii) parameter optimization of each model structure to estimate empirical or calibration error, (iii) estimation of parameter uncertainty of each calibrated model structure, and (iv) estimation of prediction error of each calibrated model structure. For the first step we formulate a flexible model structure that allows us to systematically vary the complexity with which physical processes are simulated. The second and third steps are achieved using a recently developed Markov chain Monte Carlo algorithm (DREAM), which minimizes calibration error, yielding optimal parameter values and their underlying posterior probability density function. Finally, we compare several methods for estimating the prediction error of each model structure, including statistical methods based on information criteria and split-sample calibration-validation. Estimates of parameter uncertainty and prediction error are then used to identify optimal complexity for rainfall-runoff modeling, using data from dry and wet MOPEX catchments as case studies.
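
    One of the statistical methods mentioned, the information-criterion route, trades calibration error against parameter count explicitly. A minimal sketch (the sums of squared errors and parameter counts below are invented, and this is not the DREAM algorithm itself):

        import numpy as np

        def aic(n, k, sse):
            """Akaike information criterion under a Gaussian error model:
            AIC = n * ln(SSE / n) + 2k, penalizing the parameter count k."""
            return n * np.log(sse / n) + 2 * k

        n = 365  # one year of daily runoff observations (hypothetical)
        candidates = {"4-parameter": 4.10,   # hypothetical post-calibration SSE
                      "7-parameter": 3.20,
                      "12-parameter": 3.15}
        for name, sse in candidates.items():
            k = int(name.split("-")[0])
            print(f"{name}: AIC = {aic(n, k, sse):.1f}")

    With these invented numbers the 7-parameter structure attains the lowest AIC: the 12-parameter model buys almost no error reduction for its extra degrees of freedom.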

  2. Blueprints for Complex Learning: The 4C/ID-Model.

    ERIC Educational Resources Information Center

    van Merrienboer, Jeroen J. G.; Clark, Richard E.; de Croock, Marcel B. M.

    2002-01-01

    Describes the four-component instructional design system (4C/ID-model) developed for the design of training programs for complex skills. Discusses the structure of training blueprints for complex learning and associated instructional methods, focusing on learning tasks, supportive information, just-in-time information, and part-task practice.…

  3. Classrooms as Complex Adaptive Systems: A Relational Model

    ERIC Educational Resources Information Center

    Burns, Anne; Knox, John S.

    2011-01-01

    In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…

  4. Phytoavailability of thallium - A model soil experiment

    NASA Astrophysics Data System (ADS)

    Vanek, Ales; Mihaljevic, Martin; Galuskova, Ivana; Komarek, Michael

    2013-04-01

    The study deals with the environmental stability of Tl-modified phases (ferrihydrite, goethite, birnessite, calcite and illite) and phytoavailability of Tl in synthetically prepared soils used in a model vegetation experiment. The obtained data clearly demonstrate a strong relationship between the mineralogical position of Tl in the model soil and its uptake by the plant (Sinapis alba L.). The maximum rate of Tl uptake was observed for plants grown on soil containing Tl-modified illite. In contrast, soil enriched in Ksat-birnessite had the lowest potential for Tl release and phytoaccumulation. Root-induced dissolution of synthetic calcite and ferrihydrite in the rhizosphere followed by Tl mobilization was detected. Highly crystalline goethite was more stable in the rhizosphere, compared to ferrihydrite, leading to reduced biological uptake of Tl. Based on the results, the mineralogical aspect must be taken into account prior to general environmental recommendations in areas affected by Tl.

  5. Prequential Analysis of Complex Data with Adaptive Model Reselection†

    PubMed Central

    Clarke, Jennifer; Clarke, Bertrand

    2010-01-01

    In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data-generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade-off between model list bias and model list variability in cases where the data are very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e., the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias–variance trade-off in statistical modeling. PMID:20617104
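
    A minimal sketch of the convex-combination idea on a synthetic sequence (this is not the published ACAP algorithm; the inverse-squared-error weight update and both component predictors are assumptions made for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        y = np.cumsum(rng.normal(size=200))   # synthetic response sequence
        w, eps = 0.5, 1e-9                    # initial weight on predictor A
        err_a = err_b = 0.0
        for t in range(1, y.size):
            pa = y[t - 1]                     # "model average" A: persistence
            pb = y[:t].mean()                 # "model average" B: running mean
            yhat = w * pa + (1 - w) * pb      # convex combination
            err_a += (y[t] - pa) ** 2         # track each component's error
            err_b += (y[t] - pb) ** 2
            w = (1 / (err_a + eps)) / (1 / (err_a + eps) + 1 / (err_b + eps))
        print(f"final weight on predictor A: {w:.3f}")

    On a random-walk sequence the persistence predictor dominates, so the weight drifts toward A; the adaptive re-choice of component models at each step is the feature the abstract emphasizes.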

  6. Size and complexity in model financial systems.

    PubMed

    Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M

    2012-11-01

    The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases. PMID:23091020
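
    A stripped-down sketch of one of the three contagion channels (default propagation via counterparty credit risk) on a random interbank network; bank counts, exposures, and capital buffers are invented, and the liquidity-hoarding, asset-price, and confidence mechanisms are omitted:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50
        capital = rng.uniform(0.5, 2.0, n)          # hypothetical capital buffers
        exposure = rng.uniform(0.0, 2.0, (n, n))    # interbank exposure matrix
        exposure[rng.random((n, n)) > 0.1] = 0.0    # keep ~10% of links
        np.fill_diagonal(exposure, 0.0)

        failed = np.zeros(n, dtype=bool)
        failed[:5] = True                           # idiosyncratic first failures
        changed = True
        while changed:
            # Each surviving bank writes off its exposures to failed banks.
            losses = exposure[:, failed].sum(axis=1)
            newly = (~failed) & (losses > capital)
            changed = bool(newly.any())
            failed |= newly
        print(f"banks failed after the cascade: {failed.sum()} of {n}")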

  8. Micro Wire-Drawing: Experiments And Modelling

    SciTech Connect

    Berti, G. A.; Monti, M.; Bietresato, M.; D'Angelo, L.

    2007-05-17

    In this paper, the authors propose to adopt micro wire-drawing as a key process for investigating models of micro forming. The reason for this choice is that the process can be considered quasi-stationary, with tribological conditions at the interface between the material and the die assumed constant during the whole deformation. Two different materials have been investigated: (i) a low-carbon steel and (ii) a nonferrous metal (copper). The micro hardness and tensile tests performed on each drawn wire show a thin hardened layer (more evident than in macro wires) on the external surface of the wire, with hardening decreasing rapidly from the surface layer to the center. For the copper wire this effect is reduced, and a traditional material constitutive model seems adequate to predict the experiments. For the low-carbon steel a modified constitutive material model has been proposed and implemented in an FE code, giving better agreement with the experiments.

  9. Comparative Assessment of Complex Stabilities of Radiocopper Chelating Agents by a Combination of Complex Challenge and in vivo Experiments.

    PubMed

    Litau, Shanna; Seibold, Uwe; Vall-Sagarra, Alicia; Fricker, Gert; Wängler, Björn; Wängler, Carmen

    2015-07-01

    For (64)Cu radiolabeling of biomolecules to be used as in vivo positron emission tomography (PET) imaging agents, various chelators are commonly applied. It has not yet been determined which of the most potent chelators--NODA-GA ((1,4,7-triazacyclononane-4,7-diyl)diacetic acid-1-glutaric acid), CB-TE2A (2,2'-(1,4,8,11-tetraazabicyclo[6.6.2]hexadecane-4,11-diyl)diacetic acid), or CB-TE1A-GA (1,4,8,11-tetraazabicyclo[6.6.2]hexadecane-4,11-diyl-8-acetic acid-1-glutaric acid)--forms the most stable complexes resulting in PET images of highest quality. We determined the (64)Cu complex stabilities for these three chelators by a combination of complex challenge and an in vivo approach. For this purpose, bioconjugates of the chelating agents with the gastrin-releasing peptide receptor (GRPR)-affine peptide PESIN and an integrin αvβ3-affine c(RGDfC) tetramer were synthesized and radiolabeled with (64)Cu in excellent yields and specific activities. The (64)Cu-labeled biomolecules were evaluated for their complex stabilities in vitro by conducting a challenge experiment with the respective other chelators as challengers. The in vivo stabilities of the complexes were also determined, showing the highest stability for the (64)Cu-CB-TE1A-GA complex in both experimental setups. Therefore, CB-TE1A-GA is the most appropriate chelating agent for *Cu-labeled radiotracers and in vivo imaging applications. PMID:26011290

  10. STATegra EMS: an Experiment Management System for complex next-generation omics experiments

    PubMed Central

    2014-01-01

    High-throughput sequencing assays are now routinely used to study different aspects of genome organization. As decreasing costs and widespread availability of sequencing enable more laboratories to use sequencing assays in their research projects, the number of samples and replicates in these experiments can quickly grow to several dozens of samples and thus require standardized annotation, storage and management of preprocessing steps. As a part of the STATegra project, we have developed an Experiment Management System (EMS) for high-throughput omics data that supports different types of sequencing-based assays such as RNA-seq, ChIP-seq, Methyl-seq, etc., as well as proteomics and metabolomics data. The STATegra EMS provides metadata annotation of experimental design, samples and processing pipelines, as well as storage of different types of data files, from raw data to ready-to-use measurements. The system has been developed to provide research laboratories with a freely available, integrated system that offers a simple and effective way for experiment annotation and tracking of analysis procedures. PMID:25033091

  11. The Use of Behavior Models for Predicting Complex Operations

    NASA Technical Reports Server (NTRS)

    Gore, Brian F.

    2010-01-01

    Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. The National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities, ranging from airflow, flight-path, aircraft, and scheduling models to human performance models (HPMs) and bioinformatics models, among a host of other M&S capabilities used for predicting whether proposed designs will meet specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural/task models. The challenge to model comprehensibility is heightened as the number of integrated models and the requisite fidelity of the procedural sets increase. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model, and plans for future model refinements will be presented.

  12. Full-Scale Cookoff Model Validation Experiments

    SciTech Connect

    McClelland, M A; Rattanapote, M K; Heimdahl, E R; Erikson, W E; Curran, P O; Atwood, A I

    2003-11-25

    This paper presents the experimental results of the third and final phase of a cookoff model validation effort. In this phase of the work, two generic Heavy Wall Penetrators (HWP) were tested in two heating orientations. Temperature and strain gage data were collected over the entire test period. Predictions for time and temperature of reaction were made prior to release of the live data. Predictions were comparable to the measured values and were highly dependent on the established boundary conditions. Both HWP tests failed at a weld located near the aft closure of the device. More than 90 percent of unreacted explosive was recovered in the end heated experiment and less than 30 percent recovered in the side heated test.

  13. Modeling PBX 9501 overdriven release experiments

    SciTech Connect

    Tang, P.K.

    1997-11-01

    A high explosive (HE) performs work through the expansion of its detonation products. Along with the propagation of the detonation wave, the equation of state (EOS) of the products determines the HE performance in an engineering system. The authors show the failure of the standard Jones-Wilkins-Lee (JWL) equation of state in modeling the overdriven release experiments of PBX 9501. The deficiency can be traced back to the inability of the same EOS to match the shock pressure and the sound speed on the Hugoniot in the hydrodynamic regime above the Chapman-Jouguet pressure. After adding correction terms to the principal isentrope of the standard JWL EOS, the authors are able to remedy this shortcoming, and the simulation was successful.
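
    For reference, the standard JWL products EOS the authors modify has the form P(V, E) = A (1 - ω/(R1 V)) exp(-R1 V) + B (1 - ω/(R2 V)) exp(-R2 V) + ωE/V, with V the relative volume and E the internal energy per unit initial volume. A sketch of its evaluation (the coefficients below are placeholders for illustration, not calibrated PBX 9501 values):

        import math

        def jwl_pressure(V, E, A, B, R1, R2, omega):
            """Standard JWL equation of state for detonation products.
            V: relative volume; E: internal energy per unit initial volume."""
            return (A * (1 - omega / (R1 * V)) * math.exp(-R1 * V)
                    + B * (1 - omega / (R2 * V)) * math.exp(-R2 * V)
                    + omega * E / V)

        # Placeholder coefficients (pressure units GPa), illustration only.
        print(jwl_pressure(V=0.7, E=8.5, A=850.0, B=18.0,
                           R1=4.6, R2=1.3, omega=0.38))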

  14. Nanofluid Drop Evaporation: Experiment, Theory, and Modeling

    NASA Astrophysics Data System (ADS)

    Gerken, William James

    Nanofluids, stable colloidal suspensions of nanoparticles in a base fluid, have potential applications in the heat transfer, combustion and propulsion, manufacturing, and medical fields. Experiments were conducted to determine the evaporation rate of room temperature, millimeter-sized pendant drops of ethanol laden with varying amounts (0-3% by weight) of 40-60 nm aluminum nanoparticles (nAl). Time-resolved high-resolution drop images were collected for the determination of early-time evaporation rate (D²/D₀² > 0.75), shown to exhibit D-square law behavior, and surface tension. Results show an asymptotic decrease in pendant drop evaporation rate with increasing nAl loading. The evaporation rate decreases by approximately 15% at around 1% to 3% nAl loading relative to the evaporation rate of pure ethanol. Surface tension was observed to be unaffected by nAl loading up to 3% by weight. A model was developed to describe the evaporation of the nanofluid pendant drops based on D-square law analysis for the gas domain and a description of the reduction in liquid fraction available for evaporation due to nanoparticle agglomerate packing near the evaporating drop surface. Model predictions are in relatively good agreement with experiment, within a few percent of measured nanofluid pendant drop evaporation rate. The evaporation of pinned nanofluid sessile drops was also considered via modeling. It was found that the same mechanism for nanofluid evaporation rate reduction used to explain pendant drops could be used for sessile drops. That mechanism is a reduction in evaporation rate due to a reduction in available ethanol for evaporation at the drop surface caused by the packing of nanoparticle agglomerates near the drop surface. Comparisons of the present modeling predictions with sessile drop evaporation rate measurements reported for nAl/ethanol nanofluids by Sefiane and Bennacer [11] are in fairly good agreement. Portions of this abstract previously appeared as: W. J
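
    The D-square law invoked above says the squared diameter decays linearly in time, D²(t) = D₀² - K t, so the evaporation rate is the fitted slope K. A minimal sketch of that fit (synthetic numbers standing in for the drop-image measurements):

        import numpy as np

        t = np.linspace(0.0, 40.0, 20)            # s, hypothetical frame times
        D0, K_true = 1.2e-3, 2.0e-8               # m, m^2/s (invented values)
        D2 = D0**2 - K_true * t                   # synthetic D^2 record
        D2 += np.random.default_rng(2).normal(0.0, 1e-9, t.size)  # noise

        slope, intercept = np.polyfit(t, D2, 1)   # linear fit of D^2 vs t
        print(f"fitted K = {-slope:.3e} m^2/s, "
              f"D0 = {np.sqrt(intercept) * 1e3:.2f} mm")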

  15. Geometric modeling of subcellular structures, organelles, and multiprotein complexes

    PubMed Central

    Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei

    2013-01-01

    Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multi-protein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian-triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and curvature estimation. Methods for volumetric meshing have also been presented. Because the technological development in computer science and mathematics has led to multiple choices at each stage of the geometric modeling, we discuss the rationales in the design and selection of various algorithms. Analytical models are designed to test the computational accuracy and convergence of proposed algorithms. Finally, we select a set of six cryo-electron microscopy data representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and explore their capability of geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes. PMID:23212797

  16. Between complexity of modelling and modelling of complexity: An essay on econophysics

    NASA Astrophysics Data System (ADS)

    Schinckus, C.

    2013-09-01

    Econophysics is an emerging field dealing with complex systems and emergent properties. A deeper analysis of themes studied by econophysicists shows that research conducted in this field can be decomposed into two different computational approaches: “statistical econophysics” and “agent-based econophysics”. This methodological scission complicates the definition of the complexity used in econophysics. Therefore, this article aims to clarify what kind of emergences and complexities we can find in econophysics in order to better understand, on one hand, the current scientific modes of reasoning this new field provides; and on the other hand, the future methodological evolution of the field.

  17. Using fMRI to Test Models of Complex Cognition

    ERIC Educational Resources Information Center

    Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam

    2008-01-01

    This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…

  18. Tips on Creating Complex Geometry Using Solid Modeling Software

    ERIC Educational Resources Information Center

    Gow, George

    2008-01-01

    Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…

  19. Network model of bilateral power markets based on complex networks

    NASA Astrophysics Data System (ADS)

    Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li

    2014-06-01

    The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that captures its characteristics is urgently needed. Such a model has been lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks can provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, a price-advantage mechanism is a precondition. Unlike general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is somewhat long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random market-member failures while it is fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
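
    The reported topology metrics are straightforward to reproduce on any surrogate transaction network; a sketch using networkx (a scale-free Barabási-Albert graph is assumed here in place of the paper's own BPT network generator):

        import networkx as nx

        G = nx.barabasi_albert_graph(n=500, m=2, seed=0)  # scale-free surrogate
        print("average clustering:", nx.average_clustering(G))
        print("average path length:", nx.average_shortest_path_length(G))
        top = sorted((d for _, d in G.degree()), reverse=True)[:5]
        print("five largest degrees:", top)  # heavy tail hints at a power law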

  20. Evapotranspiration model of different complexity for multiple land cover types

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A comparison between half-hourly and daily measured and computed evapotranspiration (ET) using three models of different complexity, namely the Priestley-Taylor (P-T), reference Penman-Monteith (P-M), and Common Land Model (CLM) was conducted using three AmeriFlux sites under different land cover an...

  1. Zebrafish as an emerging model for studying complex brain disorders

    PubMed Central

    Kalueff, Allan V.; Stewart, Adam Michael; Gerlai, Robert

    2014-01-01

    The zebrafish (Danio rerio) is rapidly becoming a popular model organism in pharmacogenetics and neuropharmacology. Both larval and adult zebrafish are currently used to increase our understanding of brain function, dysfunction, and their genetic and pharmacological modulation. Here we review the developing utility of zebrafish in the analysis of complex brain disorders (including, for example, depression, autism, psychoses, drug abuse and cognitive disorders), also covering zebrafish applications towards the goal of modeling major human neuropsychiatric and drug-induced syndromes. We argue that zebrafish models of complex brain disorders and drug-induced conditions have become a rapidly emerging critical field in translational neuropharmacology research. PMID:24412421

  2. Simulating complex intracellular processes using object-oriented computational modelling.

    PubMed

    Johnson, Colin G; Goldman, Jacki P; Gullick, William J

    2004-11-01

    The aim of this paper is to give an overview of computer modelling and simulation in cellular biology, in particular as applied to complex biochemical processes within the cell. This is illustrated by the use of the techniques of object-oriented modelling, where the computer is used to construct abstractions of objects in the domain being modelled, and these objects then interact within the computer to simulate the system and allow emergent properties to be observed. The paper also discusses the role of computer simulation in understanding complexity in biological systems, and the kinds of information which can be obtained about biology via simulation. PMID:15302205
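
    A toy illustration of the object-oriented style described, in which domain objects interact and aggregate behaviour emerges from many local events; the ligand/receptor pair and all rate numbers are made up for the sketch:

        import random

        class Receptor:
            """Cell-surface receptor that can bind a ligand (toy model)."""
            def __init__(self, affinity):
                self.affinity = affinity
                self.bound = False

            def encounter(self, ligand_conc):
                # Binding probability rises with concentration and affinity.
                if not self.bound and random.random() < self.affinity * ligand_conc:
                    self.bound = True

        random.seed(0)
        receptors = [Receptor(affinity=0.05) for _ in range(1000)]
        for step in range(50):                    # repeated ligand exposure
            for r in receptors:
                r.encounter(ligand_conc=0.2)
        print("fraction bound:", sum(r.bound for r in receptors) / len(receptors))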

  3. MatOFF: A Tool For Analyzing Behaviorally-Complex Neurophysiological Experiments

    PubMed Central

    Genovesio, Aldo; Mitz, Andrew R.

    2007-01-01

    The simple operant conditioning originally used in behavioral neurophysiology 30 years ago has given way to complex and sophisticated behavioral paradigms, so much so that early general-purpose programs for analyzing neurophysiological data are ill-suited for complex experiments. The trend has been to develop custom software for each class of experiment, but custom software can have serious drawbacks. We describe here a general-purpose software tool for behavioral and electrophysiological studies, called MatOFF, that is especially suited for processing neurophysiological data gathered during the execution of complex behaviors. Written in the MATLAB programming language, MatOFF solves the problem of handling complex analysis requirements in a unique and powerful way. While other neurophysiological programs are either a loose collection of tools or append MATLAB as a post-processing step, MatOFF is an integrated environment that supports MATLAB scripting within the event search engine, safely isolated in a programming sandbox. The results from scripting are stored separately, but in parallel with the raw data, and thus available to all subsequent MatOFF analysis and display processing. An example from a recently published experiment shows how all the features of MatOFF work together to analyze complex experiments and mine neurophysiological data in efficient ways. PMID:17604115

  4. Characterization of Complex Systems Using the Design of Experiments Approach: Transient Protein Expression in Tobacco as a Case Study

    PubMed Central

    Buyel, Johannes Felix; Fischer, Rainer

    2014-01-01

    Plants provide multiple benefits for the production of biopharmaceuticals including low costs, scalability, and safety. Transient expression offers the additional advantage of short development and production times, but expression levels can vary significantly between batches thus giving rise to regulatory concerns in the context of good manufacturing practice. We used a design of experiments (DoE) approach to determine the impact of major factors such as regulatory elements in the expression construct, plant growth and development parameters, and the incubation conditions during expression, on the variability of expression between batches. We tested plants expressing a model anti-HIV monoclonal antibody (2G12) and a fluorescent marker protein (DsRed). We discuss the rationale for selecting certain properties of the model and identify its potential limitations. The general approach can easily be transferred to other problems because the principles of the model are broadly applicable: knowledge-based parameter selection, complexity reduction by splitting the initial problem into smaller modules, software-guided setup of optimal experiment combinations and step-wise design augmentation. Therefore, the methodology is not only useful for characterizing protein expression in plants but also for the investigation of other complex systems lacking a mechanistic description. The predictive equations describing the interconnectivity between parameters can be used to establish mechanistic models for other complex systems. PMID:24514765
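
    The software-guided setup of experiment combinations mentioned above starts, in the simplest case, from a full factorial enumeration that DoE software then reduces to an optimal fraction; a minimal sketch with invented factors and levels:

        from itertools import product

        factors = {                          # hypothetical factors and levels
            "promoter": ["35S", "double-35S"],
            "leaf_age": ["young", "old"],
            "incubation_temp_C": [22, 25, 28],
        }
        runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
        print(f"{len(runs)} runs in the full factorial design")
        for run in runs[:3]:                 # preview the first few runs
            print(run)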

  5. Multi-scale modelling for HEDP experiments on Orion

    NASA Astrophysics Data System (ADS)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high-energy long-pulse lasers with high-intensity short pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and by fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high-intensity lasers. We describe the approach taken at AWE: integrating a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.

  6. Ringing load models verified against experiments

    SciTech Connect

    Krokstad, J.R.; Stansberg, C.T.

    1995-12-31

    What is believed to be the main reason for discrepancies between measured and simulated loads in previous studies has been assessed. The focus has been on the balance between second- and third-order load components in relation to what is called the "fat body" load correction. It is important to understand that the use of Morison strip theory in combination with second-order wave theory gives rise to second- as well as third-order components in the horizontal force. A proper balance between second- and third-order components in the horizontal force is regarded as the most central requirement for a sufficiently accurate ringing load model in irregular seas. It is also verified that simulated second-order components are largely overpredicted in both regular and irregular seas. Nonslender diffraction effects are important to incorporate in the FNV formulation in order to reduce the simulated second-order component and to match experiments more closely. A sufficiently accurate ringing simulation model using simplified methods is shown to be within close reach. Some further development and experimental verification must, however, be performed in order to take non-slender effects into account.

  7. Numerical modeling of Deep Impact experiment

    NASA Astrophysics Data System (ADS)

    Sultanov, V. G.; Kim, V. V.; Lomonosov, I. V.; Shutov, A. V.; Fortov, V. E.

    2007-06-01

    The Deep Impact active space experiment was performed [1,2] to study a hypervelocity collision of a metal impactor with the comet 9P/Tempel 1. The modeling of impact on solid or porous ice made it possible to conclude: the form and size of the crater depend strongly on the density of the comet material; the copper impactor does not melt and remains in the solid state; the temperature of ejecta varies from 5000 K for solid ice to 15000 K for porous ice. The impact on moist water-saturated sand demonstrated different results. In this case, the copper impactor practically does not penetrate the comet surface; it melts, is destroyed, and ricochets. In the case of moist porous sand the produced crater is stretched in the direction of impact. The analysis of modeling results indicates the presence of volatile, easily vaporized chemical compounds in the cometary surface. The hypothesis that the cometary surface consists only of ice does not agree with experimental and computational data on the forming and spreading of impact ejecta. [1] http://deepimpact.jpl.nasa.gov/home/index.html [2] M. F. A'Hearn et al, Deep Impact: Excavating Comet Tempel 1 // Science, 2005, v.310, pp. 258-264

  8. Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.

    PubMed

    Taha, Mohamed; Khan, Imran; Coutinho, João A P

    2016-04-01

    With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against the oxidative damage implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid, the primary ligand, and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2±0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined at our experimental conditions and compared with those predicted using the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). Spectroscopic UV-visible and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions. PMID:26827296

  9. Multiscale Model for the Assembly Kinetics of Protein Complexes.

    PubMed

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2016-02-01

    The assembly of proteins into high-order complexes is a general mechanism for these biomolecules to implement their versatile functions in cells. Natural evolution has developed various assembling pathways for specific protein complexes to maintain their stability and proper activities. Previous studies have provided numerous examples of the misassembly of protein complexes leading to severe biological consequences. Although the research focusing on protein complexes has started to move beyond the static representation of quaternary structures to the dynamic aspect of their assembly, the current understanding of the assembly mechanism of protein complexes is still largely limited. To tackle this problem, we developed a new multiscale modeling framework. This framework combines a lower-resolution rigid-body-based simulation with a higher-resolution Cα-based simulation method so that protein complexes can be assembled with both structural details and computational efficiency. We applied this model to a homotrimer and a heterotetramer as simple test systems. Consistent with experimental observations, our simulations indicated very different kinetics between protein oligomerization and dimerization. The formation of protein oligomers is a multistep process that is much slower than dimerization but thermodynamically more stable. Moreover, we showed that even the same protein quaternary structure can have very diverse assembly pathways under different binding constants between subunits, which is important for regulating the functions of protein complexes. Finally, we revealed that the binding between subunits in a complex can be synergistically strengthened during assembly without considering allosteric regulation or conformational changes. Therefore, our model provides a useful tool to understand the general principles of protein complex assembly. PMID:26738810
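
    The dimerization-versus-oligomerization contrast the authors report shows up even in a mass-action ODE toy model of stepwise trimer assembly; the rate constants below are invented, and this is not the paper's multiscale rigid-body/Cα scheme:

        import numpy as np
        from scipy.integrate import solve_ivp

        def trimer_kinetics(t, y, k1, k2):
            """Stepwise assembly M + M -> D, then D + M -> T (mass action)."""
            m, d, tr = y
            return [-2 * k1 * m * m - k2 * d * m,   # monomer consumption
                    k1 * m * m - k2 * d * m,        # dimer balance
                    k2 * d * m]                     # trimer production

        sol = solve_ivp(trimer_kinetics, (0.0, 100.0), [1.0, 0.0, 0.0],
                        args=(0.05, 0.05))
        m, d, tr = sol.y[:, -1]
        print(f"t=100: monomer {m:.3f}, dimer {d:.3f}, trimer {tr:.3f}")

    The trimer concentration rises with a lag behind the dimer, mirroring the slower, multistep kinetics of oligomer formation noted in the abstract.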

  10. Pedigree models for complex human traits involving the mitochondrial genome.

    PubMed Central

    Schork, N J; Guo, S W

    1993-01-01

    Recent biochemical and molecular-genetic discoveries concerning variations in human mtDNA have suggested a role for mtDNA mutations in a number of human traits and disorders. Although the importance of these discoveries cannot be emphasized enough, the complex natures of mitochondrial biogenesis, mutant mtDNA phenotype expression, and the maternal inheritance pattern exhibited by mtDNA transmission make it difficult to develop models that can be used routinely in pedigree analyses to quantify and test hypotheses about the role of mtDNA in the expression of a trait. In the present paper, we describe complexities inherent in mitochondrial biogenesis and genetic transmission and show how these complexities can be incorporated into appropriate mathematical models. We offer a variety of likelihood-based models which account for the complexities discussed. The derivation of our models is meant to stimulate the construction of statistical tests for putative mtDNA contribution to a trait. Results of simulation studies which make use of the proposed models are described. The results of the simulation studies suggest that, although pedigree models of mtDNA effects can be reliable, success in mapping chromosomal determinants of a trait does not preclude the possibility that mtDNA determinants exists for the trait as well. Shortcomings inherent in the proposed models are described in an effort to expose areas in need of additional research. PMID:8250048

  11. Pedigree models for complex human traits involving the mitochondrial genome

    SciTech Connect

    Schork, N.J.; Guo, S.W.

    1993-12-01

    Recent biochemical and molecular-genetic discoveries concerning variations in human mtDNA have suggested a role for mtDNA mutations in a number of human traits and disorders. Although the importance of these discoveries cannot be emphasized enough, the complex natures of mitochondrial biogenesis, mutant mtDNA phenotype expression, and the maternal inheritance pattern exhibited by mtDNA transmission make it difficult to develop models that can be used routinely in pedigree analyses to quantify and test hypotheses about the role of mtDNA in the expression of a trait. In the present paper, the authors describe complexities inherent in mitochondrial biogenesis and genetic transmission and show how these complexities can be incorporated into appropriate mathematical models. The authors offer a variety of likelihood-based models which account for the complexities discussed. The derivation of the models is meant to stimulate the construction of statistical tests for putative mtDNA contribution to a trait. Results of simulation studies which make use of the proposed models are described. The results of the simulation studies suggest that, although pedigree models of mtDNA effects can be reliable, success in mapping chromosomal determinants of a trait does not preclude the possibility that mtDNA determinants exist for the trait as well. Shortcomings inherent in the proposed models are described in an effort to expose areas in need of additional research. 58 refs., 5 figs., 2 tabs.

  12. Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling

    NASA Technical Reports Server (NTRS)

    Mog, Robert A.

    1997-01-01

    Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited, prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.

  13. Submarine sand volcanos: experiments and numerical modelling

    NASA Astrophysics Data System (ADS)

    Philippe, P.; Ngoma, J.; Delenne, J.

    2012-12-01

    Fluid overpressure at the bottom of a soil layer may generate fracturing along preferential paths in a cohesive material. The case of sandy soils is rather different: a significant internal flow is allowed within the material and can induce hydro-mechanical instabilities, the most common example of which is fluidization. Many works have been devoted to fluidization, but very few have addressed the initiation and development of a fluidized zone inside a granular bed prior to entire fluidization of the medium. In this contribution, we report experimental results and numerical simulations on a model system of immersed sand volcanoes generated by a localized upward spring of liquid, injected at constant flow rate at the bottom of a granular layer. Such a localized state of fluidization is relevant for some industrial processes (spouted beds, maintenance of navigable waterways, ...) and for several geological issues (kimberlite volcano conduits, fluid venting, oil recovery in sandy soils, ...). More precisely, what is presented here is a comparison between experiments, carried out by direct visualization throughout the medium, and numerical simulations based on DEM modelling of the grains coupled to a resolution of the Navier-Stokes equations in the liquid phase (LBM). There is very good agreement between the experimental phenomenology and the simulation results. When the flow rate is increased, three regimes are successively observed: static bed, fluidized cavity that does not extend to the top of the layer, and finally fluidization over the entire height of the layer, which creates a fluidized chimney. A very strong hysteretic effect is present, with an extended range of stability for fluidized cavities when the flow rate is decreased back. This can be interpreted in terms of force chains and arches. The influences of grain diameter, layer height, and injection width are studied and interpreted using a model previously developed by Zoueshtiagh [1]. Finally, growing rate of the fluidized zone and

  14. Theoretical Simulations and Ultrafast Pump-probe Spectroscopy Experiments in Pigment-protein Photosynthetic Complexes

    SciTech Connect

    Buck, D.R.

    2000-09-12

    Theoretical simulations and ultrafast pump-probe laser spectroscopy experiments were used to study photosynthetic pigment-protein complexes and antennae found in green sulfur bacteria such as Prosthecochloris aestuarii, Chloroflexus aurantiacus, and Chlorobium tepidum. The work focused on understanding structure-function relationships in energy transfer processes in these complexes through experiments, and on modeling those data to test our theoretical assumptions with calculations. Theoretical exciton calculations on tubular pigment aggregates yield electronic absorption spectra that are superimpositions of linear J-aggregate spectra. The electronic spectroscopy of BChl c/d/e antennae in light-harvesting chlorosomes from Chloroflexus aurantiacus differs considerably from J-aggregate spectra. Strong symmetry breaking is needed if we hope to simulate the absorption spectra of the BChl c antenna. The theory for simulating absorption difference spectra in strongly coupled photosynthetic antennae is described, first for a relatively simple heterodimer, then for the general N-pigment system. The theory is applied to the Fenna-Matthews-Olson (FMO) BChl a protein trimers from Prosthecochloris aestuarii and then compared with experimental low-temperature absorption difference spectra of FMO trimers from Chlorobium tepidum. Circular dichroism spectra of the FMO trimer are unusually sensitive to diagonal energy disorder. Substantial differences occur between CD spectra in exciton simulations performed with and without realistic inhomogeneous distribution functions for the input pigment diagonal energies. Anisotropic absorption difference spectroscopy measurements are less consistent with 21-pigment trimer simulations than with 7-pigment monomer simulations, which assume that the laser-prepared states are localized within a subunit of the trimer. Experimental anisotropies from real samples likely arise from statistical averaging over states with diagonal energies shifted by

  15. Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach

    EPA Science Inventory

    Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...

  16. Complex groundwater flow systems as traveling agent models

    PubMed Central

    Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis

    2014-01-01

    Analyzing field data from pumping tests, we show that, as with many other natural phenomena, groundwater flow exhibits complex dynamics described by a 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Moreover, we show that this implies that non-Kolmogorovian probability is needed for its study, and we provide a set of new partial differential equations for groundwater flow. PMID:25337455
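
    Checking a drawdown record for 1/f behaviour amounts to fitting the slope of its log-log power spectrum. A minimal numpy sketch on a synthetic series (a random walk, whose exponent should come out near 2; the field data, not reproduced here, are reported to follow 1/f):

        import numpy as np

        rng = np.random.default_rng(3)
        x = np.cumsum(rng.normal(size=4096))        # synthetic random-walk series
        f = np.fft.rfftfreq(x.size, d=1.0)[1:]      # drop the zero frequency
        p = np.abs(np.fft.rfft(x - x.mean()))[1:] ** 2
        slope, _ = np.polyfit(np.log(f), np.log(p), 1)
        print(f"spectral exponent beta ~ {-slope:.2f}  (P(f) ~ 1/f^beta)")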

  17. Improving a regional model using reduced complexity and parameter estimation

    USGS Publications Warehouse

    Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.

    2002-01-01

    The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model

  18. Analogue experiments as benchmarks for models of lava flow emplacement

    NASA Astrophysics Data System (ADS)

    Garel, F.; Kaminski, E. C.; Tait, S.; Limare, A.

    2013-12-01

    experimental observations of the effect of wind on the surface thermal structure of a viscous flow, which could be used to benchmark a thermal heat-loss model. We will also briefly present more complex analogue experiments using wax material. These experiments exhibit discontinuous advance behavior and a dual surface thermal structure, with low-temperature (solidified) regions alongside high-temperature regions where hot liquid is exposed at the surface. Emplacement models should aim to reproduce these two features, also observed on lava flows, to better predict the hazard of lava inundation.

  19. On explicit algebraic stress models for complex turbulent flows

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.; Speziale, C. G.

    1992-01-01

    Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.

  20. Prediction of Complex Aerodynamic Flows with Explicit Algebraic Stress Models

    NASA Technical Reports Server (NTRS)

    Abid, Ridha; Morrison, Joseph H.; Gatski, Thomas B.; Speziale, Charles G.

    1996-01-01

    An explicit algebraic stress equation, developed by Gatski and Speziale, is used in the framework of K-epsilon formulation to predict complex aerodynamic turbulent flows. The nonequilibrium effects are modeled through coefficients that depend nonlinearly on both rotational and irrotational strains. The proposed model was implemented in the ISAAC Navier-Stokes code. Comparisons with the experimental data are presented which clearly demonstrate that explicit algebraic stress models can predict the correct response to nonequilibrium flow.

  1. Real-data Calibration Experiments On A Distributed Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Brath, A.; Montanari, A.; Toth, E.

    The increasing availability of extended information on the study watersheds does not generally overcome the need to determine, through calibration, at least a part of the parameters of distributed hydrologic models. The complexity of such models, making the computations highly intensive, has often prevented an extensive analysis of calibration issues. The purpose of this study is an evaluation of the validation results of a series of automatic calibration experiments (using the shuffled complex evolution method, Duan et al., 1992) performed with a highly conceptualised, continuously simulating, distributed hydrologic model applied to the real data of a mid-sized Italian watershed. Major flood events that occurred in the 1990-2000 decade are simulated with the parameters obtained by calibrating the model against discharge data observed at the closure section of the watershed, and the hydrological features (overall agreement, volumes, peaks and times to peak) of the discharges obtained both at the closure section and at an interior stream gauge are analysed for validation purposes. A first set of calibrations investigates the effect of the variability of the calibration periods, using the data from several single flood events and from longer, continuous periods. Another analysis regards the influence of the rainfall input and is carried out by varying the size and distribution of the raingauge network, in order to examine the relation between the spatial pattern of observed rainfall and the variability of modelled runoff. Lastly, a comparison of the hydrographs obtained for the flood events with the model parameterisation resulting from modifying the objective function to be minimised in the automatic calibration procedure is presented.

  2. A Complex Systems Model Approach to Quantified Mineral Resource Appraisal

    USGS Publications Warehouse

    Gettings, M.E.; Bultman, M.W.; Fisher, F.S.

    2004-01-01

    For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.

  3. Humic Acid Complexation of Th, Hf and Zr in Ligand Competition Experiments: Metal Loading and pH Effects

    NASA Technical Reports Server (NTRS)

    Stern, Jennifer C.; Foustoukos, Dionysis I.; Sonke, Jeroen E.; Salters, Vincent J. M.

    2014-01-01

    The mobility of metals in soils and subsurface aquifers is strongly affected by sorption and complexation with dissolved organic matter, oxyhydroxides, clay minerals, and inorganic ligands. Humic substances (HS) are organic macromolecules with functional groups that have a strong affinity for binding metals, such as actinides. Thorium, often studied as an analog for tetravalent actinides, has also been shown to strongly associate with dissolved and colloidal HS in natural waters. The effects of HS on the mobilization dynamics of actinides are of particular interest in risk assessment of nuclear waste repositories. Here, we present conditional equilibrium binding constants (Kc,MHA) of thorium, hafnium, and zirconium-humic acid complexes from ligand competition experiments using capillary electrophoresis coupled with ICP-MS (CE-ICP-MS). Equilibrium dialysis ligand exchange (EDLE) experiments using size exclusion via a 1000 Da membrane were also performed to validate the CE-ICP-MS analysis. Experiments were performed at pH 3.5-7 with solutions containing one tetravalent metal (Th, Hf, or Zr), Elliot soil humic acid (EHA) or Pahokee peat humic acid (PHA), and EDTA. CE-ICP-MS and EDLE experiments yielded nearly identical binding constants for the metal-humic acid complexes, indicating that both methods are appropriate for examining metal speciation at pH conditions below neutral. We find that tetravalent metals form strong complexes with humic acids, with Kc,MHA several orders of magnitude above REE-humic complexes. Experiments were conducted at a range of dissolved HA concentrations to examine the effect of the [HA]/[Th] molar ratio on Kc,MHA. At low metal loading conditions (i.e. elevated [HA]/[Th] ratios) the Th-HA binding constant reached values that were not affected by the relative abundance of humic acid and thorium. The importance of [HA]/[Th] molar ratios on constraining the equilibrium of MHA complexation is apparent when our estimated Kc,MHA values

  4. Implementation of a complex multi-phase equation of state for cerium and its correlation with experiment

    SciTech Connect

    Cherne, Frank J; Jensen, Brian J; Elkin, Vyacheslav M

    2009-01-01

    The complexity of cerium combined with its interesting material properties makes it a desirable material to examine dynamically. Characteristics such as the softening of the material before the phase change, the low-pressure solid-solid phase change, the predicted low-pressure melt boundary, and the solid-solid critical point add complexity to the construction of its equation of state. Currently, we are incorporating a feedback loop between theoretical and experimental understanding of the material. Using a model equation of state for cerium, we compare calculated wave profiles with experimental wave profiles for a number of front-surface impact (cerium impacting a plated window) experiments. Using the calculated release isentrope, we predict the temperature of the observed rarefaction shock. These experiments showed that the release state occurs at different magnitudes, allowing us to infer where the dynamic γ-α phase boundary lies.

  5. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    SciTech Connect

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.

  6. Evaluation of soil flushing of complex contaminated soil: an experimental and modeling simulation study.

    PubMed

    Yun, Sung Mi; Kang, Christina S; Kim, Jonghwa; Kim, Han S

    2015-04-28

    The removal of heavy metals (Zn and Pb) and heavy petroleum oils (HPOs) from a soil with complex contamination was examined by soil flushing. Desorption and transport behaviors of the complex contaminants were assessed by batch and continuous flow reactor experiments and through modeling simulations. Flushing a one-dimensional flow column packed with complex contaminated soil sequentially with citric acid then a surfactant resulted in the removal of 85.6% of Zn, 62% of Pb, and 31.6% of HPO. The desorption distribution coefficients, K_U,batch and K_L,batch, converged to constant values as C_e increased. An equilibrium model (ADR) and nonequilibrium models (TSNE and TRNE) were used to predict the desorption and transport of complex contaminants. The nonequilibrium models demonstrated better fits with the experimental values obtained from the column test than the equilibrium model. The ranges of K_U,batch and K_L,batch were very close to those of K_U,fit and K_L,fit determined from model simulations. The parameters (R, β, ω, α, and f) determined from model simulations were useful for characterizing the transport of contaminants within the soil matrix. The results of this study provide useful information for the operational parameters of the flushing process for soils with complex contamination. PMID:25698434
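
    As a rough illustration of the equilibrium advection-dispersion-retardation (ADR) transport named above, the following is a minimal finite-difference sketch of R dC/dt = D d2C/dx2 - v dC/dx for a flushing pulse; the geometry, pulse timing, and all parameter values are invented for illustration, and the paper's nonequilibrium TSNE/TRNE variants add kinetic exchange terms not shown here.

      import numpy as np

      def adr_btc(v=1.0, D=0.05, R=2.0, L=10.0, T=40.0, nx=200, dt=0.002):
          """Explicit scheme for the retarded advection-dispersion equation;
          returns the column-outlet breakthrough curve for a 5-unit inlet pulse.
          All parameters are illustrative, not fitted values from the study."""
          dx = L / nx
          C = np.zeros(nx)
          out = []
          for step in range(int(T / dt)):
              C[0] = 1.0 if step * dt < 5.0 else 0.0      # flushing pulse at inlet
              lap = np.zeros(nx)
              lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
              adv = np.zeros(nx)
              adv[1:-1] = (C[1:-1] - C[:-2]) / dx          # upwind advection
              C = C + dt * (D * lap - v * adv) / R         # retardation factor R
              C[-1] = C[-2]                                # free-outflow boundary
              out.append(C[-1])
          return np.array(out)

      print(adr_btc().max())   # peak relative concentration at the outlet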

  7. SEE Rate Estimation: Model Complexity and Data Requirements

    NASA Technical Reports Server (NTRS)

    Ladbury, Ray

    2008-01-01

    Statistical methods outlined in [Ladbury, TNS2007] can be generalized for Monte Carlo rate-calculation methods. Two Monte Carlo approaches are considered: (a) the rate is based on a vendor-supplied (or reverse-engineered) model, with SEE testing and statistical analysis performed to validate the model; and (b) the rate is calculated from a model fit to the SEE data, with statistical analysis very similar to the CREME96 case. Information theory allows simultaneous consideration of multiple models with different complexities: (a) the model with the lowest AIC usually has the greatest predictive power; (b) model averaging using AIC weights may give better performance if several models perform comparably well; and (c) rates can be bounded at a given confidence level over multiple models, as well as over the parameter space of a model.
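
    A minimal sketch of the AIC-weight model averaging described above; the AIC scores and per-model rates below are hypothetical placeholders, not values from the presentation.

      import numpy as np

      def akaike_weights(aic_values):
          """Convert AIC scores into normalized Akaike weights."""
          aic = np.asarray(aic_values, dtype=float)
          delta = aic - aic.min()          # AIC differences vs. the best model
          w = np.exp(-0.5 * delta)
          return w / w.sum()

      # Three candidate SEE-rate models with hypothetical AIC scores:
      weights = akaike_weights([102.3, 103.1, 110.8])
      rates = np.array([1.2e-6, 1.5e-6, 0.9e-6])    # hypothetical rates per model
      print(weights, np.dot(weights, rates))        # model-averaged rate estimate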

  8. Multikernel linear mixed models for complex phenotype prediction.

    PubMed

    Weissbrod, Omer; Geiger, Dan; Rosset, Saharon

    2016-07-01

    Linear mixed models (LMMs) and their extensions have recently become the method of choice in phenotype prediction for complex traits. However, LMM use to date has typically been limited by assuming simple genetic architectures. Here, we present multikernel linear mixed model (MKLMM), a predictive modeling framework that extends the standard LMM using multiple-kernel machine learning approaches. MKLMM can model genetic interactions and is particularly suitable for modeling complex local interactions between nearby variants. We additionally present MKLMM-Adapt, which automatically infers interaction types across multiple genomic regions. In an analysis of eight case-control data sets from the Wellcome Trust Case Control Consortium and more than a hundred mouse phenotypes, MKLMM-Adapt consistently outperforms competing methods in phenotype prediction. MKLMM is as computationally efficient as standard LMMs and does not require storage of genotypes, thus achieving state-of-the-art predictive power without compromising computational feasibility or genomic privacy. PMID:27302636
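
    A toy stand-in for the multikernel idea: combine a linear kernel with a kernel that captures pairwise interactions, then predict by kernel ridge regression. MKLMM itself fits variance components within an LMM, so this is only an assumption-laden sketch of the kernel-combination step, with made-up data and weights.

      import numpy as np

      def kernel_ridge_predict(K_train, K_cross, y, weights, lam=1.0):
          """Blend several kernels with fixed nonnegative weights and predict
          by kernel ridge regression (a simplified surrogate for MKLMM)."""
          K = sum(w * Kt for w, Kt in zip(weights, K_train))
          Kx = sum(w * Kc for w, Kc in zip(weights, K_cross))
          alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
          return Kx @ alpha

      rng = np.random.default_rng(2)
      X = rng.standard_normal((80, 200))            # mock genotype matrix
      y = X[:, 0] * X[:, 1] + X[:, 2]               # additive + interaction signal
      Xtr, Xte, ytr = X[:60], X[60:], y[:60]
      lin = lambda A, B: A @ B.T / A.shape[1]       # linear (additive) kernel
      sq = lambda A, B: lin(A, B) ** 2              # captures pairwise interactions
      pred = kernel_ridge_predict([lin(Xtr, Xtr), sq(Xtr, Xtr)],
                                  [lin(Xte, Xtr), sq(Xte, Xtr)],
                                  ytr, weights=[0.5, 0.5])
      print(pred.shape)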

  9. Turing instability in reaction-diffusion models on complex networks

    NASA Astrophysics Data System (ADS)

    Ide, Yusuke; Izuhara, Hirofumi; Machida, Takuya

    2016-09-01

    In this paper, the Turing instability in reaction-diffusion models defined on complex networks is studied. Here, we focus on three types of models which generate complex networks, i.e. the Erdős-Rényi, the Watts-Strogatz, and the threshold network models. From analysis of the Laplacian matrices of graphs generated by these models, we numerically reveal that stable and unstable regions of a homogeneous steady state on the parameter space of two diffusion coefficients completely differ, depending on the network architecture. In addition, we theoretically discuss the stable and unstable regions in the cases of regular enhanced ring lattices which include regular circles, and networks generated by the threshold network model when the number of vertices is large enough.
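
    A short sketch of the Laplacian-based stability check the abstract describes, for an Erdős-Rényi graph built with plain numpy; the two-species reaction Jacobian below is an illustrative activator-inhibitor example, not taken from the paper.

      import numpy as np

      def turing_unstable(A, J, Du, Dv):
          """Othmer-Scriven check: the homogeneous state of a two-species
          reaction-diffusion system on a network destabilizes if, for some
          Laplacian eigenvalue lam < 0, J + lam*diag(Du, Dv) has an
          eigenvalue with positive real part."""
          L = A - np.diag(A.sum(axis=1))       # network Laplacian, eigenvalues <= 0
          lams = np.linalg.eigvalsh(L)
          D = np.diag([Du, Dv])
          for lam in lams[:-1]:                # skip lam = 0 (homogeneous mode)
              if np.linalg.eigvals(J + lam * D).real.max() > 0:
                  return True
          return False

      rng = np.random.default_rng(0)
      n, p = 50, 0.1
      A = (rng.random((n, n)) < p).astype(float)   # Erdos-Renyi adjacency
      A = np.triu(A, 1); A = A + A.T
      J = np.array([[1.0, -2.0], [3.0, -4.0]])     # stable without diffusion
      print(turing_unstable(A, J, Du=0.05, Dv=1.0))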

  10. Multiwell experiment: reservoir modeling analysis, Volume II

    SciTech Connect

    Horton, A.I.

    1985-05-01

    This report updates an ongoing analysis by reservoir modelers at the Morgantown Energy Technology Center (METC) of well test data from the Department of Energy's Multiwell Experiment (MWX). Results of previous efforts were presented in a recent METC Technical Note (Horton 1985). Results included in this report pertain to the poststimulation well tests of Zones 3 and 4 of the Paludal Sandstone Interval and the prestimulation well tests of the Red and Yellow Zones of the Coastal Sandstone Interval. The following results were obtained by using a reservoir model and history matching procedures: (1) Post-minifracture analysis indicated that the minifracture stimulation of the Paludal Interval did not produce an induced fracture, and extreme formation damage did occur, as a 65% permeability reduction around the wellbore was estimated. The minifracture was designed to extend from 200 to 300 feet on each side of the wellbore; (2) Post full-scale stimulation analysis for the Paludal Interval also showed that extreme formation damage occurred during the stimulation, as indicated by a 75% permeability reduction 20 feet on each side of the induced fracture. An induced fracture half-length of 100 feet was determined, compared to a designed fracture half-length of 500 to 600 feet; and (3) Analysis of prestimulation well test data from the Coastal Interval agreed with previous well-to-well interference tests that showed extreme permeability anisotropy was not a factor for this zone. This lack of permeability anisotropy was also verified by a nitrogen injection test performed on the Coastal Red and Yellow Zones. 8 refs., 7 figs., 2 tabs.

  11. Complex solutions for the scalar field model of the Universe

    NASA Astrophysics Data System (ADS)

    Lyons, Glenn W.

    1992-08-01

    The Hartle-Hawking proposal is implemented for Hawking's scalar field model of the Universe. For this model the complex saddle-point geometries required by the semiclassical approximation to the path integral cannot simply be deformed into real Euclidean and real Lorentzian sections. Approximate saddle points are constructed which are fully complex and have contours of real Lorentzian evolution. The semiclassical wave function is found to give rise to classical spacetimes at late times and extra terms in the Hamilton-Jacobi equation do not contribute significantly to the potential.

  12. Introduction to a special section on ecohydrology of semiarid environments: Confronting mathematical models with ecosystem complexity

    NASA Astrophysics Data System (ADS)

    Svoray, Tal; Assouline, Shmuel; Katul, Gabriel

    2015-11-01

    Current literature provides a large number of publications about ecohydrological processes and their effect on the biota in drylands. Given the limited laboratory and field experiments in such systems, many of these publications are based on mathematical models of varying complexity. The underlying implicit assumption is that the data set used to evaluate these models covers the parameter space of conditions that characterize drylands and that the models represent the actual processes with acceptable certainty. However, the question arises: to what extent are these mathematical models valid when confronted with observed ecosystem complexity? This Introduction reviews the 16 papers that comprise the Special Section on Eco-hydrology of Semiarid Environments: Confronting Mathematical Models with Ecosystem Complexity. The subjects studied in these papers include rainfall regime, infiltration and preferential flow, evaporation and evapotranspiration, annual net primary production, dispersal and invasion, and vegetation greening. The findings in the papers published in this Special Section show that innovative mathematical modeling approaches can represent actual field measurements. Hence, there are strong grounds for suggesting that mathematical models can contribute to greater understanding of ecosystem complexity through characterization of space-time dynamics of biomass and water storage as well as their multiscale interactions. However, the generality of the models and their low-dimensional representation of many processes may also be a "curse" that results in failures when particulars of an ecosystem are required. It is envisaged that the search for a unifying "general" model, while seductive, may remain elusive in the foreseeable future. It is for this reason that improving the merger between experiments and models of various degrees of complexity continues to shape the future research agenda.

  13. Deterministic ripple-spreading model for complex networks.

    PubMed

    Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel

    2011-04-01

    This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input they cannot output a unique topology. By contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time the stochastic feature of complex networks is captured by randomly initializing the ripple-spreading-related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading-related parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications. PMID:21599256
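
    A heavily simplified, hypothetical reading of such a generator (in the actual model, activated nodes emit ripples of their own; here every node emits a single ripple whose energy decays linearly with distance, and a link forms wherever the arriving energy still exceeds the receiving node's threshold): generation is deterministic given the point layout and parameters, while randomness enters only through their initialization.

      import numpy as np

      def ripple_spreading_network(points, e0, beta, thresholds):
          """Directed adjacency matrix from a one-ripple-per-node toy rule:
          j links from i when e0 - beta*dist(i, j) exceeds j's threshold."""
          n = len(points)
          A = np.zeros((n, n), dtype=int)
          for i in range(n):
              d = np.linalg.norm(points - points[i], axis=1)
              A[i, (e0 - beta * d) > thresholds] = 1
          np.fill_diagonal(A, 0)
          return A

      rng = np.random.default_rng(1)
      pts = rng.random((100, 2))                  # random spatial layout
      thr = rng.uniform(0.1, 0.5, size=100)       # per-node activation thresholds
      A = ripple_spreading_network(pts, e0=0.6, beta=1.0, thresholds=thr)
      print(A.sum())                              # number of directed links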

  14. A Compact Model for the Complex Plant Circadian Clock

    PubMed Central

    De Caluwé, Joëlle; Xiao, Qiying; Hermans, Christian; Verbruggen, Nathalie; Leloup, Jean-Christophe; Gonze, Didier

    2016-01-01

    The circadian clock is an endogenous timekeeper that allows organisms to anticipate and adapt to the daily variations of their environment. The plant clock is an intricate network of interlocked feedback loops, in which transcription factors regulate each other to generate oscillations with expression peaks at specific times of the day. Over the last decade, mathematical modeling approaches have been used to understand the inner workings of the clock in the model plant Arabidopsis thaliana. Those efforts have produced a number of models of ever increasing complexity. Here, we present an alternative model that combines a low number of equations and parameters, similar to the very earliest models, with the complex network structure found in more recent ones. This simple model describes the temporal evolution of the abundance of eight clock gene mRNA/protein and captures key features of the clock on a qualitative level, namely the entrained and free-running behaviors of the wild type clock, as well as the defects found in knockout mutants (such as altered free-running periods, lack of entrainment, or changes in the expression of other clock genes). Additionally, our model produces complex responses to various light cues, such as extreme photoperiods and non-24 h environmental cycles, and can describe the control of hypocotyl growth by the clock. Our model constitutes a useful tool to probe dynamical properties of the core clock as well as clock-dependent processes. PMID:26904049
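
    For orientation, a single-loop, light-driven toy clock (emphatically not the paper's eight-variable model): mRNA production is repressed by its own protein product and boosted during the photoperiod, so the toy entrains to a 24 h light cycle; free-running rhythms require the fuller interlocked feedback structure. All rate constants are arbitrary.

      import numpy as np

      def clock_toy(days=10, dt=0.01, light=lambda t: (t % 24) < 12):
          """Euler-integrate a minimal repressor loop with light input."""
          m, p = 0.1, 0.1
          ts = np.arange(0, 24 * days, dt)
          out = []
          for t in ts:
              prod = 1.0 + 0.3 * light(t)            # light boosts transcription
              dm = prod / (1.0 + p**4) - 0.2 * m     # Hill repression + decay
              dp = 0.5 * m - 0.1 * p                 # translation + decay
              m, p = m + dt * dm, p + dt * dp
              out.append(m)
          return ts, np.array(out)

      ts, m = clock_toy()
      print(m[-2400:].max(), m[-2400:].min())   # last-day mRNA amplitude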

  15. Historical and idealized climate model experiments: an EMIC intercomparison

    NASA Astrophysics Data System (ADS)

    Eby, M.; Weaver, A. J.; Alexander, K.; Zickfeld, K.; Abe-Ouchi, A.; Cimatoribus, A. A.; Crespin, E.; Drijfhout, S. S.; Edwards, N. R.; Eliseev, A. V.; Feulner, G.; Fichefet, T.; Forest, C. E.; Goosse, H.; Holden, P. B.; Joos, F.; Kawamiya, M.; Kicklighter, D.; Kienert, H.; Matsumoto, K.; Mokhov, I. I.; Monier, E.; Olsen, S. M.; Pedersen, J. O. P.; Perrette, M.; Philippon-Berthier, G.; Ridgwell, A.; Schlosser, A.; Schneider von Deimling, T.; Shaffer, G.; Smith, R. S.; Spahni, R.; Sokolov, A. P.; Steinacher, M.; Tachiiri, K.; Tokos, K.; Yoshimori, M.; Zeng, N.; Zhao, F.

    2012-08-01

    Both historical and idealized climate model experiments are performed with a variety of Earth System Models of Intermediate Complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land-use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes seem to be underestimated. It is possible that recent modelled climate trends or climate-carbon feedbacks are overestimated resulting in too much land carbon loss or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one thousand year long, idealized, 2x and 4x CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate-carbon feedbacks. The values from EMICs generally fall within the range given by General Circulation Models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows considerable synergy between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon-climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to

  16. Deaf Children with Complex Needs: Parental Experience of Access to Cochlear Implants and Ongoing Support

    ERIC Educational Resources Information Center

    McCracken, Wendy; Turner, Oliver

    2012-01-01

    This paper discusses the experiences of parents of deaf children with additional complex needs (ACN) in accessing cochlear implant (CI) services and achieving ongoing support. Of a total study group of fifty-one children with ACN, twelve had been fitted with a CI. The parental accounts provide a rich and varied picture of service access. For some…

  17. Simple and complex models for studying muscle function in walking.

    PubMed

    Pandy, Marcus G

    2003-09-29

    While simple models can be helpful in identifying basic features of muscle function, more complex models are needed to discern the functional roles of specific muscles in movement. In this paper, two very different models of walking, one simple and one complex, are used to study how muscle forces, gravitational forces and centrifugal forces (i.e. forces arising from motion of the joints) combine to produce the pattern of force exerted on the ground. Both the simple model and the complex one predict that muscles contribute significantly to the ground force pattern generated in walking; indeed, both models show that muscle action is responsible for the appearance of the two peaks in the vertical force. The simple model, an inverted double pendulum, suggests further that the first and second peaks are due to net extensor muscle moments exerted about the knee and ankle, respectively. Analyses based on a much more complex, muscle-actuated simulation of walking are in general agreement with these results; however, the more detailed model also reveals that both the hip extensor and hip abductor muscles contribute significantly to vertical motion of the centre of mass, and therefore to the appearance of the first peak in the vertical ground force, in early single-leg stance. This discrepancy in the model predictions is most probably explained by the difference in model complexity. First, movements of the upper body in the sagittal plane are not represented properly in the double-pendulum model, which may explain the anomalous result obtained for the contribution of a hip-extensor torque to the vertical ground force. Second, the double-pendulum model incorporates only three of the six major elements of walking, whereas the complex model is fully 3D and incorporates all six gait determinants. In particular, pelvic list occurs primarily in the frontal plane, so there is the potential for this mechanism to contribute significantly to the vertical ground force, especially

  18. Dynamic crack initiation toughness : experiments and peridynamic modeling.

    SciTech Connect

    Foster, John T.

    2009-10-01

    This dissertation describes research on the dynamic crack initiation toughness of a 4340 steel. Researchers have been conducting experimental testing of dynamic crack initiation toughness, K_Ic, for many years, using many experimental techniques that yield vastly different trends when K_Ic is reported as a function of loading rate. The dissertation describes a novel experimental technique for measuring K_Ic in metals using the Kolsky bar. The method borrows from improvements made in recent years in traditional Kolsky bar testing by using pulse shaping techniques to ensure a constant loading rate applied to the sample before crack initiation. Dynamic crack initiation measurements were reported on a 4340 steel at two different loading rates. The steel was shown to exhibit a rate dependence, with the recorded values of K_Ic being much higher at the higher loading rate. Using the knowledge of this rate dependence as a motivation in attempting to model the fracture events, a viscoplastic constitutive model was implemented into a peridynamic computational mechanics code. Peridynamics is a newly developed theory in solid mechanics that replaces the classical partial differential equations of motion with integral-differential equations, which do not require the existence of spatial derivatives in the displacement field. This allows for the straightforward modeling of unguided crack initiation and growth. To date, peridynamic implementations have used severely restricted constitutive models. This research represents the first implementation of a complex material model and its validation. After showing results comparing deformations to experimental Taylor anvil impact for the viscoplastic material model, a novel failure criterion is introduced to model the dynamic crack initiation toughness experiments. The failure model is based on an energy criterion and uses the K_Ic values recorded experimentally as an input. The failure model
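
    For reference, the integral-differential equation mentioned above is, in the standard bond-based form of the theory, the peridynamic equation of motion

      \rho(x)\,\ddot{u}(x,t) = \int_{H_x} f\big(u(x',t) - u(x,t),\; x' - x\big)\, dV_{x'} + b(x,t),

    where H_x is a neighborhood of material point x, f is the pairwise force function, and b is the body force density. Because no spatial derivatives of the displacement u appear, cracks need no special treatment. (This is the generic form of the theory, not a statement of the dissertation's particular constitutive or failure model.)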

  19. Optimal parameter and uncertainty estimation of a land surface model: Sensitivity to parameter ranges and model complexities

    NASA Astrophysics Data System (ADS)

    Xia, Youlong; Yang, Zong-Liang; Stoffa, Paul L.; Sen, Mrinal K.

    2005-01-01

    Most previous land-surface model calibration studies have defined global ranges for their parameters to search for optimal parameter sets. Little work has been conducted to study the impacts of realistic versus global ranges, as well as model complexities, on the calibration and uncertainty estimates. The primary purpose of this paper is to investigate these impacts by employing Bayesian Stochastic Inversion (BSI) with the Chameleon Surface Model (CHASM). The CHASM was designed to explore the general aspects of land-surface energy balance representation within a common modeling framework that can be run from a simple energy balance formulation to a complex mosaic type structure. The BSI is an uncertainty estimation technique based on Bayes theorem, importance sampling, and very fast simulated annealing. The model forcing data and surface flux data were collected at seven sites representing a wide range of climate and vegetation conditions. For each site, four experiments were performed with simple and complex CHASM formulations as well as realistic and global parameter ranges. Twenty-eight experiments were conducted, and 50,000 parameter sets were used for each run. The results show that the use of global and realistic ranges gives similar simulations for both modes for most sites, but the global ranges tend to produce some unreasonable optimal parameter values. Comparison of simple and complex modes shows that the simple mode has more parameters with unreasonable optimal values. The choice of parameter ranges and model complexity has significant impacts on the frequency distributions of parameters, marginal posterior probability density functions, and estimates of uncertainty of simulated sensible and latent heat fluxes. Comparison between model complexity and parameter ranges shows that the former has more significant impacts on parameter and uncertainty estimations.

  20. METEOROLOGICAL EVENTS THAT PRODUCED THE HIGHEST GROUND-LEVEL CONCENTRATIONS DURING COMPLEX TERRAIN FIELD EXPERIMENTS

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) is sponsoring the Complex Terrain Model Development project, a multi-year study to develop improved models for calculating ground-level air pollutant concentrations that result from large emission sources located in mountainous terra...

  1. (Relatively) Simple Models of Flow in Complex Terrain

    NASA Astrophysics Data System (ADS)

    Taylor, Peter; Weng, Wensong; Salmon, Jim

    2013-04-01

    The term, "complex terrain" includes both topography and variations in surface roughness and thermal properties. The scales that are affected can differ and there are some advantages to modeling them separately. In studies of flow in complex terrain we have developed 2 D and 3 D models of atmospheric PBL boundary layer flow over roughness changes, appropriate for longer fetches than most existing models. These "internal boundary layers" are especially important for understanding and predicting wind speed variations with distance from shorelines, an important factor for wind farms around, and potentially in, the Great Lakes. The models can also form a base for studying the wakes behind woodlots and wind turbines. Some sample calculations of wind speed evolution over water and the reduced wind speeds behind an isolated woodlot, represented simply in terms of an increase in surface roughness, will be presented. Note that these models can also include thermal effects and non-neutral stratification. We can use the model to deal with 3-D roughness variations and will describe applications to both on-shore and off-shore situations around the Great Lakes. In particular we will show typical results for hub height winds and indicate the length of over-water fetch needed to get the full benefit of siting turbines over water. The linear Mixed Spectral Finite-Difference (MSFD) and non-linear (NLMSFD) models for surface boundary-layer flow over complex terrain have been extended to planetary boundary-layer flow over topography This allows for their use for larger scale regions and increased heights. The models have been applied to successfully simulate the Askervein hill experimental case and we will show examples of applications to more complex terrain, typical of some Canadian wind farms. Output from the model can be used as an alternative to MS-Micro, WAsP or other CFD calculations of topographic impacts for input to wind farm design software.

  2. Electron-impact ionization of neon at low projectile energy: an internormalized experiment and theory for a complex target.

    PubMed

    Pflüger, Thomas; Zatsarinny, Oleg; Bartschat, Klaus; Senftleben, Arne; Ren, Xueguang; Ullrich, Joachim; Dorn, Alexander

    2013-04-12

    As a fundamental test for state-of-the-art theoretical approaches, we have studied the single ionization (2p) of neon at a projectile energy of 100 eV. The experimental data were acquired using an advanced reaction microscope that benefits from high efficiency and a large solid-angle acceptance of almost 4π. We put special emphasis on the ability to measure internormalized triple-differential cross sections over a large part of the phase space. The data are compared to predictions from a second-order hybrid distorted-wave plus R-matrix model and a fully nonperturbative B-spline R-matrix (BSR) with pseudostates approach. For a target of this complexity and the low-energy regime, unprecedented agreement between experiment and the BSR model is found. This represents a significant step forward in the investigation of complex targets. PMID:25167263

  3. Force-dependent persistence length of DNA-intercalator complexes measured in single molecule stretching experiments.

    PubMed

    Bazoni, R F; Lima, C H M; Ramos, E B; Rocha, M S

    2015-06-01

    By using optical tweezers with an adjustable trap stiffness, we have performed systematic single molecule stretching experiments with two types of DNA-intercalator complexes, in order to investigate the effects of the maximum applied forces on the mechanical response of such complexes. We have explicitly shown that even in the low-force entropic regime the persistence length of the DNA-intercalator complexes is strongly force-dependent, although such behavior is not exhibited by bare DNA molecules. We discuss the possible physicochemical effects that can lead to such results. In particular, we propose that the stretching force can promote partial denaturation on the highly distorted double-helix of the DNA-intercalator complexes, which interfere strongly in the measured values of the persistence length. PMID:25913936
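
    Persistence lengths in such stretching experiments are typically extracted by fitting a worm-like-chain expression to the force-extension data; below is a sketch using the standard Marko-Siggia interpolation formula. The persistence length chosen for the "complex" is an invented value for illustration; the paper's point is that the fitted persistence length itself shifts with the maximum applied force.

      import numpy as np

      KBT = 4.11e-21   # thermal energy at ~298 K, joules

      def wlc_force(x_over_L, Lp):
          """Marko-Siggia worm-like-chain interpolation: force (N) at
          fractional extension x/L for persistence length Lp (m)."""
          z = np.asarray(x_over_L)
          return (KBT / Lp) * (0.25 / (1.0 - z) ** 2 - 0.25 + z)

      # Bare DNA (Lp ~ 50 nm) vs. a hypothetical intercalated complex:
      for Lp in (50e-9, 20e-9):
          print(Lp, wlc_force(0.5, Lp))   # force at half extension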

  4. Electrostatic Model Applied to ISS Charged Water Droplet Experiment

    NASA Technical Reports Server (NTRS)

    Stevenson, Daan; Schaub, Hanspeter; Pettit, Donald R.

    2015-01-01

    The electrostatic force can be used to create novel relative motion between charged bodies if it can be isolated from the stronger gravitational and dissipative forces. Recently, Coulomb orbital motion was demonstrated on the International Space Station by releasing charged water droplets in the vicinity of a charged knitting needle. In this investigation, the Multi-Sphere Method, an electrostatic model developed to study active spacecraft position control by Coulomb charging, is used to simulate the complex orbital motion of the droplets. When atmospheric drag is introduced, the simulated motion closely mimics that seen in the video footage of the experiment. The electrostatic force's inverse dependency on separation distance near the center of the needle lends itself to analytic predictions of the radial motion.

  5. Computer models of complex multiloop branched pipeline systems

    NASA Astrophysics Data System (ADS)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on graph theory and Kirchhoff's two laws as applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks, when the latter are considered as single hydraulic systems. On the basis of multivariant calculations the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and the construction of their new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on the example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
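
    A toy version of the Kirchhoff bookkeeping on a four-node network: real pipe hydraulics is nonlinear (pressure drop roughly quadratic in flow rate), so the constant edge conductances here should be read as a linearization for illustration, with arbitrary values.

      import numpy as np

      # Edges as (node_i, node_j, conductance); flow = g * pressure drop.
      edges = [(0, 1, 2.0), (0, 2, 1.0), (1, 2, 1.0), (1, 3, 1.0), (2, 3, 2.0)]
      n = 4
      G = np.zeros((n, n))                     # weighted Laplacian (nodal analysis)
      for i, j, g in edges:
          G[i, i] += g; G[j, j] += g
          G[i, j] -= g; G[j, i] -= g

      p_src, p_sink = 5.0, 0.0                 # boundary pressures (arbitrary units)
      free = [1, 2]                            # interior nodes: unknown pressures
      rhs = -(G[np.ix_(free, [0, 3])] @ np.array([p_src, p_sink]))
      p_free = np.linalg.solve(G[np.ix_(free, free)], rhs)   # Kirchhoff balance
      p = {0: p_src, 3: p_sink, 1: p_free[0], 2: p_free[1]}
      for i, j, g in edges:
          print((i, j), g * (p[i] - p[j]))     # flow rate on each edge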

  6. Modeling the propagation of mobile malware on complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Wanping; Liu, Chao; Yang, Zheng; Liu, Xiaoyang; Zhang, Yihao; Wei, Zuxue

    2016-08-01

    In this paper, the spreading behavior of malware across mobile devices is addressed. By introducing complex networks to model mobile networks, which follow a power-law degree distribution, a novel epidemic model for mobile malware propagation is proposed. The spreading threshold that governs the dynamics of the model is calculated. Theoretically, the asymptotic stability of the malware-free equilibrium is confirmed when the threshold is below unity, and global stability is further proved under some sufficient conditions. The influences of different model parameters as well as the network topology on malware propagation are also analyzed. Our theoretical studies and numerical simulations show that networks with higher heterogeneity are more conducive to the diffusion of malware, and complex networks with lower power-law exponents benefit malware spreading.
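
    The flavour of such threshold results can be seen in the standard heterogeneous mean-field estimate lambda_c = <k>/<k^2>, which falls as the power-law exponent drops; this is a generic textbook formula shown for illustration, not the paper's specific threshold expression.

      import numpy as np

      def hmf_threshold(gamma, kmin=1, kmax=1000):
          """Heterogeneous mean-field spreading threshold <k>/<k^2> for a
          truncated power-law degree distribution P(k) ~ k**-gamma."""
          k = np.arange(kmin, kmax + 1, dtype=float)
          pk = k ** -gamma
          pk /= pk.sum()
          return (pk * k).sum() / (pk * k**2).sum()

      for gamma in (3.5, 3.0, 2.5, 2.1):
          print(gamma, hmf_threshold(gamma))   # threshold shrinks with gamma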

  7. Petri net model for analysis of concurrently processed complex algorithms

    NASA Technical Reports Server (NTRS)

    Stoughton, John W.; Mielke, Roland R.

    1986-01-01

    This paper presents a Petri-net model suitable for analyzing the concurrent processing of computationally complex algorithms. The decomposed operations are to be processed in a multiple processor, data driven architecture. Of particular interest is the application of the model to both the description of the data/control flow of a particular algorithm, and to the general specification of the data driven architecture. A candidate architecture is also presented.
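
    A minimal token-game sketch of the formalism, using the usual incidence-matrix convention; the two-transition net below is a made-up example, not the paper's algorithm or architecture graph.

      import numpy as np

      pre = np.array([[1, 0],      # tokens each transition consumes per place
                      [0, 1],
                      [0, 0]])
      post = np.array([[0, 0],     # tokens each transition produces per place
                       [1, 0],
                       [0, 1]])
      marking = np.array([2, 0, 0])    # initial tokens in places p0, p1, p2

      def enabled(m):
          """Transitions whose input places all hold enough tokens."""
          return [t for t in range(pre.shape[1]) if np.all(m >= pre[:, t])]

      while enabled(marking):
          t = enabled(marking)[0]                  # fire first enabled transition
          marking = marking - pre[:, t] + post[:, t]
          print("fired t%d ->" % t, marking)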

  8. Pest control experiments show benefits of complexity at landscape and local scales.

    PubMed

    Chaplin-Kramer, Rebecca; Kremen, Claire

    2012-10-01

    Farms benefit from pest control services provided by nature, but management of these services requires an understanding of how habitat complexity within and around the farm impacts the relationship between agricultural pests and their enemies. Using cage experiments, this study measures the effect of habitat complexity across scales on pest suppression of the cabbage aphid Brevicoryne brassicae in broccoli. Our results reveal that proportional reduction of pest density increases with complexity both at the landscape scale (measured by natural habitat cover in the 1 km around the farm) and at the local scale (plant diversity). While high local complexity can compensate for low complexity at landscape scales and vice versa, a delay in natural enemy arrival to locally complex sites in simple landscapes may compromise the enemies' ability to provide adequate control. Local complexity in simplified landscapes may only provide adequate top-down pest control in cooler microclimates with relatively low aphid colonization rates. Even so, strong natural enemy function can be overwhelmed by high rates of pest reproduction or colonization from nearby source habitat. PMID:23210310

  9. Modeling complex diffusion mechanisms in L1 2 -structured compounds

    NASA Astrophysics Data System (ADS)

    Zacate, M. O.; Lape, M.; Stufflebeam, M.; Evenson, W. E.

    2010-04-01

    We report on a procedure developed to create stochastic models of hyperfine interactions for complex diffusion mechanisms and demonstrate its application to simulate perturbed angular correlation spectra for the divacancy and 6-jump cycle diffusion mechanisms in L12-structured compounds.

  10. Performance of Random Effects Model Estimators under Complex Sampling Designs

    ERIC Educational Resources Information Center

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  11. The Complex Model of Television Viewing and Educational Achievement.

    ERIC Educational Resources Information Center

    Razel, Micha

    2001-01-01

    Meta-analyzed data from six national studies of elementary through high school students to determine the relationship between amount of television viewing and educational achievement. According to a complex viewing-achievement model, for small amounts of viewing, achievement increased with viewing, but as viewing increased beyond a certain point,…

  12. Conceptual Complexity, Teaching Style and Models of Teaching.

    ERIC Educational Resources Information Center

    Joyce, Bruce; Weil, Marsha

    The focus of this paper is on the relative roles of personality and training in enabling teachers to carry out the kinds of complex learning models which are envisioned by curriculum reformers in the social sciences. The paper surveys some of the major research done in this area and concludes that: 1) Most teachers do not manifest the complex…

  13. Surface complexation modeling of americium sorption onto volcanic tuff.

    PubMed

    Ding, M; Kelkar, S; Meijer, A

    2014-10-01

    Results of a surface complexation model (SCM) for americium sorption on volcanic rocks (devitrified and zeolitic tuff) are presented. The model was developed using PHREEQC and based on laboratory data for americium sorption on quartz. Available data for sorption of americium on quartz as a function of pH in dilute groundwater can be modeled with two surface reactions involving an americium sulfate and an americium carbonate complex. It was assumed, in applying the model to volcanic rocks from Yucca Mountain, that the surface properties of volcanic rocks can be represented by a quartz surface. Using groundwaters compositionally representative of Yucca Mountain, americium sorption distribution coefficient (Kd, L/kg) values were calculated as a function of pH. These Kd values are close to the experimentally determined Kd values for americium sorption on volcanic rocks, decreasing with increasing pH in the pH range from 7 to 9. The surface complexation constants derived in this study allow prediction of sorption of americium in a natural complex system, taking into account the inherent uncertainty associated with geochemical conditions that occur along transport pathways. PMID:24963803

  14. Fischer and Schrock Carbene Complexes: A Molecular Modeling Exercise

    ERIC Educational Resources Information Center

    Montgomery, Craig D.

    2015-01-01

    An exercise in molecular modeling that demonstrates the distinctive features of Fischer and Schrock carbene complexes is presented. Semi-empirical calculations (PM3) demonstrate the singlet ground electronic state, restricted rotation about the C-Y bond, the positive charge on the carbon atom, and hence, the electrophilic nature of the Fischer…

  15. Catastrophe, Chaos, and Complexity Models and Psychosocial Adjustment to Disability.

    ERIC Educational Resources Information Center

    Parker, Randall M.; Schaller, James; Hansmann, Sandra

    2003-01-01

    Rehabilitation professionals may unknowingly rely on stereotypes and specious beliefs when dealing with people with disabilities, despite the formulation of theories that suggest new models of the adjustment process. Suggests that Catastrophe, Chaos, and Complexity Theories hold considerable promise in this regard. This article reviews these…

  16. The Complexity of Developmental Predictions from Dual Process Models

    ERIC Educational Resources Information Center

    Stanovich, Keith E.; West, Richard F.; Toplak, Maggie E.

    2011-01-01

    Drawing developmental predictions from dual-process theories is more complex than is commonly realized. Overly simplified predictions drawn from such models may lead to premature rejection of the dual process approach as one of many tools for understanding cognitive development. Misleading predictions can be avoided by paying attention to several…

  17. Modeling active memory: Experiment, theory and simulation

    NASA Astrophysics Data System (ADS)

    Amit, Daniel J.

    2001-06-01

    Neuro-physiological experiments on cognitively performing primates are described to argue that strong evidence exists for localized, non-ergodic (stimulus-specific) attractor dynamics in the cortex. The specific phenomena are delay activity distributions, i.e. enhanced spike-rate distributions resulting from training, which we associate with working memory. The anatomy of the relevant cortex region and the physiological characteristics of the participating elements (neural cells) are reviewed to provide a substrate for modeling the observed phenomena. Modeling is based on the properties of the integrate-and-fire neural element in the presence of an input current of Gaussian distribution. The theory of stochastic processes provides an expression for the spike emission rate as a function of the mean and the variance of the current distribution. Mean-field theory is then based on the assumption that spike emission processes in different neurons in the network are independent, and hence the input current to a neuron is Gaussian. Consequently, the dynamics of the interacting network is reduced to the computation of the mean and the variance of the current received by a cell of a given population in terms of the constitutive parameters of the network and the emission rates of the neurons in the different populations. Within this logic we analyze the stationary states of an unstructured network, corresponding to spontaneous activity, and show that it can be stable only if locally the net input current of a neuron is inhibitory. This is then tested against simulations, and agreement is found to be excellent down to great detail, confirming the independence hypothesis. On top of stable spontaneous activity, keeping all parameters fixed, training is described by (Hebbian) modification of synapses between neurons responsive to a stimulus and other neurons in the module: synapses are potentiated between two excited neurons and depressed between an excited and a quiescent neuron
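
    The transfer function at the heart of such mean-field reductions, the spike rate as a function of the mean and variance of the Gaussian input current, can also be estimated directly by simulating a single integrate-and-fire neuron; the sketch below uses Euler-Maruyama integration of an Ornstein-Uhlenbeck membrane potential, with arbitrary parameter values rather than those of the cortical module discussed.

      import numpy as np

      def lif_rate(mu, sigma, tau=0.02, theta=1.0, vr=0.0, dt=1e-4, T=20.0):
          """Firing rate (Hz) of a leaky integrate-and-fire neuron driven by
          Gaussian current with mean mu and stationary std sigma."""
          rng = np.random.default_rng(3)
          v, spikes = vr, 0
          for _ in range(int(T / dt)):
              # OU dynamics: relax toward mu with time constant tau, plus noise
              v += (dt / tau) * (mu - v) + sigma * np.sqrt(2 * dt / tau) * rng.standard_normal()
              if v >= theta:          # threshold crossing: emit spike and reset
                  v, spikes = vr, spikes + 1
          return spikes / T

      print(lif_rate(mu=0.8, sigma=0.5))   # subthreshold mean: noise-driven firing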

  18. Experience and extensions to the ASM2 family of models.

    PubMed

    Dudley, J; Buck, G; Ashley, R; Jack, A

    2002-01-01

    The development of ASM2 has created a complex model for biological phosphorus removal. Most of the published work on calibrating this model has focused on the design of experiments to maximise information with which to calibrate the model, or on the use of hourly data collected around and within an aeration tank. But many sewage works do not collect such data, nor do they have such instrumentation. The application of ASM2 with sparse data collected at a low frequency, and mostly only input-output, is considered in this paper, based on data collected at a Swedish sewage works. This paper shows that ASM2 can be calibrated with such measurements. This paper also examines a modification to ASM2d to better handle heterotrophic usage of volatile fatty acids, and the use of this model to study the effects of large increases in in-sewer storage on sewage treatment works. Concern about the generation of large quantities of VFAs, and their effect on the sewage treatment processes, proved unfounded. PMID:11989871

  19. Kinetic modeling of molecular motors: pause model and parameter determination from single-molecule experiments

    NASA Astrophysics Data System (ADS)

    Morin, José A.; Ibarra, Borja; Cao, Francisco J.

    2016-05-01

    Single-molecule manipulation experiments of molecular motors provide essential information about the rate and conformational changes of the steps of the reaction located along the manipulation coordinate. This information is not always sufficient to define a particular kinetic cycle. Recent single-molecule experiments with optical tweezers showed that the DNA unwinding activity of a Phi29 DNA polymerase mutant presents a complex pause behavior, which includes short and long pauses. Here we show that different kinetic models, considering different connections between the active and the pause states, can explain the experimental pause behavior. Both the two independent pause model and the two connected pause model are able to describe the pause behavior of a mutated Phi29 DNA polymerase observed in an optical tweezers single-molecule experiment. For the two independent pause model all parameters are fixed by the observed data, while for the more general two connected pause model there is a range of values of the parameters compatible with the observed data (which can be expressed in terms of two of the rates and their force dependencies). This general model includes models with indirect entry and exit to the long-pause state, and also models with cycling in both directions. Additionally, assuming that detailed balance is verified, which forbids cycling, this reduces the ranges of the values of the parameters (which can then be expressed in terms of one rate and its force dependency). The resulting model interpolates between the independent pause model and the indirect entry and exit to the long-pause state model

  20. A random interacting network model for complex networks

    NASA Astrophysics Data System (ADS)

    Goswami, Bedartha; Shekatkar, Snehal M.; Rheinwalt, Aljoscha; Ambika, G.; Kurths, Jürgen

    2015-12-01

    We propose a RAndom Interacting Network (RAIN) model to study the interactions between a pair of complex networks. The model involves two major steps: (i) the selection of a pair of nodes, one from each network, based on intra-network node-based characteristics, and (ii) the placement of a link between selected nodes based on the similarity of their relative importance in their respective networks. Node selection is based on a selection fitness function and node linkage is based on a linkage probability defined on the linkage scores of nodes. The model allows us to relate within-network characteristics to between-network structure. We apply the model to the interaction between the USA and Schengen airline transportation networks (ATNs). Our results indicate that two mechanisms: degree-based preferential node selection and degree-assortative link placement are necessary to replicate the observed inter-network degree distributions as well as the observed inter-network assortativity. The RAIN model offers the possibility to test multiple hypotheses regarding the mechanisms underlying network interactions. It can also incorporate complex interaction topologies. Furthermore, the framework of the RAIN model is general and can be potentially adapted to various real-world complex systems.
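
    A compressed sketch of the two steps, with the simplifying assumption that node degree serves both as the selection fitness and as the linkage score (the model as described allows more general fitness and score functions); all sizes and degree sequences below are invented.

      import numpy as np

      rng = np.random.default_rng(4)

      def rain_links(deg_a, deg_b, n_links):
          """Step (i): pick one node per network preferentially by degree.
          Step (ii): link them with probability rising as their normalized
          degrees (relative importance) agree."""
          pa, pb = deg_a / deg_a.sum(), deg_b / deg_b.sum()
          ra, rb = deg_a / deg_a.max(), deg_b / deg_b.max()
          links = set()
          while len(links) < n_links:
              i = rng.choice(len(deg_a), p=pa)              # node selection
              j = rng.choice(len(deg_b), p=pb)
              if rng.random() < 1.0 - abs(ra[i] - rb[j]):   # linkage probability
                  links.add((i, j))
          return links

      deg_a = rng.integers(1, 50, size=100).astype(float)
      deg_b = rng.integers(1, 50, size=80).astype(float)
      print(len(rain_links(deg_a, deg_b, 30)))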

  3. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  4. Towards an assessment of simple global marine biogeochemical models of different complexity

    NASA Astrophysics Data System (ADS)

    Kriest, I.; Khatiwala, S.; Oschlies, A.

    2010-09-01

    We present a suite of experiments with a hierarchy of biogeochemical models of increasing complexity coupled to an offline global ocean circulation model based on the “transport matrix method”. Biogeochemical model structures range from simple nutrient models to more complex nutrient-phytoplankton-zooplankton-detritus-DOP models. The models’ skill is assessed by various misfit functions with respect to observed phosphate and oxygen distributions. While there is generally good agreement between the different metrics employed, an exception is a cost function based on the relative model-data misfit. We show that alterations in parameters and/or structure of the models - especially those that change particle export or remineralization profile - affect subsurface and mesopelagic phosphate and oxygen, particularly in the upwelling regions. Visual inspection of simulated biogeochemical tracer distributions as well as the evaluation of different cost functions suggest that increasing complexity of untuned, unoptimized models, simulated with parameters commonly used in large-scale model studies does not necessarily improve performance. Instead, variations in individual model parameters may be of equal, if not greater, importance.

  5. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  6. Boolean modeling of collective effects in complex networks.

    PubMed

    Norrell, Johannes; Socolar, Joshua E S

    2009-06-01

    Complex systems are often modeled as Boolean networks in attempts to capture their logical structure and reveal its dynamical consequences. Approximating the dynamics of continuous variables by discrete values and Boolean logic gates may, however, introduce dynamical possibilities that are not accessible to the original system. We show that large random networks of variables coupled through continuous transfer functions often fail to exhibit the complex dynamics of corresponding Boolean models in the disordered (chaotic) regime, even when each individual function appears to be a good candidate for Boolean idealization. A suitably modified Boolean theory explains the behavior of systems in which information does not propagate faithfully down certain chains of nodes. Model networks incorporating calculated or directly measured transfer functions reported in the literature on transcriptional regulation of genes are described by the modified theory. PMID:19658525
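
    For reference, the discrete idealization in question is the classic random Boolean network; below is a minimal sketch of synchronous updates (N = 50 nodes, K = 2 inputs per node, all choices illustrative):

      import numpy as np

      rng = np.random.default_rng(7)
      N, K = 50, 2                                   # nodes, inputs per node

      inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
      tables = rng.integers(0, 2, size=(N, 2 ** K))  # a random Boolean rule per node
      state = rng.integers(0, 2, size=N)

      def step(state):
          # Encode each node's K = 2 input values as an index into its rule table.
          idx = state[inputs[:, 0]] * 2 + state[inputs[:, 1]]
          return tables[np.arange(N), idx]

      for _ in range(20):
          state = step(state)
      print(state)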

  7. Entropy, complexity, and Markov diagrams for random walk cancer models

    PubMed Central

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-01-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential. PMID:25523357
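
    A sketch of the two quantities at the core of this construction, computed for a toy four-site transition matrix (the values are invented; the paper's matrices cover many more anatomical sites):

      import numpy as np

      # Toy transition matrix over 4 anatomical sites; rows sum to 1.
      P = np.array([[0.10, 0.50, 0.20, 0.20],
                    [0.30, 0.20, 0.30, 0.20],
                    [0.20, 0.30, 0.40, 0.10],
                    [0.25, 0.25, 0.25, 0.25]])

      # Steady-state distribution: the left eigenvector of P for eigenvalue 1,
      # which plays the role of the autopsy tumor-distribution data.
      w, v = np.linalg.eig(P.T)
      pi = np.real(v[:, np.argmax(np.real(w))])
      pi = pi / pi.sum()

      entropy = -np.sum(pi * np.log2(pi))   # Shannon entropy, bits
      print(pi, entropy)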

  8. Entropy, complexity, and Markov diagrams for random walk cancer models

    NASA Astrophysics Data System (ADS)

    Newton, Paul K.; Mason, Jeremy; Hurt, Brian; Bethel, Kelly; Bazhenova, Lyudmila; Nieva, Jorge; Kuhn, Peter

    2014-12-01

    The notion of entropy is used to compare the complexity associated with 12 common cancers based on metastatic tumor distribution autopsy data. We characterize power-law distributions, entropy, and Kullback-Leibler divergence associated with each primary cancer as compared with data for all cancer types aggregated. We then correlate entropy values with other measures of complexity associated with Markov chain dynamical systems models of progression. The Markov transition matrix associated with each cancer corresponds to a directed graph model where nodes are anatomical locations where a metastatic tumor could develop, and edge weightings are transition probabilities of progression from site to site. The steady-state distribution corresponds to the autopsy data distribution. Entropy correlates well with the overall complexity of the reduced directed graph structure for each cancer and with a measure of systemic interconnectedness of the graph, called graph conductance. The models suggest that grouping cancers according to their entropy values, with skin, breast, kidney, and lung cancers being prototypical high entropy cancers, stomach, uterine, pancreatic and ovarian being mid-level entropy cancers, and colorectal, cervical, bladder, and prostate cancers being prototypical low entropy cancers, provides a potentially useful framework for viewing metastatic cancer in terms of predictability, complexity, and metastatic potential.

  9. Complex 2D matrix model and geometrical map on the complex-Nc plane

    NASA Astrophysics Data System (ADS)

    Nawa, Kanabu; Ozaki, Sho; Nagahiro, Hideko; Jido, Daisuke; Hosaka, Atsushi

    2013-08-01

    We study the parameter dependence of the internal structure of resonance states by formulating a complex two-dimensional (2D) matrix model, where the two dimensions represent two levels of resonances. We calculate a critical value of the parameter at which a "nature transition" with character exchange occurs between two resonance states, from the viewpoint of geometry on the complex-parameter space. Such a critical value is useful for identifying the internal structure of resonance states with variation of the parameter in the system. We apply the model to analyze the internal structure of hadrons with variation of the color number N_c from infinity to the realistic value of 3. By regarding 1/N_c as the variable parameter in our model, we calculate a critical color number for the nature transition between hadronic states in terms of a quark-antiquark pair and a mesonic molecule as exotics from the geometry on the complex-N_c plane. For the large-N_c effective theory, we employ the chiral Lagrangian induced by holographic QCD with a D4/D8/anti-D8 multi-D-brane system in type IIA superstring theory.

  10. Coupled Thermal-Chemical-Mechanical Modeling of Validation Cookoff Experiments

    SciTech Connect

    ERIKSON,WILLIAM W.; SCHMITT,ROBERT G.; ATWOOD,A.I.; CURRAN,P.D.

    2000-11-27

    -dominated failure mode experienced in the tests. High-pressure burning rates are needed for more detailed post-ignition studies. Sub-models for chemistry, mechanical response and burn dynamics need to be validated against data from less complex experiments. The sub-models can then be used in integrated analysis for comparison with experimental data taken during integrated tests.

  11. Complex Behavior in Simple Models of Biological Coevolution

    NASA Astrophysics Data System (ADS)

    Rikvold, Per Arne

    We explore the complex dynamical behavior of simple predator-prey models of biological coevolution that account for interspecific and intraspecific competition for resources, as well as adaptive foraging behavior. In long kinetic Monte Carlo simulations of these models we find quite robust 1/f-like noise in species diversity and population sizes, as well as power-law distributions for the lifetimes of individual species and the durations of quiet periods of relative evolutionary stasis. In one model, based on the Holling Type II functional response, adaptive foraging produces a metastable low-diversity phase and a stable high-diversity phase.

  12. Modeling and Algorithmic Approaches to Constitutively-Complex, Microstructured Fluids

    SciTech Connect

    Miller, Gregory H.; Forest, Gregory

    2011-12-22

    We present a new multiscale model for complex fluids based on three scales: microscopic, kinetic, and continuum. We choose the microscopic level as Kramers' bead-rod model for polymers, which we describe as a system of stochastic differential equations with an implicit constraint formulation. The associated Fokker-Planck equation is then derived, and adiabatic elimination removes the fast momentum coordinates. Approached in this way, the kinetic level reduces to a dispersive drift equation. The continuum level is modeled with a finite volume Godunov-projection algorithm. We demonstrate computation of viscoelastic stress divergence using this multiscale approach.

  13. Modeling of Carbohydrate Binding Modules Complexed to Cellulose

    SciTech Connect

    Nimlos, M. R.; Beckham, G. T.; Bu, L.; Himmel, M. E.; Crowley, M. F.; Bomble, Y. J.

    2012-01-01

    Modeling results are presented for the interaction of two carbohydrate binding modules (CBMs) with cellulose. The family 1 CBM from Trichoderma reesei's Cel7A cellulase was modeled using molecular dynamics to confirm that this protein selectively binds to the hydrophobic (100) surface of cellulose fibrils and to determine the energetics and mechanisms for locating this surface. The binding of the family 4 CBM from the CbhA complex of Clostridium thermocellum was also modeled. There is a cleft in this protein, which may accommodate a cellulose chain that is detached from crystalline cellulose. This possibility is explored using molecular dynamics.

  14. Rethinking the Psychogenic Model of Complex Regional Pain Syndrome: Somatoform Disorders and Complex Regional Pain Syndrome

    PubMed Central

    Hill, Renee J.; Chopra, Pradeep; Richardi, Toni

    2012-01-01

    Explaining the etiology of Complex Regional Pain Syndrome (CRPS) from the psychogenic model is exceedingly unsophisticated, because neurocognitive deficits, neuroanatomical abnormalities, and distortions in cognitive mapping are features of CRPS pathology. More importantly, many people who have developed CRPS have no history of mental illness. The psychogenic model offers comfort to physicians and mental health practitioners (MHPs) who have difficulty understanding pain maintained by newly uncovered neuroinflammatory processes. With increased education about CRPS through a biopsychosocial perspective, both physicians and MHPs can better diagnose, treat, and manage CRPS symptomatology. PMID:24223338

  15. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  16. Modeling of E-164X Experiment

    SciTech Connect

    Deng, S.; Muggli, P.; Barnes, C.D.; Clayton, C.E.; Decker, F.J.; Fonseca, R.A.; Huang, C.; Hogan, M.J.; Iverson, R.; Johnson, D.K.; Joshi, C.; Katsouleas, T.; Krejcik, P.; Lu, W.; Marsh, K.A.; Mori, W.B.; O'Connell, C.; Oz, E.; Tsung, F.; Zhou, M.M.; /Southern California U. /UCLA /SLAC /Lisbon, IST

    2005-06-28

    In current plasma-based accelerator experiments, very short bunches (100-150 μm for E164 [1] and 10-20 μm for the E164X [2] experiment at the Stanford Linear Accelerator Center (SLAC)) are used to drive plasma wakes and achieve high accelerating gradients, on the order of 10-100 GV/m. The self-fields of such intense bunches can tunnel ionize neutral gases and create the plasma [3,4]. This may completely change the physics of plasma wakes. A 3-D object-oriented fully parallel PIC code, OSIRIS [5], is used to simulate various gas types, beam parameters, etc. to support the design of the experiments. The simulation results for real experiment parameters are presented.

  17. Modeling of E-164X Experiment

    SciTech Connect

    Deng, S.; Muggli, P.; Katsouleas, T.; Oz, E.; Barnes, C.D.; Decker, F.J.; Hogan, M.J.; Iverson, R.; Krejcik, P.; O'Connell, C.; Clayton, C.E.; Huang, C.; Johnson, D.K.; Joshi, C.; Lu, W.; Marsh, K.A.; Mori, W.B.; Tsung, F.; Zhou, M.M.; Fonseca, R.A.

    2004-12-07

    In current plasma-based accelerator experiments, very short bunches (100-150 μm for E164 and 10-20 μm for the E164X experiment at the Stanford Linear Accelerator Center (SLAC)) are used to drive plasma wakes and achieve high accelerating gradients, on the order of 10-100 GV/m. The self-fields of such intense bunches can tunnel ionize neutral gases and create the plasma. This may completely change the physics of plasma wakes. A 3-D object-oriented fully parallel PIC code, OSIRIS, is used to simulate various gas types, beam parameters, etc. to support the design of the experiments. The simulation results for real experiment parameters are presented.

  18. Design of Low Complexity Model Reference Adaptive Controllers

    NASA Technical Reports Server (NTRS)

    Hanson, Curt; Schaefer, Jacob; Johnson, Marcus; Nguyen, Nhan

    2012-01-01

    Flight research experiments have demonstrated that adaptive flight controls can be an effective technology for improving aircraft safety in the event of failures or damage. However, the nonlinear, time-varying nature of adaptive algorithms continues to challenge traditional methods for the verification and validation testing of safety-critical flight control systems. Increasingly complex adaptive control theories and designs are emerging, but they only make the testing challenges more difficult. A potential first step toward the acceptance of adaptive flight controllers by aircraft manufacturers, operators, and certification authorities is a very simple design that operates as an augmentation to a non-adaptive baseline controller. Three such controllers were developed as part of a National Aeronautics and Space Administration flight research experiment to determine the appropriate level of complexity required to restore acceptable handling qualities to an aircraft that has suffered failures or damage. The controllers consist of the same basic design, but incorporate incrementally increasing levels of complexity. Derivations of the controllers and their adaptive parameter update laws are presented along with details of the controllers' implementations.
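
    As a point of reference for what a very simple model reference adaptive augmentation looks like, here is a textbook first-order MRAC with Lyapunov-based update laws. This is a generic sketch, not one of the three NASA designs, and the plant, reference model, and gain values are all invented:

      # First-order MRAC: plant y' = -a*y + b*u tracks model ym' = -am*ym + bm*r
      dt, gamma = 1e-3, 5.0
      a, b = 1.0, 3.0            # "unknown" plant parameters (b > 0 assumed)
      am, bm = 4.0, 4.0          # reference model parameters

      y, ym, th1, th2 = 0.0, 0.0, 0.0, 0.0
      for k in range(int(20 / dt)):
          r = 1.0 if (k * dt) % 10.0 < 5.0 else -1.0   # square-wave command
          u = th1 * r + th2 * y                        # adaptive control law
          e = y - ym                                   # tracking error
          th1 += dt * (-gamma * e * r)                 # Lyapunov-based updates
          th2 += dt * (-gamma * e * y)
          y += dt * (-a * y + b * u)                   # Euler integration
          ym += dt * (-am * ym + bm * r)

      print(f"final error {e:.4f}, th1 {th1:.2f} (ideal {bm/b:.2f}), "
            f"th2 {th2:.2f} (ideal {(a - am)/b:.2f})")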

  19. Bridging Mechanistic and Phenomenological Models of Complex Biological Systems

    PubMed Central

    Transtrum, Mark K.; Qiu, Peng

    2016-01-01

    The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for the adaptation behavior exhibited by the EGFR pathway. From a 48-parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and the recovery time of the system, which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics, in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior. PMID:27187545

  20. Bridging Mechanistic and Phenomenological Models of Complex Biological Systems.

    PubMed

    Transtrum, Mark K; Qiu, Peng

    2016-05-01

    The inherent complexity of biological systems gives rise to complicated mechanistic models with a large number of parameters. On the other hand, the collective behavior of these systems can often be characterized by a relatively small number of phenomenological parameters. We use the Manifold Boundary Approximation Method (MBAM) as a tool for deriving simple phenomenological models from complicated mechanistic models. The resulting models are not black boxes, but remain expressed in terms of the microscopic parameters. In this way, we explicitly connect the macroscopic and microscopic descriptions, characterize the equivalence class of distinct systems exhibiting the same range of collective behavior, and identify the combinations of components that function as tunable control knobs for the behavior. We demonstrate the procedure for the adaptation behavior exhibited by the EGFR pathway. From a 48-parameter mechanistic model, the system can be effectively described by a single adaptation parameter τ characterizing the ratio of time scales for the initial response and the recovery time of the system, which can in turn be expressed as a combination of microscopic reaction rates, Michaelis-Menten constants, and biochemical concentrations. The situation is not unlike modeling in physics, in which microscopically complex processes can often be renormalized into simple phenomenological models with only a few effective parameters. The proposed method additionally provides a mechanistic explanation for non-universal features of the behavior. PMID:27187545

  1. Relating equivalence relations to equivalence relations: A relational framing model of complex human functioning

    PubMed Central

    Barnes, Dermot; Hegarty, Neil; Smeets, Paul M.

    1997-01-01

    The current study aimed to develop a behavior-analytic model of analogical reasoning. In Experiments 1 and 2, subjects (adults and children) were trained and tested for the formation of four three-member equivalence relations using a delayed matching-to-sample procedure. All subjects (Experiments 1 and 2) were exposed to tests that examined relations between equivalence and non-equivalence relations. For example, on an equivalence-equivalence relation test, the complex sample B1/C1 and the two complex comparisons B3/C3 and B3/C4 were used, and on a nonequivalence-nonequivalence relation test the complex sample B1/C2 was presented with the same two comparisons. All subjects consistently related equivalence relations to equivalence relations and nonequivalence relations to nonequivalence relations (e.g., picked B3/C3 in the presence of B1/C1 and picked B3/C4 in the presence of B1/C2). In Experiment 3, the equivalence responding, the equivalence-equivalence responding, and the nonequivalence-nonequivalence responding were successfully brought under contextual control. Finally, it was shown that the contextual cues could function successfully as comparisons, and the complex samples and comparisons could function successfully as contextual cues and samples, respectively. These data extend the equivalence paradigm and contribute to a behavior-analytic interpretation of analogical reasoning and complex human functioning in general. PMID:22477120

  2. Modeling of ion complexation and extraction using substructural molecular fragments

    PubMed

    Solov'ev; Varnek; Wipff

    2000-05-01

    A substructural molecular fragment (SMF) method has been developed to model the relationships between the structure of organic molecules and their thermodynamic parameters of complexation or extraction. The method is based on the splitting of a molecule into fragments, and on calculations of their contributions to a given property. It uses two types of fragments: atom/bond sequences and "augmented atoms" (atoms with their nearest neighbors). The SMF approach is tested on physical properties of C2-C9 alkanes (boiling point, molar volume, molar refraction, heat of vaporization, surface tension, melting point, critical temperature, and critical pressures) and on octanol/water partition coefficients. Then, it is applied to the assessment of (i) complexation stability constants of alkali cations with crown ethers and phosphoryl-containing podands, and of beta-cyclodextrins with mono- and 1,4-disubstituted benzenes, and (ii) solvent extraction constants for the complexes of uranyl cation by phosphoryl-containing ligands. PMID:10850791
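
    The additive-fragment idea reduces, in its simplest form, to a linear least-squares problem: count each fragment's occurrences per molecule and solve for per-fragment contributions. A sketch with invented fragment counts and property values (the SMF method itself defines the fragment types and fitting procedure more carefully):

      import numpy as np

      # Rows: molecules; columns: counts of each substructural fragment
      # (atom/bond sequences or "augmented atoms"); values illustrative.
      X = np.array([[2, 1, 0],
                    [3, 1, 1],
                    [4, 2, 1],
                    [5, 2, 2]], dtype=float)
      y = np.array([36.1, 68.7, 98.4, 125.7])   # e.g. boiling points, degC

      # Each fragment's additive contribution, by least squares.
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("fragment contributions:", coef)
      print("predicted:", X @ coef)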

  3. Complexity and robustness in hypernetwork models of metabolism.

    PubMed

    Pearcy, Nicole; Chuzhanova, Nadia; Crofts, Jonathan J

    2016-10-01

    Metabolic reaction data is commonly modelled using a complex network approach, whereby nodes represent the chemical species present within the organism of interest, and connections are formed between those nodes participating in the same chemical reaction. Unfortunately, such an approach provides an inadequate description of the metabolic process in general, as a typical chemical reaction will involve more than two nodes, thus risking oversimplification of the system of interest in a potentially significant way. In this paper, we employ a complex hypernetwork formalism to investigate the robustness of bacterial metabolic hypernetworks by extending the concept of a percolation process to hypernetworks. Importantly, this provides a novel method for determining the robustness of these systems and thus for quantifying their resilience to random attacks/errors. Moreover, we performed a site percolation analysis on a large cohort of bacterial metabolic networks and found that hypernetworks that evolved in more variable environments displayed increased levels of robustness and topological complexity. PMID:27354314
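
    One simple way to extend site percolation to a hypernetwork, in the spirit of the abstract, is to delete a random fraction of species and count the reactions (hyperedges) whose participants all survive; the structure below is randomly generated and the survival rule is an illustrative choice, not necessarily the paper's exact formulation:

      import numpy as np

      rng = np.random.default_rng(3)

      # A metabolic hypernetwork as a list of hyperedges (reactions), each a
      # set of chemical species; sizes and membership are illustrative.
      n_species = 30
      reactions = [set(rng.choice(n_species, rng.integers(2, 5), replace=False))
                   for _ in range(40)]

      def surviving_reactions(removed_fraction):
          # Site percolation: remove a random fraction of species; a reaction
          # survives only if all of its participants remain.
          n_keep = int(n_species * (1 - removed_fraction))
          keep = set(rng.choice(n_species, n_keep, replace=False))
          return sum(r <= keep for r in reactions)

      for f in (0.1, 0.3, 0.5):
          print(f, surviving_reactions(f))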

  4. Parameter uncertainty and interaction in complex environmental models

    NASA Astrophysics Data System (ADS)

    Spear, Robert C.; Grieb, Thomas M.; Shang, Nong

    1994-11-01

    Recently developed models for the estimation of risks arising from the release of toxic chemicals from hazardous waste sites are inherently complex both structurally and parametrically. To better understand the impact of uncertainty and interaction in the high-dimensional parameter spaces of these models, the set of procedures termed regional sensitivity analysis has been extended and applied to the groundwater pathway of the MMSOILS model. The extension consists of a tree-structured density estimation technique which allows the characterization of complex interaction in that portion of the parameter space which gives rise to successful simulation. Results show that the parameter space can be partitioned into small, densely populated regions and relatively large, sparsely populated regions. From the high-density regions one can identify the important or controlling parameters as well as the interaction between parameters in different local areas of the space. This new tool can provide guidance in the analysis and interpretation of site-specific application of these complex models.
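
    The core of regional sensitivity analysis, before the tree-structured extension, is Monte Carlo filtering: sample the parameter space, split the runs into "successful" and "unsuccessful", and ask which parameters are distributed differently in the two groups. A sketch with an invented stand-in model and an arbitrary success criterion:

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(11)

      def model(theta):
          # Stand-in for the groundwater pathway model: parameters -> risk.
          return theta[:, 0] * np.exp(-theta[:, 1]) + 0.1 * theta[:, 2]

      theta = rng.uniform(0, 1, size=(5000, 3))   # 3 uncertain parameters
      risk = model(theta)
      behavioral = risk < 0.5                     # "successful simulation" rule

      # A parameter is influential if its distribution differs between groups.
      for i in range(3):
          res = ks_2samp(theta[behavioral, i], theta[~behavioral, i])
          print(f"parameter {i}: KS = {res.statistic:.3f}, p = {res.pvalue:.2g}")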

  5. An Adaptive Complex Network Model for Brain Functional Networks

    PubMed Central

    Gomez Portillo, Ignacio J.; Gleiser, Pablo M.

    2009-01-01

    Brain functional networks are graph representations of activity in the brain, where the vertices represent anatomical regions and the edges their functional connectivity. These networks present a robust small-world topological structure, characterized by highly integrated modules connected sparsely by long-range links. Recent studies showed that other topological properties, such as the degree distribution and the presence (or absence) of a hierarchical structure, are not robust and show different intriguing behaviors. In order to understand the basic ingredients necessary for the emergence of these complex network structures, we present an adaptive complex network model for human brain functional networks. The microscopic units of the model are dynamical nodes that represent active regions of the brain, whose interaction gives rise to complex network structures. The links between the nodes are chosen following an adaptive algorithm that establishes connections between dynamical elements with similar internal states. We show that the model is able to describe topological characteristics of human brain networks obtained from functional magnetic resonance imaging studies. In particular, when the dynamical rules of the model allow for integrated processing over the entire network, scale-free non-hierarchical networks with well-defined communities emerge. On the other hand, when the dynamical rules restrict the information to a local neighborhood, communities cluster together into larger ones, giving rise to a hierarchical structure with a truncated power-law degree distribution. PMID:19738902
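
    A loose sketch of the adaptive wiring rule described above: nodes carry a scalar internal state, links are added between pairs whose states are similar, and the states in turn drift toward those of network neighbours. Everything here (phase-like states, thresholds, update rules) is an illustrative guess at the general mechanism, not the paper's actual dynamics:

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(5)
      N, steps = 100, 2000

      state = rng.uniform(0, 2 * np.pi, N)   # internal state of each node
      g = nx.empty_graph(N)

      for _ in range(steps):
          i, j = rng.choice(N, 2, replace=False)
          # Adaptive rule: connect dynamical elements with similar states.
          if abs(np.angle(np.exp(1j * (state[i] - state[j])))) < 0.3:
              g.add_edge(i, j)
          # Node dynamics: weak pull toward the mean state of neighbours.
          for k in (i, j):
              nbrs = list(g.neighbors(k))
              if nbrs:
                  state[k] += 0.05 * np.mean(np.sin(state[nbrs] - state[k]))
          state += rng.normal(0, 0.01, N)    # noisy drift

      print(g.number_of_edges(), nx.number_connected_components(g))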

  6. Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds

    NASA Astrophysics Data System (ADS)

    Schnaiter, M.; Järvinen, E.; Vochezer, P.; Abdelmonem, A.; Wagner, R.; Jourdan, O.; Mioche, G.; Shcherbakov, V. N.; Schmitt, C. G.; Tricoli, U.; Ulanowski, Z.; Heymsfield, A. J.

    2015-11-01

    This study reports on the origin of ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the -40 to -60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Ice crystal complexity was deduced from measurements of spatially resolved single particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high ice crystal complexity dominates the microphysics of the simulated clouds and that the degree of this complexity depends on the available water vapour during crystal growth. Indications were found that the crystal complexity is influenced by unfrozen H2SO4/H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the Polar Nephelometer (PN) probe of LaMP and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side- and backward scattering directions, resulting in low asymmetry parameters g around 0.78. It was found that these functions have a rather low sensitivity to the crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

  7. Cloud chamber experiments on the origin of ice crystal complexity in cirrus clouds

    NASA Astrophysics Data System (ADS)

    Schnaiter, Martin; Järvinen, Emma; Vochezer, Paul; Abdelmonem, Ahmed; Wagner, Robert; Jourdan, Olivier; Mioche, Guillaume; Shcherbakov, Valery N.; Schmitt, Carl G.; Tricoli, Ugo; Ulanowski, Zbigniew; Heymsfield, Andrew J.

    2016-04-01

    This study reports on the origin of small-scale ice crystal complexity and its influence on the angular light scattering properties of cirrus clouds. Cloud simulation experiments were conducted at the AIDA (Aerosol Interactions and Dynamics in the Atmosphere) cloud chamber of the Karlsruhe Institute of Technology (KIT). A new experimental procedure was applied to grow and sublimate ice particles at defined super- and subsaturated ice conditions and for temperatures in the -40 to -60 °C range. The experiments were performed for ice clouds generated via homogeneous and heterogeneous initial nucleation. Small-scale ice crystal complexity was deduced from measurements of spatially resolved single particle light scattering patterns by the latest version of the Small Ice Detector (SID-3). It was found that a high crystal complexity dominates the microphysics of the simulated clouds and the degree of this complexity is dependent on the available water vapor during the crystal growth. Indications were found that the small-scale crystal complexity is influenced by unfrozen H2SO4 / H2O residuals in the case of homogeneous initial ice nucleation. Angular light scattering functions of the simulated ice clouds were measured by the two currently available airborne polar nephelometers: the polar nephelometer (PN) probe of Laboratoire de Métérologie et Physique (LaMP) and the Particle Habit Imaging and Polar Scattering (PHIPS-HALO) probe of KIT. The measured scattering functions are featureless and flat in the side and backward scattering directions. It was found that these functions have a rather low sensitivity to the small-scale crystal complexity for ice clouds that were grown under typical atmospheric conditions. These results have implications for the microphysical properties of cirrus clouds and for the radiative transfer through these clouds.

  8. Cx-02 Program, workshop on modeling complex systems

    USGS Publications Warehouse

    Mossotti, Victor G.; Barragan, Jo Ann; Westergard, Todd D.

    2003-01-01

    This publication contains the abstracts and program for the workshop on complex systems that was held on November 19-21, 2002, in Reno, Nevada. Complex systems are ubiquitous within the realm of the earth sciences. Geological systems consist of a multiplicity of linked components with nested feedback loops; the dynamics of these systems are non-linear, iterative, multi-scale, and operate far from equilibrium. That notwithstanding, it appears that, with the exception of papers on seismic studies, geology and geophysics work has been disproportionately underrepresented at regional and national meetings on complex systems relative to papers in the life sciences. This is somewhat puzzling because geologists and geophysicists are, in many ways, preadapted to thinking in terms of complex system mechanisms. Geologists and geophysicists think about processes involving large volumes of rock below the sunlit surface of Earth, the accumulated consequence of processes extending hundreds of millions of years into the past. Not only do geologists think in the abstract by virtue of the vast time spans involved; most of the evidence is also out of sight. A primary goal of this workshop is to begin to bridge the gap between the Earth sciences and life sciences through demonstration of the universality of complex systems science, both philosophically and in model structures.

  9. Mechanistic modeling confronts the complexity of molecular cell biology.

    PubMed

    Phair, Robert D

    2014-11-01

    Mechanistic modeling has the potential to transform how cell biologists contend with the inescapable complexity of modern biology. I am a physiologist-electrical engineer-systems biologist who has been working at the level of cell biology for the past 24 years. This perspective aims 1) to convey why we build models, 2) to enumerate the major approaches to modeling and their philosophical differences, 3) to address some recurrent concerns raised by experimentalists, and then 4) to imagine a future in which teams of experimentalists and modelers build, and subject to exhaustive experimental tests, models covering the entire spectrum from molecular cell biology to human pathophysiology. There is, in my view, no technical obstacle to this future, but it will require some plasticity in the biological research mind-set. PMID:25368428

  10. Paradigms of Complexity in Modelling of Fluid and Kinetic Processes

    NASA Astrophysics Data System (ADS)

    Diamond, P. H.

    2006-10-01

    The need to discuss and compare a wide variety of models of fluid and kinetic processes is motivated by the astonishingly wide variety of complex physical phenomena that occur in plasmas in nature. Such phenomena include, but are not limited to: turbulence, turbulent transport and mixing, reconnection, and structure formation. In this talk, I will review how various fluid and kinetic models come to grips with the essential physics of these phenomena. For example, I will discuss how the idea of a turbulent cascade and the concept of an ``eddy'' are realized quite differently in fluid and Vlasov models. Attention will be placed primarily on physical processes, the physics content of various models, and the consequences of choices in model construction, rather than on the intrinsic mathematical structure of the theories. Examples will be chosen from fusion, laboratory, space, and astrophysical plasmas.

  11. RHIC injector complex online model status and plans

    SciTech Connect

    Schoefer,V.; Ahrens, L.; Brown, K.; Morris, J.; Nemesure, S.

    2009-05-04

    An online modeling system is being developed for the RHIC injector complex, which consists of the Booster, the AGS, and the transfer lines connecting the Booster to the AGS and the AGS to RHIC. Historically the injectors have been operated using static values from design specifications or offline model runs, but tighter beam optics constraints required by polarized proton operations (e.g., accelerating with near-integer tunes) have necessitated a more dynamic system. An online model server for the AGS has been implemented using MAD-X [1] as the model engine, with plans to extend the system to the Booster and the injector transfer lines and to add the option of calculating optics using the Polymorphic Tracking Code (PTC [2]) as the model engine.

  12. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking

    PubMed Central

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-01-01

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582

  13. GalaxyRefineComplex: Refinement of protein-protein complex model structures driven by interface repacking.

    PubMed

    Heo, Lim; Lee, Hasup; Seok, Chaok

    2016-01-01

    Protein-protein docking methods have been widely used to gain an atomic-level understanding of protein interactions. However, docking methods that employ low-resolution energy functions are popular because of computational efficiency. Low-resolution docking tends to generate protein complex structures that are not fully optimized. GalaxyRefineComplex takes such low-resolution docking structures and refines them to improve model accuracy in terms of both interface contact and inter-protein orientation. This refinement method allows flexibility at the protein interface and in the overall docking structure to capture conformational changes that occur upon binding. Symmetric refinement is also provided for symmetric homo-complexes. This method was validated by refining models produced by available docking programs, including ZDOCK and M-ZDOCK, and was successfully applied to CAPRI targets in a blind fashion. An example of using the refinement method with an existing docking method for ligand binding mode prediction of a drug target is also presented. A web server that implements the method is freely available at http://galaxy.seoklab.org/refinecomplex. PMID:27535582

  14. A new macrocyclic terbium(III) complex for use in RNA footprinting experiments

    PubMed Central

    Belousoff, Matthew J.; Ung, Phuc; Forsyth, Craig M.; Tor, Yitzhak; Spiccia, Leone; Graham, Bim

    2009-01-01

    Reaction of terbium triflate with a heptadentate ligand derivative of cyclen, L1 = 2-[7-ethyl-4,10-bis(isopropylcarbamoylmethyl)-1,4,7,10-tetraazacyclododec-1-yl]-N-isopropylacetamide, produced a new synthetic ribonuclease, [Tb(L1)(OTf)(OH2)](OTf)2·MeCN (C1). X-ray crystal structure analysis indicates that the terbium(III) centre in C1 is 9-coordinate, with a capped square-antiprism geometry. Whilst the terbium(III) center is tightly bound by the L1 ligand, two of the coordination sites are occupied by labile water and triflate ligands. In water, the triflate ligand is likely to be displaced, forming [Tb(L1)(OH2)2]3+, which is able to effectively promote RNA cleavage. This complex greatly accelerates the rate of intramolecular transesterification of an activated model RNA phosphodiester, uridine-3′-p-nitrophenylphosphate (UpNP), with kobs = 5.5(1) × 10^-2 s^-1 at 21 °C and pH 7.5, corresponding to an apparent second-order rate constant of 277(5) M^-1 s^-1. By contrast, the analogous complex of an octadentate derivative of cyclen featuring only a single labile coordination site, [Tb(L2)(OH2)](OTf)3 (C2), where L2 = 2-[4,7,10-tris(isopropylcarbamoylmethyl)-1,4,7,10-tetraazacyclododec-1-yl]-N-isopropyl-acetamide, is inactive. [Tb(L1)(OH2)2]3+ is also capable of hydrolyzing short transcripts of the HIV-1 transactivation response (TAR) element, the HIV-1 dimerization initiation site (DIS) and the ribosomal A-site, as well as formyl methionine transfer RNA (tRNAfMet), albeit at a considerably slower rate than UpNP transesterification (kobs = 2.78(8) × 10^-5 s^-1 for TAR cleavage at 37 °C, pH 6.5, corresponding to an apparent second-order rate constant of 0.56(2) M^-1 s^-1). Cleavage is concentrated at the single-stranded "bulge" regions of these RNA motifs. Exploiting this selectivity, [Tb(L1)(OH2)2]3+ was successfully employed in footprinting experiments, in which binding of the Tat peptide and neomycin B to the bulge region of the TAR stem-loop was confirmed. PMID:19119812
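
    For readers unfamiliar with the kinetics conventions, the apparent second-order constant follows from the standard pseudo-first-order relation; plugging in the UpNP numbers above back-calculates the catalyst concentration used (this inference is ours, assuming the standard treatment, and is not stated in the abstract):

      k_{\mathrm{obs}} = k_{2,\mathrm{app}}\,[\mathrm{Tb\ complex}]
      \quad\Longrightarrow\quad
      [\mathrm{Tb\ complex}] \approx \frac{5.5 \times 10^{-2}\ \mathrm{s^{-1}}}{277\ \mathrm{M^{-1}\,s^{-1}}} \approx 2 \times 10^{-4}\ \mathrm{M}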

  15. A Hardware Model Validation Tool for Use in Complex Space Systems

    NASA Technical Reports Server (NTRS)

    Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.

    2010-01-01

    One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

  16. Adapting hydrological model structure to catchment characteristics: A large-sample experiment

    NASA Astrophysics Data System (ADS)

    Addor, Nans; Clark, Martyn P.; Nijssen, Bart

    2016-04-01

    Current hydrological modeling frameworks do not offer a clear way to systematically investigate the relationship between model complexity and model fidelity. The characterization of this relationship has so far relied on comparisons of different modules within the same model or comparisons of entirely different models. This lack of granularity in the differences between the model constructs makes it difficult to pinpoint model features that contribute to good simulations and means that the number of models or modeling hypotheses evaluated is usually small. Here we use flexible modeling frameworks to comprehensively and systematically compare modeling alternatives across the continuum of model complexity. A key goal is to explore which model structures are most adequate for catchments in different hydroclimatic conditions. Starting from conceptual models based on the Framework for Understanding Structural Errors (FUSE), we progressively increase model complexity by replacing conceptual formulations with physically explicit ones (process complexity) and by refining model spatial resolution (spatial complexity) using the newly developed Structure for Unifying Multiple Modeling Alternatives (SUMMA). To investigate how to best reflect catchment characteristics using model structure, we rely on a recently released data set of 671 catchments in the contiguous United States. Instead of running hydrological simulations in every catchment, we use clustering techniques to define catchment clusters, run hydrological simulations for representative members of each cluster, develop hypotheses (e.g., when specific process representations have useful explanatory power) and test these hypotheses using other members of the cluster. We thus refine our catchment clustering based on insights into dominant hydrological processes gained from our modeling approach. With this large-sample experiment, we seek to uncover trade-offs between realism and practicality, and formulate general
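
    The cluster-then-simulate step can be sketched in a few lines: cluster the catchments on their attributes and run the expensive hydrological model only for one representative per cluster. The attribute choices, cluster count, and data below are invented for illustration:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      # Illustrative catchment attributes: aridity, snow fraction, forest cover
      attrs = rng.uniform(0, 1, size=(671, 3))

      km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(attrs)
      print("cluster sizes:", np.bincount(km.labels_))

      # One representative per cluster: the catchment nearest each centroid;
      # only these would be run through the full model.
      reps = [int(np.argmin(np.linalg.norm(attrs - c, axis=1)))
              for c in km.cluster_centers_]
      print("representative catchments:", reps)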

  17. Engineering complex topological memories from simple Abelian models

    NASA Astrophysics Data System (ADS)

    Wootton, James R.; Lahtinen, Ville; Doucot, Benoit; Pachos, Jiannis K.

    2011-09-01

    In three spatial dimensions, particles are limited to either bosonic or fermionic statistics. Two-dimensional systems, on the other hand, can support anyonic quasiparticles exhibiting richer statistical behaviors. An exciting proposal for quantum computation is to employ anyonic statistics to manipulate information. Since such statistical evolutions depend only on topological characteristics, the resulting computation is intrinsically resilient to errors. The so-called non-Abelian anyons are most promising for quantum computation, but their physical realization may prove to be complex. Abelian anyons, however, are easier to understand theoretically and realize experimentally. Here we show that complex topological memories inspired by non-Abelian anyons can be engineered in Abelian models. We explicitly demonstrate the control procedures for the encoding and manipulation of quantum information in specific lattice models that can be implemented in the laboratory. This bridges the gap between requirements for anyonic quantum computation and the potential of state-of-the-art technology.

  18. An Ontology for Modeling Complex Inter-relational Organizations

    NASA Astrophysics Data System (ADS)

    Wautelet, Yves; Neysen, Nicolas; Kolp, Manuel

    This paper presents an ontology for organizational modeling through multiple complementary aspects. The primary goal of the ontology is to provide an adequate set of related concepts for studying complex organizations involved in many relationships at the same time. In this paper, we define complex organizations as networked organizations involved in a market eco-system that are playing several roles simultaneously. In such a context, traditional approaches focus on the macro analytic level of transactions; this is supplemented here with a micro analytic study of the actors' rationale. The paper first reviews the enterprise-ontology literature to position our proposal and to set out its contributions and limitations. The ontology is then brought to an advanced level of formalization: a meta-model in the form of a UML class diagram gives an overview of the ontology concepts and their relationships, which are formally defined. Finally, the paper presents the case study on which the ontology has been validated.

  19. Polygonal Shapes Detection in 3d Models of Complex Architectures

    NASA Astrophysics Data System (ADS)

    Benciolini, G. B.; Vitti, A.

    2015-02-01

    A sequential application of two global models defined in a variational framework is proposed for the detection of polygonal shapes in 3D models of complex architectures. As a first step, the procedure involves the use of the Mumford and Shah (1989) 1st-order variational model in dimension two (gridded height data are processed). In the Mumford-Shah model, an auxiliary function detects the sharp changes, i.e., the discontinuities, of a piecewise smooth approximation of the data. The Mumford-Shah model requires the global minimization of a specific functional to simultaneously produce both the smooth approximation and its discontinuities. In the proposed procedure, the edges of the smooth approximation, derived by a specific processing of the auxiliary function, are then processed using the Blake and Zisserman (1987) 2nd-order variational model in dimension one (edges are processed in the plane). This second step makes it possible to describe the edges of an object by means of a piecewise almost-linear approximation of the input edges themselves and to detect sharp changes in the first derivative of the edges, so as to identify corners. The Mumford-Shah variational model is used in two dimensions, accepting the original data as primary input. The Blake-Zisserman variational model is used in one dimension for the refinement of the description of the edges. Among all the boundaries detected by the Mumford-Shah model, those whose shape is close to a polygon are selected by considering only the boundaries for which the Blake-Zisserman model identified discontinuities in their first derivative. The outputs of the procedure are hence shapes, coming from 3D geometric data, that can be considered as polygons. The application of the procedure is suitable for, but not limited to, the detection of objects such as footprints of polygonal buildings, building facade boundaries, or window contours. The procedure is applied to a height model of the building of the Engineering
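
    For reference, the two functionals the procedure minimizes have the following standard forms (notation follows the cited papers; the weights μ, α, β are the models' tuning parameters):

      % Mumford-Shah (1st order, two dimensions): piecewise smooth
      % approximation u of the data g, with discontinuity set K
      E_{MS}(u, K) = \int_{\Omega \setminus K} \left[ (u - g)^2 + \mu\,|\nabla u|^2 \right] dx \;+\; \alpha\,\mathcal{H}^1(K)

      % Blake-Zisserman (2nd order, here in one dimension): also penalizes
      % creases, i.e. jumps of the first derivative, which mark corners
      E_{BZ}(u, K_0, K_1) = \int_{I \setminus (K_0 \cup K_1)} \left[ (u - g)^2 + \mu\,(u'')^2 \right] dx \;+\; \alpha\,\#(K_0) \;+\; \beta\,\#(K_1 \setminus K_0)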

  20. Nuclear reaction modeling, verification experiments, and applications

    SciTech Connect

    Dietrich, F.S.

    1995-10-01

    This presentation summarized the recent accomplishments and future promise of the neutron nuclear physics program at the Manuel Lujan Jr. Neutron Scatter Center (MLNSC) and the Weapons Neutron Research (WNR) facility. The unique capabilities of the spallation sources enable a broad range of experiments in weapons-related physics, basic science, nuclear technology, industrial applications, and medical physics.

  1. Thermodynamic model to describe miscibility in complex fluid systems

    SciTech Connect

    Guerrero, M.I.

    1982-01-01

    In the basic studies of tertiary oil recovery, it is necessary to describe the phase diagrams of mixtures of hydrocarbons, surfactants and brine. It has been observed that certain features of those phase diagrams, such as the appearance of 3-phase regions, can be correlated to ultra-low interfacial tensions. In this work, a simple thermodynamic model is described. The phase diagram obtained is qualitatively identical to that of real, more complex systems. 13 references.

  2. Complex polysaccharides as PCR inhibitors in feces: Helicobacter pylori model.

    PubMed

    Monteiro, L; Bonnemaison, D; Vekris, A; Petry, K G; Bonnet, J; Vidal, R; Cabrita, J; Mégraud, F

    1997-04-01

    A model was developed to study inhibitors present in feces which prevent the use of PCR for the detection of Helicobacter pylori. A DNA fragment amplified with the same primers as H. pylori was used to spike samples before extraction by a modified QIAamp tissue method. Inhibitors, separated on an Ultrogel AcA44 column, were characterized. Inhibitors in feces are complex polysaccharides possibly originating from vegetable material in the diet. PMID:9157172

  3. Modeling of Interaction of Hydraulic Fractures in Complex Fracture Networks

    NASA Astrophysics Data System (ADS)

    Kresse, O.; Wu, R.; Weng, X.; Gu, H.; Cohen, C.

    2011-12-01

    A recently developed unconventional fracture model (UFM) is able to simulate complex fracture network propagation in a formation with pre-existing natural fractures. Multiple fracture branches can propagate at the same time and intersect/cross each other. Each open fracture exerts additional stresses on the surrounding rock and adjacent fractures, an effect often referred to as the "stress shadow". The stress shadow can cause significant restriction of fracture width, leading to greater risk of proppant screenout. It can also alter the fracture propagation path and drastically affect fracture network patterns. It is hence critical to properly model the fracture interaction in a complex fracture model. A method for computing the stress shadow in a complex hydraulic fracture network is presented. The method is based on an enhanced 2D Displacement Discontinuity Method (DDM) with a correction for finite fracture height. The computed stress field is compared to 3D numerical simulation in a few simple examples, which show that the method provides a good approximation for the 3D fracture problem. This stress shadow calculation is incorporated in the UFM. Results for simple cases of two fractures show that the fractures can either attract or repel each other depending on their initial relative positions, and they compare favorably with an independent 2D non-planar hydraulic fracture model. Additional examples of both planar and complex fractures propagating from multiple perforation clusters are presented, showing that fracture interaction controls the fracture dimension and propagation pattern. In a formation with no or small stress anisotropy, fracture interaction can lead to dramatic divergence of the fractures as they tend to repel each other. When stress anisotropy is large, however, the fracture propagation direction is dominated by the stress field and fracture turning due to fracture interaction is limited. Even then, stress shadowing still has a strong effect

  4. A radio-frequency sheath model for complex waveforms

    SciTech Connect

    Turner, M. M.; Chabert, P.

    2014-04-21

    Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.
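
    The closure ansatz mentioned above can be written compactly; in the sketch below, n_i is the (slowly varying) ion density, n̄_e the time-averaged electron density, and α the model's constant fraction:

      \bar{n}_e(x) \;=\; \alpha\, n_i(x), \qquad 0 \le \alpha \le 1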

  5. Apollo Experiment Report: Lunar-Sample Processing in the Lunar Receiving Laboratory High-Vacuum Complex

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1976-01-01

    A high-vacuum complex composed of an atmospheric decontamination system, sample-processing chambers, storage chambers, and a transfer system was built to process and examine lunar material while maintaining quarantine status. Problems identified, equipment modifications, and procedure changes made for Apollo 11 and 12 sample processing are presented. The sample processing experiences indicate that only a few operating personnel are required to process the sample efficiently, safely, and rapidly in the high-vacuum complex. The high-vacuum complex was designed to handle the many contingencies, both quarantine and scientific, associated with handling an unknown entity such as the lunar sample. Lunar sample handling necessitated a complex system that could not respond rapidly to changing scientific requirements as the characteristics of the lunar sample were better defined. Although the complex successfully handled the processing of Apollo 11 and 12 lunar samples, the scientific requirement for vacuum samples was deleted after the Apollo 12 mission just as the vacuum system was reaching its full potential.

  6. Termination of Multipartite Graph Series Arising from Complex Network Modelling

    NASA Astrophysics Data System (ADS)

    Latapy, Matthieu; Phan, Thi Ha Duong; Crespelle, Christophe; Nguyen, Thanh Qui

    An intense activity is nowadays devoted to the definition of models capturing the properties of complex networks. Among the most promising approaches, it has been proposed to model these graphs via their clique incidence bipartite graphs. However, this approach has, until now, severe limitations resulting from its incapacity to reproduce a key property of this object: the overlapping nature of cliques in complex networks. In order to get rid of these limitations we propose to encode the structure of clique overlaps in a network thanks to a process consisting in iteratively factorising the maximal bicliques between the upper level and the other levels of a multipartite graph. We show that the most natural definition of this factorising process leads to infinite series for some instances. Our main result is to design a restriction of this process that terminates for any arbitrary graph. Moreover, we show that the resulting multipartite graph has remarkable combinatorial properties and is closely related to another fundamental combinatorial object. Finally, we show that, in practice, this multipartite graph is computationally tractable and has a size that makes it suitable for complex network modelling.

  7. Modeling high-resolution broadband discourse in complex adaptive systems.

    PubMed

    Dooley, Kevin J; Corman, Steven R; McPhee, Robert D; Kuhn, Timothy

    2003-01-01

    Numerous researchers and practitioners have turned to complexity science to better understand human systems. Simulation can be used to observe how the microlevel actions of many human agents create emergent structures and novel behavior in complex adaptive systems. In such simulations, communication between human agents is often modeled simply as message passing, where a message or text may transfer data, trigger action, or inform context. Human communication involves more than the transmission of texts and messages, however. Such a perspective is likely to limit the effectiveness and insight that we can gain from simulations, and from complexity science itself. In this paper, we propose a model of how close analysis of discursive processes between individuals (high-resolution), which occur simultaneously across a human system (broadband), dynamically evolve. We propose six different processes that describe how evolutionary variation can occur in texts: recontextualization, pruning, chunking, merging, appropriation, and mutation. These process models can facilitate the simulation of high-resolution, broadband discourse processes, and can aid in the analysis of data from such processes. Examples are used to illustrate each process. We make the tentative suggestion that discourse may evolve to the "edge of chaos." We conclude with a discussion concerning how high-resolution, broadband discourse data could actually be collected. PMID:12876447

  8. Modeling the Classic Meselson and Stahl Experiment.

    ERIC Educational Resources Information Center

    D'Agostino, JoBeth

    2001-01-01

    Points out the importance of molecular models in biology and chemistry. Presents a laboratory activity on DNA. Uses different colored wax strips to represent "heavy" and "light" DNA, cesium chloride for identification of small density differences, and three different liquids with varying densities to model gradient centrifugation. (YDS)

  9. Experiments with a Model Water Tunnel

    NASA Technical Reports Server (NTRS)

    Jacobs, Eastman N; Abbott, Ira H

    1930-01-01

    This report describes a model water tunnel built in 1928 by the NACA to investigate the possibility of using water tunnels for aerodynamic investigations at large scales. The model tunnel is similar to an open-throat wind tunnel, but uses water for the working fluid.

  10. Syntheses and Characterization of Ruthenium(II) Tetrakis(pyridine)complexes: An Advanced Coordination Chemistry Experiment or Mini-Project

    ERIC Educational Resources Information Center

    Coe, Benjamin J.

    2004-01-01

    An experiment for third-year undergraduate students is designed that provides synthetic experience and qualitative interpretation of the spectroscopic properties of ruthenium complexes. It involves the syntheses and characterization of several coordination complexes of ruthenium, the element found directly beneath iron in the middle of the…

  11. Accelerating the connection between experiments and models: The FACE-MDS experience

    NASA Astrophysics Data System (ADS)

    Norby, R. J.; Medlyn, B. E.; De Kauwe, M. G.; Zaehle, S.; Walker, A. P.

    2014-12-01

    The mandate is clear for improving communication between models and experiments to better evaluate terrestrial responses to atmospheric and climatic change. Unfortunately, progress in linking experimental and modeling approaches has been slow and sometimes frustrating. Recent successes in linking results from the Duke and Oak Ridge free-air CO2 enrichment (FACE) experiments with ecosystem and land surface models - the FACE Model-Data Synthesis (FACE-MDS) project - came only after a period of slow progress, but the experience points the way to future model-experiment interactions. As the FACE experiments were approaching their termination, the FACE research community made an explicit attempt to work together with the modeling community to synthesize and deliver experimental data to benchmark models and to use models to supply appropriate context for the experimental results. Initial problems that impeded progress were that measurement protocols were not consistent across different experiments, data were not well organized for model input, and parameterizing and spinning up models that were not designed for simulating a specific site was difficult. Once these problems were worked out, the FACE-MDS project was very successful in using data from the Duke and ORNL FACE experiments to test critical assumptions in the models. The project showed, for example, that the stomatal conductance model most widely used in models was supported by experimental data, but models did not capture important responses such as increased leaf mass per unit area in elevated CO2, and did not appropriately represent foliar nitrogen allocation. We now have an opportunity to learn from this experience. New FACE experiments that have recently been initiated, or are about to be initiated, include a eucalyptus forest in Australia; the AmazonFACE experiment in a primary, tropical forest in Brazil; and a mature oak woodland in England. Cross-site science questions are being developed that will have a

  12. Experience With Bayesian Image Based Surface Modeling

    NASA Technical Reports Server (NTRS)

    Stutz, John C.

    2005-01-01

    Bayesian surface modeling from images requires modeling both the surface and the image generation process, in order to optimize the models by comparing actual and generated images. Thus it differs greatly, both conceptually and in computational difficulty, from conventional stereo surface recovery techniques. But it offers the possibility of using any number of images, taken under quite different conditions, and by different instruments that provide independent and often complementary information, to generate a single surface model that fuses all available information. I describe an implemented system, with a brief introduction to the underlying mathematical models and the compromises made for computational efficiency. I describe successes and failures achieved on actual imagery, where we went wrong and what we did right, and how our approach could be improved. Lastly I discuss how the same approach can be extended to distinct types of instruments, to achieve true sensor fusion.

  13. A Simple Model for Complex Dynamical Transitions in Epidemics

    NASA Astrophysics Data System (ADS)

    Earn, David J. D.; Rohani, Pejman; Bolker, Benjamin M.; Grenfell, Bryan T.

    2000-01-01

    Dramatic changes in patterns of epidemics have been observed throughout this century. For childhood infectious diseases such as measles, the major transitions are between regular cycles and irregular, possibly chaotic epidemics, and from regionally synchronized oscillations to complex, spatially incoherent epidemics. A simple model can explain both kinds of transitions as the consequences of changes in birth and vaccination rates. Measles is a natural ecological system that exhibits different dynamical transitions at different times and places, yet all of these transitions can be predicted as bifurcations of a single nonlinear model.
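
    A minimal sketch of the kind of seasonally forced SIR model with births and vaccination that underlies such transitions; all parameter values below are illustrative assumptions, not taken from the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        def sir(t, y, mu, beta0, beta1, gamma, p):
            # S, I: susceptible and infectious fractions; births at rate mu,
            # a fraction p of newborns vaccinated; seasonally forced contacts.
            S, I = y
            beta = beta0 * (1.0 + beta1 * np.cos(2.0 * np.pi * t))  # t in years
            dS = mu * (1.0 - p) - beta * S * I - mu * S
            dI = beta * S * I - (gamma + mu) * I
            return [dS, dI]

        # Measles-like placeholders: ~50-year lifespan, ~13-day infectious period.
        sol = solve_ivp(sir, (0, 100), [0.06, 1e-4],
                        args=(0.02, 500.0, 0.08, 365.0 / 13.0, 0.0),
                        rtol=1e-8, max_step=0.005)
        print(sol.y[1, -1])  # infectious fraction after 100 years

    Raising the vaccination fraction p or lowering the birth rate mu in such a model moves it across bifurcation boundaries between annual, biennial and irregular epidemic regimes.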

  14. The evaluative imaging of mental models - Visual representations of complexity

    NASA Technical Reports Server (NTRS)

    Dede, Christopher

    1989-01-01

    The paper deals with some design issues involved in building a system that could visually represent the semantic structures of training materials and their underlying mental models. In particular, hypermedia-based semantic networks that instantiate classification problem solving strategies are thought to be a useful formalism for such representations; the complexity of these web structures can be best managed through visual depictions. It is also noted that a useful approach to implement in these hypermedia models would be some metrics of conceptual distance.

  15. Tutoring and Multi-Agent Systems: Modeling from Experiences

    ERIC Educational Resources Information Center

    Bennane, Abdellah

    2010-01-01

    Tutoring systems become complex and are offering varieties of pedagogical software as course modules, exercises, simulators, systems online or offline, for single user or multi-user. This complexity motivates new forms and approaches to the design and the modelling. Studies and research in this field introduce emergent concepts that allow the…

  16. The use of workflows in the design and implementation of complex experiments in macromolecular crystallography

    SciTech Connect

    Brockhauser, Sandor; Svensson, Olof; Bowler, Matthew W.; Nanao, Max; Gordon, Elspeth; Leal, Ricardo M. F.; Popov, Alexander; Gerring, Matthew; McCarthy, Andrew A.; Gotz, Andy

    2012-08-01

    A powerful and easy-to-use workflow environment has been developed at the ESRF for combining experiment control with online data analysis on synchrotron beamlines. This tool provides the possibility of automating complex experiments without the need for expertise in instrumentation control and programming, but rather by accessing defined beamline services. The automation of beam delivery, sample handling and data analysis, together with increasing photon flux, diminishing focal spot size and the appearance of fast-readout detectors on synchrotron beamlines, have changed the way that many macromolecular crystallography experiments are planned and executed. Screening for the best diffracting crystal, or even the best diffracting part of a selected crystal, has been enabled by the development of microfocus beams, precise goniometers and fast-readout detectors that all require rapid feedback from the initial processing of images in order to be effective. All of these advances require the coupling of data feedback to the experimental control system and depend on immediate online data-analysis results during the experiment. To facilitate this, a Data Analysis WorkBench (DAWB) for the flexible creation of complex automated protocols has been developed. Here, example workflows designed and implemented using DAWB are presented for enhanced multi-step crystal characterizations, experiments involving crystal reorientation with kappa goniometers, crystal-burning experiments for empirically determining the radiation sensitivity of a crystal system and the application of mesh scans to find the best location of a crystal to obtain the highest diffraction quality. Beamline users interact with the prepared workflows through a specific brick within the beamline-control GUI MXCuBE.

  17. Uncertainty Analysis with Site Specific Groundwater Models: Experiences and Observations

    SciTech Connect

    Brewer, K.

    2003-07-15

    Groundwater flow and transport predictions are a major component of remedial action evaluations for contaminated groundwater at the Savannah River Site. Because all groundwater modeling results are subject to uncertainty from various causes, quantification of the level of uncertainty in the modeling predictions is beneficial to project decision makers. Complex site-specific models present formidable challenges for implementing an uncertainty analysis.

  18. Complexity, parameter sensitivity and parameter transferability in the modelling of floodplain inundation

    NASA Astrophysics Data System (ADS)

    Bates, P. D.; Neal, J. C.; Fewtrell, T. J.

    2012-12-01

    In this paper we consider two related questions. First, we address the issue of how much physical complexity is necessary in a model in order to simulate floodplain inundation to within validation data error. This is achieved through development of a single code/multiple physics hydraulic model (LISFLOOD-FP) where different degrees of complexity can be switched on or off. Different configurations of this code are applied to four benchmark test cases, and compared to the results of a number of industry standard models. Second, we address the issue of how parameter sensitivity and transferability change with increasing complexity using numerical experiments with models of different physical and geometric intricacy. Hydraulic models are a good example system with which to address such generic modelling questions as: (1) they have a strong physical basis; (2) there is only one set of equations to solve; (3) they require only topography and boundary conditions as input data; and (4) they typically require only a single free parameter, namely boundary friction. In terms of complexity required we show that for the problem of sub-critical floodplain inundation a number of codes of different dimensionality and resolution can be found to fit uncertain model validation data equally well, and that in this situation Occam's razor emerges as a useful logic to guide model selection. We also find that model skill usually improves more rapidly with increases in model spatial resolution than increases in physical complexity, and that standard approaches to testing hydraulic models against laboratory data or analytical solutions may fail to identify this important fact. Lastly, we find that in benchmark testing studies significant differences can exist between codes with identical numerical solution techniques as a result of auxiliary choices regarding the specifics of model implementation that are frequently unreported by code developers. As a consequence, making sound

  19. Turning of slender workpieces: Modeling and experiments

    NASA Astrophysics Data System (ADS)

    Katz, R.; Lee, C. W.; Ulsoy, A. G.; Scott, R. A.

    1989-04-01

    This paper introduces a dynamic cutting force model for turning of slender workpieces, as well as experimental results related to the frequency response of the workpiece in turning. The model is based on a flexible workpiece and rigid machine tool, and a workpiece displacement dependent cutting force. The model is described and studied theoretically as well as experimentally. The experimental studies utilise both cutting force and workpiece vibration measurements in two orthogonal directions. This data is obtained for both cutting and non-cutting conditions, and analysed in the frequency domain. The model was found to be in partial agreement with the experimental results. The experimental procedure described here represents a new method for determining the cutting process damping ratio, based on differences in the measured workpiece natural frequencies with and without cutting.

  20. Hybrid rocket engine, theoretical model and experiment

    NASA Astrophysics Data System (ADS)

    Chelaru, Teodor-Viorel; Mingireanu, Florin

    2011-06-01

    The purpose of this paper is to build a theoretical model for the hybrid rocket engine/motor and to validate it using experimental results. The work approaches the main problems of the hybrid motor: the scalability, the stability/controllability of the operating parameters and the increasing of the solid fuel regression rate. First, we focus on theoretical models for the hybrid rocket motor and compare the results with experimental data already available from various research groups. A primary computation model is presented together with results from a numerical algorithm based on a computational model. We present theoretical predictions for several commercial hybrid rocket motors of different scales and compare them with experimental measurements of those motors. Next the paper focuses on the tribrid rocket motor concept, which can improve thrust controllability through supplementary liquid fuel injection. A complementary computation model is also presented to estimate the regression rate increase of solid fuel doped with oxidizer. Finally, the stability of the hybrid rocket motor is investigated using Liapunov theory. The stability coefficients obtained are dependent on the burning parameters, while the stability and command matrices are identified. The paper presents thoroughly the input data of the model, which ensures the reproducibility of the numerical results by independent researchers.
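
    The solid fuel regression rate discussed above is commonly approximated by a Marxman-type power law in the oxidizer mass flux; a minimal sketch with placeholder coefficients (a and n are empirical and fuel-dependent, not values from this paper):

        import math

        def regression_rate(G_ox, a=2.0e-5, n=0.62):
            """Regression rate [m/s] from oxidizer mass flux G_ox [kg/(m^2 s)],
            using the power law r = a * G^n."""
            return a * G_ox ** n

        def port_mass_flux(mdot_ox, radius):
            """Oxidizer mass flux through a circular port of given radius [m]."""
            return mdot_ox / (math.pi * radius ** 2)

        # Example: 0.5 kg/s of oxidizer through a 30 mm radius port.
        G = port_mass_flux(0.5, 0.03)
        print(regression_rate(G))  # ~0.5 mm/s for these placeholder coefficients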

  1. Hybrid Structural Model of the Complete Human ESCRT-0 Complex

    SciTech Connect

    Ren, Xuefeng; Kloer, Daniel P.; Kim, Young C.; Ghirlando, Rodolfo; Saidi, Layla F.; Hummer, Gerhard; Hurley, James H.

    2009-03-31

    The human Hrs and STAM proteins comprise the ESCRT-0 complex, which sorts ubiquitinated cell surface receptors to lysosomes for degradation. Here we report a model for the complete ESCRT-0 complex based on the crystal structure of the Hrs-STAM core complex, previously solved domain structures, hydrodynamic measurements, and Monte Carlo simulations. ESCRT-0 expressed in insect cells has a hydrodynamic radius of R_H = 7.9 nm and is a 1:1 heterodimer. The 2.3 Å crystal structure of the ESCRT-0 core complex reveals two domain-swapped GAT domains and an antiparallel two-stranded coiled-coil, similar to yeast ESCRT-0. ESCRT-0 typifies a class of biomolecular assemblies that combine structured and unstructured elements, and have dynamic and open conformations to ensure versatility in target recognition. Coarse-grained Monte Carlo simulations constrained by experimental R_H values for ESCRT-0 reveal a dynamic ensemble of conformations well suited for diverse functions.

  2. Hybrid structural model of the complete human ESCRT-0 complex.

    PubMed

    Ren, Xuefeng; Kloer, Daniel P; Kim, Young C; Ghirlando, Rodolfo; Saidi, Layla F; Hummer, Gerhard; Hurley, James H

    2009-03-11

    The human Hrs and STAM proteins comprise the ESCRT-0 complex, which sorts ubiquitinated cell surface receptors to lysosomes for degradation. Here we report a model for the complete ESCRT-0 complex based on the crystal structure of the Hrs-STAM core complex, previously solved domain structures, hydrodynamic measurements, and Monte Carlo simulations. ESCRT-0 expressed in insect cells has a hydrodynamic radius of R_H = 7.9 nm and is a 1:1 heterodimer. The 2.3 Å crystal structure of the ESCRT-0 core complex reveals two domain-swapped GAT domains and an antiparallel two-stranded coiled-coil, similar to yeast ESCRT-0. ESCRT-0 typifies a class of biomolecular assemblies that combine structured and unstructured elements, and have dynamic and open conformations to ensure versatility in target recognition. Coarse-grained Monte Carlo simulations constrained by experimental R_H values for ESCRT-0 reveal a dynamic ensemble of conformations well suited for diverse functions. PMID:19278655

  3. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

    Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so that it becomes easier to judge whether any of the underlying assumptions are violated.
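
    A minimal sketch of the kind of residual plot such validation is built on, for a linear normal model (the simulated data and library choice are ours, not the article's):

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(1)
        x = rng.uniform(0, 10, 100)
        y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)

        # Fit a linear normal model and plot residuals against fitted values;
        # curvature or funnelling in the cloud signals violated assumptions.
        slope, intercept = np.polyfit(x, y, 1)
        fitted = intercept + slope * x
        residuals = y - fitted

        plt.scatter(fitted, residuals)
        plt.axhline(0, linestyle=":")
        plt.xlabel("Fitted values")
        plt.ylabel("Residuals")
        plt.show()

    Roughly, the "instant experience" comes from comparing the observed plot with plots regenerated from data simulated under the fitted model, so the reader learns what an unproblematic plot looks like.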

  4. Parameter estimation for distributed parameter models of complex, flexible structures

    NASA Technical Reports Server (NTRS)

    Taylor, Lawrence W., Jr.

    1991-01-01

    Distributed parameter modeling of structural dynamics has been limited to simple spacecraft configurations because of the difficulty of handling several distributed parameter systems linked at their boundaries. Although there is other computer software able to generate such models of complex, flexible spacecraft, unfortunately, none is suitable for parameter estimation. Because of this limitation, the computer software PDEMOD is being developed for the express purposes of modeling, control system analysis, parameter estimation and structure optimization. PDEMOD is capable of modeling complex, flexible spacecraft which consist of a three-dimensional network of flexible beams and rigid bodies. Each beam has bending (Bernoulli-Euler or Timoshenko) in two directions, torsion, and elongation degrees of freedom. The rigid bodies can be attached to the beam ends at any angle or body location. PDEMOD is also capable of performing parameter estimation based on matching experimental modal frequencies and static deflection test data. The underlying formulation and the results of using this approach for test data of the Mini-MAST truss will be discussed. The resulting accuracy of the parameter estimates when using such limited data can impact significantly the instrumentation requirements for on-orbit tests.

  5. A model of the proton translocation mechanism of complex I.

    PubMed

    Treberg, Jason R; Brand, Martin D

    2011-05-20

    Despite decades of speculation, the proton pumping mechanism of complex I (NADH-ubiquinone oxidoreductase) is unknown and continues to be controversial. Recent descriptions of the architecture of the hydrophobic region of complex I have resolved one vital issue: this region appears to have multiple proton transporters that are mechanically interlinked. Thus, transduction of conformational changes to drive the transmembrane transporters linked by a "connecting rod" during the reduction of ubiquinone (Q) can account for two or three of the four protons pumped per NADH oxidized. The remaining proton(s) must be pumped by direct coupling at the Q-binding site. Here, we present a mixed model based on a crucial constraint: the strong dependence on the pH gradient across the membrane (ΔpH) of superoxide generation at the Q-binding site of complex I. This model combines direct and indirect coupling mechanisms to account for the pumping of the four protons. It explains the observed properties of the semiquinone in the Q-binding site, the rapid superoxide production from this site during reverse electron transport, its low superoxide production during forward electron transport except in the presence of inhibitory Q-analogs and high protonmotive force, and the strong dependence of both modes of superoxide production on ΔpH. PMID:21454533

  6. A Model of the Proton Translocation Mechanism of Complex I*

    PubMed Central

    Treberg, Jason R.; Brand, Martin D.

    2011-01-01

    Despite decades of speculation, the proton pumping mechanism of complex I (NADH-ubiquinone oxidoreductase) is unknown and continues to be controversial. Recent descriptions of the architecture of the hydrophobic region of complex I have resolved one vital issue: this region appears to have multiple proton transporters that are mechanically interlinked. Thus, transduction of conformational changes to drive the transmembrane transporters linked by a “connecting rod” during the reduction of ubiquinone (Q) can account for two or three of the four protons pumped per NADH oxidized. The remaining proton(s) must be pumped by direct coupling at the Q-binding site. Here, we present a mixed model based on a crucial constraint: the strong dependence on the pH gradient across the membrane (ΔpH) of superoxide generation at the Q-binding site of complex I. This model combines direct and indirect coupling mechanisms to account for the pumping of the four protons. It explains the observed properties of the semiquinone in the Q-binding site, the rapid superoxide production from this site during reverse electron transport, its low superoxide production during forward electron transport except in the presence of inhibitory Q-analogs and high protonmotive force, and the strong dependence of both modes of superoxide production on ΔpH. PMID:21454533

  7. Semiotic aspects of control and modeling relations in complex systems

    SciTech Connect

    Joslyn, C.

    1996-08-01

    A conceptual analysis of the semiotic nature of control is provided with the goal of elucidating its nature in complex systems. Control is identified as a canonical form of semiotic relation of a system to its environment. As a form of constraint between a system and its environment, its necessary and sufficient conditions are established, and the stabilities resulting from control are distinguished from other forms of stability. These result from the presence of semantic coding relations, and thus the class of control systems is hypothesized to be equivalent to that of semiotic systems. Control systems are contrasted with models, which, while they have the same measurement functions as control systems, do not necessarily require semantic relations because of the lack of the requirement of an interpreter. A hybrid construction of models in control systems is detailed. Towards the goal of considering the nature of control in complex systems, the possible relations among collections of control systems are considered. Powers' arguments on conflict among control systems and the possible nature of control in social systems are reviewed, and reconsidered based on our observations about hierarchical control. Finally, we discuss the necessary semantic functions which must be present in complex systems for control in this sense to be present at all.

  8. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  9. Silicon Carbide Derived Carbons: Experiments and Modeling

    SciTech Connect

    Kertesz, Miklos

    2011-02-28

    The main results of the computational modeling were: 1. Development of a new genealogical algorithm to generate vacancy clusters in diamond starting from monovacancies, combined with energy criteria based on TBDFT energetics. The method revealed that for smaller vacancy clusters the energetically optimal shapes are compact, but for larger sizes they tend to show graphitized regions; clusters as small as 12 vacancies already show signatures of this graphitization. The modeling gives a firm basis for the slit-pore modeling of porous carbon materials and explains some of their properties. 2. We discovered small vacancy clusters and the physical characteristics that can be used to identify them spectroscopically. 3. We found low-barrier pathways for vacancy migration in diamond-like materials by obtaining, for the first time, optimized reaction pathways.

  10. Fundamental Rotorcraft Acoustic Modeling From Experiments (FRAME)

    NASA Technical Reports Server (NTRS)

    Greenwood, Eric

    2011-01-01

    A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by employing a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.
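
    The parameter-identification step can be pictured as a least-squares fit of an assumed source-noise model to levels measured at several operating conditions; a toy sketch in which the linear-in-coefficients model form and all numbers are our illustrative assumptions:

        import numpy as np

        # Operating conditions described by non-dimensional parameters,
        # e.g. advance ratio mu and tip Mach number M (synthetic data).
        mu = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
        M = np.array([0.60, 0.62, 0.64, 0.66, 0.68])
        spl = np.array([86.0, 88.5, 91.2, 93.8, 96.1])  # measured levels, dB

        # Assumed model: SPL ~ c0 + c1*mu + c2*M; identify c from measurements.
        A = np.column_stack([np.ones_like(mu), mu, M])
        c, *_ = np.linalg.lstsq(A, spl, rcond=None)

        # Estimate noise at an unmeasured operating condition.
        print(c @ np.array([1.0, 0.22, 0.65]))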

  11. Plasma Reactor Modeling and Validation Experiments

    NASA Technical Reports Server (NTRS)

    Meyyappan, M.; Bose, D.; Hash, D.; Hwang, H.; Cruden, B.; Sharma, S. P.; Rao, M. V. V. S.; Arnold, Jim (Technical Monitor)

    2001-01-01

    Plasma processing is a key processing step in integrated circuit manufacturing. Low pressure, high density plasma reactors are widely used for etching and deposition. The inductively coupled plasma (ICP) source has recently become popular in many processing applications. In order to accelerate equipment and process design, an understanding of the physics and chemistry, particularly plasma power coupling, plasma and processing uniformity, and mechanisms, is important. This understanding is facilitated by comprehensive modeling and simulation as well as plasma diagnostics to provide the necessary data for model validation, which are addressed in this presentation. We have developed a complete code for simulating an ICP reactor; the model consists of transport of electrons, ions, and neutrals, Poisson's equation, and Maxwell's equations, along with gas flow and energy equations. Results will be presented for chlorine and fluorocarbon plasmas and compared with data from Langmuir probes, mass spectrometry and FTIR.

  12. Fundamental Rotorcraft Acoustic Modeling from Experiments (FRAME)

    NASA Astrophysics Data System (ADS)

    Greenwood, Eric, II

    2011-12-01

    A new methodology is developed for the construction of helicopter source noise models for use in mission planning tools from experimental measurements of helicopter external noise radiation. The models are constructed by employing a parameter identification method to an assumed analytical model of the rotor harmonic noise sources. This new method allows for the identification of individual rotor harmonic noise sources and allows them to be characterized in terms of their individual non-dimensional governing parameters. The method is applied to both wind tunnel measurements and ground noise measurements of two-bladed rotors. The method is shown to match the parametric trends of main rotor harmonic noise, allowing accurate estimates of the dominant rotorcraft noise sources to be made for operating conditions based on a small number of measurements taken at different operating conditions. The ability of this method to estimate changes in noise radiation due to changes in ambient conditions is also demonstrated.

  13. Propagating precipitation waves: experiments and modeling.

    PubMed

    Tinsley, Mark R; Collison, Darrell; Showalter, Kenneth

    2013-12-01

    Traveling precipitation waves, including counterrotating spiral waves, are observed in the precipitation reaction of AlCl3 with NaOH [Volford, A.; et al. Langmuir 2007, 23, 961 - 964]. Experimental and computational studies are carried out to characterize the wave behavior in cross-section configurations. A modified sol-coagulation model is developed that is based on models of Liesegang band and redissolution systems. The dynamics of the propagating waves is characterized in terms of growth and redissolution of a precipitation feature that travels through a migrating band of colloidal precipitate. PMID:24191642

  14. Long-range (CAPTEX (Cross-APpalachian Tracer EXperiment)) and complex terrain (ASCOT (Atmospheric Studies of COmplex Terrain)) perfluorocarbon tracer studies

    SciTech Connect

    Jeffter, J.L.; Yamada, T.; Dietz, R.N.

    1986-01-01

    Perfluorocarbon tracer (PFT) technology, consisting of tracers, samplers, and analytical equipment, has been deployed in numerous meteorological experiments for the verification of long-range and complex terrain transport and dispersion models. CAPTEX (Cross-APpalachian Tracer EXperiment) '83 was conducted from mid-September through October 1983, in which seven 3-h tracer releases (5 from Dayton, Ohio, and 2 from Sudbury, Ontario) were made of a single PFT. Ground sampling occurred at 80 sites in the northeastern US and southeastern Canada at distances of 300 to 1100 km from the release sites, with a total of 3000 samples collected. Seven aircraft gathered 1600 crosswind and vertical spiral samples at distances of 200 to 900 km from the release sites. Peak ground concentrations of over 30 times background and peak aircraft values of over 150 times background were measured at the most distant sites; some typical results are shown. The branching atmospheric trajectory (BAT) long-range transport model was described. The model-calculated maximum ground level PFT concentrations were compared with the measured concentration isopleths as well as through the use of scatter diagrams of concentrations, spatial errors, and frequency of space- and time-averaged concentrations. The average spatial error found for each of the 7 releases ranged from 1.3° to 1.7° lat. The crosswind standard deviations of aircraft traverses at 600 to 800 km downwind varied from 12 to 20 km, which corresponded to 1.0° to 1.6° lat., indicating that the model was accurate to within one standard deviation of the real-time tracer profiles. On average, for the 7 runs, 50% of the model-calculated concentrations were within a factor of 20 of the observations, indicating that, in general, 1° lat. shifts can easily cause order-of-magnitude changes in observed concentrations.

  15. Modelling of dynamic experiments in MCNP5 environment.

    PubMed

    Mosorov, Volodymyr; Zych, Marcin; Hanus, Robert; Petryka, Leszek

    2016-06-01

    The design of radiation measurement systems includes a modelling phase which ascertains the best 3D geometry for a projected gauge. To simulate the counts measured by a detector, the widely-used rigorous phenomenological model is applied. However, this model does not consider possible source and/or detector movement during a measurement interval. Therefore, the phenomenological model has been successfully modified to account for such displacement during the time sampling interval in dynamic experiments. To validate the proposed model, a simple radiation system was accurately implemented in the MCNP5 code. The experiments confirmed the accuracy of the proposed model. PMID:27058321
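
    Schematically, such a modification amounts to averaging the static model's predicted count rate along the source trajectory over each sampling interval; a sketch under that assumption, where rate_at stands for any static phenomenological count-rate model:

        import numpy as np

        def counts_over_interval(rate_at, x0, v, T, n=200):
            """Expected counts in a sampling interval of length T [s] while the
            source moves from x0 [m] at speed v [m/s]; rate_at(x) is the static
            count-rate model [counts/s]."""
            t = np.linspace(0.0, T, n)
            return np.trapz(rate_at(x0 + v * t), t)

        # Example: inverse-square-like falloff around a detector at x = 0.5 m.
        rate = lambda x: 1.0e4 / (0.05 + (x - 0.5) ** 2)
        print(counts_over_interval(rate, x0=0.0, v=0.25, T=1.0))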

  16. Nonlinear vibrations of shallow shells with complex boundary: R-functions method and experiments

    NASA Astrophysics Data System (ADS)

    Kurpa, Lidia; Pilgun, Galina; Amabili, Marco

    2007-10-01

    Geometrically nonlinear vibrations of shallow circular cylindrical panels with complex shape of the boundary are considered. The R-functions theory and variational methods are used to study the problem. The R-functions method (RFM) allows constructing in analytical form the sequence of basis functions satisfying the given boundary conditions in case of complex shape of the boundary. The problem is reduced to a single second-order differential equation with quadratic and cubic nonlinear terms. The method developed has been initially applied to study free vibrations of shallow circular cylindrical panels with rectangular base for different boundary conditions: (i) clamped edges, (ii) in-plane immovable simply supported edges, (iii) classically simply supported edges, and (iv) in-plane free simply supported edges. Then, the same approach is applied to a shell with complex shape of the boundary. Experiments have been conducted on an aluminum panel with complex shape of the boundary in order to identify the nonlinear response of the fundamental mode; these experimental results have been compared to numerical results.
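
    The single-mode reduction mentioned above is an oscillator with quadratic and cubic nonlinear terms; a generic sketch of integrating such an equation (damping, forcing and coefficients are illustrative, not identified from the panel tests):

        import numpy as np
        from scipy.integrate import solve_ivp

        def modal_equation(t, y, zeta, w0, a2, a3, f, w):
            # q'' + 2*zeta*w0*q' + w0^2*q + a2*q^2 + a3*q^3 = f*cos(w*t)
            q, qd = y
            qdd = (f * np.cos(w * t) - 2 * zeta * w0 * qd
                   - w0 ** 2 * q - a2 * q ** 2 - a3 * q ** 3)
            return [qd, qdd]

        sol = solve_ivp(modal_equation, (0.0, 400.0), [0.0, 0.0],
                        args=(0.01, 1.0, -0.3, 1.2, 0.05, 0.98), max_step=0.01)
        steady = sol.y[0][sol.t > 200.0]
        print(np.max(np.abs(steady)))  # steady-state amplitude near resonance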

  17. Complexities in Ferret Influenza Virus Pathogenesis and Transmission Models.

    PubMed

    Belser, Jessica A; Eckert, Alissa M; Tumpey, Terrence M; Maines, Taronna R

    2016-09-01

    Ferrets are widely employed to study the pathogenicity, transmissibility, and tropism of influenza viruses. However, inherent variations in inoculation methods, sampling schemes, and experimental designs are often overlooked when contextualizing or aggregating data between laboratories, leading to potential confusion or misinterpretation of results. Here, we provide a comprehensive overview of parameters to consider when planning an experiment using ferrets, collecting data from the experiment, and placing results in context with previously performed studies. This review offers information that is of particular importance for researchers in the field who rely on ferret data but do not perform the experiments themselves. Furthermore, this review highlights the breadth of experimental designs and techniques currently available to study influenza viruses in this model, underscoring the wide heterogeneity of protocols currently used for ferret studies while demonstrating the wealth of information which can benefit risk assessments of emerging influenza viruses. PMID:27412880

  18. A qualitative model of human interaction with complex dynamic systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in concert to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  19. A Qualitative Model of Human Interaction with Complex Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Hess, Ronald A.

    1987-01-01

    A qualitative model describing human interaction with complex dynamic systems is developed. The model is hierarchical in nature and consists of three parts: a behavior generator, an internal model, and a sensory information processor. The behavior generator is responsible for action decomposition, turning higher level goals or missions into physical action at the human-machine interface. The internal model is an internal representation of the environment which the human is assumed to possess and is divided into four submodel categories. The sensory information processor is responsible for sensory composition. All three parts of the model act in concert to allow anticipatory behavior on the part of the human in goal-directed interaction with dynamic systems. Human workload and error are interpreted in this framework, and the familiar example of an automobile commute is used to illustrate the nature of the activity in the three model elements. Finally, with the qualitative model as a guide, verbal protocols from a manned simulation study of a helicopter instrument landing task are analyzed with particular emphasis on the effect of automation on human-machine performance.

  20. An ice sheet model of reduced complexity for paleoclimate studies

    NASA Astrophysics Data System (ADS)

    Neff, B.; Born, A.; Stocker, T. F.

    2015-08-01

    IceBern2D is a vertically integrated ice sheet model to investigate the ice distribution on long timescales under different climatic conditions. It is forced by simulated fields of surface temperature and precipitation of the last glacial maximum and present day climate from a comprehensive climate model. This constant forcing is adjusted to changes in ice elevation. Bedrock sinking and sea level are a function of ice volume. Due to its reduced complexity and computational efficiency, the model is well-suited for extensive sensitivity studies and ensemble simulations on large temporal and spatial scales. It shows good quantitative agreement with standardized benchmarks on an artificial domain (EISMINT). Present day and last glacial maximum ice distributions on the Northern Hemisphere are also simulated with good agreement. Glacial ice volume in Eurasia is underestimated due to the lack of ice shelves in our model. The efficiency of the model is utilized by running an ensemble of 400 simulations with perturbed model parameters and two different estimates of the climate at the last glacial maximum. The sensitivity to the imposed climate boundary conditions and the positive degree day factor β, i.e., the surface mass balance, outweighs the influence of parameters that disturb the flow of ice. This justifies the use of simplified dynamics as a means to achieve computational efficiency for simulations that cover several glacial cycles. The sensitivity of the model to changes in surface temperature is illustrated as a hysteresis based on 5 million year long simulations.
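
    The role of the positive degree day factor β can be seen in a minimal surface-mass-balance routine of the kind reduced-complexity ice sheet models use; the scheme below is a generic positive-degree-day sketch, not IceBern2D's actual code:

        import numpy as np

        def surface_mass_balance(monthly_temp_degC, precip_m_per_yr, beta):
            """Annual surface mass balance [m/yr]: accumulation from
            precipitation in sub-zero months minus beta [m/(degC day)]
            times the annual positive-degree-day sum."""
            temp = np.asarray(monthly_temp_degC, dtype=float)
            pdd = np.sum(np.maximum(temp, 0.0)) * (365.0 / 12.0)
            accumulation = precip_m_per_yr * np.mean(temp < 0.0)
            return accumulation - beta * pdd

        # Example site with a short melt season.
        t = [-20, -18, -12, -6, -2, 1, 3, 1, -4, -9, -15, -19]
        print(surface_mass_balance(t, 0.8, 0.003))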

  1. Computational Model of Fluorine-20 Experiment

    NASA Astrophysics Data System (ADS)

    Chuna, Thomas; Voytas, Paul; George, Elizabeth; Naviliat-Cuncic, Oscar; Gade, Alexandra; Hughes, Max; Huyan, Xueying; Liddick, Sean; Minamisono, Kei; Weisshaar, Dirk; Paulauskas, Stanley; Ban, Gilles; Flechard, Xavier; Lienard, Etienne

    2015-10-01

    The Conserved Vector Current (CVC) hypothesis of the standard model of the electroweak interaction predicts there is a contribution to the shape of the spectrum in the beta-minus decay of 20F related to a property of the analogous gamma decay of excited 20Ne. To provide a strong test of the CVC hypothesis, a precise measurement of the 20F beta decay spectrum will be taken at the National Superconducting Cyclotron Laboratory. This measurement uses unconventional measurement techniques in that 20F will be implanted directly into a scintillator. As the emitted electrons interact with the detector material, bremsstrahlung interactions occur and the escape of the resultant photons will distort the measured spectrum. Thus, a Monte Carlo simulation has been constructed using EGSnrc radiation transport software. This computational model's intended use is to quantify and correct for distortion in the observed beta spectrum due, primarily, to the aforementioned bremsstrahlung. The focus of this presentation is twofold: the analysis of the computational model itself and the results produced by the model.

  2. Metal-mediated reaction modeled on nature: the activation of isothiocyanates initiated by zinc thiolate complexes.

    PubMed

    Eger, Wilhelm A; Presselt, Martin; Jahn, Burkhard O; Schmitt, Michael; Popp, Jürgen; Anders, Ernst

    2011-04-18

    On the basis of detailed theoretical studies of the mode of action of carbonic anhydrase (CA) and models resembling only its reactive core, a complete computational pathway analysis of the reaction between several isothiocyanates and methyl mercaptan activated by a thiolate-bearing model complex [Zn(NH(3))(3)SMe](+) was performed at a high level of density functional theory (DFT). Furthermore, model reactions have been studied in the experiment using relatively stable zinc complexes and have been investigated by gas chromatography/mass spectrometry and Raman spectroscopy. The model complexes used in the experiment are based upon the well-known azamacrocyclic ligand family ([12]aneN(4), [14]aneN(4), i-[14]aneN(4), and [15]aneN(4)) and are commonly formulated as [Zn([X]aneN(4))(SBn)]ClO(4). As predicted by our DFT calculations, all of these complexes are capable of insertion into the heterocumulene system. Raman spectroscopic investigations indicate that aryl-substituted isothiocyanates predominantly add to the C═N bond and that the size of the ring-shaped ligands of the zinc complex also has a very significant influence on the selectivity and on the reactivity as well. Unfortunately, the activated isothiocyanate is not able to add to the thiolate-corresponding mercaptan to invoke a CA-analogous catalytic cycle. However, more reactive compounds such as methyl iodide can be incorporated. This work gives new insight into the mode of action and reaction path variants derived from the CA principles. Further, aspects of the reliability of DFT calculations concerning the prediction of the selectivity and reactivity are discussed. In addition, the presented synthetic pathways can offer a completely new access to a variety of dithiocarbamates. PMID:21405064

  3. Integrated Bayesian network framework for modeling complex ecological issues.

    PubMed

    Johnson, Sandra; Mengersen, Kerrie

    2012-07-01

    The management of environmental problems is multifaceted, requiring varied and sometimes conflicting objectives and perspectives to be considered. Bayesian network (BN) modeling facilitates the integration of information from diverse sources and is well suited to tackling the management challenges of complex environmental problems. However, combining several perspectives in one model can lead to large, unwieldy BNs that are difficult to maintain and understand. Conversely, an oversimplified model may lead to an unrealistic representation of the environmental problem. Environmental managers require the current research and available knowledge about an environmental problem of interest to be consolidated in a meaningful way, thereby enabling the assessment of potential impacts and different courses of action. Previous investigations of the environmental problem of interest may have already resulted in the construction of several disparate ecological models. On the other hand, the opportunity may exist to initiate this modeling. In the first instance, the challenge is to integrate existing models and to merge the information and perspectives from these models. In the second instance, the challenge is to include different aspects of the environmental problem incorporating both the scientific and management requirements. Although the paths leading to the combined model may differ for these 2 situations, the common objective is to design an integrated model that captures the available information and research, yet is simple to maintain, expand, and refine. BN modeling is typically an iterative process, and we describe a heuristic method, the iterative Bayesian network development cycle (IBNDC), for the development of integrated BN models that are suitable for both situations outlined above. The IBNDC approach facilitates object-oriented BN (OOBN) modeling, arguably viewed as the next logical step in adaptive management modeling, and that embraces iterative development

  4. Emission spectra of LH2 complex: full Hamiltonian model

    NASA Astrophysics Data System (ADS)

    Heřman, Pavel; Zapletal, David; Horák, Milan

    2013-05-01

    In the present contribution we study the absorption and steady-state fluorescence spectra of a ring molecular system, which can model the B850 ring of the peripheral light-harvesting complex LH2 from the purple bacterium Rhodopseudomonas acidophila (Rhodoblastus acidophilus). LH2 is a highly symmetric ring of nine pigment-protein subunits, each containing two transmembrane polypeptide helixes and three bacteriochlorophylls (BChl). Uncorrelated diagonal static disorder with Gaussian distribution (fluctuations of local excitation energies) is used in our simulations simultaneously with diagonal dynamic disorder (interaction with a bath) in the Markovian approximation. We compare the calculated absorption and steady-state fluorescence spectra obtained within the full Hamiltonian model of the B850 ring with our previous results calculated within the nearest-neighbour approximation model, and also with experimental data.
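
    A minimal numerical sketch of such a disordered ring: N sites with Gaussian diagonal disorder and nearest-neighbour coupling J, with stick absorption intensities computed from the eigenstates (all values are illustrative, and the paper's full Hamiltonian also includes couplings beyond nearest neighbours):

        import numpy as np

        N, J, sigma = 18, -1.0, 0.4   # sites, coupling, disorder width (|J| units)
        rng = np.random.default_rng(0)

        # Ring Hamiltonian: Gaussian site energies on the diagonal,
        # nearest-neighbour coupling with periodic boundary conditions.
        H = np.diag(rng.normal(0.0, sigma, N))
        for k in range(N):
            H[k, (k + 1) % N] = H[(k + 1) % N, k] = J

        energies, states = np.linalg.eigh(H)

        # Site dipoles tangential to the ring give the stick intensities.
        phi = 2.0 * np.pi * np.arange(N) / N
        dipoles = states.T @ np.column_stack([np.cos(phi), np.sin(phi)])
        intensity = np.sum(dipoles ** 2, axis=1)
        for e, s in zip(energies, intensity):
            print(f"E = {e:+.3f}  intensity = {s:.3f}")

    Without disorder almost all oscillator strength sits in one degenerate pair of states; the Gaussian static disorder redistributes it, which is what shapes the simulated B850 absorption band.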

  5. 3D model of amphioxus steroid receptor complexed with estradiol

    SciTech Connect

    Baker, Michael E.; Chang, David J.

    2009-08-28

    The origins of signaling by vertebrate steroids are not fully understood. An important advance was the report that an estrogen-binding steroid receptor [SR] is present in amphioxus, a basal chordate with a body plan similar to that of vertebrates. To investigate the evolution of estrogen-binding to steroid receptors, we constructed a 3D model of the amphioxus SR complexed with estradiol. This 3D model indicates that although the SR is activated by estradiol, some interactions between estradiol and human ERα are not conserved in the SR, which can explain the low affinity of estradiol for the SR. These differences between the SR and ERα in the steroid-binding domain are sufficient to suggest that another steroid is the physiological regulator of the SR. The 3D model predicts that mutation of Glu-346 to Gln will increase the affinity of testosterone for the amphioxus SR and elucidate the evolution of steroid-binding to nuclear receptors.

  6. Equilibrium modeling of trace metal transport from Duluth complex rockpile

    SciTech Connect

    Kelsey, P.D.; Klusman, R.W.; Lapakko, K.

    1996-12-31

    Geochemical modeling was used to predict weathering processes and the formation of trace metal-adsorbing secondary phases in a waste rock stockpile containing Cu-Ni ore mined from the Duluth Complex, MN. Amorphous ferric hydroxide was identified as a secondary phase within the pile, from observation and geochemical modeling of the weathering process. Due to the high content of cobalt, copper, nickel, and zinc in the primary minerals of the waste rock and in the effluent, it was hypothesized that the predicted and observed precipitant ferric hydroxide would adsorb small quantities of these trace metals. This was verified using sequential extractions and simulated using adsorption geochemical modeling. It was concluded that the trace metals were adsorbed in small quantities, and adsorption onto the amorphous ferric hydroxide was in decreasing order of Cu > Ni > Zn > Co. The low degree of adsorption was due to low pH water and competition for adsorption sites with other ions in solution.

  7. A two-level complex network model and its application

    NASA Astrophysics Data System (ADS)

    Yang, Jianmei; Wang, Wenjie; Chen, Guanrong

    2009-06-01

    This paper investigates the competitive relationship and rivalry of industrial markets, using Chinese household electrical appliance firms as a platform for the study. Common complex network models are one-level networks in a layered classification, whereas this paper formulates and evaluates a new two-level network model, in which the first level is the whole unweighted-undirected network, useful for macro-analyzing the industrial market structure, while the second level is a local weighted-directed network capable of micro-analyzing the inter-firm rivalry in the market. It is believed that the relationship is determined by objective factors whereas the action is rather subjective, and the idea in this paper lies in that the objective relationship and the subjective action subjected to this relationship are considered simultaneously but at different levels of the model, which may be applicable to many real applications.
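
    The layered construction can be sketched with networkx; the firms, edges and weights below are invented placeholders:

        import networkx as nx

        # Level 1: unweighted, undirected network of competitive relationships
        # (macro view of the market structure).
        macro = nx.Graph()
        macro.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")])

        # Level 2: a local weighted, directed network of rivalry actions;
        # weight(u, v) measures the intensity of u's action against v.
        micro = nx.DiGraph()
        micro.add_weighted_edges_from([("A", "B", 0.8), ("B", "A", 0.3),
                                       ("A", "C", 0.5)])

        print(nx.density(macro))          # macro-level structural measure
        print(micro["A"]["B"]["weight"])  # micro-level rivalry intensity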

  8. Rumor spreading model considering hesitating mechanism in complex social networks

    NASA Astrophysics Data System (ADS)

    Xia, Ling-Ling; Jiang, Guo-Ping; Song, Bo; Song, Yu-Rong

    2015-11-01

    The study of rumor spreading has become an important issue on complex social networks. On the basis of prior studies, we propose a modified susceptible-exposed-infected-removed (SEIR) model with hesitating mechanism by considering the attractiveness and fuzziness of the content of rumors. We derive mean-field equations to characterize the dynamics of the SEIR model on both homogeneous and heterogeneous networks. Then a steady-state analysis is conducted to investigate the spreading threshold and the final rumor size. Simulations on both artificial and real networks show that a decrease of fuzziness can effectively increase the spreading threshold of the SEIR model and reduce the maximum rumor influence. In addition, the spreading threshold is independent of the attractiveness of the rumor. Simulation results also show that the speed of rumor spreading obeys the relation "BA network > WS network", whereas the final scale of spreading obeys the opposite relation.
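
    A generic mean-field sketch of an SEIR-type rumor model with a hesitating (exposed) class on a homogeneous network; the rate expressions are a plausible form chosen for illustration, not the paper's exact equations:

        import numpy as np
        from scipy.integrate import solve_ivp

        def seir_rumor(t, y, k, lam, theta, delta):
            # S ignorant, E hesitating, I spreader, R stifler; mean degree k.
            # A hesitator starts spreading at rate theta (set by the rumor's
            # attractiveness/fuzziness) and loses interest at rate delta.
            S, E, I, R = y
            exposed = k * lam * S * I
            return [-exposed,
                    exposed - (theta + delta) * E,
                    theta * E - delta * I,
                    delta * (E + I)]

        sol = solve_ivp(seir_rumor, (0, 60), [0.999, 0.0, 0.001, 0.0],
                        args=(6, 0.1, 0.4, 0.2), max_step=0.1)
        print(f"final rumor size: {sol.y[3, -1]:.3f}")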

  9. Simultaneous application of dissolution/precipitation and surface complexation/surface precipitation modeling to contaminant leaching.

    PubMed

    Apul, Defne S; Gardner, Kevin H; Eighmy, T Taylor; Fällman, Ann-Marie; Comans, Rob N J

    2005-08-01

    This paper discusses the modeling of anion and cation leaching from complex matrixes such as weathered steel slag. The novelty of the method is its simultaneous application of the theoretical models for solubility, competitive sorption, and surface precipitation phenomena to a complex system. Selective chemical extractions, pH dependent leaching experiments, and geochemical modeling were used to investigate the thermodynamic equilibrium of 12 ions (As, Ca, Cr, Ba, SO4, Mg, Cd, Cu, Mo, Pb, V, and Zn) with aqueous complexes, soluble solids, and sorptive surfaces in the presence of 12 background analytes (Al, Cl, Co, Fe, K, Mn, Na, Ni, Hg, NO3, CO3, and Ba). Modeling results show that surface complexation and surface precipitation reactions limit the aqueous concentrations of Cd, Zn, and Pb in an environment where Ca, Mg, Si, and CO3 dissolve from soluble solids and compete for sorption sites. The leaching of SO4, Cr, As, Si, Ca, and Mg appears to be controlled by corresponding soluble solids. PMID:16124310

  10. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has paid attention to prediction for analog circuits, and the few existing methods lack a connection with circuit analysis when extracting and calculating features, so that fault indicator (FI) calculation often lacks rationality, degrading prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex field modeling. Since faults of single components are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and implements complex field modeling. Then, using an established parameter scanning model related to the complex field, it analyzes the relationship between parameter variation and degeneration of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of analog circuits' single components. Finally, it uses a particle filter (PF) to update the parameters of the model and predicts the remaining useful performance (RUP) of analog circuits' single components. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The foregoing conclusions are verified by experiments. PMID:25147853
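
    A minimal bootstrap particle filter of the kind used for the parameter update, applied here to a generic exponential degeneration trend; the model form, noise levels and observations are our assumptions, not the paper's:

        import numpy as np

        rng = np.random.default_rng(3)
        n_p = 500
        particles = rng.uniform(0.0, 0.2, n_p)      # prior over trend rate b
        weights = np.ones(n_p) / n_p

        def fi_model(b, t):
            return np.exp(b * t)                    # assumed FI trend model

        # Sequentially assimilate noisy fault-indicator observations.
        for t, z in [(1, 1.06), (2, 1.12), (3, 1.18), (4, 1.26)]:
            weights *= np.exp(-0.5 * ((z - fi_model(particles, t)) / 0.02) ** 2)
            weights /= weights.sum()
            if 1.0 / np.sum(weights ** 2) < n_p / 2:   # resample on degeneracy
                idx = rng.choice(n_p, size=n_p, p=weights)
                particles = particles[idx] + rng.normal(0.0, 0.002, n_p)
                weights = np.ones(n_p) / n_p

        print(np.sum(weights * particles))  # posterior mean of the trend rate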

  11. Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids

    SciTech Connect

    Forest, Mark Gregory

    2014-05-06

    The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the longtime behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.

  12. Characterizing and Modeling the Noise and Complex Impedance of Feedhorn-Coupled TES Polarimeters

    SciTech Connect

    Appel, J. W.; Beall, J. A.; Essinger-Hileman, T.; Parker, L. P.; Staggs, S. T.; Visnjic, C.; Zhao, Y.; Austermann, J. E.; Halverson, N. W.; Henning, J. W.; Simon, S. M.; Becker, D.; Britton, J.; Cho, H. M.; Hilton, G. C.; Irwin, K. D.; Niemack, M. D.; Yoon, K. W.; Benson, B. A.; Bleem, L. E.

    2009-12-16

    We present results from modeling the electrothermal performance of feedhorn-coupled transition edge sensor (TES) polarimeters under development for use in cosmic microwave background (CMB) polarization experiments. Each polarimeter couples radiation from a corrugated feedhorn through a planar orthomode transducer, which transmits power from orthogonal polarization modes to two TES bolometers. We model our TES with two- and three-block thermal architectures. We fit the complex impedance data at multiple points in the TES transition. From the fits, we predict the noise spectra. We present comparisons of these predictions to the data for two TESes on a prototype polarimeter.

  13. Using Gaussian Processes for the Calibration and Exploration of Complex Computer Models

    NASA Astrophysics Data System (ADS)

    Coleman-Smith, C. E.

    Cutting edge research problems require the use of complicated and computationally expensive computer models. I will present a practical overview of the design and analysis of computer experiments in high energy nuclear physics and astrophysics. The aim of these experiments is to infer credible ranges for certain fundamental parameters of the underlying physical processes through the analysis of model output and experimental data. To be truly useful computer models must be calibrated against experimental data. Gaining an understanding of the response of expensive models across the full range of inputs can be a slow and painful process. Gaussian Process emulators can be an efficient and informative surrogate for expensive computer models and prove to be an ideal mechanism for exploring the response of these models to variations in their inputs. A sensitivity analysis can be performed on these model emulators to characterize and quantify the relationship between model input parameters and predicted observable properties. The result of this analysis provides the user with information about which parameters are most important and most likely to affect the prediction of a given observable. Sensitivity analysis allows us to identify which model parameters can be most efficiently constrained by the given observational data set. In this thesis I describe a range of techniques for the calibration and exploration of the complex and expensive computer models so common in modern physics research. These statistical methods are illustrated with examples drawn from the fields of high energy nuclear physics and galaxy formation.
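
    A compact illustration of the emulator idea with scikit-learn, where a cheap analytic function stands in for the expensive simulator:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel

        def expensive_model(x):              # stand-in for a slow simulation
            return np.sin(3 * x) + 0.5 * x

        # Train the emulator on a small design of model runs.
        X = np.linspace(0.0, 2.0, 8).reshape(-1, 1)
        y = expensive_model(X).ravel()
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                      normalize_y=True).fit(X, y)

        # The emulator now predicts the response surface, with uncertainty,
        # at a fraction of the cost of running the model itself.
        mean, std = gp.predict(np.array([[0.85]]), return_std=True)
        print(mean[0], std[0])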

  14. Modeling and analysis of pinhole occulter experiment

    NASA Technical Reports Server (NTRS)

    Ring, J. R.

    1986-01-01

    The objectives were to improve pointing control system implementation by converting the dynamic compensator from a continuous-domain representation to a discrete one; to determine pointing stability sensitivities to sensor and actuator errors by adding sensor and actuator error models to TREETOPS and by developing an error budget for meeting pointing stability requirements; and to determine pointing performance for alternate mounting bases (the space station, for example).
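
    The continuous-to-discrete compensator conversion mentioned first is a standard operation; a minimal sketch using scipy.signal.cont2discrete is given below. The compensator coefficients and sample rate are hypothetical, not those of the pinhole occulter study.

```python
# Discretizing a continuous-domain compensator (illustrative values only).
import numpy as np
from scipy.signal import cont2discrete

# Hypothetical lead compensator C(s) = (s + 2) / (s + 20).
num = [1.0, 2.0]
den = [1.0, 20.0]
dt = 0.01  # assumed 100 Hz sample rate

# The bilinear (Tustin) mapping preserves low-frequency phase reasonably well.
num_d, den_d, dt_out = cont2discrete((num, den), dt, method="bilinear")
print("discrete numerator:  ", np.ravel(num_d))
print("discrete denominator:", den_d)
```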

  15. Flexible robot control: Modeling and experiments

    NASA Technical Reports Server (NTRS)

    Oppenheim, Irving J.; Shimoyama, Isao

    1989-01-01

    Described here is a model and its use in experimental studies of flexible manipulators. The analytical model uses the equivalent of Rayleigh's method to approximate the displaced shape of a flexible link as the static elastic displacement which would occur under end rotations as applied at the joints. The generalized coordinates are thereby expressly compatible with joint motions and rotations in serial link manipulators, because the amplitude variables are simply the end rotations between the flexible link and the chord connecting the end points. The equations for the system dynamics are quite simple and can readily be formulated for the multi-link, three-dimensional case. When the flexible links possess mass and (polar moment of) inertia which are small compared to the concentrated mass and inertia at the joints, the analytical model is exact and displays the additional advantage of reduction in system dimension for the governing equations. Four series of pilot tests have been completed. Studies on a planar single-link system were conducted at Carnegie-Mellon University, and tests conducted at Toshiba Corporation on a planar two-link system were then incorporated into the study. A single link system under three-dimensional motion, displaying biaxial flexure, was then tested at Carnegie-Mellon.

  16. Multiagent model and mean field theory of complex auction dynamics

    NASA Astrophysics Data System (ADS)

    Chen, Qinghua; Huang, Zi-Gang; Wang, Yougui; Lai, Ying-Cheng

    2015-09-01

    Recent years have witnessed a growing interest in analyzing a variety of socio-economic phenomena using methods from statistical and nonlinear physics. We study a class of complex systems arising from economics, the lowest unique bid auction (LUBA) systems, a recently emerged class of online auction games. Through analyzing large empirical data sets of LUBA, we identify a general feature of the bid price distribution: an inverted J-shaped function with exponential decay in the large bid price region. To account for the distribution, we propose a multi-agent model in which each agent bids stochastically in the field of winner’s attractiveness, and develop a theoretical framework to obtain analytic solutions of the model based on mean field analysis. The theory produces bid-price distributions that are in excellent agreement with those from the real data. Our model and theory capture the essential features of human behaviors in the competitive environment as exemplified by LUBA, and may provide significant quantitative insights into complex socio-economic phenomena.
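
    The flavor of such a multi-agent simulation can be captured in a few lines. The sketch below is a deliberately simplified LUBA: each agent draws a bid from a fixed stochastic rule (a geometric distribution standing in for the paper's attractiveness-field dynamics), and the lowest bid placed by exactly one agent wins.

```python
# Toy lowest-unique-bid auction; the geometric bid rule is a stand-in,
# not the mean-field model of the paper.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_auctions, max_bid = 200, 2000, 100

def run_auction():
    # Hypothetical bid rule: geometric preference for low prices.
    bids = np.clip(rng.geometric(p=0.05, size=n_agents), 1, max_bid)
    counts = np.bincount(bids, minlength=max_bid + 1)
    unique = np.flatnonzero(counts == 1)
    winner = unique.min() if unique.size else None   # lowest unique bid
    return bids, winner

all_bids = np.concatenate([run_auction()[0] for _ in range(n_auctions)])
hist = np.bincount(all_bids, minlength=max_bid + 1)[1:]
print("modal bid price:", hist.argmax() + 1)  # compare shape with empirical data
```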

  17. Preconditioning the bidomain model with almost linear complexity

    NASA Astrophysics Data System (ADS)

    Pierre, Charles

    2012-01-01

    The bidomain model is widely used in electrocardiology to simulate the spreading of excitation in the myocardium and electrocardiograms. It consists of a system of two parabolic reaction-diffusion equations coupled with an ODE system. Its discretisation displays an ill-conditioned system matrix to be inverted at each time step: simulations based on the bidomain model are therefore associated with high computational costs. In this paper we propose a preconditioning for the bidomain model, either for an isolated heart or in an extended framework including a coupling with the surrounding tissues (the torso). The preconditioning is based on a formulation of the discrete problem that is shown to be symmetric positive semi-definite. A block LU decomposition of the system together with a heuristic approximation (referred to as the monodomain approximation) are the key ingredients for the preconditioning definition. Numerical results are provided for two test cases: a 2D test case on a realistic slice of the thorax based on a segmented heart medical image geometry, and a 3D test case involving a small cubic slab of tissue with orthotropic anisotropy. The analysis of the resulting computational cost (both in terms of CPU time and of iteration number) shows an almost linear complexity with the problem size, i.e. of type n log^α(n) (for some constant α), which is the optimal complexity for such problems.
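
    The preconditioning strategy, using a cheaper approximate problem to accelerate an iterative solve, can be illustrated generically. The sketch below is not the paper's monodomain preconditioner: it wraps an incomplete-LU solve of a model SPD system as a preconditioner for conjugate gradients and counts iterations with and without it.

```python
# Generic preconditioned-CG sketch on a model SPD system (a 2-D Laplacian
# standing in for the discretized bidomain matrix; illustrative only).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 60
t1d = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.kronsum(t1d, t1d).tocsc()
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-3)                  # cheap approximate solve
M = spla.LinearOperator(A.shape, matvec=ilu.solve)  # preconditioner action

iters = {"plain": 0, "preconditioned": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

spla.cg(A, b, callback=counter("plain"))
spla.cg(A, b, M=M, callback=counter("preconditioned"))
print(iters)   # the preconditioned solve should need far fewer iterations
```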

  18. Experiments and Models for Polymeric Microsphere Foams

    NASA Technical Reports Server (NTRS)

    Pipes, R. Byron; Kyu, Thein

    2005-01-01

    The current project was performed under the direction of Dr. Byron Pipes as its lead investigator from January 2001 to August 2004. With the permission of NASA, the project was transferred to Dr. Thein Kyu as the principal investigator for the period of September 2004 - June 2005. There were two major thrust areas in the original proposal: (1) experimental characterization and kinematics of foam structure formation and (2) determination of the mechanical, physical, and thermal properties, although these thrust areas were further subdivided into 7 tasks. The present project has been directed primarily to elucidating the kinematics of micro-foam formation (tasks 1 and 3) and to characterizing micro-foam structures, since control of the micro-structure of these foams is of paramount importance in determining their physical, mechanical and thermal properties. The first thrust area was accomplished in a timely manner; however, the second thrust area of foam properties (tasks 2, 4-7) has yet to be completed, because the kinematics of foam structure formation turned out to be extremely complex and thus consumed more time than had been anticipated. As will be reported in what follows, the present studies have greatly enhanced the in-depth understanding of the mechanisms and kinematics of micro-foam formation from solid powders. However, in order to implement all objectives of the second thrust area regarding investigations of mechanical, physical, and thermal properties and establishment of structure-property correlations for the foams, the project needs additional time and resources. The technical highlights of the accomplishments are summarized as follows. The present study represents a first approach to understanding the complexities that act together in the powder foaming process to achieve the successful inflation of polyimide microstructures. This type of study is novel as no prior work had dissected the fundamentals that govern the inflation

  19. A complex mathematical model of the human menstrual cycle.

    PubMed

    Reinecke, Isabel; Deuflhard, Peter

    2007-07-21

    Despite the fact that more than 100 million women worldwide use birth control pills and that half of the world's population is concerned, the menstrual cycle has so far received comparatively little attention in the field of mathematical modeling. The term menstrual cycle comprises the processes of the control system in the female body that, under healthy circumstances, lead to ovulation at regular intervals, thus making reproduction possible. If this is not the case or ovulation is not desired, the question arises how this control system can be influenced, for example, by hormonal treatments. In order to be able to cover a vast range of external manipulations, the mathematical model must comprise the main components where the processes belonging to the menstrual cycle occur, as well as their interrelations. A system of differential equations serves as the mathematical model, describing the dynamics of hormones, enzymes, receptors, and follicular phases. Since the processes take place in different parts of the body and influence each other with a certain delay, passing over to delay differential equations is deemed a reasonable step. The pulsatile release of the gonadotropin-releasing hormone (GnRH) is controlled by a complex neural network. We choose to model the pulse time points of this GnRH pulse generator by a stochastic process. Focus in this paper is on the model development. This rather elaborate mathematical model is the basis for a detailed analysis and could be helpful for possible drug design. PMID:17448501

  20. An ice sheet model of reduced complexity for paleoclimate studies

    NASA Astrophysics Data System (ADS)

    Neff, Basil; Born, Andreas; Stocker, Thomas F.

    2016-04-01

    IceBern2D is a vertically integrated ice sheet model to investigate the ice distribution on long timescales under different climatic conditions. It is forced by simulated fields of surface temperature and precipitation of the Last Glacial Maximum and present-day climate from a comprehensive climate model. This constant forcing is adjusted to changes in ice elevation. Due to its reduced complexity and computational efficiency, the model is well suited for extensive sensitivity studies and ensemble simulations on extensive temporal and spatial scales. It shows good quantitative agreement with standardized benchmarks on an artificial domain (EISMINT). Present-day and Last Glacial Maximum ice distributions in the Northern Hemisphere are also simulated with good agreement. Glacial ice volume in Eurasia is underestimated due to the lack of ice shelves in our model. The efficiency of the model is utilized by running an ensemble of 400 simulations with perturbed model parameters and two different estimates of the climate at the Last Glacial Maximum. The sensitivity to the imposed climate boundary conditions and the positive degree-day factor β, i.e., the surface mass balance, outweighs the influence of parameters that disturb the flow of ice. This justifies the use of simplified dynamics as a means to achieve computational efficiency for simulations that cover several glacial cycles. Hysteresis simulations over 5 million years illustrate the stability of the simulated ice sheets to variations in surface air temperature.
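
    A minimal sketch of the positive degree-day scheme to which the factor β belongs is given below; the function, its parameter values, and the crude accumulation rule are illustrative, not those of IceBern2D.

```python
# Positive degree-day (PDD) surface mass balance sketch. beta and the
# lapse rate are illustrative, not the IceBern2D values.
import numpy as np

def pdd_mass_balance(t_monthly_c, precip_m, beta=0.006, lapse=-0.0065, dz=0.0):
    """Annual mass balance (m ice eq.) from 12 monthly mean temperatures.

    t_monthly_c : monthly mean temperatures (deg C) at the reference level
    precip_m    : annual precipitation (m water equivalent)
    beta        : melt per positive degree-day (m ice / (deg C * day))
    lapse       : temperature lapse rate (deg C / m), applied over dz
    """
    t = np.asarray(t_monthly_c) + lapse * dz       # adjust to ice elevation
    days = 365.0 / 12.0
    pdd = np.sum(np.maximum(t, 0.0) * days)        # positive degree-days
    accumulation = precip_m * np.mean(t < 0.0)     # crude: snow in cold months
    return accumulation - beta * pdd

print(pdd_mass_balance([-20, -18, -12, -5, 0, 4, 8, 6, 1, -6, -14, -19], 0.4))
```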

  1. Modeling and Visualizing Flow of Chemical Agents Across Complex Terrain

    NASA Technical Reports Server (NTRS)

    Kao, David; Kramer, Marc; Chaderjian, Neal

    2005-01-01

    Release of chemical agents across complex terrain presents a real threat to homeland security. Modeling and visualization tools are being developed that capture fluid flow-terrain interaction as well as the downstream flow paths of point releases. These analytic tools, when coupled with UAV atmospheric observations, provide predictive capabilities that allow for rapid emergency response as well as for developing a comprehensive preemptive counter-threat evacuation plan. The visualization tools involve high-end computing and massively parallel processing combined with texture mapping. We demonstrate our approach across a mountainous portion of Northern California under two contrasting meteorological conditions. Animations depicting flow over this geographical location provide immediate assistance in decision support and crisis management.

  2. Order parameter in complex dipolar structures: Microscopic modeling

    NASA Astrophysics Data System (ADS)

    Prosandeev, S.; Bellaiche, L.

    2008-02-01

    Microscopic models have been used to reveal the existence of an order parameter that is associated with many complex dipolar structures in magnets and ferroelectrics. This order parameter involves a double cross product of the local dipoles with their positions. It provides a measure of subtle microscopic features, such as the helicity of the two domains inherent to onion states, curvature of the dipolar pattern in flower states, or characteristics of sets of vortices with opposite chirality (e.g., distance between the vortex centers and/or the magnitude of their local dipoles).

  3. The modeling of complex continua: Fundamental obstacles and grand challenges

    SciTech Connect

    Not Available

    1993-01-01

    The research is divided into four areas: discontinuities and adaptive computation, chaotic flows, dispersion of flow in porous media, and nonlinear waves and nonlinear materials. The research program has emphasized innovative computation and theory. The approach depends on abstracting mathematical concepts and computational methods from individual applications to a wide range of problems involving complex continua. The generic difficulties in the modeling of continua that guide this abstraction are multiple length and time scales, microstructures (bubbles, droplets, vortices, crystal defects), and chaotic or random phenomena described by a statistical formulation.

  4. MHD Modeling in Complex 3D Geometries: Towards Predictive Simulation of SIHI Current Drive

    NASA Astrophysics Data System (ADS)

    Hansen, Christopher James

    The HIT-SI experiment studies Steady Inductive Helicity Injection (SIHI) for the purpose of forming and sustaining a spheromak plasma. A spheromak is formed in a nearly axisymmetric flux conserver, with a bow tie cross section, by means of two semi-toroidal injectors. The plasma-facing surfaces of the device, which are made of copper for its low resistivity, are covered in an insulating coating in order to operate in a purely inductive manner. Following formation, the spheromak flux and current are increased during a quiescent period marked by a decrease in the global mode activity. A proposed mechanism, Imposed Dynamo Current Drive (IDCD), is expected to be responsible for this phase of quiescent current drive. Due to the geometric complexity of the experiment, previous numerical modeling efforts have used a simplified geometry that excludes the injector volumes from the simulated domain. The effect of helicity injection is then modeled by boundary conditions on this reduced plasma volume. The work presented here has explored and developed more complete computational models of the HIT-SI device. This work is separated into 3 distinct but complementary areas: 1) Development of a 3D MHD equilibrium code that can incorporate the non-axisymmetric injector fields present in HIT-SI and investigation of equilibria of interest during spheromak sustainment. 2) A 2D axisymmetric MHD equilibrium code that was used to explore reduced order models for mean-field evolution using equations derived from IDCD theory including coupling to 3D equilibria. 3) A 3D time-dependent non-linear MHD code that is capable of modeling the entire plasma volume including dynamics within the injectors. Although HIT-SI was the motivation for, and experiment studied in this research, the tools and methods developed are general --- allowing their application to a broad range of magnetic confinement experiments. These tools constitute a significant advance for modeling plasma dynamics in devices with

  5. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular, for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: (1) development of a spectral code for moving boundary problems; and (2) diffusivity measurements on concentrated and supersaturated TGS solutions. Progress made during this seventh half-year period is reported.

  6. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1993-01-01

    The main goals of the research consist of the development of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and diffusivity measurements on concentrated and supersaturated TGS solutions. During this eighth half-year period, good progress was made on these tasks.

  7. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1992-01-01

    The development is examined of mathematical tools and measurement of transport properties necessary for high fidelity modeling of crystal growth from the melt and solution, in particular for the Bridgman-Stockbarger growth of mercury cadmium telluride (MCT) and the solution growth of triglycine sulphate (TGS). The tasks include development of a spectral code for moving boundary problems, kinematic viscosity measurements on liquid MCT at temperatures close to the melting point, and diffusivity measurements on concentrated and supersaturated TGS solutions. A detailed description is given of the work performed for these tasks, together with a summary of the resulting publications and presentations.

  9. Does model performance improve with complexity? A case study with three hydrological models

    NASA Astrophysics Data System (ADS)

    Orth, Rene; Staudinger, Maria; Seneviratne, Sonia I.; Seibert, Jan; Zappa, Massimiliano

    2015-04-01

    In recent decades considerable progress has been made in climate model development. Following the massive increase in computational power, models became more sophisticated. At the same time also simple conceptual models have advanced. In this study we validate and compare three hydrological models of different complexity to investigate whether their performance varies accordingly. For this purpose we use runoff and also soil moisture measurements, which allow a truly independent validation, from several sites across Switzerland. The models are calibrated in similar ways with the same runoff data. Our results show that the more complex models HBV and PREVAH outperform the simple water balance model (SWBM) in case of runoff but not for soil moisture. Furthermore the most sophisticated PREVAH model shows an added value compared to the HBV model only in case of soil moisture. Focusing on extreme events we find generally improved performance of the SWBM during drought conditions and degraded agreement with observations during wet extremes. For the more complex models we find the opposite behavior, probably because they were primarily developed for prediction of runoff extremes. As expected given their complexity, HBV and PREVAH have more problems with over-fitting. All models show a tendency towards better performance in lower altitudes as opposed to (pre-) alpine sites. The results vary considerably across the investigated sites. In contrast, the different metrics we consider to estimate the agreement between models and observations lead to similar conclusions, indicating that the performance of the considered models is similar at different time scales as well as for anomalies and long-term means. We conclude that added complexity does not necessarily lead to improved performance of hydrological models, and that performance can vary greatly depending on the considered hydrological variable (e.g. runoff vs. soil moisture) or hydrological conditions (floods vs. droughts).
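
    For readers unfamiliar with the agreement metrics such comparisons rely on, the sketch below computes one standard choice, the Nash-Sutcliffe efficiency, on synthetic runoff data; the study's exact metric set is not reproduced here.

```python
# Nash-Sutcliffe efficiency (NSE), a common hydrological agreement metric,
# shown on synthetic data as a representative example.
import numpy as np

def nse(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(1)
obs_runoff = rng.gamma(2.0, 1.5, size=365)           # synthetic daily runoff
sim_runoff = obs_runoff + rng.normal(0, 0.5, 365)    # an imperfect model
print(f"NSE = {nse(sim_runoff, obs_runoff):.3f}")    # 1.0 would be perfect
```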

  10. Recirculating Planar Magnetron Modeling and Experiments

    NASA Astrophysics Data System (ADS)

    Franzi, Matthew; Gilgenbach, Ronald; Hoff, Brad; French, Dave; Lau, Y. Y.

    2011-10-01

    We present simulations and initial experimental results of a new class of crossed field device: Recirculating Planar Magnetrons (RPM). Two geometries of RPM are being explored: 1) Dual planar-magnetrons connected by a recirculating section with axial magnetic field and transverse electric field, and 2) Planar cathode and anode-cavity rings with radial magnetic field and axial electric field. These RPMs have numerous advantages for high power microwave generation by virtue of larger area cathodes and anodes. The axial B-field RPM can be configured in either the conventional or inverted (faster startup) configuration. Two and three-dimensional EM PIC simulations show rapid electron spoke formation and microwave oscillation in pi-mode. Smoothbore prototype axial-B RPM experiments are underway using the MELBA accelerator at parameters of -300 kV, 1-20 kA and pulselengths of 0.5-1 microsecond. Implementation and operation of the first RPM slow wave structure, operating at 1 GHz, will be discussed. Research supported by AFOSR, AFRL, L-3 Communications, and Northrop Grumman.

  11. Statistical analysis of target acquisition sensor modeling experiments

    NASA Astrophysics Data System (ADS)

    Deaver, Dawne M.; Moyer, Steve

    2015-05-01

    The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize the uncertainty of the physics-based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners that recommend the number and types of test samples required to yield a statistically significant result.

  12. The Eemian climate simulated by two models of different complexities

    NASA Astrophysics Data System (ADS)

    Nikolova, Irina; Yin, Qiuzhen; Berger, Andre; Singh, Umesh; Karami, Pasha

    2013-04-01

    The Eemian period, also known as MIS-5, experienced a warmer-than-today climate, reduced ice sheets, and a significant sea-level rise. These features make the Eemian appropriate for evaluating climate models forced with astronomical and greenhouse gas forcings different from today's. In this work, we present the Eemian climate simulated by two climate models of different complexities, LOVECLIM (the LLN Earth system model of intermediate complexity) and CCSM3 (the NCAR atmosphere-ocean general circulation model). Feedbacks from sea ice, vegetation, monsoon and ENSO phenomena are discussed to explain the regional similarities and dissimilarities in both models with respect to the pre-industrial (PI) climate. Significant warming (cooling) over almost all the continents during boreal summer (winter) leads to a largely increased (reduced) seasonal contrast in the northern (southern) hemisphere, mainly due to the much higher (lower) insolation received by the whole Earth in boreal summer (winter). The Arctic is warmer than at PI through the whole year, resulting from its much higher summer insolation and its remnant effect in the following fall and winter through the interactions between atmosphere, ocean and sea ice. Regional discrepancies exist in the sea-ice formation zones between the two models. Excessive sea-ice formation in CCSM3 results in intense regional cooling. In both models an intensified African monsoon and vegetation feedback are responsible for the cooling during summer in North Africa and on the Arabian Peninsula. Over India the precipitation maximum is found further west, while in Africa the precipitation maximum migrates further north. Trees and grassland expand north in the Sahel/Sahara, trees being more abundant in the results from LOVECLIM than from CCSM3. A mix of forest and grassland occupies the continents and expands deep into the high northern latitudes, in line with proxy records. Desert areas reduce significantly in the Northern Hemisphere, but increase in North

  13. Complex 3D crustal model of Asia region

    NASA Astrophysics Data System (ADS)

    Baranov, A. A.

    2009-04-01

    Southern and Central Asia is a tectonically complex region, shaped by the great collision between the Asian and Indian plates, and its evolution is strongly related to the active subduction along the Pacific border. The previous global crustal model (CRUST 2.0) had a resolution of 2x2 degrees for the Asia region. The model AsCRUST-08 (Baranov et al., 2008) of Central and Southern Asia, with a resolution of 1x1 degree, was substantially improved in several regions, and we built an integrated crustal model for the Asia region. We also added several regions of northern Eurasia, such as Mongolia and Kazakhstan. For regions such as the Red Sea and Dead Sea, northern China, and southern India, we built regional maps with more detailed resolution. Data from deep seismic reflection, refraction, and receiver-function studies in published papers were used. The existing data were verified and cross-checked. As a first result, we demonstrate a new Moho map for the region. The complex crustal model consists of three layers: upper, middle and lower crust. Besides the depths to the boundaries, we provide average P-wave velocities in the upper, middle and lower parts of the crystalline crust. The limits for Vp are: 5.5-6.2 km/s for the upper crust, 6.0-6.6 km/s for the middle crust, and 6.6-7.5 km/s for the lower crust. We also converted the seismic P-velocity data to densities in the crustal layers using rheological properties and geological data. Conclusions: the Moho map and the velocity structure of the crust are much more heterogeneous than in the previous models CRUST 2.0 (Bassin et al., 2000) and CRUST 5.1 (Mooney et al., 1998). Our model offers a starting point for numerical modeling of deep structures by allowing correction for crustal effects beforehand, and helps resolve the trade-off with mantle heterogeneities. This model will be used as a starting point in gravity modeling of the lithosphere and mantle structure. [1] A. Baranov et al., First steps towards a new crustal model of South and Central Asia, Geophysical Research Abstracts, Vol. 10, EGU2008-A-05313

  14. Indian Consortia Models: FORSA Libraries' Experiences

    NASA Astrophysics Data System (ADS)

    Patil, Y. M.; Birdie, C.; Bawdekar, N.; Barve, S.; Anilkumar, N.

    2007-10-01

    With rising journal prices, shrinking library budgets and cuts in journal subscriptions over the years, Indian library professionals have faced a major challenge in coping with the proliferation of electronic information resources. There have been sporadic efforts by different groups of libraries to form consortia at different levels. The types of consortia identified are generally based on the various models evolved in India, in a variety of forms depending upon the participants' affiliations and funding sources. Indian astronomy library professionals have formed a group called the Forum for Resource Sharing in Astronomy and Astrophysics (FORSA), which falls under `Open Consortia', wherein participants are affiliated with different government departments. This is a model where professionals willingly come forward and actively support consortium formation, so that everyone benefits. As such, FORSA has realized four consortia, viz. the Nature Online Consortium; the Indian Astrophysics Consortium for physics/astronomy journals of Springer/Kluwer; the Consortium for Scientific American Online Archive (EBSCO); and the Open Consortium for Lecture Notes in Physics (Springer), which are discussed briefly.

  15. The OECI model: the CRO Aviano experience.

    PubMed

    Da Pieve, Lucia; Collazzo, Raffaele; Masutti, Monica; De Paoli, Paolo

    2015-01-01

    In 2012, the "Centro di Riferimento Oncologico" (CRO) National Cancer Institute joined the accreditation program of the Organisation of European Cancer Institutes (OECI) and was one of the first institutes in Italy to receive recognition as a Comprehensive Cancer Center. At the end of the project, a strengths, weaknesses, opportunities, and threats (SWOT) analysis aimed at identifying the pros and cons, both for the institute and of the accreditation model in general, was performed. The analysis shows significant strengths, such as the affinity with other improvement systems and current regulations, and the focus on a multidisciplinary approach. The proposed suggestions for improvement concern mainly the structure of the standards and aim to facilitate the assessment, benchmarking, and sharing of best practices. The OECI accreditation model provided a valuable executive tool and a framework in which we can identify several important development projects. An additional impact for our institute is the participation in the project BenchCan, of which the OECI is lead partner. PMID:27096265

  16. Experiments and Valve Modelling in Thermoacoustic Device

    NASA Astrophysics Data System (ADS)

    Duthil, P.; Baltean Carlès, D.; Bétrancourt, A.; François, M. X.; Yu, Z. B.; Thermeau, J. P.

    2006-04-01

    In a so-called heat-driven thermoacoustic refrigerator, using either a pulse tube or a lumped boost configuration, heat pumping is induced by Stirling-type thermodynamic cycles within the regenerator. The phase between acoustic pressure and flow rate throughout must then be close to that of a purely progressive wave. The study presented here describes the experimental characterization of passive elements such as valves, tubes and tanks, which are likely to act on this phase relationship when included in the propagation line of the wave resonator. In order to characterize these elements acoustically, systematic measurements of the acoustic field are performed while varying several parameters: mean pressure, oscillation frequency, and supplied heat power. The acoustic waves are generated by a thermoacoustic prime mover driving a pulse tube refrigerator. The experimental results are then compared with the solutions obtained from various one-dimensional linear models including nonlinear correction factors. It turns out that, for a non-symmetrical valve and large dissipative effects, the measurements disagree with the linear modelling, and nonlinear behaviour of this particular element is demonstrated.

  17. The independent spreaders involved SIR Rumor model in complex networks

    NASA Astrophysics Data System (ADS)

    Qian, Zhen; Tang, Shaoting; Zhang, Xiao; Zheng, Zhiming

    2015-07-01

    Recent studies of rumor or information diffusion processes in complex networks show that, in contrast to the traditional understanding, individuals who participate in rumor spreading within one network do not always get the rumor from their neighbors. They can obtain the rumor from different sources, such as online social networks, and then publish it on their personal sites. In our paper, we discuss this phenomenon in complex networks by adopting the concept of independent spreaders. Rather than getting the rumor from neighbors, independent spreaders learn it from other channels. We further develop the classic "ignorant-spreaders-stiflers" or SIR model of the rumor diffusion process in complex networks. A steady-state analysis is conducted to investigate the final extent of the rumor spreading under various spreading rates, stifling rates, densities of independent spreaders, and average degrees of the network. Results show that independent spreaders effectively enhance the rumor diffusion process by delivering the rumor to regions far away from the currently infected regions. And though the rumor spreading process in SF networks is faster than that in ER networks, the final size of the rumor spreading in ER networks is larger than that in SF networks.
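
    A minimal agent-based rendering of this extended SIR rumor model is sketched below: on top of neighbor-to-neighbor transmission, any ignorant node can become a spreader through an outside channel at a small rate. The rates and the network are illustrative, not the paper's parameterization.

```python
# SIR rumor model with independent spreaders (illustrative rates):
# ignorants (I) hear the rumor from spreader neighbors with rate lam,
# or from outside channels with rate mu; spreaders (S) become stiflers
# (R) with rate alpha.
import random
import networkx as nx

def sir_rumor(g, lam=0.3, alpha=0.1, mu=0.01, steps=200, seed=0):
    random.seed(seed)
    state = {n: "I" for n in g}
    state[next(iter(g))] = "S"                  # one initial spreader
    for _ in range(steps):
        for n in list(g):
            if state[n] == "I":
                hears = any(state[v] == "S" and random.random() < lam
                            for v in g[n])
                if hears or random.random() < mu:   # mu: independent source
                    state[n] = "S"
            elif state[n] == "S" and random.random() < alpha:
                state[n] = "R"
    return sum(s != "I" for s in state.values()) / g.number_of_nodes()

g = nx.erdos_renyi_graph(1000, 0.01, seed=1)
print("final rumor size:", sir_rumor(g))
```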

  18. The use of workflows in the design and implementation of complex experiments in macromolecular crystallography

    PubMed Central

    Brockhauser, Sandor; Svensson, Olof; Bowler, Matthew W.; Nanao, Max; Gordon, Elspeth; Leal, Ricardo M. F.; Popov, Alexander; Gerring, Matthew; McCarthy, Andrew A.; Gotz, Andy

    2012-01-01

    The automation of beam delivery, sample handling and data analysis, together with increasing photon flux, diminishing focal spot size and the appearance of fast-readout detectors on synchrotron beamlines, have changed the way that many macromolecular crystallography experiments are planned and executed. Screening for the best diffracting crystal, or even the best diffracting part of a selected crystal, has been enabled by the development of microfocus beams, precise goniometers and fast-readout detectors that all require rapid feedback from the initial processing of images in order to be effective. All of these advances require the coupling of data feedback to the experimental control system and depend on immediate online data-analysis results during the experiment. To facilitate this, a Data Analysis WorkBench (DAWB) for the flexible creation of complex automated protocols has been developed. Here, example workflows designed and implemented using DAWB are presented for enhanced multi-step crystal characterizations, experiments involving crystal reorientation with kappa goniometers, crystal-burning experiments for empirically determining the radiation sensitivity of a crystal system and the application of mesh scans to find the best location of a crystal to obtain the highest diffraction quality. Beamline users interact with the prepared workflows through a specific brick within the beamline-control GUI MXCuBE. PMID:22868763

  19. Real-time modeling of complex atmospheric releases in urban areas

    SciTech Connect

    Baskett, R.L.; Ellis, J.S.; Sullivan, T.J.

    1994-08-01

    If a nuclear installation in or near an urban area has a venting, fire, or explosion, airborne radioactivity becomes the major concern. Dispersion models are the immediate tool for estimating the dose and contamination. Responses in urban areas depend on knowledge of the amount of the release, representative meteorological data, and the ability of the dispersion model to simulate the complex flows as modified by terrain or local wind conditions. A centralized dispersion modeling system can produce realistic assessments of radiological accidents anywhere in a country within several minutes if it is computer-automated. The system requires source-term, terrain, mapping and dose-factor databases, real-time meteorological data acquisition, three-dimensional atmospheric transport and dispersion models, and experienced staff. Experience with past responses in urban areas by the Atmospheric Release Advisory Capability (ARAC) program at Lawrence Livermore National Laboratory illustrates the challenges for three-dimensional dispersion models.
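
    The simplest member of the dispersion-model family invoked here is the steady-state Gaussian plume. ARAC's operational models are three-dimensional and terrain-aware, so the flat-terrain sketch below, with made-up release parameters, is only a baseline illustration.

```python
# Steady-state Gaussian plume for a continuous point release, with
# ground reflection. A textbook baseline, not the ARAC model.
import numpy as np

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Concentration (Bq/m^3 for q in Bq/s) at crosswind offset y, height z.

    q: source strength, u: wind speed (m/s), h: release height (m),
    sigma_y/sigma_z: dispersion widths (m) at the downwind distance of interest.
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical 1 GBq/s release at 50 m, sampled on the plume axis where
# sigma_y = 80 m and sigma_z = 40 m (roughly 1 km downwind, neutral air):
print(plume_concentration(1e9, 5.0, 0.0, 1.5, 50.0, 80.0, 40.0))
```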

  20. Complex Geometric Models of Diffusion and Relaxation in Healthy and Damaged White Matter

    PubMed Central

    Farrell, Jonathan A.D.; Smith, Seth A.; Reich, Daniel S.; Calabresi, Peter A.; van Zijl, Peter C.M.

    2010-01-01

    Which aspects of tissue microstructure affect diffusion weighted MRI signals? Prior models, many of which use Monte-Carlo simulations, have focused on relatively simple models of the cellular microenvironment and have not considered important anatomic details. With the advent of higher-order analysis models for diffusion imaging, such as high-angular-resolution diffusion imaging (HARDI), more realistic models are necessary. This paper presents and evaluates the reproducibility of simulations of diffusion in complex geometries. Our framework is quantitative, does not require specialized hardware, is easily implemented with little programming experience, and is freely available as open-source software. Models may include compartments with different diffusivities, permeabilities, and T2 time constants using both parametric (e.g., spheres and cylinders) and arbitrary (e.g., mesh-based) geometries. Three-dimensional diffusion displacement-probability functions are mapped with high reproducibility, and thus can be readily used to assess reproducibility of diffusion-derived contrasts. PMID:19739233
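
    The Monte-Carlo principle underlying such simulations can be reduced to a few lines: random walkers taking Gaussian steps inside a reflecting boundary. The published framework handles meshes, permeabilities and T2 constants; the sketch below shows only restricted-diffusion sampling in a single spherical compartment, with made-up parameters.

```python
# Random-walk Monte Carlo of restricted diffusion in a reflecting sphere.
# A minimal sketch of the principle, not the published framework.
import numpy as np

rng = np.random.default_rng(2)
D, dt, n_steps = 2.0e-9, 1.0e-4, 500          # m^2/s, s, number of steps
n_spins, radius = 10000, 5.0e-6               # walkers, sphere radius (m)

pos = np.zeros((n_spins, 3))
step_sigma = np.sqrt(2.0 * D * dt)            # per-axis Gaussian step size
for _ in range(n_steps):
    trial = pos + rng.normal(0.0, step_sigma, pos.shape)
    outside = np.linalg.norm(trial, axis=1) > radius
    trial[outside] = pos[outside]             # crude boundary: reject the step
    pos = trial

# Restriction makes the apparent mean-square displacement saturate
# below the free-diffusion value 6*D*t.
print("apparent MSD:", np.mean(np.sum(pos**2, axis=1)))
print("free MSD    :", 6.0 * D * dt * n_steps)
```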

  1. a Range Based Method for Complex Facade Modeling

    NASA Astrophysics Data System (ADS)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    3D modelling of Architectural Heritage does not follow a single well-defined path; it goes through different algorithms and digital forms according to the shape complexity of the object, the main goal of the representation, and the starting data. Even if the process starts from the same data, such as a point cloud acquired by laser scanner, there are different possibilities for realizing a digital model. In particular we can choose between two different approaches: the mesh and the solid model. In the first case the complexity of the architecture is represented by a dense net of triangular surfaces which approximates the real surface of the object. In the other, opposite case the 3D digital model can be realized by the use of simple geometrical shapes, sweeping algorithms and Boolean operations. Obviously these two models are not the same, and each one is characterized by peculiarities concerning the way of modelling (the choice of a particular triangulation algorithm or the quasi-automatic modelling by known shapes) and the final results (a more detailed and complex mesh versus an approximate and simpler solid model). Usually the expected final representation and the possibility of publishing lead to one way or the other. In this paper we suggest a semiautomatic process to build 3D digital models of the facades of complex architecture to be used, for example, in city models or in other large-scale representations. This way of modelling also guarantees small files that can be published on the web or transmitted. The modelling procedure starts from laser scanner data which can be processed in the well-known way. Usually more than one scan is necessary to describe a complex architecture and to avoid shadows on the facades. These have to be registered in a single reference system by the use of targets which are surveyed topographically, and then filtered in order to obtain a well-controlled and homogeneous point cloud of

  2. Phase diagrams of the Bose-Hubbard model and the Haldane-Bose-Hubbard model with complex hopping amplitudes

    NASA Astrophysics Data System (ADS)

    Kuno, Yoshihito; Nakafuji, Takashi; Ichinose, Ikuo

    2015-12-01

    In this paper, we study Bose-Hubbard models on square and honeycomb lattices with complex hopping amplitudes, which are feasible in recent experiments with cold atomic gases in optical lattices. To clarify the phase diagrams, we use extended quantum Monte Carlo simulations (eQMC). For the system on the square lattice, the complex hopping is realized by an artificial magnetic field. We found that vortex-solid states form for a certain set of magnetic fields, i.e., magnetic fields with flux quanta per plaquette f = p/q, where p and q are co-prime natural numbers. For the system on the honeycomb lattice, we add a next-nearest-neighbor complex hopping. The model is a bosonic analog of the Haldane-Hubbard model. By means of eQMC, we study the model with both weak and strong onsite repulsions. The numerical study shows that the model has a rich phase diagram. We also found that in the system defined on a honeycomb lattice with cylinder geometry, an interesting edge state appears.
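
    How complex hopping amplitudes encode a uniform flux f = p/q per plaquette can already be seen at the single-particle level. The sketch below builds a square-lattice hopping matrix with Landau-gauge Peierls phases; the paper's interacting QMC treatment lies far beyond this illustration.

```python
# Single-particle hopping matrix with complex (Peierls) phases giving a
# uniform flux f = p/q per plaquette in the Landau gauge. Illustrative
# construction only, not the eQMC model.
import numpy as np

def hopping_matrix(lx, ly, p=1, q=4, t=1.0):
    f = p / q                                    # flux quanta per plaquette
    idx = lambda x, y: x + lx * y
    h = np.zeros((lx * ly, lx * ly), dtype=complex)
    for x in range(lx):
        for y in range(ly):
            # x-hops carry a y-dependent Peierls phase (Landau gauge)
            h[idx(x, y), idx((x + 1) % lx, y)] = -t * np.exp(2j * np.pi * f * y)
            h[idx(x, y), idx(x, (y + 1) % ly)] = -t
    return h + h.conj().T                        # make it Hermitian

evals = np.linalg.eigvalsh(hopping_matrix(8, 8))  # ly commensurate with q
print("lowest single-particle energies:", np.round(evals[:4], 3))
```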

  3. Process modelling for space station experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1988-01-01

    The work performed during the first year (1 Oct. 1987 to 30 Sept. 1988) involved analyses of crystal growth from the melt and from solution. The particular melt growth technique under investigation is directional solidification by the Bridgman-Stockbarger method. Two types of solution growth systems are also being studied. One involves growth from solution in a closed container; the other concerns growth of protein crystals by the hanging drop method. Following discussions with Dr. R. J. Naumann of the Low Gravity Science Division at MSFC, it was decided to tackle the analysis of crystal growth from the melt earlier than originally proposed. Rapid progress was made in this area. Work is on schedule, and full calculations have been underway for some time. Progress was also made in the formulation of the two solution growth models.

  4. A subsurface model of the beaver meadow complex

    NASA Astrophysics Data System (ADS)

    Nash, C.; Grant, G.; Flinchum, B. A.; Lancaster, J.; Holbrook, W. S.; Davis, L. G.; Lewis, S.

    2015-12-01

    Wet meadows are a vital component of arid and semi-arid environments. These valley-spanning, seasonally inundated wetlands provide critical habitat and refugia for wildlife, and may potentially mediate catchment-scale hydrology in otherwise "water challenged" landscapes. In the last 150 years, these meadows have begun incising rapidly, causing the wetlands to drain and much of the ecological benefit to be lost. The mechanisms driving this incision are poorly understood, with proposed causes ranging from cattle grazing to climate change to the removal of beaver. There is considerable interest in identifying cost-effective strategies to restore the hydrologic and ecological conditions of these meadows at a meaningful scale, but effective process-based restoration first requires a thorough understanding of the constructional history of these ubiquitous features. There is emerging evidence to suggest that the North American beaver may have had a considerable role in shaping this landscape through the building of dams. This "beaver meadow complex hypothesis" posits that as beaver dams filled with fine-grained sediments, they became large wet meadows on which new dams, and new complexes, were formed, thereby aggrading valley bottoms. A pioneering study done in Yellowstone indicated that 32-50% of the alluvial sediment was deposited in ponded environments. The observed aggradation rates were highly heterogeneous, suggesting spatial variability in the depositional process - all consistent with the beaver meadow complex hypothesis (Polvi and Wohl, 2012). To expand on this initial work, we have probed deeper into these meadow complexes using a combination of geophysical techniques, coring methods and numerical modeling to create a 3-dimensional representation of the subsurface environments. This imaging has given us a unique view into the patterns and processes responsible for the landforms, and may shed further light on the role of beaver in shaping these landscapes.

  5. Complex Wall Boundary Conditions for Modeling Combustion in Catalytic Channels

    NASA Astrophysics Data System (ADS)

    Zhu, Huayang; Jackson, Gregory

    2000-11-01

    Monolith catalytic reactors for exothermic oxidation are being used in automobile exhaust clean-up and ultra-low emissions combustion systems. The reactors present a unique coupling between mass, heat, and momentum transport in a channel flow configuration. The use of porous catalytic coatings along the channel wall presents a complex boundary condition when modeled with the two-dimensional channel flow. This current work presents a 2-D transient model for predicting the performance of catalytic combustion systems for methane oxidation on Pd catalysts. The model solves the 2-D compressible transport equations for momentum, species, and energy, which are solved with a porous washcoat model for the wall boundary conditions. A time-splitting algorithm is used to separate the stiff chemical reactions from the convective/diffusive equations for the channel flow. A detailed surface chemistry mechanism is incorporated for the catalytic wall model and is used to predict transient ignition and steady-state conversion of CH4-air flows in the catalytic reactor.
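
    The time-splitting step named above can be shown on a toy problem: advance the stiff chemistry with an implicit ODE solver and the transport with an explicit step, in a Strang (half-reaction, diffusion, half-reaction) sequence. The sketch below, with made-up rate and transport parameters, is only the algorithmic skeleton, not the 2-D compressible solver with the washcoat boundary model.

```python
# Strang time-splitting skeleton on a toy 1-D reaction-diffusion problem;
# parameters and the reaction term are illustrative stand-ins.
import numpy as np
from scipy.integrate import solve_ivp

nx, dx, dt = 100, 1e-3, 1e-4          # grid size, spacing (m), time step (s)
d_coef, k = 1e-5, 50.0                # diffusivity (m^2/s), rate constant (1/s)
u = np.ones(nx); u[: nx // 2] = 0.0   # species mass fraction, step profile

def react(t, y):
    return -k * y * (1.0 - y)         # stand-in stiff consumption term

def diffuse(y):
    lap = np.zeros_like(y)
    lap[1:-1] = (y[2:] - 2.0 * y[1:-1] + y[:-2]) / dx**2
    return y + dt * d_coef * lap      # explicit diffusion step (stable here)

for _ in range(100):                  # Strang: R(dt/2), D(dt), R(dt/2)
    u = solve_ivp(react, (0.0, dt / 2), u, method="BDF").y[:, -1]
    u = diffuse(u)
    u = solve_ivp(react, (0.0, dt / 2), u, method="BDF").y[:, -1]
print("mean mass fraction:", u.mean())
```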

  6. Velocity response curves demonstrate the complexity of modeling entrainable clocks.

    PubMed

    Taylor, Stephanie R; Cheever, Allyson; Harmon, Sarah M

    2014-12-21

    Circadian clocks are biological oscillators that regulate daily behaviors in organisms across the kingdoms of life. Their rhythms are generated by complex systems, generally involving interlocked regulatory feedback loops. These rhythms are entrained by the daily light/dark cycle, ensuring that the internal clock time is coordinated with the environment. Mathematical models play an important role in understanding how the components work together to function as a clock which can be entrained by light. For a clock to entrain, it must be possible for it to be sped up or slowed down at appropriate times. To understand how biophysical processes affect the speed of the clock, one can compute velocity response curves (VRCs). Here, in a case study involving the fruit fly clock, we demonstrate that VRC analysis provides insight into a clock's response to light. We also show that biochemical mechanisms and parameters together determine a model's ability to respond realistically to light. The implication is that, if one is developing a model and its current form has an unrealistic response to light, then one must reexamine one's model structure, because searching for better parameter values is unlikely to lead to a realistic response to light. PMID:25193284

  7. Modeling pedestrian's conformity violation behavior: a complex network based approach.

    PubMed

    Zhou, Zhuping; Hu, Qizhou; Wang, Wei

    2014-01-01

    Pedestrian injuries and fatalities present a problem all over the world. Pedestrian conformity violation behaviors, which lead to many pedestrian crashes, are common phenomena at signalized intersections in China. The concepts and metrics of complex networks are applied to analyze the structural characteristics and evolution rules of the pedestrian network of conformity violation crossings. First, a network of pedestrians crossing the street is established, and the network's degree distributions are analyzed. Then, using the basic idea of the SI model, a spreading model of pedestrian illegal crossing behavior is proposed. Finally, through simulation analysis, trends in pedestrians' illegal crossing behavior are obtained for different network structures and different spreading rates. Some conclusions are drawn: as the waiting time increases, more pedestrians will join in the violation crossing once a pedestrian first crosses on red. And pedestrians' conformity violation behavior will increase as the spreading rate increases. PMID:25530755

  8. Dynamic workflow model for complex activity in intensive care unit.

    PubMed

    Bricon-Souf, N; Renard, J M; Beuscart, R

    1998-01-01

    Cooperation is very important in medical care, especially in the Intensive Care Unit (ICU), where difficulties increase due to the urgency of the work. Workflow systems are considered well suited to modeling productive work in business processes. We aim to introduce this approach in the health-care domain. We have proposed a conversation-based workflow in order to model the therapeutic plan in the ICU [1]. But in such a complex field, the flexibility of the workflow system is essential for the system to be usable. In this paper, we focus on the main points used to increase the dynamicity. We report on assigning roles, highlighting information, and controlling the system. We propose some solutions and describe our prototype in the ICU. PMID:10384452

  9. Complex fluid flow modeling with SPH on GPU

    NASA Astrophysics Data System (ADS)

    Bilotta, Giuseppe; Hérault, Alexis; Del Negro, Ciro; Russo, Giovanni; Vicari, Annamaria

    2010-05-01

    We describe an implementation of the Smoothed Particle Hydrodynamics (SPH) method for the simulation of complex fluid flows. The algorithm is entirely executed on Graphics Processing Units (GPUs) using the Compute Unified Device Architecture (CUDA) developed by NVIDIA, fully exploiting their computational power. An increase of one to two orders of magnitude in simulation speed over equivalent CPU code is achieved. Complete modeling of the flow of a complex fluid such as lava is challenging from the modeling, numerical, and computational points of view. The natural topographic irregularities, the dynamic free boundaries, and phenomena such as solidification, the presence of floating solid bodies or other obstacles, and their eventual fragmentation make the problem difficult to solve using traditional numerical methods (finite volumes, finite elements): the need to refine the discretization grid near high gradients, when possible, is computationally expensive and often provides inadequate control of the error; for real-world applications, moreover, the information needed for grid refinement may not be available (e.g. because the Digital Elevation Models are too coarse); boundary tracking is also problematic with Eulerian discretizations, more so with complex fluids, due to the presence of internal boundaries given by fluid inhomogeneity and solidification fronts. An alternative approach is offered by mesh-free particle methods, which solve most of the problems connected to the dynamics of complex fluids in a natural way. Particle methods discretize the fluid using nodes which are not forced onto a given topological structure: boundary treatment is therefore implicit and automatic; the freedom of movement of the particles also permits the treatment of deformations without incurring any significant penalty; finally, the accuracy is easily controlled by the insertion of new particles where needed. Our team has developed a new model based on the
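
    The kernel-weighted summation at the heart of SPH is compact to state. The serial sketch below evaluates particle densities with the standard two-dimensional cubic spline kernel; the model described above executes the equivalent loops in CUDA on the GPU, so this is an illustration of the method only.

```python
# Core SPH operation: density summation rho_i = sum_j m_j W(|r_i - r_j|, h),
# shown serially with the standard 2-D cubic spline kernel.
import numpy as np

def cubic_spline_w(r, h):
    """Cubic spline kernel in 2-D (normalization 10 / (7 * pi * h^2))."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def density(positions, masses, h):
    # Brute-force all-pairs summation; real codes use neighbor lists.
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_w(r, h)).sum(axis=1)

pts = np.random.default_rng(3).uniform(0, 1, (400, 2))
rho = density(pts, np.full(400, 1.0 / 400), h=0.08)
print("mean density:", rho.mean())   # roughly 1.0, lower near the edges
```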

  11. Experiences of parenting a child with medical complexity in need of acute hospital care.

    PubMed

    Hagvall, Monica; Ehnfors, Margareta; Anderzén-Carlsson, Agneta

    2016-03-01

    Parents of children with medical complexity have described being responsible for providing advanced care for the child. When the child is acutely ill, they must rely on the health-care services during short or long periods of hospitalization. The purpose of this study was to describe parental experiences of caring for their child with medical complexity during hospitalization for acute deterioration, specifically focussing on parental needs and their experiences of the attitudes of staff. Data were gathered through individual interviews and analyzed using qualitative content analysis. The care period can be interpreted as a balancing act between acting as a caregiver and being in need of care. The parents needed skilled staff who could relieve them of medical responsibility, but they wanted to be involved in the care and in the decisions taken. They needed support, including relief, in order to meet their own needs and to be able to take care of their children. It was important that the child was treated with respect in order for the parent to trust the staff. An approach where staff view parents and children as a single unit, as recipients of care, would probably make the situation easier for these parents and children. PMID:25352538

  12. The first naphthosemiquinone complex of K+ with vitamin K3 analog: Experiment and density functional theory

    NASA Astrophysics Data System (ADS)

    Kathawate, Laxmi; Gejji, Shridhar P.; Yeole, Sachin D.; Verma, Prakash L.; Puranik, Vedavati G.; Salunke-Gawali, Sunita

    2015-05-01

    The synthesis and characterization of a potassium complex of 2-hydroxy-3-methyl-1,4-naphthoquinone (phthiocol), the vitamin K3 analog, have been carried out using FT-IR, UV-Vis, 1H and 13C NMR, EPR, cyclic voltammetry and single-crystal X-ray diffraction experiments combined with density functional theory. It has been observed that the naphthosemiquinone binds to two K+ ions, extending the polymeric chain through the bridging oxygens O(2) and O(3). The crystal network possesses hydrogen-bonding interactions from coordinated water molecules, showing water channels along the c-axis. 13C NMR spectra revealed that the complexation of phthiocol with the potassium ion engenders deshielding of the C(2) signals, which appear at δ = ∼14.6 ppm, whereas those of C(3) exhibit up-field signals near δ ∼ 6.9 ppm. These inferences are supported by M06-2X-based density functional theory. Electrochemical experiments further suggest that reduction of the naphthosemiquinone results in only a cathodic peak, from catechol. A triplet state arising from interactions between neighboring phthiocol anions leads to a half-field signal at g = 4.1 in the polycrystalline X-band EPR spectra at 133 K.

  13. Chromate adsorption on selected soil minerals: Surface complexation modeling coupled with spectroscopic investigation.

    PubMed

    Veselská, Veronika; Fajgar, Radek; Číhalová, Sylva; Bolanz, Ralph M; Göttlicher, Jörg; Steininger, Ralph; Siddique, Jamal A; Komárek, Michael

    2016-11-15

    This study investigates the mechanisms of Cr(VI) adsorption on natural clay (illite and kaolinite) and synthetic (birnessite and ferrihydrite) minerals, including its speciation changes, by combining quantitative, thermodynamically based mechanistic surface complexation models (SCMs) with spectroscopic measurements. A series of adsorption experiments was performed at different pH values (3-10), ionic strengths (0.001-0.1 M KNO3), sorbate concentrations (10^-4, 10^-5, and 10^-6 M Cr(VI)), and sorbate/sorbent ratios (50-500). Fourier transform infrared spectroscopy, X-ray photoelectron spectroscopy, and X-ray absorption spectroscopy were used to determine the surface complexes and surface reactions. Adsorption of Cr(VI) is strongly ionic-strength dependent. For ferrihydrite at pH <7, a simple diffuse-layer model provides a reasonable prediction of adsorption. For birnessite, bidentate inner-sphere complexes of chromate and dichromate resulted in a better diffuse-layer model fit. For kaolinite, outer-sphere complexation prevails mainly at lower Cr(VI) loadings. Dissolution of solid phases needs to be considered for better SCM fits. The coupled SCM and spectroscopic approach is thus useful for investigating the individual minerals responsible for Cr(VI) retention in soils, and for improving handling and remediation processes. PMID:27450335

  14. FIELD EXPERIMENTS AND MODELING AT CDG AIRPORTS

    NASA Astrophysics Data System (ADS)

    Ramaroson, R.

    2009-12-01

    Richard Ramaroson (1,4), Klaus Schaefer (2), Stefan Emeis (2), Carsten Jahn (2), Gregor Schürmann (2), Maria Hoffmann (2), Mikhael Zatevakhin (3), Alexandre Ignatyev (3). (1) ONERA, Châtillon, France; (2) FZK, Garmisch, Germany; (3) FSUE SPbAEP, St Petersburg, Russia; (4) SEAS, Harvard University, Cambridge, USA. Two-month field campaigns were organized at CDG airport in autumn 2004 and summer 2005. Air quality and ground air-traffic emissions were monitored continuously at terminals and taxi-runways, along with meteorological parameters, onboard trucks and with a SODAR. This paper analyses the emission characteristics of commercial engines at airports and their effects on gas pollutants and airborne particles, coupled to meteorology. LES model results for PM dispersion coupled to microphysics in the PBL are compared to measurements. Winds and temperature at the surface and their vertical profiles have been recorded together with turbulence. SODAR observations show the time development of the mixing-layer depth and turbulent mixing in summer up to 800 m. Active low-level jets and their regional extent have been observed and analyzed. PM number and mass size distributions, morphology and chemical contents are investigated. Formation of new ultrafine volatile (UFV) particles is observed in the ambient plume downstream of running engines. Soot particles are observed at significant levels mostly at the high power thrusts of take-off (TO) and touch-down, whereas at the lower thrusts used during taxiing and at aprons the UFV PM emissions become higher. Ambient airborne PM1/2.5 is closely correlated with air-traffic volume and shows a maximum beside runways. The PM number distribution at airports is composed mainly of volatile UFV PM, abundant at aprons. Ambient PM mass in autumn is higher than in summer. The expected differences between TO and taxi emissions are confirmed for NO, NO2, speciated VOC and CO. NO/NO2 emissions are larger at runways due to higher power. Reactive VOC and CO are more produced at low powers during idling at

  15. Fish locomotion: insights from both simple and complex mechanical models

    NASA Astrophysics Data System (ADS)

    Lauder, George

    2015-11-01

    Fishes are well-known for their ability to swim and maneuver effectively in the water, and recent years have seen great progress in understanding the hydrodynamics of aquatic locomotion. But studying freely-swimming fishes is challenging due to difficulties in controlling fish behavior. Mechanical models of aquatic locomotion have many advantages over studying live animals, including the ability to manipulate and control individual structural or kinematic factors, easier measurement of forces and torques, and the ability to abstract complex animal designs into simpler components. Such simplifications, while not without their drawbacks, facilitate interpretation of how individual traits alter swimming performance and the discovery of underlying physical principles. In this presentation I will discuss the use of a variety of mechanical models for fish locomotion, ranging from simple flexing panels to complex biomimetic designs incorporating flexible, actively moved, fin rays on multiple fins. Mechanical devices have provided great insight into the dynamics of aquatic propulsion and, integrated with studies of locomotion in freely-swimming fishes, provide new insights into how fishes move through the water.

  16. Phase-separation models for swimming enhancement in complex fluids

    NASA Astrophysics Data System (ADS)

    Man, Yi; Lauga, Eric

    2015-08-01

    Swimming cells often have to self-propel through fluids displaying non-Newtonian rheology. While past theoretical work seems to indicate that stresses arising from complex fluids should systematically hinder low-Reynolds number locomotion, experimental observations suggest that locomotion enhancement is possible. In this paper we propose a physical mechanism for locomotion enhancement of microscopic swimmers in a complex fluid. It is based on the fact that microstructured fluids will generically phase-separate near surfaces, leading to the presence of low-viscosity layers, which promote slip and decrease viscous friction near the surface of the swimmer. We use two models to address the consequence of this phase separation: a nonzero apparent slip length for the fluid and then an explicit modeling of the change of viscosity in a thin layer near the swimmer. Considering two canonical setups for low-Reynolds number locomotion, namely the waving locomotion of a two-dimensional sheet and that of a three-dimensional filament, we show that phase-separation systematically increases the locomotion speeds, possibly by orders of magnitude. We close by confronting our predictions with recent experimental results.

  17. Phase-separation models for swimming enhancement in complex fluids.

    PubMed

    Man, Yi; Lauga, Eric

    2015-08-01

    Swimming cells often have to self-propel through fluids displaying non-Newtonian rheology. While past theoretical work seems to indicate that stresses arising from complex fluids should systematically hinder low-Reynolds number locomotion, experimental observations suggest that locomotion enhancement is possible. In this paper we propose a physical mechanism for locomotion enhancement of microscopic swimmers in a complex fluid. It is based on the fact that microstructured fluids will generically phase-separate near surfaces, leading to the presence of low-viscosity layers, which promote slip and decrease viscous friction near the surface of the swimmer. We use two models to address the consequence of this phase separation: a nonzero apparent slip length for the fluid and then an explicit modeling of the change of viscosity in a thin layer near the swimmer. Considering two canonical setups for low-Reynolds number locomotion, namely the waving locomotion of a two-dimensional sheet and that of a three-dimensional filament, we show that phase-separation systematically increases the locomotion speeds, possibly by orders of magnitude. We close by confronting our predictions with recent experimental results. PMID:26382500

  18. Modeling the complex pathology of Alzheimer's disease in Drosophila.

    PubMed

    Fernandez-Funez, Pedro; de Mena, Lorena; Rincon-Limas, Diego E

    2015-12-01

    Alzheimer's disease (AD) is the leading cause of dementia and the most common neurodegenerative disorder. AD is mostly a sporadic disorder and its main risk factor is age, but mutations in three genes that promote the accumulation of the amyloid-β (Aβ42) peptide revealed the critical role of amyloid precursor protein (APP) processing in AD. Neurofibrillary tangles enriched in tau are the other pathological hallmark of AD, but the lack of causative tau mutations still puzzles researchers. Here, we describe the contribution of a powerful invertebrate model, the fruit fly Drosophila melanogaster, to uncover the function and pathogenesis of human APP, Aβ42, and tau. APP and tau participate in many complex cellular processes, although their main function is microtubule stabilization and the to-and-fro transport of axonal vesicles. Additionally, expression of secreted Aβ42 induces prominent neuronal death in Drosophila, a critical feature of AD, making this model a popular choice for identifying intrinsic and extrinsic factors mediating Aβ42 neurotoxicity. Overall, Drosophila has made significant contributions to better understand the complex pathology of AD, although additional insight can be expected from combining multiple transgenes, performing genome-wide loss-of-function screens, and testing anti-tau therapies alone or in combination with Aβ42. PMID:26024860

  19. Alpha Decay in the Complex-Energy Shell Model

    SciTech Connect

    Betan, R. Id

    2012-01-01

    Background: Alpha emission from a nucleus is a fundamental decay process in which the alpha particle formed inside the nucleus tunnels out through the potential barrier. Purpose: We describe alpha decay of 212Po and 104Te by means of the configuration interaction approach. Method: To compute the preformation factor and penetrability, we use the complex-energy shell model with a separable T = 1 interaction. The single-particle space is expanded in a Woods-Saxon basis that consists of bound and unbound resonant states. Special attention is paid to the treatment of the norm kernel appearing in the definition of the formation amplitude that guarantees the normalization of the channel function. Results: Without explicitly considering the alpha-cluster component in the wave function of the parent nucleus, we reproduce the experimental alpha-decay width of 212Po and predict an upper limit of T1/2 = 5.5 × 10^-7 s for the half-life of 104Te. Conclusions: The complex-energy shell model in a large valence configuration space is capable of providing a microscopic description of the alpha decay of heavy nuclei having two valence protons and two valence neutrons outside the doubly magic core. The inclusion of proton-neutron interaction between the valence nucleons is likely to shorten the predicted half-life of 104Te.

  20. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1994-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement techniques for transport properties necessary for high fidelity modelling of crystal growth from the melt and solution. Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and development of an expedient diffusivity measurement technique for concentrated and supersaturated solutions. We have focused on developing a code to solve for interface shape, heat and species transport during directional solidification. The work involved the computation of heat, mass and momentum transfer during Bridgman-Stockbarger solidification of compound semiconductors. Domain decomposition techniques and preconditioning methods were used in conjunction with Chebyshev spectral methods to accelerate convergence while retaining the high-order spectral accuracy. During the report period we have further improved our experimental setup. These improvements include: temperature control of the measurement cell to 0.1 °C between 10 and 60 °C; enclosure of the optical measurement path outside the ZYGO interferometer in a metal housing that is temperature controlled to the same temperature setting as the measurement cell; simultaneous dispensing and partial removal of the lower concentration (lighter) solution above the higher concentration (heavier) solution through independently motor-driven syringes; three-fold increase in data resolution by orientation of the interferometer with respect to diffusion direction; and increase of the optical path length in the solution cell to 12 mm.

  1. Modeling platinum group metal complexes in aqueous solution.

    PubMed

    Lienke, A; Klatt, G; Robinson, D J; Koch, K R; Naidoo, K J

    2001-05-01

    We construct force fields suited for the study of three platinum group metals (PGM) as chloro-anions in aqueous solution from quantum chemical computations and report experimental data. Density functional theory (DFT) using the local density approximation (LDA), as well as extended basis sets that incorporate relativistic corrections for the transition metal atoms, has been used to obtain equilibrium geometries, harmonic vibrational frequencies, and atomic charges for the complexes. We found that DFT calculations of [PtCl6]2-·3H2O, [PdCl4]2-·2H2O, and [RhCl6]3-·3H2O water clusters compared well with molecular mechanics (MM) calculations using the specific force field developed here. The force field performed equally well in condensed-phase simulations. A 500 ps molecular dynamics (MD) simulation of [PtCl6]2- in water was used to study the structure of the solvation shell around the anion. The resulting data were compared to an experimental radial distribution function derived from X-ray diffraction experiments. We found the calculated pair correlation functions (PCF) for hexachloroplatinate to be in good agreement with experiment and were able to use the simulation results to identify and resolve two water-anion peaks in the experimental spectrum. PMID:11327912
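
    The anion-water pair correlation analysis mentioned above reduces to a short array computation. Below is a hedged sketch of a g(r) estimate between a single solute site and the water oxygens of an MD trajectory; the array shapes, cubic box, and minimum-image treatment are generic assumptions rather than the study's actual analysis code.

```python
# Sketch: radial distribution function g(r) between a solute site and
# water oxygens from MD snapshots. Shapes and box size are illustrative.
import numpy as np

def pair_correlation(solute_xyz, water_xyz, box, r_max=8.0, n_bins=80):
    """solute_xyz: (n_frames, 3); water_xyz: (n_frames, n_w, 3);
    box: cubic box edge (same length unit as coordinates)."""
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    n_frames, n_w = water_xyz.shape[:2]
    for f in range(n_frames):
        d = water_xyz[f] - solute_xyz[f]     # displacement vectors
        d -= box * np.round(d / box)         # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]
    rho = n_w / box**3                       # mean water number density
    shell = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, counts / (n_frames * rho * shell)
```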

  2. Efficient design of experiments for complex response surfaces with application to etching uniformity in a plasma reactor

    NASA Astrophysics Data System (ADS)

    Tatavalli Mittadar, Nirmal

    Plasma etching uniformity across silicon wafers is of paramount importance in the semiconductor industry. The complexity of plasma etching, coupled with the lack of instrumentation to provide real-time process information (which could be used for feedback control), necessitates that optimal conditions for uniform etching be designed into the reactor and process recipe. This is often done empirically using standard design of experiments, which, however, is very costly and time consuming. The objective of this study was to develop a general-purpose, efficient design strategy that requires a minimum number of experiments and can handle complex constraints in the presence of uncertainties. Traditionally, Response Surface Methodology (RSM) is used in these applications to design experiments to determine the optimal value of decision variables or inputs. We demonstrated that standard RSM, when applied to the problem of plasma etching uniformity, has the following drawbacks: (1) inefficient search due to process nonlinearities, (2) lack of convergence to the optimum, and (3) inability to handle complex inequality constraints. We developed a four-phase Efficient Design Strategy (EDS) based on the DACE paradigm (Design and Analysis of Computer Experiments) and Bayesian search algorithms. The four phases of EDS are: (1) exploration of the design space by maximizing information, (2) exploration of the design space for feasible points by maximizing the probability of constraint satisfaction, (3) optimization of the objective, and (4) constrained local search. We also designed novel algorithms to switch between the different phases. The choice of model parameters for DACE predictors is usually determined by the Maximum Likelihood Estimation (MLE) method. Depending on the dataset, MLE could result in unrealistic predictors that show a peak-and-dip behavior. To solve this problem we developed techniques to detect the presence of peak-and-dip behavior and a new scheme based on Maximum a
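
    A space-filling design of the kind used to seed phase (1) can be generated cheaply. The sketch below builds a crude maximin Latin hypercube in the unit cube; it is a generic stand-in for illustration and does not implement the EDS information-maximization criterion itself.

```python
# Crude maximin Latin hypercube: keep the candidate design whose smallest
# pairwise distance is largest. Sizes and candidate count are arbitrary.
import numpy as np

def latin_hypercube(n_pts, n_dim, rng):
    # one stratified sample per interval in each dimension
    cols = [(rng.permutation(n_pts) + rng.random(n_pts)) / n_pts
            for _ in range(n_dim)]
    return np.column_stack(cols)

def maximin_lhs(n_pts, n_dim, n_candidates=200, seed=0):
    rng = np.random.default_rng(seed)
    best, best_score = None, -np.inf
    for _ in range(n_candidates):
        x = latin_hypercube(n_pts, n_dim, rng)
        d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        score = d[np.triu_indices(n_pts, k=1)].min()
        if score > best_score:
            best, best_score = x, score
    return best

design = maximin_lhs(10, 3)   # 10 runs in a 3-factor unit cube
print(design)
```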

  3. Underwater Blast Experiments and Modeling for Shock Mitigation

    SciTech Connect

    Glascoe, L; McMichael, L; Vandersall, K; Margraf, J

    2010-03-07

    A simple but novel mitigation concept to enforce standoff distance and reduce shock loading on a vertical, partially submerged structure is evaluated using scaled aquarium experiments and numerical modeling. Scaled, water-tamped explosive experiments were performed using three-gallon aquariums. The effectiveness of different mitigation configurations, including air-filled media and an air gap, is assessed relative to an unmitigated detonation using the same charge weight and standoff distance. Experiments using an air-filled media mitigation concept were found to effectively dampen the explosive response of the aluminum plate and reduce the final displacement at plate center by approximately half. The finite element model used for the initial experimental design compares very well to the experimental digital image correlation (DIC) results, both spatially and temporally. Details of the experiment and finite element aquarium models are described, including the boundary conditions, Eulerian and Lagrangian techniques, detonation models, experimental design and test diagnostics.

  4. A Community Mentoring Model for STEM Undergraduate Research Experiences

    ERIC Educational Resources Information Center

    Kobulnicky, Henry A.; Dale, Daniel A.

    2016-01-01

    This article describes a community mentoring model for UREs that avoids some of the common pitfalls of the traditional paradigm while harnessing the power of learning communities to provide young scholars a stimulating collaborative STEM research experience.

  5. Applying modeling Results in designing a global tropospheric experiment

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A set of field experiments and advanced modeling studies that provide a strategy for a program of global tropospheric experiments was identified. An expanded effort to develop space applications for tropospheric air quality monitoring and studies was recommended. The tropospheric ozone, carbon, nitrogen, and sulfur cycles are addressed. Stratospheric-tropospheric exchange is discussed. Fast photochemical processes in the free troposphere are considered.

  6. Characteristics of a Model Industrial Technology Education Field Experience.

    ERIC Educational Resources Information Center

    Foster, Phillip R.; Kozak, Michael R.

    1986-01-01

    This report contains selected findings from a research project that investigated field experiences in industrial technology education. Funded by the Texas Education Agency, the project addressed the identification of characteristics of a model field experience in industrial technology education. This was accomplished using the Delphi technique.…

  7. Hierarchical Modeling of Sequential Behavioral Data: Examining Complex Association Patterns in Mediation Models

    ERIC Educational Resources Information Center

    Dagne, Getachew A.; Brown, C. Hendricks; Howe, George W.

    2007-01-01

    This article presents new methods for modeling the strength of association between multiple behaviors in a behavioral sequence, particularly those involving substantively important interaction patterns. Modeling and identifying such interaction patterns becomes more complex when behaviors are assigned to more than two categories, as is the case…

  8. Complex magnetic field exposure system for in vitro experiments at intermediate frequencies.

    PubMed

    Lodato, Rossella; Merla, Caterina; Pinto, Rosanna; Mancini, Sergio; Lopresto, Vanni; Lovisolo, Giorgio A

    2013-04-01

    In occupational environments, an increasing number of electromagnetic sources emitting complex magnetic field waveforms in the intermediate frequency range is present, requiring accurate exposure risk assessment with both in vitro and in vivo experiments. In this article, an in vitro exposure system able to generate complex magnetic flux density (B-field) waveforms, reproducing signals from actual intermediate frequency sources such as magnetic resonance imaging (MRI) scanners, is developed and validated. The system consists of a magnetic field generation system and an exposure apparatus realized with a pair of square coils. A wide homogeneity (99.9%) volume of 210 × 210 × 110 mm^3 was obtained within the coils, allowing simultaneous exposure of a large number of standard Petri dishes. The system is able to process any numerical input sequence through a filtering technique aimed at compensating the coils' impedance effect. The B-field measured in proximity to a 1.5 T MRI bore during a typical examination was excellently reproduced (cross-correlation index of 0.99), confirming the ability of the proposed setup to accurately simulate complex waveforms in the intermediate frequency band. Suitable field levels were also attained. Moreover, a dosimetry index based on the weighted-peak method was evaluated considering the induced E-field in a Petri dish exposed to the reproduced complex B-field. The weighted-peak index was equal to 0.028 for the induced E-field, indicating an exposure level compliant with the basic restrictions of the International Commission on Non-Ionizing Radiation Protection. Bioelectromagnetics 34:211-219, 2013. © 2012 Wiley Periodicals, Inc. PMID:23060274

  9. Complex Geometry Creation and Turbulent Conjugate Heat Transfer Modeling

    SciTech Connect

    Bodey, Isaac T; Arimilli, Rao V; Freels, James D

    2011-01-01

    The multiphysics capabilities of COMSOL provide the necessary tools to simulate the turbulent thermal-fluid aspects of the High Flux Isotope Reactor (HFIR). Version 4.1, and later, of COMSOL provides three different turbulence models: the standard k-ε closure model, the low Reynolds number (LRN) k-ε model, and the Spalart-Allmaras model. The LRN model meets the needs of the nominal HFIR thermal-hydraulic requirements for 2D and 3D simulations. COMSOL also has the capability to create complex geometries. The circular involute fuel plates used in the HFIR require the use of algebraic equations to generate an accurate geometrical representation in the simulation environment. The best-estimate simulation results show that the maximum fuel plate clad surface temperatures are lower than those predicted by the legacy thermal safety code used at HFIR by approximately 17 K. The best-estimate temperature distribution determined by COMSOL was then used to determine the necessary increase in the magnitude of the power density profile (PDP) to produce a clad surface temperature similar to that of the legacy thermal safety code. It was determined and verified that a 19% power increase was sufficient to bring the two temperature profiles into relatively good agreement.

  10. Wind Power Curve Modeling in Simple and Complex Terrain

    SciTech Connect

    Bulaevskaya, V.; Wharton, S.; Irons, Z.; Qualley, G.

    2015-02-09

    Our previous work on wind power curve modeling using statistical models focused on a location with a moderately complex terrain in the Altamont Pass region in northern California (CA). The work described here is the follow-up to that work, but at a location with a simple terrain in northern Oklahoma (OK). The goal of the present analysis was to determine the gain in predictive ability afforded by adding information beyond the hub-height wind speed, such as wind speeds at other heights, as well as other atmospheric variables, to the power prediction model at this new location and compare the results to those obtained at the CA site in the previous study. While we reach some of the same conclusions at both sites, many results reported for the CA site do not hold at the OK site. In particular, using the entire vertical profile of wind speeds improves the accuracy of wind power prediction relative to using the hub-height wind speed alone at both sites. However, in contrast to the CA site, the rotor equivalent wind speed (REWS) performs almost as well as the entire profile at the OK site. Another difference is that at the CA site, adding wind veer as a predictor significantly improved the power prediction accuracy. The same was true for that site when air density was added to the model separately instead of using the standard air density adjustment. At the OK site, these additional variables result in no significant benefit for the prediction accuracy.
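
    The rotor equivalent wind speed referred to above aggregates the measured profile by cube-weighting each level with the rotor area it represents. The sketch below is one common way to compute it; the segment geometry, interpolation, and sample profile are illustrative assumptions rather than the study's exact procedure.

```python
# Rotor-equivalent wind speed: ( sum_i (A_i/A) * v_i^3 )^(1/3), with A_i
# the area of horizontal rotor-disc slices. Geometry and data illustrative.
import numpy as np

def rews(heights, speeds, hub_height, rotor_radius, n_seg=50):
    z = np.linspace(hub_height - rotor_radius,
                    hub_height + rotor_radius, n_seg + 1)
    z_mid = 0.5 * (z[:-1] + z[1:])
    # half-chord of the rotor disc at each slice height
    half_w = np.sqrt(np.maximum(rotor_radius**2 - (z_mid - hub_height)**2, 0.0))
    area = 2.0 * half_w * np.diff(z)           # slice areas
    v = np.interp(z_mid, heights, speeds)      # interpolate measured profile
    return (np.sum(area * v**3) / np.sum(area)) ** (1.0 / 3.0)

profile_z = [40, 60, 80, 100, 120]             # m (illustrative)
profile_v = [6.1, 6.8, 7.3, 7.7, 8.0]          # m/s (illustrative)
print(rews(profile_z, profile_v, hub_height=80.0, rotor_radius=40.0))
```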

  11. Lupus Nephritis: Animal Modeling of a Complex Disease Syndrome Pathology

    PubMed Central

    McGaha, Tracy L; Madaio, Michael P.

    2014-01-01

    Nephritis as a result of autoimmunity is a common morbidity associated with Systemic Lupus Erythematosus (SLE). There is substantial clinical and industry interest in medicinal intervention in the SLE nephritic process; however, clinical trials to specifically treat lupus nephritis have not resulted in complete and sustained remission in all patients. Multiple mouse models have been used to investigate the pathologic interactions between autoimmune reactivity and SLE pathology. While several models bear a remarkable similarity to SLE-driven nephritis, each has limitations that can make the task of choosing the appropriate model for a particular aspect of SLE pathology challenging. This is not surprising given the variable and diverse nature of the human disease. In many respects, features among murine strains mimic some (but never all) of the autoimmune and pathologic features of lupus patients. Although this diversity often limits universal conclusions relevant to pathogenesis, it provides insights into the complex processes that result in phenotypic manifestations of nephritis. Thus nephritis represents a microcosm of systemic disease, with variable lesions and clinical features. In this review, we discuss some of the most commonly used models of lupus nephritis (LN) and immune-mediated glomerular damage, examining their relative strengths and weaknesses, which may provide insight into the human condition. PMID:25722732

  12. Stepwise building of plankton functional type (PFT) models: A feasible route to complex models?

    NASA Astrophysics Data System (ADS)

    Frede Thingstad, T.; Strand, Espen; Larsen, Aud

    2010-01-01

    We discuss the strategy of building models of the lower part of the planktonic food web in a stepwise manner: starting with few plankton functional types (PFTs) and adding resolution and complexity while carrying along the insight and results gained from simpler models. A central requirement for PFT models is that they allow sustained coexistence of the PFTs. Here we discuss how this identifies a need to consider predation, parasitism and defence mechanisms together with nutrient acquisition and competition. Although the stepwise addition of complexity is assumed to be useful and feasible, a rapid increase in complexity strongly calls for alternative approaches able to model emergent system-level features without a need for detailed representation of all the underlying biological detail.
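
    The coexistence requirement discussed above is easy to demonstrate in a toy model. The sketch below lets two PFTs compete for a single nutrient while a predator selectively grazes the superior competitor, which permits both to persist; all equations and parameter values are invented for illustration and are not the authors' model.

```python
# Toy coexistence sketch: two PFTs, one nutrient, a predator that grazes
# the better competitor. Parameters are invented for illustration only.

def step(state, dt=0.01):
    N, P1, P2, Z = state
    mu1 = 1.2 * N / (0.5 + N)        # P1 is the superior nutrient competitor
    mu2 = 0.8 * N / (0.5 + N)
    graze = 1.5 * P1 / (1.0 + P1)    # selective grazing pressure on P1
    dN  = 0.1 * (2.0 - N) - mu1 * P1 - mu2 * P2   # chemostat-like supply
    dP1 = (mu1 - 0.05) * P1 - graze * Z
    dP2 = (mu2 - 0.05) * P2
    dZ  = (0.3 * graze - 0.1) * Z
    return [x + dt * dx for x, dx in zip(state, (dN, dP1, dP2, dZ))]

state = [2.0, 0.1, 0.1, 0.05]        # N, P1, P2, Z
for _ in range(200000):              # integrate to t = 2000
    state = step(state)
print(state)  # in this parameterization both PFTs and the predator persist
```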

  13. Three-dimensional Physical Modeling: Applications and Experience at Mayo Clinic.

    PubMed

    Matsumoto, Jane S; Morris, Jonathan M; Foley, Thomas A; Williamson, Eric E; Leng, Shuai; McGee, Kiaran P; Kuhlmann, Joel L; Nesberg, Linda E; Vrtiska, Terri J

    2015-01-01

    Radiologists will be at the center of the rapid technologic expansion of three-dimensional (3D) printing of medical models, as accurate models depend on well-planned, high-quality imaging studies. This article outlines the available technology and the processes necessary to create 3D models from the radiologist's perspective. We review the published medical literature regarding the use of 3D models in various surgical practices and share our experience in creating a hospital-based three-dimensional printing laboratory to aid in the planning of complex surgeries. PMID:26562234

  14. Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

    NASA Astrophysics Data System (ADS)

    Lee, L. A.; Carslaw, K. S.; Pringle, K. J.; Mann, G. W.; Spracklen, D. V.

    2011-12-01

    Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity through comparison of driving processes, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space, using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process emulation is shown to
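
    The emulation-plus-variance-decomposition workflow described above can be sketched compactly. The code below fits a Gaussian-process posterior mean to a handful of runs of a stand-in "expensive" model and then estimates first-order variance shares by conditional averaging on the cheap emulator; the kernel, fixed hyperparameters, and toy model are illustrative assumptions, not the paper's aerosol-model setup.

```python
# GP emulation of a toy model, then brute-force first-order variance
# decomposition on the emulator. Kernel and hyperparameters illustrative.
import numpy as np

rng = np.random.default_rng(0)

def model(x):                 # stand-in "expensive" model, 3 inputs in [0,1]
    return np.sin(2 * np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

def rbf(a, b, ell=0.3, s2=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return s2 * np.exp(-0.5 * d2 / ell ** 2)

X = rng.random((40, 3))       # small training design (stand-in for an LHS)
y = model(X)
K = rbf(X, X) + 1e-8 * np.eye(len(X))
alpha = np.linalg.solve(K, y - y.mean())

def emulate(Xs):              # GP posterior-mean prediction
    return y.mean() + rbf(Xs, X) @ alpha

base = rng.random((20000, 3))
total_var = emulate(base).var()
for i in range(3):
    cond_mean = []
    for g in np.linspace(0, 1, 40):   # E[f | x_i = g], crude 500-sample avg
        Xs = base[:500].copy()
        Xs[:, i] = g
        cond_mean.append(emulate(Xs).mean())
    print(f"S_{i} ~ {np.var(cond_mean) / total_var:.2f}")
```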

  15. Emulation of a complex global aerosol model to quantify sensitivity to uncertain parameters

    NASA Astrophysics Data System (ADS)

    Lee, L. A.; Carslaw, K. S.; Pringle, K.; Mann, G. W.; Spracklen, D. V.

    2011-07-01

    Sensitivity analysis of atmospheric models is necessary to identify the processes that lead to uncertainty in model predictions, to help understand model diversity, and to prioritise research. Assessing the effect of parameter uncertainty in complex models is challenging and often limited by CPU constraints. Here we present a cost-effective application of variance-based sensitivity analysis to quantify the sensitivity of a 3-D global aerosol model to uncertain parameters. A Gaussian process emulator is used to estimate the model output across multi-dimensional parameter space using information from a small number of model runs at points chosen using a Latin hypercube space-filling design. Gaussian process emulation is a Bayesian approach that uses information from the model runs along with some prior assumptions about the model behaviour to predict model output everywhere in the uncertainty space. We use the Gaussian process emulator to calculate the percentage of expected output variance explained by uncertainty in global aerosol model parameters and their interactions. To demonstrate the technique, we show examples of cloud condensation nuclei (CCN) sensitivity to 8 model parameters in polluted and remote marine environments as a function of altitude. In the polluted environment 95 % of the variance of CCN concentration is described by uncertainty in the 8 parameters (excluding their interaction effects) and is dominated by the uncertainty in the sulphur emissions, which explains 80 % of the variance. However, in the remote region parameter interaction effects become important, accounting for up to 40 % of the total variance. Some parameters are shown to have a negligible individual effect but a substantial interaction effect. Such sensitivities would not be detected in the commonly used single parameter perturbation experiments, which would therefore underpredict total uncertainty. Gaussian process emulation is shown to be an efficient and useful technique for

  16. Combined 31P and 1H NMR Experiments in the Structural Elucidation of Polynuclear Thiolate Complexes

    ERIC Educational Resources Information Center

    Cerrada, Elena; Laguna, Mariano

    2005-01-01

    A facile synthesis of two gold(I) complexes with a 1,2-benzenedithiolate ligand and two different bidentate phosphines is described. A detailed sequence of NMR experiments is suggested to determine the structure of the compounds.

  17. Engineering teacher training models and experiences

    NASA Astrophysics Data System (ADS)

    González-Tirados, R. M.

    2009-04-01

    Education Area, we renewed the programme, content and methodology, teaching the course under the name of "Initial Teacher Training Course within the framework of the European Higher Education Area". Continuous Training means learning throughout one's life as an Engineering teacher. They are actions designed to update and improve teaching staff, and are systematically offered on the current issues of: Teaching Strategies, training for research, training for personal development, classroom innovations, etc. They are activities aimed at conceptual change, changing the way of teaching and bringing teaching staff up-to-date. At the same time, the Institution is at the disposal of all teaching staff as a meeting point to discuss issues in common, attend conferences, department meetings, etc. In this Congress we present a justification of both training models and their design together with some results obtained on: training needs, participation, how it is developing and to what extent students are profiting from it.

  18. Polysaccharide-Protein Complexes in a Coarse-Grained Model.

    PubMed

    Poma, Adolfo B; Chwastyk, Mateusz; Cieplak, Marek

    2015-09-10

    We construct two variants of coarse-grained models of three hexaoses: one based on the centers of mass of the monomers and the other associated with the C4 atoms. The latter is found to be better defined and more suitable for studying interactions with proteins described within α-C based models. We determine the corresponding effective stiffness constants through all-atom simulations and two statistical methods. One method is the Boltzmann inversion (BI) and the other, named energy-based (EB), involves direct monitoring of energies as a function of the variables that define the stiffness potentials. The two methods are generally consistent in their account of the stiffness. We find that the elastic constants differ between the hexaoses and are noticeably different from those determined for the crystalline cellulose Iβ. The nonbonded couplings through hydrogen bonds between different sugar molecules are modeled by Lennard-Jones potentials and are found to be stronger than the hydrogen bonds in proteins. We observe that the EB method agrees with other theoretical and experimental determinations of the nonbonded parameters much better than BI. We then consider the hexaose-Man5B catalytic complexes and determine the contact energies between their C4 and α-C atoms. These interactions are found to be stronger than the proteinic hydrogen bonds: about four times as strong for cellohexaose and two times for mannohexaose. The fluctuational dynamics of the coarse-grained complexes are found to be compatible with previous all-atom studies by Bernardi et al. PMID:26291477
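
    The Boltzmann-inversion step named above maps a sampled internal-coordinate distribution onto an effective harmonic potential. The sketch below applies it to synthetic bond-angle samples standing in for all-atom trajectory data; the temperature, units, and Gaussian test data are assumptions for illustration.

```python
# Boltzmann inversion: U(theta) = -kB*T*ln P(theta), fit with a parabola
# to extract a harmonic stiffness. Synthetic samples stand in for real data.
import numpy as np

kB_T = 2.494   # kJ/mol at ~300 K (assumed temperature)

def bi_stiffness(theta_samples, n_bins=60):
    """Return (k, theta0) for U = 0.5*k*(theta - theta0)**2."""
    p, edges = np.histogram(theta_samples, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = p > 0                          # avoid log(0) in empty bins
    u = -kB_T * np.log(p[mask])
    c2, c1, _ = np.polyfit(centers[mask], u - u.min(), 2)
    return 2.0 * c2, -c1 / (2.0 * c2)

samples = np.random.default_rng(1).normal(2.0, 0.08, 50000)   # radians
k, theta0 = bi_stiffness(samples)
print(k, theta0)   # expect k ~ kB_T / sigma^2 ~ 390 kJ/(mol rad^2), theta0 ~ 2
```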

  19. Model slope infiltration experiments for shallow landslides early warning

    NASA Astrophysics Data System (ADS)

    Damiano, E.; Greco, R.; Guida, A.; Olivares, L.; Picarelli, L.

    2009-04-01

    simple empirical models [Versace et al., 2003] based on correlations between some features of rainfall records (cumulated height, duration, season, etc.) and the corresponding observed landslides. Laboratory experiments on instrumented small-scale slope models represent an effective way to provide data sets [Eckersley, 1990; Wang and Sassa, 2001] useful for building more complex models of landslide-triggering prediction. At the Geotechnical Laboratory of C.I.R.I.AM. an instrumented flume is available to investigate the mechanics of landslides in unsaturated deposits of granular soils [Olivares et al. 2003; Damiano, 2004; Olivares et al., 2007]. In the flume, a model slope is reconstituted by a moist-tamping technique and subjected to an artificial uniform rainfall until failure occurs. The state of stress and strain of the slope is monitored during the entire test, from the infiltration process through the early post-failure stage: the monitoring system consists of several mini-tensiometers placed at different locations and depths to measure suction, mini-transducers to measure positive pore pressures, laser sensors to measure settlements of the ground surface, and high-definition video cameras to obtain, through dedicated particle image velocimetry (PIV) software, the overall horizontal displacement field. Besides, TDR sensors, used with an innovative technique [Greco, 2006], allow reconstruction of the water content profile of the soil along the entire thickness of the investigated deposit and monitoring of its continuous changes during infiltration. In this paper, a series of laboratory tests carried out on model slopes in granular pyroclastic soils sampled in the mountainous area north-east of Napoli is presented. The experimental results demonstrate the completeness of the information provided by the various sensors installed. In particular, very useful information is given by the coupled measurements of soil water content by TDR and suction by tensiometers. Knowledge of

  20. Discriminating between rival biochemical network models: three approaches to optimal experiment design

    PubMed Central

    2010-01-01

    Background The success of molecular systems biology hinges on the ability to use computational models to design predictive experiments, and ultimately unravel underlying biological mechanisms. A problem commonly encountered in the computational modelling of biological networks is that alternative, structurally different models of similar complexity fit a set of experimental data equally well. In this case, more than one molecular mechanism can explain available data. In order to rule out the incorrect mechanisms, one needs to invalidate incorrect models. At this point, new experiments maximizing the difference between the measured values of alternative models should be proposed and conducted. Such experiments should be optimally designed to produce data that are most likely to invalidate incorrect model structures. Results In this paper we develop methodologies for the optimal design of experiments with the aim of discriminating between different mathematical models of the same biological system. The first approach determines the 'best' initial condition that maximizes the L2 (energy) distance between the outputs of the rival models. In the second approach, we maximize the L2-distance of the outputs by designing the optimal external stimulus (input) profile of unit L2-norm. Our third method uses optimized structural changes (corresponding, for example, to parameter value changes reflecting gene knock-outs) to achieve the same goal. The numerical implementation of each method is considered in an example, signal processing in starving Dictyostelium amœbæ. Conclusions Model-based design of experiments improves both the reliability and the efficiency of biochemical network model discrimination. This opens the way to model invalidation, which can be used to perfect our understanding of biochemical networks. Our general problem formulation together with the three proposed experiment design methods give the practitioner new tools for a systems biology approach to
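
    The first of the three approaches, choosing the initial condition that maximizes the L2 distance between rival model outputs, can be sketched directly. The two linear toy models, the Euler integrator, and the unit-norm search below are illustrative placeholders for the paper's biochemical network models and optimization machinery.

```python
# Pick the initial condition that best separates two rival models in the
# L2 sense. Toy linear models and grid search are placeholders.
import numpy as np

A1 = np.array([[-1.0, 0.5], [0.0, -0.8]])     # rival mechanism 1
A2 = np.array([[-1.0, 0.0], [0.4, -0.8]])     # rival mechanism 2

def trajectory(A, x0, dt=0.01, steps=500):
    xs, x = [], np.asarray(x0, float)
    for _ in range(steps):
        x = x + dt * (A @ x)                  # forward-Euler integration
        xs.append(x.copy())
    return np.array(xs)

def l2_distance(x0, dt=0.01):
    d = trajectory(A1, x0) - trajectory(A2, x0)
    return np.sqrt((d ** 2).sum() * dt)       # discrete L2 norm in time

# search over unit-norm initial conditions (an energy-constrained design)
angles = np.linspace(0, 2 * np.pi, 200)
scores = [l2_distance([np.cos(a), np.sin(a)]) for a in angles]
best = angles[int(np.argmax(scores))]
print("most discriminating x0:", np.cos(best), np.sin(best))
```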

  1. An evaluation of the PENCURV model for penetration events in complex targets.

    SciTech Connect

    Broyles, Todd P.

    2004-07-01

    Three complex target penetration scenarios are run with a model developed by the U. S. Army Engineer Waterways Experiment Station, called PENCURV. The results are compared with both test data and a Zapotec model to evaluate PENCURV's suitability for conducting broad-based scoping studies on a variety of targets to give first order solutions to the problem of G-loading. Under many circumstances, the simpler, empirically based PENCURV model compares well with test data and the much more sophisticated Zapotec model. The results suggest that, if PENCURV were enhanced to include rotational acceleration in its G-loading computations, it would provide much more accurate solutions for a wide variety of penetration problems. Data from an improved PENCURV program would allow for faster, lower cost optimization of targets, test parameters and penetration bodies as Sandia National Laboratories continues in its evaluation of the survivability requirements for earth penetrating sensors and weapons.

  2. Social epidemiology and complex system dynamic modelling as applied to health behaviour and drug use research

    PubMed Central

    Galea, Sandro; Hall, Chris; Kaplan, George A

    2009-01-01

    A social epidemiologic perspective considers factors at multiple levels of influence (e.g., social networks, neighborhoods, states) that may individually or jointly affect health and health behaviour. This provides a useful lens through which to understand the production of health behaviours in general, and drug use in particular. However, the analytic models that are commonly applied in population health sciences limit the inference we are able to draw about the determination of health behaviour by factors, likely interrelated, across levels of influence. Complex system dynamic modelling techniques may be useful in enabling the adoption of a social epidemiologic approach in health behaviour and drug use research. We provide an example of a model that aims to incorporate factors at multiple levels of influence in understanding drug dependence. We conclude with suggestions about future directions in the field and how such models may serve as virtual laboratories for policy experiments aimed at improving health behaviour. PMID:18930649

  3. Direct, inverse, and combined problems in complex engineered system modeling by artificial neural networks

    NASA Astrophysics Data System (ADS)

    Terekhoff, Serge A.

    1997-04-01

    This paper summarizes theoretical findings and applications of artificial neural networks to the modeling of complex engineered system response in abnormal environments. The thermal impact of fire on an industrial container for waste and fissile materials was investigated using model and experimental data. Solutions for the direct problem show that the generalization properties of the neural network based model are significantly better than those of standard interpolation methods. The minimal amount of data required for good prediction of the system response is estimated in computer experiments with an MLP network. It is shown that Kohonen's self-organizing map with counterpropagation can also estimate the local accuracy of the regularized solution for inverse and combined problems. Feature-space regions of partial correctness of the inverse model can be automatically extracted using adaptive clustering. Practical findings include time-strategy recommendations for fire-safety services when industrial or transport accidents occur.

  4. Modeling relations in nature and eco-informatics: a practical application of rosennean complexity.

    PubMed

    Kineman, John J

    2007-10-01

    The purpose of eco-informatics is to communicate critical information about organisms and ecosystems. To accomplish this, it must reflect the complexity of natural systems. Present information systems are designed around mechanistic concepts that do not capture complexity. Robert Rosen's relational theory offers a way of representing complexity in terms of information entailments that are part of an ontologically implicit 'modeling relation'. This relation has corresponding epistemological components that can be captured empirically, the components being structure (associated with model encoding) and function (associated with model decoding). Relational complexity, thus, provides a long-awaited theoretical underpinning for these concepts that ecology has found indispensable. Structural information pertains to the material organization of a system, which can be represented by data. Functional information specifies potential change, which can be inferred from experiment and represented as models or descriptions of state transformations. Contextual dependency (of structure or function) implies meaning. Biological functions imply internalized or system-dependent laws. Complexity can be represented epistemologically by relating structure and function in two different ways. One expresses the phenomenal relation that exists in any present or past instance, and the other draws the ontology of a system into the empirical world in terms of multiple potentials subject to natural forms of selection and optimality. These act as system attractors. Implementing these components and their theoretical relations in an informatics system will provide more-complete ecological informatics than is possible from a strictly mechanistic point of view. This approach will enable many new possibilities for supporting science and decision making. PMID:17955469

  5. Staff Experiences of Supported Employment with the Sustainable Hub of Innovative Employment for People with Complex Needs

    ERIC Educational Resources Information Center

    Gore, Nick J.; Forrester-Jones, Rachel; Young, Rhea

    2014-01-01

    Whilst the value of supported employment for people with learning disabilities is well substantiated, the experiences of supporting individuals into work are less well documented. The Sustainable Hub of Innovative Employment for people with Complex needs aims to support people with learning disabilities and complex needs to find paid employment.…

  6. Experimental and Numerical Modelling of Flow over Complex Terrain: The Bolund Hill

    NASA Astrophysics Data System (ADS)

    Conan, Boris; Chaudhari, Ashvinkumar; Aubrun, Sandrine; van Beeck, Jeroen; Hämäläinen, Jari; Hellsten, Antti

    2016-02-01

    In the wind-energy sector, wind-power forecasting, turbine siting, and turbine-design selection are all highly dependent on a precise evaluation of atmospheric wind conditions. On-site measurements provide reliable data; however, in complex terrain and at the scale of a wind farm, local measurements may be insufficient for a detailed site description. On highly variable terrain, numerical models are commonly used but still constitute a challenge regarding simulation and interpretation. We propose a joint state-of-the-art study of two approaches to modelling atmospheric flow over the Bolund hill: a wind-tunnel test and a large-eddy simulation (LES). The approach has the particularity of describing both methods in parallel in order to highlight their similarities and differences. The work provides a first detailed comparison between field measurements, wind-tunnel experiments and numerical simulations. The systematic and quantitative approach used for the comparison contributes to a better understanding of the strengths and weaknesses of each model and, therefore, to their enhancement. Despite fundamental modelling differences, both techniques result in only a 5 % difference in the mean wind speed and 15 % in the turbulent kinetic energy (TKE). The joint comparison makes it possible to identify the most difficult features to model: the near-ground flow and the wake of the hill. When compared to field data, both models reach 11 % error for the mean wind speed, which is close to the best performance reported in the literature. For the TKE, a great improvement is found using the LES model compared to previous studies (20 % error). Wind-tunnel results are in the low range of error when compared to experiments reported previously (40 % error). This comparison highlights the potential of such approaches and gives directions for the improvement of complex flow modelling.

  7. Modeling Cu2+-Aβ complexes from computational approaches

    NASA Astrophysics Data System (ADS)

    Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona

    2015-09-01

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox-active Cu2+ cation with Aβ has been found to interfere with amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important for a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with Cu2+-Aβ coordination and to build plausible Cu2+-Aβ models that subsequently allow physicochemical properties of interest, such as their redox potentials, to be determined.

  8. Complex dynamics in the Oregonator model with linear delayed feedback

    NASA Astrophysics Data System (ADS)

    Sriram, K.; Bernard, S.

    2008-06-01

    The Belousov-Zhabotinsky (BZ) reaction can display a rich dynamics when a delayed feedback is applied. We used the Oregonator model of the oscillating BZ reaction to explore the dynamics brought about by a linear delayed feedback. The time-delayed feedback can generate a succession of complex dynamics: period-doubling bifurcation route to chaos; amplitude death; fat, wrinkled, fractal, and broken tori; and mixed-mode oscillations. We observed that this dynamics arises due to a delay-driven transition, or toggling of the system between large and small amplitude oscillations, through a canard bifurcation. We used a combination of numerical bifurcation continuation techniques and other numerical methods to explore the dynamics in the strength of feedback-delay space. We observed that the period-doubling and quasiperiodic route to chaos span a low-dimensional subspace, perhaps due to the trapping of the trajectories in the small amplitude regime near the canard; and the trapped chaotic trajectories get ejected from the small amplitude regime due to a crowding effect to generate chaotic-excitable spikes. We also qualitatively explained the observed dynamics by projecting a three-dimensional phase portrait of the delayed dynamics on the two-dimensional nullclines. This is the first instance in which it is shown that the interaction of delay and canard can bring about complex dynamics.
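
    A minimal numerical experiment in the spirit of the study can be set up with a history buffer for the delayed term. The sketch below integrates a two-variable scaled Oregonator with a linear delayed feedback added to the fast variable; the parameter values, feedback form, and fixed-step Euler scheme are simplifying assumptions, not the authors' continuation setup.

```python
# Two-variable scaled Oregonator with linear delayed feedback
# k_fb*(x(t - tau) - x(t)); fixed-step Euler with a circular history
# buffer. All parameter values are illustrative.
import numpy as np

eps, q, f = 0.04, 8e-4, 2.0 / 3.0   # classic scaled-Oregonator constants
k_fb, tau, dt = 0.2, 1.0, 1e-4      # feedback strength, delay, time step

n_delay = int(tau / dt)
x_hist = np.full(n_delay, 0.1)      # constant pre-history for x(t<0)
x, z = 0.1, 0.1
xs = []
for n in range(400000):             # integrate to t = 40
    x_delayed = x_hist[n % n_delay]           # x(t - tau) from the buffer
    dx = (x * (1 - x) + f * z * (q - x) / (q + x)) / eps \
         + k_fb * (x_delayed - x)
    dz = x - z
    x_hist[n % n_delay] = x                   # overwrite slot with x(t)
    x, z = x + dt * dx, z + dt * dz
    if n % 100 == 0:
        xs.append(x)
# xs holds the time series of x; scanning k_fb and tau reproduces the kind
# of feedback-delay parameter sweep discussed in the abstract.
```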

  9. Effects of borehole design on complex electrical resistivity measurements: laboratory validation and numerical experiments

    NASA Astrophysics Data System (ADS)

    Treichel, A.; Huisman, J. A.; Zhao, Y.; Zimmermann, E.; Esser, O.; Kemna, A.; Vereecken, H.

    2012-12-01

    Geophysical measurements within a borehole are typically affected by the presence of the borehole itself. The focus of the current study is to quantify the effect of borehole design on broadband electrical impedance tomography (EIT) measurements within boreholes. Previous studies have shown that effects on the real part of the electrical resistivity are largest for boreholes with large diameters and for materials with a large formation factor. However, these studies have not considered the effect of the well casing and the filter gravel on the measurement of the real part of the electrical resistivity. In addition, the effect of borehole design on the imaginary part of the electrical resistivity has not yet been investigated. Therefore, the aim of this study is to investigate the effect of borehole design on the complex electrical resistivity using laboratory measurements and numerical simulations. To do so, we developed a high-resolution, two-dimensional, axisymmetric finite-element (FE) model that enables us to simulate the effects of several key borehole design parameters (e.g. borehole diameter, thickness of the PVC well casing) on the measurement process. For the material surrounding the borehole, realistic values of complex resistivity were obtained from a database of laboratory measurements from the test site Krauthausen (Germany). The slotted PVC well casing is represented by an effective resistivity calculated from the water-filled slot volume and the PVC volume. Measurements with and without PVC well casing were made with a four-electrode EIT logging tool in a water-filled rain barrel. The initial comparison for the case in which the logging tool was inserted in the PVC well casing showed a considerable mismatch between measured and modeled values; a complete electrode model, rather than point electrodes, was required to remove this mismatch. This validated model was used to investigate in detail how complex resistivity

  10. Disulfide Trapping for Modeling and Structure Determination of Receptor:Chemokine Complexes

    PubMed Central

    Kufareva, Irina; Gustavsson, Martin; Holden, Lauren G.; Qin, Ling; Zheng, Yi; Handel, Tracy M.

    2016-01-01

    Despite the recent breakthrough advances in GPCR crystallography, structure determination of protein-protein complexes involving chemokine receptors and their endogenous chemokine ligands remains challenging. Here we describe disulfide trapping, a methodology for generating irreversible covalent binary protein complexes from unbound protein partners by introducing two cysteine residues, one per interaction partner, at selected positions within their interaction interface. Disulfide trapping can serve at least two distinct purposes: (i) stabilization of the complex to assist structural studies, and/or (ii) determination of pairwise residue proximities to guide molecular modeling. Methods for characterization of disulfide-trapped complexes are described and evaluated in terms of throughput, sensitivity, and specificity towards the most energetically favorable cross-links. Due to the abundance of native disulfide bonds at receptor:chemokine interfaces, disulfide trapping of their complexes can be associated with intramolecular disulfide shuffling and result in misfolding of the component proteins; because of this, evidence from several experiments is typically needed to firmly establish a positive disulfide crosslink. An optimal pipeline that maximizes throughput and minimizes time and costs by early triage of unsuccessful candidate constructs is proposed. PMID:26921956

  11. Informing education policy in Afghanistan: Using design of experiments and data envelopment analysis to provide transparency in complex simulation

    NASA Astrophysics Data System (ADS)

    Marlin, Benjamin

    Education planning provides the policy maker and the decision maker with a logical framework in which to develop and implement education policy. At the international level, education planning is often confounded by both internal and external complexities, making the development of education policy difficult. This research presents a discrete event simulation in which individual students and teachers flow through the system across a variable time horizon. This simulation is then used with advancements in design of experiments, multivariate statistical analysis, and data envelopment analysis to provide a methodology designed to assist the international education planning community. We propose that this methodology will provide the education planner with insights into the complexity of the education system, the effects of both endogenous and exogenous factors upon the system, and the implications of policies as they pertain to potential futures of the system. We do this recognizing that there are multiple actors and stochastic events in play which, although they cannot be accurately forecasted, must be accounted for within the education model. To test the implementation and usefulness of such a model and to prove its relevance, we chose the Afghan education system as the focal point of this research. The Afghan education system is a complex, real-world system with competing actors, dynamic requirements, and ambiguous states. At the time of this writing, Afghanistan is at a pivotal point as a nation and has been the recipient of a tremendous amount of international support and attention. Finally, Afghanistan is a fragile state, and the proliferation of the current disparity in education across gender, districts, and ethnicity could provide the catalyst to drive the country into hostility. In order to prevent the failure of the current government, it is essential that the education system be able to meet the demands of the Afghan people. This work provides insights into

  12. Neurocomputational Model of EEG Complexity during Mind Wandering

    PubMed Central

    Ibáñez-Molina, Antonio J.; Iglesias-Parro, Sergio

    2016-01-01

    Mind wandering (MW) can be understood as a transient state in which attention drifts from an external task to internal self-generated thoughts. MW has been associated with the activation of the Default Mode Network (DMN). In addition, it has been shown that the activity of the DMN is anti-correlated with activation in brain networks related to the processing of external events (e.g., Salience network, SN). In this study, we present a mean field model based on weakly coupled Kuramoto oscillators. We simulated the oscillatory activity of the entire brain and explored the role of the interaction between the nodes from the DMN and SN in MW states. External stimulation was added to the network model in two opposite conditions. Stimuli could be presented when oscillators in the SN showed more internal coherence (synchrony) than in the DMN, or, on the contrary, when the coherence in the SN was lower than in the DMN. The resulting phases of the oscillators were analyzed and used to simulate EEG signals. Our results showed that the structural complexity from both simulated and real data was higher when the model was stimulated during periods in which the DMN was more coherent than the SN. Overall, our results provided a plausible mechanistic explanation of MW as a state in which high coherence in the DMN partially suppresses the capacity of the system to process external stimuli. PMID:26973505
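
    A minimal Python sketch of the basic machinery is given below: weakly coupled Kuramoto phase oscillators split into two communities (standing in for the DMN and SN), with each community's internal coherence measured by the order parameter r. Node counts, couplings, and frequencies are illustrative, not the study's fitted parameters.

    import numpy as np

    rng = np.random.default_rng(0)
    N, dt, steps = 40, 0.01, 5000
    omega = rng.normal(1.0, 0.2, N)               # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, N)          # initial phases
    K = 0.8                                       # weak global coupling
    dmn, sn = np.arange(0, 20), np.arange(20, 40) # two hypothetical communities

    def coherence(phases):
        return np.abs(np.exp(1j * phases).mean())  # Kuramoto order parameter r

    for t in range(steps):
        coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        theta += dt * (omega + K * coupling)       # Euler step
        if t % 1000 == 0:
            print(f"r_DMN={coherence(theta[dmn]):.2f}  r_SN={coherence(theta[sn]):.2f}")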

  13. Dynamic workflow model for complex activity in intensive care unit.

    PubMed

    Bricon-Souf, N; Renard, J M; Beuscart, R

    1999-01-01

    Co-operation is very important in medical care, especially in the Intensive Care Unit (ICU), where difficulties are compounded by the urgency of the work. Workflow systems are considered well suited to modelling productive work in business processes, and we aim to introduce this approach in the health care domain. We have previously proposed a conversation-based workflow to model the therapeutic plan in the ICU [1]. But in such a complex field, the flexibility of the workflow system is essential if the system is to be usable. We concentrate on three main points in the usual workflow models that suffer from a lack of dynamicity: static links between roles and actors, global notification of information changes, and lack of human control over the system. In this paper, we focus on the main means used to increase dynamicity: assigning roles, highlighting information, and controlling the system. We propose some solutions and describe our prototype in the ICU. PMID:10193884
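
    To make the dynamicity points concrete, here is a toy Python sketch (all names hypothetical, not taken from the cited prototype) of run-time role-to-actor binding and notifications scoped by subscription rather than broadcast globally.

    class Workflow:
        def __init__(self):
            self.role_to_actor = {}            # dynamic role -> actor binding
            self.subscriptions = {}            # info topic -> set of roles

        def assign(self, role, actor):         # re-bindable at any time
            self.role_to_actor[role] = actor

        def subscribe(self, role, topic):
            self.subscriptions.setdefault(topic, set()).add(role)

        def notify(self, topic, message):      # targeted, not global
            for role in self.subscriptions.get(topic, ()):
                actor = self.role_to_actor.get(role)
                if actor:
                    print(f"[{topic}] to {actor} ({role}): {message}")

    wf = Workflow()
    wf.assign("night_physician", "Dr. A")
    wf.subscribe("night_physician", "therapeutic_plan_change")
    wf.notify("therapeutic_plan_change", "dosage updated")
    wf.assign("night_physician", "Dr. B")      # role re-bound mid-shift
    wf.notify("therapeutic_plan_change", "new lab results")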

  14. Deposition parameterizations for the Industrial Source Complex (ISC3) model

    SciTech Connect

    Wesely, Marvin L.; Doskey, Paul V.; Shannon, J. D.

    2002-06-01

    Improved algorithms have been developed to simulate the dry and wet deposition of hazardous air pollutants (HAPs) with the Industrial Source Complex version 3 (ISC3) model system. The dry deposition velocities (downward flux divided by concentration at a specified height) of the gaseous HAPs are modeled with algorithms adapted from existing dry deposition modules. The dry deposition velocities are described in a conventional resistance scheme, for which micrometeorological formulas are applied to describe the aerodynamic resistances above the surface. Pathways to uptake at the ground and in vegetative canopies are depicted with several resistances that are affected by variations in air temperature, humidity, solar irradiance, and soil moisture. The role of soil moisture variations in affecting the uptake of gases through vegetative plant leaf stomata is assessed with the relative available soil moisture, which is estimated with a rudimentary budget of soil moisture content. Some of the procedures and equations are simplified to be commensurate with the type and extent of information on atmospheric and surface conditions available to the ISC3 model system user. For example, standardized land use types and seasonal categories provide sets of resistances to uptake by various components of the surface. To describe the dry deposition of the large number of gaseous organic HAPs, a new technique based on laboratory study results and theoretical considerations has been developed that provides a means of evaluating the role of lipid solubility in uptake by the waxy outer cuticle of vegetative plant leaves.
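
    The resistance analogy can be summarized in a few lines of Python. The sketch below composes the dry deposition velocity v_d = 1/(R_a + R_b + R_c), with the surface resistance itself a parallel network of uptake pathways; the resistance values are illustrative placeholders, not ISC3 parameterizations.

    def deposition_velocity(r_a, r_b, r_c):
        """v_d (m/s) from aerodynamic, quasi-laminar, and surface resistances (s/m)."""
        return 1.0 / (r_a + r_b + r_c)

    def surface_resistance(r_stom, r_cut, r_ground):
        """Surface (canopy) resistance as parallel uptake pathways."""
        return 1.0 / (1.0 / r_stom + 1.0 / r_cut + 1.0 / r_ground)

    r_c = surface_resistance(r_stom=120.0, r_cut=2000.0, r_ground=500.0)
    print(f"v_d = {100 * deposition_velocity(50.0, 10.0, r_c):.2f} cm/s")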

  15. Critical noise of majority-vote model on complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Hanshuang; Shen, Chuansheng; He, Gang; Zhang, Haifeng; Hou, Zhonghuai

    2015-02-01

    The majority-vote model with noise is one of the simplest nonequilibrium statistical models and has been extensively studied in the context of complex networks. However, an understanding of the relationship between the critical noise at which the order-disorder phase transition takes place and the topology of the underlying networks is still lacking. In this paper, we use heterogeneous mean-field theory to derive the rate equation governing the model's dynamics, from which the critical noise fc can be determined analytically in the limit of infinite network size N → ∞. The result shows that fc depends on the ratio of ⟨k⟩ to ⟨k^{3/2}⟩, where ⟨k⟩ and ⟨k^{3/2}⟩ are the average degree and the 3/2-order moment of the degree distribution, respectively. Furthermore, we consider the finite-size effect, where stochastic fluctuations must be taken into account. To this end, we derive the Langevin equation and obtain the potential of the corresponding Fokker-Planck equation. This allows us to calculate the effective critical noise fc(N) at which the susceptibility is maximal in finite-size networks. We find that fc − fc(N) decays with N in a power-law way and vanishes for N → ∞. All the theoretical results are confirmed by extensive Monte Carlo simulations on random k-regular networks, Erdős-Rényi random networks, and scale-free networks.
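
    A small Monte Carlo sketch in Python (assuming numpy and networkx; network size and noise values are illustrative) implements the update rule behind these results: each spin aligns with its local majority with probability 1 − f and against it with probability f.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(1)
    G = nx.erdos_renyi_graph(500, 0.02, seed=1)
    neighbors = [list(G.neighbors(i)) for i in G.nodes]
    spins = rng.choice([-1, 1], size=G.number_of_nodes())

    def sweep(spins, f):
        for i in rng.permutation(len(spins)):
            if not neighbors[i]:
                continue
            # local majority; ties broken at random
            majority = np.sign(spins[neighbors[i]].sum()) or rng.choice([-1, 1])
            spins[i] = majority if rng.random() > f else -majority
        return spins

    for f in (0.05, 0.10, 0.20):
        s = spins.copy()
        for _ in range(200):
            s = sweep(s, f)
        print(f"f={f:.2f}  |m|={abs(s.mean()):.3f}")   # order parameter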

  16. A model for navigational errors in complex environmental fields.

    PubMed

    Postlethwaite, Claire M; Walker, Michael M

    2014-12-21

    Many animals are believed to navigate using environmental signals such as light, sound, odours and magnetic fields. However, animals rarely navigate directly to their target location, but instead make a series of navigational errors which are corrected during transit. In previous work, we introduced a model showing that differences between an animal's 'cognitive map' of the environmental signals used for navigation and the true nature of these signals cause a systematic pattern in orientation errors when navigation begins. The model successfully predicted the pattern of errors seen in previously collected data from homing pigeons, but underestimated the amplitude of the errors. In this paper, we extend our previous model to include more complicated distortions of the contour lines of the environmental signals. Specifically, we consider the occurrence of critical points in the fields describing the signals. We consider three scenarios and compute orientation errors as parameters are varied in each case. We show that the occurrence of critical points can be associated with large variations in initial orientation errors over a small geographic area. We discuss the implications that these results have on predicting how animals will behave when encountering complex distortions in any environmental signals they use to navigate. PMID:25149368
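
    A toy Python sketch captures the model's premise: headings computed from the true environmental fields and from a simplified 'cognitive map' diverge, and the initial orientation error is the angle between them. The field definitions here are hypothetical, chosen only so that a saddle-like term introduces a critical point.

    import numpy as np

    def true_fields(x, y):
        # Two smooth scalar signals; the quadratic term creates a critical point.
        return np.array([x + 0.3 * x * y, y - 0.2 * (x**2 - y**2)])

    def cognitive_map(x, y):
        return np.array([x, y])            # the animal assumes plain gradients

    def heading_to_target(fields, pos, target):
        return fields(*target) - fields(*pos)   # steer down the signal mismatch

    pos, target = (2.0, 1.0), (0.0, 0.0)
    h_true = heading_to_target(true_fields, pos, target)
    h_map = heading_to_target(cognitive_map, pos, target)
    cosang = np.dot(h_true, h_map) / (np.linalg.norm(h_true) * np.linalg.norm(h_map))
    print(f"initial orientation error: {np.degrees(np.arccos(np.clip(cosang, -1, 1))):.1f} deg")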

  17. Integrated modeling tool for performance engineering of complex computer systems

    NASA Technical Reports Server (NTRS)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  18. Uncertainty evaluation in numerical modeling of complex devices

    NASA Astrophysics Data System (ADS)

    Cheng, X.; Monebhurrun, V.

    2014-10-01

    Numerical simulation is an efficient tool for exploring and understanding the physics of complex devices, e.g. mobile phones. For meaningful results, it is important to evaluate the uncertainty of the numerical simulation. Uncertainty quantification in specific absorption rate (SAR) calculation using a full computer-aided design (CAD) mobile phone model is a challenging task. Since a typical SAR numerical simulation is computationally expensive, the traditional Monte Carlo (MC) simulation method proves inadequate. The unscented transformation (UT) is an alternative and numerically efficient method investigated herein to evaluate the uncertainty in the SAR calculation using the realistic models of two commercially available mobile phones. The electromagnetic simulation process is modeled as a nonlinear mapping in which uncertainty in the inputs, e.g. the relative permittivity values of the mobile phone material properties, induces an uncertainty in the output, e.g. the peak spatial-average SAR value. The numerical simulation results demonstrate that UT is a promising candidate for uncertainty quantification in SAR calculations, since only a few simulations are necessary to obtain results similar to those obtained after hundreds or thousands of MC simulations.
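
    A minimal Python sketch of the unscented transformation shows how 2n+1 sigma points replace thousands of Monte Carlo draws; a cheap quadratic stands in for the expensive SAR simulation, and the permittivity statistics are made up.

    import numpy as np

    def unscented_transform(f, mean, cov, kappa=1.0):
        n = len(mean)
        L = np.linalg.cholesky((n + kappa) * cov)         # scaled sqrt of covariance
        sigma = np.vstack([mean, mean + L.T, mean - L.T]) # 2n+1 sigma points
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)                        # weights sum to 1
        y = np.array([f(p) for p in sigma])               # propagate each point
        m = w @ y
        v = w @ (y - m) ** 2
        return m, np.sqrt(v)                              # output mean and std

    f = lambda p: 0.5 * p[0] ** 2 + 0.1 * p[0] * p[1]     # stand-in for peak SAR
    mean, cov = np.array([4.0, 2.5]), np.diag([0.2 ** 2, 0.1 ** 2])
    print(unscented_transform(f, mean, cov))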

  19. Autoimmunity contributes to nociceptive sensitization in a mouse model of complex regional pain syndrome.

    PubMed

    Li, Wen-Wu; Guo, Tian-Zhi; Shi, Xiaoyou; Czirr, Eva; Stan, Trisha; Sahbaie, Peyman; Wyss-Coray, Tony; Kingery, Wade S; Clark, J David

    2014-11-01

    Complex regional pain syndrome (CRPS) is a painful, disabling, chronic condition whose etiology remains poorly understood. The recent suggestion that immunological mechanisms may underlie CRPS provides an entirely novel framework in which to study the condition and consider new approaches to treatment. Using a murine fracture/cast model of CRPS, we studied the effects of B-cell depletion using anti-CD20 antibodies or by performing experiments in genetically B-cell-deficient (μMT) mice. We observed that mice treated with anti-CD20 developed attenuated vascular and nociceptive CRPS-like changes after tibial fracture and 3 weeks of cast immobilization. In mice with established CRPS-like changes, the depletion of CD20+ cells slowly reversed nociceptive sensitization. Correspondingly, μMT mice, deficient in producing immunoglobulin M (IgM), failed to fully develop CRPS-like changes after fracture and casting. Depletion of CD20+ cells had no detectable effects on nociceptive sensitization in a model of postoperative incisional pain, however. Immunohistochemical experiments showed that CD20+ cells accumulate near the healing fracture but few such cells collect in skin or sciatic nerves. On the other hand, IgM-containing immune complexes were deposited in skin and sciatic nerve after fracture in wild-type, but not in μMT fracture/cast, mice. Additional experiments demonstrated that complement system activation and deposition of membrane attack complexes were partially blocked by anti-CD20 treatment. Collectively, our results suggest that CD20-positive B cells produce antibodies that ultimately support the CRPS-like changes in the murine fracture/cast model. Therapies directed at reducing B-cell activity may be of use in treating patients with CRPS. PMID:25218828


  1. Calibration of two complex ecosystem models with different likelihood functions

    NASA Astrophysics Data System (ADS)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of process-based models are decisive regarding the model output. At the same time, there are several input parameters for which accurate values are hard to obtain directly from experiments or for which no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research a further-developed version of Biome-BGC is used, referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via a likelihood function (the degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model
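
    As a schematic of the likelihood step in such a calibration (not the authors' two-step procedure), the Python sketch below scores parameters of a toy model with a Gaussian log-likelihood and explores them with a Metropolis random walk; the model, data, and tuning values are all illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    def model(params, t):
        a, b = params
        return a * (1.0 - np.exp(-b * t))        # toy carbon-uptake curve

    def log_likelihood(params, t, obs, sigma):
        resid = obs - model(params, t)           # goodness-of-fit to measurements
        return -0.5 * np.sum((resid / sigma) ** 2)

    t = np.linspace(0, 10, 25)
    obs = model((5.0, 0.4), t) + rng.normal(0, 0.3, t.size)   # synthetic "data"

    theta = np.array([1.0, 1.0])
    ll = log_likelihood(theta, t, obs, 0.3)
    for _ in range(5000):                        # Metropolis random walk
        prop = theta + rng.normal(0, 0.05, 2)
        ll_prop = log_likelihood(prop, t, obs, 0.3)
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
    print(theta)                                 # should wander near (5.0, 0.4)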

  2. Complex events in a fault model with interacting asperities

    NASA Astrophysics Data System (ADS)

    Dragoni, Michele; Tallarico, Andrea

    2016-08-01

    The dynamics of a fault with heterogeneous friction is studied by employing a discrete fault model with two asperities of different strengths. The average values of stress, friction and slip on each asperity are considered and the state of the fault is described by the slip deficits of the asperities as functions of time. The fault has three different slipping modes, corresponding to the asperities slipping one at a time or simultaneously. Any seismic event produced by the fault is a sequence of n slipping modes. According to initial conditions, seismic events can be different sequences of slipping modes, implying different moment rates and seismic moments. Each event can be represented geometrically in the state space by an orbit that is the union of n damped Lissajous curves. We focus our interest on events that are sequences of two or more slipping modes: they show a complex stress interchange between the asperities and a complex temporal pattern of slip rate. The initial stress distribution producing these events is not uniform on the fault. We calculate the stress drop, the moment rate and the frequency spectrum of the events, showing how these quantities depend on initial conditions. These events have the greatest seismic moments that can be produced by fault slip. As an example, we model the moment rate of the 1992 Landers, California, earthquake that can be described as the consecutive failure of two asperities, one of which has double the strength of the other, and evaluate the evolution of stress distribution on the fault during the event.
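
    The authors' exact discrete model is not reproduced here, but a generic two-asperity stick-slip sketch in Python (illustrative stiffness, strength, and loading values) shows the qualitative mechanism: elastic loading, failure when stress reaches an asperity's strength, and stress transfer that can cascade into multi-mode events.

    import numpy as np

    k, kc, v = 1.0, 0.5, 1.0e-3          # self stiffness, coupling, load rate
    strength = np.array([1.0, 2.0])      # asperity 2 has double the strength
    slip_deficit = np.array([0.2, 0.9])  # non-uniform initial state

    for t in range(4000):
        slip_deficit += v                                   # tectonic loading
        stress = k * slip_deficit + kc * (slip_deficit - slip_deficit[::-1])
        slipping = stress >= strength
        if slipping.any():
            modes = []
            while slipping.any():                           # event = mode sequence
                slip_deficit[slipping] = 0.0                # total stress drop
                modes.append(tuple(np.where(slipping)[0] + 1))
                stress = k * slip_deficit + kc * (slip_deficit - slip_deficit[::-1])
                slipping = stress >= strength               # cascade check
            print(f"t={t}: event, slipping modes {modes}")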

  3. Inverse Problems in Complex Models and Applications to Earth Sciences

    NASA Astrophysics Data System (ADS)

    Bosch, M. E.

    2015-12-01

    The inference of the subsurface earth structure and properties requires the integration of different types of data, information and knowledge, by combined processes of analysis and synthesis. To support the process of integrating information, the regular concept of data inversion is evolving to expand its application to models with multiple inner components (properties, scales, structural parameters) that explain multiple data (geophysical survey data, well-logs, core data). The probabilistic inference methods provide the natural framework for the formulation of these problems, considering a posterior probability density function (PDF) that combines the information from a prior information PDF and the new sets of observations. To formulate the posterior PDF in the context of multiple datasets, the data likelihood functions are factorized assuming independence of uncertainties for data originating across different surveys. A realistic description of the earth medium requires modeling several properties and structural parameters, which relate to each other according to dependency and independency notions. Thus, conditional probabilities across model components also factorize. A common setting proceeds by structuring the model parameter space in hierarchical layers. A primary layer (e.g. lithology) conditions a secondary layer (e.g. physical medium properties), which conditions a third layer (e.g. geophysical data). In general, less structured relations within model components and data emerge from the analysis of other inverse problems. They can be described with flexibility via directed acyclic graphs, which map dependency relations between the model components. Examples of inverse problems in complex models can be shown at various scales. At local scale, for example, the distribution of gas saturation is inferred from pre-stack seismic data and a calibrated rock-physics model. At regional scale, joint inversion of gravity and magnetic data is applied
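
    The factorizations described here can be made concrete with a small Python sketch: a log-posterior that chains hierarchical layers (lithology → property → data) and adds independent survey log-likelihoods. The Gaussian densities, class probabilities, and numbers are purely illustrative.

    import numpy as np

    def log_gauss(x, mu, sigma):
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

    def log_posterior(lith, porosity, obs_seismic, obs_gravity):
        lp = np.log(0.5)                                   # P(lithology), 2 classes
        mu_phi = 0.25 if lith == "sand" else 0.08
        lp += log_gauss(porosity, mu_phi, 0.05)            # P(porosity | lithology)
        lp += log_gauss(obs_seismic, 2.0 - porosity, 0.1)  # survey 1 likelihood
        lp += log_gauss(obs_gravity, 2.7 - porosity, 0.1)  # independent survey 2
        return lp                                          # log terms add

    for lith in ("sand", "shale"):
        print(lith, round(log_posterior(lith, 0.22, obs_seismic=1.8, obs_gravity=2.5), 2))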

  4. A Comparison of Internship Stage Models: Evidence from Intern Experiences

    ERIC Educational Resources Information Center

    Diambra, Joel F.; Cole-Zakrzewski, Kylie G.; Booher, Josh

    2004-01-01

    Human service interns completing their four-year Bachelor of Science degree at the University of Tennessee participated in this study, which focused on investigating student internship experiences from the perspective of three different internship stage models. The three models studied include those of Infester and Boss (1998), Sweitzer and King…

  5. Initial experience with transluminally placed endovascular grafts for the treatment of complex vascular lesions.

    PubMed Central

    Marin, M L; Veith, F J; Cynamon, J; Sanchez, L A; Lyon, R T; Levine, B A; Bakal, C W; Suggs, W D; Wengerter, K R; Rivers, S P

    1995-01-01

    OBJECTIVES: Complex arterial occlusive, traumatic, and aneurysmal lesions may be difficult or impossible to treat successfully by standard surgical techniques when severe medical or surgical comorbidities exist. The authors describe a single center's experience over a 2 1/2-year period with 96 endovascular graft procedures performed to treat 100 arterial lesions in 92 patients. PATIENTS AND METHODS: Thirty-three patients had 36 large aortic and/or peripheral artery aneurysms, 48 had 53 multilevel limb-threatening aortoiliac and/or femoropopliteal occlusive lesions, and 11 had traumatic arterial injuries (false aneurysms and arteriovenous fistulas). Endovascular grafts were placed through remote arteriotomies under local (16 [17%]), epidural (42 [43%]), or general (38 [40%]) anesthesia. RESULTS: Technical and clinical successes were achieved in 91% of the patients with aneurysms, 91% with occlusive lesions, and 100% with traumatic arterial lesions. These patients and grafts have been followed from 1 to 30 months (mean, 13 months). The primary and secondary patency rates at 18 months for aortoiliac occlusions were 77% and 95%, respectively. The 18-month limb salvage rate was 98%. Immediately after aortic aneurysm exclusion, a total of 6 (33%) perigraft channels were detected; 3 of these closed within 8 weeks. Endovascular stented graft procedures were associated with a 10% major and a 14% minor complication rate. The overall 30-day mortality rate for this entire series was 6%. CONCLUSIONS: This initial experience with endovascular graft repair of complex arterial lesions justifies further use and careful evaluation of this technique for major arterial reconstruction. PMID:7574926

  6. Complex Environmental Data Modelling Using Adaptive General Regression Neural Networks

    NASA Astrophysics Data System (ADS)

    Kanevski, Mikhail

    2015-04-01

    The research deals with an adaptation and application of Adaptive General Regression Neural Networks (GRNN) to high dimensional environmental data. GRNN [1,2,3] are efficient modelling tools for both spatial and temporal data and are based on nonparametric kernel methods closely related to the classical Nadaraya-Watson estimator. Adaptive GRNN, using anisotropic kernels, can also be applied to feature selection tasks when working with high dimensional data [1,3]. In the present research, Adaptive GRNN are used to study geospatial data predictability and relevant feature selection using both simulated and real data case studies. The original raw data were either three-dimensional monthly precipitation data or monthly wind speeds embedded into a 13-dimensional space constructed from geographical coordinates and geo-features calculated from a digital elevation model. GRNN were applied in two different ways: 1) adaptive GRNN with the resulting list of features ordered according to their relevancy; and 2) adaptive GRNN applied to evaluate all N possible models [in the case of the wind fields, N = 2^13 − 1 = 8191] and rank them according to the cross-validation error. In both cases training was carried out applying a leave-one-out procedure. An important result of the study is that the set of the most relevant features depends on the month (strong seasonal effect) and year. The predictabilities of precipitation and wind field patterns, estimated using the cross-validation and testing errors of raw and shuffled data, were studied in detail. The results of both approaches were qualitatively and quantitatively compared. In conclusion, Adaptive GRNN, with their ability to select features and efficiently model complex high dimensional data, can be widely used in automatic/on-line mapping and as an integrated part of environmental decision support systems. 1. Kanevski M., Pozdnoukhov A., Timonin V. Machine Learning for Spatial Environmental Data. Theory, applications and software. EPFL Press
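
    A minimal Python sketch of the underlying Nadaraya-Watson/GRNN estimator with an anisotropic Gaussian kernel illustrates the feature-selection idea: inflating one feature's bandwidth effectively removes that feature from the model. The data and bandwidths are made up.

    import numpy as np

    def grnn_predict(X_train, y_train, X_query, bandwidths):
        # One bandwidth per feature -> anisotropic kernel
        d = (X_train[None, :, :] - X_query[:, None, :]) / bandwidths
        w = np.exp(-0.5 * np.sum(d ** 2, axis=2))          # kernel weights
        return (w @ y_train) / w.sum(axis=1)               # weighted average

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, (200, 2))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)  # feature 2 irrelevant
    Xq = np.array([[0.5, -0.4]])
    print(grnn_predict(X, y, Xq, bandwidths=np.array([0.1, 0.1])))
    print(grnn_predict(X, y, Xq, bandwidths=np.array([0.1, 10.0])))  # feature 2 off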

  7. The Entropy and Complexity of Drift waves in a LAPTAG Plasma Physics Experiment

    NASA Astrophysics Data System (ADS)

    Birge-Lee, Henry; Gekelman, Walter; Pribyl, Patrick; Wise, Joe; Katz, Cami; Baker, Bob; Marmie, Ken; Thomas, Sam; Buckley-Bonnano, Samuel

    2015-11-01

    Drift waves grow from noise on a density gradient in a narrow (dia = 3 cm, L = 1.5 m) magnetized (B0z = 160 G) plasma column. A two-dimensional probe drive measured fluctuations in the plasma column in a plane transverse to the background magnetic field. Correlation techniques determined that the fluctuations were those of electrostatic drift waves. The time series data were used to generate the Bandt-Pompe/Shannon entropy, H, and the Jensen-Shannon complexity, CJS. C-H diagrams can be used to distinguish among deterministic chaos, random noise, stochastic processes, and simple waves, which makes them a powerful tool in nonlinear dynamics. The C-H diagram in this experiment reveals that the combination of drift waves and other background fluctuations is a deterministically chaotic system. The PDF of the time series, the wave spectra, and the spatial dependence of the entropy and wave complexity will be presented. LAPTAG is a university-high school alliance outreach program which has been in existence for over 20 years. Work done at BaPSF at UCLA and supported by NSF and DOE.
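
    The Bandt-Pompe entropy H that anchors the C-H plane is simple to compute: count ordinal patterns in sliding windows and normalize the Shannon entropy by log(d!). The Python sketch below uses synthetic test signals; the Jensen-Shannon complexity CJS would additionally weight H by a disequilibrium term, omitted here for brevity.

    import math
    from collections import Counter
    import numpy as np

    def permutation_entropy(x, d=5, tau=1):
        patterns = Counter(
            tuple(np.argsort(x[i:i + d * tau:tau]))         # ordinal pattern
            for i in range(len(x) - (d - 1) * tau)
        )
        p = np.array(list(patterns.values()), dtype=float)
        p /= p.sum()
        return -np.sum(p * np.log(p)) / math.log(math.factorial(d))

    rng = np.random.default_rng(4)
    t = np.linspace(0, 100, 5000)
    print(permutation_entropy(rng.normal(size=5000)))   # noise: H near 1
    print(permutation_entropy(np.sin(t)))               # simple wave: much lower H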

  8. Team Preceptorship Model: A Solution for Students' Clinical Experience

    PubMed Central

    Cooper Brathwaite, Angela; Lemonde, Manon

    2011-01-01

    There is a shortage of registered nurses in developed countries, a shortage due to the aging nursing workforce, rising demand for healthcare services, and a shortage of nursing professors to teach students. In order to increase the number of clinical placements for nursing students, the authors developed and implemented a collaborative preceptorship model between a Canadian university and a public health department to facilitate the clinical experiences of Bachelor of Science in Nursing (BScN) students. This paper describes the Team Preceptorship Model, which guided the clinical experience of nine students and 14 preceptors. It also highlights the model's evaluation, strengths, and limitations. PMID:21994893

  9. Stochastic Path Model of Polaroid Polarizer for Bell's Theorem and Triphoton Experiments

    NASA Astrophysics Data System (ADS)

    Werbos, Paul J.

    Depending on the outcome of the triphoton experiment now underway, it is possible that the new local realistic Markov Random Field (MRF) models will be the only models available that correctly predict both that experiment and Bell's theorem experiments. The MRF models represent the experiments as graphs of discrete events over space-time. This paper extends the MRF approach to continuous time by defining a new class of realistic model, the stochastic path model, and showing how it can be applied to ideal Polaroid-type polarizers in such experiments. The final section discusses possibilities for future research, ranging from uses in other experiments or novel quantum communication systems to extensions involving stochastic paths in the space of functions over continuous space. As part of this, it derives a new Boltzmann-like density operator over Fock space, which predicts the emergent statistical equilibria of nonlinear Hamiltonian field theories, based on our previous work extending the Glauber-Sudarshan P mapping from the case of classical systems described by a complex state variable α to the case of classical continuous fields. This extension may explain the stochastic aspects of quantum theory as the emergent outcome of nonlinear PDE in a time-symmetric universe.

  10. Surface complexation modeling of uranium(VI) sorbed onto lanthanum monophosphate.

    PubMed

    Ordoñez-Regil, E; Drot, R; Simoni, E

    2003-07-15

    Sorption/desorption are basic processes in the field of contaminant transport. In order to develop mechanistically accurate thermodynamic sorption models, the simulation of retention data has to take into account molecular-scale information provided by structural investigations. In this way, the uranyl sorption constants onto lanthanum monophosphate (LaPO4) were determined on the basis of a previously published structural investigation. The surface complexation modeling of U(VI) retention onto LaPO4 was performed using the constant capacitance model included in the FITEQL v3.2 program. The electrical behavior of the solid surface was investigated using electrophoretic measurements and potentiometric titration experiments. The point of zero charge was found to be 3.5, and surface complexation modeling made it possible to calculate the surface acidity constants. The fitting procedure was done with respect to the spectroscopic results, which have shown that LaPO4 presents two kinds of reactive surface sites (lanthanum atoms and phosphate groups). The uranyl sorption edges were determined for two surface coverages, 40% and 20% of the surface sites occupied assuming complete sorption. The modeling of these experimental data was realized by considering two uranyl species ("free" uranyl and a uranyl nitrate complex) sorbed only onto phosphate surface groups, in accordance with the previously published structural investigation. The obtained sorption constants have similar values for both surface complexes and make it possible to fit both sorption edges: log K_U = 9.4 for ≡P(OH)2 + UO2^2+ ↔ ≡P(OH)2UO2^2+ and log K_UN = 9.7 for ≡P(OH)2 + UO2NO3^+ ↔ ≡P(OH)2UO2NO3^+. PMID:12909028
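
    As an illustration of the mass-action arithmetic behind such constants, the Python sketch below solves a single-site complexation equilibrium ≡S + M ↔ ≡SM with K = [≡SM]/([≡S][M]) for the bound fraction, using the fitted log K = 9.4 but hypothetical site and uranyl totals.

    import math

    def bound_fraction(logK, site_total, metal_total):
        K = 10.0 ** logK
        # [SM] = x;  K = x / ((S_t - x)(M_t - x))  ->  quadratic in x
        a = K
        b = -(K * (site_total + metal_total) + 1.0)
        c = K * site_total * metal_total
        x = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)   # physical (smaller) root
        return x / metal_total

    print(f"{bound_fraction(9.4, site_total=1e-4, metal_total=4e-5):.4f}")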

  11. Complex Pathways in Folding of Protein G Explored by Simulation and Experiment

    PubMed Central

    Lapidus, Lisa J.; Acharya, Srabasti; Schwantes, Christian R.; Wu, Ling; Shukla, Diwakar; King, Michael; DeCamp, Stephen J.; Pande, Vijay S.

    2014-01-01

    The B1 domain of protein G has been a classic model system of folding for decades, the subject of numerous experimental and computational studies. Most of the experimental work has focused on whether the protein folds via an intermediate, but the evidence is mostly limited to relatively slow kinetic observations with a few structural probes. In this work we observe folding on the submillisecond timescale with microfluidic mixers using a variety of probes including tryptophan fluorescence, circular dichroism, and photochemical oxidation. We find that each probe yields different kinetics and compare these observations with a Markov State Model constructed from large-scale molecular dynamics simulations and find a complex network of states that yield different kinetics for different observables. We conclude that there are many folding pathways before the final folding step and that these paths do not have large free energy barriers. PMID:25140430
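
    A toy Python sketch shows how a Markov State Model yields probe-dependent kinetics, the central point above: propagating state populations through a transition matrix and projecting onto per-state probe values gives different apparent time courses for different probes. The three-state matrix (unfolded, intermediate-like, folded) and probe values are illustrative, not derived from the simulations.

    import numpy as np

    T = np.array([[0.90, 0.09, 0.01],    # from unfolded
                  [0.05, 0.85, 0.10],    # from intermediate-like states
                  [0.00, 0.02, 0.98]])   # from folded
    probes = {"fluorescence": np.array([0.1, 0.5, 1.0]),
              "CD":           np.array([0.0, 0.8, 1.0])}

    p = np.array([1.0, 0.0, 0.0])        # start fully unfolded
    for step in range(101):
        if step % 25 == 0:
            signals = "  ".join(f"{k}={p @ v:.3f}" for k, v in probes.items())
            print(f"t={step:3d}  populations={np.round(p, 3)}  {signals}")
        p = p @ T                        # one lag time per step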

  12. Using Interactive 3D PDF for Exploring Complex Biomedical Data: Experiences and Solutions.

    PubMed

    Newe, Axel; Becker, Linda

    2016-01-01

    The Portable Document Format (PDF) is the most commonly used file format for the exchange of electronic documents. A lesser-known feature of PDF is the possibility to embed three-dimensional models and to display these models interactively with a suitable reader. This technology is well suited to presenting, exploring, and communicating complex biomedical data. This applies in particular to data which would suffer from a loss of information if reduced to a static two-dimensional projection. In this article, we present applications of 3D PDF for selected scholarly and clinical use cases in the biomedical domain. Furthermore, we present a sophisticated tool for the generation of respective PDF documents. PMID:27577484

  13. Assessing model sensitivity and uncertainty across multiple Free-Air CO2 Enrichment experiments.

    NASA Astrophysics Data System (ADS)

    Cowdery, E.; Dietze, M.

    2015-12-01

    As atmospheric carbon dioxide levels continue to increase, it is critical that terrestrial ecosystem models can accurately predict ecological responses to the changing environment. Current predictions of net primary productivity (NPP) in response to elevated atmospheric CO2 concentrations are highly variable and contain a considerable amount of uncertainty. It is necessary that we understand which factors are driving this uncertainty. The Free-Air CO2 Enrichment (FACE) experiments have equipped us with a rich data source that can be used to calibrate and validate these model predictions. To identify and evaluate the assumptions causing inter-model differences, we performed model sensitivity and uncertainty analysis across ambient and elevated CO2 treatments using the Data Assimilation Linked Ecosystem Carbon (DALEC) model and the Ecosystem Demography Model (ED2), two process-based models of low and high complexity, respectively. These modeled process responses were compared to experimental data from the Kennedy Space Center Open Top Chamber Experiment, the Nevada Desert Free Air CO2 Enrichment Facility, the Rhinelander FACE experiment, the Wyoming Prairie Heating and CO2 Enrichment Experiment, the Duke Forest FACE experiment, and the Oak Ridge Experiment on CO2 Enrichment. By leveraging the data access proxy and data tilling services provided by the BrownDog data curation project alongside analysis modules available in the Predictive Ecosystem Analyzer (PEcAn), we produced automated, repeatable benchmarking workflows that are generalized to incorporate different sites and ecological models. Combining the observed patterns of uncertainty between the two models with results of the recent FACE model-data synthesis project (FACE-MDS) can help identify which processes need further study and additional data constraints. These findings can be used to inform future experimental design and in turn provide an informative starting point for data assimilation.

  14. Time to change from a simple linear model to a complex systems model

    PubMed Central

    2016-01-01

    A simple linear model testing hypotheses based on one-to-one relationships has long been used to find the causative factors of diseases. However, we now know that not just one but many factors from different systems, such as chemical exposures, genes, epigenetic changes, and proteins, are involved in the pathogenesis of chronic diseases such as diabetes mellitus. So, with the availability of modern technologies for understanding the intricate relations among complex systems, we need to move forward by adopting a complex systems model. PMID:27158003

  15. Electronic Transitions as a Probe of Tetrahedral versus Octahedral Coordination in Nickel(II) Complexes: An Undergraduate Inorganic Chemistry Experiment.

    ERIC Educational Resources Information Center

    Filgueiras, Carlos A. L.; Carazza, Fernando

    1980-01-01

    Discusses procedures, theoretical considerations, and results of an experiment involving the preparation of a tetrahedral nickel(II) complex and its transformation into an octahedral species. Suggests that fundamental aspects of coordination chemistry can be demonstrated by simple experiments performed in introductory level courses. (Author/JN)

  16. Thermophysical Model of S-complex NEAs: 1627 Ivar

    NASA Astrophysics Data System (ADS)

    Crowell, Jenna L.; Howell, Ellen S.; Magri, Christopher; Fernandez, Yan R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J.

    2015-11-01

    We present updates to the thermophysical model of asteroid 1627 Ivar. Ivar is an Amor-class near-Earth asteroid (NEA) with a taxonomic type of Sqw [1] and a rotation period of 4.795162 ± 5.4 × 10^-6 hours [2]. In 2013, our group observed Ivar in radar, in CCD lightcurves, and in the near-IR's reflected and thermal regimes (0.8 - 4.1 µm) using the Arecibo Observatory's 2380 MHz radar, the Palmer Divide Station's 0.35 m telescope, and the SpeX instrument at the NASA IRTF, respectively. Using these radar and lightcurve data, we generated a detailed shape model of Ivar using the software SHAPE [3,4]. Our shape model reveals more surface detail compared to earlier models [5], and we found Ivar to be an elongated asteroid with maximum extents along the three body-fixed coordinates of 12 x 11.76 x 6 km. For our thermophysical modeling, we have used SHERMAN [6,7] with input parameters such as the asteroid's IR emissivity, optical scattering law, and thermal inertia, in order to complete thermal computations based on our shape model and the known spin state. We then create synthetic near-IR spectra that can be compared to our observed spectra, which cover a wide range of Ivar's rotational longitudes and viewing geometries. As has been noted [6,8], the use of an accurate shape model is often crucial for correctly interpreting multi-epoch thermal emission observations. We will present what SHERMAN has let us determine about the reflective, thermal, and surface properties for Ivar that best reproduce our spectra. From our derived best-fit thermal parameters, we will learn more about the regolith, surface properties, and heterogeneity of Ivar and how those properties compare to those of other S-complex asteroids. References: [1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Crowell, J. et al. 2015, LPSC 46 [3] Magri C. et al. 2007, Icarus 186, 152-177 [4] Crowell, J. et al. 2014, AAS/DPS 46 [5] Kaasalainen, M. et al. 2004, Icarus 167, 178-196 [6] Crowell, J. et

  17. Model-Driven Design: Systematically Building Integrated Blended Learning Experiences

    ERIC Educational Resources Information Center

    Laster, Stephen

    2010-01-01

    Developing and delivering curricula that are integrated and that use blended learning techniques requires a highly orchestrated design. While institutions have demonstrated the ability to design complex curricula on an ad-hoc basis, these projects are generally successful at a great human and capital cost. Model-driven design provides a…

  18. Amblypygids: Model Organisms for the Study of Arthropod Navigation Mechanisms in Complex Environments?

    PubMed Central

    Wiegmann, Daniel D.; Hebets, Eileen A.; Gronenberg, Wulfila; Graving, Jacob M.; Bingman, Verner P.

    2016-01-01

    Navigation is an ideal behavioral model for the study of sensory system integration and the neural substrates associated with complex behavior. For this broader purpose, however, it may be profitable to develop new model systems that are both tractable and sufficiently complex to ensure that information derived from a single sensory modality and path integration are inadequate to locate a goal. Here, we discuss some recent discoveries related to navigation by amblypygids, nocturnal arachnids that inhabit the tropics and sub-tropics. Nocturnal displacement experiments under the cover of a tropical rainforest reveal that these animals possess navigational abilities that are reminiscent, albeit on a smaller spatial scale, of true-navigating vertebrates. Specialized legs, called antenniform legs, which possess hundreds of olfactory and tactile sensory hairs, and vision appear to be involved. These animals also have enormous mushroom bodies, higher-order brain regions that, in insects, integrate contextual cues and may be involved in spatial memory. In amblypygids, the complexity of a nocturnal rainforest may impose navigational challenges that favor the integration of information derived from multimodal cues. Moreover, the movement of these animals is easily studied in the laboratory and putative neural integration sites of sensory information can be manipulated. Thus, amblypygids could serve as model organisms for the discovery of neural substrates associated with a unique and potentially sophisticated navigational capability. The diversity of habitats in which amblypygids are found also offers an opportunity for comparative studies of sensory integration and ecological selection pressures on navigation mechanisms. PMID:27014008

  19. An Aqueous Thermodynamic Model for the Complexation of Nickel with EDTA Valid to high Base Concentration

    SciTech Connect

    Felmy, Andrew R.; Qafoku, Odeta

    2004-09-01

    An aqueous thermodynamic model is developed which accurately describes the effects of high base concentration on the complexation of Ni2+ by ethylenedinitrilotetraacetic acid (EDTA). The model is primarily developed from extensive data on the solubility of Ni(OH)2(c) in the presence of EDTA and in the presence and absence of Ca2+ as the competing metal ion. The solubility data for Ni(OH)2(c) were obtained in solutions ranging in NaOH concentration from 0.01 to 11.6 m, and in Ca2+ concentrations extending to saturation with respect to portlandite, Ca(OH)2. Owing to the inert nature of the Ni-EDTA complexation reactions, solubility experiments were approached from both the oversaturation and undersaturation directions and over time frames extending to 413 days. The final aqueous thermodynamic model is based upon the equations of Pitzer, accurately predicts the observed solubilities to concentrations as high as 11.6 m NaOH, and is consistent with UV-Vis spectroscopic studies of the complexes in solution.

  20. An Experiment in Determining Software Reliability Model Applicability

    NASA Technical Reports Server (NTRS)

    Nikora, Allen P.

    1995-01-01

    There have been few reports on the behavior of software reliability models under controlled conditions. That is to say, most of the reported experience with the models is from the testing phase of actual projects, during which researchers have little or no control over the data with which they work. Given that failure data for actual projects can be noisy, distorted, and uncertain, reported procedures for determining model applicability may be incomplete.