Science.gov

Sample records for realistic large-scale model

  1. Efficient Large-Scale Coating Microstructure Formation Using Realistic CFD Models

    NASA Astrophysics Data System (ADS)

    Wiederkehr, Thomas; Müller, Heinrich

    2015-02-01

    For the understanding of physical effects during the formation of thermally sprayed coating layers and the deduction of the macroscopic properties of a coating, microstructure modeling and simulation techniques play an important role. In this contribution, a coupled simulation framework consisting of a detailed, CFD-based single-splat simulation and a large-scale coating build-up simulation is presented that is capable of computing large-scale, three-dimensional, porous microstructures by sequential drop impingement of more than 10,000 individual particles on multicore workstation hardware. Due to the geometry-based coupling of the two simulations, the deformation, cooling, and solidification of every particle are sensitive to the surface region it hits, so pores develop naturally in the model. The single-splat simulation employs the highly parallel Lattice-Boltzmann method, which is well suited for GPU acceleration. In order to save splat calculations, the coating simulation includes a database-driven approach that re-uses already computed splats for similar substrate shapes at the randomly chosen impact sites. For a fast database search, three different methods for efficient pre-selection of candidates are described and compared against each other.
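
    The database-driven re-use step lends itself to a compact illustration. The sketch below is a hypothetical reading of that idea (the descriptor, tolerance, and all names are illustrative assumptions, not the authors' implementation): a local height-map patch around the chosen impact site is reduced to a small feature vector, and a previously computed splat is re-used whenever a stored descriptor is close enough; otherwise a new single-splat CFD run would be triggered.

        # Hypothetical sketch of database-driven splat re-use; descriptor and
        # threshold are illustrative assumptions, not the published method.
        import numpy as np

        class SplatDatabase:
            def __init__(self, tol=0.05):
                self.descriptors = []   # one descriptor per stored splat
                self.splats = []        # precomputed deformed-splat geometries
                self.tol = tol

            @staticmethod
            def describe(patch):
                """Reduce a local height-map patch to a small feature vector
                (roughness plus mean slope magnitudes)."""
                gy, gx = np.gradient(patch)
                return np.array([patch.std(), np.abs(gx).mean(), np.abs(gy).mean()])

            def lookup(self, patch):
                """Return a stored splat whose substrate descriptor is close
                enough, or None if a new CFD single-splat run is needed."""
                d = self.describe(patch)
                for desc, splat in zip(self.descriptors, self.splats):
                    if np.linalg.norm(desc - d) < self.tol:
                        return splat
                return None

            def insert(self, patch, splat):
                self.descriptors.append(self.describe(patch))
                self.splats.append(splat)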

  2. The composite neuron: a realistic one-compartment Purkinje cell model suitable for large-scale neuronal network simulations.

    PubMed

    Coop, A D; Reeke, G N

    2001-01-01

    We present a simple method for the realistic description of neurons that is well suited to the development of large-scale neuronal network models where the interactions within and between neural circuits are the object of study rather than the details of dendritic signal propagation in individual cells. Referred to as the composite approach, it combines in a one-compartment model elements of both the leaky integrator cell and the conductance-based formalism of Hodgkin and Huxley (1952). Composite models treat the cell membrane as an equivalent circuit that contains ligand-gated synaptic, voltage-gated, and voltage- and concentration-dependent conductances. The time dependences of these various conductances are assumed to correlate with their spatial locations in the real cell. Thus, when viewed from the soma, ligand-gated synaptic and other dendritically located conductances can be modeled as either single alpha or double exponential functions of time, whereas, with the exception of discharge-related conductances, somatic and proximal dendritic conductances can be well approximated by simple current-voltage relationships. As an example of the composite approach to neuronal modeling we describe a composite model of a cerebellar Purkinje neuron.
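
    As a rough illustration of the composite idea described above, the sketch below integrates a single equivalent-circuit compartment with a leak conductance, an alpha-function synaptic conductance, and a voltage-dependent current given by an instantaneous current-voltage relationship. All parameter values and the cubic I-V shape are illustrative assumptions, not the published Purkinje-cell model.

        # One-compartment "composite" membrane sketch; parameters are
        # illustrative assumptions, not the published Purkinje-cell model.
        import numpy as np

        C_m, g_leak, E_leak = 1.0, 0.1, -70.0   # nF, uS, mV
        E_syn, g_max, tau = 0.0, 0.05, 2.0      # alpha-function synapse
        dt, T, t_spike = 0.1, 200.0, 50.0       # ms

        def g_syn(t):
            """Dendritically located synaptic conductance, seen from the soma
            as a single alpha function of time."""
            s = (t - t_spike) / tau
            return g_max * s * np.exp(1.0 - s) if t >= t_spike else 0.0

        def i_iv(v):
            """Somatic/proximal conductances lumped into a simple instantaneous
            current-voltage relationship (assumed cubic shape)."""
            return 1e-4 * (v + 65.0) * (v + 50.0) * (v + 35.0) / 100.0

        v = E_leak
        trace = []
        for step in range(int(T / dt)):
            t = step * dt
            i_total = g_leak * (v - E_leak) + g_syn(t) * (v - E_syn) + i_iv(v)
            v += dt * (-i_total) / C_m          # forward-Euler update
            trace.append(v)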

  3. A realistic large-scale model of the cerebellum granular layer predicts circuit spatio-temporal filtering properties.

    PubMed

    Solinas, Sergio; Nieus, Thierry; D'Angelo, Egidio

    2010-01-01

    The way the cerebellar granular layer transforms incoming mossy fiber signals into new spike patterns to be relayed to Purkinje cells is not yet clear. Here, a realistic computational model of the granular layer was developed and used to address four main functional hypotheses: center-surround organization, time-windowing, high-pass filtering in responses to spike bursts and coherent oscillations in response to diffuse random activity. The model network was activated using patterns inspired by those recorded in vivo. Burst stimulation of a small mossy fiber bundle resulted in granule cell bursts delimited in time (time windowing) and space (center-surround) by network inhibition. This burst-burst transmission showed marked frequency-dependence configuring a high-pass filter with cut-off frequency around 100 Hz. The contrast between center and surround properties was regulated by the excitatory-inhibitory balance. The stronger excitation made the center more responsive to 10-50 Hz input frequencies and enhanced the granule cell output (with spikes occurring earlier and with higher frequency and number) compared to the surround. Finally, over a certain level of mossy fiber background activity, the circuit generated coherent oscillations in the theta-frequency band. All these processes were fine-tuned by NMDA and GABA-A receptor activation and neurotransmitter vesicle cycling in the cerebellar glomeruli. This model shows that available knowledge on cellular mechanisms is sufficient to unify the main functional hypotheses on the cerebellum granular layer and suggests that this network can behave as an adaptable spatio-temporal filter coordinated by theta-frequency oscillations.

  4. Modeling of the cross-beam energy transfer with realistic inertial-confinement-fusion beams in a large-scale hydrocode.

    PubMed

    Colaïtis, A; Duchateau, G; Ribeyre, X; Tikhonchuk, V

    2015-01-01

    A method for modeling realistic laser beams smoothed by kinoform phase plates is presented. The ray-based paraxial complex geometrical optics (PCGO) model with Gaussian thick rays allows one to create intensity variations, or pseudospeckles, that reproduce the beam envelope, contrast, and high-intensity statistics predicted by paraxial laser propagation codes. A steady-state cross-beam energy-transfer (CBET) model is implemented in a large-scale radiative hydrocode based on the PCGO model. It is used in conjunction with the realistic beam modeling technique to study the effects of CBET between coplanar laser beams on the target implosion. The pseudospeckle pattern imposed by PCGO produces modulations in the irradiation field and the shell implosion pressure. Cross-beam energy transfer between beams at 20° and 40° significantly degrades the irradiation symmetry by amplifying low-frequency modes and reducing the laser-capsule coupling efficiency, ultimately leading to large modulations of the shell areal density and lower convergence ratios. These results highlight the role of laser-plasma interaction and its influence on the implosion dynamics.

  5. Modeling the Internet's large-scale topology

    PubMed Central

    Yook, Soon-Hyung; Jeong, Hawoong; Barabási, Albert-László

    2002-01-01

    Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes forms a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet. PMID:12368484
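
    The link-placement rule identified above (preferential attachment competing with a linear distance dependence) can be illustrated with a toy generator. In the sketch below the attachment probability is taken proportional to degree/distance with both exponents equal to one, and the node layout is uniform rather than the fractal, population-driven set found in the paper; both simplifications are assumptions made for brevity.

        # Toy generator sketch: each new node attaches with probability
        # proportional to (degree of target) / (distance to target).
        import numpy as np

        rng = np.random.default_rng(0)
        N, m = 500, 2                       # nodes, links added per new node
        pos = rng.random((N, 2))            # node layout (uniform here, not the
                                            # fractal set identified in the paper)
        degree = np.zeros(N, dtype=int)
        edges = [(0, 1)]
        degree[[0, 1]] = 1

        for new in range(2, N):
            d = np.linalg.norm(pos[:new] - pos[new], axis=1) + 1e-9
            weight = np.maximum(degree[:new], 1) / d
            targets = rng.choice(new, size=min(m, new), replace=False,
                                 p=weight / weight.sum())
            for t in targets:
                edges.append((new, int(t)))
                degree[new] += 1
                degree[t] += 1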

  6. Numerical Modeling for Large Scale Hydrothermal System

    NASA Astrophysics Data System (ADS)

    Sohrabi, Reza; Jansen, Gunnar; Malvoisin, Benjamin; Mazzini, Adriano; Miller, Stephen A.

    2017-04-01

    Moderate-to-high enthalpy systems are driven by multiphase and multicomponent processes, fluid and rock mechanics, and heat transport processes, all of which present challenges in developing realistic numerical models of the underlying physics. The objective of this work is to present an approach, and some initial results, for modeling and understanding the dynamics of the birth of large scale hydrothermal systems. Numerical modeling of such complex systems must take into account a variety of coupled thermal, hydraulic, mechanical and chemical processes, which is numerically challenging. To provide first estimates of the behavior of these deep, complex systems, geological structures must be constrained, and the fluid dynamics, mechanics and heat transport need to be investigated in three dimensions. Modeling these processes numerically at adequate resolution and reasonable computation times requires a suite of tools that we are developing and/or utilizing to investigate such systems. Our long-term goal is to develop 3D numerical models, based on geological models, that couple mechanics with the hydraulics and thermal processes driving hydrothermal systems. Our first results from the Lusi hydrothermal system in East Java, Indonesia, provide a basis for more sophisticated studies, eventually in 3D, and we introduce a workflow necessary to achieve these objectives. Future work focuses on parallelization suitable for High Performance Computing (HPC). Such developments are necessary to achieve high-resolution simulations to more fully understand the complex dynamics of hydrothermal systems.

  7. Large-scale multimedia modeling applications

    SciTech Connect

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates impact computations for radioactive and hazardous contaminants across major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  8. Modeling Human Behavior at a Large Scale

    DTIC Science & Technology

    2012-01-01

    Discerning intentions in dynamic human action. Trends in Cognitive Sciences , 5(4):171 – 178, 2001. Shirli Bar-David, Israel Bar-David, Paul C. Cross, Sadie...Limits of predictability in human mobility. Science , 327(5968):1018, 2010. S.A. Stouffer. Intervening opportunities: a theory relating mobility and...Modeling Human Behavior at a Large Scale by Adam Sadilek Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

  9. Is realistic neuronal modeling realistic?

    PubMed

    Almog, Mara; Korngreen, Alon

    2016-11-01

    Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, with the 21st century has come a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Thus researchers have asked what the computational abilities of single neurons are and have attempted to give answers using realistic models. We briefly review the state of the art of compartmental modeling, highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publicly available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models functioning poorly once they are stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single neuron models. Copyright © 2016 the American Physiological Society.

  10. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    aerosol species up to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas...impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the...bacteria in large-scale dust storms is expected to significantly impact warm ice cloud formation, human health, and ecosystems globally. In Niemi et al

  11. SU(3)-guided Realistic Nucleon-nucleon Interaction for Large-scale Calculations

    NASA Astrophysics Data System (ADS)

    Sargsyan, Grigor; Launey, Kristina; Baker, Robert; Dytrych, Tomas; Draayer, Jerry

    2017-01-01

    We examine nucleon-nucleon (NN) realistic interactions, such as JISP16 and N3LO, based on their SU(3) decomposition and identify components of the interactions that are sufficient to describe the structure of low-lying states in nuclei. We observe that many of the interaction components, when expressed as SU(3) tensors, become negligible. Paring the interaction down to its physically relevant terms improves the efficacy of large-scale calculations from first principles (ab initio). The work compares spectral properties for low-lying states in 12C calculated by means of the selected interaction to the results obtained when the full interaction is used and confirms the validity of the method. Supported by the U.S. NSF (OCI-0904874, ACI-1516338) and the U.S. DOE (DE-SC0005248), and benefited from computing resources provided by Blue Waters and Louisiana State University's Center for Computation & Technology.

  12. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2007-09-30

    to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large...in FY08. NAAPS forecasts of CONUS dust storms and long-range dust transport to CONUS were further evaluated in collaboration with CSU. These...visibility. The regional model ( COAMPS /Aerosol) became operational during OIF. The global model Navy Aerosol Analysis and Prediction System (NAAPS

  13. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetic and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort on implementing solvers capable of handling models with over 100 million degrees of freedom. We implemented additive Schwarz-type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which gives a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10⁵ degrees of freedom convergence became too slow to solve the system within an acceptable amount of wall time (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10⁷ degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
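
    The solver combination discussed above (ILU-type preconditioning of a Krylov method) can be exercised on a small test problem with standard tools. The sketch below uses SciPy on a 1-D Poisson-type matrix purely to illustrate the ILU + GMRES pairing; the production setting is a parallel 3-D FEM system, and the matrix, sizes, and fill factor here are illustrative assumptions.

        # Illustration of ILU-preconditioned GMRES with SciPy on a small
        # Poisson-type test matrix (not the parallel FEM solver itself).
        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 10_000                                  # degrees of freedom (toy size)
        A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
        b = np.ones(n)

        ilu = spla.spilu(A, fill_factor=10)         # incomplete LU, limited fill-in
        M = spla.LinearOperator((n, n), ilu.solve)  # preconditioner as an operator

        x, info = spla.gmres(A, b, M=M, restart=50, maxiter=1000)
        print("converged" if info == 0 else f"GMRES stopped, info={info}")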

  14. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2010-09-30

    advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large deserts of the world... dust source regions in NAAPS. The DSD has been crucial for high-resolution dust forecasting in SW Asia using COAMPS (Walker et al., 2009). Dust ...6 Figure 2. Four-panel product used to compare multiple model forecasts of visibility in SW Asia dust storms . On the web the product is

  15. Adaptive Texture Synthesis for Large Scale City Modeling

    NASA Astrophysics Data System (ADS)

    Despine, G.; Colleu, T.

    2015-02-01

    Large scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using purely procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.

  16. A model of plasma heating by large-scale flow

    NASA Astrophysics Data System (ADS)

    Pongkitiwanichakul, P.; Cattaneo, F.; Boldyrev, S.; Mason, J.; Perez, J. C.

    2015-12-01

    In this work, we study the process of energy dissipation triggered by a slow large-scale motion of a magnetized conducting fluid. Our consideration is motivated by the problem of heating the solar corona, which is believed to be governed by fast reconnection events set off by the slow motion of magnetic field lines anchored in the photospheric plasma. To elucidate the physics governing the disruption of the imposed laminar motion and the energy transfer to small scales, we propose a simplified model where the large-scale motion of magnetic field lines is prescribed not at the footpoints but rather imposed volumetrically. As a result, the problem can be treated numerically with an efficient, highly accurate spectral method, allowing us to use a resolution and statistical ensemble exceeding those of the previous work. We find that, even though the large-scale deformations are slow, they eventually lead to reconnection events that drive a turbulent state at smaller scales. The small-scale turbulence displays many of the universal features of field-guided magnetohydrodynamic turbulence like a well-developed inertial range spectrum. Based on these observations, we construct a phenomenological model that gives the scalings of the amplitude of the fluctuations and the energy-dissipation rate as functions of the input parameters. We find good agreement between the numerical results and the predictions of the model.

  17. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
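
    For orientation, the contrast between ecological and Fickian diffusion that motivates the procedure can be written compactly. The homogenized coefficient below is given as a harmonic average of the motility, which is our reading of the standard result for ecological diffusion and should be treated as an assumption rather than a quotation of the paper.

        % u: population density, mu(x): motility varying with habitat type
        \partial_t u = \nabla^2\left[\mu(\mathbf{x})\,u\right]              % ecological diffusion
        \qquad\text{vs.}\qquad
        \partial_t u = \nabla\cdot\left[\mu(\mathbf{x})\,\nabla u\right]    % Fickian diffusion
        % Large-scale (homogenized) equation with an averaged motility (assumed harmonic mean):
        \partial_t \bar{u} \approx \bar{\mu}\,\nabla^2 \bar{u},
        \qquad
        \bar{\mu} = \left(\frac{1}{|\Omega|}\int_{\Omega}\frac{d\mathbf{x}}{\mu(\mathbf{x})}\right)^{-1}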

  18. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  19. Optimization of Nanoparticle-Based SERS Substrates through Large-Scale Realistic Simulations

    PubMed Central

    2016-01-01

    Surface-enhanced Raman scattering (SERS) has become a widely used spectroscopic technique for chemical identification, providing unbeaten sensitivity down to the single-molecule level. The amplification of the optical near field produced by collective electron excitations —plasmons— in nanostructured metal surfaces gives rise to a dramatic increase by many orders of magnitude in the Raman scattering intensities from neighboring molecules. This effect strongly depends on the detailed geometry and composition of the plasmon-supporting metallic structures. However, the search for optimized SERS substrates has largely relied on empirical data, due in part to the complexity of the structures, whose simulation becomes prohibitively demanding. In this work, we use state-of-the-art electromagnetic computation techniques to produce predictive simulations for a wide range of nanoparticle-based SERS substrates, including realistic configurations consisting of random arrangements of hundreds of nanoparticles with various morphologies. This allows us to derive rules of thumb for the influence of particle anisotropy and substrate coverage on the obtained SERS enhancement and optimum spectral ranges of operation. Our results provide a solid background to understand and design optimized SERS substrates. PMID:28239616

  20. Optimization of Nanoparticle-Based SERS Substrates through Large-Scale Realistic Simulations.

    PubMed

    Solís, Diego M; Taboada, José M; Obelleiro, Fernando; Liz-Marzán, Luis M; García de Abajo, F Javier

    2017-02-15

    Surface-enhanced Raman scattering (SERS) has become a widely used spectroscopic technique for chemical identification, providing unbeaten sensitivity down to the single-molecule level. The amplification of the optical near field produced by collective electron excitations -plasmons- in nanostructured metal surfaces gives rise to a dramatic increase by many orders of magnitude in the Raman scattering intensities from neighboring molecules. This effect strongly depends on the detailed geometry and composition of the plasmon-supporting metallic structures. However, the search for optimized SERS substrates has largely relied on empirical data, due in part to the complexity of the structures, whose simulation becomes prohibitively demanding. In this work, we use state-of-the-art electromagnetic computation techniques to produce predictive simulations for a wide range of nanoparticle-based SERS substrates, including realistic configurations consisting of random arrangements of hundreds of nanoparticles with various morphologies. This allows us to derive rules of thumb for the influence of particle anisotropy and substrate coverage on the obtained SERS enhancement and optimum spectral ranges of operation. Our results provide a solid background to understand and design optimized SERS substrates.

  1. Large-Scale, Full-Wave Scattering Phenomenology Characterization of Realistic Trees: Preliminary Results

    DTIC Science & Technology

    2012-09-01

    Fig. 2 Sassafras tree model; Fig. 3 Eastern cottonwood (Populus deltoides) tree model. ... After the mesh has been properly processed

  2. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  3. Extending SME to Handle Large-Scale Cognitive Modeling.

    PubMed

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2016-06-20

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before.
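
    The greedy-merging step (a) has a simple core that the toy sketch below tries to convey; it is not the actual SME code. Local match hypotheses ("kernels") are visited in descending score order and folded into the growing interpretation only when their correspondences remain one-to-one with what has been accepted so far; the sort plus a consistency check per kernel is what keeps the procedure polynomial.

        # Toy illustration of greedy merging of match hypotheses (not SME itself).
        def greedy_merge(kernels):
            """kernels: list of (score, {base_item: target_item}) pairs."""
            interpretation, mapped_base, mapped_target, total = {}, set(), set(), 0.0
            for score, corr in sorted(kernels, key=lambda k: -k[0]):   # best first
                consistent = all(
                    (b not in mapped_base and t not in mapped_target)
                    or interpretation.get(b) == t
                    for b, t in corr.items()
                )
                if consistent:                       # keep the mapping one-to-one
                    interpretation.update(corr)
                    mapped_base |= corr.keys()
                    mapped_target |= set(corr.values())
                    total += score
            return interpretation, total

        # Example: the last kernel conflicts with the first and is skipped.
        kernels = [(3.0, {"sun": "nucleus"}), (2.0, {"planet": "electron"}),
                   (1.0, {"sun": "electron"})]
        print(greedy_merge(kernels))   # ({'sun': 'nucleus', 'planet': 'electron'}, 5.0)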

  4. Challenges of Modeling Flood Risk at Large Scales

    NASA Astrophysics Data System (ADS)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    algorithm propagates the flows for each simulated event. The model incorporates a digital terrain model (DTM) at 10 m horizontal resolution, which is used to extract flood plain cross-sections such that a one-dimensional hydraulic model can be used to estimate the extent and elevation of flooding. In doing so, the effect of flood defenses in mitigating floods is accounted for. Finally, a suite of vulnerability relationships has been developed to estimate flood losses for a portfolio of properties that are exposed to flood hazard. Historical experience indicates that for recent floods in Great Britain more than 50% of insurance claims occur outside the flood plain, and these are primarily a result of excess surface flow, hillside flooding, and flooding due to inadequate drainage. A sub-component of the model addresses this issue by considering several parameters that best explain the variability of claims off the flood plain. The challenges of modeling such a complex phenomenon at a large scale largely dictate the choice of modeling approaches that need to be adopted for each of these model components. While detailed numerically-based physical models exist and have been used for conducting flood hazard studies, they are generally restricted to small geographic regions. In a probabilistic risk estimation framework like our current model, a blend of deterministic and statistical techniques has to be employed such that each model component is independent, physically sound and able to maintain the statistical properties of observed historical data. This is particularly important because of the highly non-linear behavior of the flooding process. With respect to vulnerability modeling, both on and off the flood plain, the challenges include the appropriate scaling of a damage relationship when applied to a portfolio of properties. This arises from the fact that the estimated hazard parameter used for damage assessment, namely maximum flood depth, has considerable uncertainty. The

  5. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  6. Performance modeling and analysis of consumer classes in large scale systems

    NASA Astrophysics Data System (ADS)

    Al-Shukri, Sh.; Lenin, R. B.; Ramaswamy, S.; Anand, A.; Narasimhan, V. L.; Abraham, J.; Varadan, Vijay

    2009-03-01

    Peer-to-Peer (P2P) networks have been used efficiently as building blocks for overlay networks in large-scale distributed network applications running over Internet Protocol (IP) based bottom-layer networks. With large-scale Wireless Sensor Networks (WSNs) becoming increasingly realistic, it is important to build overlay networks with WSNs as the bottom layer. A suitable mathematical (stochastic) model for such overlay networks over WSNs is a queuing network with multi-class customers. In this paper, we discuss how these mathematical network models can be simulated using the object-oriented simulation package OMNeT++. We discuss the Graphical User Interface (GUI) developed to accept the input parameter files and execute the simulation. We compare the simulation results with analytical formulas available in the literature for these mathematical models.

  7. Investigation of flow fields within large scale hypersonic inlet models

    NASA Technical Reports Server (NTRS)

    Gnos, A. V.; Watson, E. C.; Seebaugh, W. R.; Sanator, R. J.; Decarlo, J. P.

    1973-01-01

    Analytical and experimental investigations were conducted to determine the internal flow characteristics in model passages representative of hypersonic inlets for use at Mach numbers to about 12. The passages were large enough to permit measurements to be made in both the core flow and boundary layers. The analytical techniques for designing the internal contours and predicting the internal flow-field development accounted for coupling between the boundary layers and inviscid flow fields by means of a displacement-thickness correction. Three large-scale inlet models, each having a different internal compression ratio, were designed to provide high internal performance with an approximately uniform static-pressure distribution at the throat station. The models were tested in the Ames 3.5-Foot Hypersonic Wind Tunnel at a nominal free-stream Mach number of 7.4 and a unit free-stream Reynolds number of 8.86 × 10⁶ per meter.

  8. Modelling large-scale halo bias using the bispectrum

    NASA Astrophysics Data System (ADS)

    Pollack, Jennifer E.; Smith, Robert E.; Porciani, Cristiano

    2012-03-01

    We study the relation between the density distribution of tracers for large-scale structure and the underlying matter distribution - commonly termed bias - in the Λ cold dark matter framework. In particular, we examine the validity of the local model of biasing at quadratic order in the matter density. This model is characterized by parameters b1 and b2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales. We find that, whilst the fits are reasonably good, the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no smoothing scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo and halo-mass power spectra and from these construct estimates of the effective large-scale bias as a guide for b1. We measure the configuration dependence of the halo bispectra Bhhh and reduced bispectra Qhhh for very large-scale k-space triangles. From these data, we constrain b1 and b2, taking into account the full bispectrum covariance matrix. Using the lowest order perturbation theory, we find that for Bhhh the best-fitting parameters are in reasonable agreement with one another as the triangle scale is varied, although the fits become poor as smaller scales are included. The same is true for Qhhh. The best-fitting values were found to depend on the discreteness correction. This led us to consider halo-mass cross-bispectra. The results from these statistics supported our earlier findings. We then developed a test to explore whether the inconsistency in the recovered bias parameters could be attributed to missing higher order corrections in the models. We prove that low-order expansions are not sufficiently accurate to model the data, even on scales k1 ≈ 0.04 h Mpc⁻¹. If robust inferences concerning bias are to be drawn
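
    For reference, the local quadratic bias model referred to above is commonly written as below (the subtraction of the variance term and the factor 1/2 are widespread conventions that may differ in detail from the authors' definitions), together with the lowest-order perturbation-theory halo bispectrum that such fits rely on.

        % Local bias model to quadratic order (delta: matter overdensity):
        \delta_h(\mathbf{x}) = b_1\,\delta(\mathbf{x})
          + \tfrac{1}{2} b_2 \left[\delta^2(\mathbf{x}) - \langle\delta^2\rangle\right]
        % Tree-level halo bispectrum implied by this model (P: matter power spectrum,
        % B: matter bispectrum; "cyc." denotes the two cyclic permutations):
        B_{hhh}(k_1,k_2,k_3) \approx b_1^3\,B(k_1,k_2,k_3)
          + b_1^2 b_2 \left[P(k_1)P(k_2) + \text{cyc.}\right]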

  9. Modeling Failure Propagation in Large-Scale Engineering Networks

    NASA Astrophysics Data System (ADS)

    Schläpfer, Markus; Shapiro, Jonathan L.

    The simultaneous unavailability of several technical components within large-scale engineering systems can lead to high stress, rendering them prone to cascading events. In order to gain qualitative insights into the failure propagation mechanisms resulting from independent outages, we adopt a minimalistic model representing the components and their interdependencies by an undirected, unweighted network. The failure dynamics are modeled by an anticipated accelerated “wearout” process that depends on the initial degree of a node and on the number of failed nearest neighbors. The results of the stochastic simulations imply that the influence of the network topology on the speed of the cascade highly depends on how the number of failed nearest neighbors shortens the life expectancy of a node. As a formal description of the decaying networks we propose a continuous-time mean-field approximation, estimating the average failure rate of the nearest neighbors of a node based on the degree-degree distribution.
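
    A minimal stochastic realization of the model described above might look like the sketch below; the specific rate law (a linear combination of initial degree and failed-neighbor count), the network, and all parameter values are illustrative assumptions rather than the published model.

        # Minimal wear-out cascade sketch on a random network: a node's failure
        # hazard grows with its initial degree and its number of failed neighbors.
        import random
        import networkx as nx

        rng = random.Random(1)
        G = nx.erdos_renyi_graph(200, 0.03, seed=1)
        k0 = dict(G.degree())                       # initial degrees
        failed = set(rng.sample(list(G.nodes), 3))  # independent initial outages
        dt, t, alpha, beta = 0.01, 0.0, 0.002, 0.05

        while len(failed) < G.number_of_nodes() and t < 100.0:
            t += dt
            for n in list(G.nodes):
                if n in failed:
                    continue
                f_neigh = sum(1 for m in G[n] if m in failed)
                rate = alpha * k0[n] + beta * f_neigh   # accelerated wear-out rate
                if rng.random() < rate * dt:
                    failed.add(n)
        print(f"{len(failed)} nodes failed by t = {t:.1f}")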

  10. Research on large-scale wind farm modeling

    NASA Astrophysics Data System (ADS)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid affects the power system in ways that differ from traditional power plants. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, we must first establish an effective wind turbine generator (WTG) model. As the doubly-fed VSCF wind turbine is currently the mainstream turbine type, this article first reviews research progress on doubly-fed VSCF wind turbines and then describes the detailed process of building the model. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of wind farm models based on measured wind farm output characteristics becomes possible; the article focuses on explaining this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  11. Large-scale Modeling of Inundation in the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Luo, X.; Li, H. Y.; Getirana, A.; Leung, L. R.; Tesfa, T. K.

    2015-12-01

    Flood events have impacts on the exchange of energy, water and trace gases between land and atmosphere, and hence potentially affect the climate. The Amazon River basin is the world's largest river basin, and seasonal floods occur there each year. Because the basin is characterized by flat gradients, backwater effects are evident in the river dynamics. This factor, together with large uncertainties in river hydraulic geometry, surface topography and other datasets, contributes to the difficulty of simulating flooding processes over this basin. We have developed a large-scale inundation scheme in the framework of the Model for Scale Adaptive River Transport (MOSART) river routing model. Both the kinematic wave and the diffusion wave routing methods are implemented in the model. A new process-based algorithm is designed to represent river channel - floodplain interactions. Uncertainties in the input datasets are partly addressed through model calibration. We will present the comparison of simulated results against satellite and in situ observations and analysis to understand factors that influence inundation processes in the Amazon Basin.
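
    For context, the two routing options mentioned above differ in which terms of the momentum balance are retained; in textbook form (symbol conventions assumed, not the MOSART-specific discretization):

        % S_0: bed slope, h: flow depth, S_f: friction slope
        \text{kinematic wave:}\quad S_f = S_0
        \qquad
        \text{diffusion wave:}\quad S_f = S_0 - \frac{\partial h}{\partial x}
        % closed with Manning's relation and continuity:
        Q = \frac{1}{n}\,A\,R^{2/3}\,S_f^{1/2},
        \qquad
        \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q_{\mathrm{lat}}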

  12. Surrogate population models for large-scale neural simulations.

    PubMed

    Tripp, Bryan P

    2015-06-01

    Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed in order to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky-integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.

  13. Large scale cardiac modeling on the Blue Gene supercomputer.

    PubMed

    Reumann, Matthias; Fitch, Blake G; Rayshubskiy, Aleksandr; Keller, David U; Weiss, Daniel L; Seemann, Gunnar; Dössel, Olaf; Pitman, Michael C; Rice, John J

    2008-01-01

    Multi-scale, multi-physical heart models have not yet been able to include a high degree of accuracy and resolution with respect to model detail and spatial resolution due to computational limitations of current systems. We propose a framework to compute large scale cardiac models. Decomposition of anatomical data in segments to be distributed on a parallel computer is carried out by optimal recursive bisection (ORB). The algorithm takes into account a computational load parameter which has to be adjusted according to the cell models used. The diffusion term is realized by the monodomain equations. The anatomical data-set was given by both ventricles of the Visible Female data-set in a 0.2 mm resolution. Heterogeneous anisotropy was included in the computation. Model weights as input for the decomposition and load balancing were set to (a) 1 for tissue and 0 for non-tissue elements; (b) 10 for tissue and 1 for non-tissue elements. Scaling results for 512, 1024, 2048, 4096 and 8192 computational nodes were obtained for 10 ms simulation time. The simulations were carried out on an IBM Blue Gene/L parallel computer. A 1 s simulation was then carried out on 2048 nodes for the optimal model load. Load balances did not differ significantly across computational nodes even if the number of data elements distributed to each node differed greatly. Since the ORB algorithm did not take into account computational load due to communication cycles, the speedup is close to optimal for the computation time but not optimal overall due to the communication overhead. However, the simulation times were reduced from 87 minutes on 512 to 11 minutes on 8192 nodes. This work demonstrates that it is possible to run simulations of the presented detailed cardiac model within hours for the simulation of a heart beat.
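
    For reference, the monodomain formulation mentioned above couples a reaction term from the cell model to diffusion of the transmembrane voltage; in its standard form (symbol conventions assumed, not quoted from the paper):

        % V_m: transmembrane voltage, sigma: effective conductivity tensor,
        % chi: surface-to-volume ratio, C_m: membrane capacitance,
        % I_ion: ionic current from the cell model with gating variables u
        \chi\left(C_m\,\frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m,\mathbf{u})\right)
          = \nabla\cdot\left(\boldsymbol{\sigma}\,\nabla V_m\right)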

  14. Numerically modelling the large scale coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magnetofrictional model, which is relatively simple and computationally more economical. We have developed a magnetofrictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.
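
    For orientation, one common form of the magnetofrictional approximation replaces the momentum equation with a frictional velocity proportional to the Lorentz force, so the field relaxes toward a force-free state while being driven through the induction equation; the exact form and the treatment of the friction coefficient vary between implementations, and the version below is an assumption rather than the authors' specific formulation.

        % One common magnetofrictional closure (nu: friction coefficient):
        \mathbf{v} = \frac{1}{\nu}\,\frac{(\nabla\times\mathbf{B})\times\mathbf{B}}{B^{2}},
        \qquad
        \frac{\partial\mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B})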

  15. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelets modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach however is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We will discuss the way we utilized wavelets decomposition in our domain to facilitate compression and in answering a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
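
    The decompose-compress-query idea can be illustrated with an off-the-shelf wavelet library. The sketch below uses PyWavelets to threshold a multi-level decomposition of a toy signal and measure the reconstruction error; it illustrates the general mechanism only and is not the AQSIM implementation (wavelet family, level, and the 5% retention threshold are arbitrary choices).

        # Wavelet compression by coefficient thresholding (generic illustration).
        import numpy as np
        import pywt

        signal = np.sin(np.linspace(0, 8 * np.pi, 4096)) + 0.05 * np.random.randn(4096)

        coeffs = pywt.wavedec(signal, "db4", level=6)         # multi-level decomposition
        arr, slices = pywt.coeffs_to_array(coeffs)
        keep = np.abs(arr) >= np.quantile(np.abs(arr), 0.95)  # keep largest 5% of coefficients
        compressed = np.where(keep, arr, 0.0)

        approx = pywt.waverec(
            pywt.array_to_coeffs(compressed, slices, output_format="wavedec"), "db4")
        err = np.linalg.norm(signal - approx[:len(signal)]) / np.linalg.norm(signal)
        print("relative L2 error of the compressed approximation:", err)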

  16. A first large-scale flood inundation forecasting model

    SciTech Connect

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present continental to global scale flood forecasting focusses on predicting at a point discharge, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km². ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km² and at model grid resolutions up to several km². However, initial model test runs in forecast mode

  17. Low energy dipole strength from large scale shell model calculations

    NASA Astrophysics Data System (ADS)

    Sieja, Kamila

    2017-09-01

    Low energy enhancement of radiative strength functions has been deduced from experiments in several mass regions of nuclei. Such an enhancement is believed to impact the calculated neutron capture rates, which are crucial input for reaction rates of astrophysical interest. Recently, shell model calculations have been performed to explain the upbend of the γ-strength as due to the M1 transitions between close-lying states in the quasi-continuum in Fe and Mo nuclei. Beyond-mean-field calculations in Mo suggested, however, a non-negligible role of the electric dipole in the low energy enhancement. So far, no calculations of both dipole components within the same theoretical framework have been presented in this context. In this work we present a newly developed large scale shell model approach that allows natural and non-natural parity states to be treated on the same footing. The calculations are performed in a large sd-pf-gds model space, allowing for 1p-1h excitations on top of the full pf-shell configuration mixing. We restrict the discussion to the magnetic part of the dipole strength; however, we calculate for the first time the magnetic dipole strength between states built of excitations going beyond the classical shell model spaces. Our results corroborate previous findings for the M1 enhancement for the natural parity states, while we observe no enhancement for the 1p-1h contributions. We also discuss in more detail the effects of configuration mixing limitations on the enhancement coming out of shell model calculations.

  18. Symmetry-guided large-scale shell-model theory

    NASA Astrophysics Data System (ADS)

    Launey, Kristina D.; Dytrych, Tomas; Draayer, Jerry P.

    2016-07-01

    In this review, we present a symmetry-guided strategy that utilizes exact as well as partial symmetries for enabling a deeper understanding of and advancing ab initio studies for determining the microscopic structure of atomic nuclei. These symmetries expose physically relevant degrees of freedom that, for large-scale calculations with QCD-inspired interactions, allow the model space size to be reduced through a very structured selection of the basis states to physically relevant subspaces. This can guide explorations of simple patterns in nuclei and how they emerge from first principles, as well as extensions of the theory beyond current limitations toward heavier nuclei and larger model spaces. This is illustrated for the ab initio symmetry-adapted no-core shell model (SA-NCSM) and two significant underlying symmetries, the symplectic Sp(3, R) group and its deformation-related SU(3) subgroup. We review the broad scope of nuclei where these symmetries have been found to play a key role: from the light p-shell systems, such as 6Li, 8B, 8Be, 12C, and 16O, and sd-shell nuclei exemplified by 20Ne, based on first-principle explorations; through the Hoyle state in 12C and enhanced collectivity in intermediate-mass nuclei, within a no-core shell-model perspective; up to strongly deformed species of the rare-earth and actinide regions, as investigated in earlier studies. A complementary picture, driven by symmetries dual to Sp(3, R), is also discussed. We briefly review symmetry-guided techniques that prove useful in various nuclear-theory models, such as the Elliott model, ab initio SA-NCSM, symplectic model, pseudo-SU(3) and pseudo-symplectic models, ab initio hyperspherical harmonics method, ab initio lattice effective field theory, exact pairing-plus-shell model approaches, and cluster models, including the resonating-group method. Important implications of these approaches that have deepened our understanding of emergent phenomena in nuclei, such as enhanced

  19. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach tera-bytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts existing research which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.

  20. Noise transmission characteristics of a large scale composite fuselage model

    NASA Technical Reports Server (NTRS)

    Beyer, Todd B.; Silcox, Richard J.

    1990-01-01

    Results from an experimental test undertaken to study the basic noise transmission characteristics of a realistic, built-up composite fuselage model are presented. The floor-equipped stiffened composite cylinder was exposed to a number of different exterior noise source configurations in a large anechoic chamber. These exterior source configurations included two point sources located in the same plane on opposite sides of the cylinder, a single point source and a propeller simulator. The results indicate that the interior source field is affected strongly by exterior noise source phasing. Sidewall treatment is seen to reduce the overall interior sound pressure levels and dampen dominant acoustic resonances so that other acoustic modes can affect interior noise distribution.

  1. Do land parameters matter in large-scale hydrological modelling?

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Lukas; Seneviratne, Sonia I.

    2013-04-01

    Many of the most pressing issues in large-scale hydrology are concerned with predicting hydrological variability at ungauged locations. However, current-generation hydrological and land surface models that are used for their estimation suffer from large uncertainties. These models rely on mathematical approximations of the physical system as well as on mapped values of land parameters (e.g. topography, soil types, land cover) to predict hydrological variables (e.g. evapotranspiration, soil moisture, stream flow) as a function of atmospheric forcing (e.g. precipitation, temperature, humidity). Despite considerable progress in recent years, it remains unclear whether better estimates of land parameters can improve predictions, or whether a refinement of model physics is necessary. To approach this question we suggest scrutinizing our perception of hydrological systems by confronting it with the radical assumption that hydrological variability at any location in space depends on past and present atmospheric forcing only, and not on location-specific land parameters. This so-called "Constant Land Parameter Hypothesis" (CLPH) assumes that variables like runoff can be predicted without taking location-specific factors such as topography or soil types into account. We demonstrate, using a modern statistical tool, that monthly runoff in Europe can be skilfully estimated using atmospheric forcing alone, without accounting for locally varying land parameters. The resulting runoff estimates are used to benchmark state-of-the-art process models. These are found to have inferior performance, despite their explicit process representation, which accounts for locally varying land parameters. This suggests that progress in the theory of hydrological systems is likely to yield larger improvements in model performance than more precise land parameter estimates. The results also question the current modelling paradigm that is dominated by the attempt to account for locally varying land
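
    The sketch below illustrates the spirit of the CLPH benchmark under stated assumptions: a generic statistical learner (a random forest here, which is an assumption; the abstract does not name the tool) is trained on synthetic monthly forcing only and evaluated on held-out catchments to mimic prediction at ungauged locations. A real setup would also include lagged forcing to represent the dependence on past atmospheric conditions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic monthly data for several catchments: atmospheric forcing only, no land parameters.
n_catchments, n_months = 30, 240
catchment_id = np.repeat(np.arange(n_catchments), n_months)
precip = rng.gamma(2.0, 40.0, size=catchment_id.size)                    # mm/month
temp = 10 + 12 * np.sin(2 * np.pi * np.tile(np.arange(n_months), n_catchments) / 12)
# "True" runoff: a nonlinear function of forcing plus noise (purely illustrative).
runoff = 0.4 * precip * np.exp(-0.03 * np.clip(temp, 0, None)) + rng.normal(0, 5, catchment_id.size)

X = np.column_stack([precip, temp])
y = runoff

# Leave-whole-catchments-out cross-validation mimics prediction at "ungauged" locations.
cv = GroupKFold(n_splits=5)
scores = []
for train, test in cv.split(X, y, groups=catchment_id):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train], y[train])
    scores.append(r2_score(y[test], model.predict(X[test])))
print("mean R2 at held-out catchments:", np.mean(scores))
```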

  2. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
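
    As a toy illustration of Cournot competition (not the mixed-LCP network formulation used in the dissertation), the sketch below computes a single-node Cournot equilibrium by best-response iteration with linear inverse demand and quadratic generation costs; all parameter values are invented.

```python
import numpy as np

# Inverse demand at one node: p(Q) = a - b * Q, with Q the total generation.
a, b = 100.0, 0.5
# Marginal cost of each Cournot generator: c_i + d_i * q_i  (illustrative values).
c = np.array([10.0, 15.0, 20.0])
d = np.array([0.10, 0.05, 0.20])

def best_response(q, i):
    """Profit-maximizing output of firm i, holding the other firms fixed.
    From d/dq_i [ (a - b*(q_i + Q_-i))*q_i - c_i*q_i - 0.5*d_i*q_i^2 ] = 0."""
    q_others = q.sum() - q[i]
    return max(0.0, (a - c[i] - b * q_others) / (2 * b + d[i]))

q = np.zeros(3)
for _ in range(500):            # Gauss-Seidel fixed-point iteration to equilibrium
    for i in range(3):
        q[i] = best_response(q, i)

price = a - b * q.sum()
print("Cournot quantities:", np.round(q, 2), " price:", round(price, 2))
```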

  3. Graph theoretic modeling of large-scale semantic networks.

    PubMed

    Bales, Michael E; Johnson, Stephen B

    2006-08-01

    During the past several years, social network analysis methods have been used to model many complex real-world phenomena, including social networks, transportation networks, and the Internet. Graph theoretic methods, based on an elegant representation of entities and relationships, have been used in computational biology to study biological networks; however, they have not yet been widely adopted by the greater informatics community. The graphs produced are generally large, sparse, and complex, and share common global topological properties. In this review of research (1998-2005) on large-scale semantic networks, we used a tailored search strategy to identify articles involving both a graph theoretic perspective and semantic information. Thirty-one relevant articles were retrieved. The majority (28, 90.3%) involved an investigation of a real-world network. These included corpora, thesauri, dictionaries, large computer programs, biological neuronal networks, word association networks, and files on the Internet. Twenty-two of the 28 (78.6%) involved a graph composed of words or phrases. Fifteen of the 28 (53.6%) mentioned evidence of small-world characteristics in the network investigated. Eleven (39.3%) reported a scale-free topology, which tends to have a similar appearance when examined at varying scales. The results of this review indicate that networks generated from natural language have topological properties common to other natural phenomena. It has not yet been determined whether artificial human-curated terminology systems in biomedicine share these properties. Large network analysis methods have potential application in a variety of areas of informatics, such as in development of controlled vocabularies and for characterizing a given domain.
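
    A minimal sketch of the kind of topological statistics these studies report (clustering, average path length, and a heavy-tailed degree distribution), computed with networkx on a synthetic scale-free graph standing in for a semantic network; the graph, its size, and the crude power-law fit are illustrative assumptions.

```python
import networkx as nx
import numpy as np

# Synthetic stand-in for a semantic network (e.g., a word-association graph).
G = nx.barabasi_albert_graph(n=2000, m=3, seed=1)

# Small-world indicators: high clustering and short average path length
# relative to a random graph of the same size and density.
C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)          # G is connected by construction
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
C_rand = nx.average_clustering(R)

# Scale-free indicator: rough log-log slope of the degree distribution.
degrees = np.array([d for _, d in G.degree()])
vals, counts = np.unique(degrees, return_counts=True)
slope = np.polyfit(np.log(vals), np.log(counts / counts.sum()), 1)[0]

print(f"clustering C={C:.3f} (random graph {C_rand:.3f}), avg path length L={L:.2f}")
print(f"approximate degree-distribution exponent: {slope:.2f}")
```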

  4. Modeling emergent large-scale structures of barchan dune fields

    NASA Astrophysics Data System (ADS)

    Worman, S. L.; Murray, A. B.; Littlewood, R.; Andreotti, B.; Claudin, P.

    2013-10-01

    In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work and from field observations: (1) Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; (2) when dunes become sufficiently large, small dunes are born on their downwind sides ('calving'); and (3) when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first-order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
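
    A heavily simplified 1-D sketch of the three interaction rules (flux exchange, calving, and collision merging); the update rules, thresholds, and migration law below are illustrative assumptions, not the authors' parameterization.

```python
import numpy as np

rng = np.random.default_rng(2)

# State: each dune has a downwind position x and a volume v (arbitrary units).
x = np.sort(rng.uniform(0, 100, 80))
v = rng.uniform(1.0, 5.0, 80)

Q_IN, DT = 0.5, 1.0           # sand flux fed at the upwind boundary; time step
CALVE_V, MERGE_DX = 8.0, 0.3  # illustrative thresholds for calving and collision

def step(x, v):
    # Rule 1: dunes leak flux downwind; each dune captures the flux leaked by its upwind neighbor.
    leak = 0.05 * v
    inflow = np.concatenate(([Q_IN], leak[:-1]))
    v = v + (inflow - leak) * DT
    # Smaller dunes migrate faster (speed ~ 1/size), so the field can rearrange.
    x = x + DT * 2.0 / np.maximum(v, 0.1)
    # Rule 2: calving - an oversized dune sheds a small dune on its downwind side.
    calve = v > CALVE_V
    if calve.any():
        x = np.concatenate([x, x[calve] + 0.5])
        v = np.concatenate([v - calve * 2.0, np.full(calve.sum(), 2.0)])
    # Rule 3: merging - dunes that collide (come within MERGE_DX) coalesce.
    order = np.argsort(x)
    x, v = x[order], v[order]
    keep = np.ones(len(x), dtype=bool)
    for i in range(len(x) - 1):
        if keep[i] and x[i + 1] - x[i] < MERGE_DX:
            v[i + 1] += v[i]
            keep[i] = False
    return x[keep], v[keep]

for _ in range(500):
    x, v = step(x, v)
print(f"{len(v)} dunes, mean volume {v.mean():.2f}")
```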

  5. Modeling emergent large-scale structures of barchan dune fields

    NASA Astrophysics Data System (ADS)

    Worman, S. L.; Murray, A.; Littlewood, R. C.; Andreotti, B.; Claudin, P.

    2013-12-01

    In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work, and from field observations: Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; when dunes become sufficiently large, small dunes are born on their downwind sides ('calving'); and when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.

  6. Gale: Large Scale Tectonics Modelling With Free Software

    NASA Astrophysics Data System (ADS)

    Landry, W.; Hodkinson, L.

    2007-12-01

    In response to requests from the long timescale tectonics community, we have developed Gale, a parallel 2D and 3D finite element code. Gale's focus is on orogenesis, rifting, and subduction, although it is flexible enough to be applied to such diverse problems as coronae formation on Venus and 3D evolution of crustal fault systems. Gale solves the Stokes and heat transport equations with a large selection of viscous and plastic rheologies. Material properties are tracked using particles, allowing Gale to accurately track interfaces and simulate large deformations. In addition, Gale has a true free surface and a simple programming interface that allows you to plug in your own surface process model. Gale supports a wide variety of boundary conditions, including inflow/outflow, fixed, stress, and static and dynamic friction. Gale has been extensively tested and validated and is exhaustively documented with a 100+ page manual. Gale has been run on everything from laptops to 1000+ processor clusters. Source and prebuilt binaries are freely available at the CIG website. We will discuss Gale's capabilities, present benchmark results, and demonstrate solutions to realistic problems.

  7. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    NASA Astrophysics Data System (ADS)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment

  8. A new mixed-mode fracture criterion for large-scale lattice models

    NASA Astrophysics Data System (ADS)

    Sachau, T.; Koehn, D.

    2014-01-01

    Reasonable fracture criteria are crucial for the modeling of dynamic failure in computational lattice models. Successful criteria exist for experiments on the micro- and on the mesoscale, which are based on the stress that a bond experiences. In this paper, we test the applicability of these failure criteria to large-scale models, where gravity plays an important role in addition to the externally applied deformation. Brittle structures resulting from these criteria do not resemble the outcome predicted by fracture mechanics and by geological observations. For this reason, we derive an elliptical fracture criterion that is based on the strain energy stored in a bond. Simulations using the new criterion result in realistic structures. Another advantage of this fracture model is that it can be combined with classical geological material parameters: the tensile strength σ0 and the shear cohesion τ0. The proposed fracture criterion is much more robust with regard to numerical strain increments than fracture criteria based on stress (e.g., Drucker-Prager). While we tested the fracture model only for large-scale structures, there is strong reason to believe that the model is equally applicable to lattice simulations on the micro- and on the mesoscale.
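
    A minimal sketch of an elliptical mixed-mode failure envelope combining a tensile strength σ0 and a shear cohesion τ0; the exact strain-energy-based formulation of the paper may differ, and the stress values, as well as restricting mode I to tensile normal stress, are assumptions made for illustration.

```python
def bond_fails(sigma_n, tau, sigma_0=10e6, tau_0=25e6):
    """Elliptical mixed-mode failure check for a lattice bond.

    sigma_n : normal stress on the bond (Pa, tension positive)
    tau     : shear stress on the bond (Pa)
    The bond breaks when the combined loading leaves the ellipse
    (sigma_n/sigma_0)^2 + (tau/tau_0)^2 >= 1 (only tensile normal
    stress contributes to the mode-I term here).
    """
    mode_i = max(sigma_n, 0.0) / sigma_0
    mode_ii = abs(tau) / tau_0
    return mode_i**2 + mode_ii**2 >= 1.0

# Example: a bond under combined tension and shear.
print(bond_fails(sigma_n=6e6, tau=22e6))   # -> True for this loading
```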

  9. Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn

    NASA Astrophysics Data System (ADS)

    Gargano, A.; Coraggio, L.; Itaco, N.

    2017-09-01

    This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which aims to reduce the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.

  10. Functional models for large-scale gene regulation networks: realism and fiction.

    PubMed

    Lagomarsino, Marco Cosentino; Bassetti, Bruno; Castellani, Gastone; Remondini, Daniel

    2009-04-01

    High-throughput experiments are shedding light on the topology of large regulatory networks and at the same time their functional states, namely the states of activation of the nodes (for example transcript or protein levels) in different conditions, times, environments. We now possess a certain amount of information about these two levels of description, stored in libraries, databases and ontologies. A current challenge is to bridge the gap between topology and function, i.e. developing quantitative models aimed at characterizing the expression patterns of large sets of genes. However, approaches that work well for small networks become impossible to master at large scales, mainly because parameters proliferate. In this review we discuss the state of the art of large-scale functional network models, addressing the issue of what can be considered as "realistic" and what the main limitations may be. We also show some directions for future work, trying to set the goals that future models should try to achieve. Finally, we will emphasize the possible benefits in the understanding of biological mechanisms underlying complex multifactorial diseases, and in the development of novel strategies for the description and the treatment of such pathologies.

  11. Modeling the spreading of large-scale wildland fires

    Treesearch

    Mohamed Drissi

    2015-01-01

    The objective of the present study is twofold. First, the last developments and validation results of a hybrid model designed to simulate fire patterns in heterogeneous landscapes are presented. The model combines the features of a stochastic small-world network model with those of a deterministic semi-physical model of the interaction between burning and non-burning...

  12. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  13. Large scale modelling of catastrophic floods in Italy

    NASA Astrophysics Data System (ADS)

    Azemar, Frédéric; Nicótina, Ludovico; Sassi, Maximiliano; Savina, Maurizio; Hilberts, Arno

    2017-04-01

    The RMS European Flood HD model® is a suite of country-scale flood catastrophe models covering 13 countries throughout continental Europe and the UK. The models are developed with the goal of supporting risk assessment analyses for the insurance industry. Within this framework, RMS is developing a hydrologic and inundation model for Italy. The model aims at reproducing the hydrologic and hydraulic properties across the domain through a modeling chain. A semi-distributed hydrologic model that allows capturing the spatial variability of the runoff formation processes is coupled with a one-dimensional river routing algorithm and a two-dimensional (depth averaged) inundation model. This model setup allows capturing the flood risk from both pluvial (overland flow) and fluvial flooding. Here we describe the calibration and validation methodologies for this modelling suite applied to the Italian river basins. The variability that characterizes the domain (in terms of meteorology, topography and hydrologic regimes) requires a modeling approach able to represent a broad range of meteo-hydrologic regimes. The calibration of the rainfall-runoff and river routing models is performed by means of a genetic algorithm that identifies the set of best performing parameters within the search space over the last 50 years. We first establish the quality of the calibration parameters on the full hydrologic balance and on individual discharge peaks by comparing extreme statistics to observations over the calibration period at several stations. The model is then used to analyze the major floods in the country; we discuss the different meteorological setups leading to the historical events and the physical mechanisms that induced these floods. We can thus assess the performance of RMS' hydrological model in view of the physical mechanisms leading to floods and highlight the main controls on flood risk modelling throughout the country. The model's ability to accurately simulate antecedent

  14. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1978-01-01

    The feasibility of long-range weather prediction through the use of global general circulation models (GCMs) was investigated. A climate model was developed to simulate the monthly mean state of the atmosphere from real global initial data at the beginning of the month. The model contains the same dynamic and physical ingredients as most numerical weather prediction models and GCMs. The model generates a one-day global simulation on the 8 x 10 grid in four minutes (on an IBM 360/95 computer), so that a 30 day forecast can be executed in two hours. The high speed of the model is achieved mainly at the price of its coarse resolution, which requires certain parameterizations of surface boundary conditions, as well as inherent filtering of smaller-scale features of the initial state.

  15. Investigation of models for large scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1982-01-01

    Long-range numerical prediction and climate simulation experiments with various global atmospheric general circulation models are reported. A chronological listing of the titles of all publications and technical reports already distributed is presented together with an account of the most recent research. Several reports on a series of perpetual January climate simulations with the GISS coarse mesh climate model are listed. A set of perpetual July climate simulations with the same model is presented and the results are described.

  16. A robust and quick method to validate large scale flood inundation modelling with SAR remote sensing

    NASA Astrophysics Data System (ADS)

    Schumann, G. J.; Neal, J. C.; Bates, P. D.

    2011-12-01

    With flood frequency likely to increase as a result of altered precipitation patterns triggered by climate change, there is a growing demand for more data and, at the same time, improved flood inundation modeling. The aim is to develop more reliable flood forecasting systems over large scales that account for errors and inconsistencies in observations, modeling, and output. Over the last few decades, there have been major advances in the fields of remote sensing, particularly microwave remote sensing, and flood inundation modeling. At the same time both research communities are attempting to roll out their products on a continental to global scale. In a first attempt to harmonize both research efforts on a very large scale, a two-dimensional flood model has been built for the Niger Inland Delta basin in northwest Africa on a 700 km reach of the Niger River, an area similar to the size of the UK. This scale demands a different approach to traditional 2D model structuring and we have implemented a simplified version of the shallow water equations as developed in [1] and complemented this formulation with a sub-grid structure for simulating flows in a channel much smaller than the actual grid resolution of the model. This combined formulation allows flood flows to be modeled across two dimensions at efficient computational speeds without losing channel resolution when moving to coarse model grids. Using gaged daily flows, the model was applied to simulate the wetting and drying of the Inland Delta floodplain for 7 years from 2002 to 2008, taking less than 30 minutes to simulate 365 days at 1 km resolution. In these rather data-poor regions of the world and at this type of scale, verification of flood modeling is realistically only feasible with wide swath or global mode remotely sensed imagery. Validation of the Niger model was carried out using sequential global mode SAR images over the period 2006/7. This scale not only requires different types of models and
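
    A small sketch of the standard binary wet/dry fit score often used when validating inundation extent against SAR-derived flood maps (a critical-success-index-style measure); the grids below are synthetic stand-ins and the depth threshold is an assumption.

```python
import numpy as np

def flood_fit_score(modelled_depth, sar_flood_mask, depth_threshold=0.1):
    """Critical success index F = hits / (hits + misses + false alarms)
    between the modelled flood extent and a SAR-derived wet/dry mask."""
    model_wet = modelled_depth > depth_threshold       # metres of water -> wet/dry
    hits = np.logical_and(model_wet, sar_flood_mask).sum()
    misses = np.logical_and(~model_wet, sar_flood_mask).sum()
    false_alarms = np.logical_and(model_wet, ~sar_flood_mask).sum()
    return hits / float(hits + misses + false_alarms)

# Synthetic 1 km grids standing in for the model output and the SAR classification.
rng = np.random.default_rng(0)
depth = rng.exponential(0.2, size=(500, 700))
sar_mask = depth + rng.normal(0, 0.05, depth.shape) > 0.1
print("fit score:", round(flood_fit_score(depth, sar_mask), 3))
```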

  17. A Large Scale, High Resolution Agent-Based Insurgency Model

    DTIC Science & Technology

    2013-09-30

    2007). HSCB Models can be employed for simulating mission scenarios, determining optimal strategies for disrupting terrorist networks, or training and...

  18. Oscillations and Synchrony in Large-scale Cortical Network Models

    DTIC Science & Technology

    2008-06-17

    Intrinsic neuronal and circuit properties control the responses of large ensembles of neurons by creating spatiotemporal patterns of ... map-based models) to simulate the intrinsic dynamics of biological neurons. These phenomenological models were designed to capture the main response ... function of parameters that affect synaptic interactions and intrinsic states of the neurons.

  19. Coordinated reset stimulation in a large-scale model of the STN-GPe circuit

    PubMed Central

    Ebert, Martin; Hauptmann, Christian; Tass, Peter A.

    2014-01-01

    Synchronization of populations of neurons is a hallmark of several brain diseases. Coordinated reset (CR) stimulation is a model-based stimulation technique which specifically counteracts abnormal synchrony by desynchronization. Electrical CR stimulation, e.g., for the treatment of Parkinson's disease (PD), is administered via depth electrodes. In order to get a deeper understanding of this technique, we extended the top-down approach of previous studies and constructed a large-scale computational model of the respective brain areas. Furthermore, we took into account the spatial anatomical properties of the simulated brain structures and incorporated a detailed numerical representation of 2 × 10^4 simulated neurons. We simulated the subthalamic nucleus (STN) and the globus pallidus externus (GPe). Connections within the STN were governed by spike-timing dependent plasticity (STDP). In this way, we modeled the physiological and pathological activity of the considered brain structures. In particular, we investigated how plasticity could be exploited and how the model could be shifted from strongly synchronized (pathological) activity to strongly desynchronized (healthy) activity of the neuronal populations via CR stimulation of the STN neurons. Furthermore, we investigated the impact of specific stimulation parameters, especially the electrode position, on the stimulation outcome. Our model provides a step forward toward a biophysically realistic model of the brain areas relevant to the emergence of pathological neuronal activity in PD. Furthermore, our model constitutes a test bench for the optimization of both stimulation parameters and novel electrode geometries for efficient CR stimulation. PMID:25505882
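
    A minimal sketch of the coordinated reset stimulus pattern itself: brief high-frequency bursts delivered sequentially through the electrode contacts within each CR cycle, with ON/OFF cycling on top. All timing parameters are illustrative, and the randomized contact ordering is an assumption about one common CR variant, not necessarily the protocol used in the paper.

```python
import numpy as np

def cr_stimulus(n_contacts=4, cr_freq=5.0, burst_freq=130.0,
                pulses_per_burst=3, on_cycles=3, off_cycles=2,
                n_cycles=20):
    """Coordinated reset pattern: within each CR cycle (1/cr_freq seconds) the
    contacts are activated one after another, each delivering a short
    high-frequency burst; m:n ON/OFF cycling is applied on top."""
    t_cycle = 1.0 / cr_freq
    times = [[] for _ in range(n_contacts)]
    contact_order = np.arange(n_contacts)
    for c in range(n_cycles):
        if (c % (on_cycles + off_cycles)) >= on_cycles:
            continue                              # OFF epoch: no stimulation
        np.random.shuffle(contact_order)          # randomized contact order each cycle
        for slot, contact in enumerate(contact_order):
            burst_start = c * t_cycle + slot * t_cycle / n_contacts
            for p in range(pulses_per_burst):
                times[contact].append(burst_start + p / burst_freq)
    return times                                  # pulse times (s) per contact

pulse_times = cr_stimulus()
for k, pt in enumerate(pulse_times):
    print(f"contact {k}: {len(pt)} pulses, first at {pt[0]:.3f} s")
```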

  20. Large-scale measurement and modeling of backbone Internet traffic

    NASA Astrophysics Data System (ADS)

    Roughan, Matthew; Gottlieb, Joel

    2002-07-01

    There is a brewing controversy in the traffic modeling community concerning how to model backbone traffic. The fundamental work on self-similarity in data traffic appears to be contradicted by recent findings that suggest that backbone traffic is smooth. The traffic analysis work to date has focused on high-quality but limited-scope packet trace measurements; this limits its applicability to high-speed backbone traffic. This paper uses more than one year's worth of SNMP traffic data covering an entire Tier 1 ISP backbone to address the question of how backbone network traffic should be modeled. Although the limitations of SNMP measurements do not permit us to comment on the fine timescale behavior of the traffic, careful analysis of the data suggests that irrespective of the variation at fine timescales, we can construct a simple traffic model that captures key features of the observed traffic. Furthermore, the model's parameters are measurable using existing network infrastructure, making this model practical in a present-day operational network. In addition to its practicality, the model verifies basic statistical multiplexing results, and thus sheds deep insight into how smooth backbone traffic really is.

  1. Multilevel method for modeling large-scale networks.

    SciTech Connect

    Safro, I. M.

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data when they have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  2. Large-scale spherical fixed bed reactors: Modeling and optimization

    SciTech Connect

    Hartig, F.; Keil, F.J. )

    1993-03-01

    Iterative dynamic programming (IDP) according to Luus was used for the optimization of the methanol production in a cascade of spherical reactors. The system of three spherical reactors was compared to an externally cooled tubular reactor and a quench reactor. The reactors were modeled by the pseudohomogeneous and heterogeneous approach. The effectiveness factors of the heterogeneous model were calculated by the dusty gas model. The IDP method was compared with sequential quadratic programming (SQP) and the Box complex method. The optimized distributions of catalyst volume with the pseudohomogeneous and heterogeneous model lead to different results. The IDP method finds the global optimum with high probability. A combination of IDP and SQP provides a reliable optimization procedure that needs minimum computing time.

  3. Geometric algorithms for electromagnetic modeling of large scale structures

    NASA Astrophysics Data System (ADS)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multipole Methods (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  4. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
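
    A minimal sketch of a null-event (rejection-type) stochastic step for a toy reversible binding rule A + B <-> AB, not the DYNSTOC/BNGL machinery itself; the rate constants are invented, and the 1/2 channel-selection probability is simply absorbed into those illustrative constants rather than handled exactly.

```python
import random

# Toy rule set: A + B -> AB (kon), AB -> A + B (koff). Population-level state.
kon, koff = 5e-4, 0.05     # illustrative per-step propensity constants
A, B, AB = 200, 200, 0
dt, t_end = 0.01, 50.0

t = 0.0
while t < t_end:
    t += dt
    # Pick a candidate event at random; accept it with a probability proportional
    # to its rate, otherwise record a "null event" and advance time anyway.
    if random.random() < 0.5:
        p_bind = kon * A * B * dt
        if random.random() < min(1.0, p_bind) and A > 0 and B > 0:
            A, B, AB = A - 1, B - 1, AB + 1
    else:
        p_unbind = koff * AB * dt
        if random.random() < min(1.0, p_unbind) and AB > 0:
            A, B, AB = A + 1, B + 1, AB - 1

print(f"t={t:.1f}: A={A}, B={B}, AB={AB}")
```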

  5. Modeling and simulation of large scale stirred tank

    NASA Astrophysics Data System (ADS)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. There were seven numerical models constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants that had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina, have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers; a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham Plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the

  6. Sediment Yield Modeling in a Large Scale Drainage Basin

    NASA Astrophysics Data System (ADS)

    Ali, K.; de Boer, D. H.

    2009-05-01

    This paper presents the findings of spatially distributed sediment yield modeling in the upper Indus River basin. Spatial erosion rates calculated by using the Thornes model at 1-kilometre spatial resolution and monthly time scale indicate that 87 % of the annual gross erosion takes place in the three summer months. The model predicts a total annual erosion rate of 868 million tons, which is approximately 4.5 times the long-term observed annual sediment yield of the basin. Sediment delivery ratios (SDR) are hypothesized to be a function of the travel time of surface runoff from catchment cells to the nearest downstream channel. Model results indicate that higher delivery ratios (SDR > 0.6) are found in 18 % of the basin area, mostly located in the high-relief sub-basins and in the areas around the Nanga Parbat Massif. The sediment delivery ratio is lower than 0.2 in 70 % of the basin area, predominantly in the low-relief sub-basins like the Shyok on the Tibetan Plateau. The predicted annual basin sediment yield is 244 million tons which compares reasonably to the measured value of 192.5 million tons. The average annual specific sediment yield in the basin is predicted as 1110 tons per square kilometre. Model evaluation based on accuracy statistics shows very good to satisfactory performance ratings for predicted monthly basin sediment yields and for mean annual sediment yields of 17 sub-basins. This modeling framework mainly requires global datasets, and hence can be used to predict erosion and sediment yield in other ungauged drainage basins.
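
    The abstract states only that the SDR is hypothesized to depend on the travel time of surface runoff; the sketch below assumes an exponential-decay form SDR = exp(-beta * t), which is a common choice but an assumption here, and uses synthetic erosion and travel-time fields in place of the basin data.

```python
import numpy as np

def sediment_yield(gross_erosion, travel_time_hours, beta=0.3):
    """Cell-wise sediment yield from gross erosion and an assumed
    exponential travel-time decay: SDR = exp(-beta * travel_time)."""
    sdr = np.exp(-beta * travel_time_hours)
    return gross_erosion * sdr, sdr

# Synthetic 1-km cells: gross erosion (t/km^2) and travel time to the nearest channel (h).
rng = np.random.default_rng(1)
erosion = rng.gamma(2.0, 500.0, size=10000)
travel_time = rng.exponential(3.0, size=10000)

yield_per_cell, sdr = sediment_yield(erosion, travel_time)
print("basin-average SDR (yield/erosion):", round(yield_per_cell.sum() / erosion.sum(), 2))
print("cells with SDR > 0.6:", round((sdr > 0.6).mean() * 100), "%")
```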

  7. Large-Scale Modeling of Wordform Learning and Representation

    ERIC Educational Resources Information Center

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn…

  8. Modelling large scale human activity in San Francisco

    NASA Astrophysics Data System (ADS)

    Gonzalez, Marta

    2010-03-01

    A diverse group of people with a wide variety of schedules, activities, and travel needs composes our cities nowadays. This represents a big challenge for modeling travel behaviors in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, spreading of viruses, or measuring human exposure to air pollutants. The traditional means to obtain knowledge about travel behavior is limited to surveys on travel journeys. The obtained information is based on questionnaires that are usually costly to implement, have intrinsic limitations in covering large numbers of individuals, and suffer from some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size that contains information on the frequency of visits to different locations. Additionally, we use a complementary data set given by smart subway fare cards, offering us information about the exact time of each passenger entering or exiting a subway station and the coordinates of that station. This allows us to uncover the temporal aspects of the mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns in each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travels that incorporates different aspects observed empirically.

  10. Large Scale Finite Element Modeling Using Scalable Parallel Processing

    NASA Technical Reports Server (NTRS)

    Cwik, T.; Katz, D.; Zuffada, C.; Jamnejad, V.

    1995-01-01

    An iterative solver for use with finite element codes was developed for the Cray T3D massively parallel processor at the Jet Propulsion Laboratory. Finite element modeling is useful for simulating scattered or radiated electromagnetic fields from complex three-dimensional objects with geometry variations smaller than an electrical wavelength.

  11. Parameterization of Fire Injection Height in Large Scale Transport Model

    NASA Astrophysics Data System (ADS)

    Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.

    2012-12-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP), which measures active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here, in our approach, the Freitas model is modified; in particular, we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e., avoiding large fire clusters where individual plumes might interact), (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e., to minimize the effect of the time period needed for the plume to

  12. Parameterization of Fire Injection Height in Large Scale Transport Model

    NASA Astrophysics Data System (ADS)

    Paugam, r.; Wooster, m.; Freitas, s.; Gonzi, s.; Palmer, p.

    2012-04-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP), which measures active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modelled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here, in our approach, the Freitas model is modified. Major modifications are implemented in its initialisation module: (i) CHF and the Active Fire area are directly forced from FRP data derived from a modified version of the Dozier algorithm applied to the MOD12 product, and (ii) a new module for the buoyancy flux calculation is implemented instead of the original module based on the Morton, Taylor and Turner equation. Furthermore, the dynamical core of the plume model is also modified with a new entrainment scheme inspired by the latest results from shallow convection parameterization. Optimization and validation of this new version of the Freitas PRM is based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set is (i) built up to keep only fires where plume height and FRP can be easily linked (i.e., avoiding large fire clusters where individual plumes might interact) and (ii) split per fire land-cover type to optimize the constant of the buoyancy flux module and the entrainment scheme for different fire regimes. Results show that the new PRM is

  13. GIS for large-scale watershed observational data model

    NASA Astrophysics Data System (ADS)

    Patino-Gomez, Carlos

    Because integrated management of a river basin requires the development of models that are used for many purposes, e.g., to assess risks and possible mitigation of droughts and floods, manage water rights, assess water quality, and simply to understand the hydrology of the basin, the development of a relational database from which models can access the various data needed to describe the systems being modeled is fundamental. In order for this concept to be useful and widely applicable, however, it must have a standard design. The recently developed ArcHydro data model facilitates the organization of data according to the "basin" principle and allows access to hydrologic information by models. The development of a basin-scale relational database for the Rio Grande/Bravo basin implemented in a Geographic Information System is one of the contributions of this research. This geodatabase represents the first major attempt to establish a more complete understanding of the basin as a whole, including spatial and temporal information obtained from the United States of America and Mexico. Difficulties in processing raster datasets over large regions are studied in this research. One of the most important contributions is the application of a Raster-Network Regionalization technique, which utilizes raster-based analysis at the subregional scale in an efficient manner and combines the resulting subregional vector datasets into a regional database. Another important contribution of this research is focused on implementing a robust structure for handling huge temporal data sets related to monitoring points such as hydrometric and climatic stations, reservoir inlets and outlets, water rights, etc. For the Rio Grande study area, the ArcHydro format is applied to the historical information collected in order to include and relate these time series to the monitoring points in the geodatabase. Its standard time series format is changed to include a relationship to the agency from

  14. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    SciTech Connect

    Ghattas, Omar

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  15. Complex nuclear spectra in a large scale shell model approach

    NASA Astrophysics Data System (ADS)

    Bianco, D.; Andreozzi, F.; Lo Iudice, N.; Porrino, A.; Knapp, F.

    2012-05-01

    We report on a shell model implementation of an iterative matrix diagonalization algorithm in the spin uncoupled scheme. A new importance sampling is adopted which brings the eigenvalues to convergence with about 10% of the basis states. The method is shown to be able to provide an exhaustive description of the low-energy spectroscopic properties of 132-134Xe isotopes and of the spectrum of 130Xe.
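
    A generic sketch of importance-truncated diagonalization (not the authors' specific sampling rule): basis states are ranked by a first-order perturbative amplitude relative to the lowest unperturbed state and the Hamiltonian is diagonalized in the retained ~10% subspace. The Hamiltonian below is a random stand-in, not a shell-model matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "shell-model" Hamiltonian: dense symmetric matrix with a dominant diagonal.
dim = 1500
H = rng.normal(0, 0.05, (dim, dim))
H = (H + H.T) / 2
np.fill_diagonal(H, np.sort(rng.uniform(0, 50, dim)))

# Importance of basis state |i> relative to the lowest unperturbed state |0>:
# first-order perturbative amplitude |H_i0 / (H_00 - H_ii)|.
amp = np.abs(H[1:, 0] / (H[0, 0] - np.diagonal(H)[1:]))
order = np.argsort(amp)[::-1]

keep = np.concatenate(([0], 1 + order[: dim // 10]))     # retain ~10% of the basis
E_sampled = np.linalg.eigvalsh(H[np.ix_(keep, keep)])[0]
E_full = np.linalg.eigvalsh(H)[0]
print(f"ground-state energy: full {E_full:.4f}, sampled {E_sampled:.4f}")
```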

  16. Multistability in Large Scale Models of Brain Activity

    PubMed Central

    Golos, Mathieu; Jirsa, Viktor; Daucé, Emmanuel

    2015-01-01

    Noise-driven exploration of a brain network’s dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network’s capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain’s dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system’s attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low-temperature condition, with a larger number of attractors than previously reported. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. Those different modes share similarity with the fMRI independent components observed in the “resting state” condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors. PMID:26709852
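
    A minimal sketch of the attractor-counting procedure with a deterministic graded-response Hopfield network: run the dynamics from many uniformly sampled initial conditions and count the distinct fixed points reached. The coupling matrix here is synthetic (the study uses human connectome matrices), and the parameters, thresholds, and attractor fingerprinting are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic symmetric "connectome" standing in for a tractography-derived matrix.
N = 66
W = rng.lognormal(mean=-2.0, sigma=1.0, size=(N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1).mean()                 # normalize the overall coupling strength
theta = rng.normal(0.5, 0.1, N)           # heterogeneous activation thresholds

def run_to_fixed_point(x0, beta=10.0, dt=0.1, steps=2000):
    """Deterministic graded-response Hopfield dynamics:
    dx/dt = -x + sigmoid(beta * (W @ x - theta))."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * (-x + 1.0 / (1.0 + np.exp(-beta * (W @ x - theta))))
    return x

# Uniformly sample initial conditions and count the distinct fixed points reached.
attractors = set()
for _ in range(100):
    x_final = run_to_fixed_point(rng.uniform(0, 1, N))
    attractors.add(tuple((x_final > 0.5).astype(int)))    # coarse attractor fingerprint
print("distinct attractor patterns found:", len(attractors))
```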

  17. Improving large-scale groundwater models by considering fossil gradients

    NASA Astrophysics Data System (ADS)

    Schulz, Stephan; Walther, Marc; Michelsen, Nils; Rausch, Randolf; Dirks, Heiko; Al-Saud, Mohammed; Merz, Ralf; Kolditz, Olaf; Schüth, Christoph

    2017-05-01

    Due to limited availability of surface water, many arid to semi-arid countries rely on their groundwater resources. Despite the quasi-absence of present-day replenishment, some of these groundwater bodies contain large amounts of water, which was recharged during pluvial periods of the Late Pleistocene to Early Holocene. These mostly fossil, non-renewable resources require different management schemes compared to those which are usually applied in renewable systems. Fossil groundwater is a finite resource and its withdrawal implies mining of aquifer storage reserves. Although they receive almost no recharge, some of them show notable hydraulic gradients and a flow towards their discharge areas, even without pumping. As a result, these systems have more discharge than recharge and hence are not in steady state, which makes their modelling, in particular the calibration, very challenging. In this study, we introduce a new calibration approach, composed of four steps: (i) estimating the fossil discharge component, (ii) determining the origin of fossil discharge, (iii) fitting the hydraulic conductivity with a pseudo steady-state model, and (iv) fitting the storage capacity with a transient model by reconstructing head drawdown induced by pumping activities. Finally, we test the relevance of our approach and evaluate the effect of considering or ignoring fossil gradients on aquifer parameterization for the Upper Mega Aquifer (UMA) on the Arabian Peninsula.

  18. Renormalizing a viscous fluid model for large scale structure formation

    SciTech Connect

    Führer, Florian; Rigopoulos, Gerasimos E-mail: gerasimos.rigopoulos@ncl.ac.uk

    2016-02-01

    Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry as the full system of continuity+Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean invariance, higher-order vertices related to the viscosity and the noise must then be added, and we explicitly show at one loop that these terms act as counterterms for vertex diagrams. The Ward Identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring fewer free parameters for its renormalization.

  19. Large Scale Simulations of the Kinetic Ising Model

    NASA Astrophysics Data System (ADS)

    Münkel, Christian

    We present Monte Carlo simulation results for the dynamical critical exponent z of the two- and three-dimensional kinetic Ising model. The z-values were calculated from the magnetization relaxation from an ordered state into the equilibrium state at Tc for very large systems with up to (169984)^2 and (3072)^3 spins. To our knowledge, these are the largest Ising systems simulated to date. We also report the successful simulation of very large lattices on a massively parallel MIMD computer with high speedups of approximately 1000 and an efficiency of about 0.93.
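
    A minimal sketch of the measurement idea, at a lattice size nowhere near those of the record: single-spin-flip Monte Carlo (Metropolis updates here, as a stand-in for kinetic Glauber-type dynamics) of the 2D Ising model at T_c, starting from the ordered state and recording the magnetization relaxation M(t) from which the dynamical exponent z would be extracted. Lattice size, sweep count, and seed are illustrative assumptions.

```python
import numpy as np

# Tiny illustration of magnetization relaxation at criticality.
rng = np.random.default_rng(1)
L = 48
Tc = 2.0 / np.log(1.0 + np.sqrt(2.0))    # exact critical temperature of the 2D Ising model
spins = np.ones((L, L), dtype=int)       # fully ordered initial state

def sweep(spins, T):
    """One Monte Carlo sweep of single-spin-flip Metropolis updates."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for t in range(1, 201):
    sweep(spins, Tc)
    if t % 50 == 0:
        # At criticality M(t) decays as a power law ~ t^(-beta/(nu*z)).
        print(t, abs(spins.mean()))
```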

  20. Computational models for large-scale simulations of facilitated diffusion.

    PubMed

    Zabet, Nicolae Radu; Adryan, Boris

    2012-11-01

    The binding of site-specific transcription factors to their genomic target sites is a key step in gene regulation. While the genome is huge, transcription factors belong to the least abundant protein classes in the cell. It is therefore fascinating how short the time frame is that they require to home in on their target sites. The underlying search mechanism is called facilitated diffusion and assumes a combination of three-dimensional diffusion in the space around the DNA and one-dimensional random walk along it. In this review, we present the current understanding of the facilitated diffusion mechanism and identify questions that lack a clear or detailed answer. One way to investigate these questions is through stochastic simulation and, in this manuscript, we support the idea that such simulations are able to address them. Finally, we review which biological parameters need to be included in such computational models in order to obtain a detailed representation of the actual process.
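
    The kind of stochastic simulation advocated in this review can be caricatured in a few lines: a particle alternates between 1D sliding along DNA and re-binding at a random position (a crude stand-in for the 3D excursion), and the time to find a target site is recorded. Genome length, target position, and the dissociation probability below are illustrative assumptions, not measured parameters.

```python
import random

# Toy facilitated-diffusion search: 1D sliding interrupted by "3D" relocations.
random.seed(0)
GENOME = 10_000           # DNA length in base pairs (toy value)
TARGET = 4_200            # position of the target site (toy value)
P_UNBIND = 1e-2           # probability per step of dissociating from the DNA

def search_time():
    pos = random.randrange(GENOME)
    steps = 0
    while pos != TARGET:
        steps += 1
        if random.random() < P_UNBIND:
            pos = random.randrange(GENOME)                 # 3D excursion: re-land anywhere
        else:
            pos = (pos + random.choice((-1, 1))) % GENOME  # 1D sliding along the DNA
    return steps

times = [search_time() for _ in range(20)]
print("mean search time (steps):", sum(times) / len(times))
```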

  1. Large-scale assessment of the gliomasphere model system.

    PubMed

    Laks, Dan R; Crisman, Thomas J; Shih, Michelle Y S; Mottahedeh, Jack; Gao, Fuying; Sperry, Jantzen; Garrett, Matthew C; Yong, William H; Cloughesy, Timothy F; Liau, Linda M; Lai, Albert; Coppola, Giovanni; Kornblum, Harley I

    2016-10-01

    Gliomasphere cultures are widely utilized for the study of glioblastoma (GBM). However, this model system is not well characterized, and the utility of current classification methods is not clear. We used 71 gliomasphere cultures from 68 individuals. Using gene expression-based classification, we performed unsupervised clustering and associated gene expression with gliomasphere phenotypes and patient survival. Some aspects of the gene expression-based classification method were robust because the gliomasphere cultures retained their classification over many passages, and IDH1 mutant gliomaspheres were all proneural. While gene expression of a subset of gliomasphere cultures was more like the parent tumor than any other tumor, gliomaspheres did not always harbor the same classification as their parent tumor. Classification was not associated with whether a sphere culture was derived from primary or recurrent GBM or associated with the presence of EGFR amplification or rearrangement. Unsupervised clustering of gliomasphere gene expression distinguished 2 general categories (mesenchymal and nonmesenchymal), while multidimensional scaling distinguished 3 main groups and a fourth minor group. Unbiased approaches revealed that PI3Kinase, protein kinase A, mTOR, ERK, Integrin, and beta-catenin pathways were associated with in vitro measures of proliferation and sphere formation. Associating gene expression with gliomasphere phenotypes and patient outcome, we identified genes not previously associated with GBM: PTGR1, which suppresses proliferation, and EFEMP2 and LGALS8, which promote cell proliferation. This comprehensive assessment reveals advantages and limitations of using gliomaspheres to model GBM biology, and provides a novel strategy for selecting genes for future study.

  2. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-07-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that were part of the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation, i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  3. Computational models for large-scale simulations of facilitated diffusion

    PubMed Central

    Zabet, Nicolae Radu; Adryan, Boris

    2014-01-01

    The binding of site-specific transcription factors to their genomic target sites is a key step in gene regulation. While the genome is huge, transcription factors belong to the least abundant protein classes in the cell. It is therefore fascinating how short the time frame is that they require to home in on their target sites. The underlying search mechanism is called facilitated diffusion and assumes a combination of three-dimensional diffusion in the space around the DNA and one-dimensional random walk along it. In this review, we present the current understanding of the facilitated diffusion mechanism and identify questions that lack a clear or detailed answer. One way to investigate these questions is through stochastic simulation and, in this manuscript, we support the idea that such simulations are able to address them. Finally, we review which biological parameters need to be included in such computational models in order to obtain a detailed representation of the actual process. PMID:22892851

  4. Large Scale Modelling of Glow Discharges or Non - Plasmas

    NASA Astrophysics Data System (ADS)

    Shankar, Sadasivan

    The Electron Velocity Distribution Function (EVDF) in the cathode fall of a DC helium glow discharge was evaluated from a numerical solution of the Boltzmann Transport Equation (BTE). The numerical technique was based on a Petrov-Galerkin technique and a unique combination of streamline upwinding with self-consistent feedback-based shock-capturing. The EVDF for the cathode fall was solved at 1 Torr, as a function of position x, axial velocity v_x, radial velocity v_r, and time t. The electron-neutral collisions consisted of elastic, excitation, and ionization processes. The algorithm was optimized and vectorized to speed execution by more than a factor of 10 on a CRAY-XMP. Efficient storage schemes were used to save the memory allocation required by the algorithm. The analysis of the solution of the BTE was done in terms of the 8 moments that were evaluated. Higher moments were found necessary to study the momentum and energy fluxes. The time and length scales were estimated and used as a basis for the characterization of DC glow discharges. Based on an exhaustive study of Knudsen numbers, it was observed that the electrons in the cathode fall were in the transition or Boltzmann regime. The shortest relaxation time was the momentum relaxation and the longest times were the ionization and energy relaxation times. The other time scales in the processes were those for plasma reaction, diffusion, convection, transit, entropy relaxation, and mean free flight between collisions. Different models were classified, based on the moments, time scales, and length scales, according to their applicability to glow discharges. These consisted of the BTE with different numbers of phase and configuration dimensions, the Bhatnagar-Gross-Krook equation, moment equations (e.g. Drift-Diffusion, Drift-Diffusion-Inertia), and spherical harmonic expansions.

  5. Modeling of Cloud/Radiation Processes for Large-Scale Clouds and Tropical Anvils

    DTIC Science & Technology

    1994-05-31

    A three-dimensional, large-scale cloud model has been developed for the prediction of cloud cover, cloud liquid/ice water content (LWC/IWC), precipitation, ... specific humidity and temperature. Partial cloudiness is allowed to form when the large-scale relative humidity is less than 100%. Both liquid and ice phases are included in the model. The liquid phase processes consist of evaporation, condensation, autoconversion and precipitation. The ice phase

  6. Interaction of a cumulus cloud ensemble with the large-scale environment. IV - The discrete model

    NASA Technical Reports Server (NTRS)

    Lord, S. J.; Chao, W. C.; Arakawa, A.

    1982-01-01

    The Arakawa-Schubert (1974) parameterization is applied to a prognostic model of large-scale atmospheric circulations and used to analyze data in a general circulation model (GCM). The vertical structure of the large-scale model and the solution for the cloud subensemble thermodynamical properties are examined to choose cloud levels and representative regions. A mass flux distribution equation is adapted to formulate algorithms for calculating the large-scale forcing and the mass flux kernel, using either direct solution or linear programming. Finally, the feedback of the cumulus ensemble on the large-scale environment for a given subensemble mass flux is calculated. All cloud subensemble properties were determined from the conservation of mass, moist static energy, and total water.

  7. Using large-scale neural models to interpret connectivity measures of cortico-cortical dynamics at millisecond temporal resolution

    PubMed Central

    Banerjee, Arpan; Pillai, Ajay S.; Horwitz, Barry

    2012-01-01

    Over the last two decades numerous functional imaging studies have shown that higher order cognitive functions are crucially dependent on the formation of distributed, large-scale neuronal assemblies (neurocognitive networks), often for very short durations. This has fueled the development of a vast number of functional connectivity measures that attempt to capture the spatiotemporal evolution of neurocognitive networks. Unfortunately, interpreting the neural basis of goal-directed behavior using connectivity measures on neuroimaging data is highly dependent on the assumptions underlying the development of the measure, the nature of the task, and the modality of the neuroimaging technique that was used. This paper has two main purposes. The first is to provide an overview of some of the different measures of functional/effective connectivity that deal with high temporal resolution neuroimaging data. We will include some results that come from a recent approach that we have developed to identify the formation and extinction of task-specific, large-scale neuronal assemblies from electrophysiological recordings at a ms-by-ms temporal resolution. The second purpose of this paper is to indicate how to partially validate the interpretations drawn from this (or any other) connectivity technique by using simulated data from large-scale, neurobiologically realistic models. Specifically, we applied our recently developed method to realistic simulations of MEG data during a delayed match-to-sample (DMS) task condition and a passive viewing of stimuli condition using a large-scale neural model of the ventral visual processing pathway. Simulated MEG data using simple head models were generated from sources placed in V1, V4, IT, and prefrontal cortex (PFC) for the passive viewing condition. The results show how closely the conclusions obtained from the functional connectivity method match with what actually occurred at the neuronal network level. PMID:22291621
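
    A generic sketch (not the authors' specific measure) of how connectivity can be read out at millisecond resolution from trial ensembles: the across-trial correlation between two simulated channels rises only while they share a common input, mimicking the formation and extinction of a task-specific assembly. Epoch length, trial count, and the task window are illustrative assumptions.

```python
import numpy as np

# Across-trial correlation as a time-resolved connectivity readout (toy example).
rng = np.random.default_rng(3)
n_trials, n_times = 200, 300                    # e.g. 300 ms epochs, 1 ms bins
noise_x = rng.normal(size=(n_trials, n_times))
noise_y = rng.normal(size=(n_trials, n_times))
shared = rng.normal(size=(n_trials, n_times))
window = np.zeros(n_times)
window[100:200] = 1.0                           # "assembly" active only from 100 to 200 ms

x = noise_x + shared * window                   # two channels sharing a common drive
y = noise_y + shared * window

def across_trial_corr(a, b):
    """Pearson correlation across trials, computed separately at each time point."""
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

r_t = across_trial_corr(x, y)
print("mean r outside window:", float(r_t[:100].mean()))
print("mean r inside window: ", float(r_t[100:200].mean()))
```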

  8. Odor Experience Facilitates Sparse Representations of New Odors in a Large-Scale Olfactory Bulb Model

    PubMed Central

    Zhou, Shanglin; Migliore, Michele; Yu, Yuguo

    2016-01-01

    Prior odor experience has a profound effect on the coding of new odor inputs by animals. The olfactory bulb, the first relay of the olfactory pathway, can substantially shape the representations of odor inputs. How prior odor experience affects the representation of new odor inputs in the olfactory bulb and its underlying network mechanism are still unclear. Here we carried out a series of simulations based on a large-scale realistic mitral-granule network model and found that prior odor experience not only accelerated the formation of the network, but it also significantly strengthened sparse responses in the mitral cell network while decreasing sparse responses in the granule cell network. This modulation of sparse representations may be due to the increase of inhibitory synaptic weights. Correlations among mitral cells within the network and correlations between mitral network responses to different odors decreased gradually when the number of prior training odors was increased, resulting in a greater decorrelation of the bulb representations of input odors. Based on these findings, we conclude that greater prior odor experience facilitates sparser representations of new odors by the mitral cell network through an experience-enhanced inhibition mechanism. PMID:26903819

  9. Reconstruction and visualization of large-scale volumetric models of neocortical circuits for physically-plausible in silico optical studies.

    PubMed

    Abdellah, Marwan; Hernando, Juan; Antille, Nicolas; Eilemann, Stefan; Markram, Henry; Schürmann, Felix

    2017-09-13

    We present a software workflow capable of building large scale, highly detailed and realistic volumetric models of neocortical circuits from the morphological skeletons of their digitally reconstructed neurons. The limitations of the existing approaches for creating those models are explained, and then a multi-stage pipeline is discussed to overcome those limitations. Starting from the neuronal morphologies, we create smooth piecewise watertight polygonal models that can be efficiently utilized to synthesize continuous and plausible volumetric models of the neurons with solid voxelization. The somata of the neurons are reconstructed on a physically-plausible basis relying on the physics engine in Blender. Our pipeline is applied to create 55 exemplar neurons representing the various morphological types that are reconstructed from the somatosensory cortex of a juvenile rat. The pipeline is then used to reconstruct a volumetric slice of a cortical circuit model that contains ∼210,000 neurons. The applicability of our pipeline to create highly realistic volumetric models of neocortical circuits is demonstrated with an in silico imaging experiment that simulates tissue visualization with brightfield microscopy. The results were evaluated with a group of domain experts to address their demands and also to extend the workflow based on their feedback. A systematic workflow is presented to create large scale synthetic tissue models of the neocortical circuitry. This workflow is fundamental to enlarge the scale of in silico neuroscientific optical experiments from several tens of cubic micrometers to a few cubic millimeters.

  10. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  11. Realistic models of paracrystalline silicon

    NASA Astrophysics Data System (ADS)

    Nakhmanson, S. M.; Voyles, P. M.; Mousseau, Normand; Barkema, G. T.; Drabold, D. A.

    2001-06-01

    We present a procedure for the preparation of physically realistic models of paracrystalline silicon based on a modification of the bond-switching method of Wooten, Winer, and Weaire. The models contain randomly oriented c-Si grains embedded in a disordered matrix. Our technique creates interfaces between the crystalline and disordered phases of Si with an extremely low concentration of coordination defects. The resulting models possess structural and vibrational properties comparable with those of good continuous random network models of amorphous silicon and display realistic optical properties, correctly reproducing the electronic band gap of amorphous silicon. The largest of our models also shows the best agreement of any atomistic model structure that we tested with fluctuation microscopy experiments, indicating that this model has a degree of medium-range order closest to that of the real material.

  12. An overview of comparative modelling and resources dedicated to large-scale modelling of genome sequences.

    PubMed

    Lam, Su Datt; Das, Sayoni; Sillitoe, Ian; Orengo, Christine

    2017-08-01

    Computational modelling of proteins has been a major catalyst in structural biology. Bioinformatics groups have exploited the repositories of known structures to predict high-quality structural models with high efficiency at low cost. This article provides an overview of comparative modelling, reviews recent developments and describes resources dedicated to large-scale comparative modelling of genome sequences. The value of subclustering protein domain superfamilies to guide the template-selection process is investigated. Some recent cases in which structural modelling has aided experimental work to determine very large macromolecular complexes are also cited.

  13. An overview of comparative modelling and resources dedicated to large-scale modelling of genome sequences

    PubMed Central

    Lam, Su Datt; Das, Sayoni; Sillitoe, Ian; Orengo, Christine

    2017-01-01

    Computational modelling of proteins has been a major catalyst in structural biology. Bioinformatics groups have exploited the repositories of known structures to predict high-quality structural models with high efficiency at low cost. This article provides an overview of comparative modelling, reviews recent developments and describes resources dedicated to large-scale comparative modelling of genome sequences. The value of subclustering protein domain superfamilies to guide the template-selection process is investigated. Some recent cases in which structural modelling has aided experimental work to determine very large macromolecular complexes are also cited. PMID:28777078

  14. Analytical model of the statistical properties of contrast of large-scale ionospheric inhomogeneities.

    NASA Astrophysics Data System (ADS)

    Vsekhsvyatskaya, I. S.; Evstratova, E. A.; Kalinin, Yu. K.; Romanchuk, A. A.

    1989-08-01

    A new analytical model is proposed for the distribution of variations of the relative electron-density contrast of large-scale ionospheric inhomogeneities. The model is characterized by nonzero skewness and kurtosis. It is shown that the model is applicable in the interval of horizontal dimensions of inhomogeneities from hundreds to thousands of kilometers.

  15. Software System Design for Large Scale, Spatially-explicit Agroecosystem Modeling

    SciTech Connect

    Wang, Dali; Nichols, Dr Jeff A; Kang, Shujiang; Post, Wilfred M; Liu, Sumang

    2012-01-01

    Recently, site-based agroecosystem models have been applied at the regional and state levels to enable comprehensive analyses of the environmental sustainability of food and biofuel production. Those large-scale, spatially-explicit simulations present computational challenges in software systems design. Herein, we describe our software system design for large-scale, spatially-explicit agroecosystem modeling and data analysis. First, we describe the software design principles in three major phases: data preparation, high performance simulation, and data management and analysis. Then, we use a case study at a regional intensive modeling area (RIMA) to demonstrate our system implementation and capability.

  16. Identification of large-scale genomic variation in cancer genomes using in silico reference models

    PubMed Central

    Killcoyne, Sarah; del Sol, Antonio

    2016-01-01

    Identifying large-scale structural variation in cancer genomes continues to be a challenge to researchers. Current methods rely on genome alignments based on a reference that can be a poor fit to highly variant and complex tumor genomes. To address this challenge we developed a method that uses available breakpoint information to generate models of structural variations. We use these models as references to align previously unmapped and discordant reads from a genome. By using these models to align unmapped reads, we show that our method can help to identify large-scale variations that have been previously missed. PMID:26264669

  17. CytoModeler: a tool for bridging large-scale network analysis and dynamic quantitative modeling

    PubMed Central

    Xia, Tian; Van Hemert, John; Dickerson, Julie A.

    2011-01-01

    Summary: CytoModeler is an open-source Java application based on the Cytoscape platform. It integrates large-scale network analysis and quantitative modeling by combining omics analysis on the Cytoscape platform, access to deterministic and stochastic simulators, and static and dynamic network context visualizations of simulation results. Availability: Implemented in Java, CytoModeler runs with Cytoscape 2.6 and 2.7. Binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv/cytomodeler/. Contact: julied@iastate.edu; netscape@iastate.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21511714

  18. Non-Gaussianity and Large Scale Structure in a two-field Inflationary model

    SciTech Connect

    Tseliakhovich, D.; Slosar, A.; Hirata, C.

    2010-08-30

    Single-field inflationary models predict nearly Gaussian initial conditions, and hence a detection of non-Gaussianity would be a signature of the more complex inflationary scenarios. In this paper we study the effect on the cosmic microwave background and on large-scale structure from primordial non-Gaussianity in a two-field inflationary model in which both the inflaton and curvaton contribute to the density perturbations. We show that in addition to the previously described enhancement of the galaxy bias on large scales, this setup results in large-scale stochasticity. We provide joint constraints on the local non-Gaussianity parameter f_NL and the ratio ζ of the amplitude of primordial perturbations due to the inflaton and curvaton using WMAP and Sloan Digital Sky Survey data.

  19. Non-Gaussianity and large-scale structure in a two-field inflationary model

    SciTech Connect

    Tseliakhovich, Dmitriy; Hirata, Christopher

    2010-08-15

    Single-field inflationary models predict nearly Gaussian initial conditions, and hence a detection of non-Gaussianity would be a signature of the more complex inflationary scenarios. In this paper we study the effect on the cosmic microwave background and on large-scale structure from primordial non-Gaussianity in a two-field inflationary model in which both the inflaton and curvaton contribute to the density perturbations. We show that in addition to the previously described enhancement of the galaxy bias on large scales, this setup results in large-scale stochasticity. We provide joint constraints on the local non-Gaussianity parameter f̃_NL and the ratio ξ of the amplitude of primordial perturbations due to the inflaton and curvaton using WMAP and Sloan Digital Sky Survey data.

  20. Measuring Growth in a Longitudinal Large-Scale Assessment with a General Latent Variable Model

    ERIC Educational Resources Information Center

    von Davier, Matthias; Xu, Xueli; Carstensen, Claus H.

    2011-01-01

    The aim of the research presented here is the use of extensions of longitudinal item response theory (IRT) models in the analysis and comparison of group-specific growth in large-scale assessments of educational outcomes. A general discrete latent variable model was used to specify and compare two types of multidimensional item-response-theory…

  1. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  2. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    ERIC Educational Resources Information Center

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  3. An Alternative Way to Model Population Ability Distributions in Large-Scale Educational Surveys

    ERIC Educational Resources Information Center

    Wetzel, Eunike; Xu, Xueli; von Davier, Matthias

    2015-01-01

    In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…

  4. An Alternative Way to Model Population Ability Distributions in Large-Scale Educational Surveys

    ERIC Educational Resources Information Center

    Wetzel, Eunike; Xu, Xueli; von Davier, Matthias

    2015-01-01

    In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…

  5. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1994-01-01

    We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
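
    For concreteness, a small sketch of one of the hierarchical amplitudes discussed here: the skewness S_3 = <δ³>/<δ²>², estimated from counts in cells of a toy lognormal density field. The mock field and its parameters are illustrative assumptions, not survey data.

```python
import numpy as np

# Estimate the hierarchical skewness amplitude S_3 from a toy density field.
rng = np.random.default_rng(2)
g = rng.normal(0.0, 0.5, size=100_000)        # Gaussian field sampled in cells
rho = np.exp(g - 0.5 * 0.5**2)                # lognormal density with unit mean
delta = rho / rho.mean() - 1.0                # density contrast in each cell

S3 = np.mean(delta**3) / np.mean(delta**2) ** 2
# For an exactly lognormal field, S_3 = 3 + <delta^2>; the estimate should be close.
print(f"estimated S_3 = {S3:.2f}")
```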

  6. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p is approximately 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  7. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1994-01-01

    We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p is approximately 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  8. The three-point function as a probe of models for large-scale structure

    SciTech Connect

    Frieman, J. A.; Gaztanaga, E.

    1993-06-19

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ≈ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  9. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  10. PLATO: data-oriented approach to collaborative large-scale brain system modeling.

    PubMed

    Kannon, Takayuki; Inagaki, Keiichiro; Kamiji, Nilton L; Makimura, Kouji; Usui, Shiro

    2011-11-01

    The brain is a complex information processing system, which can be divided into sub-systems, such as the sensory organs, functional areas in the cortex, and motor control systems. In this sense, most of the mathematical models developed in the field of neuroscience have mainly targeted a specific sub-system. In order to understand the details of the brain as a whole, such sub-system models need to be integrated toward the development of a neurophysiologically plausible large-scale system model. In the present work, we propose a model integration library where models can be connected by means of a common data format. Here, the common data format should be portable so that models written in any programming language, computer architecture, and operating system can be connected. Moreover, the library should be simple so that models can be adapted to use the common data format without requiring any detailed knowledge on its use. Using this library, we have successfully connected existing models reproducing certain features of the visual system, toward the development of a large-scale visual system model. This library will enable users to reuse and integrate existing and newly developed models toward the development and simulation of a large-scale brain system model. The resulting model can also be executed on high performance computers using Message Passing Interface (MPI).

  11. An Efficient Simulation Environment for Modeling Large-Scale Cortical Processing

    PubMed Central

    Richert, Micah; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L.

    2011-01-01

    We have developed a spiking neural network simulator, which is both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models. The simulator implements current or conductance based Izhikevich neuron networks, having spike-timing dependent plasticity and short-term plasticity. It uses a standard network construction interface. The simulator allows for execution on either GPUs or CPUs. The simulator, which is written in C/C++, allows for both fine grain and coarse grain specificity of a host of parameters. We demonstrate the ease of use and computational efficiency of this model by implementing a large-scale model of cortical areas V1, V4, and area MT. The complete model, which has 138,240 neurons and approximately 30 million synapses, runs in real-time on an off-the-shelf GPU. The simulator source code, as well as the source code for the cortical model examples is publicly available. PMID:22007166
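
    For reference, a minimal sketch of the Izhikevich neuron update that simulators of this kind implement: v' = 0.04v² + 5v + 140 − u + I and u' = a(bv − u), with the reset v ← c, u ← u + d on spiking (Izhikevich 2003). The parameter values below give a regular-spiking cell; the constant input current is an arbitrary assumption and this is not the authors' simulator code.

```python
# Single regular-spiking Izhikevich neuron, Euler integration with two
# half-steps per millisecond (the scheme used in Izhikevich's original code).
a, b, c, d = 0.02, 0.2, -65.0, 8.0       # regular-spiking parameters
v, u = -65.0, b * -65.0                  # initial membrane potential and recovery variable
I = 10.0                                 # constant input current (arbitrary units)
spike_times = []

for t in range(1000):                    # 1000 ms of simulated time, 1 ms steps
    v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    v += 0.5 * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += a * (b * v - u)
    if v >= 30.0:                        # spike: record and reset
        spike_times.append(t)
        v = c
        u += d

print("spikes in 1 s:", len(spike_times))
```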

  12. An efficient simulation environment for modeling large-scale cortical processing.

    PubMed

    Richert, Micah; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L

    2011-01-01

    We have developed a spiking neural network simulator, which is both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models. The simulator implements current or conductance based Izhikevich neuron networks, having spike-timing dependent plasticity and short-term plasticity. It uses a standard network construction interface. The simulator allows for execution on either GPUs or CPUs. The simulator, which is written in C/C++, allows for both fine grain and coarse grain specificity of a host of parameters. We demonstrate the ease of use and computational efficiency of this model by implementing a large-scale model of cortical areas V1, V4, and area MT. The complete model, which has 138,240 neurons and approximately 30 million synapses, runs in real-time on an off-the-shelf GPU. The simulator source code, as well as the source code for the cortical model examples is publicly available.

  13. Influence of a compost layer on the attenuation of 28 selected organic micropollutants under realistic soil aquifer treatment conditions: insights from a large scale column experiment.

    PubMed

    Schaffer, Mario; Kröger, Kerrin Franziska; Nödler, Karsten; Ayora, Carlos; Carrera, Jesús; Hernández, Marta; Licha, Tobias

    2015-05-01

    Soil aquifer treatment is widely applied to improve the quality of treated wastewater for its reuse as an alternative source of water. To gain a deeper understanding of the fate of the organic micropollutants introduced in this way, the attenuation of 28 compounds was investigated in column experiments using two large scale column systems in duplicate. The influence of increasing proportions of solid organic matter (0.04% vs. 0.17%) and decreasing redox potentials (denitrification vs. iron reduction) was studied by introducing a layer of compost. Secondary effluent from a wastewater treatment plant was used as water matrix for simulating soil aquifer treatment. For neutral and anionic compounds, sorption generally increases with the compound hydrophobicity and the solid organic matter in the column system. Organic cations showed the highest attenuation. Among them, breakthroughs were only registered for the cationic beta-blockers atenolol and metoprolol. An enhanced degradation in the columns with organic infiltration layer was observed for the majority of the compounds, suggesting an improved degradation for higher levels of biodegradable dissolved organic carbon. Solely the degradation of sulfamethoxazole could clearly be attributed to redox effects (when reaching iron reducing conditions). The study provides valuable insights into the attenuation potential for a wide spectrum of organic micropollutants under realistic soil aquifer treatment conditions. Furthermore, the introduction of the compost layer generally showed positive effects on the removal of compounds preferentially degraded under reducing conditions and also increases the residence times in the soil aquifer treatment system via sorption.

  14. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    NASA Astrophysics Data System (ADS)

    Otterå, O. H.; Bentsen, M.; Bethke, I.; Kvamstø, N. G.

    2009-11-01

    The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  15. Large-scale simulation of karst processes - parameter estimation, model evaluation and quantification of uncertainty

    NASA Astrophysics Data System (ADS)

    Hartmann, A. J.

    2016-12-01

    Heterogeneity is an intrinsic property of karst systems. It results in complex hydrological behavior that is characterized by an interplay of diffuse and concentrated flow and transport. In large-scale hydrological models, these processes are usually not considered. Instead, average or representative values are chosen for each of the simulated grid cells, omitting many aspects of their sub-grid variability. In karst regions, this may lead to unreliable predictions when those models are used for assessing future water resources availability, floods or droughts, or when they are used for recommendations for more sustainable water management. In this contribution I present a large-scale groundwater recharge model (0.25° x 0.25° resolution) that takes karst hydrological processes into account by using statistical distribution functions to express subsurface heterogeneity. The model is applied over Europe's and the Mediterranean's carbonate rock regions (∼25% of the total area). As no measurements of the variability of subsurface properties are available at this scale, a parameter estimation procedure, which uses latent heat flux and soil moisture observations and quantifies the remaining uncertainty, was applied. The model is evaluated by sensitivity analysis, by comparison to other large-scale models without karst processes included, and against independent recharge observations. Using historic data (2002-2012), I show that recharge rates vary strongly over Europe and the Mediterranean. In regions with little information for parameter estimation there is a larger prediction uncertainty (for instance in desert regions). Evaluation with independent recharge estimates shows that, on average, the model provides acceptable estimates, while the other large-scale models under-estimate karstic recharge. The results of the sensitivity analysis corroborate the importance of including karst heterogeneity into the model, as the distribution shape factor is the most sensitive parameter for
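
    A minimal sketch of the sub-grid heterogeneity idea, assuming (for illustration only) that soil storage capacity varies across compartments of one grid cell according to a power law controlled by a shape factor, with recharge generated by the overflow of each compartment. The functional form, compartment count, and all forcing values are assumptions, not the model's actual formulation or parameters.

```python
import numpy as np

# Sub-grid heterogeneity expressed by a statistical distribution of storage capacities.
N = 15                                          # sub-grid compartments per grid cell (assumed)
shape = 2.0                                     # distribution shape factor (the sensitive parameter)
V_max = 120.0                                   # maximum soil storage capacity [mm] (assumed)
capacity = V_max * (np.arange(1, N + 1) / N) ** shape   # capacity varies across compartments

precip = [0.0, 35.0, 60.0, 0.0, 10.0, 80.0, 0.0]        # toy daily precipitation [mm]
ET = 3.0                                        # daily evapotranspiration [mm] (assumed)
soil = np.zeros(N)                              # current soil storage per compartment [mm]
total_recharge = 0.0

for P in precip:
    wetted = soil + P                           # add the day's precipitation
    recharge = np.maximum(wetted - capacity, 0.0)       # overflow recharges the aquifer
    soil = np.maximum(np.minimum(wetted, capacity) - ET, 0.0)  # ET empties the store
    total_recharge += recharge.mean()           # cell value = mean over compartments

print(f"cell-mean recharge over the period: {total_recharge:.1f} mm")
```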

  16. Large-scale peculiar velocity field in flat models of the universe

    SciTech Connect

    Vittorio, N.; Turner, M.S.

    1987-05-01

    The inflationary universe scenario predicts a flat universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models with two components of mass density, where one of the components of mass density is smoothly distributed, are examined, and the large-scale peculiar velocity field for these models is computed. For the smooth component the authors consider relativistic particles, a relic cosmological term, and light strings. At present the observational situation is unsettled, but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 66 references.

  17. The large-scale peculiar velocity field in flat models of the universe

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Turner, Michael S.

    1987-01-01

    The inflationary universe scenario predicts a flat universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models with two components of mass density, where one of the components of mass density is smoothly distributed, are examined, and the large-scale peculiar velocity field for these models is computed. For the smooth component the authors consider relativistic particles, a relic cosmological term, and light strings. At present the observational situation is unsettled, but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models.

  18. Aspects of investigating STOL noise using large scale wind tunnel models

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.; Soderman, P. T.

    1972-01-01

    The applicability of the NASA Ames 40- by 80-ft wind tunnel for acoustic research on STOL concepts has been investigated. The acoustic characteristics of the wind tunnel test section have been studied with calibrated acoustic sources. Acoustic characteristics of several large-scale STOL models have been studied both in the free-field and wind tunnel acoustic environments. The results indicate that the acoustic characteristics of large-scale STOL models can be measured in the wind tunnel if the test section acoustic environment and model acoustic similitude are taken into consideration. The reverberant field of the test section must be determined with an acoustically similar noise source. Directional microphones and extrapolation of near-field data to the far field are some of the techniques being explored as possible solutions to the directivity loss in a reverberant field. The model sound pressure levels must be of sufficient magnitude to be discernible from the wind tunnel background noise.

  19. Interactions among Radiation, Convection, and Large-Scale Dynamics in a General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Randall, David A.; Harshvardhan; Dazlich, Donald A.; Corsetti, Thomas G.

    1989-07-01

    We have analyzed the effects of radiatively active clouds on the climate simulated by the UCLA/GLA GCM, with particular attention to the effects of the upper tropospheric stratiform clouds associated with deep cumulus convection, and the interactions of these clouds with convection and the large-scale circulation. Several numerical experiments have been performed to investigate the mechanisms through which the clouds influence the large-scale circulation. In the `NODETLQ' experiment, no liquid water or ice was detrained from cumulus clouds into the environment; all of the condensate was rained out. Upper level supersaturation cloudiness was drastically reduced, the atmosphere dried, and tropical outgoing longwave radiation increased. In the `NOANVIL' experiment, the radiative effects of the optically thick upper-level cloud sheets associated with deep cumulus convection were neglected. The land surface received more solar radiation in regions of convection, leading to enhanced surface fluxes and a dramatic increase in precipitation. In the `NOCRF' experiment, the longwave atmospheric cloud radiative forcing (ACRF) was omitted, paralleling the recent experiment of Slingo and Slingo. The results suggest that the ACRF enhances deep penetrative convection and precipitation, while suppressing shallow convection. They also indicate that the ACRF warms and moistens the tropical troposphere. The results of this experiment are somewhat ambiguous, however; for example, the ACRF suppresses precipitation in some parts of the tropics, and enhances it in others. To isolate the effects of the ACRF in a simpler setting, we have analyzed the climate of an ocean-covered Earth, which we call Seaworld. The key simplicities of Seaworld are the fixed boundary temperature with no land points, the lack of mountains, and the zonal uniformity of the boundary conditions. Results are presented from two Seaworld simulations. The first includes a full suite of physical parameterizations, while

  20. Evaluation of variational principle based model for LDPE large scale film blowing process

    NASA Astrophysics Data System (ADS)

    Kolarik, Roman; Zatloukal, Martin

    2013-04-01

    In this work, variational principle based film blowing model combined with Pearson and Petrie formulation, considering non-isothermal processing conditions and novel generalized Newtonian model allowing to capture steady shear and uniaxial extensional viscosities has been validated by using experimentally determined bubble shape and velocity profile for LDPE sample on large scale film blowing line. It has been revealed that the minute change in the flow activation energy can significantly influence the film stretching level.

  1. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    NASA Astrophysics Data System (ADS)

    Otterå, O. H.; Bentsen, M.; Bethke, I.; Kvamstø, N. G.

    2009-05-01

    The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  2. Modelling Convective Dust Storms in Large-Scale Weather and Climate Models

    NASA Astrophysics Data System (ADS)

    Pantillon, F.; Knippertz, P.; Marsham, J. H.; Panitz, H. J.; Bischoff-Gauss, I.

    2015-12-01

    Recent field campaigns have shown that convective dust storms - also known as haboobs or cold pool outflows - contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in-situ observations are sparse and convective dust storms are frequently concealed by clouds in satellite imagery. Therefore numerical models are often the only available source of information over the area. Here a regional climate model with explicit representation of convection delivers the first full seasonal cycle of convective dust storms over North Africa. The model suggests that they contribute one fifth of the annual dust uplift over North Africa, one fourth between May and October, and one third over the western Sahel during this season. In contrast, most large-scale weather and climate models do not explicitly represent convection and thus lack such storms. A simple parameterization of convective dust storms has recently been developed, based on the downdraft mass flux of convection schemes. The parameterization is applied here to a set of regional climate runs with different horizontal resolutions and convection schemes, and assessed against the explicit run and against sparse station observations. The parameterization succeeds in capturing the geographical distribution and seasonal cycle of convective dust storms. It can be tuned to different horizontal resolutions and convection schemes, although the details of the geographical distribution and seasonal cycle depend on the representation of the monsoon in the parent model. Different versions of the parameterization are further discussed with respect to differences in the frequency of extreme events. The results show that the parameterization is reliable and can therefore solve a long-standing problem in simulating dust storms in large-scale weather and climate models.

  3. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ∼1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  4. Remote control and telemetry system for large-scale model test at sea

    NASA Astrophysics Data System (ADS)

    Sun, Shu-Zheng; Li, Ji-De; Zhao, Xiao-Dong; Luan, Jing-Lei; Wang, Chang-Tao

    2010-09-01

    Physical testing of large-scale ship models at sea is a new experimental method. It is a cheap and reliable way to study the environmental adaptability of a ship in complex and extreme wave conditions. A stable experimental system is necessary for such a test. Since the experimental area is large, a remote control system and a telemetry system are essential, and both were designed by the authors. An experiment was conducted on the Songhuajiang River to test the systems. The relationship between the model's speed and its electromotor's revolutions was also measured during the model test. The results showed that the two systems make it possible to carry out large-scale model tests at sea.

  5. Simplified radiation and convection treatments for large-scale tropical atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Chou, Chia

    1997-05-01

    A physical parameterization package is developed for intermediate tropical atmospheric models, i.e., models slightly less complex than full general circulation models (GCMs). This package includes a linearized longwave radiation scheme, a simplified parameterization for surface solar radiation, and a cloudiness prediction scheme. A quantity that measures the net large-scale vertical stratification in deep convective regions, the gross moist stability, is estimated from observations. Using a Green's function method, the longwave radiation scheme is linearized from a fully nonlinear scheme used in GCMs. This includes the radiative flux dependence on large-scale variables, such as temperature, moisture, cloud fraction, and cloud top. A comparison with the fully nonlinear scheme in simulating tropical climatology, seasonal variations, and interannual variability is carried out using the observed large-scale variables as input. For these applications, the linearized scheme accurately reproduces the nonlinear results, and it can be easily applied in atmospheric models. The simplified solar radiation scheme is used to calculate surface solar irradiance as a function of cloud fraction and solar zenith angle. Cloud optical thickness is fixed for each cloud type, and cloud albedo is assumed to depend linearly on solar zenith angle. Comparison is made with two satellite-derived data sets. The cloudiness prediction scheme consists of empirical relations for cloudiness associated with deep convection, and is appropriate for long Reynolds-averaging intervals. Deep cloud can be estimated by large-scale precipitation in the tropics. Deep cloud and cirrostratus/cirrocumulus corresponding to tower and anvil clouds have a linear relation. Cirrus cloud fraction is calculated by a 2-D prognostic cloud ice budget equation. A deep-cloud-top-temperature postulate is used for parameterizing the cirrus source. The data analysis yields the physical hypothesis that deep cloud top temperature

  6. Evolution of Large-Scale Circulation during TOGA COARE: Model Intercomparison and Basic Features.

    NASA Astrophysics Data System (ADS)

    Lau, K.-M.; Sheu, P. J.; Schubert, S.; Ledvina, D.; Weng, H.

    1996-05-01

    An intercomparison study of the evolution of large-scale circulation features during TOGA COARE has been carried out using data from three 4D assimilation systems: the National Meteorological Center (NMC, currently known as the National Center for Environmental Prediction), the Navy Fleet Numerical Oceanography Center, and the NASA Goddard Space Flight Center. Results show that the preliminary assimilation products, though somewhat crude, can provide important information concerning the evolution of the large-scale atmospheric circulation over the tropical western Pacific during TOGA COARE. Large-scale features such as sea level pressure, rotational wind field, and temperature are highly consistent among models. However, the rainfall and wind divergence distributions show poor agreement among models, even though some useful information can still be derived. All three models show a continuous background rain over the Intensive Flux Area (IFA), even during periods with suppressed convection, in contrast to the radar-estimated rainfall, which is more episodic. This may reflect a generic deficiency in the oversimplified representation of large-scale rain in all three models. Based on the comparative model diagnostics, a consistent picture of large-scale evolution and multiscale interaction during TOGA COARE emerges. The propagation of the Madden and Julian Oscillation (MJO) from the equatorial Indian Ocean region into the western Pacific foreshadows the establishment of westerly wind events over the COARE region. The genesis and maintenance of the westerly wind (WW) events during TOGA COARE are related to the establishment of a large-scale east-west pressure dipole between the Maritime Continent and the equatorial central Pacific. This pressure dipole could be identified in part with the ascending (low pressure) and descending (high pressure) branches of the MJO and in part with the fluctuations of the austral summer monsoon. Accompanying the development of WW over the

  7. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-10-20

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in a L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
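
    A minimal numerical sketch of the filtering step described above, assuming an illustrative scale-dependent bias of the form b(k) = b0/(1 + k/k0)^α (the functional form and all parameter values here are placeholders rather than the fitted values from the simulations):

        import numpy as np

        def zreion_from_density(delta, box_size, z_mean=8.0, b0=0.6, k0=0.2, alpha=1.0):
            """Map a 3-D overdensity field to a reionization-redshift field by applying
            an assumed scale-dependent linear bias b(k) in Fourier space."""
            n = delta.shape[0]
            k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
            kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
            k = np.sqrt(kx**2 + ky**2 + kz**2)
            bias = b0 / (1.0 + k / k0) ** alpha              # illustrative functional form
            delta_z = np.fft.ifftn(bias * np.fft.fftn(delta)).real
            # Denser regions reionize earlier, i.e. at higher redshift.
            return z_mean * (1.0 + delta_z)

        delta = np.random.normal(0.0, 0.1, size=(32, 32, 32))  # stand-in density field
        print(zreion_from_density(delta, box_size=100.0).mean())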

  8. Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations

    NASA Astrophysics Data System (ADS)

    Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila

    2016-07-01

    A numerical investigation of a self-consistent small parametric model (SPM) for large-scale cyclogenesis (RLSC) is performed, using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios of the temporal dynamics of a powerful atmospheric vortex during its full life cycle. The numerical calculations show that a suitable choice of the SPM's input parameters allows the seasonal behavior of regional large-scale cyclogenesis dynamics to be described for a given number of TCs during the active season. It is also shown that the SPM can describe the wind speed variations inside the TC. Thus, with the nonlinear small parametric model it is possible to study the features of the temporal dynamics of RLSC during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and external factors such as space weather, including the solar activity level and cosmic ray variations.

  9. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect

    RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, that supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which makes the scholarly process amenable to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  10. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ∼10⁶ cores and sustained performance over ∼2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.

  11. Computational fluid dynamics simulations of particle deposition in large-scale, multigenerational lung models.

    PubMed

    Walters, D Keith; Luke, William H

    2011-01-01

    Computational fluid dynamics (CFD) has emerged as a useful tool for the prediction of airflow and particle transport within the human lung airway. Several published studies have demonstrated the use of Eulerian finite-volume CFD simulations coupled with Lagrangian particle tracking methods to determine local and regional particle deposition rates in small subsections of the bronchopulmonary tree. However, the simulation of particle transport and deposition in large-scale models encompassing more than a few generations is less common, due in part to the sheer size and complexity of the human lung airway. Highly resolved, fully coupled flowfield solution and particle tracking in the entire lung, for example, is currently an intractable problem and will remain so for the foreseeable future. This paper adopts a previously reported methodology for simulating large-scale regions of the lung airway (Walters, D. K., and Luke, W. H., 2010, "A Method for Three-Dimensional Navier-Stokes Simulations of Large-Scale Regions of the Human Lung Airway," ASME J. Fluids Eng., 132(5), p. 051101), which was shown to produce results similar to fully resolved geometries using approximate, reduced geometry models. The methodology is extended here to particle transport and deposition simulations. Lagrangian particle tracking simulations are performed in combination with Eulerian simulations of the airflow in an idealized representation of the human lung airway tree. Results using the reduced models are compared with those using the fully resolved models for an eight-generation region of the conducting zone. The agreement between fully resolved and reduced geometry simulations indicates that the new method can provide an accurate alternative for large-scale CFD simulations while potentially reducing the computational cost of these simulations by several orders of magnitude.
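
    To illustrate the Lagrangian particle-tracking ingredient in isolation (the airway geometry and Eulerian flow field are assumed given; the particle properties below are generic example values):

        import numpy as np

        def track_particle(x0, v0, fluid_velocity, dt=1e-4, steps=1000,
                           d_p=4e-6, rho_p=1000.0, mu=1.8e-5):
            """Advance one aerosol particle through a given air-velocity field using
            Stokes drag; diameter, density, and viscosity are generic example values."""
            tau = rho_p * d_p**2 / (18.0 * mu)      # particle relaxation time (s)
            x, v = np.array(x0, float), np.array(v0, float)
            for _ in range(steps):
                u = fluid_velocity(x)
                v = u + (v - u) * np.exp(-dt / tau) # exact relaxation toward local flow
                x += dt * v                         # position update
            return x

        # Example: uniform 1 m/s axial flow, particle released from rest at the origin.
        print(track_particle([0.0, 0.0, 0.0], [0.0, 0.0, 0.0],
                             lambda x: np.array([1.0, 0.0, 0.0])))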

  12. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    NASA Astrophysics Data System (ADS)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population increases and economic developments would intensively affect the availability of water resources for agricultural production. While many studies assessed the impacts of climate change on agriculture, there are few studies that dynamically account for changes in water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Also, irrigation management in response to subseasonal variability in weather and crop response varies for each region and each crop. To deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimations. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consisted of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model was based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of

  13. Coupling large scale hydrologic-reservoir-hydraulic models for impact studies in data sparse regions

    NASA Astrophysics Data System (ADS)

    O'Loughlin, Fiachra; Neal, Jeff; Wagener, Thorsten; Bates, Paul; Freer, Jim; Woods, Ross; Pianosi, Francesca; Sheffield, Justin

    2017-04-01

    As hydraulic modelling moves to increasingly large spatial domains, it has become essential to take reservoirs and their operations into account. Large-scale hydrological models have been including reservoirs for at least the past two decades, yet they cannot explicitly model the variations in spatial extent of reservoirs, and many reservoir operations in hydrological models are not undertaken at run time. This requires a hydraulic model, yet to date no continental-scale hydraulic model has directly simulated reservoirs and their operations. In addition to the need to include reservoirs and their operations in hydraulic models as they move to global coverage, there is also a need to link such models to large-scale hydrology models or land surface schemes. This is especially true for Africa, where the number of river gauges has consistently declined since the middle of the twentieth century. In this study we address these two major issues by developing: 1) a coupling methodology for the VIC large-scale hydrological model and the LISFLOOD-FP hydraulic model, and 2) a reservoir module for the LISFLOOD-FP model, which currently includes four sets of reservoir operating rules taken from the major large-scale hydrological models. The Volta Basin, West Africa, was chosen to demonstrate the capability of the modelling framework as it is a large river basin (~400,000 km2) and contains the largest man-made lake in terms of area (8,482 km2), Lake Volta, created by the Akosombo dam. Lake Volta also experiences a seasonal variation in water levels of between two and six metres that creates a dynamic shoreline. In this study, we first run our coupled VIC and LISFLOOD-FP model without explicitly modelling Lake Volta and then compare these results with those from model runs where the dam operations and Lake Volta are included. The results show that we are able to obtain variation in the Lake Volta water levels and that including the dam operations and Lake Volta

  14. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper rely on computational algorithms and procedural implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory, run on computing clusters that provide high-performance parallel computation. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  15. A Novel Large-Scale Temperature Dominated Model for Predicting the End of the Growing Season

    PubMed Central

    Fu, Yang; Zheng, Zeyu; Shi, Haibo; Xiao, Rui

    2016-01-01

    Vegetation phenology regulates many ecosystem processes and is an indicator of the biological responses to climate change. It is important to model the timing of leaf senescence accurately, since the canopy duration and carbon assimilation are strongly determined by the timings of leaf senescence. However, the existing phenology models are unlikely to accurately predict the end of the growing season (EGS) on large scales, resulting in the misrepresentation of the seasonality and interannual variability of biosphere–atmosphere feedbacks and interactions in coupled global climate models. In this paper, we presented a novel large-scale temperature dominated model integrated with the physiological adaptation of plants to the local temperature to assess the spatial pattern and interannual variability of the EGS. Our model was validated in all temperate vegetation types over the Northern Hemisphere. The results indicated that our model showed better performance in representing the spatial and interannual variability of leaf senescence, compared with the original phenology model in the Integrated Biosphere Simulator (IBIS). Our model explained approximately 63% of the EGS variations, whereas the original model explained much lower variations (coefficient of determination R2 = 0.01–0.18). In addition, the differences between the EGS reproduced by our model and the MODIS EGS at 71.3% of the pixels were within 10 days. For the original model, it is only 26.1%. We also found that the temperature threshold (TcritTm) of grassland was lower than that of woody species in the same latitudinal zone. PMID:27893828
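
    A highly simplified sketch of a temperature-dominated senescence criterion of this general kind; the cooling-degree-day form, the threshold, and the accumulation target below are illustrative assumptions, not the published model:

        import math

        def predict_egs_doy(daily_tmean, t_crit=15.0, required_cdd=60.0, start_doy=182):
            """Return the day of year at which 'cooling degree days' accumulated below
            an assumed threshold t_crit first exceed required_cdd (illustrative only)."""
            accumulated = 0.0
            for doy, t in enumerate(daily_tmean, start=1):
                if doy < start_doy:              # only accumulate after mid-year
                    continue
                accumulated += max(t_crit - t, 0.0)
                if accumulated >= required_cdd:
                    return doy                   # predicted end of growing season
            return None                          # threshold never reached

        # Synthetic temperature series with a warm summer and a cooling autumn.
        temps = [20.0 - 15.0 * math.sin((d - 280) * math.pi / 365.0) for d in range(1, 366)]
        print(predict_egs_doy(temps))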

  16. Cooling biogeophysical effect of large-scale tropical deforestation in three Earth System models

    NASA Astrophysics Data System (ADS)

    Brovkin, V.; Pugh, T.; Robertson, E.; Bathiany, S.; Jones, C.; Arneth, A.

    2015-12-01

    Vegetation cover in the tropics is limited by moisture availability. Since transpiration from forests is generally greater than from grasslands, the sensitivity of precipitation in the Amazon to large-scale deforestation has long been seen as a critical parameter of climate-vegetation interactions. Most Amazon deforestation experiments to date have been performed with interactive land-atmosphere models but prescribed sea surface temperatures (SSTs). They reveal a strong reduction in evapotranspiration and precipitation, and an increase in global air surface temperature due to reduced latent heat flux. We performed large-scale tropical deforestation experiments with three Earth system models (ESMs) including interactive ocean models, which participated in the FP7 project EMBRACE. In response to tropical deforestation, all models simulate a significant reduction in tropical precipitation, similar to the experiments with prescribed SSTs. However, all three models suggest that the response of global temperature to the deforestation is a cooling or no change, differing from the result of a global warming in prescribed SSTs runs. Presumably, changes in the hydrological cycle and in the water vapor feedback due to deforestation operate in the direction of a global cooling. In addition, one of the models simulates a local cooling over the deforested tropical region. This is opposite to the local warming in the other models. This suggests that the balance between warming due to latent heat flux decrease and cooling due to albedo increase is rather subtle and model-dependent. Last but not least, we suggest using large-scale deforestation as a standard biogeophysical experiment for model intercomparison within the CMIP6 framework.

  17. Image fusion for remote sensing using fast, large-scale neuroscience models

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.

    2011-05-01

    We present results with large-scale neuroscience-inspired models for feature detection using multi-spectral visible/infrared satellite imagery. We describe a model using an artificial neural network architecture and learning rules to build sparse scene representations over an adaptive dictionary, fusing spectral and spatial textural characteristics of the objects of interest. Our results with fast codes implemented on clusters of graphical processor units (GPUs) suggest that visual cortex models are a promising approach to practical pattern recognition problems in remote sensing, even for datasets using spectral bands not found in natural visual systems.

  18. Large-scale Ice Discharge Events in a Pure Ice Sheet Model

    NASA Astrophysics Data System (ADS)

    Alverson, K.; Legrand, P.; Papa, B. D.; Mysak, L. A.; Wang, Z.

    2004-05-01

    Sediment cores in the North Atlantic show evidence of periodic large-scale ice discharge events between 60 ka and 10 ka BP. These events occurred with a typical period between 5 kyr and 10 kyr. During each event, a significant amount of ice was discharged from the Hudson Bay region through the Hudson Strait and into the North Atlantic. This input of freshwater through the melting of icebergs is thought to have strongly affected the Atlantic thermohaline circulation. One theory is that these periodic ice discharge events represent an internal oscillation of the ice sheet under constant forcing. A second theory requires some variable external forcing on an unstable ice sheet to produce a discharge event. Using the ice sheet model of Marshall, an attempt is made to simulate periodic large-scale ice discharge events within the framework of the first theory. In this case, ice sheet surges and large-scale discharge events occur as a free oscillation of the ice sheet. An analysis of the activation of ice surge events and the thermodynamic controls on these events is also made.

  19. Modeling dynamic functional information flows on large-scale brain networks.

    PubMed

    Lv, Peili; Guo, Lei; Hu, Xintao; Li, Xiang; Jin, Changfeng; Han, Junwei; Li, Lingjiang; Liu, Tianming

    2013-01-01

    Growing evidence from the functional neuroimaging field suggests that human brain functions are realized via dynamic functional interactions on large-scale structural networks. Even in the resting state, functional brain networks exhibit remarkable temporal dynamics. However, computational modeling of such dynamic functional information flows on large-scale brain networks has rarely been explored. In this paper, we present a novel computational framework to explore this problem using multimodal resting state fMRI (R-fMRI) and diffusion tensor imaging (DTI) data. Basically, recent literature reports, including our own studies, have demonstrated that resting state brain networks dynamically undergo a set of distinct brain states. Within each quasi-stable state, functional information flows from one set of structural brain nodes to other sets of nodes, which is analogous to message package routing on the Internet from a source node to a destination. Therefore, based on the large-scale structural brain networks constructed from DTI data, we employ a dynamic programming strategy to infer functional information transition routines on structural networks, based on which hub routers that most frequently participate in these routines are identified. It is interesting that a majority of those hub routers are located within the default mode network (DMN), revealing a possible mechanism of the critical functional hub roles played by the DMN in the resting state. Also, application of this framework to a post-traumatic stress disorder (PTSD) dataset demonstrated interesting differences in hub router distributions between PTSD patients and healthy controls.
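
    A toy sketch of the routing idea, with plain all-pairs shortest paths standing in for the dynamic programming step and a hypothetical five-node adjacency matrix in place of a DTI-derived structural network:

        import numpy as np
        from collections import Counter

        def hub_routers(adj):
            """Count how often each node is an intermediate hop on all-pairs shortest
            routes (Floyd-Warshall dynamic programming with path reconstruction)."""
            n = adj.shape[0]
            dist = np.where(adj > 0, adj.astype(float), np.inf)
            np.fill_diagonal(dist, 0.0)
            nxt = np.where(adj > 0, np.tile(np.arange(n), (n, 1)), -1)
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if dist[i, k] + dist[k, j] < dist[i, j]:
                            dist[i, j] = dist[i, k] + dist[k, j]
                            nxt[i, j] = nxt[i, k]
            counts = Counter()
            for i in range(n):
                for j in range(n):
                    if i == j or nxt[i, j] == -1:
                        continue
                    node = nxt[i, j]
                    while node != j:             # walk the route, counting intermediates
                        counts[node] += 1
                        node = nxt[node, j]
            return counts.most_common()

        # Hypothetical 5-node weighted structural network.
        A = np.array([[0, 1, 0, 0, 4],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [4, 0, 0, 1, 0]])
        print(hub_routers(A))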

  20. UDEC-AUTODYN Hybrid Modeling of a Large-Scale Underground Explosion Test

    NASA Astrophysics Data System (ADS)

    Deng, X. F.; Chen, S. G.; Zhu, J. B.; Zhou, Y. X.; Zhao, Z. Y.; Zhao, J.

    2015-03-01

    In this study, numerical modeling of a large-scale decoupled underground explosion test with 10 tons of TNT in Älvdalen, Sweden, is performed by combining DEM and FEM with the codes UDEC and AUTODYN. AUTODYN is adopted to model the explosion process, blast wave generation, and its action on the explosion chamber surfaces, while UDEC modeling is focused on shock wave propagation in the jointed rock masses surrounding the explosion chamber. The numerical modeling results with the hybrid AUTODYN-UDEC method are compared with empirical estimations, purely AUTODYN modeling results, and the field test data. It is found that in terms of peak particle velocity, empirical estimations are much smaller than the measured data, while purely AUTODYN modeling results are larger than the test data. The UDEC-AUTODYN numerical modeling results agree well with the test data. Therefore, the UDEC-AUTODYN method is appropriate for modeling a large-scale explosive detonation in a closed space and the subsequent wave propagation in jointed rock masses. It should be noted that the joint mechanical and spatial properties adopted in UDEC-AUTODYN modeling are determined with empirical equations and available geological data, and they may not be sufficiently accurate.
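
    For context, the empirical peak-particle-velocity estimates referred to above are typically of the cube-root scaled-distance form PPV = K (R / W^(1/3))^(-n); the constants below are generic placeholders, not the site-specific values used in the study:

        def ppv_scaled_distance(distance_m, charge_kg, k=700.0, n=1.5):
            """Generic empirical peak particle velocity (mm/s) from a cube-root
            scaled-distance law; k and n are site-specific placeholder constants."""
            scaled_distance = distance_m / charge_kg ** (1.0 / 3.0)
            return k * scaled_distance ** (-n)

        # Example: a 10-tonne (10,000 kg) charge observed 100 m away.
        print(ppv_scaled_distance(100.0, 10000.0))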

  1. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended
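
    A stripped-down sketch of the cooperation idea, with simple stochastic hill-climbers standing in for the eSS threads and periodic sharing of the best-known parameter vector; the objective function and all settings are placeholders:

        import numpy as np

        def cooperative_search(cost, dim, n_threads=4, rounds=20, iters_per_round=200,
                               step=0.1, seed=0):
            """Toy cooperative strategy: several independent stochastic hill-climbers
            ('threads') run in rounds and periodically share the best solution found."""
            rng = np.random.default_rng(seed)
            solutions = rng.uniform(-2.0, 2.0, size=(n_threads, dim))
            best_x, best_f = None, np.inf
            for _ in range(rounds):
                for t in range(n_threads):
                    x, f = solutions[t], cost(solutions[t])
                    for _ in range(iters_per_round):        # local stochastic search
                        cand = x + step * rng.normal(size=dim)
                        fc = cost(cand)
                        if fc < f:
                            x, f = cand, fc
                    solutions[t] = x
                    if f < best_f:
                        best_x, best_f = x.copy(), f
                # Cooperation step: every thread restarts near the global best.
                solutions = best_x + 0.05 * rng.normal(size=(n_threads, dim))
            return best_x, best_f

        rosenbrock = lambda x: sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)
        print(cooperative_search(rosenbrock, dim=4))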

  2. Downscaling large-scale climate variability using a regional climate model: the case of ENSO over Southern Africa

    NASA Astrophysics Data System (ADS)

    Boulard, Damien; Pohl, Benjamin; Crétat, Julien; Vigaud, Nicolas; Pham-Xuan, Thanh

    2013-03-01

    This study documents methodological issues arising when downscaling modes of large-scale atmospheric variability with a regional climate model over a remote region that is nonetheless under their influence. The retained case study is the El Niño Southern Oscillation and its impacts on Southern Africa and the South West Indian Ocean. Regional simulations are performed with the WRF model, driven laterally by ERA40 reanalyses over the 1971-1998 period. We document the sensitivity of the simulated climate variability to the model physics, the constraint of relaxing the model solutions towards reanalyses, the size of the relaxation buffer zone towards the lateral forcings, and the forcing fields through ERA-Interim driven simulations. The model's internal variability is quantified using 15-member ensemble simulations for the seasons of interest, single 30-year integrations appearing inappropriate for properly investigating the simulated interannual variability. The influence of SST prescription is also assessed through additional integrations using a simple ocean mixed-layer model. Results show a limited skill of the model in reproducing the seasonal droughts associated with El Niño conditions. The model deficiencies are found to result from biased atmospheric forcings and/or a biased response to these forcings, whatever the physical package retained. In contrast, regional SST forcing over the adjacent oceans favors realistic rainfall anomalies over the continent, although their amplitude remains too weak. These results confirm the significant contribution of nearby ocean SSTs to the regional effects of ENSO, but also illustrate that regionalizing large-scale climate variability can be a demanding exercise.

  3. Can limited area NWP and/or RCM models improve on large scales inside their domain?

    NASA Astrophysics Data System (ADS)

    Mesinger, Fedor; Veljovic, Katarina

    2017-04-01

    In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain. Note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable development of large scales improved compared to those of the driver model. This would typically include higher resolution, but obviously does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to are run using the Eta model and are driven by ECMWF 32-day ensemble members initialized 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography, a problem that seemed to many to suggest that the eta coordinate is ill suited for high resolution models. The "sloping steps" in fact represent a simple version of the cut cell scheme. The accuracy of forecasting the position of jet stream winds, chosen as those with speeds greater than 45 m/s at 250 hPa and expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales

  4. Large-scale growth evolution in the Szekeres inhomogeneous cosmological models with comparison to growth data

    NASA Astrophysics Data System (ADS)

    Peel, Austin; Ishak, Mustapha; Troxel, M. A.

    2012-12-01

    We use the Szekeres inhomogeneous cosmological models to study the growth of large-scale structure in the universe including nonzero spatial curvature and a cosmological constant. In particular, we use the Goode and Wainwright formulation of the solution, as in this form the models can be considered to represent exact nonlinear perturbations of an averaged background. We identify a density contrast in both classes I and II of the models, for which we derive growth evolution equations. By including Λ, the time evolution of the density contrast as well as kinematic quantities of interest can be tracked through the matter- and Λ-dominated cosmic eras up to the present and into the future. In class I, we consider a localized cosmic structure representing an overdensity neighboring a central void, surrounded by an almost Friedmann-Lemaître-Robertson-Walker background, while for class II, the exact perturbations exist globally. In various models of class I and class II, the growth rate is found to be stronger in the matter-dominated era than that of the standard lambda-cold dark matter (ΛCDM) cosmology, and it is suppressed at later times due to the presence of the cosmological constant. We find that there are Szekeres models able to provide a growth history similar to that of ΛCDM while requiring less matter content and nonzero spatial curvature, which speaks to the importance of including the effects of large-scale inhomogeneities in analyzing the growth of large-scale structure. Using data for the growth factor f from redshift space distortions and the Lyman-α forest, we obtain best fit parameters for class II models and compare their ability to match observations with ΛCDM. We find that there is negligible difference between best fit Szekeres models with no priors and those for ΛCDM, both including and excluding Lyman-α data. We also find that the standard growth index γ parametrization cannot be applied in a simple way to the growth in Szekeres models, so

  5. Remarks on discrete and continuous large-scale models of DNA dynamics.

    PubMed Central

    Klapper, I; Qian, H

    1998-01-01

    We present a comparison of the continuous versus discrete models of large-scale DNA conformation, focusing on issues of relevance to molecular dynamics. Starting from conventional expressions for elastic potential energy, we derive elastic dynamic equations in terms of Cartesian coordinates of the helical axis curve, together with a twist function representing the helical or excess twist. It is noted that the conventional potential energies for the two models are not consistent. In addition, we derive expressions for random Brownian forcing for the nonlinear elastic dynamics and discuss the nature of such forces in a continuous system. PMID:9591677
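
    For reference, the conventional elastic potential energy that both the discrete and continuous descriptions start from can be written schematically (a standard bend-plus-twist rod energy; the notation here is generic rather than the paper's own) as

        E = \int_0^L \left[ \tfrac{A}{2}\,\kappa(s)^2 + \tfrac{C}{2}\,\big(\Omega(s) - \omega_0\big)^2 \right] \mathrm{d}s,

    where A and C are the bending and torsional stiffnesses, \kappa(s) is the curvature of the helical axis curve, \Omega(s) is the twist density, and \omega_0 is the intrinsic twist of the relaxed helix.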

  6. Comparison of void strengthening in fcc and bcc metals: large-scale atomic-level modelling.

    SciTech Connect

    Osetskiy, Yury N; Bacon, David J

    2005-01-01

    Strengthening due to voids can be a significant radiation effect in metals. Treatment of this by elasticity theory of dislocations is difficult when atomic structure of the obstacle and dislocation is influential. In this paper, we report results of large-scale atomic-level modelling of edge dislocation-void interaction in fcc (copper) and bcc (iron) metals. Voids of up to 5 nm diameter were studied over the temperature range from 0 to 600 K. We demonstrate that atomistic modelling is able to reveal important effects, which are beyond the continuum approach. Some arise from features of the dislocation core and crystal structure, others involve dislocation climb and temperature effects.

  7. Formation and disruption of tonotopy in a large-scale model of the auditory cortex.

    PubMed

    Tomková, Markéta; Tomek, Jakub; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, Cyril

    2015-10-01

    There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100,000 Izhikevich spiking neurons of 17 different types and almost 21 million synapses, which are evolved according to Spike-Timing-Dependent Plasticity (STDP), with an architecture akin to existing observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena via analysing the activity of neuronal subtypes and testing different causal interventions into the simulation. Our model is able to produce experimental predictions on a cell type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulation during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available.
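
    A minimal sketch of the two ingredients named above, the Izhikevich membrane-potential update and a pair-based STDP weight change; the parameters are the standard regular-spiking values and generic STDP constants, not the 17 cell types or the exact settings of the model:

        import numpy as np

        def izhikevich_step(v, u, i_syn, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
            """One Euler step of the Izhikevich neuron; returns (v, u, spiked)."""
            v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + i_syn)
            u = u + dt * a * (b * v - u)
            if v >= 30.0:                          # spike: reset v and bump u
                return c, u + d, True
            return v, u, False

        def stdp_update(w, dt_spike, a_plus=0.01, a_minus=0.012, tau=20.0,
                        w_min=0.0, w_max=1.0):
            """Pair-based STDP: potentiate when pre precedes post (dt_spike > 0 ms)."""
            if dt_spike > 0:
                w += a_plus * np.exp(-dt_spike / tau)
            else:
                w -= a_minus * np.exp(dt_spike / tau)
            return float(np.clip(w, w_min, w_max))

        # Drive one regular-spiking neuron with constant current for 100 ms.
        v, u, spikes = -65.0, -13.0, 0
        for _ in range(200):                       # 200 steps of 0.5 ms
            v, u, fired = izhikevich_step(v, u, i_syn=10.0)
            spikes += fired
        print(spikes, stdp_update(0.5, dt_spike=5.0))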

  8. Nengo: a Python tool for building large-scale functional brain models

    PubMed Central

    Bekolay, Trevor; Bergstra, James; Hunsberger, Eric; DeWolf, Travis; Stewart, Terrence C.; Rasmussen, Daniel; Choo, Xuan; Voelker, Aaron Russell; Eliasmith, Chris

    2014-01-01

    Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results. PMID:24431999
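
    A minimal Nengo 2.x usage sketch of the kind described (a communication channel between two ensembles); the API calls are the documented Nengo 2 interface, though exact version details may differ:

        import numpy as np
        import nengo

        model = nengo.Network(label="communication channel")
        with model:
            stim = nengo.Node(lambda t: np.sin(2 * np.pi * t))   # 1 Hz input signal
            a = nengo.Ensemble(n_neurons=100, dimensions=1)      # encodes the input
            b = nengo.Ensemble(n_neurons=100, dimensions=1)      # receives a's estimate
            nengo.Connection(stim, a)
            nengo.Connection(a, b)                               # decoded connection
            probe = nengo.Probe(b, synapse=0.01)                 # filtered output

        with nengo.Simulator(model) as sim:
            sim.run(1.0)
        print(sim.data[probe].shape)                             # (timesteps, 1)

    Here sim.data[probe] holds the filtered, decoded estimate represented by ensemble b over the simulated second.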

  9. Lateral Flow across Multi-parallel Columns and Their Implications on Large-Scale Evapotranspiration Modeling

    NASA Astrophysics Data System (ADS)

    Sun, D.; Zhu, J.

    2011-12-01

    Evapotranspiration (ET, i.e., evaporation and plant transpiration) is an important component of the hydrological cycle, especially in semi-arid and arid environments. The representation of soil hydrologic processes and parameters at scales different from the scale at which observations and measurements are made is a major challenge. Large-scale evapotranspiration is often quantified through simulation of multiple columns of independent one-dimensional local-scale vertical flow. The soil column used in each simulation is considered homogeneous for the purpose of modeling over short depths. A main limitation is that this purely one-dimensional modeling approach does not consider interaction between columns. Lateral flows might be significant for long and narrow tubes and heterogeneous hydraulic properties and plant characteristics. This study quantifies the significance of lateral flow and examines whether using the one-dimensional modeling approach may introduce unacceptable errors for large-scale evapotranspiration simulations, by comparison with a three-dimensional modeling approach. Instead of using convenient parallel column models of independent hydrologic processes, this study simulates three-dimensional transpiration and evaporation in multiple columns which allow lateral interactions. Specifically, we examined the impact of plant rooting density, depth, pattern and other characteristics on the accuracy of this commonly used one-dimensional approximation of hydrological processes. In addition, the influence of spatial variability of hydraulic properties on the validity of the one-dimensional approach and the difference between wetting and drying processes are discussed. The results provide applicable guidance for applications of the one-dimensional approach to simulate large-scale evapotranspiration in a heterogeneous landscape.

  10. A Large-Scale, Energetic Model of Cardiovascular Homeostasis Predicts Dynamics of Arterial Pressure in Humans

    PubMed Central

    Roytvarf, Alexander; Shusterman, Vladimir

    2008-01-01

    The energetic balance of forces in the cardiovascular system is vital to the stability of blood flow to all physiological systems in mammals. Yet a large-scale theoretical model summarizing the energetic balance of major forces in a single, mathematically closed system has not been described. Although a number of computer simulations have been successfully performed with the use of analog models, the analysis of the energetic balance of forces in such models is obscured by the large number of interacting elements. Hence, the goal of our study was to develop a theoretical model that represents the large-scale energetic balance in the cardiovascular system, including the energies of the arterial pressure wave, blood flow, and the smooth muscle tone of arterial walls. Because the emphasis of our study was on tracking beat-to-beat changes in the balance of forces, we used a simplified representation of the blood pressure wave as a trapezoidal pressure pulse with a strong-discontinuity leading front. This allowed a significant reduction in the number of required parameters. Our approach has been validated using theoretical analysis, and its accuracy has been confirmed experimentally. The model predicted the dynamics of arterial pressure in human subjects undergoing physiological tests and provided insights into the relationships between arterial pressure and pressure wave velocity. PMID:18269976

  11. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    NASA Astrophysics Data System (ADS)

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-09-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.

  13. A new wind-farm parameterization for large-scale atmospheric models

    NASA Astrophysics Data System (ADS)

    Abkar, Mahdi; Porté-Agel, Fernando

    2015-04-01

    In this study, a new model is developed to parameterize the effect of wind farms in large-scale atmospheric models such as weather models. In the new model, wind turbines in a wind farm are parameterized as elevated sinks of momentum and sources of turbulence. An analytical approach is used to estimate the turbine-induced forces as well as the turbulent kinetic energy (TKE) generated by the turbines inside the atmospheric boundary layer (ABL). In addition, the proposed model can take into account not only the effect of wind-farm density, but also the effect of wind-farm layout and wind direction. The performance of the new model is tested with large-eddy simulations (LESs) of ABL flows over very large wind farms with different turbine configurations. The results show that the new model is capable of accurately predicting the turbine-induced forces as well as the TKE generated by the turbines inside the ABL.
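
    A schematic of the general approach, an elevated momentum sink plus a TKE source per grid cell, in the spirit of such parameterizations but not the authors' analytical formulation; the thrust and power coefficients and the cell dimensions are placeholder values:

        import numpy as np

        def wind_farm_tendencies(u, rho, n_turbines, rotor_diameter,
                                 cell_area, cell_depth, c_t=0.8, c_p=0.45):
            """Grid-cell tendencies from turbines treated as an elevated momentum sink
            and a TKE source (illustrative constants, not the paper's derivation)."""
            a_rotor = np.pi * (rotor_diameter / 2.0) ** 2
            # Total thrust force exerted by the turbines in this cell (N).
            thrust = 0.5 * rho * c_t * a_rotor * u**2 * n_turbines
            # Momentum tendency (m s-2): thrust per unit mass of air in the layer.
            du_dt = -thrust / (rho * cell_area * cell_depth)
            # Energy extracted but not converted to power is assumed to feed TKE (m2 s-3).
            dtke_dt = 0.5 * (c_t - c_p) * a_rotor * u**3 * n_turbines / (cell_area * cell_depth)
            return du_dt, dtke_dt

        print(wind_farm_tendencies(u=10.0, rho=1.2, n_turbines=4, rotor_diameter=100.0,
                                   cell_area=1.0e7, cell_depth=40.0))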

  14. Design of a V/STOL propulsion system for a large-scale fighter model

    NASA Technical Reports Server (NTRS)

    Willis, W. S.

    1981-01-01

    Modifications were made to the existing Large-Scale STOL fighter model to simulate a V/STOL configuration. Modifications include the substitution of two-dimensional lift/cruise exhaust nozzles in the nacelles and the addition of a third J97 engine in the fuselage to supply a remote exhaust nozzle simulating a Remote Augmented Lift System. A preliminary design of the inlet and exhaust ducting for the third engine was developed, and a detailed design was completed of the hot exhaust ducting and remote nozzle.

  15. Large-scale shell-model calculations for 32-39P isotopes

    NASA Astrophysics Data System (ADS)

    Srivastava, P. C.; Hirsch, J. G.; Ermamatov, M. J.; Kota, V. K. B.

    2012-10-01

    In this work, the structure of the 32-39P isotopes is described in the framework of state-of-the-art large-scale shell-model calculations, employing the code ANTOINE with three modern effective interactions: SDPF-U, SDPF-NR, and the extended pairing plus quadrupole-quadrupole-type forces with inclusion of the monopole interaction (EPQQM). Protons are restricted to fill the sd shell, while neutrons are active in the sd-pf valence space. Results for positive and negative level energies and electromagnetic observables are compared with the available experimental data.

  16. Aerodynamic characteristics of a large scale model with a swept wing and augmented jet flap

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.

    1971-01-01

    Data from tests of a large-scale swept augmentor wing model in the 40- by 80-foot wind tunnel are presented. The data include longitudinal characteristics with and without a horizontal tail, as well as results of a preliminary investigation of lateral-directional characteristics. The augmentor flap deflection was varied from 0 deg to 70.6 deg at isentropic jet thrust coefficients of 0 to 1.47. The tests were made at Reynolds numbers from 2.43 million to 4.1 million.

  17. Sensitivity analysis of key components in large-scale hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods in key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  18. Fast 3-D large-scale gravity and magnetic modeling using unstructured grids and an adaptive multilevel fast multipole method

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Tang, Jingtian; Kalscheuer, Thomas; Maurer, Hansruedi

    2017-01-01

    A novel fast and accurate algorithm is developed for large-scale 3-D gravity and magnetic modeling problems. An unstructured grid discretization is used to approximate sources with arbitrary mass and magnetization distributions. A novel adaptive multilevel fast multipole (AMFM) method is developed to reduce the modeling time. An observation octree is constructed on a set of arbitrarily distributed observation sites, while a source octree is constructed on a source tetrahedral grid. A novel characteristic is the independence between the observation octree and the source octree, which simplifies the implementation of different survey configurations such as airborne and ground surveys. Two synthetic models, a cubic model and a half-space model with mountain-valley topography, are tested. As compared to analytical solutions of gravity and magnetic signals, excellent agreements of the solutions verify the accuracy of our AMFM algorithm. Finally, our AMFM method is used to calculate the terrain effect on an airborne gravity data set for a realistic topography model represented by a triangular surface retrieved from a digital elevation model. Using 16 threads, more than 5800 billion interactions between 1,002,001 observation points and 5,839,830 tetrahedral elements are computed in 453.6 s. A traditional first-order Gaussian quadrature approach requires 3.77 days. Hence, our new AMFM algorithm not only can quickly compute the gravity and magnetic signals for complicated problems but also can substantially accelerate the solution of 3-D inversion problems.

  19. Assimilative Modeling of Large-Scale Equatorial Plasma Trenches Observed by C/NOFS

    NASA Astrophysics Data System (ADS)

    Su, Y.; Retterer, J. M.; de La Beaujardiere, O.; Burke, W. J.; Roddy, P. A.; Pfaff, R. F.; Hunton, D. E.

    2009-12-01

    Low-latitude plasma irregularities commonly observed during post-sunset local times have been studied extensively by ground-based measurements such as coherent and incoherent scatter radars and ionosondes, as well as by satellite observations. The pre-reversal enhancement in the upward plasma drift due to eastward electric fields has been identified as the primary cause of these irregularities. Reports of plasma depletions at post-midnight and early morning local times are scarce and are typically limited to storm time conditions. Such dawn plasma depletions were frequently observed by C/NOFS in June 2008 [de La Beaujardière et al., 2009]. We are able to qualitatively reproduce the large-scale density depletion observed by the Planar Langmuir Probe (PLP) on June 17, 2008 [Su et al., 2009], based on the assimilative physics-based ionospheric model (PBMOD) using available electric field data obtained from the Vector Electric Field Instrument (VEFI) as the model input. In comparison, no plasma depletion or irregularity is obtained from the climatology version of our model when large upward drift velocities caused by observed eastward electric fields were absent. In this presentation, we extend our study to the entire month of June 2008 to exercise the forecast capability of large-scale density trenches by PBMOD with available VEFI data. References: Geophys. Res. Lett., 36, L00C06, doi:10.1029/2009GL038884, 2009; Geophys. Res. Lett., 36, L00C02, doi:10.1029/2009GL038946, 2009.

  20. Changes of traffic characteristics after large-scale aggregation in 3Tnet: modeling, analysis, and evaluation

    NASA Astrophysics Data System (ADS)

    Yuan, Chi; Huang, Junbin; Li, Zhengbin; He, Yongqi; Xu, Anshi

    2007-11-01

    Understanding network traffic behavior is essential for all aspects of network design and operation, e.g. component design, protocol design, provisioning, operations, administration and maintenance (OAM). A careful study of traffic behavior can lead to improvements in underlying protocols to attain greater efficiencies and higher performance. Many studies have shown that traffic in Ethernet and other networks, in both local and wide area networks, exhibits properties of self-similarity. Several empirical studies on network traffic indicate that this traffic is self-similar in nature. However, the network modeling methods used in current networks have been primarily designed and analyzed under the assumption of the traditional Poisson arrival process. These "Poisson-like" models suggest that network traffic is smooth and are therefore inherently unable to capture the self-similar characteristics of traffic. In this paper, after introducing China's high-performance broadband information network (3Tnet), an aggregation model at the access convergence router (ACR) is proposed and analyzed for 3Tnet. We studied the impact of large-scale aggregation applied at the edge of 3Tnet in terms of the self-similarity level observed in the output traffic in the presence of self-similar input traffic. Two formulas were presented to describe the changes of the Hurst parameter. Using the OPNET software simulator, the changes of traffic characteristics after large-scale aggregation in 3Tnet were studied extensively. The theoretical analysis results were consistent with the simulation results.
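
    As a rough illustration of the self-similarity measure discussed above, the sketch below estimates a Hurst parameter from a traffic trace with the common variance-time method; the synthetic Poisson trace, block sizes and variable names are hypothetical and are not taken from the paper (for independent traffic the estimate should come out near 0.5).

    ```python
    # Variance-time (aggregated variance) estimate of the Hurst parameter:
    # for self-similar traffic, Var of the m-aggregated series scales as m^(2H - 2).
    import numpy as np

    rng = np.random.default_rng(0)
    trace = rng.poisson(100, size=2**16).astype(float)  # packet counts per time slot (synthetic)

    ms = [2**k for k in range(1, 9)]
    variances = []
    for m in ms:
        blocks = trace[: len(trace) // m * m].reshape(-1, m).mean(axis=1)  # m-aggregated series
        variances.append(blocks.var())

    slope = np.polyfit(np.log(ms), np.log(variances), 1)[0]
    H = 1.0 + slope / 2.0  # variance ~ m^(2H - 2)
    print("estimated Hurst parameter:", H)
    ```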

  1. An assembly model for simulation of large-scale ground water flow and transport.

    PubMed

    Huang, Junqi; Christ, John A; Goltz, Mark N

    2008-01-01

    When managing large-scale ground water contamination problems, it is often necessary to model flow and transport using finely discretized domains--for instance (1) to simulate flow and transport near a contamination source area or in the area where a remediation technology is being implemented; (2) to account for small-scale heterogeneities; (3) to represent ground water-surface water interactions; or (4) some combination of these scenarios. A model with a large domain and fine-grid resolution will need extensive computing resources. In this work, a domain decomposition-based assembly model implemented in a parallel computing environment is developed, which will allow efficient simulation of large-scale ground water flow and transport problems using domain-wide grid refinement. The method employs common ground water flow (MODFLOW) and transport (RT3D) simulators, enabling the solution of almost all commonly encountered ground water flow and transport problems. The basic approach partitions a large model domain into any number of subdomains. Parallel processors are used to solve the model equations within each subdomain. Schwarz iteration is applied to match the flow solution at the subdomain boundaries. For the transport model, an extended numerical array is implemented to permit the exchange of dispersive and advective flux information across subdomain boundaries. The model is verified using a conventional single-domain model. Model simulations demonstrate that the proposed model operated in a parallel computing environment can result in considerable savings in computer run times (between 50% and 80%) compared with conventional modeling approaches and may be used to simulate grid discretizations that were formerly intractable.
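
    As a rough illustration of the Schwarz iteration mentioned above, the sketch below applies overlapping subdomain solves to a 1-D Poisson problem and iterates until the interface values agree. The PDE, grid, subdomain split and tolerance are hypothetical stand-ins for the MODFLOW/RT3D subdomain solves, not the assembly model itself.

    ```python
    # Multiplicative Schwarz iteration on -u'' = f, u(0) = u(1) = 0,
    # with two overlapping subdomains matched at their boundaries.
    import numpy as np

    def solve_tridiag(n, h, f, left, right):
        """Direct solve of -u'' = f on n interior points with Dirichlet values left/right."""
        A = (np.diag(np.full(n, 2.0))
             + np.diag(np.full(n - 1, -1.0), 1)
             + np.diag(np.full(n - 1, -1.0), -1))
        b = h**2 * f.copy()
        b[0] += left
        b[-1] += right
        return np.linalg.solve(A, b)

    N = 101                      # global grid points on [0, 1]
    h = 1.0 / (N - 1)
    f = np.ones(N)               # source term
    u = np.zeros(N)              # global iterate, boundary values stay zero

    i_left_end, i_right_start = 60, 40   # overlapping subdomains [0, 60] and [40, 100]
    for it in range(200):
        u_old = u.copy()
        # subdomain 1 uses the current value of u at its right interface
        u[1:i_left_end] = solve_tridiag(i_left_end - 1, h, f[1:i_left_end], u[0], u[i_left_end])
        # subdomain 2 uses the freshly updated value at its left interface
        u[i_right_start + 1:N - 1] = solve_tridiag(N - 2 - i_right_start, h,
                                                   f[i_right_start + 1:N - 1],
                                                   u[i_right_start], u[N - 1])
        if np.max(np.abs(u - u_old)) < 1e-10:   # interface values have converged
            break
    ```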

  2. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaption measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and local detail that influences their temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry, and the representation of hydrologic processes influencing the surface water balance such as open water evaporation from inundated water and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation including its spatio-temporal variations at the large scale, yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D FM being capable of representing the model domain as a flexible mesh, model accuracy is only improved at relevant locations (river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is thereby schematized by 2D floodplains, being derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with 1way-coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to 2way-coupling to fully represent evaporation

  3. LipidWrapper: An Algorithm for Generating Large-Scale Membrane Models of Arbitrary Geometry

    PubMed Central

    Durrant, Jacob D.; Amaro, Rommie E.

    2014-01-01

    As ever larger and more complex biological systems are modeled in silico, approximating physiological lipid bilayers with simple planar models becomes increasingly unrealistic. In order to build accurate large-scale models of subcellular environments, models of lipid membranes with carefully considered, biologically relevant curvature will be essential. In the current work, we present a multi-scale utility called LipidWrapper capable of creating curved membrane models with geometries derived from various sources, both experimental and theoretical. To demonstrate its utility, we use LipidWrapper to examine an important mechanism of influenza virulence. A copy of the program can be downloaded free of charge under the terms of the open-source FreeBSD License from http://nbcr.ucsd.edu/lipidwrapper. LipidWrapper has been tested on all major computer operating systems. PMID:25032790

  4. A New Statistically based Autoconversion rate Parameterization for use in Large-Scale Models

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Zhang, Junhua; Lohmann, Ulrike

    2002-01-01

    The autoconversion rate is a key process for the formation of precipitation in warm clouds. In climate models, physical processes such as autoconversion rate, which are calculated from grid mean values, are biased, because they do not take subgrid variability into account. Recently, statistical cloud schemes have been introduced in large-scale models to account for partially cloud-covered grid boxes. However, these schemes do not include the in-cloud variability in their parameterizations. In this paper, a new statistically based autoconversion rate considering the in-cloud variability is introduced and tested in three cases using the Canadian Single Column Model (SCM) of the global climate model. The results show that the new autoconversion rate improves the model simulation, especially in terms of liquid water path in all three case studies.
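
    A toy illustration of the bias described above (not the paper's parameterization): because the autoconversion rate is a convex, nonlinear function of cloud liquid water, the rate evaluated at the grid-mean value differs from the mean of the rate over an assumed sub-grid distribution. The gamma distribution, the rate formula and the numbers below are hypothetical.

    ```python
    # Rate-of-the-mean vs. mean-of-the-rate for a nonlinear autoconversion rate.
    import numpy as np

    rng = np.random.default_rng(0)
    q_mean = 0.3e-3  # grid-mean cloud liquid water (kg/kg), hypothetical
    # sub-grid PDF with the same mean as the grid-box value
    q = rng.gamma(shape=2.0, scale=q_mean / 2.0, size=100_000)

    def autoconv(ql):
        # convex power-law rate, Khairoutdinov-Kogan-like in form (droplet number dependence dropped)
        return 1350.0 * ql ** 2.47

    print("rate of the mean:", autoconv(q_mean))
    print("mean of the rate:", autoconv(q).mean())  # larger, by Jensen's inequality
    ```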

  6. LASSIE: simulating large-scale models of biochemical systems on GPUs.

    PubMed

    Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo

    2017-05-10

    Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods for achieving a thorough understanding of the emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute across the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, therefore reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE
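
    A small CPU-side sketch of the workflow the abstract describes: turning a reaction-based, mass-action model into an ODE system and integrating it with a stiffness-switching solver. Here SciPy's LSODA stands in for LASSIE's Runge-Kutta-Fehlberg/BDF switching, the GPU parallelization is not reproduced, and the species, reactions and rate constants are hypothetical.

    ```python
    # Mass-action ODE generation from a reaction list, integrated with LSODA.
    import numpy as np
    from scipy.integrate import solve_ivp

    species = ["A", "B", "C"]
    # each reaction: (reactant stoichiometry, product stoichiometry, rate constant)
    reactions = [
        ({"A": 1, "B": 1}, {"C": 1}, 1.0e-2),          # A + B -> C
        ({"C": 1},         {"A": 1, "B": 1}, 5.0e-3),  # C -> A + B
    ]
    idx = {s: i for i, s in enumerate(species)}

    def rhs(t, y):
        dydt = np.zeros_like(y)
        for reactants, products, k in reactions:
            rate = k
            for s, nu in reactants.items():
                rate *= y[idx[s]] ** nu      # mass-action rate law
            for s, nu in reactants.items():
                dydt[idx[s]] -= nu * rate
            for s, nu in products.items():
                dydt[idx[s]] += nu * rate
        return dydt

    y0 = np.array([100.0, 80.0, 0.0])
    sol = solve_ivp(rhs, (0.0, 50.0), y0, method="LSODA", rtol=1e-8)
    print(sol.y[:, -1])
    ```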

  7. Calibration of a large-scale semi-distributed hydrological model for the continental United States

    NASA Astrophysics Data System (ADS)

    Li, S.; Lohmann, D.

    2011-12-01

    Recent major flood losses have raised the awareness of flood risk worldwide. In large-scale (e.g., country-wide) flood simulation, a semi-distributed hydrological model shows its advantage in capturing the spatial heterogeneity of hydrological characteristics within a basin at relatively low computational cost. However, it is still very challenging to calibrate such a model over a large scale and a wide variety of hydro-climatic conditions. The objectives of this study are (1) to compare the effectiveness of state-of-the-art evolutionary multiobjective algorithms in calibrating a semi-distributed hydrological model used in the RMS flood loss model; and (2) to calibrate the model over the entire continental United States. First, the computational efficiency of the following four algorithms is evaluated: the Non-Dominated Sorted Genetic Algorithm II (NSGAII), the Strength Pareto Evolutionary Algorithm 2 (SPEA2), the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII), and the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (ɛMOEA). The test was conducted on four river basins with a wide variety of hydro-climatic conditions in the US. The optimization objectives include RMSE and high-flow RMSE. Results of the analysis indicate that NSGAII has the best performance in terms of effectiveness and stability. We then applied a modified version of NSGAII to calibrate the hydrological model over the entire continental US. Comparison with observations and published data shows that the performance of the calibrated model is good overall. This well-calibrated model allows more accurate modeling of flood risk and loss in the continental United States. Furthermore, it will allow underwriters to better manage the exposure.
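
    As a minimal sketch of the calibration set-up named above, the code below evaluates the two objectives (overall RMSE and high-flow RMSE) for candidate parameter sets and extracts the non-dominated (Pareto) front that algorithms such as NSGAII evolve; the data, the 0.9 quantile threshold and all names are hypothetical.

    ```python
    # Two calibration objectives and a brute-force Pareto-front filter (both minimized).
    import numpy as np

    def objectives(sim, obs, high_flow_quantile=0.9):
        rmse = np.sqrt(np.mean((sim - obs) ** 2))
        hi = obs >= np.quantile(obs, high_flow_quantile)
        rmse_high = np.sqrt(np.mean((sim[hi] - obs[hi]) ** 2))
        return np.array([rmse, rmse_high])

    def pareto_front(points):
        """Indices of non-dominated objective vectors (minimization in every objective)."""
        keep = []
        for i, p in enumerate(points):
            dominated = any(np.all(q <= p) and np.any(q < p)
                            for j, q in enumerate(points) if j != i)
            if not dominated:
                keep.append(i)
        return keep

    # hypothetical objective vectors for four candidate parameter sets
    pts = [np.array([1.2, 2.0]), np.array([1.0, 2.5]), np.array([1.5, 2.6]), np.array([0.9, 3.0])]
    print(pareto_front(pts))  # pts[2] is dominated by pts[0] and drops out
    ```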

  8. Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks

    PubMed Central

    Vértes, Petra E.; Alexander-Bloch, Aaron; Bullmore, Edward T.

    2014-01-01

    Rich clubs arise when nodes that are ‘rich’ in connections also form an elite, densely connected ‘club’. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour. PMID:25180309

  9. Large scale structure simulations of inhomogeneous Lemaître-Tolman-Bondi void models

    NASA Astrophysics Data System (ADS)

    Alonso, David; García-Bellido, Juan; Haugbølle, Troels; Vicente, Julián

    2010-12-01

    We perform numerical simulations of large scale structure evolution in an inhomogeneous Lemaître-Tolman-Bondi (LTB) model of the Universe. We follow the gravitational collapse of a large underdense region (a void) in an otherwise flat matter-dominated Einstein-de Sitter model. We observe how the (background) density contrast at the center of the void grows to be of order one, and show that the density and velocity profiles follow the exact nonlinear LTB solution to the full Einstein equations for all but the most extreme voids. This result seems to contradict previous claims that fully relativistic codes are needed to properly handle the nonlinear evolution of large scale structures, and that local Newtonian dynamics with an explicit expansion term is not adequate. We also find that the (local) matter density contrast grows with the scale factor in a way analogous to that of an open universe with a value of the matter density ΩM(r) corresponding to the appropriate location within the void.

  10. Comparison of the KAMELEON fire model to large-scale open pool fire data

    SciTech Connect

    Nicolette, V.F.; Gritzo, L.A.; Holen, J.; Magnussen, B.F.

    1994-06-01

    A comparison of the KAMELEON Fire model to large-scale open pool fire experimental data is presented. The model was used to calculate large-scale JP-4 pool fires with and without wind, and with and without large objects in the fire. The effect of wind and large objects on the fire environment is clearly seen. For the pool fire calculations without any object in the fire, excellent agreement is seen in the location of the oxygen-starved region near the pool center. Calculated flame temperatures are about 200--300 K higher than measured. This results in higher heat fluxes back to the fuel pool and higher fuel evaporation rates (by a factor of 2). Fuel concentrations at lower elevations and peak soot concentrations are in good agreement with data. For pool fire calculations with objects, similar trends in the fire environment are observed. Excellent agreement is seen in the distribution of the heat flux around a cylindrical calorimeter in a rectangular pool with wind effects. The magnitude of the calculated heat flux to the object is high by a factor of 2 relative to the test data, due to the higher temperatures calculated. For the case of a large flat plate adjacent to a circular pool, excellent qualitative agreement is seen in the predicted and measured flame shapes as a function of wind.

  11. Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks.

    PubMed

    Vértes, Petra E; Alexander-Bloch, Aaron; Bullmore, Edward T

    2014-10-05

    Rich clubs arise when nodes that are 'rich' in connections also form an elite, densely connected 'club'. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour.

  12. Robust linear equation dwell time model compatible with large scale discrete surface error matrix.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2015-04-01

    The linear equation dwell time model can translate the 2D convolution process of material removal during subaperture polishing into a more intuitional expression, and may provide relatively fast and reliable results. However, the accurate solution of this ill-posed equation is not so easy, and its practicability for a large scale surface error matrix is still limited. This study first solves this ill-posed equation by Tikhonov regularization and the least square QR decomposition (LSQR) method, and automatically determines an optional interval and a typical value for the damped factor of regularization, which are dependent on the peak removal rate of tool influence functions. Then, a constrained LSQR method is presented to increase the robustness of the damped factor, which can provide more consistent dwell time maps than traditional LSQR. Finally, a matrix segmentation and stitching method is used to cope with large scale surface error matrices. Using these proposed methods, the linear equation model becomes more reliable and efficient in practical engineering.
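
    A compact sketch of the regularized solve described above, assuming the removal matrix and surface error map are already assembled: SciPy's damped LSQR plays the role of the Tikhonov/LSQR combination, with the damping value, matrix sizes and non-negativity clipping as hypothetical choices (the paper ties the damping factor to the peak removal rate of the tool influence function).

    ```python
    # Damped (Tikhonov-style) LSQR solve of R t = e for the dwell-time map t.
    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import lsqr

    m, n = 2000, 1500
    R = sparse_random(m, n, density=0.01, random_state=0)  # stand-in removal (convolution) matrix
    e = np.random.default_rng(0).random(m)                 # surface error to be removed

    damp = 1e-2                                            # regularization strength (hypothetical)
    t = lsqr(R, e, damp=damp, iter_lim=500)[0]             # flattened dwell-time map
    t = np.clip(t, 0.0, None)                              # dwell times must be non-negative
    ```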

  13. Acoustic characteristics of large-scale STOL model at forward speed

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Aoyagi, K.; Koenig, D. G.

    1972-01-01

    Wind-tunnel investigations of the acoustic characteristics of the externally blown jet flap (EBF) and augmentor wing STOL concepts are discussed. The large-scale EBF model was equipped with a triple-slotted flap blown by four JT15D turbofan engines with circular, coannular exhaust nozzles. The large-scale augmentor wing model was equipped with an unlined augmentor blown by a slot primary nozzle. The effects of airspeed and angle of attack on the acoustics of the EBF were small. At a forward speed of 60 knots, the impingement noise of the landing flap was approximately 2 dB lower than in the static tests. Angle of attack increased the impingement noise by approximately 0.1 decibels per degree. Flap deflection had a greater effect on the acoustics of the augmentor wing than did airspeed. For a nozzle pressure ratio of 1.9, the peak perceived noise level of the landing flap was 3 to 5 PNdB higher than that of the takeoff flap. The total sound power was also significantly higher for landing, indicating that turning in the augmentor generated acoustic energy. Airspeed produced a small aft shift in acoustic directivity with no significant change in the peak perceived noise levels or sound power levels.

  14. Acoustic characteristics of large-scale STOL models at forward speed

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Aoyagi, K.; Koenig, D. G.

    1972-01-01

    Wind-tunnel investigations of the acoustic characteristics of the externally blown jet flap (EBF) and augmentor wing STOL concepts are discussed. The large-scale EBF model was equipped with a triple-slotted flap blown by four JT15D turbofan engines with circular, coannular exhaust nozzles. The large-scale augmentor wing model was equipped with an unlined augmentor blown by a slot primary nozzle. The effects of airspeed and angle of attack on the acoustics of the EBF were small. Flap deflection had a greater effect on the acoustics of the augmentor wing than did airspeed. The total sound power was also significantly higher for landing indicating that turning in the augmentor generated acoustic energy. Airspeed produced a small aft shift in acoustic directivity with no significant change in the peak perceived noise levels or sound power levels. Small-scale research of the acoustics for the augmentor wing has shown that by blowing an acoustically treated augmentor with a lobed primary nozzle, the 95-PNdb noise level goal can be achieved or surpassed.

  15. Modeling the interdecadal eurasian snow cover variations influenced by large-scale atmospheric modes

    NASA Astrophysics Data System (ADS)

    Shmakin, A. B.; Popova, V. V.

    2003-04-01

    The variations of snow water equivalent (SWE) in Eurasia during the last 100 years have been evaluated using a simplified model of heat/water exchange at the land surface. The model is designed for monthly time step, and its equations are written in deviations from average climatic regime. The forcing anomalies of meteorological parameters for 20th century at each grid cell were specified according to large-scale atmospheric indices (such as NAO, PNA, etc.) and regressions between the indices and the meteorological variables. The results were tested against the data observed in Russia during several decades at regular stations and in their vicinity in typical environment. The observed data, Former Soviet Union Hydrological Snow Surveys, were obtained from the National Snow and Ice Data Center (NSIDC), University of Colorado at Boulder. The main features of SWE spatial distribution and its interdecadal variance were reproduced satisfactorily, but the errors were greater in the regions with poorer correlation between atmospheric variables and circulation indices. The regions located closer to Atlantic and, to lesser extent, Pacific coast, demonstrated better agreement with observed data. The large-scale atmospheric modes most responsible for the Eurasian SWE variations at decadal time scale are NAO and intensity of Aleutian low. The study was supported by the Russian Foundation for Basic Research (grants 01-05-64707 and 01-05-64395).

  16. A case study of large-scale structure in a 'hot' model universe

    NASA Technical Reports Server (NTRS)

    Centrella, Joan M.; Gallagher, John S., III; Melott, Adrian L.; Bushouse, Howard A.

    1988-01-01

    Large-scale structure is studied in an Omega(0) = 1 model universe filled with 'hot' dark matter. A particle mesh computer code is used to calculate the development of gravitational instabilities in 64-cubed mass clouds on a 64-cubed three-dimensional grid over an expansion factor of about 1000. The present epoch is identified by matching the slope of the model particle-particle two-point correlation function with that obtained from observations of galaxies, and the model then corresponds to a cubical sample of the universe of about 105/h Mpc on a side. Properties of the simulated universe are investigated by casting the model quantities into observer's coordinates and comparing the results with observations of the spatial and velocity distributions of luminous matter. It is concluded based on simple arguments that current limits on the time of galaxy formation do not rule out 'hot' dark matter.

  17. Aerodynamic force measurement on a large-scale model in a short duration test facility

    SciTech Connect

    Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.

    2005-03-01

    A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.

  18. Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models

    NASA Astrophysics Data System (ADS)

    Si, Lisha; Liao, Xiaoyun; Zhou, Nengji

    2016-12-01

    With the developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking the random-field Ising model with a driving field and the driven bond-diluted Ising model as examples. In comparison with the usual Monte Carlo method, the EMC algorithm exhibits greater efficiency of the simulations. Based on the short-time dynamic scaling form, both the transition field and critical exponents of the depinning transition are determined accurately via the large-scale simulations with the lattice size up to L = 8912, significantly refining the results in earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled with the exponents β = 0.304(5) , ν = 1.32(3) , z = 1.12(1) , and ζ = 0.90(1) , quite different from that of the quenched Edwards-Wilkinson equation.
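
    For orientation, the sketch below runs a plain Metropolis simulation of the driven random-field Ising model studied above; it is not the paper's extended Monte Carlo algorithm and does not perform the short-time dynamic scaling analysis. The lattice size, temperature, driving field and disorder strength are hypothetical.

    ```python
    # Metropolis sweeps of a 2-D random-field Ising model with a driving field H.
    import numpy as np

    rng = np.random.default_rng(1)
    L, T, H, Delta = 64, 1.0, 1.3, 1.5           # lattice size, temperature, driving field, disorder
    spins = -np.ones((L, L))                     # start in the unflipped (pinned) phase
    h_rand = rng.uniform(-Delta, Delta, (L, L))  # quenched random fields

    def sweep(spins):
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
                  spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
            dE = 2.0 * spins[i, j] * (nn + H + h_rand[i, j])  # energy cost of flipping spin (i, j)
            if dE <= 0.0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    for step in range(100):
        sweep(spins)
    print("magnetization:", spins.mean())  # grows with time above the depinning field
    ```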

  19. Large-scale shell-model calculations on the spectroscopy of N <126 Pb isotopes

    NASA Astrophysics Data System (ADS)

    Qi, Chong; Jia, L. Y.; Fu, G. J.

    2016-07-01

    Large-scale shell-model calculations are carried out in the model space including the neutron-hole orbitals 2p1/2, 1f5/2, 2p3/2, 0i13/2, 1f7/2, and 0h9/2 to study the structure and electromagnetic properties of neutron-deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in the isotopes 194-206Pb. The lighter isotopes are calculated with an importance-truncation approach constructed based on the monopole Hamiltonian. The full shell-model results also agree well with our generalized seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying 0+ and 2+ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configuration in this region.

  20. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    SciTech Connect

    Jakob, Christian

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  1. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-03-01

    The Prediction in Ungauged Basins (PUB) scientific initiative (2003-2012, by IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines for making predictions in catchments without observed runoff data. At present, there is increased interest in applying catchment models to large domains and large data samples in a multi-basin manner. However, such modelling involves several sources of uncertainty, which may be caused by imperfect input data, particularly regional and global databases. This may lead to inaccurate model parameterisation and incomplete process understanding. In order to bridge the gap between the best practices for single catchments and large-scale hydrology, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). Using examples from a recent HYPE hydrological model set-up on the Indian subcontinent, named India-HYPE v1.0, we explore the recommendations, indicate challenges and recommend quality checks to avoid erroneous assumptions. We identify the obstacles, ways to overcome them and describe the work process related to: (a) errors and inconsistencies in global databases, unknown human impacts and poor data quality; (b) robust approaches to identify parameters using a stepwise calibration approach, remote sensing data, expert knowledge and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong hydro-climatic gradient over the subcontinent, a single model can adequately describe the spatial variability in dominant hydrological processes at the catchment scale. Eventually, during calibration of India-HYPE, the median Kling-Gupta Efficiency for
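
    The abstract ends with the Kling-Gupta Efficiency used to score the calibration; for reference, a minimal sketch of the standard KGE, which combines correlation, variability ratio and bias ratio into a single score, is given below.

    ```python
    # Kling-Gupta Efficiency of simulated vs. observed flows (1.0 is a perfect match).
    import numpy as np

    def kge(sim, obs):
        r = np.corrcoef(sim, obs)[0, 1]      # linear correlation
        alpha = np.std(sim) / np.std(obs)    # variability ratio
        beta = np.mean(sim) / np.mean(obs)   # bias ratio
        return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)
    ```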

  2. Realistic modeling of neurons and networks: towards brain simulation.

    PubMed

    D'Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    2013-01-01

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field.

  3. Realistic modeling of neurons and networks: towards brain simulation

    PubMed Central

    D’Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    Summary Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field. PMID:24139652

  4. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction and the interaction between the government, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European Member states, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  5. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity

    PubMed Central

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272

  6. Towards large scale modelling of wetland water dynamics in northern basins.

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Sapriza, G.; Stone, L.; Davison, B.; Pietroniro, A.; Quinton, W. L.; Spence, C.; Wheater, H. S.

    2015-12-01

    Understanding the hydrological behaviour of low-topography, wetland-dominated sub-arctic areas is a major issue for the improvement of large-scale hydrological models. These wet organic soils cover a large extent of northern North America and have a considerable impact on the rainfall-runoff response of a catchment. Moreover, their strong interactions with the lower atmosphere and the carbon cycle make these areas a noteworthy component of the regional climate system. In the framework of the Changing Cold Regions Network (CCRN), this study aims at providing a model for wetland water dynamics that can be used for large-scale applications in cold regions. The modelling system has two main components: (a) the simulation of surface runoff using the Modélisation Environmentale Communautaire - Surface and Hydrology (MESH) land surface model driven with several gridded atmospheric datasets, and (b) the routing of surface runoff using the WATROUTE channel scheme. As a preliminary study, we focus on two small representative study basins in northern Canada: Scotty Creek in the lower Liard River valley of the Northwest Territories and Baker Creek, located a few kilometers north of Yellowknife. Both areas present characteristic landscapes dominated by a series of peat plateaus, channel fens, small lakes and bogs. Moreover, they constitute important fieldwork sites with detailed data to support our modelling study. The challenge for our new wetland model is to represent the hydrological functioning of the various landscape units encountered in those watersheds and their interactions using simple numerical formulations that can later be extended to larger basins such as the Mackenzie River basin. Using observed datasets, the performance of the model in simulating the temporal evolution of hydrological variables such as water table depth, frost table depth and discharge is assessed.

  7. Diversity in the representation of large-scale circulation associated with ENSO-Indian summer monsoon teleconnections in CMIP5 models

    NASA Astrophysics Data System (ADS)

    Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.

    2017-03-01

    Realistic simulation of large-scale circulation patterns associated with El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of the globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. The CMIP5 models have been classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimated El Niño-ISM teleconnections and group 3 (G3) models underestimated them, whereas these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Low-level anticyclonic circulation anomalies over the southeastern TIO and the western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition, large-scale upper-level convergence together with lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, an unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Further, large-scale circulation anomalies over the Pacific and ISM region are misrepresented during El Niño years in G3 models. Too strong upper-level convergence away from the Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most G3 models in which ENSO-ISM teleconnections are

  8. Pangolin v1.0, a conservative 2-D transport model for large scale parallel calculation

    NASA Astrophysics Data System (ADS)

    Praga, A.; Cariolle, D.; Giraud, L.

    2014-07-01

    To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric transport model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach was chosen both for mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude-longitude grid, Pangolin uses a quasi-area-preserving reduced latitude-longitude grid. The features of the regular grid are exploited to improve parallel performance, and a custom domain decomposition algorithm is presented. To assess the validity of the transport scheme, its results are compared with state-of-the-art models on analytical test cases. Finally, parallel performance is shown in terms of strong scaling and confirms efficient scalability up to a few hundred cores.
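
    A 1-D sketch of the conservative finite-volume principle behind the transport scheme described above: cell averages change only through fluxes at cell interfaces, so total tracer mass is preserved to machine precision. The upwind flux, grid, wind and CFL number are hypothetical and far simpler than Pangolin's reduced latitude-longitude scheme.

    ```python
    # Conservative upwind finite-volume advection on a periodic 1-D domain.
    import numpy as np

    n, u, dx, dt = 200, 1.0, 1.0, 0.5     # cells, wind speed, cell width, time step (CFL = 0.5)
    q = np.zeros(n)
    q[80:120] = 1.0                       # initial tracer distribution

    def step(q):
        flux = u * dt / dx * q            # upwind flux leaving each cell (u > 0)
        return q - flux + np.roll(flux, 1)  # what leaves cell i enters cell i+1 (periodic)

    total0 = q.sum()
    for _ in range(400):
        q = step(q)
    assert abs(q.sum() - total0) < 1e-9   # total mass is preserved
    ```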

  9. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel.

    PubMed

    Yuan, Liming; Smith, Alex C

    2015-05-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect.

  10. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel

    PubMed Central

    Yuan, Liming; Smith, Alex C.

    2015-01-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect. PMID:26190905

  11. A simple simulation model of tuberculosis epidemiology for use without large-scale computers.

    PubMed

    Azuma, Y

    1975-01-01

    A large-scale computer service is not always available in many countries with tuberculosis problems needing epidemiological analysis. To facilitate work in such countries, a simple epidemiological model was made to calculate annual trends in the prevalence and incidence of tuberculosis and its infection, in tuberculosis mortality, and in BCG coverage, using average parameter values not specific for age groups or birth year cohorts. To test its approximation capabilities and limits, the model was applied to epidemiological data from Japan, where sufficient information was available from repeated nation-wide sample surveys and national statistics. The approximation was found to be satisfactory within certain limits. The model is best used with a desk-top computer, but the calculations can be performed with a small calculator or even by hand.

  12. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks

    PubMed Central

    Chen, Wanming; Mei, Tao; Meng, Max Q.-H.; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-01-01

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scales. A localization algorithm based on a spring model (LASM) method is proposed to reduce the computational complexity, while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs will force the particles move to the original positions, the node positions correspondingly, from the randomly set positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of the neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite of the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated with the network scale size. PMID:27879793
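
    A toy 2-D sketch of the spring-model idea described above: a blind node is pulled by virtual springs whose rest lengths are the measured distances to neighbouring nodes, and it relaxes toward the position consistent with those distances. The anchor coordinates, ranges, step size and iteration count are hypothetical.

    ```python
    # Spring-relaxation localization of one blind node from ranges to three neighbours.
    import numpy as np

    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # neighbours with known positions
    true_pos = np.array([4.0, 3.0])
    measured = np.linalg.norm(anchors - true_pos, axis=1)        # measured ranges (spring rest lengths)

    pos = np.array([8.0, 8.0])                                   # arbitrary initial guess
    for _ in range(500):
        diff = pos - anchors
        dist = np.linalg.norm(diff, axis=1)
        # each spring pulls/pushes along its link in proportion to the distance error
        force = ((measured - dist)[:, None] * diff / dist[:, None]).sum(axis=0)
        pos = pos + 0.1 * force                                  # damped update toward equilibrium
    print(pos)                                                   # converges near (4, 3)
    ```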

  13. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and large exploration area, high node localization accuracy and large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scales. A localization algorithm based on a spring model (LASM) method is proposed to reduce the computational complexity, while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs will force the particles move to the original positions, the node positions correspondingly, from the randomly set positions. Therefore, a blind node position can be determined from the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of the neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite of the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated with the network scale size.

  14. Meta-Analysis in Human Neuroimaging: Computational Modeling of Large-Scale Databases

    PubMed Central

    Fox, Peter T.; Lancaster, Jack L.; Laird, Angela R.; Eickhoff, Simon B.

    2016-01-01

    Spatial normalization—applying standardized coordinates as anatomical addresses within a reference space—was introduced to human neuroimaging research nearly 30 years ago. Over these three decades, an impressive series of methodological advances have adopted, extended, and popularized this standard. Collectively, this work has generated a methodologically coherent literature of unprecedented rigor, size, and scope. Large-scale online databases have compiled these observations and their associated meta-data, stimulating the development of meta-analytic methods to exploit this expanding corpus. Coordinate-based meta-analytic methods have emerged and evolved in rigor and utility. Early methods computed cross-study consensus, in a manner roughly comparable to traditional (nonimaging) meta-analysis. Recent advances now compute coactivation-based connectivity, connectivity-based functional parcellation, and complex network models powered from data sets representing tens of thousands of subjects. Meta-analyses of human neuroimaging data in large-scale databases now stand at the forefront of computational neurobiology. PMID:25032500

  15. Inclusive constraints on unified dark matter models from future large-scale surveys

    SciTech Connect

    Camera, Stefano; Carbone, Carmelita; Moscardini, Lauro E-mail: carmelita.carbone@unibo.it

    2012-03-01

    In the very last years, cosmological models where the properties of the dark components of the Universe — dark matter and dark energy — are accounted for by a single ''dark fluid'' have drawn increasing attention and interest. Amongst many proposals, Unified Dark Matter (UDM) cosmologies are promising candidates as effective theories. In these models, a scalar field with a non-canonical kinetic term in its Lagrangian mimics both the accelerated expansion of the Universe at late times and the clustering properties of the large-scale structure of the cosmos. However, UDM models also present peculiar behaviours, the most interesting one being the fact that the perturbations in the dark-matter component of the scalar field do have a non-negligible speed of sound. This gives rise to an effective Jeans scale for the Newtonian potential, below which the dark fluid does not cluster any more. This implies a growth of structures fairly different from that of the concordance ΛCDM model. In this paper, we demonstrate that forthcoming large-scale surveys will be able to discriminate between viable UDM models and ΛCDM to a good degree of accuracy. To this purpose, the planned Euclid satellite will be a powerful tool, since it will provide very accurate data on galaxy clustering and the weak lensing effect of cosmic shear. Finally, we also exploit the constraining power of the ongoing CMB Planck experiment. Although our approach is the most conservative, with the inclusion of only well-understood, linear dynamics, in the end we also show what could be done if some amount of non-linear information were included.

  16. Inclusive constraints on unified dark matter models from future large-scale surveys

    NASA Astrophysics Data System (ADS)

    Camera, Stefano; Carbone, Carmelita; Moscardini, Lauro

    2012-03-01

    In the very last years, cosmological models where the properties of the dark components of the Universe — dark matter and dark energy — are accounted for by a single ``dark fluid'' have drawn increasing attention and interest. Amongst many proposals, Unified Dark Matter (UDM) cosmologies are promising candidates as effective theories. In these models, a scalar field with a non-canonical kinetic term in its Lagrangian mimics both the accelerated expansion of the Universe at late times and the clustering properties of the large-scale structure of the cosmos. However, UDM models also present peculiar behaviours, the most interesting one being the fact that the perturbations in the dark-matter component of the scalar field do have a non-negligible speed of sound. This gives rise to an effective Jeans scale for the Newtonian potential, below which the dark fluid does not cluster any more. This implies a growth of structures fairly different from that of the concordance ΛCDM model. In this paper, we demonstrate that forthcoming large-scale surveys will be able to discriminate between viable UDM models and ΛCDM to a good degree of accuracy. To this purpose, the planned Euclid satellite will be a powerful tool, since it will provide very accurate data on galaxy clustering and the weak lensing effect of cosmic shear. Finally, we also exploit the constraining power of the ongoing CMB Planck experiment. Although our approach is the most conservative, with the inclusion of only well-understood, linear dynamics, in the end we also show what could be done if some amount of non-linear information were included.

  17. Large scale nonlinear numerical optimal control for finite element models of flexible structures

    NASA Technical Reports Server (NTRS)

    Shoemaker, Christine A.; Liao, Li-Zhi

    1990-01-01

    This paper discusses the development of large-scale numerical optimal control algorithms for nonlinear systems and their application to finite element models of structures. This work is based on our expansion of the differential dynamic programming (DDP) optimal control algorithm in the following steps: improvement of convergence for initial policies in non-convex regions, development of a numerically accurate penalty function method approach for constrained DDP problems, and parallel processing on supercomputers. The expanded constrained DDP algorithm was applied to the control of a four-bay, two-dimensional truss with 12 soft members, which generates geometric nonlinearities. Using an explicit finite element model to describe the structural system requires 32 state variables and 10,000 time steps. Our numerical results indicate that for constrained or unconstrained structural problems with nonlinear dynamics, the results obtained by our expanded constrained DDP are significantly better than those obtained using linear-quadratic feedback control.

  18. Phanerozoic marine diversity: rock record modelling provides an independent test of large-scale trends.

    PubMed

    Smith, Andrew B; Lloyd, Graeme T; McGowan, Alistair J

    2012-11-07

    Sampling bias created by a heterogeneous rock record can seriously distort estimates of marine diversity and makes a direct reading of the fossil record unreliable. Here we compare two independent estimates of Phanerozoic marine diversity that explicitly take account of variation in sampling: a subsampling approach that standardizes for differences in fossil collection intensity, and a rock area modelling approach that takes account of differences in rock availability. Using the fossil records of North America and Western Europe, we demonstrate that a modelling approach applied to the combined data produces results that are significantly correlated with those derived from subsampling. This concordance between independent approaches argues strongly for the reality of the large-scale trends in diversity we identify from both approaches.
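
    As a minimal illustration of the kind of rank-correlation test behind the reported concordance (not the authors' code, and using synthetic placeholder series rather than the North American and Western European data), one could compare two independent diversity estimates like this:

```python
# Minimal sketch: rank correlation between two independent diversity estimates.
# The series below are synthetic placeholders, not the paper's data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
stages = 77                                    # hypothetical number of time bins
signal = np.cumsum(rng.normal(size=stages))    # shared underlying diversity trend
subsampled = signal + rng.normal(scale=0.5, size=stages)   # subsampling-style estimate
rock_model = signal + rng.normal(scale=0.5, size=stages)   # rock-area-model-style estimate

rho, p = spearmanr(subsampled, rock_model)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")
```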

  19. Large-scale shell-model calculations of nuclei around mass 210

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Higashiyama, K.; Yoshinaga, N.

    2016-06-01

    Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For a phenomenological effective two-body interaction, one set of the monopole-pairing and quadrupole-quadrupole interactions, including the multipole-pairing interactions, is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.

  20. GPU-Based Parallelized Solver for Large Scale Vascular Blood Flow Modeling and Simulations.

    PubMed

    Santhanam, Anand P; Neylon, John; Eldredge, Jeff; Teran, Joseph; Dutson, Erik; Benharash, Peyman

    2016-01-01

    Cardiovascular blood flow simulations are essential in understanding blood flow behavior during normal and disease conditions. To date, such blood flow simulations have only been done at a macro-scale level due to computational limitations. In this paper, we present a GPU-based large-scale solver that enables modeling the flow even in the smallest arteries. A mechanical equivalent of the circuit-based flow modeling system is first developed to employ the GPU computing framework. Numerical studies were performed using a set of 10 million connected vascular elements. Run-time flow analyses were performed to simulate vascular blockages, as well as arterial cut-off. Our results showed that we can achieve ~100 FPS using a GTX 680m and ~40 FPS using a Tegra K1 computing platform.
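
    A minimal sketch of the circuit-analogue idea behind such solvers, assuming a toy three-segment vascular tree and placeholder blood viscosity and boundary pressures rather than the paper's 10-million-element GPU system: each vessel is treated as a Hagen-Poiseuille resistance, and nodal pressures follow from flow conservation, exactly as in nodal circuit analysis.

```python
# Minimal sketch (not the authors' GPU solver): a circuit-analogue vascular network.
# Each segment is a hydraulic resistance R = 8*mu*L/(pi*r^4) (Hagen-Poiseuille).
import numpy as np

mu = 3.5e-3                       # assumed blood viscosity [Pa*s]
# segments: (node_i, node_j, length [m], radius [m]) -- toy bifurcating tree
segments = [(0, 1, 0.02, 2.0e-3), (1, 2, 0.015, 1.4e-3), (1, 3, 0.015, 1.4e-3)]
p_fixed = {0: 13000.0, 2: 11000.0, 3: 11000.0}   # placeholder boundary pressures [Pa]

n_nodes = 4
G = np.zeros((n_nodes, n_nodes))                 # hydraulic conductance matrix
for i, j, L, r in segments:
    g = np.pi * r**4 / (8.0 * mu * L)
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

# impose the fixed (Dirichlet) pressures and solve for the interior nodal pressures
A, b = G.copy(), np.zeros(n_nodes)
for node, p in p_fixed.items():
    A[node, :] = 0.0; A[node, node] = 1.0; b[node] = p
pressures = np.linalg.solve(A, b)

for i, j, L, r in segments:
    g = np.pi * r**4 / (8.0 * mu * L)
    print(f"segment {i}->{j}: flow = {g * (pressures[i] - pressures[j]) * 1e6:.1f} mL/s")
```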

  1. A review of large-scale LNG spills : experiment and modeling.

    SciTech Connect

    Luketa-Hanlin, Anay Josephine

    2005-04-01

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  2. A review of large-scale LNG spills: experiments and modeling.

    PubMed

    Luketa-Hanlin, Anay

    2006-05-20

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  3. Large scale landslide mud flow modeling, simulation, and comparison with observations

    NASA Astrophysics Data System (ADS)

    Liu, F.; Shao, X.; Zhang, B.

    2012-12-01

    Landslides are catastrophic natural events. Modeling, simulation, and early warning of landslide events can protect lives and property; the study of landslides therefore has important scientific and practical value. In this research, we constructed a high-performance parallel fluid dynamics model to study the large-scale landslide transport and evolution process. This model solves the shallow water equations derived from the 3-dimensional Euler equations in a Cartesian coordinate system. Based on bottom topography, initial conditions, bottom friction, mudflow viscosity coefficient, density, and other parameters, this model predicts the landslide transport process and deposition distribution. Using 3-dimensional bottom topography data from a digital elevation model of the Zhou Qu area, this model reproduces the onset, transport, and deposition process of the Zhou Qu landslide. It also calculates the spatial and temporal distribution of the mudflow transport route, deposition depth, and kinetic energy of the event. This model, together with an early warning system, can lead to significant improvements in construction planning in landslide-susceptible areas. (Figures: Zhou Qu topography from the digital elevation model; modeling result from PLM, the parallel landslide model.)
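
    The depth-averaged equations such models solve can be illustrated with a minimal 1D shallow-water sketch (not the authors' parallel model; topography, friction and viscosity terms are omitted, and the dam-break initial condition is purely illustrative):

```python
# Minimal sketch: 1D shallow-water equations with a Lax-Friedrichs flux,
# the kind of depth-averaged scheme that mudflow/landslide models build on.
import numpy as np

g, nx, dx = 9.81, 200, 5.0          # gravity, grid cells, cell size [m]
h = np.where(np.arange(nx) < nx // 4, 4.0, 1.0)   # initial depth: a "dam break"
hu = np.zeros(nx)                                  # initial momentum

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h**2])

t, t_end = 0.0, 20.0
while t < t_end:
    c = np.max(np.abs(hu / h) + np.sqrt(g * h))
    dt = 0.4 * dx / c                              # CFL-limited time step
    U, F = np.array([h, hu]), flux(h, hu)
    Unew = U.copy()
    # Lax-Friedrichs update on interior cells, transmissive boundaries
    Unew[:, 1:-1] = 0.5 * (U[:, 2:] + U[:, :-2]) - 0.5 * dt / dx * (F[:, 2:] - F[:, :-2])
    Unew[:, 0], Unew[:, -1] = Unew[:, 1], Unew[:, -2]
    h, hu = Unew
    t += dt

print("flow front has advanced to roughly x =", dx * np.argmax(h < 1.05), "m")
```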

  4. A Data-driven Analytic Model for Proton Acceleration by Large-scale Solar Coronal Shocks

    NASA Astrophysics Data System (ADS)

    Kozarev, Kamen A.; Schwadron, Nathan A.

    2016-11-01

    We have recently studied the development of an eruptive filament-driven, large-scale off-limb coronal bright front (OCBF) in the low solar corona, using remote observations from the Solar Dynamics Observatory's Atmospheric Imaging Assembly (AIA) EUV telescopes. In that study, we obtained high-temporal-resolution estimates of the OCBF parameters regulating the efficiency of charged particle acceleration within the theoretical framework of diffusive shock acceleration (DSA). These parameters include the time-dependent front size, speed, and strength, as well as the upstream coronal magnetic field orientations with respect to the front's surface normal direction. Here we present an analytical particle acceleration model, specifically developed to incorporate the coronal shock/compressive front properties described above, derived from remote observations. We verify the model's performance through a grid of idealized case runs using input parameters typical for large-scale coronal shocks, and demonstrate that the results approach the expected DSA steady-state behavior. We then apply the model to the event of 2011 May 11 using the OCBF time-dependent parameters derived by Kozarev et al. We find that the compressive front likely produced energetic particles as low as 1.3 solar radii in the corona. Comparing the modeled and observed fluences near Earth, we also find that the bulk of the acceleration during this event must have occurred above 1.5 solar radii. With this study we have taken a first step in using direct observations of shocks and compressions in the innermost corona to predict the onsets and intensities of solar energetic particle events.
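
    A minimal sketch of the steady-state DSA behavior that the idealized case runs are checked against (a standard textbook result, not the authors' time-dependent model): a shock with compression ratio r accelerates particles into a power-law distribution f(p) ~ p^-q with q = 3r/(r-1).

```python
# Minimal sketch: the steady-state diffusive-shock-acceleration power law.
import numpy as np

def dsa_spectral_index(r):
    """Index q of f(p) ~ p**-q for a shock of compression ratio r."""
    return 3.0 * r / (r - 1.0)

for r in (2.5, 3.0, 4.0):                 # 4.0 is the strong-shock limit (q = 4)
    print(f"compression ratio r = {r}: q = {dsa_spectral_index(r):.2f}")

# toy spectrum between injection and cutoff momenta (arbitrary units)
p = np.logspace(0, 3, 50)
f = p ** (-dsa_spectral_index(4.0))
```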

  5. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides an opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agent's beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agent's attitude towards the fluctuation of crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and often not even measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days needed to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
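
    As a hedged illustration of the variance-decomposition idea (not the PCE/Hadoop workflow itself), a plain Monte Carlo estimate of first-order Sobol' indices for a cheap stand-in function might look like the following; the toy model and sample sizes are assumptions made only for the sketch.

```python
# Minimal sketch: Monte Carlo first-order Sobol' indices (Saltelli-style estimator)
# for a cheap stand-in model with three uncertain inputs.
import numpy as np

def model(x):                      # toy stand-in for the coupled MAS-groundwater model
    return x[:, 0] + 0.3 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

d, N = 3, 100_000
rng = np.random.default_rng(1)
A, B = rng.uniform(size=(N, d)), rng.uniform(size=(N, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                              # resample only parameter i
    S_i = np.mean(fB * (model(ABi) - fA)) / var      # first-order index estimate
    print(f"parameter {i}: first-order Sobol' index = {S_i:.3f}")
```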

  6. Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Dey, S.; Merwade, V.

    2016-12-01

    Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large-scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.

  7. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study.

    PubMed

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-02-15

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method.
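
    For orientation, a plain ICP loop of the kind the enhanced method builds on might look like the sketch below (nearest-neighbour correspondences from a KD-tree plus an SVD/Kabsch rigid fit); the octree search hierarchy, early-warning mechanism and heuristic escape moves described in the paper would wrap around such a core and are not reproduced here.

```python
# Minimal sketch: a basic point-to-point ICP loop (not the paper's enhanced method).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)      # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# usage: register a rotated, shifted copy of a random cloud back onto the original
rng = np.random.default_rng(2)
target = rng.normal(size=(500, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
aligned = icp(target @ Rz.T + 0.05, target)
print("RMS error after ICP:", np.sqrt(np.mean((aligned - target) ** 2)))
```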

  8. Modeling of the dielectrophoretic conveyer-belt assembling microparticles into large-scale structures

    NASA Astrophysics Data System (ADS)

    Khusid, Boris; Jacqmin, David; Kumar, Anil; Acrivos, Andreas

    2007-11-01

    A dielectrophoretic conveyor-belt method for assembling negatively polarized microparticles into large-scale structures was recently developed (APL 90, 154104, 2007). To do this, first, an array of microelectrodes is energized to generate a spatially periodic AC electric field that causes the particles to aggregate into boluses in positions of the field-intensity minima, which are located mid-way along the height of the channel. The minima and their associated boluses are then moved by periodically grounding and energizing the electrode array so as to generate an electrical field moving along the electrode array. We simulate this experiment numerically via a two-dimensional electro-hydrodynamic model (PRE 69, 021402, 2004). The numerical results are in qualitative agreement with experiments in that they show similar particle aggregation rates, bolus sizes and bolus transport speeds.

  9. Large-Scale Shell-Model Analysis of the Neutrinoless ββ Decay of 48Ca

    NASA Astrophysics Data System (ADS)

    Iwata, Y.; Shimizu, N.; Otsuka, T.; Utsuno, Y.; Menéndez, J.; Honma, M.; Abe, T.

    2016-03-01

    We present the nuclear matrix element for the neutrinoless double-beta decay of 48Ca based on large-scale shell-model calculations including two harmonic oscillator shells (the sd and pf shells). The excitation spectra of 48Ca and 48Ti, and the two-neutrino double-beta decay of 48Ca, are reproduced in good agreement with the experimental data. We find that the neutrinoless double-beta decay nuclear matrix element is enhanced by about 30% compared to pf-shell calculations. This reduces the decay lifetime by almost a factor of 2. The matrix-element increase is mostly due to pairing correlations associated with cross-shell sd-pf excitations. We also investigate possible implications for heavier neutrinoless double-beta decay candidates.

  10. Computational framework for modeling the dynamic evolution of large-scale multi-agent organizations

    NASA Astrophysics Data System (ADS)

    Lazar, Alina; Reynolds, Robert G.

    2002-07-01

    A multi-agent system model of the origins of an archaic state is developed. Agent interaction is mediated by a collection of rules. The rules are mined from a related large-scale database using two different techniques. One technique uses decision trees while the other uses rough sets. The latter was used because the data collection techniques were associated with a certain degree of uncertainty. The generation of the rough-set rules was guided by genetic algorithms. Since the rules mediate agent interaction, the rule set with fewer rules and conditionals to check will make scaling up the simulation easier. The results suggest that explicitly dealing with uncertainty in rule formation can produce simpler rules than ignoring that uncertainty in situations where uncertainty is a factor in the measurement process.

  11. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298

  12. LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS

    SciTech Connect

    ZHANG,W.; MARNERIS, I.; SANDBERG, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together under selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and coupling inductance of the upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive, or resistive elements is used to model an actual system. The circuit is a large-scale network whose equivalent polynomial form is of degree several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configurations and power supply topologies.
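
    A minimal sketch of a frequency sweep over such a ladder network (with placeholder element values, not measured magnet parameters, and a simple open far end rather than the actual coupled-pair topology): the ladder is folded cell by cell toward the source to obtain the driving-point impedance at each frequency.

```python
# Minimal sketch: driving-point impedance of a long ladder of identical magnet cells,
# each with a series coil resistance/inductance and a shunt capacitance with insulation
# resistance.  All element values below are placeholders.
import numpy as np

R_coil, L_coil = 5e-3, 10e-3       # series resistance [ohm] and inductance [H] per cell
C_gnd, R_ins = 50e-9, 1e7          # shunt capacitance [F] and insulation resistance [ohm]
n_cells = 360

freqs = np.logspace(0, 5, 400)     # 1 Hz .. 100 kHz
Zin = np.empty(len(freqs), dtype=complex)
for k, f in enumerate(freqs):
    s = 2j * np.pi * f
    Zser = R_coil + s * L_coil
    Zsh = 1.0 / (s * C_gnd + 1.0 / R_ins)
    Z = 1e12 + 0j                  # effectively open far end of the ladder
    for _ in range(n_cells):       # fold the ladder cell by cell toward the source
        Z = Zser + (Zsh * Z) / (Zsh + Z)
    Zin[k] = Z

peak = freqs[np.argmax(np.abs(Zin))]
print(f"largest input-impedance magnitude near {peak:.0f} Hz")
```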

  13. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, and to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).

  14. Excavating the Genome: Large Scale Mutagenesis Screening for the Discovery of New Mouse Models

    PubMed Central

    Sundberg, John P.; Dadras, Soheil S.; Silva, Kathleen A.; Kennedy, Victoria E.; Murray, Stephen A.; Denegre, James; Schofield, Paul N.; King, Lloyd E.; Wiles, Michael; Pratt, C. Herbert

    2016-01-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis. While not automated to the level of the physiological phenotyping, histopathology provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being developed. PMID:26551941

  15. Large-scale shell model study of the newly found isomer in 136La

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Nishibata, H.; Odahara, A.; Shimoda, T.

    2016-07-01

    The doubly-odd nucleus 136La is theoretically studied in terms of a large-scale shell model. The energy spectrum and transition rates are calculated and compared with the most updated experimental data. The isomerism is investigated for the first 14+ state, which was found to be an isomer in the previous study [Phys. Rev. C 91, 054305 (2015), 10.1103/PhysRevC.91.054305]. It is found that the 14+ state becomes an isomer due to a band crossing of two bands with completely different configurations. The yrast band with the (νh11/2^-1 ⊗ πh11/2) configuration is investigated, revealing a staggering nature in the M1 transition rates.

  16. Investigation of airframe noise for a large-scale wing model with high-lift devices

    NASA Astrophysics Data System (ADS)

    Kopiev, V. F.; Zaytsev, M. Yu.; Belyaev, I. V.

    2016-01-01

    The acoustic characteristics of a large-scale model of a wing with high-lift devices in the landing configuration have been studied in the DNW-NWB wind tunnel with an anechoic test section. For the first time in domestic practice, data on airframe noise at high Reynolds numbers (1.1-1.8 × 10^6) have been obtained, which can be used for assessment of wing noise levels in aircraft certification tests. The scaling factor for recalculating the measurement results to natural conditions has been determined from the condition of collapsing the dimensionless noise spectra obtained at various flow velocities. The beamforming technique has been used to obtain localization of noise sources and provide their ranking with respect to intensity. For flap side-edge noise, which is an important noise component, a noise reduction method has been proposed. The efficiency of this method has been confirmed in DNW-NWB experiments.

  17. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    NASA Astrophysics Data System (ADS)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, then, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate the spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that must overcome the drawbacks of existing modelling schemes (devised for smaller spatial scales) while retaining their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale

  18. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, to facilitate concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for

  19. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning. Large commercial shopping areas are typical service systems, and their emergency evacuation is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian walking is based on the Cellular Automaton and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer, and a trajectory layer. For the simulation of the movement routes of pedestrians, the model takes into account the purchase intention of customers and the density of pedestrians. Based on the evacuation model of Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavioral characteristics of customers and clerks in normal and emergency evacuation situations. The distribution of individual evacuation times as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
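
    A minimal sketch of one ingredient of such models, the static floor field (walking distance from every free cell to the nearest exit), computed by breadth-first search on a toy grid; this is a generic cellular-automaton building block, not the paper's four-layer implementation.

```python
# Minimal sketch: static floor field for a CA evacuation grid via breadth-first search.
# Pedestrians would then step toward the neighbouring cell with the smallest value.
from collections import deque

layout = ["##########",
          "#........#",
          "#..##....#",
          "#........E",
          "##########"]            # '#' wall, '.' floor, 'E' exit (toy geometry)

rows, cols = len(layout), len(layout[0])
INF = float("inf")
field = [[INF] * cols for _ in range(rows)]
queue = deque()
for r in range(rows):
    for c in range(cols):
        if layout[r][c] == "E":
            field[r][c] = 0
            queue.append((r, c))

while queue:                        # BFS from all exits simultaneously
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < rows and 0 <= nc < cols and layout[nr][nc] != "#" \
                and field[nr][nc] > field[r][c] + 1:
            field[nr][nc] = field[r][c] + 1
            queue.append((nr, nc))

print("\n".join(" ".join(f"{v:>3}" if v != INF else "  #" for v in row) for row in field))
```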

  20. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-11-01

    The scientific initiative Prediction in Ungauged Basins (PUB) (2003-2012, by the IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines for making predictions in catchments without observed runoff data. At present, there is growing interest in applying catchment models to large domains and large data samples in a multi-basin manner, to explore emerging spatial patterns or learn from comparative hydrology. However, such modelling involves additional sources of uncertainty caused by inconsistencies between input data sets, particularly regional and global databases. This may lead to inaccurate model parameterisation and erroneous process understanding. In order to bridge the gap between the best practices for flow predictions in single catchments and in multi-basins at the large scale, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). By using examples from a recent HYPE (Hydrological Predictions for the Environment) hydrological model set-up across 6000 subbasins for the Indian subcontinent, named India-HYPE v1.0, we explore the PUB recommendations, identify challenges and recommend ways to overcome them. We describe the work process related to (a) errors and inconsistencies in global databases, unknown human impacts, and poor data quality; (b) robust approaches to identify model parameters using a stepwise calibration approach, remote sensing data, expert knowledge, and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong physiographical gradient over the subcontinent, a single model can describe the spatial variability in dominant hydrological processes at the

  1. Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement

    ERIC Educational Resources Information Center

    Zheng, Xiaohui

    2009-01-01

    The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…

  3. Multi-variate spatial explicit constraining of a large scale hydrological model

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    Increased availability and quality of near real-time data should lead to a better understanding of the predictive skill of distributed hydrological models. Nevertheless, prediction of regional-scale water fluxes and states remains a great challenge to the scientific community. Large-scale hydrological models are used for prediction of soil moisture, evapotranspiration and other related water states and fluxes. They are usually properly constrained against river discharge, which is an integral variable. Rakovec et al. (2016) recently demonstrated that constraining model parameters against river discharge is a necessary, but not a sufficient, condition. Therefore, we further aim at scrutinizing the appropriate incorporation of readily available information into a hydrological model that may help to improve the realism of hydrological processes. It is important to analyze how complementary datasets besides observed streamflow and related signature measures can improve model skill for internal model variables during parameter estimation. Among the products suitable for further scrutiny are, for example, the GRACE satellite observations. Recent developments in using this dataset in a multivariate fashion to complement traditionally used streamflow data within the distributed model mHM (www.ufz.de/mhm) are presented. The study domain consists of 80 European basins, which cover a wide range of distinct physiographic and hydrologic regimes. A first-order data quality check ensures that heavily human-influenced basins are eliminated. For river discharge simulations we show that model performance of discharge remains unchanged when complemented by information from the GRACE product (at both daily and monthly time steps). Moreover, the GRACE complementary data lead to consistent and statistically significant improvements in evapotranspiration estimates, which are evaluated using an independent gridded FLUXNET product. We also show that the choice of the objective function used to estimate

  4. Vertical Distributions of Sulfur Species Simulated by Large Scale Atmospheric Models in COSAM: Comparison with Observations

    SciTech Connect

    Lohmann, U.; Leaitch, W. R.; Barrie, Leonard A.; Law, K.; Yi, Y.; Bergmann, D.; Bridgeman, C.; Chin, M.; Christensen, J.; Easter, Richard C.; Feichter, J.; Jeuken, A.; Kjellstrom, E.; Koch, D.; Land, C.; Rasch, P.; Roelofs, G.-J.

    2001-11-01

    A comparison of large-scale models simulating atmospheric sulfate aerosols (COSAM) was conducted to increase our understanding of global distributions of sulfate aerosols and precursors. Earlier model comparisons focused on wet deposition measurements and sulfate aerosol concentrations in source regions at the surface. They found that different models simulated the observed sulfate surface concentrations mostly within a factor of two, but that the simulated column burdens and vertical profiles were very different amongst different models. In the COSAM exercise, one aspect is the comparison of sulfate aerosol and precursor gases above the surface. Vertical profiles of SO2, SO4^2-, oxidants and cloud properties were measured by aircraft during the North Atlantic Regional Experiment (NARE) experiment in August/September 1993 off the coast of Nova Scotia and during the Second Eulerian Model Evaluation Field Study (EMEFS II) in central Ontario in March/April 1990. While no single model stands out as being best or worst, the general tendency is that those models simulating the full oxidant chemistry tend to agree best with observations, although differences in transport and treatment of clouds are important as well.

  5. Some cases of machining large-scale parts: Characterization and modelling of heavy turning, deep drilling and broaching

    NASA Astrophysics Data System (ADS)

    Haddag, B.; Nouari, M.; Moufki, A.

    2016-10-01

    Machining large-scale parts involves extreme loading at the cutting zone. This paper presents an overview of some cases of machining large-scale parts: heavy turning, deep drilling and broaching processes. It focuses on experimental characterization and modelling methods of these processes. Observed phenomena and/or measured cutting forces are reported. The paper also discusses the predictive ability of the proposed models to reproduce experimental data.

  6. Non-intrusive Ensemble Kalman filtering for large scale geophysical models

    NASA Astrophysics Data System (ADS)

    Amour, Idrissa; Kauranne, Tuomo

    2016-04-01

    Advanced data assimilation techniques, such as variational assimilation methods, present often challenging implementation issues for large-scale models, both because of computational complexity and because of complexity of implementation. We present a non-intrusive wrapper library that addresses this problem by isolating the direct model and the linear algebra employed in data assimilation from each other completely. In this approach we have adopted a hybrid Variational Ensemble Kalman filter that combines Ensemble propagation with a 3DVAR analysis stage. The inverse problem of state and covariance propagation from prior to posterior estimates is thereby turned into a time-independent problem. This feature allows the linear algebra and minimization steps required in the variational step to be conducted outside the direct model and no tangent linear or adjoint codes are required. Communication between the model and the assimilation module is conducted exclusively via standard input and output files of the model. This non-intrusive approach is tested with the comprehensive 3D lake and shallow sea model COHERENS that is used to forecast and assimilate turbidity in lake Säkylän Pyhäjärvi in Finland, using both sparse satellite images and continuous real-time point measurements as observations.
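
    The analysis step that such a non-intrusive wrapper has to perform can be sketched with a stochastic (perturbed-observation) ensemble Kalman filter update in a few lines of linear algebra; the state size, observation operator and error statistics below are placeholders, not those of the COHERENS set-up.

```python
# Minimal sketch: one stochastic EnKF analysis step.  The forecast ensemble would come
# from running the unmodified direct model; only ensemble linear algebra is needed here,
# which is what makes a non-intrusive, file-based coupling possible.
import numpy as np

rng = np.random.default_rng(3)
n_state, n_obs, n_ens = 100, 5, 40

X = rng.normal(size=(n_state, n_ens))            # forecast ensemble (columns = members)
obs_idx = np.arange(0, n_state, 20)[:n_obs]      # observe every 20th state element
H = np.zeros((n_obs, n_state)); H[np.arange(n_obs), obs_idx] = 1.0
R = 0.1 * np.eye(n_obs)                          # observation-error covariance
y = rng.normal(size=n_obs)                       # observations (placeholder values)

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                       # ensemble anomalies
P = A @ A.T / (n_ens - 1)                        # sample forecast covariance
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain

# perturbed-observation update: each member assimilates a noisy copy of y
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
Xa = X + K @ (Y - H @ X)
print("analysis-ensemble spread:", Xa.std(axis=1).mean())
```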

  7. Structure-preserving model reduction of large-scale logistics networks. Applications for supply chains

    NASA Astrophysics Data System (ADS)

    Scholz-Reiter, B.; Wirth, F.; Dashkovskiy, S.; Makuschewitz, T.; Schönlein, M.; Kosmykov, M.

    2011-12-01

    We investigate the problem of model reduction with a view to large-scale logistics networks, specifically supply chains. Such networks are modeled by means of graphs, which describe the structure of material flow. An aim of the proposed model reduction procedure is to preserve important features within the network. As a new methodology we introduce the LogRank as a measure for the importance of locations, which is based on the structure of the flows within the network. We argue that these properties reflect relative importance of locations. Based on the LogRank we identify subgraphs of the network that can be neglected or aggregated. The effect of this is discussed for a few motifs. Using this approach we present a meta algorithm for structure-preserving model reduction that can be adapted to different mathematical modeling frameworks. The capabilities of the approach are demonstrated with a test case, where a logistics network is modeled as a Jackson network, i.e., a particular type of queueing network.

  8. Parameter Set Cloning Based on Catchment Similarity for Large-scale Hydrologic Modeling

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Kaheil, Y.; McCollum, J.

    2016-12-01

    Parameter calibration is a crucial step to ensure the accuracy of hydrological models. However, streamflow gauges are not available everywhere for calibrating a large-scale hydrologic model globally. Thus, assigning parameters appropriately for regions where calibration cannot be performed directly has been a challenge for large-scale hydrologic modeling. Here we propose a method to estimate the model parameters in ungauged regions based on the values obtained through calibration in areas where gauge observations are available. This parameter set cloning is performed according to a catchment similarity index, a weighted-sum index based on four catchment characteristic attributes: IPCC Climate Zone, Soil Texture, Land Cover, and Topographic Index. The catchments with calibrated parameter values are donors, while the uncalibrated catchments are candidates. Catchment characteristic analyses are first conducted for both donors and candidates. For each attribute, we compute a characteristic distance between donors and candidates. Next, for each candidate, weights are assigned to the four attributes such that higher weights are given to properties that are more directly linked to the dominant hydrologic processes. This ensures that the parameter set cloning emphasizes the dominant hydrologic process in the region where the candidate is located. The catchment similarity index for each donor-candidate couple is then created as the sum of the weighted distances of the four properties. Finally, parameters are assigned to each candidate from the donor that is "most similar" (i.e. with the shortest weighted distance sum). For validation, we applied the proposed method to catchments where gauge observations are available, and compared simulated streamflows using the parameters cloned from other catchments to the results obtained by calibrating the hydrologic model directly using gauge data. The comparison shows good agreement between the two models
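
    A minimal sketch of the donor-selection step (with hypothetical attribute encodings and weights, not the operational values): each candidate is assigned the calibrated parameter set of the donor with the smallest weighted attribute distance.

```python
# Minimal sketch: pick the "most similar" gauged donor catchment for an ungauged
# candidate from a weighted sum of attribute distances.  All values are hypothetical.
import numpy as np

# attribute order: climate_zone, soil_texture, land_cover, topographic_index,
# each already encoded/normalised to [0, 1]
donors = {"basin_A": np.array([0.20, 0.50, 0.30, 0.70]),
          "basin_B": np.array([0.90, 0.40, 0.60, 0.20])}
candidate = np.array([0.25, 0.55, 0.35, 0.60])

# heavier weights for attributes tied to the locally dominant hydrologic processes
weights = np.array([0.4, 0.3, 0.2, 0.1])

def similarity_distance(a, b, w):
    return float(np.sum(w * np.abs(a - b)))

best = min(donors, key=lambda name: similarity_distance(donors[name], candidate, weights))
print("clone parameters from:", best)    # basin_A in this toy example
```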

  9. Fermi Observations of Resolved Large-Scale Jets: Testing the IC/CMB Model

    NASA Astrophysics Data System (ADS)

    Breiding, Peter; Meyer, Eileen T.; Georganopoulos, Markos

    2017-01-01

    It has been observed with the Chandra X-ray Observatory since the early 2000s that many powerful quasar jets show X-ray emission on the kpc scale (Harris & Krawczynski, 2006). In many cases these X-rays cannot be explained by the extension of the radio-optical spectrum produced by synchrotron-emitting electrons in the jet, since the observed X-ray flux is too high and the X-ray spectral index too hard. A widely accepted model for the X-ray emission, first proposed by Celotti et al. 2001 and Tavecchio et al. 2000, posits that the X-rays are produced when relativistic electrons in the jet up-scatter ambient cosmic microwave background (CMB) photons via inverse Compton scattering from microwave to X-ray energies (the IC/CMB model). However, explaining the X-ray emission for these jets with the IC/CMB model requires high levels of IC/CMB γ-ray emission (Georganopoulos et al., 2006), which we are looking for using the Fermi/LAT γ-ray space telescope. Another viable model for the large-scale jet X-ray emission, favored by the results of Meyer et al. 2015 and Meyer & Georganopoulos 2014, is an alternate population of synchrotron-emitting electrons. In contrast with the second synchrotron interpretation, the IC/CMB model requires jets with high kinetic powers, which can exceed the Eddington luminosity (Dermer & Atoyan 2004 and Atoyan & Dermer 2004), and be very fast on the kpc scale with Γ~10 (Celotti et al. 2001 and Tavecchio et al. 2000). New results from data obtained with the Fermi/LAT will be shown for several quasars not in the Fermi/LAT 3FGL catalog whose large-scale X-ray jets are attributed to IC/CMB. Additionally, recent work on the γ-ray bright blazar AP Librae will be shown which helps to constrain some models attempting to explain the high-energy component of its SED, which extends from X-ray to TeV energies (e.g., Zacharias & Wagner 2016 and Petropoulou et al. 2016).

  10. Major historical droughts in Europe as simulated by an ensemble of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Tallaksen, L. M.; Stahl, K.

    2012-04-01

    As drought is regional by nature, it should preferably be studied at the large scale to consistently address the spatial and temporal characteristics of drought and related drought-causing processes. Nevertheless, there is high spatial variability within a drought-affected region, caused by a combination of small-scale climate variability and catchment properties, which influences our ability to identify a particular event in a consistent way. Several studies have addressed the occurrence of major drought events in Europe in the last century, yet no thorough analysis exists that compares across the different methods, variables and time periods employed. Thus, there is a need for a comprehensive pan-European study of historical events, including their definition, cause, characteristics and major impacts. Important to consider in this respect are the type of data to be analysed and the choice of methodology for drought identification and the drought indices best suited for the task. In this study the focus is on hydrological drought, i.e. streamflow drought, and the main aim is to analyse key characteristics of major historical droughts in Europe over the period 1963-2000, including affected area, severity and persistence. The variable analysed is simulated daily total runoff for each grid cell in Europe (4425 land grids), derived from the WATCH multi-model ensemble of nine large-scale hydrological models. A grid cell is defined to be in drought if the runoff is below q20 (the 20% non-exceedance frequency of the empirical runoff distribution on the respective day). Spatial continuity is accounted for by the introduction of a drought cluster, defined as a minimum of 10 spatially contiguous grid cells in drought on a given day. The results revealed two major dry periods in terms of the mean annual drought area, namely 1975-76 and 1989-90, during which a high consistency was also found among models. On the other hand, daily time series during these events depicted a high model
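
    The drought-identification rule described above can be sketched in a few lines (synthetic, spatially smoothed runoff fields stand in for the WATCH ensemble, and the q20 threshold is computed per cell over the whole sample rather than per calendar day):

```python
# Minimal sketch: flag cells whose runoff falls below the local q20 threshold and keep
# only spatially contiguous clusters of at least 10 cells.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
n_days, ny, nx = 365, 40, 60
runoff = rng.gamma(shape=2.0, scale=1.0, size=(n_days, ny, nx))   # synthetic daily runoff
runoff = ndimage.gaussian_filter(runoff, sigma=(0, 2, 2))         # add spatial correlation

q20 = np.quantile(runoff, 0.20, axis=0)           # per-cell 20% non-exceedance threshold

day = 200
in_drought = runoff[day] < q20                     # cells in drought on that day
labels, n_clusters = ndimage.label(in_drought)     # 4-connected contiguous patches
sizes = ndimage.sum(in_drought, labels, index=np.arange(1, n_clusters + 1))
big = np.isin(labels, np.arange(1, n_clusters + 1)[sizes >= 10])

print(f"{int(big.sum())} cells belong to drought clusters of >= 10 contiguous cells")
```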

  11. Reducing errors in simulated satellite views of clouds from large-scale models

    NASA Astrophysics Data System (ADS)

    Hillman, Benjamin R.

    A fundamental test of the representation of clouds in models is evaluating the simulation of present-day climate against available observations. Satellite retrievals of cloud properties provide an attractive baseline for this evaluation because they can provide near global coverage and long records. However, comparisons of modeled and satellite-retrieved cloud properties are difficult because the quantities that can be represented by a model and those that can be observed from space are fundamentally different. Satellite simulators have emerged in recent decades as a means to account for these differences by producing pseudo-retrievals of cloud properties from model diagnosed descriptions of the atmosphere, but these simulators are subject to their own uncertainties as well that have not been well-quantified in the existing literature. In addition to uncertainties regarding the simulation of satellite retrievals themselves, a more fundamental source of uncertainty exists in connecting the different spatial scales between satellite retrievals and large-scale models. Systematic errors arising due to assumptions about the unresolved cloud and precipitation condensate distributions are identified here. Simulated satellite retrievals are shown in this study to be particularly sensitive to the treatment of cloud and precipitation occurrence overlap as well as to unresolved condensate variability. To correct for these errors, an improved treatment of unresolved clouds and precipitation is implemented for use with the simulator framework and is shown to substantially reduce the identified errors.

  12. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist.

    PubMed

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-11-01

    managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists.

  13. A Comparison of Large-Scale Atmospheric Sulphate Aerosol Models (COSAM): Overview and Highlights

    SciTech Connect

    Barrie, Leonard A.; Yi, Y.; Leaitch, W. R.; Lohmann, U.; Kasibhatla, P.; Roelofs, G.-J.; Wilson, J.; Mcgovern, F.; Benkovitz, C.; Melieres, M. A.; Law, K.; Prospero, J.; Kritz, M.; Bergmann, D.; Bridgeman, C.; Chin, M.; Christiansen, J.; Easter, Richard C.; Feichter, J.; Land, C.; Jeuken, A.; Kjellstrom, E.; Koch, D.; Rasch, P.

    2001-11-01

    The comparison of large-scale sulphate aerosol models study (COSAM) compared the performance of atmospheric models with each other and observations. It involved: (i) design of a standard model experiment for the world wide web, (ii) 10 model simulations of the cycles of sulphur and 222Rn/210Pb conforming to the experimental design, (iii) assemblage of the best available observations of atmospheric SO4=, SO2 and MSA and (iv) a workshop in Halifax, Canada to analyze model performance and future model development needs. The analysis presented in this paper and two companion papers by Roelofs, and Lohmann and co-workers examines the variance between models and observations, discusses the sources of that variance and suggests ways to improve models. Variations between models in the export of SOx from Europe or North America are not sufficient to explain an order of magnitude variation in spatial distributions of SOx downwind in the northern hemisphere. On average, models predicted surface level seasonal mean SO4= aerosol mixing ratios better (most within 20%) than SO2 mixing ratios (over-prediction by factors of 2 or more). Results suggest that vertical mixing from the planetary boundary layer into the free troposphere in source regions is a major source of uncertainty in predicting the global distribution of SO4= aerosols in climate models today. For improvement, it is essential that globally coordinated research efforts continue to address emissions of all atmospheric species that affect the distribution and optical properties of ambient aerosols in models and that a global network of observations be established that will ultimately produce a world aerosol chemistry climatology.

  14. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist

    PubMed Central

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-01-01

    help guide managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists. PMID:24340177

  15. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    NASA Astrophysics Data System (ADS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves, generated during the early inflationary stage, on the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We found that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales, but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, which is a natural length scale of the model, that indicates when gravity becomes repulsive in nature.

  16. Revisiting the EC/CMB model for extragalactic large scale jets

    NASA Astrophysics Data System (ADS)

    Lucchini, M.; Tavecchio, F.; Ghisellini, G.

    2016-12-01

    One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of Flat Spectrum Radio Quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the Cosmic Microwave Background (EC/CMB) as the mechanism responsible for the high energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts, and would have been missed in all previous X-ray surveys due to selection effects.

  17. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    NASA Astrophysics Data System (ADS)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.

  18. Mathematical framework for large-scale brain network modeling in The Virtual Brain.

    PubMed

    Sanz-Leon, Paula; Knock, Stuart A; Spiegler, Andreas; Jirsa, Viktor K

    2015-05-01

    In this article, we describe the mathematical framework of the computational model at the core of the tool The Virtual Brain (TVB), designed to simulate collective whole brain dynamics by virtualizing brain structure and function, allowing simultaneous outputs of a number of experimental modalities such as electro- and magnetoencephalography (EEG, MEG) and functional Magnetic Resonance Imaging (fMRI). The implementation allows for a systematic exploration and manipulation of every underlying component of a large-scale brain network model (BNM), such as the neural mass model governing the local dynamics or the structural connectivity constraining the space-time structure of the network couplings. Here, a consistent notation for the generalized BNM is given, so that in this form the equations represent a direct link between the mathematical description of BNMs and the components of the numerical implementation in TVB. Finally, we summarize the forward models implemented for mapping simulated neural activity (EEG, MEG, stereotactic electroencephalography (sEEG), fMRI), identifying their advantages and limitations.

  19. Morphotectonic evolution of passive margins undergoing active surface processes: large-scale experiments using numerical models.

    NASA Astrophysics Data System (ADS)

    Beucher, Romain; Huismans, Ritske S.

    2016-04-01

    Extension of the continental lithosphere can lead to the formation of a wide range of rifted margin styles with contrasting tectonic and geomorphological characteristics. It is now understood that many of these characteristics depend on the manner in which extension is distributed, which in turn depends on (among other factors) rheology, structural inheritance, thermal structure and surface processes. The relative importance and the possible interactions of these controlling factors are still largely unknown. Here we investigate the feedbacks between tectonics and the transfer of material at the surface resulting from erosion, transport, and sedimentation. We use large-scale (1200 x 600 km) and high-resolution (~1 km) numerical experiments coupling a 2D upper-mantle-scale thermo-mechanical model with a plan-form 2D surface processes model (SPM). We test the sensitivity of the coupled models to varying crust-lithosphere rheology and erosional efficiency, ranging from no erosion to very efficient erosion. We discuss how fast, when and how the topography of the continents evolves and how it can be compared to actual passive margin escarpment morphologies. We show that although tectonics is the main factor controlling the rift geometry, transfer of mass at the surface affects the timing of faulting and the initiation of sea-floor spreading. We discuss how such models may help to understand the evolution of high-elevation passive margins around the world.

  20. Revisiting the EC/CMB model for extragalactic large scale jets

    NASA Astrophysics Data System (ADS)

    Lucchini, M.; Tavecchio, F.; Ghisellini, G.

    2017-04-01

    One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of flat-spectrum radio quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the cosmic microwave background (EC/CMB) as the mechanism responsible for the high-energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work, we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high-energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts and would have been missed in all previous X-ray surveys due to selection effects.

  1. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates.

    PubMed

    Barberis, Lucas; Peruani, Fernando

    2016-12-09

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit (due to the VC that breaks Newton's third law) various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving (locally polar) files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.

  2. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    PubMed Central

    Llor, Jesús; Malumbres, Manuel Perez

    2013-01-01

    In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate a large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.). PMID:23396190
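
    As a rough illustration of the kind of statistical model described above (a minimal sketch, not the authors' code), the snippet below draws transmission-loss samples whose mean follows a log-distance law and whose scatter is log-normal; the reference loss, spreading exponent and scatter (tl0, k, sigma_db) are assumed values for illustration only.

      import numpy as np

      def transmission_loss_samples(d, d0=1.0, tl0=40.0, k=1.5, sigma_db=4.0,
                                    n=1000, rng=None):
          """Draw n large-scale transmission-loss samples (dB) at range d (m).

          tl0, k and sigma_db are illustrative values, not fitted parameters.
          """
          rng = rng or np.random.default_rng(0)
          mean_tl = tl0 + 10.0 * k * np.log10(d / d0)        # log-distance mean
          # Gaussian scatter in dB corresponds to a log-normal loss in linear units
          return mean_tl + rng.normal(0.0, sigma_db, size=n)

      # e.g. link-budget margin needed to cover 95% of realizations at 2 km range
      samples = transmission_loss_samples(2000.0)
      print("mean TL %.1f dB, 95th percentile %.1f dB"
            % (samples.mean(), np.percentile(samples, 95)))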

  3. Forcing the statistical regionalization method WETTREG with large scale models of different resolution: A sensitivity study

    NASA Astrophysics Data System (ADS)

    Spekat, A.; Baumgart, S.; Kreienkamp, F.; Enke, W.

    2010-09-01

    The statistical regionalization method WETTREG makes use of the assumption that future climate changes are linked to changes in large-scale atmospheric patterns. The frequency distributions of those patterns and their time dependency are identified in the output fields of dynamical climate models and applied to force WETTREG. Thus, the magnitude and the time evolution of high-resolution climate signals for time horizons far into the 21st century can be computed. The model results employed to force WETTREG include the GCMs ECHAM5C, HadCM3C and CNRM. Additionally, results from the dynamical regional models CLM, DMI, HadRM, RACMO and REMO, nested into one or more of these global models, are used in their pattern-generating capacity to force WETTREG. The study yields insight concerning the forcing-dependent sensitivity of WETTREG as well as the bandwidth of climate change signals. Recent results for the German State of Hesse will be presented in an intercomparison study.

  4. Modelling potential changes in marine biogeochemistry due to large-scale offshore wind farms

    NASA Astrophysics Data System (ADS)

    van der Molen, Johan; Rees, Jon; Limpenny, Sian

    2013-04-01

    Large-scale renewable energy generation by offshore wind farms may lead to changes in marine ecosystem processes through the following mechanism: 1) wind-energy extraction leads to a reduction in local surface wind speeds; 2) these lead to a reduction in the local wind wave height; 3) as a consequence there is a reduction in SPM (suspended particulate matter) resuspension and concentrations; 4) this results in an improvement in the underwater light regime, which 5) may lead to increased primary production, which subsequently 6) cascades through the ecosystem. A three-dimensional coupled hydrodynamics-biogeochemistry model (GETM_ERSEM) was used to investigate this process for a hypothetical wind farm in the central North Sea, by running a reference scenario and a scenario with a 10% reduction (as was found in a case study of a small farm in Danish waters) in surface wind velocities in the area of the wind farm. The ERSEM model included both pelagic and benthic processes. The results showed that, within the farm area, the physical mechanisms were as expected, but with variations in the magnitude of the response depending on the ecosystem variable or exchange rate between two ecosystem variables (3-28%, depending on variable/rate). Benthic variables tended to be more sensitive to the changes than pelagic variables. Reduced, but noticeable changes also occurred for some variables in a region of up to two farm diameters surrounding the wind farm. An additional model run in which the 10% reduction in surface wind speed was applied only for wind speeds below the generally used threshold of 25 m/s for operational shut-down showed only minor differences from the run in which all wind speeds were reduced. These first results indicate that there is potential for measurable effects of large-scale offshore wind farms on the marine ecosystem, mainly within the farm but for some variables up to two farm diameters away. However, the wave and SPM parameterisations currently used in the model are crude and need to be

  5. A large-scale stochastic spatiotemporal model for Aedes albopictus-borne chikungunya epidemiology

    PubMed Central

    Chandra, Nastassya L.; Proestos, Yiannis; Lelieveld, Jos; Christophides, George K.; Parham, Paul E.

    2017-01-01

    Chikungunya is a viral disease transmitted to humans primarily via the bites of infected Aedes mosquitoes. The virus caused a major epidemic in the Indian Ocean in 2004, affecting millions of inhabitants, while cases have also been observed in Europe since 2007. We developed a stochastic spatiotemporal model of Aedes albopictus-borne chikungunya transmission based on our recently developed environmentally-driven vector population dynamics model. We designed an integrated modelling framework incorporating large-scale gridded climate datasets to investigate disease outbreaks on Reunion Island and in Italy. We performed Bayesian parameter inference on the surveillance data, and investigated the validity and applicability of the underlying biological assumptions. The model successfully represents the outbreak and measures of containment in Italy, suggesting wider applicability in Europe. In its current configuration, the model implies two different viral strains, thus two different outbreaks, for the two-stage Reunion Island epidemic. Characterisation of the posterior distributions indicates a possible relationship between the second larger outbreak on Reunion Island and the Italian outbreak. The model suggests that vector control measures, with different modes of operation, are most effective when applied in combination: adult vector intervention has a high impact but is short-lived, larval intervention has a low impact but is long-lasting, and quarantining infected territories, if applied strictly, is effective in preventing large epidemics. We present a novel approach in analysing chikungunya outbreaks globally using a single environmentally-driven mathematical model. Our study represents a significant step towards developing a globally applicable Ae. albopictus-borne chikungunya transmission model, and introduces a guideline for extending such models to other vector-borne diseases. PMID:28362820

  6. Large scale cratering of the lunar highlands - Some Monte Carlo model considerations

    NASA Technical Reports Server (NTRS)

    Hoerz, F.; Gibbons, R. V.; Hill, R. E.; Gault, D. E.

    1976-01-01

    In an attempt to understand the scale and intensity of the moon's early, large scale meteoritic bombardment, a Monte Carlo computer model simulated the effects of all lunar craters greater than 800 m in diameter, for example, the number of times and depths specific fractions of the entire lunar surface were cratered. The model used observed crater size frequencies and crater-geometries compatible with the suggestions of Pike (1974) and Dence (1973); it simulated bombardment histories up to a factor of 10 more intense than those reflected by the present-day crater number density of the lunar highlands. For the present-day cratering record the model yields the following: approximately 25% of the entire lunar surface has not been cratered deeper than 100 m; 50% may have been cratered to 2-3 km depth; less than 5% of the surface has been cratered deeper than about 15 km. A typical highland site has suffered 1-2 impacts. Corresponding values for more intense bombardment histories are also presented, though it must remain uncertain what the absolute intensity of the moon's early meteorite bombardment was.

  8. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    NASA Astrophysics Data System (ADS)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
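
    For intuition on how an impervious-cover layer translates into event runoff before any routing is applied, the sketch below applies a simple area-weighted runoff-coefficient calculation to a raster of impervious fractions; the coefficients, cell size and synthetic raster are hypothetical and do not reproduce the toolset described above.

      import numpy as np

      def event_runoff_m3(imperv_frac, rain_mm, cell_area_m2=900.0,
                          c_imperv=0.9, c_perv=0.2):
          """Event runoff volume (m^3) per grid cell.

          imperv_frac : 2D array of impervious fractions (0-1) per cell
          c_imperv, c_perv : assumed runoff coefficients for the two cover types
          """
          c = c_imperv * imperv_frac + c_perv * (1.0 - imperv_frac)
          return c * (rain_mm / 1000.0) * cell_area_m2       # depth x area -> volume

      # toy 100 x 100 cell block, ~40% impervious on average, 25 mm storm
      rng = np.random.default_rng(1)
      imperv = np.clip(rng.normal(0.4, 0.1, (100, 100)), 0.0, 1.0)
      print("total event runoff: %.0f m^3" % event_runoff_m3(imperv, 25.0).sum())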

  9. Large-scale functional models of visual cortex for remote sensing

    SciTech Connect

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E; Swaminarayan, Sriram; Bettencourt, Luis; Landecker, Will

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  10. Influenza epidemic spread simulation for Poland — a large scale, individual based model study

    NASA Astrophysics Data System (ADS)

    Rakowski, Franciszek; Gruziel, Magdalena; Bieniasz-Krzywiec, Łukasz; Radomski, Jan P.

    2010-08-01

    In this work, the construction of an agent-based model for studying the effects of an influenza epidemic in large-scale (38 million individuals) stochastic simulations, together with the resulting various scenarios of disease spread in Poland, is reported. Simple transportation rules were employed to mimic individuals' travels in dynamic route-changing schemes, allowing for infection spread during a journey. Parameter space was checked for stable behaviour, especially towards the effective infection transmission rate variability. Although the model reported here is based on quite simple assumptions, it allowed us to observe two different types of epidemic scenarios: characteristic for urban and rural areas. This differentiates it from the results obtained in analogous studies for the UK or US, where settlement and daily commuting patterns are both substantially different and more diverse. The resulting epidemic scenarios from these ABM simulations were compared with simple, differential-equation-based SIR models, with both types of results displaying strong similarities. The pDYN software platform developed here is currently used in the next stage of the project to study various epidemic mitigation strategies.
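
    The differential-equation benchmark mentioned above is the classic SIR system; the sketch below integrates it with illustrative parameter values (beta and gamma are assumed, not taken from the study) to produce the kind of epidemic curve the agent-based scenarios were compared against.

      import numpy as np
      from scipy.integrate import solve_ivp

      def sir_rhs(t, y, beta, gamma):
          s, i, r = y
          return [-beta * s * i,               # new infections
                  beta * s * i - gamma * i,    # infectious balance
                  gamma * i]                   # recoveries

      beta, gamma = 0.35, 0.14                 # per-day rates (assumed), R0 ~ 2.5
      y0 = [1.0 - 1e-5, 1e-5, 0.0]             # population fractions
      sol = solve_ivp(sir_rhs, (0.0, 200.0), y0, args=(beta, gamma),
                      dense_output=True)

      t = np.linspace(0.0, 200.0, 201)
      s, i, r = sol.sol(t)
      print("epidemic peak on day %d with %.1f%% infectious"
            % (t[i.argmax()], 100.0 * i.max()))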

  11. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    SciTech Connect

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J.

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study was used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.

  12. A method to search for large-scale concavities in asteroid shape models

    NASA Astrophysics Data System (ADS)

    Devogèle, M.; Rivet, J. P.; Tanga, P.; Bendjoya, Ph.; Surdej, J.; Bartczak, P.; Hanus, J.

    2015-11-01

    Photometric light-curve inversion of minor planets has proven to produce a unique model solution only under the hypothesis that the asteroid is convex. However, it has been suggested that the resulting shape model, in the case of non-convex asteroids, is the convex hull of the true non-convex shape. While a convex shape is already useful to provide the overall aspect of the target, much information about the real shape is missed, as we know that asteroids are very irregular. It is commonly accepted that large flat areas sometimes appearing on shapes derived from light curves correspond to concave areas, but this information has not been further explored and exploited so far. We present in this paper a method that allows one to predict the presence of concavities from such flat regions. This method analyses the distribution of the local normals to the facets composing shape models to detect abnormally large flat surfaces. In order to test our approach, we consider here its application to a large family of synthetic asteroid shapes, and to real asteroids with large-scale concavities, whose detailed shapes are known from other kinds of observations (radar and spacecraft encounters). The method that we propose has proven to be reliable and capable of providing a qualitative indication of the relevance of concavities on well-constrained asteroid shapes derived from purely photometric data sets.
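
    As a rough numerical stand-in for the normal-distribution analysis described above (the tolerance and the toy mesh are hypothetical, not the paper's values), the sketch below computes facet normals and areas of a triangulated shape model and measures how much surface area is nearly parallel to a given direction; fractions far above what a smooth convex body would give would flag the suspiciously flat areas associated with concavities.

      import numpy as np

      def facet_normals_areas(vertices, faces):
          """Unit normals and areas of the triangular facets of a shape model."""
          v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
          cross = np.cross(v1 - v0, v2 - v0)
          area = 0.5 * np.linalg.norm(cross, axis=1)
          return cross / (2.0 * area)[:, None], area

      def flat_fraction(normals, areas, axis, cos_tol=0.995):
          """Fraction of total area whose normal lies within ~6 deg of `axis`."""
          axis = np.asarray(axis, float)
          axis = axis / np.linalg.norm(axis)
          close = normals @ axis > cos_tol
          return areas[close].sum() / areas.sum()

      # toy usage with a tetrahedron (illustration only)
      verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
      faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
      n, a = facet_normals_areas(verts, faces)
      print("area fraction facing -z: %.2f" % flat_fraction(n, a, [0, 0, -1]))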

  13. THE LARGE-SCALE BIAS OF DARK MATTER HALOS: NUMERICAL CALIBRATION AND MODEL TESTS

    SciTech Connect

    Tinker, Jeremy L.; Robertson, Brant E.; Kravtsov, Andrey V.; Klypin, Anatoly; Warren, Michael S.; Yepes, Gustavo; Gottloeber, Stefan

    2010-12-01

    We measure the clustering of dark matter halos in a large set of collisionless cosmological simulations of the flat ΛCDM cosmology. Halos are identified using the spherical overdensity algorithm, which finds the mass around isolated peaks in the density field such that the mean density is Δ times the background. We calibrate fitting functions for the large-scale bias that are adaptable to any value of Δ we examine. We find a ~6% scatter about our best-fit bias relation. Our fitting functions couple to the halo mass functions of Tinker et al. such that the bias of all dark matter is normalized to unity. We demonstrate that the bias of massive, rare halos is higher than that predicted in the modified ellipsoidal collapse model of Sheth et al. and approaches the predictions of the spherical collapse model for the rarest halos. Halo bias results based on friends-of-friends halos identified with linking length 0.2 are systematically lower than for halos with the canonical Δ = 200 overdensity by ~10%. In contrast to our previous results on the mass function, we find that the universal bias function evolves very weakly with redshift, if at all. We use our numerical results, both for the mass function and the bias relation, to test the peak-background split model for halo bias. We find that the peak-background split achieves a reasonable agreement with the numerical results, but ~20% residuals remain, both at high and low masses.
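
    For orientation, the peak-background split prediction that the abstract tests against has a particularly simple closed form in the spherical-collapse (Press-Schechter-like) case; the sketch below evaluates it. This is not the paper's calibrated fitting function, only the textbook reference expression.

      import numpy as np

      DELTA_C = 1.686   # linear spherical-collapse overdensity threshold

      def pbs_bias(nu):
          """Peak-background split bias b(nu) = 1 + (nu^2 - 1)/delta_c,
          with peak height nu = delta_c / sigma(M)."""
          nu = np.asarray(nu, dtype=float)
          return 1.0 + (nu ** 2 - 1.0) / DELTA_C

      for nu in (0.7, 1.0, 2.0, 4.0):
          print("nu = %.1f  ->  b = %.2f" % (nu, pbs_bias(nu)))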

  14. A polymer model explains the complexity of large-scale chromatin folding.

    PubMed

    Barbieri, Mariano; Fraser, James; Lavitas, Liron-Mark; Chotalia, Mita; Dostie, Josée; Pombo, Ana; Nicodemi, Mario

    2013-01-01

    The underlying global organization of chromatin within the cell nucleus has been the focus of intense recent research. Hi-C methods have allowed for the detection of genome-wide chromatin interactions, revealing a complex large-scale organization where chromosomes tend to partition into megabase-sized "topological domains" of local chromatin interactions and intra-chromosomal contacts extend over much longer scales, in a cell-type- and chromosome-specific manner. Until recently, the distinct chromatin folding properties observed experimentally have been difficult to explain in a single conceptual framework. We reported that a simple polymer-physics model of chromatin, the strings and binders switch (SBS) model, succeeds in describing the full range of chromatin configurations observed in vivo. The SBS model simulates the interactions between randomly diffusing binding molecules and binding sites on a polymer chain. It explains how polymer architectural patterns can be established, how different stable conformations can be produced and how conformational changes can be reliably regulated by simple strategies, such as protein upregulation or epigenetic modifications, via fundamental thermodynamic mechanisms.

  15. A polymer model explains the complexity of large-scale chromatin folding

    PubMed Central

    Barbieri, Mariano; Fraser, James; Lavitas, Liron-Mark; Chotalia, Mita; Dostie, Josée; Pombo, Ana; Nicodemi, Mario

    2013-01-01

    The underlying global organization of chromatin within the cell nucleus has been the focus of intense recent research. Hi-C methods have allowed for the detection of genome-wide chromatin interactions, revealing a complex large-scale organization where chromosomes tend to partition into megabase-sized “topological domains” of local chromatin interactions and intra-chromosomal contacts extend over much longer scales, in a cell-type- and chromosome-specific manner. Until recently, the distinct chromatin folding properties observed experimentally have been difficult to explain in a single conceptual framework. We reported that a simple polymer-physics model of chromatin, the strings and binders switch (SBS) model, succeeds in describing the full range of chromatin configurations observed in vivo. The SBS model simulates the interactions between randomly diffusing binding molecules and binding sites on a polymer chain. It explains how polymer architectural patterns can be established, how different stable conformations can be produced and how conformational changes can be reliably regulated by simple strategies, such as protein upregulation or epigenetic modifications, via fundamental thermodynamic mechanisms. PMID:23823730

  16. A Model for Managing Large-Scale Change: A Higher Education Perspective.

    ERIC Educational Resources Information Center

    Bruyns, H. J.

    2001-01-01

    Discusses key components and critical issues related to managing large-scale change in higher education. Explores reasons for inappropriate change patterns and suggests guidelines for establishing appropriate change paradigms. (EV)

  18. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.
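
    The curation strategy described above, restricting a modeling set to measurements from a single assay type rather than pooling everything per target, can be sketched as follows; the column names, descriptor table and model choice are hypothetical and do not reproduce the authors' pipeline.

      import pandas as pd
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split

      def build_modeling_set(records, assay_type):
          """Keep one record per compound, measured with the chosen assay type only."""
          sub = records[records["assay_type"] == assay_type]
          return sub.groupby("compound_id", as_index=False).agg(
              pIC50=("pIC50", "median"))            # median over replicate measurements

      def fit_qsar(modeling_set, descriptors):
          """descriptors: numeric feature DataFrame indexed by compound_id."""
          data = modeling_set.join(descriptors, on="compound_id").dropna()
          X, y = data[descriptors.columns], data["pIC50"]
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
          model = RandomForestRegressor(n_estimators=200, random_state=0)
          model.fit(X_tr, y_tr)
          return model, model.score(X_te, y_te)     # R^2 on the held-out split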

  19. Modeling long-term, large-scale sediment storage using a simple sediment budget approach

    NASA Astrophysics Data System (ADS)

    Naipal, Victoria; Reick, Christian; Van Oost, Kristof; Hoffmann, Thomas; Pongratz, Julia

    2016-05-01

    Currently, the anthropogenic perturbation of the biogeochemical cycles remains unquantified due to the poor representation of lateral fluxes of carbon and nutrients in Earth system models (ESMs). This lateral transport of carbon and nutrients between terrestrial ecosystems is strongly affected by accelerated soil erosion rates. However, the quantification of global soil erosion by rainfall and runoff, and the resulting redistribution is missing. This study aims at developing new tools and methods to estimate global soil erosion and redistribution by presenting and evaluating a new large-scale coarse-resolution sediment budget model that is compatible with ESMs. This model can simulate spatial patterns and long-term trends of soil redistribution in floodplains and on hillslopes, resulting from external forces such as climate and land use change. We applied the model to the Rhine catchment using climate and land cover data from the Max Planck Institute Earth System Model (MPI-ESM) for the last millennium (here AD 850-2005). Validation is done using observed Holocene sediment storage data and observed scaling between sediment storage and catchment area. We find that the model reproduces the spatial distribution of floodplain sediment storage and the scaling behavior for floodplains and hillslopes as found in observations. After analyzing the dependence of the scaling behavior on the main parameters of the model, we argue that the scaling is an emergent feature of the model and mainly dependent on the underlying topography. Furthermore, we find that land use change is the main contributor to the change in sediment storage in the Rhine catchment during the last millennium. Land use change also explains most of the temporal variability in sediment storage in floodplains and on hillslopes.

  20. Evaluation of large-scale meteorological patterns associated with temperature extremes in the NARCCAP regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Waliser, Duane E.; Lee, Huikyo; Neelin, J. David; Lintner, Benjamin R.; McGinnis, Seth; Mearns, Linda O.; Kim, Jinwon

    2015-12-01

    Large-scale meteorological patterns (LSMPs) associated with temperature extremes are evaluated in a suite of regional climate model (RCM) simulations contributing to the North American Regional Climate Change Assessment Program. LSMPs are characterized through composites of surface air temperature, sea level pressure, and 500 hPa geopotential height anomalies concurrent with extreme temperature days. Six of the seventeen RCM simulations are driven by boundary conditions from reanalysis while the other eleven are driven by one of four global climate models (GCMs). Four illustrative case studies are analyzed in detail. Model fidelity in LSMP spatial representation is high for cold winter extremes near Chicago. Winter warm extremes are captured by most RCMs in northern California, with some notable exceptions. Model fidelity is lower for cool summer days near Houston and extreme summer heat events in the Ohio Valley. Physical interpretation of these patterns and identification of well-simulated cases, such as for Chicago, boosts confidence in the ability of these models to simulate days in the tails of the temperature distribution. Results appear consistent with the expectation that the ability of an RCM to reproduce a realistically shaped frequency distribution for temperature, especially at the tails, is related to its fidelity in simulating LSMPs. Each ensemble member is ranked for its ability to reproduce LSMPs associated with observed warm and cold extremes, identifying systematically high performing RCMs and the GCMs that provide superior boundary forcing. The methodology developed here provides a framework for identifying regions where further process-based evaluation would improve the understanding of simulation error and help guide future model improvement and downscaling efforts.
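
    The core compositing step can be summarized in a few lines: select the days when the reference-point temperature exceeds a high percentile and average the concurrent anomaly fields. The sketch below uses synthetic arrays and a 95th-percentile threshold purely for illustration; it is not the study's processing chain.

      import numpy as np

      def lsmp_composite(t_ref, field_anom, quantile=0.95):
          """Composite `field_anom` (time, lat, lon) over the days when the
          reference temperature series `t_ref` (time,) exceeds its threshold."""
          extreme = t_ref >= np.quantile(t_ref, quantile)
          return field_anom[extreme].mean(axis=0), int(extreme.sum())

      # synthetic example: 10 years of daily data on a 40 x 60 grid
      rng = np.random.default_rng(0)
      t_ref = rng.normal(size=3650)                   # temperature anomaly at one point
      slp_anom = rng.normal(size=(3650, 40, 60))      # sea level pressure anomaly fields
      composite, ndays = lsmp_composite(t_ref, slp_anom)
      print(composite.shape, "composite over", ndays, "extreme days")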

  1. Predictions of a non-Gaussian model for large scale structure

    SciTech Connect

    Fan, Z.H.; Bardeen, J.M.

    1992-06-26

    A modified CDM model for the origin of structure in the universe, based on an inflation model with two interacting scalar fields, is analyzed to make predictions for the statistical properties of the density and velocity fields and the microwave background anisotropy. The initial gauge-invariant potential ζ, which is defined as ζ = δρ/(ρ + p) + 3φ, where φ is the curvature perturbation amplitude and p is the pressure, is the sum of a Gaussian field φ₁ and the square of a Gaussian field φ₂. A Harrison-Zel'dovich scale-invariant power spectrum is assumed for φ₁, and a log-normal 'peak' power spectrum for φ₂. The location and the width of the peak are described by parameters k_c and a, respectively. The model is motivated to some extent by inflation models with two interacting scalar fields, but is mainly interesting as an example of a model whose statistical properties change with scale. On small scales, it is almost identical to a standard scale-invariant Gaussian CDM model. On scales near the location of the peak of the non-Gaussian field, the distributions have long tails in high positive values of the density and velocity fields. Thus, it is easier to get large-scale streaming velocities than in the standard CDM model. The quadrupole amplitude of fluctuations of the cosmic microwave background radiation and the rms variation of the temperature field smoothed with a 10° FWHM Gaussian are calculated; a reasonable agreement is found with the new COBE results.
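
    The construction of the non-Gaussian potential, a Gaussian field plus the square of a second Gaussian field with a peaked spectrum, is easy to reproduce schematically. The 2D toy below (the paper works in 3D, and the spectra and normalizations here are arbitrary) generates zeta = phi1 + phi2^2 and checks that the result is skewed.

      import numpy as np

      def gaussian_field(n, pk, seed):
          """Gaussian random field on an n x n grid with power spectrum pk(k)."""
          rng = np.random.default_rng(seed)
          k1d = np.fft.fftfreq(n) * n
          kx, ky = np.meshgrid(k1d, k1d, indexing="ij")
          kk = np.hypot(kx, ky)
          kk_safe = np.where(kk > 0, kk, 1.0)             # avoid k = 0
          amp = np.where(kk > 0, np.sqrt(pk(kk_safe)), 0.0)
          noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
          return np.real(np.fft.ifft2(amp * noise))

      n = 256
      phi1 = gaussian_field(n, lambda k: k ** -3.0, seed=1)    # smooth power law
      phi2 = gaussian_field(n, lambda k: np.exp(-0.5 * (np.log(k / 20.0) / 0.3) ** 2),
                            seed=2)                            # narrow 'peak' spectrum
      zeta = phi1 + phi2 ** 2                                  # Gaussian + chi^2 piece

      skew = ((zeta - zeta.mean()) ** 3).mean() / zeta.std() ** 3
      print("skewness of zeta: %.2f" % skew)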

  2. Large-scale modelling of the divergent spectrin repeats in nesprins: giant modular proteins.

    PubMed

    Autore, Flavia; Pfuhl, Mark; Quan, Xueping; Williams, Aisling; Roberts, Roland G; Shanahan, Catherine M; Fraternali, Franca

    2013-01-01

    Nesprin-1 and nesprin-2 are nuclear envelope (NE) proteins characterized by a common structure of an SR (spectrin repeat) rod domain and a C-terminal transmembrane KASH [Klarsicht-ANC-Syne-homology] domain and display N-terminal actin-binding CH (calponin homology) domains. Mutations in these proteins have been described in Emery-Dreifuss muscular dystrophy and attributed to disruptions of interactions at the NE with the nesprins' binding partners, lamin A/C and emerin. Evolutionary analysis of the rod domains of the nesprins has shown that they are almost entirely composed of unbroken SR-like structures. We present a bioinformatical approach to accurate definition of the boundaries of each SR by comparison with canonical SR structures, allowing for a large-scale homology modelling of the 74 nesprin-1 and 56 nesprin-2 SRs. The exposed and evolutionarily conserved residues identify important pbs for protein-protein interactions that can guide tailored binding experiments. Most importantly, the bioinformatics analyses and the 3D models have been central to the design of selected constructs for protein expression. 1D NMR and CD spectra have been acquired for the expressed SRs, showing a folded, stable structure with high α-helical content, typical of SRs. Molecular Dynamics simulations have been performed to study the structural and elastic properties of consecutive SRs, revealing insights into the mechanical properties adopted by these modules in the cell.

  3. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    PubMed

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J R; Ballam, Joan M

    2015-04-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health.

  4. Large-Scale Modelling of the Divergent Spectrin Repeats in Nesprins: Giant Modular Proteins

    PubMed Central

    Autore, Flavia; Pfuhl, Mark; Quan, Xueping; Williams, Aisling; Roberts, Roland G.; Shanahan, Catherine M.; Fraternali, Franca

    2013-01-01

    Nesprin-1 and nesprin-2 are nuclear envelope (NE) proteins characterized by a common structure of an SR (spectrin repeat) rod domain and a C-terminal transmembrane KASH [Klarsicht–ANC–Syne-homology] domain and display N-terminal actin-binding CH (calponin homology) domains. Mutations in these proteins have been described in Emery-Dreifuss muscular dystrophy and attributed to disruptions of interactions at the NE with the nesprins' binding partners, lamin A/C and emerin. Evolutionary analysis of the rod domains of the nesprins has shown that they are almost entirely composed of unbroken SR-like structures. We present a bioinformatical approach to accurate definition of the boundaries of each SR by comparison with canonical SR structures, allowing for a large-scale homology modelling of the 74 nesprin-1 and 56 nesprin-2 SRs. The exposed and evolutionarily conserved residues identify important pbs for protein-protein interactions that can guide tailored binding experiments. Most importantly, the bioinformatics analyses and the 3D models have been central to the design of selected constructs for protein expression. 1D NMR and CD spectra have been acquired for the expressed SRs, showing a folded, stable structure with high α-helical content, typical of SRs. Molecular Dynamics simulations have been performed to study the structural and elastic properties of consecutive SRs, revealing insights into the mechanical properties adopted by these modules in the cell. PMID:23671687

  5. A Fractal Model for the Shear Behaviour of Large-Scale Opened Rock Joints

    NASA Astrophysics Data System (ADS)

    Li, Y.; Oh, J.; Mitra, R.; Canbulat, I.

    2017-01-01

    This paper presents a joint constitutive model that represents the shear behaviour of a large-scale opened rock joint. Evaluation of the degree of opening is made by considering the ratio between the joint wall aperture and the joint amplitude. Scale dependence of the surface roughness is investigated by approximating a natural joint profile to a fractal curve patterned in self-affinity. Developed scaling laws show the slopes of critical waviness and critical unevenness tend to flatten with increased sampling length. Geometrical examination of four 400-mm joint profiles agrees well with the suggested formulations involving multi-order asperities and fractal descriptors. Additionally, a fractal-based formulation is proposed to estimate the peak shear displacements of rock joints at varying scales, which shows a good correlation with experimental data taken from the literature. Parameters involved in the constitutive law can be acquired by inspecting roughness features of sampled rock joints. Thus, the model can be implemented in numerical software for the stability analysis of the rock mass with opened joints.
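
    The scale dependence exploited above can be illustrated numerically: for a self-affine profile, the mean slope of the waviness measured over a sampling window of length L scales roughly as L^(H-1) and therefore flattens as L grows. The sketch below (the Hurst exponent and window lengths are arbitrary, not fitted to any joint) generates such a profile by Fourier filtering and prints the trend.

      import numpy as np

      def self_affine_profile(n, hurst=0.8, seed=0):
          """1D self-affine profile with power spectrum P(k) ~ k^-(2H+1)."""
          rng = np.random.default_rng(seed)
          k = np.fft.rfftfreq(n)
          amp = np.zeros_like(k)
          amp[1:] = k[1:] ** (-(2.0 * hurst + 1.0) / 2.0)
          phase = np.exp(2j * np.pi * rng.random(k.size))
          return np.fft.irfft(amp * phase, n)

      profile = self_affine_profile(2 ** 16)
      for window in (2 ** 8, 2 ** 10, 2 ** 12, 2 ** 14):
          segs = profile[: (profile.size // window) * window].reshape(-1, window)
          waviness_slope = (segs.max(axis=1) - segs.min(axis=1)).mean() / window
          print("L = %6d  mean waviness slope ~ %.3e" % (window, waviness_slope))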

  6. Prospective Large-Scale Field Study Generates Predictive Model Identifying Major Contributors to Colony Losses

    PubMed Central

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J. R.; Ballam, Joan M.

    2015-01-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  7. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details on the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project and the related data products for the SPP and SO missions, as well as the supporting global heliospheric simulations, will be discussed.

  8. Augmenting a Large-Scale Hydrology Model to Reproduce Groundwater Variability

    NASA Astrophysics Data System (ADS)

    Stampoulis, D.; Reager, J. T., II; Andreadis, K.; Famiglietti, J. S.

    2016-12-01

    To understand the influence of groundwater on terrestrial ecosystems and society, global assessment of groundwater temporal fluctuations is required. A water table was initialized in the Variable Infiltration Capacity (VIC) hydrologic model in a semi-realistic approach to account for groundwater variability. Global water table depth data derived from observations at nearly 2 million well sites compiled from government archives and published literature, as well as groundwater model simulations, were used to create a new soil layer of varying depth for each model grid cell. The new 4-layer version of VIC, hereafter named VIC-4L, was run with and without assimilating NASA's Gravity Recovery and Climate Experiment (GRACE) observations. The results were compared with simulations using the original VIC version (named VIC-3L) with GRACE assimilation, while all runs were compared with well data.
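
    As a highly simplified sketch of what assimilating a storage observation into such a model can look like (a nudging-style update with an assumed gain, not the actual VIC-4L/GRACE scheme), the snippet below pushes the modelled terrestrial water storage anomaly toward an observed anomaly by adjusting the groundwater store, which typically carries the slow variability.

      def nudge_groundwater(soil_moisture_mm, groundwater_mm, obs_tws_anom_mm,
                            tws_climatology_mm, gain=0.3):
          """Return updated groundwater storage (mm); `gain` is an assumed
          fraction of the innovation applied per update step."""
          model_anom = soil_moisture_mm + groundwater_mm - tws_climatology_mm
          innovation = obs_tws_anom_mm - model_anom        # observation minus model
          return groundwater_mm + gain * innovation

      # toy example for a single grid cell and four monthly observations
      gw = 350.0
      for obs in (20.0, 15.0, -5.0, -30.0):                # storage anomalies in mm
          gw = nudge_groundwater(120.0, gw, obs, tws_climatology_mm=460.0)
          print("updated groundwater storage: %.1f mm" % gw)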

  9. Averaging over spatial heterogeneity leads to overestimation of ET in large scale Earth system models

    NASA Astrophysics Data System (ADS)

    Rouholahnejad Freund, Elham; Fan, Ying; Kirchner, James W.

    2017-04-01

    Hydrologic processes are heterogeneous at far smaller spatial scales than a typical Earth system model grid (1-5 degree, 100-500 km). Thus, estimates of evapotranspiration (ET) in most Earth system models average over considerable sub-grid heterogeneity in land surface properties, precipitation (P), and potential evapotranspiration (PET). This spatial averaging could potentially bias ET estimates, due to the nonlinearities in the underlying relationships. Here we estimate the effects of spatial heterogeneity on grid-cell-averaged ET, as seen from the atmosphere over heterogeneous landscapes at global scale. Using a Budyko framework to express ET as a function of P and PET, we quantify how sub-grid heterogeneity affects average ET at the scale of typical Earth system model grid cells (1° by 1°). We show that averaging over sub-grid heterogeneity in P and PET, as typical Earth system models do, leads to overestimation of average ET. Our analysis at global scale shows that the effects of sub-grid heterogeneity will be most pronounced in steep mountainous areas where the topographic gradient is high and where P is inversely correlated with PET across the landscape. This approach yields a simple conceptual framework and mathematical expressions for determining whether, and how much, spatial heterogeneity can affect regional ET fluxes as seen from the atmosphere. Correcting for this overestimation of ET in Earth system models will be important for future temperature predictions, since smaller values of ET imply greater sensible heat fluxes, thus potentially amplifying dry and warm conditions in the context of climate change. The work presented here provides the basis for translating the quantified heterogeneity bias into correction factors for large scale Earth system models, which will be the focus of future work.
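
    The overestimation argument can be reproduced with a few lines of arithmetic: because the Budyko curve is concave, ET computed from grid-mean P and PET exceeds the mean of ET computed cell by cell, especially when P and PET are anti-correlated. The sketch below uses one common form of the Budyko curve and hypothetical sub-grid values of P and PET, purely as an illustration of the effect.

      import numpy as np

      def budyko_et(p, pet):
          """Long-term ET (mm/yr) from P and PET via the classical Budyko curve."""
          phi = pet / p                                     # aridity index
          return p * np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

      # hypothetical sub-grid heterogeneity: wet, cool uplands to a dry, warm valley
      p   = np.array([1800.0, 1200.0,  600.0,  300.0])      # mm/yr
      pet = np.array([ 500.0,  800.0, 1100.0, 1400.0])      # mm/yr

      et_of_means = budyko_et(p.mean(), pet.mean())         # coarse-grid estimate
      mean_of_et  = budyko_et(p, pet).mean()                # heterogeneous landscape
      print("ET(mean P, mean PET) = %.0f mm/yr" % et_of_means)
      print("mean of sub-grid ET  = %.0f mm/yr" % mean_of_et)
      print("overestimation: %.0f%%" % (100.0 * (et_of_means / mean_of_et - 1.0)))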

  10. Improving urban streamflow forecasting using a high-resolution large scale modeling framework

    NASA Astrophysics Data System (ADS)

    Read, Laura; Hogue, Terri; Gochis, David; Salas, Fernando

    2016-04-01

    Urban flood forecasting is a critical component in effective water management, emergency response, regional planning, and disaster mitigation. As populations across the world continue to move to cities (~1.8% growth per year), and studies indicate that significant flood damages are occurring outside the floodplain in urban areas, the ability to model and forecast flow over the urban landscape becomes critical to maintaining infrastructure and society. In this work, we use the Weather Research and Forecasting- Hydrological (WRF-Hydro) modeling framework as a platform for testing improvements to representation of urban land cover, impervious surfaces, and urban infrastructure. The three improvements we evaluate include: updating the land cover to the latest 30-meter National Land Cover Dataset, routing flow over a high-resolution 30-meter grid, and testing a methodology for integrating an urban drainage network into the routing regime. We evaluate performance of these improvements in the WRF-Hydro model for specific flood events in the Denver-Metro Colorado domain, comparing to historic gaged streamflow for retrospective forecasts. Denver-Metro provides an interesting case study as it is a rapidly growing urban/peri-urban region with an active history of flooding events that have caused significant loss of life and property. Considering that the WRF-Hydro model will soon be implemented nationally in the U.S. to provide flow forecasts on the National Hydrography Dataset Plus river reaches - increasing capability from 3,600 forecast points to 2.7 million, we anticipate that this work will support validation of this service in urban areas for operational forecasting. Broadly, this research aims to provide guidance for integrating complex urban infrastructure with a large-scale, high resolution coupled land-surface and distributed hydrologic model.

  11. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    SciTech Connect

    Vishnu, Abhinav; Agarwal, Khushbu

    2015-09-08

    In this paper, we propose a work-stealing runtime, Library for Work Stealing (LibWS), using the MPI one-sided model for designing a scalable FP-Growth (the de facto frequent pattern mining algorithm) on large-scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.

  12. Numerical simulations of large-scale detonation tests in the RUT facility by the LES model.

    PubMed

    Zbikowski, Mateusz; Makarov, Dmitriy; Molkov, Vladimir

    2010-09-15

    The LES model, based on the progress variable equation and the gradient method to simulate propagation of the reaction front within the detonation wave, which was recently verified by the ZND theory, is tested in this study against two large-scale experiments in the RUT facility. The facility was a 27.6 m x 6.3 m x 6.55 m compartment with complex three-dimensional geometry. Experiments with 20% and 25.5% hydrogen-air mixtures and different locations of direct detonation initiation were simulated. The sensitivity of the 3D simulations to control volume size and type was tested and found to be stringent compared to the planar detonation case. The maximum simulated pressure peak was found to be lower than the theoretical von Neumann spike value for the planar detonation and larger than the Chapman-Jouguet pressure, thus indicating that it is more challenging to keep the numerical reaction zone behind the leading front of the numerical shock for curved fronts with large control volumes. The simulations demonstrated agreement with the experimental data. Copyright 2010 Elsevier B.V. All rights reserved.

  13. Extremely large-scale simulation of a Kardar-Parisi-Zhang model using graphics cards.

    PubMed

    Kelling, Jeffrey; Ódor, Géza

    2011-12-01

    The octahedron model introduced recently has been implemented on graphics cards, which permits extremely large-scale simulations via binary lattice gases and bit-coded algorithms. We confirm scaling behavior belonging to the two-dimensional Kardar-Parisi-Zhang universality class and find a surface growth exponent β = 0.2415(15) on 2^17 × 2^17 systems, ruling out β = 1/4 suggested by field theory. The maximum speedup with respect to a single CPU is 240. The steady state has been analyzed by finite-size scaling and a roughness exponent α = 0.393(4) is found. Correction-to-scaling exponents are computed and the power-spectrum density of the steady state is determined. We calculate the universal scaling functions and cumulants and show that the limit distribution can be obtained for the sizes considered. We provide numerical fitting for the small- and large-tail behavior of the steady-state scaling function of the interface width.
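
    For readers unfamiliar with how the growth exponent is extracted, the toy below runs a small 1+1-dimensional restricted solid-on-solid (RSOS) deposition, a KPZ-class model, and fits beta from the log-log slope of the interface width versus time. It only illustrates the fitting procedure: the study itself simulates the 2+1-dimensional octahedron model on GPUs, and the lattice size and run length here are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      L, monolayers = 1024, 200                       # lattice size, deposition time
      h = np.zeros(L, dtype=np.int64)

      times, widths = [], []
      for t in range(1, monolayers * L + 1):
          i = rng.integers(L)
          # RSOS rule: deposit only if height steps to both neighbours stay <= 1
          if h[i] + 1 - h[(i - 1) % L] <= 1 and h[i] + 1 - h[(i + 1) % L] <= 1:
              h[i] += 1
          if t % (5 * L) == 0:                        # sample every 5 monolayers
              times.append(t / L)
              widths.append(h.std())

      beta = np.polyfit(np.log(times), np.log(widths), 1)[0]
      print("fitted growth exponent beta ~ %.2f (1+1D KPZ value is 1/3)" % beta)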

  14. Acoustic characteristics of a large-scale augmentor wing model at forward speed

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.

    1973-01-01

    The augmentor wing concept is being studied as one means of attaining short takeoff and landing (STOL) performance in turbofan powered aircraft. Because of the stringent noise requirements for STOL operation, the acoustics of the augmentor wing are undergoing extensive research. The results of a wind tunnel investigation of a large-scale swept augmentor model at forward speed are presented. The augmentor was not acoustically treated, although the compressor supplying the high pressure primary air was treated to allow the measurement of only the augmentor noise. Installing the augmentor flap and shroud on the slot primary nozzle caused the acoustic dependence on jet velocity to change from eighth power to sixth power. Deflecting the augmentor at constant power increased the perceived noise level in the forward quadrant. The effect of airspeed was small. A small aft shift in perceived noise directivity was experienced with no significant change in sound power. Sealing the lower augmentor slot at a flap deflection of 70 deg reduced the perceived noise level in the aft quadrant. The seal prevented noise from propagating through the slot.

  15. Acoustic characteristics of a large scale wind-tunnel model of a jet flap aircraft

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Aiken, T. N.; Aoyagi, K.

    1975-01-01

    The expanding-duct jet flap (EJF) concept is studied to determine STOL performance in turbofan-powered aircraft. The EJF is used to solve the problem of ducting the required volume of air into the wing by providing an expanding cavity between the upper and lower surfaces of the flap. The results are presented of an investigation of the acoustic characteristics of the EJF concept on a large-scale aircraft model powered by JT15D engines. The noise of the EJF is generated by acoustic dipoles, as shown by the sixth-power dependence of the noise on jet velocity. These sources result from the interaction of the flow turbulence with the flap internal and external surfaces and the trailing edges. Increasing the trailing-edge jet from 70 percent span to 100 percent span increased the noise 2 dB for the equivalent nozzle area. Blowing at the knee of the flap rather than at the trailing edge reduced the noise 5 to 10 dB by displacing the jet from the trailing edge and providing shielding from high-frequency noise. Deflecting the flap and varying the angle of attack modified the directivity of the underwing noise but did not affect the peak noise. A forward speed of 33.5 m/sec (110 ft/sec) reduced the dipole noise less than 1 dB.

  16. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) Climate change impact assessments on water resources and dynamics; (ii) The European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) Design variables for infrastructure constructions; (iv) Spatial water-resource mapping; (v) Operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) Input to oceanographic models for operational forecasts and marine status assessments; (vii) Research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The Hype web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  17. Primordial non-Gaussianity: Large-scale structure signature in the perturbative bias model

    NASA Astrophysics Data System (ADS)

    McDonald, Patrick

    2008-12-01

    I compute the effect on the power spectrum of tracers of the large-scale mass-density field (e.g., galaxies) of primordial non-Gaussianity of the form Φ = φ + f_NL(φ^2 − ⟨φ^2⟩) + g_NL φ^3 + …, where Φ is proportional to the initial potential fluctuations and φ is a Gaussian field, using beyond-linear-order perturbation theory. I find that the need to eliminate large higher-order corrections necessitates the addition of a new term to the bias model, proportional to φ, i.e., δ_g = b_δ δ + b_φ f_NL φ + …, with all the consequences this implies for clustering statistics, e.g., P_gg(k) = b_δ^2 P_δδ(k) + 2 b_δ b_φ f_NL P_φδ(k) + b_φ^2 f_NL^2 P_φφ(k) + …. This result is consistent with calculations based on a model for dark matter halo clustering, showing that the form is quite general, not requiring assumptions about peaks, or the formation or existence of halos. The halo model plays the same role it does in the usual bias picture, giving a prediction for b_φ for galaxies known to sit in a certain type of halo. Previous projections for future constraints based on this effect have been very conservative: there is enough volume at z ≲ 2 to measure f_NL to ~±1, with much more volume at higher z. As a prelude to the bias calculation, I point out that the beyond-linear (in φ) corrections to the power spectrum of mass-density perturbations are naively infinite, so it is dangerous to assume they are negligible; however, the infinite part can be removed by a renormalization of the fluctuation amplitude, with the residual k-dependent corrections negligible for models allowed by current constraints.
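
    The tracer power spectrum quoted above can be evaluated directly once the bias coefficients and the input spectra are specified. The sketch below transcribes that formula; the input spectra and the parameter values (b_δ, b_φ, f_NL, k range) are placeholder assumptions chosen only to show where the scale-dependent f_NL signature enters, not results from the paper.

        import numpy as np

        # Transcription of the abstract's tracer power spectrum:
        #   P_gg(k) = b_d^2 P_dd + 2 b_d b_phi f_NL P_phid + b_phi^2 f_NL^2 P_phiphi
        def tracer_power(p_dd, p_phid, p_phiphi, b_d=2.0, b_phi=1.0, f_nl=10.0):
            return (b_d**2 * p_dd
                    + 2.0 * b_d * b_phi * f_nl * p_phid
                    + b_phi**2 * f_nl**2 * p_phiphi)

        k = np.logspace(-3, -1, 50)              # h/Mpc, illustrative range
        p_dd = 1.0e4 * (k / 0.01)**-1.5          # toy matter power spectrum
        p_phid = p_dd * (0.01 / k)**2 * 1e-4     # toy cross spectrum, boosted at low k
        p_phiphi = p_dd * (0.01 / k)**4 * 1e-8   # toy potential spectrum

        p_gg = tracer_power(p_dd, p_phid, p_phiphi)
        # Boost at the largest scale relative to the Gaussian b_d^2 * P_dd term.
        print("fractional non-Gaussian boost at k=%.3f: %.2f" % (k[0], p_gg[0] / (4.0 * p_dd[0]) - 1.0))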

  18. Large Scale Debris-flow Hazard Assessment : A Geotechnical Approach and Gis Modelling

    NASA Astrophysics Data System (ADS)

    Delmonaco, G.; Leoni, G.; Margottini, C.; Puglisi, C.; Spizzichino, D.

    A deterministic approach has been developed for large-scale landslide hazard analysis carried out by ENEA, the Italian Agency for New Technologies, Energy and Environment, in the framework of TEMRAP (The European Multi-Hazard Risk Assessment Project), finalised to the application of methodologies to incorporate the reduction of natural disasters. The territory of Versilia, and in particular the basin of the Vezza river (60 km2), has been chosen as the test area of the project. The Vezza river basin was affected by over 250 shallow landslides (debris/earth flows), mainly involving the metamorphic geological formations outcropping in the area, triggered by the hydro-meteorological event of 19th June 1996. Many approaches and methodologies have been proposed in the scientific literature aimed at assessing landslide hazard and risk, depending essentially on the scope of work, availability of data and scale of representation. In the last decades landslide hazard and risk analyses have been favoured by the development of GIS techniques that have permitted to generalise, synthesise and model the stability conditions at large-scale (>1:10,000) investigation. In this work, the main results derived from the application of a geotechnical model coupled with a hydrological model for the assessment of debris-flow hazard are reported. The deterministic analysis has been developed through the following steps: 1) elaboration of a landslide inventory map through aerial photo interpretation and direct field survey; 2) generation of a database and digital maps; 3) elaboration of a DTM and slope angle map; 4) definition of a superficial soil thickness map; 5) litho-technical soil characterisation, through implementation of a back-analysis on test slopes and laboratory test analysis; 6) inference of the influence of precipitation, for distinct return times, on ponding time and pore pressure generation; 7) implementation of a slope stability model (infinite slope model) and

  19. Large-scale flow phenomena in axial compressors: Modeling, analysis, and control with air injectors

    NASA Astrophysics Data System (ADS)

    Hagen, Gregory Scott

    This thesis presents a large scale model of axial compressor flows that is detailed enough to describe the modal and spike stall inception processes, and is also amenable to dynamical systems analysis and control design. The research presented here is based on the model derived by Mezic, which shows that the flows are dominated by the competition between the blade forcing of the compressor and the overall pressure differential created by the compressor. This model describes the modal stall inception process in a similar manner as the Moore-Greitzer model, but also describes the cross sectional flow velocities, and exhibits full span and part span stall. All of these flow patterns described by the model agree with experimental data. Furthermore, the initial model is altered in order to describe the effects of three dimensional spike disturbances, which can destabilize the compressor at otherwise stable operating points. The three dimensional model exhibits flow patterns during spike stall inception that also appear in experiments. The second part of this research focuses on the dynamical systems analysis of, and control design with, the PDE model of the axial flow in the compressor. We show that the axial flow model can be written as a gradient system and illustrate some stability properties of the stalled flow. This also reveals that flows with multiple stall cells correspond to higher energy states in the compressor. The model is derived with air injection actuation, and globally stabilizing distributed controls are designed. We first present a locally optimal controller for the linearized system, and then use Lyapunov analysis to show sufficient conditions for global stability. The concept of sector nonlinearities is applied to the problem of distributed parameter systems, and by analyzing the sector property of the compressor characteristic function, completely decentralized controllers are derived. Finally, the modal decomposition and Lyapunov analysis used in

  20. Development of a realistic human airway model.

    PubMed

    Lizal, Frantisek; Elcner, Jakub; Hopke, Philip K; Jedelsky, Jan; Jicha, Miroslav

    2012-03-01

    Numerous models of human lungs with various levels of idealization have been reported in the literature; consequently, results acquired using these models are difficult to compare to in vivo measurements. We have developed a set of model components based on realistic geometries, which permits the analysis of the effects of subsequent model simplification. A realistic digital upper airway geometry, lacking only an oral cavity, has been created and proved suitable both for computational fluid dynamics (CFD) simulations and for the fabrication of physical models. Subsequently, an oral cavity was added to the tracheobronchial geometry. The airway geometry including the oral cavity was adjusted to enable fabrication of a semi-realistic model. Five physical models were created based on these three digital geometries. Two optically transparent models, one with and one without the oral cavity, were constructed for flow velocity measurements; two realistic segmented models, one with and one without the oral cavity, were constructed for particle deposition measurements; and a semi-realistic model with glass cylindrical airways was developed for optical measurements of flow velocity and in situ particle size measurements. One-dimensional phase Doppler anemometry measurements were made and compared to the CFD calculations for this model, and good agreement was obtained.

  1. Large-scale 3-D EM modelling with a Block Low-Rank multifrontal direct solver

    NASA Astrophysics Data System (ADS)

    Shantsev, Daniil V.; Jaysaval, Piyoosh; de la Kethulle de Ryhove, Sébastien; Amestoy, Patrick R.; Buttari, Alfredo; L'Excellent, Jean-Yves; Mary, Theo

    2017-06-01

    We put forward the idea of using a Block Low-Rank (BLR) multifrontal direct solver to efficiently solve the linear systems of equations arising from a finite-difference discretization of the frequency-domain Maxwell equations for 3-D electromagnetic (EM) problems. The solver uses a low-rank representation for the off-diagonal blocks of the intermediate dense matrices arising in the multifrontal method to reduce the computational load. A numerical threshold, the so-called BLR threshold, controlling the accuracy of low-rank representations was optimized by balancing errors in the computed EM fields against savings in floating point operations (flops). Simulations were carried out over large-scale 3-D resistivity models representing typical scenarios for marine controlled-source EM surveys, and in particular the SEG SEAM model which contains an irregular salt body. The flop count, size of factor matrices and elapsed run time for matrix factorization are reduced dramatically by using BLR representations and can go down to, respectively, 10, 30 and 40 per cent of their full-rank values for our largest system with N = 20.6 million unknowns. The reductions are almost independent of the number of MPI tasks and threads at least up to 90 × 10 = 900 cores. The BLR savings increase for larger systems, which reduces the factorization flop complexity from O(N^2) for the full-rank solver to O(N^m) with m = 1.4-1.6. The BLR savings are significantly larger for deep-water environments that exclude the highly resistive air layer from the computational domain. A study in a scenario where simulations are required at multiple source locations shows that the BLR solver can become competitive in comparison to iterative solvers as an engine for 3-D controlled-source electromagnetic Gauss-Newton inversion that requires forward modelling for a few thousand right-hand sides.
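
    The compression step at the heart of the BLR idea is the replacement of an off-diagonal block by a low-rank factorization whose rank is set by a numerical threshold. The sketch below illustrates that single step on a synthetic smooth block using a truncated SVD; it is only a hedged illustration of the concept, not the BLR kernels of the MUMPS solver used in the paper, and the block, threshold and sizes are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def compress_block(block, blr_threshold):
            """Truncated SVD of one block; keep singular values above threshold * sigma_1."""
            u, s, vt = np.linalg.svd(block, full_matrices=False)
            rank = int(np.sum(s > blr_threshold * s[0]))
            return u[:, :rank] * s[:rank], vt[:rank, :]      # factors X, Y^T

        # Synthetic admissible block: smooth interaction between two well-separated
        # index sets, the situation where low rank is expected.
        n = 256
        xs = np.linspace(0.0, 1.0, n)
        ys = np.linspace(3.0, 4.0, n)
        block = 1.0 / (1.0 + np.abs(xs[:, None] - ys[None, :]))**2

        X, Yt = compress_block(block, blr_threshold=1e-7)
        err = np.linalg.norm(block - X @ Yt) / np.linalg.norm(block)
        storage = 100.0 * (X.size + Yt.size) / block.size
        print("rank %d, relative error %.1e, storage %.0f%% of full rank" % (X.shape[1], err, storage))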

  2. Large-scale mapping and predictive modeling of submerged aquatic vegetation in a shallow eutrophic lake.

    PubMed

    Havens, Karl E; Harwell, Matthew C; Brady, Mark A; Sharfstein, Bruce; East, Therese L; Rodusky, Andrew J; Anson, Daniel; Maki, Ryan P

    2002-04-09

    A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August-September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.
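
    The accuracy figures quoted above (55% for a depth-only rule, 75% for depth plus sediment) come from comparing rule-based occurrence predictions against the field observations. The sketch below shows that kind of comparison on entirely synthetic data; the depth thresholds, sediment classes and "observed" presences are placeholder assumptions, not the Lake Okeechobee survey data or the authors' model.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 5000
        depth = rng.uniform(0.0, 2.0, n)                          # m
        sediment = rng.choice(["mud", "sand", "rock", "peat"], n)
        # Synthetic "truth": Chara favours shallow water over peat.
        p_true = np.clip(0.9 - 0.4 * depth, 0, 1) * np.where(sediment == "peat", 1.0, 0.3)
        observed = rng.random(n) < p_true

        pred_depth_only = depth < 1.0                             # assumed single depth threshold
        max_depth = {"mud": 0.6, "sand": 0.8, "rock": 0.5, "peat": 1.5}   # assumed depth limits
        pred_depth_sediment = depth < np.vectorize(max_depth.get)(sediment)

        for name, pred in [("depth only", pred_depth_only),
                           ("depth + sediment", pred_depth_sediment)]:
            accuracy = np.mean(pred == observed)                  # fraction of cells classified correctly
            print("%-18s accuracy: %.0f%%" % (name, 100 * accuracy))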

  3. Large-scale 3D EM modeling with a Block Low-Rank multifrontal direct solver

    NASA Astrophysics Data System (ADS)

    Shantsev, Daniil V.; Jaysaval, Piyoosh; de la Kethulle de Ryhove, Sébastien; Amestoy, Patrick R.; Buttari, Alfredo; L'Excellent, Jean-Yves; Mary, Theo

    2017-03-01

    We put forward the idea of using a Block Low-Rank (BLR) multifrontal direct solver to efficiently solve the linear systems of equations arising from a finite-difference discretization of the frequency-domain Maxwell equations for 3D electromagnetic (EM) problems. The solver uses a low-rank representation for the off-diagonal blocks of the intermediate dense matrices arising in the multifrontal method to reduce the computational load. A numerical threshold, the so-called BLR threshold, controlling the accuracy of low-rank representations was optimized by balancing errors in the computed EM fields against savings in floating point operations (flops). Simulations were carried out over large-scale 3D resistivity models representing typical scenarios for marine controlled-source EM surveys, and in particular the SEG SEAM model which contains an irregular salt body. The flop count, size of factor matrices and elapsed run time for matrix factorization are reduced dramatically by using BLR representations and can go down to, respectively, 10%, 30% and 40% of their full-rank values for our largest system with N = 20.6 million unknowns. The reductions are almost independent of the number of MPI tasks and threads at least up to 90 × 10 = 900 cores. The BLR savings increase for larger systems, which reduces the factorization flop complexity from O(N^2) for the full-rank solver to O(N^m) with m = 1.4-1.6. The BLR savings are significantly larger for deep-water environments that exclude the highly resistive air layer from the computational domain. A study in a scenario where simulations are required at multiple source locations shows that the BLR solver can become competitive in comparison to iterative solvers as an engine for 3D CSEM Gauss-Newton inversion that requires forward modelling for a few thousand right-hand sides.

  4. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    NASA Astrophysics Data System (ADS)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues which poses risks to food security, the economy, and public health, and is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context not only connection to a sewage system is of high importance, but also treatment, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g. bathing and washing. It is thus clear that water utilization and water sewerage are inseparably connected. In this study, large-scale water quality modelling is used to point out hotspots of water pollution and to gain insight into potential environmental impacts, in particular in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus in this study is on developing countries, i.e. Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and the main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the initial loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to dilution potential, especially in terms of seasonal variability. Highly varying shares of BOD sources across

  5. Large Scale Terrestrial Modeling: A Discussion of Technical and Conceptual Challenges and Solution Approaches

    NASA Astrophysics Data System (ADS)

    Rahman, M.; Aljazzar, T.; Kollet, S.; Maxwell, R.

    2012-04-01

    A number of simulation platforms have been developed to study the spatiotemporal variability of hydrologic responses to global change. Sophisticated terrestrial models demand large data sets and considerable computing resources as they attempt to include detailed physics for all relevant processes involving the feedbacks between subsurface, land surface and atmospheric processes. Access to required data (scarcity, error and uncertainty), allocation of computing resources, and post-processing/analysis are some of the well-known challenges, and they have been discussed in previous studies dealing with catchments ranging from plot-scale research (10^2 m^2), to small experimental catchments (0.1-10 km^2), and occasionally medium-sized catchments (10^2-10^3 km^2). However, there is still a lack of knowledge about large-scale simulations of the coupled terrestrial mass and energy balance over long time scales (years to decades). In this study, the interactions between the subsurface, land surface, and the atmosphere are simulated in two large-scale (>10^4 km^2) river catchments: the Luanhe catchment in the North Plain, China, and the Rur catchment, Germany. As a simulation platform, a fully coupled model (ParFlow.CLM) that links a three-dimensional variably-saturated groundwater flow model (ParFlow) with a land surface model (CLM) is used. The Luanhe and the Rur catchments have areas of 54,000 and 28,224 km^2, respectively, and are being simulated using spatial resolutions on the order of 10^2 to 10^3 m in the horizontal and 10^-2 to 10^-1 m in the vertical direction. ParFlow.CLM was configured over computational domains well beyond the actual watershed boundaries to account for cross-watershed flow. The resulting catchment models consist of up to 10^8 cells which were implemented over more than 1000 processors, each with 512 MB memory, on JUGENE hosted by the Juelich Supercomputing Centre, Germany. Consequently, large numbers of input and output files were produced for each parameter such as soil

  6. Uncertainty analysis of channel capacity assumptions in large scale hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Walsh, Alexander; Stroud, Rebecca; Willis, Thomas

    2015-04-01

    Flood modelling on national or even global scales is of great interest to re/insurers, governments and other agencies. Channel bathymetry data are not available over large areas, which is a major limitation at this scale of modelling: acquiring them requires expensive channel surveying, and the majority of remotely sensed data cannot see through water. Furthermore, representing channels as 1D models, or as explicit features in the model domain, is computationally demanding, so it is often necessary to find ways to reduce computational costs. A more efficient methodology is to make assumptions concerning the capacity of the channel and then to remove this volume from inflow hydrographs. Previous research has shown that natural channels generally conform to carry the flow of a 1-in-2 year return period (QMED). This assumption is widely used in large-scale modelling studies across the world. However, channels flowing through high-risk areas, such as urban environments, are often modified to increase their capacity and thus reduce flood risk. Simulated flood outlines are potentially very sensitive to assumptions made regarding these capacities. For example, under the 1-in-2 year assumption, the flooding associated with smaller events might be overestimated, with too much flow being modelled as out of bank. There are requirements to (i) quantify the impact of uncertainty in assumed channel capacity on simulated flooded areas, and (ii) develop more optimal capacity assumptions, depending on specific reach characteristics, so that the effects of channel modification can be better represented in future studies. This work will demonstrate findings from a preliminary uncertainty analysis that seeks to address the former requirement. A set of benchmark tests, using 2D hydraulic models, was undertaken in which different estimated return-period flows in contrasting catchments are modelled with varying channel capacity parameters. The depth and extent for each benchmark model output were
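
    The channel-capacity assumption described above amounts to clipping the inflow hydrograph at an assumed in-bank capacity and routing only the remainder over the floodplain. The sketch below shows how sensitive the out-of-bank flood volume is to that assumed capacity; the hydrograph shape, QMED value and capacity multipliers are illustrative assumptions, not values from the study.

        import numpy as np

        t = np.arange(0, 48.0, 1.0)                       # hours
        peak, t_peak, shape = 180.0, 18.0, 6.0
        # Gamma-like synthetic inflow hydrograph [m^3/s].
        inflow = peak * (t / t_peak)**shape * np.exp(shape * (1 - t / t_peak))
        inflow[0] = 0.0

        qmed = 90.0                                       # assumed 1-in-2 year flow [m^3/s]
        for fraction in (0.5, 1.0, 1.5):                  # vary the assumed channel capacity
            capacity = fraction * qmed
            out_of_bank = np.clip(inflow - capacity, 0.0, None)
            volume = np.trapz(out_of_bank, t) * 3600.0    # m^3 spilling onto the floodplain
            print("capacity %5.1f m^3/s -> out-of-bank volume %.2e m^3" % (capacity, volume))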

  7. Large-scale modeling of reactive solute transport in fracture zones of granitic bedrocks.

    PubMed

    Molinero, Jorge; Samper, Javier

    2006-01-10

    Final disposal of high-level radioactive waste in deep repositories located in fractured granite formations is being considered by several countries. The assessment of the safety of such repositories requires using numerical models of groundwater flow, solute transport and chemical processes. These models are being developed from data and knowledge gained from in situ experiments such as the Redox Zone Experiment carried out at the underground laboratory of Aspö in Sweden. This experiment aimed at evaluating the effects of the construction of the access tunnel on the hydrogeological and hydrochemical conditions of a fracture zone intersected by the tunnel. Most chemical species showed dilution trends except for bicarbonate and sulphate which unexpectedly increased with time. Molinero and Samper [Molinero, J. and Samper, J. Groundwater flow and solute transport in fracture zones: an improved model for a large-scale field experiment at Aspö (Sweden). J. Hydraul. Res., 42, Extra Issue, 157-172] presented a two-dimensional water flow and solute transport finite element model which reproduced measured drawdowns and dilution curves of conservative species. Here we extend their model by using a reactive transport which accounts for aqueous complexation, acid-base, redox processes, dissolution-precipitation of calcite, quartz, hematite and pyrite, and cation exchange between Na+ and Ca2+. The model provides field-scale estimates of cation exchange capacity of the fracture zone and redox potential of groundwater recharge. It serves also to identify the mineral phases controlling the solubility of iron. In addition, the model is useful to test the relevance of several geochemical processes. Model results rule out calcite dissolution as the process causing the increase in bicarbonate concentration and reject the following possible sources of sulphate: (1) pyrite dissolution, (2) leaching of alkaline sulphate-rich waters from a nearby rock landfill and (3) dissolution of

  8. Large-scale modeling of reactive solute transport in fracture zones of granitic bedrocks

    NASA Astrophysics Data System (ADS)

    Molinero, Jorge; Samper, Javier

    2006-01-01

    Final disposal of high-level radioactive waste in deep repositories located in fractured granite formations is being considered by several countries. The assessment of the safety of such repositories requires using numerical models of groundwater flow, solute transport and chemical processes. These models are being developed from data and knowledge gained from in situ experiments such as the Redox Zone Experiment carried out at the underground laboratory of Äspö in Sweden. This experiment aimed at evaluating the effects of the construction of the access tunnel on the hydrogeological and hydrochemical conditions of a fracture zone intersected by the tunnel. Most chemical species showed dilution trends except for bicarbonate and sulphate which unexpectedly increased with time. Molinero and Samper [Molinero, J. and Samper, J. Groundwater flow and solute transport in fracture zones: an improved model for a large-scale field experiment at Äspö (Sweden). J. Hydraul. Res., 42, Extra Issue, 157-172] presented a two-dimensional water flow and solute transport finite element model which reproduced measured drawdowns and dilution curves of conservative species. Here we extend their model by using a reactive transport which accounts for aqueous complexation, acid-base, redox processes, dissolution-precipitation of calcite, quartz, hematite and pyrite, and cation exchange between Na+ and Ca2+. The model provides field-scale estimates of cation exchange capacity of the fracture zone and redox potential of groundwater recharge. It serves also to identify the mineral phases controlling the solubility of iron. In addition, the model is useful to test the relevance of several geochemical processes. Model results rule out calcite dissolution as the process causing the increase in bicarbonate concentration and reject the following possible sources of sulphate: (1) pyrite dissolution, (2) leaching of alkaline sulphate-rich waters from a nearby rock landfill and (3) dissolution of

  9. Climate Impacts of Large-scale Wind Farms as Parameterized in a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Fitch, Anna

    2015-04-01

    The local, regional and global climate impacts of a large-scale global deployment of wind power in regionally high densities over land is investigated for a 60 year period. Wind farms are represented as elevated momentum sinks, as well as enhanced turbulence to represent turbine blade mixing in a global climate model, the Community Atmosphere Model version 5 (CAM5). For a total installed capacity of 2.5 TW, to provide 16% of the world's projected electricity demand in 2050, minimal impacts are found, both regionally and globally, on temperature, sensible and latent heat fluxes, cloud and precipitation. A mean near-surface warming of 0.12+/-0.07 K is seen within the wind farms. Impacts on wind speed and turbulence are more pronounced, but largely confined to within the wind farm areas. Increasing the wind farm areas to provide an installed capacity of 10 TW, or 65% of the 2050 electricity demand, causes further impacts, however, they remain slight overall. Maximum temperature changes are less than 0.5 K in the wind farm areas. Impacts, both within the wind farms and beyond, become more pronounced with a doubling in turbine density, to provide 20 TW of installed capacity, or 130% of the 2050 electricity demand. However, maximum temperature changes remain less than 0.7 K. Representing wind farms instead as an increase in surface roughness generally produces similar mean results, however, maximum changes increase and influences on wind and turbulence are exaggerated. Overall, wind farm impacts are much weaker than those expected from greenhouse gas emissions, with global mean climate impacts very slight.

  10. A Photohadronic Model of the Large-scale Jet of PKS 0637-752

    NASA Astrophysics Data System (ADS)

    Kusunose, Masaaki; Takahara, Fumio

    2017-01-01

    Strong X-ray emission from large scale jets of radio loud quasars still remains an open problem. Models based on inverse Compton scattering off cosmic microwave background photons by relativistically beamed jets have recently been ruled out, since Fermi LAT observations for 3C 273 and PKS 0637-752 give the upper limit far below the model prediction. Synchrotron emission from a separate electron population with multi-hundred TeV energies remains a possibility although its origin is not well known. We examine a photo-hadronic origin of such high energy electrons/positrons, assuming that protons are accelerated up to 10^19 eV and produce electrons/positrons through a Bethe-Heitler process and photo-pion production. These secondary electrons/positrons are injected at sufficiently high energies and produce X-rays and γ-rays by synchrotron radiation without conflicting with the Fermi LAT upper limits. We find that the resultant spectrum well reproduces the X-ray observations from PKS 0637-752, if the proton power is at least 10^49 erg s^-1, which is highly super-Eddington. It is noted that the X-ray emission originates primarily from leptons through a Bethe-Heitler process, while leptons from photo-pion origin lose energy directly through synchrotron emission of multi-TeV photons rather than cascading. To avoid the overproduction of the optical flux, optical emission is primarily due to synchrotron emission of secondary leptons rather than primary electrons, or a mild degree of beaming of the jet is needed if it is owing to the primary electrons. Proton synchrotron luminosity is a few orders of magnitude smaller.

  11. Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.

    1997-01-01

    The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from

  12. Metabolic Flux Elucidation for Large-Scale Models Using 13C Labeled Isotopes

    PubMed Central

    Suthers, Patrick F.; Burgard, Anthony P.; Dasika, Madhukar S.; Nowroozi, Farnaz; Van Dien, Stephen; Keasling, Jay D.; Maranas, Costas D.

    2007-01-01

    A key consideration in metabolic engineering is the determination of fluxes of the metabolites within the cell. This determination provides an unambiguous description of metabolism before and/or after engineering interventions. Here, we present a computational framework that combines a constraint-based modeling framework with isotopic label tracing on a large-scale. When cells are fed a growth substrate with certain carbon positions labeled with 13C, the distribution of this label in the intracellular metabolites can be calculated based on the known biochemistry of the participating pathways. Most labeling studies focus on skeletal representations of central metabolism and ignore many flux routes that could contribute to the observed isotopic labeling patterns. In contrast, our approach investigates the importance of carrying out isotopic labeling studies using a more comprehensive reaction network consisting of 350 fluxes and 184 metabolites in Escherichia coli including global metabolite balances on cofactors such as ATP, NADH, and NADPH. The proposed procedure is demonstrated on an E. coli strain engineered to produce amorphadiene, a precursor to the anti-malarial drug artemisinin. The cells were grown in continuous culture on glucose containing 20% [U-13C]glucose; the measurements are made using GC-MS performed on 13 amino acids extracted from the cells. We identify flux distributions for which the calculated labeling patterns agree well with the measurements alluding to the accuracy of the network reconstruction. Furthermore, we explore the robustness of the flux calculations to variability in the experimental MS measurements, as well as highlight the key experimental measurements necessary for flux determination. Finally, we discuss the effect of reducing the model, as well as shed light onto the customization of the developed computational framework to other systems. PMID:17632026

  13. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    PubMed Central

    Jarosch, Robert

    2008-01-01

    This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force-deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i. e. latency relaxation, Fenn-effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc. are explained and interpreted with the help of the model experiments. PMID:19330099

  14. Poyang Lake basin: a successful, large-scale integrated basin management model for developing countries.

    PubMed

    Chen, Meiqiu; Wei, Xiaohua; Huang, Hongsheng; Lü, Tiangui

    2011-01-01

    Protection of water environment while developing socio-economy is a challenging task for lake regions of many developing countries. Poyang Lake is the largest fresh water lake in China, with its total drainage area of 160,000 km2. In spite of rapid development of socio-economy in Poyang Lake region in the past several decades, water in Poyang Lake is of good quality and is known as the "last pot of clear water" of the Yangtze River Basin in China. In this paper, the reasons of "last pot of clear water" of Poyang Lake were analysed to demonstrate how economic development and environmental protection can be coordinated. There are three main reasons for contributing to this coordinated development: 1) the unique geomorphologic features of Poyang Lake and the short water residence time; 2) the matching of the basin physical boundary with the administrative boundary; and 3) the implementation of "Mountain-River-Lake Program" (MRL), with the ecosystem concept of "mountain as source, river as connection flow, and lake as storage". In addition, a series of actions have been taken to coordinate development, utilisation, management and protection in the Poyang Lake basin. Our key experiences are: considering all basin components when focusing on lake environment protection is a guiding principle; raising the living standard of people through implementation of various eco-economic projects or models in the basin is the most important strategy; preventing soil and water erosion is critical for protecting water sources; and establishing an effective governance mechanism for basin management is essential. This successful, large-scale basin management model can be extended to any basin or lake regions of developing countries where both environmental protection and economic development are needed and coordinated.

  16. Model and controller reduction of large-scale structures based on projection methods

    NASA Astrophysics Data System (ADS)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures, based on models obtained by finite element techniques, yield large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that
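
    Balanced truncation, one of the classical SVD-based projection methods named above, reduces a stable state-space model by keeping the states with the largest Hankel singular values. The sketch below is a minimal, hedged implementation on a small random stable system; the system is a toy placeholder, not a structural finite element model, and the small diagonal jitter is added only for numerical safety of the Cholesky factorizations.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

        rng = np.random.default_rng(3)
        n, m, p, r = 8, 2, 2, 3                  # full order, inputs, outputs, reduced order
        A = rng.standard_normal((n, n))
        A -= (np.max(np.linalg.eigvals(A).real) + 1.0) * np.eye(n)   # shift to make A stable
        B = rng.standard_normal((n, m))
        C = rng.standard_normal((p, n))

        # Controllability/observability Gramians: A Wc + Wc A^T + B B^T = 0, etc.
        Wc = solve_continuous_lyapunov(A, -B @ B.T)
        Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

        # Square-root balancing: SVD of Lo^T Lc yields the Hankel singular values.
        Lc = cholesky(Wc + 1e-12 * np.eye(n), lower=True)
        Lo = cholesky(Wo + 1e-12 * np.eye(n), lower=True)
        U, hsv, Vt = svd(Lo.T @ Lc)

        T = Lc @ Vt.T[:, :r] / np.sqrt(hsv[:r])            # projection onto dominant states
        Tinv = (U[:, :r] / np.sqrt(hsv[:r])).T @ Lo.T

        Ar, Br, Cr = Tinv @ A @ T, Tinv @ B, C @ T         # reduced-order model
        print("Hankel singular values:", np.round(hsv, 4))
        print("reduced order %d; H-infinity error bound <= %.3e" % (r, 2.0 * hsv[r:].sum()))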

  17. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and sensible
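
    The aggregated error metric described above is a root-mean-square difference between a simulated field and a reference data set. The sketch below computes such a metric on synthetic placeholder fields; the cosine-latitude area weighting and the land-mask restriction are assumptions made for illustration, not necessarily the exact aggregation used in the subproject.

        import numpy as np

        rng = np.random.default_rng(4)
        nlat, nlon = 36, 72
        lat = np.linspace(-87.5, 87.5, nlat)
        area_w = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))   # grid-cell area proxy
        land = rng.random((nlat, nlon)) < 0.3                               # synthetic land mask

        reference = 15.0 + 10.0 * np.cos(np.deg2rad(lat))[:, None] + rng.normal(0, 1, (nlat, nlon))
        simulated = reference + rng.normal(0.5, 2.0, (nlat, nlon))          # biased, noisier "model"

        w = area_w * land
        rmse = np.sqrt(np.sum(w * (simulated - reference) ** 2) / np.sum(w))
        print("area-weighted land RMSE: %.2f K" % rmse)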

  18. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on 74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and
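
    A simple stand-in for the kind of count model described above is a Poisson regression of collision numbers on environmental covariates, with road length as the exposure. The sketch below fits such a model to synthetic placeholder data; the covariates, coefficients and the use of a plain GLM (rather than the study's more flexible, spatially structured approach) are assumptions for illustration only.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        n = 400                                            # synthetic "municipalities"
        road_km = rng.uniform(20, 200, n)                  # exposure
        browsing = rng.uniform(0, 1, n)                    # browsing intensity index
        forest = rng.uniform(0, 1, n)                      # forest cover fraction

        true_rate = np.exp(-3.0 + 1.2 * browsing + 0.5 * forest)     # collisions per km
        collisions = rng.poisson(true_rate * road_km)

        X = sm.add_constant(np.column_stack([browsing, forest]))
        fit = sm.GLM(collisions, X, family=sm.families.Poisson(),
                     offset=np.log(road_km)).fit()
        print(fit.params)        # recovers the assumed positive browsing effect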

  19. Using Multiple Soil Carbon Maps Facilitates Better Comparisons with Large Scale Modeled Outputs

    NASA Astrophysics Data System (ADS)

    Johnson, K. D.; D'Amore, D. V.; Pastick, N. J.; Genet, H.; Mishra, U.; Wylie, B. K.; Bliss, N. B.

    2015-12-01

    The choice of method applied for mapping the soil carbon is an important source of uncertainty when comparing observed soil carbon stocks to modeled outputs. Large-scale soil mapping often relies on non-random and opportunistically collected soils data to make predictions over remote areas where few observations are available for independent validation. Addressing model choice and non-random sampling is problematic when models use the data for the calibration and validation of historical outputs. One potential way to address this uncertainty is to compare the modeled outputs to a range of soil carbon observations from different soil carbon maps that are more likely to capture the true soil carbon value than one map alone. The current analysis demonstrates this approach in Alaska, which, despite suffering from a non-random sample, still has one of the richest datasets among the northern circumpolar regions. The outputs from 11 ESMs (from the 5th Climate Model Intercomparison Project) and the Dynamic Organic Soil version of the Terrestrial Ecosystem Model (DOS-TEM) were compared to 4 different soil carbon maps. In the most detailed comparison, DOS-TEM simulated total profile soil carbon stocks that were within the range of the 4 maps for 18 of 23 Alaskan ecosystems, whereas the results fell within the 95% confidence interval of only 8 when compared to just one commonly used soil carbon map (NCSCDv2). At the ecoregion level, the range of soil carbon map estimates overlapped the range of ESM outputs in every ecoregion, although the mean value of the soil carbon maps was between 17% (Southern Interior) and 63% (Arctic) higher than the mean of the ESM outputs. For the whole state of Alaska, the DOS-TEM output and 3 of the 11 ESM outputs fell within the range of the 4 soil carbon map estimates. However, when compared to only one map and its 95% confidence interval (NCSCDv2), the DOS-TEM result fell outside the interval and only two ESMs fell within the observed interval

  20. Using stochastically-generated subcolumns to represent cloud structure in a large-scale model

    SciTech Connect

    Pincus, R; Hemler, R; Klein, S A

    2005-12-08

    A new method for representing subgrid-scale cloud structure, in which each model column is decomposed into a set of subcolumns, has been introduced into the Geophysical Fluid Dynamics Laboratory's global climate model AM2. Each subcolumn in the decomposition is homogeneous but the ensemble reproduces the initial profiles of cloud properties including cloud fraction, internal variability (if any) in cloud condensate, and arbitrary overlap assumptions that describe vertical correlations. These subcolumns are used in radiation and diagnostic calculations, and have allowed the introduction of more realistic overlap assumptions. This paper describes the impact of these new methods for representing cloud structure in instantaneous calculations and long-term integrations. Shortwave radiation computed using subcolumns and the random overlap assumption differs in the global annual average by more than 4 W/m^2 from the operational radiation scheme in instantaneous calculations; much of this difference is counteracted by a change in the overlap assumption to one in which overlap varies continuously with the separation distance between layers. Internal variability in cloud condensate, diagnosed from the mean condensate amount and cloud fraction, has about the same effect on radiative fluxes as does the ad hoc tuning accounting for this effect in the operational radiation scheme. Long simulations with the new model configuration show little difference from the operational model configuration, while statistical tests indicate that the model does not respond systematically to the sampling noise introduced by the approximate radiative transfer techniques introduced to work with the subcolumns.
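
    A stochastic subcolumn generator of the kind described above turns a layer cloud-fraction profile into many binary cloudy/clear subcolumns under a chosen vertical overlap rule. The sketch below is a hedged illustration contrasting random and maximum overlap; the cloud-fraction profile and subcolumn count are placeholder values, and the code is not the AM2 generator itself (which also supports overlap varying with layer separation and condensate variability).

        import numpy as np

        rng = np.random.default_rng(5)
        cloud_fraction = np.array([0.1, 0.3, 0.5, 0.3, 0.1])   # per-layer cloud fraction (illustrative)
        n_sub = 10000

        def generate_subcolumns(cf, n_sub, overlap="random"):
            nlev = len(cf)
            if overlap == "maximum":
                # One random number per subcolumn aligns cloud across all layers.
                u = np.tile(rng.random(n_sub), (nlev, 1))
            else:
                # Independent random numbers per layer give random overlap.
                u = rng.random((nlev, n_sub))
            return u < cf[:, None]                              # True = cloudy cell

        for overlap in ("random", "maximum"):
            sub = generate_subcolumns(cloud_fraction, n_sub, overlap)
            # The ensemble reproduces the layer cloud fractions in either case.
            assert np.allclose(sub.mean(axis=1), cloud_fraction, atol=0.02)
            total_cover = np.mean(sub.any(axis=0))              # projected total cloud cover
            print("%-7s overlap: total cloud cover %.2f" % (overlap, total_cover))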

  1. Analytical modeling of the statistical properties of the contrast of large-scale irregularities of the ionosphere

    NASA Astrophysics Data System (ADS)

    Vsekhsviatskaia, I. S.; Evstratova, E. A.; Kalinin, Iu. K.; Romanchuk, A. A.

    1989-08-01

    An analytical model is proposed for the distribution of variations of the relative contrast of the electron density of large-scale ionospheric irregularities. The model is characterized by nonzero asymmetry and excess. It is shown that the model can be applied to horizontal irregularity scales from hundreds to thousands of kilometers.

  2. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Treesearch

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  3. Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru

    2015-05-01

    Recently, isospin symmetry breaking in the mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). In the course of these investigations, we encountered a subtle numerical problem in large-scale shell-model calculations for odd-odd N = Z nuclei. Here we focus on how to solve this problem with the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.

  4. Large Scale Numerical Modelling to Study the Dispersion of Persistent Toxic Substances Over Europe

    NASA Astrophysics Data System (ADS)

    Aulinger, A.; Petersen, G.

    2003-12-01

    For the past two decades environmental research at the GKSS Research Centre has been concerned with airborne pollutants with adverse effects on human health. The research was mainly focused on investigating the dispersion and deposition of heavy metals like lead and mercury over Europe by means of numerical modelling frameworks. Lead, in particular, served as a model substance to study the relationship between emissions and human exposure. The major source of airborne lead in Germany was fuel combustion until the 1980s, when its use as a gasoline additive declined due to political decisions. Since then, the concentration of lead in ambient air and the deposition rates have decreased in the same way as the consumption of leaded fuel. These observations could further be related to the decrease of lead concentrations in human blood measured during medical studies in several German cities. Based on the experience with models for heavy metal transport and deposition, we have now started to turn our research focus to organic substances, e.g. PAHs. PAHs have been recognized as significant airborne carcinogens for several decades. However, it is not yet possible to precisely quantify the risk of human exposure to those compounds. Physical and chemical data known from the literature, describing the partitioning of the compounds between particle and gas phase and their degradation in the gas phase, are implemented in a tropospheric chemistry module. In this way, the fate of PAHs in the atmosphere due to different particle types and sizes and different meteorological conditions is tested before carrying out large-scale and long-time studies. First model runs have been carried out for benzo(a)pyrene as one of the principal carcinogenic PAHs. Up to now, nearly nothing is known about degradation reactions of particle-bound BaP; thus, they could not be taken into account in the model so far. On the other hand, the proportion of BaP in the gas phase has to be considered at higher ambient

  5. Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…

  6. Making Appropriate & Ethical Choices in Large-Scale Assessments: A Model Policy Code.

    ERIC Educational Resources Information Center

    Bell, Gregory

    This set of policy statements is intended to provide guidance to those who evaluate and select assessments, prepare students for those assessments, administer and score the tests, and interpret and use assessment results to make decisions about students and schools. The focus is on large-scale assessments that have consequences for students and…

  7. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…

  8. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE PAGES

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
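
    For orientation, the core of the weak-temperature-gradient idea can be written in a few lines: the large-scale vertical velocity is diagnosed so that vertical advection of a reference potential-temperature profile removes the column's temperature anomaly over a prescribed relaxation time. The sketch below is a generic WTG diagnostic, not the specific WTG or WPG implementations tested in this work; the relaxation time scale and the idealized profiles are assumed values.

    import numpy as np

    def wtg_vertical_velocity(theta, theta_ref, z, tau=3.0 * 3600.0):
        """Diagnose a WTG large-scale vertical velocity for a model column.

        theta, theta_ref : column and reference potential temperature [K] on heights z [m]
        tau              : relaxation time scale [s] (assumed value)
        Returns w [m/s] such that w * d(theta_ref)/dz = (theta - theta_ref) / tau."""
        dtheta_dz = np.gradient(theta_ref, z)
        dtheta_dz = np.maximum(dtheta_dz, 1e-4)     # guard against neutral layers
        return (theta - theta_ref) / (tau * dtheta_dz)

    # Toy column: a 1 K warm anomaly in the mid-troposphere drives WTG ascent
    z = np.linspace(0.0, 15.0e3, 31)
    theta_ref = 300.0 + 4.0e-3 * z                  # idealized stable profile
    theta = theta_ref + np.exp(-((z - 6.0e3) / 2.0e3) ** 2)
    w = wtg_vertical_velocity(theta, theta_ref, z)
    print(f"peak WTG ascent: {w.max():.3f} m/s")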

  11. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    NASA Astrophysics Data System (ADS)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing an extreme precipitation event can be difficult to disentangle. Here we examine the large-scale forcings and the convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the column quasi-geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  12. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with the Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  13. Using remote sensing for validation of a large scale hydrologic and hydrodynamic model in the Amazon

    NASA Astrophysics Data System (ADS)

    Paiva, R. C.; Bonnet, M.; Buarque, D. C.; Collischonn, W.; Frappart, F.; Mendes, C. B.

    2011-12-01

    We present the validation of the large-scale, catchment-based hydrological MGB-IPH model in the Amazon River basin. In this model, physically-based equations are used to simulate the hydrological processes, such as the Penman-Monteith method to estimate evapotranspiration, or the Moore and Clarke infiltration model. A new feature recently introduced in the model is a 1D hydrodynamic module for river routing. It uses the full Saint-Venant equations and a simple floodplain storage model. River and floodplain geometry parameters are extracted from the SRTM DEM using specially developed GIS algorithms that provide catchment discretization, estimation of river cross-section geometry and water storage volume variations in the floodplains. The model was forced using satellite-derived daily rainfall TRMM 3B42, calibrated against discharge data and first validated using daily discharges and water levels from 111 and 69 stream gauges, respectively. Then, we performed a validation against remote sensing derived hydrological products, including (i) monthly Terrestrial Water Storage (TWS) anomalies derived from GRACE, (ii) river water levels derived from ENVISAT satellite altimetry data (212 virtual stations from Santos da Silva et al., 2010) and (iii) a multi-satellite monthly global inundation extent dataset at ~25 x 25 km spatial resolution (Papa et al., 2010). Validation against river discharges shows good performance of the MGB-IPH model. For 70% of the stream gauges, the Nash-Sutcliffe efficiency index (ENS) is higher than 0.6 and at Óbidos, close to the Amazon River outlet, ENS equals 0.9 and the model bias equals -4.6%. The largest errors are located in drainage areas outside Brazil, and we speculate that this is due to the poor quality of rainfall datasets in these poorly monitored and/or mountainous areas. Validation against water levels shows that the model performs well in the major tributaries. For 60% of virtual stations, ENS is higher than 0.6. But, similarly, largest
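
    The skill measures quoted above (ENS and bias) are standard and straightforward to reproduce; the sketch below shows how they are typically computed from paired observed and simulated discharge series. The discharge values in the example are hypothetical placeholders, not data from the MGB-IPH validation.

    import numpy as np

    def nash_sutcliffe(obs, sim):
        """Nash-Sutcliffe efficiency (ENS): 1 is perfect, <= 0 means the model is
        no better than the mean of the observations."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def percent_bias(obs, sim):
        """Relative volume error of simulated versus observed discharge, in percent."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

    # Hypothetical daily discharges at one gauge (m3/s); values are illustrative
    obs = np.array([1200.0, 1500.0, 1800.0, 2100.0, 1900.0, 1600.0])
    sim = np.array([1100.0, 1450.0, 1900.0, 2000.0, 1850.0, 1500.0])
    print(f"ENS  = {nash_sutcliffe(obs, sim):.2f}")
    print(f"bias = {percent_bias(obs, sim):.1f}%")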

  14. Development of a Wireless Model Incorporating Large-Scale Fading in a Rural, Urban and Suburban Environment

    DTIC Science & Technology

    2006-03-01

    in a rural area is obtained using equation (2.12): L50(rural) [dB] = L50(urban) − 4.78 (log fc)² + 18.33 log fc − 40.94 (2.12), where ... DEVELOPMENT OF A WIRELESS MODEL INCORPORATING LARGE-SCALE FADING IN A RURAL, URBAN AND SUBURBAN... the U.S. Government. AFIT/GE/ENG/06-25 DEVELOPMENT OF A WIRELESS MODEL INCORPORATING LARGE-SCALE FADING IN A RURAL, URBAN AND SUBURBAN
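
    Equation (2.12) is the open-area (rural) correction of the Okumura-Hata empirical path-loss model. Assuming the record refers to the standard Hata formulation, a minimal sketch of the urban median loss and its rural correction is given below; the frequency, antenna heights and distance in the usage example are illustrative values only.

    import math

    def hata_urban_loss(f_mhz, h_b, h_m, d_km):
        """Okumura-Hata median path loss L50(urban) [dB] for a small/medium city.

        f_mhz : carrier frequency [MHz]; h_b, h_m : base/mobile antenna heights [m];
        d_km  : link distance [km]."""
        a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_m - (1.56 * math.log10(f_mhz) - 0.8)
        return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(h_b)
                - a_hm + (44.9 - 6.55 * math.log10(h_b)) * math.log10(d_km))

    def hata_rural_loss(f_mhz, h_b, h_m, d_km):
        """Rural (open-area) correction corresponding to equation (2.12)."""
        lf = math.log10(f_mhz)
        return hata_urban_loss(f_mhz, h_b, h_m, d_km) - 4.78 * lf ** 2 + 18.33 * lf - 40.94

    print(f"L50(rural) = {hata_rural_loss(900.0, 30.0, 1.5, 5.0):.1f} dB")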

  15. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    USDA-ARS?s Scientific Manuscript database

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  16. How do parcellation size and short-range connectivity affect dynamics in large-scale brain network models?

    PubMed

    Proix, Timothée; Spiegler, Andreas; Schirner, Michael; Rothmeier, Simon; Ritter, Petra; Jirsa, Viktor K

    2016-11-15

    Recent efforts to model human brain activity on the scale of the whole brain rest on connectivity estimates of large-scale networks derived from diffusion magnetic resonance imaging (dMRI). This type of connectivity describes white matter fiber tracts. The number of short-range cortico-cortical white-matter connections is, however, underrepresented in such large-scale brain models. It is still unclear, on the one hand, which scale of representation of white matter fibers is optimal to describe brain activity on a large scale, such as that recorded with magneto- or electroencephalography (M/EEG) or functional magnetic resonance imaging (fMRI), and, on the other hand, to what extent short-range connections, which are typically local, should be taken into account. In this article we quantified the effect of connectivity upon large-scale brain network dynamics by (i) systematically varying the number of brain regions before computing the connectivity matrix, and by (ii) adding generic short-range connections. We used dMRI data from the Human Connectome Project. We developed a suite of preprocessing modules called SCRIPTS to prepare these imaging data for The Virtual Brain, a neuroinformatics platform for large-scale brain modeling and simulations. We performed simulations under different connectivity conditions and quantified the spatiotemporal dynamics in terms of Shannon entropy, dwell time and principal component analysis. For the reconstructed connectivity, our results show that the major white matter fiber bundles play an important role in shaping slow dynamics in large-scale brain networks (e.g. in fMRI). Faster dynamics such as gamma oscillations (around 40 Hz) are sensitive to the short-range connectivity if transmission delays are considered.
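
    Two of the summary measures mentioned above, Shannon entropy and dwell time, can be computed directly from a sequence of discrete network-state labels. The sketch below shows one straightforward way to do so; the state sequence is randomly generated for illustration and does not represent output of The Virtual Brain.

    import numpy as np

    def shannon_entropy(labels):
        """Shannon entropy (bits) of the distribution of discrete network states."""
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def mean_dwell_time(labels, dt):
        """Mean time spent in a state before switching, given one label per sample."""
        switches = np.flatnonzero(np.diff(labels) != 0)
        runs = np.diff(np.concatenate(([0], switches + 1, [len(labels)])))
        return runs.mean() * dt

    # Hypothetical state sequence from a simulation (one label per 10 ms frame)
    labels = np.random.default_rng(0).integers(0, 5, size=2000)
    print(f"entropy    = {shannon_entropy(labels):.2f} bits")
    print(f"dwell time = {mean_dwell_time(labels, dt=0.01):.3f} s")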

  17. Modeling the MJO rain rates using parameterized large scale dynamics: vertical structure, radiation, and horizontal advection of dry air

    NASA Astrophysics Data System (ADS)

    Wang, S.; Sobel, A. H.; Nie, J.

    2015-12-01

    Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics are employed: the conventional weak temperature gradient (WTG) approximation, a vertical-mode-based spectral WTG (SWTG), and damped gravity wave coupling (DGW). The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother with a peak in midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of large-scale vertical motion during the MJO active phases, while experiments in which the effects of clouds on radiation are disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments in which interactive radiation is used produce poorer agreement with observation than those with imposed time-varying radiative heating. Our results highlight the

  18. When and where does preferential flow matter - from observation to large scale modelling

    NASA Astrophysics Data System (ADS)

    Weiler, Markus; Leistert, Hannes; Steinbrich, Andreas

    2017-04-01

    strong effect of the initial conditions due to the development of soil cracks. Not too surprisingly, the relevance of preferential flow was much lower when considering the whole range of precipitation events than when considering only events with a high rainfall intensity. The influences on infiltration and recharge also differed. Although the model can still be improved, in particular by considering more realistic information about the spatial and temporal variability of preferential flow caused by soil fauna and plants, it already shows in which situations we need to be very careful when predicting infiltration and recharge with models that consider only longer time steps (daily) or only matrix flow.

  19. LARGE-SCALE CYCLOGENESIS, FRONTAL WAVES AND DUST ON MARS: MODELING AND DIAGNOSTIC CONSIDERATIONS

    NASA Astrophysics Data System (ADS)

    Hollingsworth, J.; Kahre, M.

    2009-12-01

    During late autumn through early spring, Mars’ northern middle and high latitudes exhibit very strong equator-to-pole mean temperature contrasts (i.e., baroclinicity). From data collected during the Viking era and recent observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) missions, this strong baroclinicity supports vigorous large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These systems also have accompanying sub-synoptic scale ramifications on the atmospheric environment through cyclonic/anticyclonic winds, intense deformations and contractions/dilations in temperatures, and sharp perturbations amongst atmospheric tracers (e.g., dust and volatiles/condensates). Mars’ northern-hemisphere frontal waves can exhibit extended meridional structure, and appear to be active agents in the planet’s dust cycle. Their parenting cyclones tend to develop, travel eastward, and decay preferentially within certain geographic regions (i.e., storm zones). We adapt a version of the NASA Ames Mars general circulation model (GCM) at high horizontal resolution that includes the lifting, transport and sedimentation of radiatively-active dust to investigate the nature of cyclogenesis and frontal-wave circulations (both horizontally and vertically), and regional dust transport and concentration within the atmosphere. Near late winter and early spring (Ls ˜ 320-350°), high-resolution simulations indicate that the predominant dust lifting occurs through wind-stress lifting, in particular over the Tharsis highlands of the western hemisphere and to a lesser extent over the Arabia highlands of the eastern hemisphere. The former region also indicates considerable interaction with regards to upslope/downslope (i.e., nocturnal) flows and the synoptic/subsynoptic-scale circulations associated with cyclogenesis whereby dust can be readily “focused” within a frontal-wave disturbance and carried downstream both

  20. Development of Large-Scale Forcing Data for GoAmazon2014/5 Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Tang, S.; Xie, S.; Zhang, Y.; Schumacher, C.; Upton, H. M.; Ahlgrimm, M.; Feng, Z.

    2015-12-01

    The Observations and Modeling of the Green Ocean 2014-2015 (GoAmazon2014/5) field campaign is an international collaborative experiment conducted near Manaus, Brazil, from January 2014 through December 2015. This experiment is designed to enable the study of aerosols, tropical clouds, convection and their interactions. To support modeling studies of these processes with data collected from the GoAmazon2014/5 campaign, we have developed large-scale forcing data (e.g., vertical velocities and advective tendencies) for the second intensive operational period (IOP) of GoAmazon2014/5, from 1 September to 10 October 2014. The method used in this study is the constrained variational analysis method, in which the large-scale state fields are constrained by the surface and top-of-atmosphere observations (e.g. surface precipitation and outgoing longwave radiation) to conserve column-integrated mass, moisture and dry static energy. To address potential uncertainties in the derived forcing data due to uncertainties in surface precipitation, two sets of large-scale forcing data are developed based on the ECMWF analysis constrained respectively by two precipitation products, from the SIPAM radar and from TRMM 3B42. Our initial analysis shows large differences in these two precipitation products, which cause considerable differences in the derived large-scale forcing data. The sensitivity of the large-scale forcing data to other surface constraints, such as surface latent and sensible heat fluxes, will also be explored. The characteristics of the large-scale forcing structures for selected cases will be discussed.

  1. Ultrafine particle transport and deposition in a large scale 17-generation lung model.

    PubMed

    Islam, Mohammad S; Saha, Suvash C; Sauret, Emilie; Gemci, Tevfik; Yang, Ian A; Gu, Y T

    2017-09-05

    To understand how to optimally assess the risks of inhaled particles to respiratory health, it is necessary to comprehend the uptake of ultrafine particulate matter by inhalation during the complex transport process through a non-dichotomously bifurcating network of conduit airways. It is evident that the highly toxic ultrafine particles damage the respiratory epithelium in the terminal bronchioles. The wide range of available in silico studies and the limited realistic models of the extrathoracic region of the lung have improved understanding of ultrafine particle transport and deposition (TD) in the upper airways. However, comprehensive ultrafine particle TD data for the real and entire lung model are still unavailable in the literature. Therefore, this study aims to provide an understanding of ultrafine particle TD in the terminal bronchioles for the development of future therapeutics. The Euler-Lagrange (E-L) approach and the ANSYS Fluent (17.2) solver were used to investigate ultrafine particle TD. The physical conditions of sleeping, resting, and light activity were considered in this modelling study. A comprehensive pressure-drop along five selected path lines in different lobes was calculated. The non-linear behaviour of pressure-drops is observed, which could aid the health risk assessment system for patients with respiratory diseases. Numerical results also showed that ultrafine particle-deposition efficiency (DE) in different lobes is different for various physical activities. Moreover, the numerical results showed hot spots in various locations among the different lobes for different flow rates, which could be helpful for targeted therapeutic aerosol transport to terminal bronchioles and the alveolar region. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Path2Models: large-scale generation of computational models from biochemical pathway maps

    PubMed Central

    2013-01-01

    Background Systems biology projects and omics technologies have led to a growing number of biochemical pathway models and reconstructions. However, the majority of these models are still created de novo, based on literature mining and the manual processing of pathway data. Results To increase the efficiency of model creation, the Path2Models project has automatically generated mathematical models from pathway representations using a suite of freely available software. Data sources include KEGG, BioCarta, MetaCyc and SABIO-RK. Depending on the source data, three types of models are provided: kinetic, logical and constraint-based. Models from over 2 600 organisms are encoded consistently in SBML, and are made freely available through BioModels Database at http://www.ebi.ac.uk/biomodels-main/path2models. Each model contains the list of participants, their interactions, the relevant mathematical constructs, and initial parameter values. Most models are also available as easy-to-understand graphical SBGN maps. Conclusions To date, the project has resulted in more than 140 000 freely available models. Such a resource can tremendously accelerate the development of mathematical models by providing initial starting models for simulation and analysis, which can be subsequently curated and further parameterized. PMID:24180668

  3. Forest landscape models, a tool for understanding the effect of the large-scale and long-term landscape processes

    Treesearch

    Hong S. He; Robert E. Keane; Louis R. Iverson

    2008-01-01

    Forest landscape models have become important tools for understanding large-scale and long-term landscape (spatial) processes such as climate change, fire, windthrow, seed dispersal, insect outbreak, disease propagation, forest harvest, and fuel treatment, because controlled field experiments designed to study the effects of these processes are often not possible (...

  4. Lichen elemental content bioindicators for air quality in upper Midwest, USA: A model for large-scale monitoring

    Treesearch

    Susan Will-Wolf; Sarah Jovan; Michael C. Amacher

    2017-01-01

    Our development of lichen elemental bioindicators for a United States of America (USA) national monitoring program is a useful model for other large-scale programs. Concentrations of 20 elements were measured, validated, and analyzed for 203 samples of five common lichen species. Collections were made by trained non-specialists near 75 permanent plots and an expert...

  5. Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests

    Treesearch

    Susan Will-Wolf; Peter Neitlich

    2010-01-01

    Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...

  6. Large scale 3-D modeling by integration of resistivity models and borehole data through inversion

    NASA Astrophysics Data System (ADS)

    Foged, N.; Marker, P. A.; Christansen, A. V.; Bauer-Gottwein, P.; Jørgensen, F.; Høyer, A.-S.; Auken, E.

    2014-02-01

    We present an automatic method for parameterization of a 3-D model of the subsurface, integrating lithological information from boreholes with resistivity models through an inverse optimization, with the objective of further detailing geological models or providing direct input to groundwater models. The parameter of interest is the clay fraction, expressed as the relative length of clay units in a depth interval. The clay fraction is obtained from lithological logs and the clay fraction from the resistivity is obtained by establishing a simple petrophysical relationship, a translator function, between resistivity and the clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we get a 3-D clay fraction model, which holds information from the resistivity dataset and the borehole dataset in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the concept to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey in the parameterization of the 3-D model covering 156 km2. The final five-cluster 3-D model differentiates between clay materials and different high-resistivity materials from information held in the resistivity model and borehole observations, respectively.

  7. Large-scale 3-D modeling by integration of resistivity models and borehole data through inversion

    NASA Astrophysics Data System (ADS)

    Foged, N.; Marker, P. A.; Christansen, A. V.; Bauer-Gottwein, P.; Jørgensen, F.; Høyer, A.-S.; Auken, E.

    2014-11-01

    We present an automatic method for parameterization of a 3-D model of the subsurface, integrating lithological information from boreholes with resistivity models through an inverse optimization, with the objective of further detailing of geological models, or as direct input into groundwater models. The parameter of interest is the clay fraction, expressed as the relative length of clay units in a depth interval. The clay fraction is obtained from lithological logs and the clay fraction from the resistivity is obtained by establishing a simple petrophysical relationship, a translator function, between resistivity and the clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we get a 3-D clay fraction model, which holds information from the resistivity data set and the borehole data set in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the procedure to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey in the parameterization of the 3-D model covering 156 km2. The final five-cluster 3-D model differentiates between clay materials and different high-resistivity materials from information held in the resistivity model and borehole observations, respectively.
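
    The final step described in both versions of this work is a standard k-means partition of voxel attributes carrying the resistivity-derived and borehole-derived clay-fraction information. The sketch below illustrates the idea with a plain k-means on synthetic, standardized voxel attributes; the attribute distributions are invented, and only the choice of five clusters mirrors the description above.

    import numpy as np

    def kmeans(X, k, n_iter=100, seed=0):
        """Plain k-means used here to cluster voxels into structural units."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(n_iter):
            labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels, centers

    # Hypothetical voxel attributes: log10(resistivity) and clay fraction (0-1),
    # standardized before clustering so both variables carry comparable weight.
    rng = np.random.default_rng(1)
    voxels = np.column_stack([rng.normal(1.8, 0.5, 5000),   # log10(ohm m)
                              rng.beta(2.0, 2.0, 5000)])    # clay fraction
    X = (voxels - voxels.mean(0)) / voxels.std(0)
    labels, centers = kmeans(X, k=5)
    print(np.bincount(labels))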

  8. A large scale microwave emission model for forests. Contribution to the SMOS algorithm

    NASA Astrophysics Data System (ADS)

    Rahmoune, R.; Della Vecchia, A.; Ferrazzoli, P.; Guerriero, L.; Martin-Porqueras, F.

    2009-04-01

    It is well known that surface soil moisture plays an important role in the water cycle and the global climate. SMOS is an L-band multi-angle dual-polarization microwave radiometer for global monitoring of this variable. In the areas covered by forests, the opacity is relatively high, and the knowledge of moisture remains problematic. A significant percentage of SMOS pixels at global scale is affected by fractional forest. Whereas the effect of the vegetation can be corrected thanks to a simple radiative model, in the case of dense forests the wave penetration is limited and the sensitivity to variations of soil moisture is poor. However, most of the pixels are mixed, and a reliable estimate of forest emissivity is important to retrieve the soil moisture of the areas less affected by forest cover. Moreover, there are many sparse woodlands, where the sensitivity to variations of soil moisture is still acceptable. At the scale of spaceborne radiometers, it is difficult to have a detailed knowledge of the variables which affect the overall emissivity. In order to manage these problems effectively, the electromagnetic model developed at Tor Vergata University was combined with information available from the forest literature. Using allometric equations and other information, the geometrical and dielectric inputs required by the model were related to global variables available at large scale, such as the Leaf Area Index. This procedure is necessarily approximate. In a first version of the model, forest variables were assumed to be constant in time, and were simply related to the maximum yearly value of Leaf Area Index. Moreover, a unique sparse distribution of trunk diameters was assumed. Finally, the temperature distribution within the crown canopy was assumed to be uniform. The model is being refined, in order to consider seasonal variations of foliage cover, subdivided into arboreous foliage and understory contributions. Different distributions of trunk diameter

  9. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, E. H.; van Beek, L. P. H.; de Jong, S. M.; van Geer, F. C.; Bierkens, M. F. P.

    2011-09-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e., the simulations of the two models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also, the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.

  10. Comparing large-scale computational approaches to epidemic modeling: agent based versus structured metapopulation models

    NASA Astrophysics Data System (ADS)

    Gonçalves, Bruno; Ajelli, Marco; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José; Merler, Stefano; Vespignani, Alessandro

    2010-03-01

    We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the evolution of a baseline pandemic event in Italy. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. Both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing of the order of a few days. The age breakdown analysis shows that similar attack rates are obtained for the younger age classes.

  11. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    NASA Astrophysics Data System (ADS)

    Güntner, Andreas

    2002-07-01

    the framework of an integrated model which contains modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture in particular the effect of structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, such as the reinfiltration of surface runoff, which are of particular importance in semi-arid environments, can thus also be represented within the large-scale model in a simplified form. Depending on the resolution of available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but by the distribution of sub-scale units and by statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa that respect specific features of semi-arid hydrology are: (1) A two-layer model for evapotranspiration comprises energy transfer at the soil surface (including soil evaporation), which is of importance in view of the mainly sparse vegetation cover. Additionally, vegetation parameters are differentiated in space and time depending on the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach respecting different reservoir size classes and their interaction via the river network is applied. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa. (5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied

  12. A large-scale neural network model of the influence of neuromodulatory levels on working memory and behavior

    PubMed Central

    Avery, Michael C.; Dutt, Nikil; Krichmar, Jeffrey L.

    2013-01-01

    The dorsolateral prefrontal cortex (dlPFC), which is regarded as the primary site for visuospatial working memory in the brain, is significantly modulated by dopamine (DA) and norepinephrine (NE). DA and NE originate in the ventral tegmental area (VTA) and locus coeruleus (LC), respectively, and have been shown to have an “inverted-U” dose-response profile in dlPFC, where the level of arousal and decision-making performance is a function of DA and NE concentrations. Moreover, there appears to be a sweet spot, in terms of the level of DA and NE activation, which allows for optimal working memory and behavioral performance. When either DA or NE is too high, input to the PFC is essentially blocked. When either DA or NE is too low, PFC network dynamics become noisy and activity levels diminish. Mechanisms for how this occurs have been suggested; however, they have not been tested in a large-scale model with neurobiologically plausible network dynamics. Also, DA and NE levels have not been simultaneously manipulated experimentally, which is not realistic in vivo due to strong bi-directional connections between the VTA and LC. To address these issues, we built a spiking neural network model that includes D1, α2A, and α1 receptors. The model was able to match the inverted-U profiles that have been shown experimentally for differing levels of DA and NE. Furthermore, we were able to make predictions about what working memory and behavioral deficits may occur during simultaneous manipulation of DA and NE outside of their optimal levels. Specifically, when DA levels were low and NE levels were high, cues could not be held in working memory due to increased noise. On the other hand, when DA levels were high and NE levels were low, incorrect decisions were made due to weak overall network activity. We also show that lateral inhibition in working memory may play a more important role in increasing signal-to-noise ratio than increasing recurrent excitatory input.
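
    The "inverted-U" dose-response profile referred to above is often idealized as a bell-shaped gain curve that peaks at an intermediate neuromodulator level. The sketch below is a toy Gaussian version of that profile, purely for illustration; the optimum and width parameters are assumptions and are not taken from the spiking network model.

    import numpy as np

    def inverted_u_gain(level, optimum=0.5, width=0.25):
        """Inverted-U dose-response profile: network gain peaks at an intermediate
        neuromodulator level and falls off when the level is too low or too high."""
        return np.exp(-((level - optimum) ** 2) / (2.0 * width ** 2))

    # Illustrative sweep of a normalized DA (or NE) level from depleted to excessive
    for level in np.linspace(0.0, 1.0, 5):
        print(f"modulator level {level:.2f} -> relative gain {inverted_u_gain(level):.2f}")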

  13. The Nature of Global Large-scale Sea Level Variability in Relation to Atmospheric Forcing: A Modeling Study

    NASA Technical Reports Server (NTRS)

    Fukumori, I.; Raghunath, R.; Fu, L. L.

    1996-01-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, from periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.

  14. Testing LTB void models without the cosmic microwave background or large scale structure: new constraints from galaxy ages

    SciTech Connect

    Putter, Roland de; Verde, Licia; Jimenez, Raul E-mail: liciaverde@icc.ub.edu

    2013-02-01

    We present new observational constraints on inhomogeneous models based on observables independent of the CMB and large-scale structure. Using Bayesian evidence we find very strong evidence for the homogeneous LCDM model, thus disfavouring inhomogeneous models. Our new constraints are based on quantities independent of the growth of perturbations and rely on cosmic clocks based on atomic physics and on the local density of matter.

  15. Using cloud resolving model simulations of deep convection to inform cloud parameterizations in large-scale models

    SciTech Connect

    Klein, Stephen A.; Pincus, Robert; Xu, Kuan-man

    2003-06-23

    Cloud parameterizations in large-scale models struggle to address the significant non-linear effects of radiation and precipitation that arise from horizontal inhomogeneity in cloud properties at scales smaller than the grid box size of the large-scale models. Statistical cloud schemes provide an attractive framework to self-consistently predict the horizontal inhomogeneity in radiation and microphysics because the probability distribution function (PDF) of total water contained in the scheme can be used to calculate these non-linear effects. Statistical cloud schemes were originally developed for boundary layer studies so extending them to a global model with many different environments is not straightforward. For example, deep convection creates abundant cloudiness and yet little is known about how deep convection alters the PDF of total water or how to parameterize these impacts. These issues are explored with data from a 29 day simulation by a cloud resolving model (CRM) of the July 1997 ARM Intensive Observing Period at the Southern Great Plains site. The simulation is used to answer two questions: (a) how well can the beta distribution represent the PDFs of total water relative to saturation resolved by the CRM? (b) how can the effects of convection on the PDF be parameterized? In addition to answering these questions, additional sections more fully describe the proposed statistical cloud scheme and the CRM simulation and analysis methods.
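
    The central operation of a statistical cloud scheme, diagnosing cloud fraction and mean condensate from an assumed PDF of total water, can be sketched compactly. The example below fits a beta distribution to a synthetic grid-box sample of total water and integrates the fitted PDF above saturation; the sample, the saturation value and the fitting choices are illustrative assumptions, not the CRM data analysed in this study.

    import numpy as np
    from scipy import stats

    # Hypothetical grid-box sample of total water from a CRM snapshot [kg/kg]
    rng = np.random.default_rng(0)
    qt = rng.gamma(shape=8.0, scale=2.0e-3 / 8.0, size=4096)
    q_sat = 2.4e-3                                    # assumed saturation value

    # Fit a beta PDF on a bounded support spanning the sample
    lo, hi = qt.min(), qt.max()
    x = np.clip((qt - lo) / (hi - lo), 1e-6, 1.0 - 1e-6)
    a, b, _, _ = stats.beta.fit(x, floc=0.0, fscale=1.0)
    pdf = stats.beta(a, b, loc=lo, scale=hi - lo)

    # Diagnose the sub-grid cloud fraction and mean condensate from the fitted PDF
    cloud_fraction = pdf.sf(q_sat)                    # P(q_t > q_sat)
    s = np.linspace(q_sat, hi, 2000)
    condensate = np.sum((s - q_sat) * pdf.pdf(s)) * (s[1] - s[0])
    print(f"cloud fraction = {cloud_fraction:.2f}, condensate = {condensate:.2e} kg/kg")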

  16. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    NASA Astrophysics Data System (ADS)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production model for California) and WEAP (Water Evaluation and Planning system), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. In order to do so, a modified version of SWAP, called SWEAP, was developed that has the planning area delimitations of WEAP, a maximum entropy model to estimate evenly sized steps (tranches) of water-derived demand functions, and the translation of water tranches into cropland. In addition, a modified version of WEAP, called ECONWEAP, was created with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP and ECONWEAP, as well as by assessing the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP. (Figure: percentage difference in total agricultural revenues, ECONWEAP versus WEAP.)
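
    The step-function (tranche) approximation that links the SWAP demand curves to WEAP's priority scheme can be illustrated with a simple midpoint discretization, as below. This is not the maximum-entropy procedure used in SWEAP, only a generic sketch; the linear inverse demand function and its parameters are hypothetical.

    import numpy as np

    def demand_tranches(inverse_demand, q_max, n_tranches=5):
        """Approximate a smooth water demand curve by evenly sized steps (tranches).

        inverse_demand : callable mapping delivered quantity -> marginal value
        q_max          : maximum quantity demanded at zero price
        Returns (tranche volumes, representative marginal value of each tranche)."""
        edges = np.linspace(0.0, q_max, n_tranches + 1)
        mid = 0.5 * (edges[:-1] + edges[1:])
        return np.diff(edges), inverse_demand(mid)

    # Hypothetical linear inverse demand for irrigation water in one planning area
    p = lambda q: 120.0 - 0.08 * q                 # $/acre-ft as a function of acre-ft
    volumes, values = demand_tranches(p, q_max=1500.0, n_tranches=5)
    for i, (v, w) in enumerate(zip(volumes, values), start=1):
        print(f"tranche {i}: {v:.0f} acre-ft at ~${w:.0f}/acre-ft")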

  17. Computational Models of Consumer Confidence from Large-Scale Online Attention Data: Crowd-Sourcing Econometrics

    PubMed Central

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting. PMID:25826692

  18. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    PubMed

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.
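
    A minimal version of the index construction described in these two records is to standardize the search-volume series for a basket of economy-related query terms, average them into a behavioural index, and examine its contemporaneous and leading correlation with the official confidence series. The sketch below does this on synthetic data; the number of terms, the series themselves, and the simple averaging rule are assumptions and not the C3I methodology itself.

    import numpy as np

    def zscore(x):
        return (x - x.mean()) / x.std()

    # Hypothetical monthly search volumes for a basket of economy-related query
    # terms (rows: months, columns: terms) plus an official confidence index.
    rng = np.random.default_rng(0)
    months, terms = 60, 8
    official = 100.0 + np.cumsum(rng.normal(0.0, 1.0, months))
    searches = np.column_stack(
        [zscore(official) * rng.uniform(0.3, 0.9) + rng.normal(0.0, 0.5, months)
         for _ in range(terms)])

    # Behavioural index: average of the standardized query-volume series
    behavioural = zscore(np.apply_along_axis(zscore, 0, searches).mean(axis=1))

    # Contemporaneous and one-month-lead correlations with the official index
    same = np.corrcoef(behavioural, zscore(official))[0, 1]
    lead = np.corrcoef(behavioural[:-1], zscore(official)[1:])[0, 1]
    print(f"correlation: same month {same:.2f}, leading by one month {lead:.2f}")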

  19. Sensitivities of Cumulus-Ensemble Rainfall in a Cloud-Resolving Model with Parameterized Large-Scale Dynamics.

    NASA Astrophysics Data System (ADS)

    Mapes, Brian E.

    2004-09-01

    The problem of closure in cumulus parameterization requires an understanding of the sensitivities of convective cloud systems to their large-scale setting. As a step toward such an understanding, this study probes some sensitivities of a simulated ensemble of convective clouds in a two-dimensional cloud-resolving model (CRM). The ensemble is initially in statistical equilibrium with a steady imposed background forcing (cooling and moistening). Large-scale stimuli are imposed as horizontally uniform perturbations nudged into the model fields over 10 min, and the rainfall response of the model clouds is monitored. In order to reduce a major source of artificial insensitivity in the CRM, a simple parameterization scheme is devised to account for heating-induced large-scale (i.e., domain averaged) vertical motions that would develop in nature but are forbidden by the periodic boundary conditions. The effects of this large-scale vertical motion are parameterized as advective tendency terms that are applied as a uniform forcing throughout the domain, just like the background forcing. This parameterized advection is assumed to lag rainfall (used as a proxy for heating) by a specified time scale. The time scale determines (via a gravity wave space-time conversion factor) the size of the large-scale region represented by the periodic CRM domain, which can be of arbitrary size or dimensionality. The sensitivity of rain rate to deep cooling and moistening, representing an upward displacement by a large-scale wave of first baroclinic mode structure, is positive. Near linearity is found for ±1 K perturbations, and the sensitivity is about equally divided between temperature and moisture effects. For a second baroclinic mode (vertical dipole) displacement, the sign of the perturbation in the lower troposphere dominates the convective response. In this dipole case, the initial sensitivity is very large, but quantitative results are distorted by the oversimplified large-scale

  20. Seismic Modelling of the Earth's Large-Scale Three-Dimensional Structure

    NASA Astrophysics Data System (ADS)

    Woodhouse, J. H.; Dziewonski, A. M.

    1989-07-01

    Several different kinds of seismological data, spanning more than three orders of magnitude in frequency, have been employed in the study of the Earth's large-scale three-dimensional structure. These yield different but overlapping information, which is leading to a coherent picture of the Earth's internal heterogeneity. In this article we describe several methods of seismic inversion and intercompare the resulting models. Models of upper-mantle shear velocity based upon mantle waveforms (Woodhouse & Dziewonski (J. geophys. Res. 89, 5953-5986 (1984))) (f ≲ 7 mHz) and long-period body waveforms (f ≲ 20 mHz; Woodhouse & Dziewonski (Eos, Wash. 67, 307 (1986))) show the mid-oceanic ridges to be the major low-velocity anomalies in the uppermost mantle, together with regions in the western Pacific, characterized by back-arc volcanism. High velocities are associated with the continents, and in particular with the continental shields, extending to depths in excess of 300 km. By assuming a given ratio between density and wave velocity variations, and a given mantle viscosity structure, such models have been successful in explaining some aspects of observed plate motion in terms of thermal convection in the mantle (Forte & Peltier (J. geophys. Res. 92, 3645-3679 (1987))). An important qualitative conclusion from such analysis is that the magnitude of the observed seismic anomalies is of the order expected in a convecting system having the viscosity, temperature derivatives and flow rates which characterize the mantle. Models of the lower mantle based upon P-wave arrival times (f ≈ 1 Hz; Dziewonski (J. geophys. Res. 89, 5929-5952 (1984)); Morelli & Dziewonski (Eos, Wash. 67, 311 (1986))) SH waveforms (f ≈ 20 mHz; Woodhouse & Dziewonski (1986)) and free oscillations (Giardini et al. (Nature, Lond. 325, 405-411 (1987); J. geophys. Res. 93, 13716-13742 (1988))) (f ≈ 0.5-5 mHz) show a very long wavelength pattern, largely contained in spherical harmonics of

  1. Geodynamic models of a Yellowstone plume and its interaction with subduction and large-scale mantle circulation

    NASA Astrophysics Data System (ADS)

    Steinberger, B. M.

    2012-12-01

    Yellowstone is a site of intra-plate volcanism, with many traits of a classical "hotspot" (a chain of age-progressive volcanics with active volcanism on one end, associated with a flood basalt), yet it is atypical, as it is located near an area of Cenozoic subduction zones. Tomographic images show a tilted plume conduit in the upper mantle beneath Yellowstone; a similar tilt is predicted by simple geodynamic models: in these models, a conduit that was initially vertical (at the time when the corresponding Large Igneous Province erupted, ~15 Myr ago) gets tilted while it is advected in, and buoyantly rises through, large-scale flow. Generally eastward flow in the upper mantle in these models yields a predicted eastward tilt (i.e., the conduit comes up from the west). In these models, mantle flow is derived from density anomalies, which are either inferred from seismic tomography or from subduction history. One drawback of these models is that the initial plume location is chosen "ad hoc" such that the present-day position of Yellowstone is matched. Therefore, in another set of models, we study how subducted slabs (inferred from 300 Myr of subduction history) shape a basal chemically distinct layer into thermo-chemical piles, and create plumes along its margins. Our results show the formation of a Pacific pile. As subduction approaches this pile, the models frequently show part of the pile being separated off, with a plume rising above this part. This could be an analog to the formation and dynamics of the Yellowstone plume, yet there is a mismatch in location of about 30 degrees. It is therefore a goal to devise a model that combines the advantages of both models, i.e., a fully dynamic plume model that matches the present-day position of Yellowstone. This will probably require "seeding" a plume through a thermal anomaly at the core-mantle boundary and possibly other modifications. Also, for a realistic model, the present-day density anomaly derived from subduction should

  2. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in a large-scale disaster-affected area. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
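
    A much-reduced, single-echelon version of this location-allocation problem can be written as a genetic algorithm over assignments of demand locations to depots, with a fitness that combines travel time and M/M/1 queuing delay, as sketched below. All of the problem data (coordinates, arrival and service rates, travel speed) and the GA settings are invented for illustration; the multi-echelon, vehicle-routing formulation of the paper is considerably richer.

    import numpy as np

    rng = np.random.default_rng(0)
    n_demand, n_depot = 30, 4
    demand_xy = rng.uniform(0.0, 100.0, (n_demand, 2))   # affected locations [km]
    depot_xy = rng.uniform(0.0, 100.0, (n_depot, 2))     # resource depots [km]
    arrival = rng.uniform(0.2, 1.0, n_demand)            # relief requests per hour
    service_rate = np.full(n_depot, 12.0)                # requests served per hour
    speed = 40.0                                          # travel speed [km/h]
    dist = np.linalg.norm(demand_xy[:, None, :] - depot_xy[None, :, :], axis=-1)

    def response_time(assign):
        """Mean response time: travel time plus M/M/1 waiting at each depot."""
        total = 0.0
        for j in range(n_depot):
            idx = np.flatnonzero(assign == j)
            lam = arrival[idx].sum()
            if lam >= service_rate[j]:                    # unstable queue: penalize
                return 1.0e6
            wait = lam / (service_rate[j] * (service_rate[j] - lam))
            total += np.sum(arrival[idx] * (dist[idx, j] / speed + wait))
        return total / arrival.sum()

    # Simple genetic algorithm over assignment vectors
    pop = rng.integers(0, n_depot, (60, n_demand))
    for _ in range(200):
        fit = np.array([response_time(ind) for ind in pop])
        parents = pop[np.argsort(fit)[:20]]               # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = parents[rng.integers(0, len(parents), 2)]
            cut = rng.integers(1, n_demand)
            child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
            mutate = rng.random(n_demand) < 0.05          # light mutation
            child[mutate] = rng.integers(0, n_depot, mutate.sum())
            children.append(child)
        pop = np.vstack([parents, children])

    best = min(pop, key=response_time)
    print(f"best mean response time: {response_time(best):.2f} h")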

  3. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in a large-scale disaster-affected area. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  4. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present the strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  5. The Large-Scale Debris Avalanche From The Tancitaro Volcano (Mexico): Characterization And Modeling

    NASA Astrophysics Data System (ADS)

    Morelli, S.; Gigli, G.; Falorni, G.; Garduno Monroy, V. H.; Arreygue, E.

    2008-12-01

    until they disappear entirely in the most distal reaches. The granulometric analysis and the comparison between the debris avalanche of the Tancitaro and other collapses with similar morphometric features (vertical relief during runout, travel distance, volume and area of the deposit) indicate that the collapse was most likely not primed by any type of eruption, but rather triggered by a strong seismic shock that could have induced the failure of a portion of the edifice, already deeply altered by intense hydrothermal fluid circulation. It is also possible to hypothesize that mechanical fluidization may have been the mechanism controlling the long runout of the avalanche, as has been determined for other well-known events. The behavior of the Tancitaro debris avalanche was numerically modeled using the DAN-W code. By appropriately adjusting the rheological parameters of the different models selectable within DAN, it was determined that the two-parameter 'Voellmy model' provides the best approximation of the avalanche movement. The Voellmy model produces the most realistic results in terms of runout distance, velocity and spatial distribution of the failed mass. Since the Tancitaro event was not witnessed directly, it is possible to infer approximate velocities only from comparisons with similar and documented events, namely the Mt. St. Helens debris avalanche that occurred on May 18, 1980.

  6. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  7. Multi-scale Modeling of Radiation Damage: Large Scale Data Analysis

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Bukkuru, S.

    2016-10-01

    Modification of materials in nuclear reactors due to neutron irradiation is a multiscale problem. These neutrons pass through materials creating several energetic primary knock-on atoms (PKA) which cause localized collision cascades creating damage tracks, defects (interstitials and vacancies) and defect clusters depending on the energy of the PKA. These defects diffuse and recombine throughout the whole duration of operation of the reactor, thereby changing the micro-structure of the material and its properties. It is therefore desirable to develop predictive computational tools to simulate the micro-structural changes of irradiated materials. In this paper we describe how statistical averages of the collision cascades from thousands of MD simulations are used to provide inputs to Kinetic Monte Carlo (KMC) simulations which can handle larger sizes, more defects and longer time durations. Use of unsupervised learning and graph optimization in handling and analyzing large scale MD data will be highlighted.
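
    As a rough illustration of the KMC stage described above (not the authors' code), the sketch below implements a residence-time (n-fold way) kinetic Monte Carlo step for a single vacancy hopping on a 2D lattice. The attempt frequency and migration barriers are illustrative placeholders; in the workflow above such inputs would come from statistics over many MD cascades.

      # Minimal residence-time (n-fold way) kinetic Monte Carlo sketch: a single
      # vacancy hopping between neighbouring lattice sites with Arrhenius rates.
      # Attempt frequency and barriers are illustrative, not derived from MD data.
      import math
      import random

      NU0 = 1.0e13          # attempt frequency (1/s), illustrative
      KB_T = 0.05           # k_B * T in eV, illustrative
      BARRIERS = {"+x": 0.62, "-x": 0.62, "+y": 0.65, "-y": 0.65}  # barriers (eV)
      MOVES = {"+x": (1, 0), "-x": (-1, 0), "+y": (0, 1), "-y": (0, -1)}

      def kmc(steps=10000):
          pos, t = (0, 0), 0.0
          for _ in range(steps):
              rates = {m: NU0 * math.exp(-BARRIERS[m] / KB_T) for m in MOVES}
              total = sum(rates.values())
              # Pick a move with probability proportional to its rate.
              r, acc = random.random() * total, 0.0
              for m, rate in rates.items():
                  acc += rate
                  if r <= acc:
                      dx, dy = MOVES[m]
                      pos = (pos[0] + dx, pos[1] + dy)
                      break
              t += -math.log(random.random()) / total   # advance the clock
          return pos, t

      print(kmc())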

  8. Field-aligned Currents on Board of Intercosmos Bulgaria-1300 Satellite in Comparison with Modelled Large-scale Currents

    NASA Astrophysics Data System (ADS)

    Danov, D.; Koleva, R.

    2007-08-01

    Large-scale field-aligned currents (FACs) are well characterized experimentally and described by several models, but small-scale FACs are less well investigated and their intensities and dimensions remain controversial. A possible source of the discrepancy is the assumption of an infinite, homogeneous current sheet, which allows FACs to be derived from single-satellite measurements. We present a new method for identifying finite-size current sheets, which we applied to derive FACs from magnetic field measurements aboard the INTERCOSMOS BULGARIA-1300 satellite. We then compare one FAC case, detected on 22 August 1981, with the empirical Tsyganenko (2001) model and the magnetohydrodynamic Block-Adaptive-Tree-Solar-wind-Roe-Upwind-Scheme (BATS-R-US) model of large-scale currents. We discuss the possible reasons for the observed discrepancy between the measured and modelled FACs.

  9. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which yields a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
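
    The paper's exact GBLP formulation is not reproduced in the abstract, but the flavor of a fixed-cost location ILP can be sketched with a generic facility-location model. The sketch below assumes the PuLP modeling package and uses made-up sites, demand points and costs; it is not the authors' model.

      # Minimal fixed-cost facility-location ILP sketch (not the paper's exact
      # GBLP formulation): open grid cells at a fixed cost and assign each demand
      # point to one opened cell. Requires the PuLP package.
      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

      sites = ["s1", "s2", "s3"]                # candidate grid cells (hypothetical)
      demands = ["d1", "d2"]                    # demand points (hypothetical)
      fixed_cost = {"s1": 10.0, "s2": 12.0, "s3": 8.0}
      assign_cost = {("s1", "d1"): 2.0, ("s1", "d2"): 6.0,
                     ("s2", "d1"): 4.0, ("s2", "d2"): 3.0,
                     ("s3", "d1"): 7.0, ("s3", "d2"): 2.5}

      prob = LpProblem("grid_location", LpMinimize)
      y = LpVariable.dicts("open", sites, cat=LpBinary)                # open site?
      x = LpVariable.dicts("assign", (sites, demands), cat=LpBinary)   # assignment

      # Objective: fixed opening costs plus assignment costs.
      prob += lpSum(fixed_cost[s] * y[s] for s in sites) + \
              lpSum(assign_cost[s, d] * x[s][d] for s in sites for d in demands)

      # Each demand point is served by exactly one opened site.
      for d in demands:
          prob += lpSum(x[s][d] for s in sites) == 1
          for s in sites:
              prob += x[s][d] <= y[s]

      prob.solve()
      print([s for s in sites if y[s].value() == 1])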

  10. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  11. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  12. Three-dimensional mechanical modeling of large-scale crustal deformation in China constrained by the GPS velocity field

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Ye, Zheng-Ren; He, Jian-Kun

    2008-01-01

    We present a quantitative model for the crustal movement in China with respect to the Eurasia plate by using the three-dimensional finite element code ADELI. The model consists of an elastoplastic upper lithosphere and a viscoelastic lower lithosphere. The lithosphere is supported by the hydrostatic pressure at its base. The India-Eurasia collision is modeled as a velocity boundary condition. Ten large-scale faults are introduced as Coulomb-type frictional zones in the modeling. The root mean square (RMS) differences between the observed and predicted east and north velocity components (RMS(Ue) and RMS(Un)) are used as measures to evaluate our simulations. We model the long-term crustal deformation in China by adjusting the fault friction coefficients over the range 0.01 to 0.5 and considering the effects resulting from lithospheric viscosity variation and topographic loading. Our results suggest that the friction coefficients of most of the large-scale faults are no larger than 0.1, which is consistent with other large-scale faults such as the North Anatolian fault (Provost, A.S., Chery, J., Hassani, R., 2003. Three-dimensional mechanical modeling of the GPS velocity field along the North Anatolian fault. Earth Planet. Sci. Lett. 209, 361-377) and the San Andreas fault (Mount, V.S., Suppe, J., 1987. State of stress near the San Andreas fault: implications for wrench tectonics. Geology, 15, 1143-1146). Further, we examine the effects of three factors on the long-term crustal deformation in China: the large-scale faults, the lithospheric viscosity structure and topographic loading. Results indicate that the lithospheric viscosity structure and the topographic loading have important influences on the crustal deformation in China, while the influences caused by the large-scale faults are small. Although our simulations satisfactorily reproduce the general picture of crustal movement in China, there is a poor agreement between the model and the observed GPS

  13. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing a very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium- and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
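
    saCeSS itself is not sketched here, but the underlying estimation problem it accelerates can be illustrated: fit the parameters of a small kinetic ODE model to data by minimizing a least-squares cost with a stochastic global optimizer. The sketch assumes SciPy/NumPy and a toy two-parameter reaction chain; it is not the authors' method.

      # Toy illustration of the estimation problem targeted above (not saCeSS):
      # fit parameters of a small kinetic ODE model to noisy data by minimizing
      # a least-squares cost with a stochastic global optimizer.
      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import differential_evolution

      t_obs = np.linspace(0.0, 10.0, 25)
      true_k = (0.7, 0.25)

      def model(t, y, k1, k2):
          a, b = y
          return [-k1 * a, k1 * a - k2 * b]          # A -> B -> degradation

      def simulate(k):
          sol = solve_ivp(model, (0.0, 10.0), [1.0, 0.0], t_eval=t_obs, args=tuple(k))
          return sol.y[1]                             # observe species B

      rng = np.random.default_rng(0)
      data = simulate(true_k) + rng.normal(0.0, 0.01, t_obs.size)

      def cost(k):
          return np.sum((simulate(k) - data) ** 2)    # sum-of-squares misfit

      result = differential_evolution(cost, bounds=[(0.01, 2.0), (0.01, 2.0)], seed=0)
      print(result.x)                                 # estimated (k1, k2)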

  14. Maintaining Realistic Uncertainty in Model and Forecast

    DTIC Science & Technology

    1999-09-30

    Maintaining Realistic Uncertainty in Model and Forecast. Leonard Smith, Pembroke College, Oxford University, St Aldates, Oxford OX1 3LB, England. (Report front matter only; no abstract available.)

  15. Maintaining Realistic Uncertainty in Model and Forecast

    DTIC Science & Technology

    2000-09-30

    Maintaining Realistic Uncertainty in Model and Forecast. Leonard Smith, Pembroke College, Oxford University, St. Aldates, Oxford OX1 1DW, United Kingdom. (Report front matter only; no abstract available.)

  16. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, E. H.; van Beek, L. P. H.; de Jong, S. M.; van Geer, F. C.; Bierkens, M. F. P.

    2011-03-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin that contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Although the method that we used to couple the land surface and MODFLOW groundwater models is an offline-coupling procedure (i.e. the simulations of both models were performed separately), results are promising. The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydrogeological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, we note that there are still some limitations in the current approach, specifically because the current offline-coupling technique simplifies dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.
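
    The offline (one-way) coupling described above can be illustrated schematically: the land surface model is run first over the whole period, and its recharge output is then fed as a fixed forcing to the groundwater model. In the sketch below both models are replaced by toy stand-ins (a bucket soil scheme and a linear-reservoir aquifer); no MODFLOW or PCR-GLOBWB code is used and all numbers are illustrative.

      # Schematic offline coupling: run a toy land surface scheme first, store its
      # recharge, then run a toy groundwater reservoir forced by that recharge.
      def land_surface_step(precip, evap, soil, capacity=100.0):
          soil = soil + precip - evap
          recharge = max(soil - capacity, 0.0)      # drainage below the root zone
          return min(soil, capacity), recharge

      def groundwater_step(storage, recharge, k=0.05):
          baseflow = k * storage                    # linear-reservoir outflow
          return storage + recharge - baseflow, baseflow

      precip = [12, 0, 30, 5, 0, 22, 18, 0, 0, 9]   # illustrative daily forcing (mm)
      evap = [3] * len(precip)

      # Step 1: run the land surface model over the whole period, storing recharge.
      soil, recharge_series = 80.0, []
      for p, e in zip(precip, evap):
          soil, r = land_surface_step(p, e, soil)
          recharge_series.append(r)

      # Step 2: run the groundwater model separately, forced by the stored recharge.
      gw = 500.0
      for r in recharge_series:
          gw, q = groundwater_step(gw, r)
      print(round(gw, 1))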

  17. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    PubMed Central

    Pesce, Lorenzo L.; Lee, Hyong C.; Stevens, Rick L.

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers. PMID:24416069

  18. Similarity-based modeling in large-scale prediction of drug-drug interactions

    PubMed Central

    Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P

    2015-01-01

    Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients’ quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5–7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented. PMID:25122524
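
    A toy version of the similarity-transfer idea is sketched below: a candidate pair (A, B) is scored by the highest Jaccard/Tanimoto similarity between drug A's feature set and the feature sets of drugs already known to interact with drug B. The feature sets and reference DDI list are invented for illustration and are far simpler than the 2D/3D structural, target and side-effect similarities used in the protocol.

      # Toy similarity-transfer sketch: score a candidate pair by the best
      # Tanimoto (Jaccard) similarity to a known interactor of the partner drug.
      # Feature sets and the reference DDI list below are made up.
      features = {
          "drugA": {"f1", "f2", "f5"},
          "drugB": {"f2", "f3"},
          "drugC": {"f1", "f2", "f4", "f5"},
          "drugD": {"f3", "f6"},
      }
      known_ddis = {("drugC", "drugB"), ("drugD", "drugB")}   # reference standard

      def tanimoto(a, b):
          return len(a & b) / len(a | b) if a | b else 0.0

      def ddi_score(candidate, partner):
          partners_of = [x for (x, y) in known_ddis if y == partner] + \
                        [y for (x, y) in known_ddis if x == partner]
          sims = [tanimoto(features[candidate], features[p]) for p in partners_of]
          return max(sims, default=0.0)

      print(ddi_score("drugA", "drugB"))   # high: drugA resembles drugC, a known interactor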

  19. Large-scale 3D modeling of projectile impact damage in brittle plates

    NASA Astrophysics Data System (ADS)

    Seagraves, A.; Radovitzky, R.

    2015-10-01

    The damage and failure of brittle plates subjected to projectile impact is investigated through large-scale three-dimensional simulation using the DG/CZM approach introduced by Radovitzky et al. [Comput. Methods Appl. Mech. Eng. 2011; 200(1-4), 326-344]. Two standard experimental setups are considered: first, we simulate edge-on impact experiments on Al2O3 tiles by Strassburger and Senf [Technical Report ARL-CR-214, Army Research Laboratory, 1995]. Qualitative and quantitative validation of the simulation results is pursued by direct comparison of simulations with experiments at different loading rates and good agreement is obtained. In the second example considered, we investigate the fracture patterns in normal impact of spheres on thin, unconfined ceramic plates over a wide range of loading rates. For both the edge-on and normal impact configurations, the full field description provided by the simulations is used to interpret the mechanisms underlying the crack propagation patterns and their strong dependence on loading rate.

  20. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  1. Linking electronic medical records to large-scale simulation models: can we put rapid learning on turbo?

    PubMed

    Eddy, David M

    2007-01-01

    One method for rapid learning is to use data from electronic medical records (EMRs) to help build and validate large-scale, physiology-based simulation models. These models can then be used to help answer questions that cannot be addressed directly from the EMR data. Their potential uses include analyses of physiological pathways; simulation and design of clinical trials; and analyses of clinical management tools such as guidelines, performance measures, priority setting, and cost-effectiveness. Linking the models to EMR data also facilitates tailoring analyses to specific populations. The models' power and accuracy can be improved by linkage to comprehensive, person-specific, longitudinal data from EMRs.

  2. Realistic inflation models and primordial gravity waves

    NASA Astrophysics Data System (ADS)

    Rehman, Mansoor Ur

    We investigate both supersymmetric and non-supersymmetric realistic models of inflation. In non-supersymmetric models, inflation is successfully realized by employing both Coleman-Weinberg and Higgs potentials in GUTs such as SU(5) and SO(10). The quantum smearing of tree level predictions is discussed in the Higgs inflation. These quantum corrections can arise from the inflaton couplings to other particles such as GUT scalars. As a result of including these corrections, a reduction in the tensor-to-scalar ratio r, a canonical measure of gravity waves produced during inflation, is observed. In a simple φ⁴ chaotic model, we reconsider a non-minimal (ξ > 0) gravitational coupling of the inflaton φ arising from the interaction ξRφ², where R is the Ricci scalar. In estimating bounds on various inflationary parameters we also include quantum corrections. We emphasize that while working with high precision observations such as the current Planck satellite experiment we cannot ignore these radiative and gravitational corrections in analyzing the predictions of various inflationary models. In supersymmetric hybrid inflation with minimal Kahler potential, the soft SUSY breaking terms are shown to play an important role in realizing inflation consistent with the latest WMAP data. The SUSY hybrid models which we consider here predict exceedingly small values of r. However, to obtain observable gravity waves the non-minimal Kahler potential turns out to be a necessary ingredient. A realistic flipped SU(5) model, which benefits from the absence of topological defects, is considered within standard SUSY hybrid inflation. We also present a discussion of shifted hybrid inflation in a realistic SUSY SU(5) GUT model.

  3. A versatile platform for multilevel modeling of physiological systems: template/instance framework for large-scale modeling and simulation.

    PubMed

    Asai, Yoshiyuki; Abe, Takeshi; Oka, Hideki; Okita, Masao; Okuyama, Tomohiro; Hagihara, Ken-Ichi; Ghosh, Samik; Matsuoka, Yukiko; Kurachi, Yoshihisa; Kitano, Hiroaki

    2013-01-01

    Building multilevel models of physiological systems is a significant and effective method for integrating a huge amount of bio-physiological data and knowledge obtained by earlier experiments and simulations. Since such models tend to be large in size and complicated in structure, appropriate software frameworks for supporting modeling activities are required. A software platform, PhysioDesigner, has been developed, which supports the process of creating multilevel models. Models developed on PhysioDesigner are established in an XML format called PHML. Every physiological entity in a model is represented as a module, and hence a model constitutes an aggregation of modules. When the number of entities of which the model is composed is large, it is difficult to manage them manually, and some semiautomatic assistive functions are necessary. This article introduces the PhysioDesigner platform, focusing particularly on recently developed features for building large-scale models that utilize a template/instance framework and morphological information.

  4. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  5. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1222-1231, 2003
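
    The Morales-Nocedal hybrid itself is not available in standard libraries, but the baseline it is compared against, limited-memory BFGS minimization of a large, rugged objective, can be illustrated with SciPy; the extended Rosenbrock function below merely stands in for a molecular force field.

      # Baseline illustration only: minimizing a rugged test objective with
      # L-BFGS-B via SciPy. The hybrid L-BFGS/Hessian-free Newton scheme discussed
      # above is not part of SciPy; this just shows the kind of large-scale
      # unconstrained minimization being benchmarked.
      import numpy as np
      from scipy.optimize import minimize

      def rosenbrock(x):
          return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

      def rosenbrock_grad(x):
          g = np.zeros_like(x)
          g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1] ** 2) - 2.0 * (1.0 - x[:-1])
          g[1:] += 200.0 * (x[1:] - x[:-1] ** 2)
          return g

      x0 = np.full(1000, -1.2)                        # a 1000-dimensional start point
      res = minimize(rosenbrock, x0, jac=rosenbrock_grad, method="L-BFGS-B")
      print(res.fun, res.nit)                         # final value and iterations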

  6. Using SMOS for validation and parameter estimation of a large scale hydrological model in Paraná river basin

    NASA Astrophysics Data System (ADS)

    Colossi, Bibiana; Fleischmann, Ayan; Siqueira, Vinicius; Bitar, Ahmad Al; Paiva, Rodrigo; Fan, Fernando; Ruhoff, Anderson; Pontes, Paulo; Collischonn, Walter

    2017-04-01

    Large-scale representation of soil moisture conditions can be achieved through hydrological simulation and remote sensing techniques. However, both methodologies have several limitations, which suggests the potential benefit of using the two sources of information together. This study therefore had two main objectives: to perform a cross-validation between remotely sensed soil moisture from the SMOS (Soil Moisture and Ocean Salinity) L3 product and soil moisture simulated with the large-scale hydrological model MGB-IPH; and to evaluate the potential benefits of including remotely sensed soil moisture in model parameter estimation. The study analyzed results for the South American continent, where hydrometeorological monitoring is usually scarce. The study was performed in the Paraná River Basin, an important South American basin whose extension and particular characteristics allow the representation of different climatic, geological and, consequently, hydrological conditions. Soil moisture estimated with SMOS was transformed from water content to a Soil Water Index (SWI) so that it is comparable to the saturation degree simulated with the MGB-IPH model. The multi-objective complex evolution algorithm (MOCOM-UA) was applied for automatic model calibration considering only remotely sensed soil moisture, only discharge, and both sources of information together. Results show that this type of analysis can be very useful, because it allows limitations in the model structure to be recognized. In the case of hydrological model calibration, this approach can avoid the use of out-of-range parameter values that attempt to compensate for model limitations. It also indicates aspects of the model where efforts should be concentrated in order to improve the representation of hydrological or hydraulic processes. Automatic calibration gives an estimate of how different sources of information can be applied and of the quality of results they might yield. We emphasize that these findings can be valuable for hydrological modeling in large scale South American

  7. Physical characteristics of the Gulf Stream as an indicator of the quality of large-scale circulation modeling

    NASA Astrophysics Data System (ADS)

    Sarkisyan, A. S.; Nikitin, O. P.; Lebedev, K. V.

    2016-12-01

    The general idea of this work is to show that the efficiency of modeling boundary currents (compared to the results of observations) can serve as an indicator of correctness for the modeling of the entire large-scale ocean circulation. The results of calculation of the mean surface currents in the Gulf Stream area based on direct measurements from drifters are presented together with the results of numerical modeling of variability of the Gulf Stream transport at 33°N over the period 2005-2014 based on data from Argo profiling buoys.

  8. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    NASA Technical Reports Server (NTRS)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, relay-architecture UAS communications simulation capability developed for the NASA GRC UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC flight-test radio. The report contains a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling of results and performance data with observations.

  9. A realistic renormalizable supersymmetric E₆ model

    SciTech Connect

    Bajc, Borut; Susič, Vasja

    2014-01-01

    A complete realistic model based on the supersymmetric version of E₆ is presented. It consists of three copies of matter 27, and a Higgs sector made of 2×(27+27bar)+351′+351′bar representations. An analytic solution to the equations of motion is found which spontaneously breaks the gauge group into the Standard Model. The light fermion mass matrices are written down explicitly as non-linear functions of three Yukawa matrices. This contribution is based on Ref. [1].

  10. Can key vegetation parameters be retrieved at the large-scale using LAI satellite products and a generic modelling approach ?

    NASA Astrophysics Data System (ADS)

    Dewaele, Helene; Calvet, Jean-Christophe; Carrer, Dominique; Laanaia, Nabil

    2016-04-01

    In the context of climate change, the need to assess and predict the impact of droughts on vegetation and water resources increases. Generic approaches permitting the modelling of continental surfaces at the large scale have progressed in recent decades towards land surface models able to couple the cycles of water, energy and carbon. A major source of uncertainty in these generic models is the maximum available water content of the soil (MaxAWC) usable by plants, which is constrained by the rooting depth parameter and is unobservable at the large scale. In this study, vegetation products derived from the SPOT/VEGETATION satellite data available since 1999 are used to optimize the model rooting depth over rainfed croplands and permanent grasslands at 1 km x 1 km resolution. The inter-annual variability of the Leaf Area Index (LAI) is simulated over France using the Interactions between Soil, Biosphere and Atmosphere, CO2-reactive (ISBA-A-gs) generic land surface model and a two-layer force-restore (FR-2L) soil profile scheme. The leaf nitrogen concentration directly impacts the modelled value of the maximum annual LAI. In a first step this parameter is estimated for the last 15 years by using an iterative procedure that matches the maximum values of LAI modelled by ISBA-A-gs to the highest satellite-derived LAI values. The Root Mean Square Error (RMSE) is used as a cost function to be minimized. In a second step, the model rooting depth is optimized in order to reproduce the inter-annual variability resulting from the drought impact on the vegetation. The evaluation of the retrieved soil rooting depth is achieved using the French agricultural statistics of Agreste. Retrieved leaf nitrogen concentrations are compared with values from previous studies. The preliminary results show the good potential of this approach for estimating these two vegetation parameters (leaf nitrogen concentration, MaxAWC) at the large scale over grassland areas. Besides, a marked impact of the

  11. Representation of drought propagation in large-scale models: a test on global scale and catchment scale

    NASA Astrophysics Data System (ADS)

    van Huijgevoort, Marjolein; van Loon, Anne; van Lanen, Henny

    2013-04-01

    Drought development has increasingly been studied using large-scale models, although the suitability of these models for analysing hydrological drought is still unclear. Drought events propagate through the terrestrial hydrological cycle from meteorological drought to hydrological drought. We investigated to what extent large-scale models can reproduce this propagation. An ensemble of ten large-scale models, run within the WATCH project, and their forcing data (WATCH forcing data) were used to identify drought using a threshold level method. Propagation features (pooling, attenuation, lag, lengthening) were assessed on a global scale and, in more detail, for a selection of five case study areas in Europe. On a global scale, propagation features were reproduced by the multi-model ensemble, resulting in longer and fewer drought events in runoff than in precipitation. Spatial patterns of extreme drought events (e.g. the 1976 drought event in Europe) derived from monthly runoff data resembled the spatial patterns derived from 3-monthly precipitation data more closely than those derived from monthly precipitation data. There were differences between the individual models; some models showed a faster response in runoff than others. In general, modelled runoff responded too quickly to rainfall, which led to deviations from historical drought events reported for slowly responding systems. Also in the selected case study areas, drought events became fewer and longer when moving through the hydrological cycle. For drought events moving from precipitation via soil moisture to subsurface runoff, the number of droughts decreased from 3-5 per year to 0.5-1.5 per year and average duration increased from around 15 days to 50-120 days. Fast and slowly responding systems, however, did not show much differentiation. Also in the selected case study areas the simulated runoff reacted too quickly to precipitation, especially in catchments with a cold climate, a semi-arid climate, or large
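
    The threshold level method mentioned above can be sketched in a few lines: flag time steps where the variable drops below a threshold, and pool events separated by short gaps. The threshold and pooling window in the sketch are illustrative, not the values used in the study.

      # Minimal threshold-level sketch: flag drought when a flow/precipitation series
      # drops below a fixed threshold, then pool events separated by short gaps.
      # Threshold and pooling window are illustrative only.
      import numpy as np

      def drought_events(series, threshold, pool_gap=2):
          below = series < threshold
          events, start = [], None
          for i, flag in enumerate(below):
              if flag and start is None:
                  start = i
              elif not flag and start is not None:
                  events.append([start, i - 1])
                  start = None
          if start is not None:
              events.append([start, len(series) - 1])
          # Pool events separated by gaps shorter than pool_gap time steps.
          pooled = []
          for ev in events:
              if pooled and ev[0] - pooled[-1][1] - 1 < pool_gap:
                  pooled[-1][1] = ev[1]
              else:
                  pooled.append(ev)
          return pooled

      runoff = np.array([5, 4, 2, 1, 3, 1, 1, 6, 7, 2, 1, 1, 5], dtype=float)
      print(drought_events(runoff, threshold=3.0))    # [[2, 6], [9, 11]]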

  12. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L. |; Rickert, M. |

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated 'looping' between the microsimulation and the simulated planning of individual person's behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million 'particle' (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  13. A large-scale integrated karst-vegetation recharge model to understand the impact of climate and land cover change

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Hartmann, Andreas; Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

    Karst aquifers are an important source of drinking water in many regions of the world, but their resources are likely to be affected by changes in climate and land cover. Karst areas are highly permeable and produce large amounts of groundwater recharge, while surface runoff is typically negligible. As a result, recharge in karst systems may be particularly sensitive to environmental changes compared to other less permeable systems. However, current large-scale hydrological models poorly represent karst specificities. They tend to provide an erroneous water balance and to underestimate groundwater recharge over karst areas. A better understanding of karst hydrology and better estimates of karst groundwater resources at the large scale are therefore needed to guide water management in a changing world. The first objective of the present study is to introduce explicit vegetation processes into a previously developed karst recharge model (VarKarst) to better estimate evapotranspiration losses depending on the land cover characteristics. The novelty of the approach for large-scale modelling lies in the assessment of model output uncertainty and of parameter sensitivity to avoid over-parameterisation. We find that the model so modified is able to produce simulations consistent with observations of evapotranspiration and soil moisture at Fluxnet sites located in carbonate rock areas. Secondly, we aim to determine the model sensitivities to climate and land cover characteristics, and to assess the relative influence of changes in climate and land cover on aquifer recharge. We perform virtual experiments using synthetic climate inputs and varying the values of the land cover parameters. In this way, we can control for variations in climate input characteristics (e.g. precipitation intensity, precipitation frequency) and vegetation characteristics (e.g. canopy water storage capacity, rooting depth), and we can isolate the effect that each of these quantities has on recharge. Our results
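
    The 'virtual experiment' idea, varying one land-cover parameter at a time under a fixed synthetic climate forcing and recording the recharge response, can be sketched as below; the toy recharge function is only a placeholder and is not the VarKarst model.

      # Sketch of a one-at-a-time virtual experiment: vary one land-cover
      # parameter while holding a synthetic climate forcing fixed, and record
      # how mean recharge responds. The recharge function is a toy placeholder.
      import numpy as np

      rng = np.random.default_rng(2)
      precip = rng.gamma(shape=0.8, scale=6.0, size=3650)   # synthetic daily rainfall (mm)

      def mean_recharge(precip, canopy_storage, rooting_depth):
          interception = np.minimum(precip, canopy_storage)  # canopy losses
          infiltration = precip - interception
          et_demand = 0.004 * rooting_depth                  # deeper roots -> more ET (toy)
          return np.mean(np.maximum(infiltration - et_demand, 0.0))

      baseline = {"canopy_storage": 1.0, "rooting_depth": 500.0}
      for name, values in {"canopy_storage": [0.5, 1.0, 2.0],
                           "rooting_depth": [250.0, 500.0, 1000.0]}.items():
          for v in values:
              params = dict(baseline, **{name: v})
              print(name, v, round(mean_recharge(precip, **params), 2))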

  14. Growth Mixture Modeling: Application to Reading Achievement Data from a Large-Scale Assessment

    ERIC Educational Resources Information Center

    Bilir, Mustafa Kuzey; Binici, Salih; Kamata, Akihito

    2008-01-01

    The popularity of growth modeling has increased in psychological and cognitive development research as a means to investigate patterns of changes and differences between observation units over time. Random coefficient modeling, such as multilevel modeling and latent growth curve modeling as a special application of structural equation modeling are…

  15. A mixed-layer model study of the stratocumulus response to changes in large-scale conditions

    NASA Astrophysics Data System (ADS)

    De Roode, Stephan R.; Siebesma, A. Pier; Gesso, Sara Dal; Jonker, Harm J. J.; Schalkwijk, Jerôme; Sival, Jasper

    2014-12-01

    A mixed-layer model is used to study the response of stratocumulus equilibrium state solutions to perturbations of cloud controlling factors, which include the sea surface temperature, the specific humidity and temperature in the free troposphere, as well as the large-scale divergence and horizontal wind speed. In the first set of experiments, we assess the effect of a change in a single forcing condition while keeping the entrainment rate fixed, while in the second set, the entrainment rate is allowed to respond. The role of the entrainment rate is exemplified by an experiment in which the sea surface temperature is increased. An analysis of the budget equations for heat and moisture demonstrates that for a fixed entrainment rate, the stratocumulus liquid water path (LWP) will increase since the moistening from the surface evaporation dominates the warming effect. By contrast, if the response of the entrainment rate to the change in the surface forcing is sufficiently strong, enhanced mixing of dry and warm inversion air will cause a thinning of the cloud layer. If the entrainment warming effect is sufficiently strong, the surface sensible heat flux will decrease, as opposed to an increase which will occur for a fixed entrainment rate. It is argued that the surface evaporation will always increase for an increase in the sea surface temperature, and this change will be enlarged if the entrainment rate increases. These experiments aid the interpretation of results of similar simulations with single-column model versions of climate models carried out in the framework of the CFMIP-GCSS Intercomparison of Large-Eddy and Single-Column Models (CGILS) project. Because in large-scale models the entrainment response to changes in the large-scale forcing conditions depends on the details of the parameterization of turbulent and convective transport, intermodel differences in the sign of the LWP response may well be attributable to differences in the entrainment response.

  16. Modeling and Analysis of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.

    2015-01-01

    An accidental fire inside a spacecraft is an unlikely, but very real emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport are unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard spacecraft. This one-dimensional model simplifies the treatment of heat and mass transfer as well as toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).

  17. Improved Large-Scale Inundation Modelling by 1D-2D Coupling and Consideration of Hydrologic and Hydrodynamic Processes - a Case Study in the Amazon

    NASA Astrophysics Data System (ADS)

    Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.

    2015-12-01

    Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the World's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation due to a lack of detailed channel and floodplain geometry or the absence of hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of errors and improvements and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model that has its own, simpler 1D routing scheme (DynRout), which has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) a FM 2D flexible mesh forced with PCR output; (2) as in (1) but discriminating between 1D channels and 2D floodplains; and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory. The present research constitutes a first step towards a globally applicable approach to fully couple

  18. Nanostructure modeling in oxide ceramics using large scale parallel molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Campbell, Timothy J.

    1998-12-01

    The purpose of this dissertation is to investigate the properties and processes in nanostructured oxide ceramics using molecular-dynamics (MD) simulations. These simulations are based on realistic interatomic potentials and require scalable and portable multiresolution algorithms implemented on parallel computers. The dynamics of oxidation of aluminum nanoclusters is studied with an MD scheme that can simultaneously treat metallic and oxide systems. Dynamic charge transfer between anions and cations, which gives rise to a compute-intensive Coulomb interaction, is treated by the O(N) Fast Multipole Method. Structural and dynamical correlations and local stresses reveal significant charge transfer and stress variations which cause rapid diffusion of Al and O on the nanocluster surface. At a constant temperature, the formation of an amorphous surface-oxide layer is observed during the first 100 picoseconds. A subsequent sharp decrease in O diffusion normal to the cluster surface arrests the growth of the oxide layer with a saturation thickness of 4 nanometers; this is in excellent agreement with experiments. Analyses of the oxide scale reveal significant charge transfer and variations in local structure. When the heat is not extracted from the cluster, the oxidation reaction becomes explosive. Sintering, structural correlations, vibrational properties, and mechanical behavior of nanophase silica glasses are also studied using the MD approach based on an empirical interatomic potential that consists of both two- and three-body interactions. Nanophase silica glasses with densities ranging from 76 to 93% of the bulk glass density are obtained using an isothermal-isobaric MD approach. During the sintering process, the pore sizes and distribution change without any discernible change in the pore morphology. The height and position of the first sharp diffraction peak (the signature of intermediate-range order) in the neutron static structure factor show significant differences

  19. Large-scale hydrodynamic modeling of the middle Yangtze River Basin with complex river-lake interactions

    NASA Astrophysics Data System (ADS)

    Lai, Xijun; Jiang, Jiahu; Liang, Qiuhua; Huang, Qun

    2013-06-01

    The flow regime in the middle Yangtze River Basin is experiencing rapid changes due to intensive human activities and ongoing climate change. The middle reach of the Yangtze River and the associated water system are extremely difficult to model reliably due to the highly complex interactions between the main stream and many tributaries and lakes. This paper presents a new Coupled Hydrodynamic Analysis Model (CHAM) designed for simulating the large-scale water system in the middle Yangtze River Basin, featuring complex river-lake interactions. CHAM dynamically couples a one-dimensional (1-D) unsteady flow model and a two-dimensional (2-D) hydrodynamic model using a new coupling algorithm that is particularly suitable for large-scale water systems. Numerical simulations are carried out to reproduce the flow regime in the region in 1998 when a severe flood event occurred and in 2006 when it experienced an extremely dry year. The model is able to reproduce satisfactorily the major physical processes characterized by seasonal wetting and drying controlled by strong river-lake interactions. This indicates that the present model provides a promising tool for predicting complex flow regimes with remarkable seasonal changes and strong river-lake interactions.

  20. Mining and state-space modeling and verification of sub-networks from large-scale biomolecular networks.

    PubMed

    Hu, Xiaohua; Wu, Fang-Xiang

    2007-08-31

    Biomolecular networks dynamically respond to stimuli and implement cellular function. Understanding these dynamic changes is the key challenge for cell biologists. As biomolecular networks grow in size and complexity, the model of a biomolecular network must become more rigorous to keep track of all the components and their interactions. In general this presents the need for computer simulation to manipulate and understand the biomolecular network model. In this paper, we present a novel method to model the regulatory system which executes a cellular function and can be represented as a biomolecular network. Our method consists of two steps. First, a novel scale-free network clustering approach is applied to the large-scale biomolecular network to obtain various sub-networks. Second, a state-space model is generated for the sub-networks and simulated to predict their behavior in the cellular context. The modeling results represent hypotheses that are tested against high-throughput data sets (microarrays and/or genetic screens) for both the natural system and perturbations. Notably, the dynamic modeling component of this method depends on the automated network structure generation of the first component and the sub-network clustering, which are both essential to make the solution tractable. Experimental results on time series gene expression data for the human cell cycle indicate our approach is promising for sub-network mining and simulation from large-scale biomolecular network.

  1. Mining and state-space modeling and verification of sub-networks from large-scale biomolecular networks

    PubMed Central

    Hu, Xiaohua; Wu, Fang-Xiang

    2007-01-01

    Background Biomolecular networks dynamically respond to stimuli and implement cellular function. Understanding these dynamic changes is the key challenge for cell biologists. As biomolecular networks grow in size and complexity, the model of a biomolecular network must become more rigorous to keep track of all the components and their interactions. In general this presents the need for computer simulation to manipulate and understand the biomolecular network model. Results In this paper, we present a novel method to model the regulatory system which executes a cellular function and can be represented as a biomolecular network. Our method consists of two steps. First, a novel scale-free network clustering approach is applied to the large-scale biomolecular network to obtain various sub-networks. Second, a state-space model is generated for the sub-networks and simulated to predict their behavior in the cellular context. The modeling results represent hypotheses that are tested against high-throughput data sets (microarrays and/or genetic screens) for both the natural system and perturbations. Notably, the dynamic modeling component of this method depends on the automated network structure generation of the first component and the sub-network clustering, which are both essential to make the solution tractable. Conclusion Experimental results on time series gene expression data for the human cell cycle indicate our approach is promising for sub-network mining and simulation from large-scale biomolecular network. PMID:17764552
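
    The state-space step of the method can be illustrated generically: each sub-network is reduced to a linear model x[k+1] = A x[k] + B u[k], y[k] = C x[k] and simulated forward under a stimulus. The matrices in the sketch below are arbitrary placeholders, not values estimated from expression data.

      # Generic linear state-space sketch of the kind fitted to a sub-network:
      # x[k+1] = A x[k] + B u[k],  y[k] = C x[k].  Matrices are placeholders.
      import numpy as np

      A = np.array([[0.9, 0.1], [-0.2, 0.8]])    # internal regulatory dynamics
      B = np.array([[0.5], [0.0]])               # response to an external stimulus
      C = np.eye(2)                              # observe both state variables

      def simulate(x0, u, steps):
          x, ys = np.asarray(x0, dtype=float), []
          for k in range(steps):
              ys.append(C @ x)
              x = A @ x + B @ np.atleast_1d(u[k])
          return np.array(ys)

      u = np.ones(20)                             # constant stimulus over 20 steps
      print(simulate([0.0, 0.0], u, steps=20)[-1])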

  2. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    NASA Astrophysics Data System (ADS)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are

  3. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    SciTech Connect

    Mendon, Vrushali V.; Taylor, Zachary T.

    2014-09-10

    Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is to allow for flexibility to address variability in house features such as geometry, configuration, and HVAC systems. Researchers solved this problem in a novel way by developing a simulation structure capable of generating fully functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.

  4. The HyperHydro (H2) experiment for comparing different large-scale models at various resolutions

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin

    2016-04-01

    HyperHydro (http://www.hyperhydro.org/) is an open network of scientists with the objective of simulating large-scale terrestrial hydrology and water resources at hyper-resolution (Wood et al., 2011, DOI: 10.1029/2010WR010090; Bierkens et al., 2014, DOI: 10.1002/hyp.10391). Within the HyperHydro network, a modeling workshop was held at Utrecht University, the Netherlands, on 9-12 June 2015. The goal of the workshop was to start the HyperHydro (H^2) experiment for comparing different large-scale hydrological models, at different spatial resolutions, from 50 km to 1 km. Model simulation results (e.g. discharge, soil moisture, evaporation, snow, groundwater depth, etc.) are evaluated against available observation data and compared across various models and resolutions. At EGU 2016, we would like to present the latest results of this inter-comparison experiment. We also invite participation from the hydrology community in this experiment. Up to now, the models compared are CLM, LISFLOOD, mHM, ParFlow-CLM, PCR-GLOBWB, TerrSysMP, VIC, WaterGAP, and wflow. As initial test-beds, we mainly focus on two river basins: San Joaquin/California (82000 km^2) and Rhine (185000 km^2). Moreover, comparison over a larger region, such as the CONUS (Contiguous US) domain, is also explored and presented.

  5. The HyperHydro (H^2) experiment for comparing different large-scale models at various resolutions

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, E.; Bosmans, J.; Chaney, N.; Clark, M. P.; Condon, L. E.; David, C. H.; De Roo, A. P. J.; Doll, P. M.; Drost, N.; Eisner, S.; Famiglietti, J. S.; Floerke, M.; Gilbert, J. M.; Gochis, D. J.; Hut, R.; Keune, J.; Kollet, S. J.; Maxwell, R. M.; Pan, M.; Rakovec, O.; Reager, J. T., II; Samaniego, L. E.; Mueller Schmied, H.; Trautmann, T.; Van Beek, L. P.; Van De Giesen, N.; Wood, E. F.; Bierkens, M. F.; Kumar, R.

    2015-12-01

    HyperHydro (http://www.hyperhydro.org/) is an open network of scientists with the objective of simulating large-scale terrestrial hydrology and water resources at hyper-resolution (Bierkens et al., 2014, DOI: 10.1002/hyp.10391). Within the HyperHydro network, a modeling workshop was held at Utrecht University, the Netherlands, on 9-12 June 2015. The goal of the workshop was to start the HyperHydro (H^2) experiment for comparing different large-scale hydrological models, at different spatial resolutions, from 50 km to 1 km. Model simulation results (e.g. discharge, soil moisture, evaporation, snow, groundwater depth, etc.) are evaluated against available observation data and compared across various models and resolutions. At AGU 2015, we would like to present the results of this inter-comparison experiment. During the workshop in Utrecht, the models compared were CLM, LISFLOOD, mHM, ParFlow-CLM, PCR-GLOBWB, TerrSysMP, VIC and WaterGAP. We invite participation from the hydrology community in this experiment. As test-beds, we focus on two river basins: San Joaquin (~82000 km2) and Rhine (~185000 km2). In the near future, we will extend this experiment to the CONUS and CORDEX-EU domains.

  6. Investigation of Large Scale Cortical Models on Clustered Multi-Core Processors

    DTIC Science & Technology

    2013-02-01

    [Fragmentary DTIC record excerpt; a table of acceleration platforms (x86, Cell, GPGPU) versus cortical/neuron models (HTM [22], Dean [25], Izhikevich [26], Hodgkin-Huxley [27], Morris-Lecar [28]) did not survive extraction.] Four spiking neuron models are examined: Hodgkin-Huxley [27], Izhikevich [26], Wilson [29], and Morris-Lecar [28]. Table 2 of the source compares the computational properties of the four models.

  7. Social and Economic Effects of Large-Scale Energy Development in Rural Areas: An Assessment Model.

    ERIC Educational Resources Information Center

    Murdock, Steve H.; Leistritz, F. Larry

    General development, structure, and uses of a computerized impact projection model, the North Dakota Regional Environmental Assessment Program (REAP) Economic-Demographic Assessment Model, were studied not only to describe a model developed to meet informational needs of local decision makers (especially in a rural area undergoing development),…

  8. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; Kamae, Y.; Lohmann, G.; Lunt, D. J.; Abe-Ouchi, A.; Pickering, S. J.; Ramstein, G.; Rosenbloom, N. A.; Salzmann, U.; Sohl, L.; Stepanek, C.; Ueda, H.; Yan, Q.; Zhang, Z.

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and model-data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data-model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  9. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
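
    To make the input-output idea concrete, a discrete second-order Volterra expansion of a synapse's response to a spike train can be evaluated as in the sketch below; the kernels are arbitrary decaying exponentials, not the kernels identified from the mechanistic synapse model in the record.

      import numpy as np

      # Hypothetical second-order discrete Volterra model of an input-output synapse:
      # y[n] = k0 + sum_i k1[i] x[n-i] + sum_ij k2[i, j] x[n-i] x[n-j]
      # The kernels below are illustrative decaying exponentials, not fitted values.
      M = 50                                   # kernel memory length (time bins)
      tau = np.arange(M)
      k0 = 0.0
      k1 = 0.5 * np.exp(-tau / 10.0)           # first-order kernel
      k2 = 0.05 * np.outer(np.exp(-tau / 5.0), np.exp(-tau / 5.0))  # second-order kernel

      rng = np.random.default_rng(1)
      x = (rng.random(500) < 0.05).astype(float)   # sparse input spike train

      def volterra_output(x, k0, k1, k2):
          M = len(k1)
          y = np.full(len(x), k0)
          for n in range(len(x)):
              past = x[max(0, n - M + 1):n + 1][::-1]   # x[n], x[n-1], ...
              m = len(past)
              y[n] += k1[:m] @ past                     # first-order contribution
              y[n] += past @ k2[:m, :m] @ past          # second-order contribution
          return y

      y = volterra_output(x, k0, k1, k2)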

  10. Why can't current large-scale models predict mixed-phase clouds correctly?

    NASA Astrophysics Data System (ADS)

    Barrett, Andrew; Hogan, Robin; Forbes, Richard

    2013-04-01

    Stratiform mid-level mixed-phase clouds have a significant radiative impact but are often missing from numerical model simulations for a number of reasons. This is particularly true more recently as models move towards treating cloud ice as a prognostic variable. This presentation will demonstrate three important findings that will help lead to better simulations of mixed-phase clouds by models in the future. Each is briefly covered in the paragraphs below. 1) The occurrence of mid-level mixed-phase clouds in models is compared with ground based remote sensors, finding an under-prediction of the supercooled liquid water content in the models of a factor of 2 or more. This is accompanied by a low bias in the liquid cloud fraction whilst the ice properties are better simulated. Models with more sophisticated microphysics schemes that include prognostic cloud ice are the worst performing models. 2) A new single column model is used to investigate which processes are important for the maintenance of supercooled liquid layers. By running the model over multiple days and exploring the parameter-space of numerous physical parameterizations it was determined that the most sensitive areas of the model are ice microphysical processes and vertical resolution. 3) Vertical resolutions finer than 200 metres are required to capture the thin liquid layers in these clouds and therefore their important radiative effect. Leading models are still far coarser than this in the mid-troposphere, limiting hope of simulating these clouds properly. A new parameterization of the vertical structure of these clouds is developed and allows their properties to be correctly simulated in a resolution independent way by numerical models with coarse vertical resolution. This parameterization is explained and demonstrated here and could enable significant improvement in model simulations of stratiform mixed-phase clouds.

  11. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
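
    For reference, the genus per unit volume of isodensity contours of a Gaussian (random phase) field has, up to normalization conventions, the standard analytic form

      g(\nu) = A \,(1 - \nu^{2})\, e^{-\nu^{2}/2}, \qquad
      A = \frac{1}{(2\pi)^{2}} \left( \frac{\langle k^{2} \rangle}{3} \right)^{3/2}

    where ν is the density threshold in units of the standard deviation of the smoothed field and ⟨k²⟩ is the mean-square wavenumber of the smoothed power spectrum; departures from this curve are what signal non-Gaussian topology.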

  12. St. Louis Initiative for Integrated Care Excellence (SLI(2)CE): integrated-collaborative care on a large scale model.

    PubMed

    Brawer, Peter A; Martielli, Richard; Pye, Patrice L; Manwaring, Jamie; Tierney, Anna

    2010-06-01

    The primary care health setting is in crisis. Increasing demand for services, with dwindling numbers of providers, has resulted in decreased access and decreased satisfaction for both patients and providers. Moreover, the overwhelming majority of primary care visits are for behavioral and mental health concerns rather than issues of a purely medical etiology. Integrated-collaborative models of health care delivery offer possible solutions to this crisis. The purpose of this article is to review the existing data available after 2 years of the St. Louis Initiative for Integrated Care Excellence; an example of integrated-collaborative care on a large scale model within a regional Veterans Affairs Health Care System. There is clear evidence that the SLI(2)CE initiative rather dramatically increased access to health care, and modified primary care practitioners' willingness to address mental health issues within the primary care setting. In addition, data suggests strong fidelity to a model of integrated-collaborative care which has been successful in the past. Integrated-collaborative care offers unique advantages to the traditional view and practice of medical care. Through careful implementation and practice, success is possible on a large scale model.

  13. Modeling Cultural/ecological Impacts of Large-scale Mining and Industrial Development in the Yukon-Kuskokwim Basin

    NASA Astrophysics Data System (ADS)

    Bunn, J. T.; Sparck, A.

    2004-12-01

    We are developing a methodology for predicting the cultural impact of large-scale mineral resource development in the Yukon-Kuskokwim (Y-K) basin. The Yup'ik/Cup'ik/Dene people of the Y-K basin currently practice a mixed-market subsistence economy, in which native subsistence traditions and social structures are largely intact. Large-scale mining and industrial-infrastructure developments are being planned that will constitute a significant expansion of the market economy, and will also significantly affect the physical environment that is central to the subsistence way of life. To explore the impact that these changes are likely to have on native culture we use a systems modeling approach, considering "culture" to be a system that encompasses the physical, biological and verbal realms. We draw upon Alaska Department of Fish and Game technical reports, anthropological studies, Yup'ik cultural visioning exercises, and personal experience to identify the components of our cultural model. We use structural equation modeling to determine causal relationships between system components. The resulting model is used to predict changes that are likely to occur as a result of planned developments.

  14. Realistic synthetic observations from radiative transfer models

    NASA Astrophysics Data System (ADS)

    Koepferl, Christine; Robitaille, Thomas

    2013-07-01

    When modeling young stars and star-forming regions throughout the Galaxy, it is important to correctly treat the limitations of the data such as finite resolution and sensitivity. In order to study these effects, and to make radiative transfer models directly comparable to real observations, we have developed a Python package that allows post-processing the output of the 3-d Monte Carlo Radiative Transfer code HYPERION (Robitaille 2011 A&A 536, A79, see poster 2S001). With this package, realistic synthetic observations can be generated, modeling the effects of convolution with arbitrary PSFs, transmission curves, finite pixel resolution, noise and reddening. Pipelines can be written to compute synthetic observations that simulate observatories such as the Spitzer Space Telescope or the Herschel Space Observatory. In this poster we describe the package and present examples of such synthetic observations.
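
    The package itself is not reproduced here; as a minimal stand-in for the post-processing steps described (PSF convolution, finite pixel size, noise), the following NumPy/SciPy sketch degrades an idealized model image, assuming a simple Gaussian PSF and purely Gaussian noise rather than a real instrument kernel.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Hypothetical post-processing of an ideal radiative-transfer image into a
      # "synthetic observation": PSF convolution, rebinning to detector pixels,
      # and additive Gaussian noise. All values are illustrative only.
      rng = np.random.default_rng(42)
      ideal = np.zeros((512, 512))
      ideal[256, 256] = 1.0e4          # a single bright point source (arbitrary units)

      psf_sigma_pix = 3.0              # stand-in for the instrument PSF width
      blurred = gaussian_filter(ideal, sigma=psf_sigma_pix)

      rebin = 4                        # detector pixels are 4x coarser than the model grid
      observed = blurred.reshape(128, rebin, 128, rebin).sum(axis=(1, 3))

      noise_rms = 0.5                  # per-pixel noise level (same arbitrary units)
      observed += rng.normal(0.0, noise_rms, observed.shape)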

  15. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex, Hydrogeologic Systems

    NASA Astrophysics Data System (ADS)

    Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  16. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  17. Modelling of a large-scale urban contamination situation and remediation alternatives.

    PubMed

    Thiessen, K M; Arkhipov, A; Batandjieva, B; Charnock, T W; Gaschak, S; Golikov, V; Hwang, W T; Tomás, J; Zlobenko, B

    2009-05-01

    The Urban Remediation Working Group of the International Atomic Energy Agency's EMRAS (Environmental Modelling for Radiation Safety) program was organized to address issues of remediation assessment modelling for urban areas contaminated with dispersed radionuclides. The present paper describes the first of two modelling exercises, which was based on Chernobyl fallout data in the town of Pripyat, Ukraine. Modelling endpoints for the exercise included radionuclide concentrations and external dose rates at specified locations, contributions to the dose rates from individual surfaces and radionuclides, and annual and cumulative external doses to specified reference individuals. Model predictions were performed for a "no action" situation (with no remedial measures) and for selected countermeasures. The exercise provided a valuable opportunity to compare modelling approaches and parameter values, as well as to compare the predicted effectiveness of various countermeasures with respect to short-term and long-term reduction of predicted doses to people.

  18. Large-scale in silico modeling of metabolic interactions between cell types in the human brain.

    PubMed

    Lewis, Nathan E; Schramm, Gunnar; Bordbar, Aarash; Schellenberger, Jan; Andersen, Michael P; Cheng, Jeffrey K; Patel, Nilam; Yee, Alex; Lewis, Randall A; Eils, Roland; König, Rainer; Palsson, Bernhard Ø

    2010-12-01

    Metabolic interactions between multiple cell types are difficult to model using existing approaches. Here we present a workflow that integrates gene expression data, proteomics data and literature-based manual curation to model human metabolism within and between different types of cells. Transport reactions are used to account for the transfer of metabolites between models of different cell types via the interstitial fluid. We apply the method to create models of brain energy metabolism that recapitulate metabolic interactions between astrocytes and various neuron types relevant to Alzheimer's disease. Analysis of the models identifies genes and pathways that may explain observed experimental phenomena, including the differential effects of the disease on cell types and regions of the brain. Constraint-based modeling can thus contribute to the study and analysis of multicellular metabolic processes in the human tissue microenvironment and provide detailed mechanistic insight into high-throughput data analysis.
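
    As a hedged illustration of the constraint-based formalism referred to above, flux balance analysis solves a linear program of the form maximize c·v subject to S v = 0 and bounds on v; the tiny stoichiometric matrix below is invented for illustration and is unrelated to the brain models of the paper.

      import numpy as np
      from scipy.optimize import linprog

      # Toy flux balance analysis (FBA): maximize a "biomass" flux subject to
      # steady-state mass balance S @ v = 0 and flux bounds. The network
      # (3 metabolites x 4 reactions) is invented purely for illustration.
      S = np.array([
          [ 1, -1,  0,  0],   # metabolite A: produced by uptake, consumed by r2
          [ 0,  1, -1,  0],   # metabolite B: produced by r2, consumed by r3
          [ 0,  0,  1, -1],   # metabolite C: produced by r3, consumed by biomass
      ])
      c = np.array([0, 0, 0, -1.0])            # linprog minimizes, so negate biomass flux
      bounds = [(0, 10), (0, 5), (0, 5), (0, None)]

      res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
      print("optimal biomass flux:", res.x[-1])   # expect 5, limited by the r2/r3 bounds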

  19. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
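
    A minimal numerical version of the parameter sensitivity analysis discussed (not the report's own procedure) is a finite-difference estimate of normalized sensitivity coefficients S_j = (p_j/y) ∂y/∂p_j for a simple model; the logistic growth example below is purely illustrative.

      import numpy as np

      # Hypothetical finite-difference sensitivity analysis of a toy model.
      # The "model" is a two-parameter logistic population curve; the normalized
      # sensitivity S_j = (p_j / y) * dy/dp_j is estimated at t = 5.
      def model(params, t=5.0, y0=1.0):
          r, K = params                       # growth rate, carrying capacity
          return K * y0 * np.exp(r * t) / (K + y0 * (np.exp(r * t) - 1.0))

      p0 = np.array([0.5, 100.0])
      y_ref = model(p0)
      rel_step = 1e-4
      sensitivities = []
      for j in range(len(p0)):
          p = p0.copy()
          p[j] *= 1.0 + rel_step              # perturb one parameter at a time
          dy = model(p) - y_ref
          dp = p0[j] * rel_step
          sensitivities.append((p0[j] / y_ref) * dy / dp)
      print(dict(zip(["r", "K"], np.round(sensitivities, 3))))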

  20. Development and application of a large scale river system model for National Water Accounting in Australia

    NASA Astrophysics Data System (ADS)

    Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien

    2017-04-01

    Existing global and continental scale river models, mainly designed for integrating with global climate models, are of very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, groundwater seepage/recharge, which operate at a much finer resolution. Thus, these models are not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model called the Australian Water Resource Assessment River System model (AWRA-R) has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes major hydrological processes, anthropogenic water utilisation and storage routing that influence the streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model to compute water diversion for irrigation use and associated fluxes and stores and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory performance of the model, with median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and median annual bias of less than 1% for the calibration period (1970-1991) and median daily NSE of 0.69 and median annual bias of 12% for the validation period (1992-2014). The results have demonstrated that the performance of the model is less satisfactory when key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental scale water accounting, has contributed to improvements in the national water account by substantially reducing the unaccounted difference volume (gain/loss).
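
    For readers unfamiliar with the reported skill metrics, the Nash-Sutcliffe Efficiency and relative bias can be computed as in the sketch below; the flow series are random placeholders, not AWRA-R output.

      import numpy as np

      # Nash-Sutcliffe Efficiency (NSE) and relative bias for simulated vs observed
      # daily streamflow. The series below are random placeholders.
      def nse(obs, sim):
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def relative_bias(obs, sim):
          return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

      rng = np.random.default_rng(7)
      obs = np.abs(rng.normal(100.0, 30.0, 365))     # observed daily flow (m3/s)
      sim = obs * (1.0 + rng.normal(0.0, 0.1, 365))  # a "model" with ~10% scatter
      print(f"NSE = {nse(obs, sim):.2f}, bias = {relative_bias(obs, sim):+.1%}")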

  1. An Approach to Large Scale Radar-Based Modeling and Simulation

    DTIC Science & Technology

    2010-03-01

    [Fragmentary DTIC record excerpt.] Models and simulations serve useful roles throughout the DoD, including operational analysis, training, and support of acquisition projects; new acquisition programs often develop new models to explain and support the development of new technology [3]. Across its modeling and simulation, acquisition, operational, and research communities, the DoD refers to M&S as "a key enabler of DoD activities" [33].

  2. Open source large-scale high-resolution environmental modelling with GEMS

    NASA Astrophysics Data System (ADS)

    Baarsma, Rein; Alberti, Koko; Marra, Wouter; Karssenberg, Derek

    2016-04-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating the opportunity to run environmental models for every location on Earth. Collection of the data necessary to do this and the consequent conversion into a useful format is very demanding however, not to mention the computational demand of a model itself. We developed GEMS (Global Environmental Modelling System), an online application to run environmental models on various scales directly in your browser and share the results with other researchers. GEMS is open-source and uses open-source platforms including Flask, Leaflet, GDAL, MapServer and the PCRaster-Python modelling framework to process spatio-temporal models in real time. With GEMS, users can write, run, and visualize the results of dynamic PCRaster-Python models in a browser. GEMS uses freely available global data to feed the models, and automatically converts the data to the relevant model extent and data format. Currently available data include the SRTM elevation model, a selection of monthly vegetation data from MODIS, land use classifications from GlobCover, historical climate data from WorldClim, HWSD soil information from WorldGrids, population density from SEDAC and near real-time weather forecasts, most with a ±100 m resolution. Furthermore, users can add other or their own datasets using a web coverage service or a custom data provider script. With easy access to a wide range of base datasets and without the data preparation that is usually necessary to run environmental models, building and running a model becomes a matter of hours. Furthermore, it is easy to share the resulting maps, time series data or model scenarios with other researchers through a web mapping service (WMS). GEMS can be used to provide open access to model results. Additionally, environmental models in GEMS can be employed by users with no extensive experience with writing code, which is for example valuable for using models

  3. Large-Scale Sediment Routing: Development of a One-Dimensional Model Incorporating Sand Storage

    NASA Astrophysics Data System (ADS)

    Wiele, S. M.; Wilcock, P. R.; Grams, P. E.

    2005-12-01

    Routing sediment through long reaches and networks requires a balance between model efficiency, data availability, and accurate representation of sediment flux and storage. The first two often constrain the appropriate model to one dimension, but such models are unable to capture changes in sediment storage in side-channel environments, which are typically driven by two-dimensional transport fields. Side-channel environments are especially important in canyon channels. Routing of sand in canyon channels can be further complicated by transport of sand over a cobble or boulder bed and by remote locations, which can hinder measurement of channel shape. We have produced a one-dimensional model that routes water and sand through the Colorado River below Glen Canyon Dam in Arizona. Our model differs from conventional one-dimensional models in several significant ways: (1) exchange of sand between the main downstream current and eddies, which cannot be directly represented by a one-dimensional model, is included by parameterizing predictions over a wide range of conditions from a multidimensional model; (2) suspended-sand transport over an extremely rough and sparsely sand-covered bed, which is not accurately represented in conventional sand-transport relations or boundary conditions, is calculated in our model with newly developed algorithms (see Grams and others, this meeting); (3) the channel is represented by reach-averaged properties, thereby reducing data requirements and increasing model efficiency; and (4) the model is coupled with an unsteady-flow model, thereby accounting for frequent changes in discharge produced by variations in releases in this power-producing regulated river. Numerical models can contribute to the explanation of observed changes in sand storage, extrapolate field observations to unobserved flows, and evaluate alternative dam-operation strategies for preserving the sand resource. Model applications can address several significant management

  4. Exploring large-scale phenomena in composite membranes through an efficient implicit-solvent model

    NASA Astrophysics Data System (ADS)

    Laradji, Mohamed; Kumar, P. B. Sunil; Spangler, Eric J.

    2016-07-01

    Several microscopic and mesoscale models have been introduced in the past to investigate various phenomena in lipid membranes. Most of these models account for the solvent explicitly. Since in a typical molecular dynamics simulation the majority of particles belong to the solvent, much of the computational effort in these simulations is devoted to calculating forces between solvent particles. To overcome this problem, several implicit-solvent mesoscale models for lipid membranes have been proposed during the last few years. In the present article, we review an efficient coarse-grained implicit-solvent model we introduced earlier for studies of lipid membranes. In this model, lipid molecules are coarse-grained into short semi-flexible chains of beads with soft interactions. Through molecular dynamics simulations, the model is used to investigate the thermal, structural and elastic properties of lipid membranes. We also review here a few studies, based on this model, of the phase behavior of nanoscale liposomes, cytoskeleton-induced blebbing in lipid membranes, and nanoparticle wrapping and endocytosis by tensionless lipid membranes.

  5. A large-scale model for simulating the fate & transport of organic contaminants in river basins.

    PubMed

    Lindim, C; van Gils, J; Cousins, I T

    2016-02-01

    We present STREAM-EU (Spatially and Temporally Resolved Exposure Assessment Model for EUropean basins), a novel dynamic mass balance model for predicting the environmental fate of organic contaminants in river basins. STREAM-EU goes beyond the current state-of-the-science in that it can simulate spatially and temporally resolved contaminant concentrations in all relevant environmental media (surface water, groundwater, snow, soil and sediments) at the river basin scale. The model can currently be applied to multiple organic contaminants in any river basin in Europe, but the model framework is adaptable to any river basin in any continent. We simulate the environmental fate of perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) in the Danube River basin and compare model predictions to recent monitoring data. The model predicts PFOS and PFOA concentrations that agree well with measured concentrations for large stretches of the river. Disagreements between the model predictions and measurements in some river sections are shown to be useful indicators of unknown contamination sources to the river basin.
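
    STREAM-EU itself is not reproduced here; the hedged sketch below only illustrates the generic dynamic mass-balance idea for a single well-mixed river compartment (upstream load, local emission, advective outflow, first-order loss), with invented parameter values.

      import numpy as np

      # Hypothetical dynamic mass balance for one well-mixed river compartment:
      # dM/dt = load_in + emission - (Q/V) * M - k_loss * M
      # Parameter values are invented; a real model resolves many compartments
      # and media (water, sediment, soil, snow, groundwater).
      def simulate(days=365, dt=0.1):
          V = 1.0e8            # compartment water volume (m3)
          Q = 2.0e3 * 86400    # outflow volume per day (m3/day)
          emission = 50.0      # local emission (g/day)
          load_in = 20.0       # upstream load (g/day)
          k_loss = 0.001       # lumped degradation/sorption loss rate (1/day)
          M = 0.0              # contaminant mass in the compartment (g)
          for _ in range(int(days / dt)):
              dMdt = load_in + emission - (Q / V) * M - k_loss * M
              M += dt * dMdt   # explicit Euler step
          return M / V * 1.0e3 # concentration in mg/m3 (numerically equal to ug/L)

      print(f"concentration after one year: {simulate():.6f} ug/L")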

  6. Large-scale hydrologic and hydrodynamic modeling of the Amazon River basin

    NASA Astrophysics Data System (ADS)

    de Paiva, Rodrigo Cauduro Dias; Buarque, Diogo Costa; Collischonn, Walter; Bonnet, Marie-Paule; Frappart, Frédéric; Calmant, Stephane; Bulhões Mendes, Carlos André

    2013-03-01

    In this paper, a hydrologic/hydrodynamic modeling of the Amazon River basin is presented using the MGB-IPH model, with validation against remotely sensed observations. Moreover, the sources of model errors are investigated by means of validation and sensitivity tests, and the physical functioning of the Amazon basin is also explored. The MGB-IPH is a physically based model resolving all land hydrological processes and here using a full 1-D river hydrodynamic module with a simple floodplain storage model. River-floodplain geometry parameters were extracted from the SRTM digital elevation model, and the model was forced using satellite-derived rainfall from TRMM 3B42. Model results agree with observed in situ daily river discharges and water levels and with three complementary satellite-based products: (1) water levels derived from ENVISAT altimetry data; (2) a global data set of monthly inundation extent; and (3) monthly terrestrial water storage (TWS) anomalies derived from the Gravity Recovery and Climate Experiment (GRACE) mission. However, the model is sensitive to precipitation forcing and river-floodplain parameters. Most of the errors occur in westerly regions, possibly due to the poor quality of the TRMM 3B42 rainfall data set in these mountainous and/or poorly monitored areas. In addition, uncertainty in river-floodplain geometry causes errors in simulated water levels and inundation extent, suggesting the need for improvement of parameter estimation methods. Finally, analyses of Amazon hydrological processes demonstrate that surface waters govern most of the Amazon TWS changes (56%), followed by soil water (27%) and ground water (8%). Moreover, floodplains play a major role in stream flow routing, although backwater effects are also important to delay and attenuate flood waves.

  7. A systematic, large-scale comparison of transcription factor binding site models.

    PubMed

    Hombach, Daniela; Schwarz, Jana Marie; Robinson, Peter N; Schuelke, Markus; Seelow, Dominik

    2016-05-21

    The modelling of gene regulation is a major challenge in biomedical research. This process is dominated by transcription factors (TFs) and mutations in their binding sites (TFBSs) may cause the misregulation of genes, eventually leading to disease. The consequences of DNA variants on TF binding are modelled in silico using binding matrices, but it remains unclear whether these are capable of accurately representing in vivo binding. In this study, we present a systematic comparison of binding models for 82 human TFs from three freely available sources: JASPAR matrices, HT-SELEX-generated models and matrices derived from protein binding microarrays (PBMs). We determined their ability to detect experimentally verified "real" in vivo TFBSs derived from ENCODE ChIP-seq data. As negative controls we chose random downstream exonic sequences, which are unlikely to harbour TFBS. All models were assessed by receiver operating characteristics (ROC) analysis. While the area-under-curve was low for most of the tested models with only 47 % reaching a score of 0.7 or higher, we noticed strong differences between the various position-specific scoring matrices with JASPAR and HT-SELEX models showing higher success rates than PBM-derived models. In addition, we found that while TFBS sequences showed a higher degree of conservation than randomly chosen sequences, there was a high variability between individual TFBSs. Our results show that only few of the matrix-based models used to predict potential TFBS are able to reliably detect experimentally confirmed TFBS. We compiled our findings in a freely accessible web application called ePOSSUM ( http:/mutationtaster.charite.de/ePOSSUM/ ) which uses a Bayes classifier to assess the impact of genetic alterations on TF binding in user-defined sequences. Additionally, ePOSSUM provides information on the reliability of the prediction using our test set of experimentally confirmed binding sites.
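
    As a hedged illustration of how matrix-based binding models are scored and evaluated by ROC analysis, the snippet below scans short sequences with a toy position weight matrix and computes the area under the ROC curve from the resulting scores; the matrix and sequences are invented, and no specific JASPAR, HT-SELEX or PBM model is implied.

      import numpy as np

      # Toy example of scoring sequences with a position weight matrix (PWM) and
      # evaluating discrimination with ROC AUC. The PWM and sequences are invented.
      BASES = {"A": 0, "C": 1, "G": 2, "T": 3}
      pwm = np.log2(np.array([          # log-odds scores for a 4-bp toy motif "CACG"
          [0.1, 0.7, 0.1, 0.1],
          [0.7, 0.1, 0.1, 0.1],
          [0.1, 0.7, 0.1, 0.1],
          [0.1, 0.1, 0.7, 0.1],
      ]) / 0.25)

      def best_window_score(seq, pwm):
          L = pwm.shape[0]
          scores = [sum(pwm[i, BASES[seq[s + i]]] for i in range(L))
                    for s in range(len(seq) - L + 1)]
          return max(scores)

      def auc(pos_scores, neg_scores):
          # Mann-Whitney U formulation of the area under the ROC curve
          pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
          greater = (pos[:, None] > neg[None, :]).sum()
          ties = (pos[:, None] == neg[None, :]).sum()
          return (greater + 0.5 * ties) / (len(pos) * len(neg))

      positives = ["TTCACGTT", "ACACGAAA", "GGCACGCA"]   # contain the motif
      negatives = ["TTTTTTTT", "AGAGAGAG", "CCTTAACC"]   # background sequences
      print(auc([best_window_score(s, pwm) for s in positives],
                [best_window_score(s, pwm) for s in negatives]))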

  8. Modeling oxygen isotopes in the Pliocene: Large-scale features over the land and ocean

    NASA Astrophysics Data System (ADS)

    Tindall, Julia C.; Haywood, Alan M.

    2015-09-01

    The first isotope-enabled general circulation model (GCM) simulations of the Pliocene are used to discuss the interpretation of δ18O measurements for a warm climate. The model suggests that spatial patterns of Pliocene ocean surface δ18O (δ18Osw) were similar to those of the preindustrial period; however, Arctic and coastal regions were relatively depleted, while South Atlantic and Mediterranean regions were relatively enriched. Modeled δ18Osw anomalies are closely related to modeled salinity anomalies, which supports using δ18Osw as a paleosalinity proxy. Modeled Pliocene precipitation δ18O (δ18Op) was enriched relative to the preindustrial values (but with depletion of <2‰ over some tropical regions). While usually modest (<4‰), the enrichment can reach 25‰ over ice sheet regions. In the tropics δ18Op anomalies are related to precipitation amount anomalies, although there is usually a spatial offset between the two. This offset suggests that the location of precipitation change is more uncertain than the amplitude when interpreting δ18Op. At high latitudes δ18Op anomalies relate to temperature anomalies; however, the relationship is neither linear nor spatially coincident: a large δ18Op signal does not always translate to a large temperature signal. These results suggest that isotope modeling can lead to enhanced synergy between climate models and climate proxy data. The model can relate proxy data to climate in a physically based way even when the relationship is complex and nonlocal. The δ18O-climate relationships, identified here from a GCM, could not be determined from transfer functions or simple models.

  9. Multi-scale Modeling of the Evolution of a Large-Scale Nourishment

    NASA Astrophysics Data System (ADS)

    Luijendijk, A.; Hoonhout, B.

    2016-12-01

    Morphological predictions are often computed using a single morphological model, commonly forced with schematized boundary conditions representing the time scale of the prediction. Recent model developments are now allowing us to think and act differently. This study presents some recent developments in coastal morphological modeling focusing on flexible meshes, flexible coupling between models operating at different time scales, and a recently developed morphodynamic model for the intertidal and dry beach. This integrated modeling approach is applied to the Sand Engine mega nourishment in The Netherlands to illustrate the added value of this integrated approach in both accuracy and computational efficiency. The state-of-the-art Delft3D Flexible Mesh (FM) model is applied at the study site under moderate wave conditions. One of the advantages is that the flexibility of the mesh structure allows a better representation of the water exchange with the lagoon and corresponding morphological behavior than with the curvilinear grid used in the previous version of Delft3D. The XBeach model is applied to compute the morphodynamic response to storm events in detail, incorporating the long wave effects on bed level changes. The recently developed aeolian transport and bed change model AeoLiS is used to compute the bed changes in the intertidal and dry beach area. In order to enable flexible couplings between the three abovementioned models, a component-based environment has been developed using the BMI method. This allows a serial coupling of Delft3D FM and XBeach steered by a control module that uses a hydrodynamic time series as input. In addition, a parallel online coupling, with information exchange at each time step, will be made with the AeoLiS model that predicts the bed level changes at the intertidal and dry beach area. This study presents the first years of evolution of the Sand Engine computed with the integrated modelling approach. Detailed comparisons
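
    The Basic Model Interface (BMI) mentioned above exposes a small set of control and data functions (initialize, update, get_value, set_value, finalize). The Python sketch below mimics a serial coupling loop in that style; the components and the exchanged variable are placeholders, not the actual Delft3D FM, XBeach or AeoLiS bindings.

      # Hedged sketch of a BMI-style serial coupling loop. The class below only
      # mimics the BMI control functions; it is a placeholder, not a real binding,
      # and the exchanged variable name is invented.

      class FakeBmiComponent:
          def __init__(self, name):
              self.name, self.time, self.state = name, 0.0, {"bed_level": 0.0}

          def initialize(self, config_file):
              pass                               # a real component reads its input here

          def update(self):
              self.time += 3600.0                # advance one internal time step

          def get_value(self, var):
              return self.state[var]

          def set_value(self, var, value):
              self.state[var] = value

          def finalize(self):
              pass

      waves_tide = FakeBmiComponent("hydrodynamics")   # Delft3D FM stand-in
      storm = FakeBmiComponent("storm_impact")         # XBeach stand-in
      aeolian = FakeBmiComponent("aeolian")            # AeoLiS stand-in

      for component in (waves_tide, storm, aeolian):
          component.initialize("settings.cfg")

      storm_flag = False                               # would come from a wave time series
      for step in range(24):
          driver = storm if storm_flag else waves_tide # control module picks the driver
          driver.update()
          aeolian.set_value("bed_level", driver.get_value("bed_level"))
          aeolian.update()                             # intertidal/dry-beach changes

      for component in (waves_tide, storm, aeolian):
          component.finalize()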

  10. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface, subsurface properties and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes requires less computational effort. But this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraint. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and the time scales of hydrologic processes which are dominant in different parts of the basin are different. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated

  11. Incremental learning of Bayesian sensorimotor models: from low-level behaviours to large-scale structure of the environment

    NASA Astrophysics Data System (ADS)

    Diard, Julien; Gilet, Estelle; Simonin, Éva; Bessière, Pierre

    2010-12-01

    This paper concerns the incremental learning of hierarchies of representations of space in artificial or natural cognitive systems. We propose a mathematical formalism for defining space representations (Bayesian Maps) and modelling their interaction in hierarchies of representations (sensorimotor interaction operator). We illustrate our formalism with a robotic experiment. Starting from a model based on the proximity to obstacles, we learn a new one related to the direction of the light source. It provides new behaviours, like phototaxis and photophobia. We then combine these two maps so as to identify parts of the environment where the way the two modalities interact is recognisable. This classification is a basis for learning a higher level of abstraction map that describes the large-scale structure of the environment. In the final model, the perception-action cycle is modelled by a hierarchy of sensorimotor models of increasing time and space scales, which provide navigation strategies of increasing complexities.

  12. Methods of Model Reduction for Large-Scale Biological Systems: A Survey of Current Methods and Trends.

    PubMed

    Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J

    2017-07-01

    Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
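
    Of the families surveyed, the singular value decomposition route is the easiest to show compactly; the sketch below builds a proper-orthogonal-decomposition basis from snapshots of a toy linear system and projects the dynamics onto it (a Galerkin reduction). It is illustrative only and is not taken from the review.

      import numpy as np

      # Toy SVD/POD-style model reduction of a linear system dx/dt = A @ x.
      # Snapshots of the full model are collected, a low-rank basis is extracted
      # with the SVD, and the dynamics are projected onto that basis.
      rng = np.random.default_rng(3)
      n, r = 50, 5                               # full and reduced dimensions
      A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable-ish random system

      dt, steps = 0.01, 400
      x = rng.standard_normal(n)
      snapshots = []
      for _ in range(steps):                     # explicit Euler on the full model
          x = x + dt * (A @ x)
          snapshots.append(x.copy())
      X = np.array(snapshots).T                  # columns are snapshots

      U, s, _ = np.linalg.svd(X, full_matrices=False)
      V = U[:, :r]                               # POD basis (first r left singular vectors)
      A_r = V.T @ A @ V                          # reduced operator

      z = V.T @ snapshots[0]                     # reduced initial condition
      for _ in range(steps - 1):                 # integrate the reduced model
          z = z + dt * (A_r @ z)
      print("reduction error:", np.linalg.norm(V @ z - snapshots[-1]))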

  13. h2-norm optimal model reduction for large scale discrete dynamical MIMO systems

    NASA Astrophysics Data System (ADS)

    Bunse-Gerstner, A.; Kubalinska, D.; Vossen, G.; Wilczek, D.

    2010-01-01

    Modeling strategies often result in dynamical systems of very high dimension. It is then desirable to find systems of the same form but of lower complexity, whose input-output behavior approximates the behavior of the original system. Here we consider linear time-invariant discrete-time dynamical systems. The cornerstone of this paper is a relation between optimal model reduction in the h2-norm and (tangential) rational Hermite interpolation. First order necessary conditions for h2-optimal model reduction are presented for discrete Multiple-Input-Multiple-Output (MIMO) systems. These conditions suggest a specific choice of interpolation data and a novel algorithm aiming for an h2-optimal model reduction for MIMO systems. It is also shown that the conditions are equivalent to two known gramian-based first order necessary conditions. Numerical experiments demonstrate the approximation quality of the method.
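
    For context, the norm being minimized is, for a stable discrete-time system with transfer matrix H(z), the standard h2-norm (the paper's notation may differ):

      \|H\|_{h_2}^{2} = \frac{1}{2\pi} \int_{-\pi}^{\pi}
      \operatorname{tr}\!\left( H(e^{i\omega})\, H(e^{i\omega})^{*} \right) d\omega

    so minimizing it over reduced-order systems amounts to matching the frequency response of the full system in an average (rather than worst-case) sense.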

  14. Middle atmosphere project. A semi-spectral numerical model for the large-scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Holton, J. R.; Wehrbein, W.

    1979-01-01

    The complete model is a semispectral model in which the longitudinal dependence is represented by expansion in zonal harmonics while the latitude and height dependencies are represented by a finite difference grid. The model is based on the primitive equations in the log pressure coordinate system. The lower boundary of the model domain is set at the 100 mb level (i.e., near the tropopause) and the effects of tropospheric forcing are included in the lower boundary condition. The upper boundary is at approximately 96 km, and the latitudinal extent is either global or hemispheric. The basic differential equations and boundary conditions are outlined. The finite difference equations are described. The initial conditions are discussed and a sample calculation is presented. The FORTRAN code is given in the appendix.

  15. Topology of large-scale structure in seeded hot dark matter models

    NASA Technical Reports Server (NTRS)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

    The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.

  16. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Knox, Craig K.

    Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continues to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ~2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed to study the first-ever hydrated Nafion systems that are large enough (~2 million atoms in a ~30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems consisted of six of the most significant and relevant morphological models of Nafion to date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume

  17. Evaluation of Large-scale Quaternary Stratigraphical Modelling in SubsurfaceViewer

    NASA Astrophysics Data System (ADS)

    Petrone, Johannes; Sohlenius, Gustav; Ising, Jonas; Strömgren, Mårten

    2017-04-01

    Forsmark in Sweden is the proposed site for hosting a deep geological repository for the Swedish spent nuclear fuel. Site investigations initiated in 2003 have resulted in a wealth of cross-disciplinary data used to describe the natural system at the site. Numerical and conceptual modelling has been performed both for the deep bedrock and for the surface systems. The variations in surface geology and regolith thickness are important parameters for e.g. hydrogeological and geochemical modelling and for the overall understanding of the area. The input data used to produce the 3D-model include boreholes, excavations, well logs, refraction seismics, reflection seismics, ground-penetrating radar and electrical soundings (CVES). The stratigraphical data mentioned, in combination with a detailed DEM (Digital Elevation Model), detailed surface sediment mapping and stratigraphical rules, have been imported into the 3D-modelling software SubsurfaceViewer. Hundreds of transects have been interpreted manually along sections covering approximately 180 square km. Using the general stratigraphy of the Quaternary deposits in Forsmark, the model is based on a seven-layer principle where each layer can be given certain properties and where each layer can be divided into sub-layers. The uppermost layer represents soils that may have been influenced by surface processes, e.g. bioturbation, frost action and chemical weathering. The next layer represents peat. The peat is followed by a layer representing sand/gravel, glaciofluvial sediment or artificial fill, followed by a layer of postglacial clay and clay gyttja/gyttja clay. The two deepest layers of the model consist of glacial clay underlain by different classes of till. The bottom boundary represents the bedrock surface. Based on drillings it was concluded that the interface between the till and bedrock has a high frequency of fissures and fractures. This fractured area between the actual bedrock and the overlying till is also implemented in

  18. Study of an engine flow diverter system for a large scale ejector powered aircraft model

    NASA Technical Reports Server (NTRS)

    Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.

    1981-01-01

    Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4 scale fighter model equipped with YJ-79 engines. Model constraints were identified and cost-effective limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.

  19. A large-scale simulation model to assess karstic groundwater recharge over Europe and the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hartmann, A.; Gleeson, T.; Rosolem, R.; Pianosi, F.; Wada, Y.; Wagener, T.

    2015-06-01

    Karst develops through the dissolution of carbonate rock and is a major source of groundwater contributing up to half of the total drinking water supply in some European countries. Previous approaches to model future water availability in Europe are either too small in scale or do not incorporate karst processes, i.e. preferential flow paths. This study presents the first simulations of groundwater recharge in all karst regions in Europe with a parsimonious karst hydrology model. A novel parameter confinement strategy combines a priori information with recharge-related observations (actual evapotranspiration and soil moisture) at locations across Europe while explicitly identifying uncertainty in the model parameters. Europe's karst regions are divided into four typical karst landscapes (humid, mountain, Mediterranean and desert) by cluster analysis and recharge is simulated from 2002 to 2012 for each karst landscape. Mean annual recharge ranges from negligible in deserts to > 1 m a-1 in humid regions. The majority of recharge rates range from 20 to 50% of precipitation and are sensitive to subannual climate variability. Simulation results are consistent with independent observations of mean annual recharge and significantly better than other global hydrology models that do not consider karst processes (PCR-GLOBWB, WaterGAP). Global hydrology models systematically under-estimate karst recharge, implying that they over-estimate actual evapotranspiration and surface runoff. Karst water budgets and thus information to support management decisions regarding drinking water supply and flood risk are significantly improved by our model.
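    For illustration, a minimal daily soil-bucket sketch of recharge generation (precipitation minus evapotranspiration routed through a finite soil store); this is a generic stand-in for the parsimonious karst model described above, and the storage capacity and forcing values are assumed.

```python
# Soil store wets and dries with daily P - ET; overflow above capacity
# becomes groundwater recharge. All values are illustrative (mm).
def simulate_recharge(precip, pet, soil_capacity=50.0):
    soil, recharge = 0.0, []
    for p, e in zip(precip, pet):
        soil = max(soil + p - e, 0.0)            # update the soil store
        excess = max(soil - soil_capacity, 0.0)  # overflow becomes recharge
        soil -= excess
        recharge.append(excess)
    return recharge

print(sum(simulate_recharge([0, 20, 35, 60, 5], [2, 3, 3, 2, 4])))  # total recharge [mm]
```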

  20. Uncovering Implicit Assumptions: a Large-Scale Study on Students' Mental Models of Diffusion

    NASA Astrophysics Data System (ADS)

    Stains, Marilyne; Sevian, Hannah

    2015-12-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in the gaseous medium. A large sample of data (N = 423) from students across grade 8 (age 13) through upper-level undergraduate was subjected to a cluster analysis to determine the main mental models present. The cluster analysis resulted in a reduced data set (N = 308), and then, mental models were ascertained from robust clusters. The mental models that emerged from analysis were triangulated through interview data and characterised according to underlying implicit assumptions that guide and constrain thinking about diffusion of a solute in a gaseous medium. Impacts of students' level of preparation in science and relationships of mental models to science disciplines studied by students were examined. Implications are discussed for the value of this approach to identify typical mental models and the sets of implicit assumptions that constrain them.

  1. Hydrological improvements for nutrient and pollutant emission modeling in large scale catchments

    NASA Astrophysics Data System (ADS)

    Höllering, S.; Ihringer, J.

    2012-04-01

    Estimating emissions and loads of nutrients and pollutants into European water bodies as accurately as possible depends largely on knowledge of the spatially and temporally distributed hydrological runoff patterns. An improved hydrological water balance model for the pollutant emission model MoRE (Modeling of Regionalized Emissions) (IWG, 2011) has been introduced that can form an adequate basis to simulate discharge in a hydrologically differentiated, land-use-based way and subsequently provide the required distributed discharge components. First of all, the hydrological model had to comply with requirements in both space and time in order to calculate the water balance with sufficient precision, spatially distributed in sub-catchments at the catchment scale and with a higher temporal resolution. Aiming to reproduce the seasonal dynamics and characteristic hydrological regimes of river catchments, a daily (instead of a yearly) time increment was applied, allowing for a more process-oriented simulation of discharge dynamics, volume and therefore water balance. The enhancement of the hydrological model also became necessary to account for the hydrological functioning of catchments under scenarios of e.g. a changing climate or alterations of land use. As a deterministic, partly physically based, conceptual hydrological watershed and water balance model, the Precipitation Runoff Modeling System (PRMS) (USGS, 2009) was selected to improve the hydrological input for MoRE. In PRMS the spatial discretization is implemented with sub-catchments and so-called hydrologic response units (HRUs), which are the hydrotropic, distributed, finite modeling entities, each having a homogeneous runoff reaction to hydro-meteorological events. Spatial structures and heterogeneities in sub-catchments, e.g. urbanity, land use and soil types, were identified to derive hydrological similarities and to classify different urban and rural HRUs. In this way the

  2. Implementation of large-scale landscape evolution modelling to real high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Schroeder, S.; Babeyko, A. Y.

    2012-12-01

    We have developed a surface evolution model to be naturally integrated with 3D thermomechanical codes like SLIM-3D to study coupled tectonic-climate interaction. The resolution of the surface evolution model is independent of that of the underlying continuum box. The surface model follows the concept of the cellular automaton implemented on a regular Eulerian mesh. It incorporates an effective filling algorithm that guarantees flow direction in each cell, D8 search for flow directions, computation of discharges and bedrock incision. Additionally, the model implements hillslope erosion in the form of non-linear, slope-dependent diffusion. The model was designed to be applied not only to synthetic topographies but also to real Digital Elevation Models (DEM). In the present work we report our experience with applying the model to the 30-meter resolution ASTER GDEM of the Pamir orogen, in particular, to the segment of the Panj river. We start with calibration of the model parameters (fluvial incision and hillslope diffusion coefficients) using direct measurements of Panj incision rates and volumes of suspended sediment transport. Since the incision algorithm is independent of hillslope processes, we first adjust the incision parameters. Power-law exponents of the incision equation were evaluated from the profile curvature of the main Pamir rivers. After that, the incision coefficient was adjusted to fit the observed incision rate of 5 mm/y. Once the model results are consistent with the measured data, the calibration of hillslope processes follows. For a given critical slope, the diffusivity could be fitted to match the observed sediment discharge. Applying the surface evolution model to a real DEM reveals specific problems which do not appear when working with synthetic landscapes. One of them is the noise of the satellite-measured topography. In particular, due to the non-vertical observation perspective, the satellite may not be able to detect the bottom of the river channel, especially
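    The incision calibration described in the record above can be illustrated with the standard detachment-limited stream-power law E = K A^m S^n; the sketch below uses illustrative parameter values and is not the authors' code.

```python
import numpy as np

# Detachment-limited stream-power incision: erosion rate E = K * A**m * S**n,
# with A the upstream drainage area and S the local channel slope.
# K, m, n and the example inputs are illustrative assumptions.
def incision_rate(drainage_area_m2, slope, K=1e-5, m=0.5, n=1.0):
    """Bedrock incision rate [m/yr]."""
    return K * drainage_area_m2**m * slope**n

# Example: a cell with 10 km^2 upstream area on a 5% slope
print(incision_rate(1e7, 0.05))
```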

  3. Large-scale pharmacological profiling of 3D tumor models of cancer cells.

    PubMed

    Mathews Griner, Lesley A; Zhang, Xiaohu; Guha, Rajarshi; McKnight, Crystal; Goldlust, Ian S; Lal-Nag, Madhu; Wilson, Kelli; Michael, Sam; Titus, Steve; Shinn, Paul; Thomas, Craig J; Ferrer, Marc

    2016-12-01

    The discovery of chemotherapeutic agents for the treatment of cancer commonly uses cell proliferation assays in which cells grow as two-dimensional (2D) monolayers. Compounds identified using 2D monolayer assays often fail to advance during clinical development, most likely because these assays do not reproduce the cellular complexity of tumors and their microenvironment in vivo. The use of three-dimensional (3D) cellular systems has been explored as enabling more predictive in vitro tumor models for drug discovery. To date, small-scale screens have demonstrated that pharmacological responses tend to differ between 2D and 3D cancer cell growth models. However, the limited scope of screens using 3D models has not provided a clear delineation of the cellular pathways and processes that differentially regulate cell survival and death in the different in vitro tumor models. Here we sought to further understand the differences in pharmacological responses between cancer tumor cells grown in different conditions by profiling a large collection of 1912 chemotherapeutic agents. We compared pharmacological responses obtained from cells cultured in traditional 2D monolayer conditions with those responses obtained from cells forming spheres versus cells already in 3D spheres. The target annotation of the compound library screened enabled the identification of those key cellular pathways and processes that when modulated by drugs induced cell death in all growth conditions or selectively in the different cell growth models. In addition, we also show that many of the compounds targeting these key cellular functions can be combined to produce synergistic cytotoxic effects, which in many cases differ in the magnitude of their synergism depending on the cellular model and cell type. The results from this work provide a high-throughput screening framework to profile the responses of drugs both as single agents and in pairwise combinations in 3D sphere models of cancer cells.

  4. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse).

    PubMed

    Erguler, Kamil; Smith-Unna, Stephanie E; Waldock, Joanna; Proestos, Yiannis; Christophides, George K; Lelieveld, Jos; Parham, Paul E

    2016-01-01

    The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations.

  5. Large-scale pharmacological profiling of 3D tumor models of cancer cells

    PubMed Central

    Mathews Griner, Lesley A; Zhang, Xiaohu; Guha, Rajarshi; McKnight, Crystal; Goldlust, Ian S; Lal-Nag, Madhu; Wilson, Kelli; Michael, Sam; Titus, Steve; Shinn, Paul; Thomas, Craig J; Ferrer, Marc

    2016-01-01

    The discovery of chemotherapeutic agents for the treatment of cancer commonly uses cell proliferation assays in which cells grow as two-dimensional (2D) monolayers. Compounds identified using 2D monolayer assays often fail to advance during clinical development, most likely because these assays do not reproduce the cellular complexity of tumors and their microenvironment in vivo. The use of three-dimensional (3D) cellular systems has been explored as enabling more predictive in vitro tumor models for drug discovery. To date, small-scale screens have demonstrated that pharmacological responses tend to differ between 2D and 3D cancer cell growth models. However, the limited scope of screens using 3D models has not provided a clear delineation of the cellular pathways and processes that differentially regulate cell survival and death in the different in vitro tumor models. Here we sought to further understand the differences in pharmacological responses between cancer tumor cells grown in different conditions by profiling a large collection of 1912 chemotherapeutic agents. We compared pharmacological responses obtained from cells cultured in traditional 2D monolayer conditions with those responses obtained from cells forming spheres versus cells already in 3D spheres. The target annotation of the compound library screened enabled the identification of those key cellular pathways and processes that when modulated by drugs induced cell death in all growth conditions or selectively in the different cell growth models. In addition, we also show that many of the compounds targeting these key cellular functions can be combined to produce synergistic cytotoxic effects, which in many cases differ in the magnitude of their synergism depending on the cellular model and cell type. The results from this work provide a high-throughput screening framework to profile the responses of drugs both as single agents and in pairwise combinations in 3D sphere models of cancer cells. PMID

  6. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse)

    PubMed Central

    Erguler, Kamil; Smith-Unna, Stephanie E.; Waldock, Joanna; Proestos, Yiannis; Christophides, George K.; Lelieveld, Jos; Parham, Paul E.

    2016-01-01

    The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations. PMID:26871447

  7. Predicting clinical outcomes from large scale cancer genomic profiles with deep survival models.

    PubMed

    Yousefi, Safoora; Amrollahi, Fatemeh; Amgad, Mohamed; Dong, Chengliang; Lewis, Joshua E; Song, Congzheng; Gutman, David A; Halani, Sameer H; Vega, Jose Enrique Velazquez; Brat, Daniel J; Cooper, Lee A D

    2017-09-15

    Translating the vast data generated by genomic platforms into accurate predictions of clinical outcomes is a fundamental challenge in genomic medicine. Many prediction methods face limitations in learning from the high-dimensional profiles generated by these platforms, and rely on experts to hand-select a small number of features for training prediction models. In this paper, we demonstrate how deep learning and Bayesian optimization methods that have been remarkably successful in general high-dimensional prediction tasks can be adapted to the problem of predicting cancer outcomes. We perform an extensive comparison of Bayesian optimized deep survival models and other state of the art machine learning methods for survival analysis, and describe a framework for interpreting deep survival models using a risk backpropagation technique. Finally, we illustrate that deep survival models can successfully transfer information across diseases to improve prognostic accuracy. We provide an open-source software implementation of this framework called SurvivalNet that enables automatic training, evaluation and interpretation of deep survival models.
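    A minimal sketch of the negative Cox partial log-likelihood that deep survival models of this kind typically minimise; the risk scores below are random stand-ins for a network's output, and this is not the SurvivalNet implementation.

```python
import numpy as np

# Breslow-style negative partial log-likelihood for right-censored data.
# risk: model scores; time: survival/censoring times; event: 1 = observed death.
def neg_partial_log_likelihood(risk, time, event):
    order = np.argsort(-time)                       # sort by descending time
    risk, event = risk[order], event[order]
    log_cumsum = np.log(np.cumsum(np.exp(risk)))    # log of risk-set sums
    return -np.sum((risk - log_cumsum)[event == 1])

rng = np.random.default_rng(0)
risk = rng.normal(size=8)                           # hypothetical risk scores
time = rng.exponential(size=8)
event = rng.integers(0, 2, size=8)
print(neg_partial_log_likelihood(risk, time, event))
```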

  8. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys

    PubMed Central

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov–Malyshev–Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683
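    The classical FMP conversion of track counts into density can be written in a few lines; the numbers below are illustrative, and this sketch omits the Bayesian spatio-temporal extension described in the record above.

```python
import math

# Formozov-Malyshev-Pereleshin (FMP) estimator:
# density = (pi / 2) * crossings / (transect length * mean daily movement distance).
def fmp_density(n_crossings, transect_length_km, daily_movement_km):
    return (math.pi / 2.0) * n_crossings / (transect_length_km * daily_movement_km)

# e.g. 24 track crossings on 60 km of transect, animals moving 3 km/day
print(fmp_density(24, 60.0, 3.0))  # individuals per km^2 (illustrative values)
```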

  9. High-resolution global topographic index values for use in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Marthews, Toby; Dadson, Simon; Lehner, Bernhard; Abele, Simon; Gedney, Nicola

    2015-04-01

    Modelling land surface water flow is of critical importance for simulating land-surface fluxes, predicting runoff and water table dynamics and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL, and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels calculated from hydrologically-conditioned HydroSHEDS data using the GA2 algorithm ('GRIDATB 2'). At 15 arc-sec resolution, these layers are four times finer than the resolution of the previously best-available topographic index layers, the Compound Topographic Index of HYDRO1k (CTI). For the largest river catchments occurring on each continent we found that, in comparison with CTI our revised values were up to 20% lower in, e.g., the Amazon. We found the highest catchment means were for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as found from the CTI. For the majority of large catchments, however, the spread of our new GA2 index values is very similar to those of CTI, yet with more spatial variability apparent at fine scale. We believe these new index layers represent greatly-improved global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.
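    For reference, the topographic index itself is ln(a / tan β), with a the specific upslope contributing area and β the local slope; a minimal sketch on a synthetic grid (not HydroSHEDS data, and not the GA2 implementation) follows.

```python
import numpy as np

# Topographic index ln(a / tan(beta)) per cell, where a is the upslope
# contributing area divided by the contour (cell) width. Inputs are synthetic.
def topographic_index(upslope_area_m2, cell_width_m, slope_rad, eps=1e-6):
    specific_area = upslope_area_m2 / cell_width_m            # a
    return np.log(specific_area / (np.tan(slope_rad) + eps))  # ln(a / tan beta)

area = np.array([[1e4, 5e4], [2e5, 1e6]])                     # upslope area [m^2]
slope = np.radians(np.array([[5.0, 2.0], [1.0, 0.5]]))        # local slope
print(topographic_index(area, 450.0, slope))                  # ~15 arc-sec cell ~ 450 m
```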

  10. Quasi-three dimensional modeling of functionally-integrated elements of large-scale integrated circuits

    NASA Astrophysics Data System (ADS)

    Petrosyants, K. O.; Gurov, A. I.

    1984-08-01

    Functional integration allows a substantial reduction in the number of metallized connections and contact areas to the individual regions of a semiconductor structure, and the realization of the basic connections between the elements within the semiconductor volume. Designing a functionally integrated structure (FIS) that assures device parameters optimal for circuit engineering applications is not possible without sufficiently precise computer models. A series of models addressing the specific structural features of various FIS types was developed. A quasi-three-dimensional model is presented of the functionally integrated elements of a bipolar BIS: multilayer semiconductor structures with arbitrarily arranged diffused regions and metallized contacts, controlled by a current or voltage. The model is described by a system of elliptic partial differential equations with integral constraints. The numerical algorithm uses Newton quasilinearization in conjunction with a block over-relaxation method. The model is implemented in FORTRAN IV for a unified system of computers and is intended to solve a wide range of problems arising during BIS design, in particular the choice of optimum topology, routing and electrical operating regimes of the elements.

  11. The flow structure of pyroclastic density currents: evidence from particle models and large-scale experiments

    NASA Astrophysics Data System (ADS)

    Dellino, Pierfrancesco; Büttner, Ralf; Dioguardi, Fabio; Doronzo, Domenico Maria; La Volpe, Luigi; Mele, Daniela; Sonder, Ingo; Sulpizio, Roberto; Zimanowski, Bernd

    2010-05-01

    Pyroclastic flows are ground-hugging, hot, gas-particle flows. They represent the most hazardous events of explosive volcanism, one striking example being the famous AD 79 eruption of Vesuvius that destroyed Pompeii. Much of our knowledge on the mechanics of pyroclastic flows comes from theoretical models and numerical simulations. Valuable data are also stored in the geological record of past eruptions, i.e. the particles contained in pyroclastic deposits, but they are rarely used for quantifying the destructive potential of pyroclastic flows. In this paper, by means of experiments, we validate a model that is based on data from pyroclastic deposits. It allows the reconstruction of the current's fluid-dynamic behaviour. We show that our model results in likely values of dynamic pressure and particle volumetric concentration, and allows quantifying the hazard potential of pyroclastic flows.

  12. The effects of large-scale topography on the circulation in low-order models

    NASA Technical Reports Server (NTRS)

    O'Brien, Enda; Branscome, Lee E.

    1990-01-01

    This paper investigates the effect of topography on circulation produced by low-order quasi-geostrophic models that are capable of reproducing many basic features of midlatitude general circulation in the absence of topography. Using a simple two-level spectral model, time-mean stationary waves and low-frequency phenomena were examined for three different topographic configurations, of which two consisted of a sinusoidal mountain-valley structure, and the third was the Fourier representation of an isolated mountain peak. In the experiment with an isolated mountain, it was found that the time-mean wave in the model was highly dependent on the operation of wave-wave interactions, which had a significant impact on stationary waves through modifications in the mean zonal flow.

  13. Norway's 2011 Terror Attacks: Alleviating National Trauma With a Large-Scale Proactive Intervention Model.

    PubMed

    Kärki, Freja Ulvestad

    2015-09-01

    After the terror attacks of July 22, 2011, Norwegian health authorities piloted a new model for municipality-based psychosocial follow-up with victims. This column describes the development of a comprehensive follow-up intervention by health authorities and others that has been implemented at the municipality level across Norway. The model's principles emphasize proactivity by service providers; individually tailored help, with each victim being assigned a contact person in the residential municipality; continuity and long-term focus; effective intersectorial collaboration; and standardized screening of symptoms during the first year. Weekend reunions were also organized for the bereaved, and one-day reunions were organized for the survivors and their families at intervals over the first 18 months. Preliminary findings indicate a high level of success in model implementation. However, the overall effect of the interventions will be a subject for future evaluations.

  14. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    SciTech Connect

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-09-15

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. Support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. The optimized model consisting of 9 probes had 99% sensitivity and 97% specificity. This model
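    A minimal sketch of a support vector machine combined with wrapper-style recursive feature elimination, as a generic stand-in for the gene-selection scheme named in the record above; the synthetic data and parameter choices are assumptions, not TG-GATEs data or the authors' specific algorithm.

```python
# Select a small feature (probe) subset for a linear SVM classifier by
# recursively eliminating the least important features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=60, n_features=500, n_informative=9,
                           random_state=0)            # synthetic expression data
selector = RFE(LinearSVC(C=1.0, max_iter=10000),
               n_features_to_select=9, step=25)        # keep 9 "probes"
selector.fit(X, y)
print("selected feature indices:", selector.get_support(indices=True))
```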

  15. High-resolution global topographic index values for use in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Marthews, T. R.; Dadson, S. J.; Lehner, B.; Abele, S.; Gedney, N.

    2015-01-01

    Modelling land surface water flow is of critical importance for simulating land surface fluxes, predicting runoff and water table dynamics and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL (TOPography-based hydrological MODEL), and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels calculated from hydrologically conditioned HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales) data using the GA2 algorithm (GRIDATB 2). At 15 arcsec resolution, these layers are 4 times finer than the resolution of the previously best-available topographic index layers, the compound topographic index of HYDRO1k (CTI). For the largest river catchments occurring on each continent we found that, in comparison with CTI our revised values were up to 20% lower in, e.g. the Amazon. We found the highest catchment means were for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as found from the CTI. For the majority of large catchments, however, the spread of our new GA2 index values is very similar to those of CTI, yet with more spatial variability apparent at fine scale. We believe these new index layers represent greatly improved global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.

  16. Segmented linear modeling of CHO fed-batch culture and its application to large scale production.

    PubMed

    Ben Yahia, Bassem; Gourevitch, Boris; Malphettes, Laetitia; Heinzle, Elmar

    2017-04-01

    We describe a systematic approach to model CHO metabolism during biopharmaceutical production across a wide range of cell culture conditions. To this end, we applied the metabolic steady state concept. We analyzed and modeled the production rates of metabolites as a function of the specific growth rate. First, the total number of metabolic steady state phases and the location of the breakpoints were determined by recursive partitioning. For this, the smoothed derivative of the metabolic rates with respect to the growth rate was used, followed by hierarchical clustering of the obtained partition. We then applied a piecewise regression to the metabolic rates with the previously determined number of phases. This allowed identifying the growth rates at which the cells underwent a metabolic shift. The resulting model with piecewise linear relationships between metabolic rates and the growth rate described cellular metabolism in the fed-batch cultures well. Using the model structure and parameter values from a small-scale cell culture (2 L) training dataset, it was possible to predict metabolic rates of new fed-batch cultures just using the experimental specific growth rates. Such prediction was successful both at the laboratory scale with 2 L bioreactors and at the production scale of 2000 L. This type of modeling provides a flexible framework to set a solid foundation for metabolic flux analysis and mechanistic type of modeling. Biotechnol. Bioeng. 2017;114: 785-797. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
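    A minimal sketch of segmented (piecewise) linear regression of a metabolic rate against the specific growth rate, scanning candidate breakpoints by least squares; the synthetic data and single-breakpoint search are simplifying assumptions, whereas the paper determines the number and location of breakpoints by recursive partitioning and clustering.

```python
import numpy as np

# Fit two linear segments to rate(mu) for every candidate breakpoint and
# return the breakpoint with the lowest total squared error.
def segmented_fit(mu, rate):
    best = None
    for bp in mu[2:-2]:                              # candidate breakpoints
        sse = 0.0
        for mask in (mu <= bp, mu > bp):
            A = np.vstack([mu[mask], np.ones(mask.sum())]).T
            _, res, *_ = np.linalg.lstsq(A, rate[mask], rcond=None)
            sse += res[0] if res.size else 0.0
        if best is None or sse < best[0]:
            best = (sse, bp)
    return best                                      # (error, breakpoint growth rate)

mu = np.linspace(0.01, 0.05, 20)                     # specific growth rate [1/h]
rate = np.where(mu < 0.03, 2.0 * mu, 0.06 + 0.5 * (mu - 0.03))   # synthetic shift at 0.03
rng = np.random.default_rng(0)
print(segmented_fit(mu, rate + 0.001 * rng.normal(size=mu.size)))
```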

  17. Segmented linear modeling of CHO fed‐batch culture and its application to large scale production

    PubMed Central

    Ben Yahia, Bassem; Gourevitch, Boris; Malphettes, Laetitia

    2016-01-01

    We describe a systematic approach to model CHO metabolism during biopharmaceutical production across a wide range of cell culture conditions. To this end, we applied the metabolic steady state concept. We analyzed and modeled the production rates of metabolites as a function of the specific growth rate. First, the total number of metabolic steady state phases and the location of the breakpoints were determined by recursive partitioning. For this, the smoothed derivative of the metabolic rates with respect to the growth rate was used, followed by hierarchical clustering of the obtained partition. We then applied a piecewise regression to the metabolic rates with the previously determined number of phases. This allowed identifying the growth rates at which the cells underwent a metabolic shift. The resulting model with piecewise linear relationships between metabolic rates and the growth rate described cellular metabolism in the fed‐batch cultures well. Using the model structure and parameter values from a small‐scale cell culture (2 L) training dataset, it was possible to predict metabolic rates of new fed‐batch cultures just using the experimental specific growth rates. Such prediction was successful both at the laboratory scale with 2 L bioreactors and at the production scale of 2000 L. This type of modeling provides a flexible framework to set a solid foundation for metabolic flux analysis and mechanistic type of modeling. Biotechnol. Bioeng. 2017;114: 785–797. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:27869296

  18. Modelling and operation strategies of DLR's large scale thermocline test facility (TESIS)

    NASA Astrophysics Data System (ADS)

    Odenthal, Christian; Breidenbach, Nils; Bauer, Thomas

    2017-06-01

    In this work an overview of the TESIS:store thermocline test facility and its current construction status will be given. Based on this, the TESIS:store facility using sensible solid filler material is modelled with a fully transient model, implemented in MATLAB®. Results in terms of the impact of filler size and operation strategies will be presented. While low porosity and small particle diameters for the filler material are beneficial, the operation strategy is one key element with potential for optimization. It is shown that plant operators must weigh utilization against exergetic efficiency. Different durations of the charging and discharging periods offer further potential for optimization.

  19. Uncovering Implicit Assumptions: A Large-Scale Study on Students' Mental Models of Diffusion

    ERIC Educational Resources Information Center

    Stains, Marilyne; Sevian, Hannah

    2015-01-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in…

  20. fast_protein_cluster: parallel and optimized clustering of large-scale protein modeling data

    PubMed Central

    Hung, Ling-Hong; Samudrala, Ram

    2014-01-01

    Motivation: fast_protein_cluster is a fast, parallel and memory efficient package used to cluster 60 000 sets of protein models (with up to 550 000 models per set) generated by the Nutritious Rice for the World project. Results: fast_protein_cluster is an optimized and extensible toolkit that supports Root Mean Square Deviation after optimal superposition (RMSD) and Template Modeling score (TM-score) as metrics. RMSD calculations using a laptop CPU are 60× faster than qcprot and 3× faster than current graphics processing unit (GPU) implementations. New GPU code further increases the speed of RMSD and TM-score calculations. fast_protein_cluster provides novel k-means and hierarchical clustering methods that are up to 250× and 2000× faster, respectively, than Clusco, and identify significantly more accurate models than Spicker and Clusco. Availability and implementation: fast_protein_cluster is written in C++ using OpenMP for multi-threading support. Custom streaming Single Instruction Multiple Data (SIMD) extensions and advanced vector extension intrinsics code accelerate CPU calculations, and OpenCL kernels support AMD and Nvidia GPUs. fast_protein_cluster is available under the M.I.T. license. (http://software.compbio.washington.edu/fast_protein_cluster) Contact: lhhung@compbio.washington.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24532722
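    A minimal NumPy sketch of RMSD after optimal superposition (the Kabsch algorithm), one of the two metrics named in the record above; the random coordinates stand in for matched atoms of two protein models, and this is not the optimized SIMD/OpenCL code of fast_protein_cluster.

```python
import numpy as np

# RMSD between two matched coordinate sets after the optimal rigid-body
# superposition (Kabsch algorithm), with reflection correction.
def kabsch_rmsd(P, Q):
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)            # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])                   # avoid improper rotations
    diff = P @ (U @ D @ Vt) - Q                  # rotate P onto Q
    return np.sqrt((diff ** 2).sum() / len(P))

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))                     # "model 1" atoms
B = A + rng.normal(scale=0.1, size=(50, 3))      # perturbed "model 2"
print(kabsch_rmsd(A, B))
```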

  1. fast_protein_cluster: parallel and optimized clustering of large-scale protein modeling data.

    PubMed

    Hung, Ling-Hong; Samudrala, Ram

    2014-06-15

    fast_protein_cluster is a fast, parallel and memory efficient package used to cluster 60 000 sets of protein models (with up to 550 000 models per set) generated by the Nutritious Rice for the World project. fast_protein_cluster is an optimized and extensible toolkit that supports Root Mean Square Deviation after optimal superposition (RMSD) and Template Modeling score (TM-score) as metrics. RMSD calculations using a laptop CPU are 60× faster than qcprot and 3× faster than current graphics processing unit (GPU) implementations. New GPU code further increases the speed of RMSD and TM-score calculations. fast_protein_cluster provides novel k-means and hierarchical clustering methods that are up to 250× and 2000× faster, respectively, than Clusco, and identify significantly more accurate models than Spicker and Clusco. fast_protein_cluster is written in C++ using OpenMP for multi-threading support. Custom streaming Single Instruction Multiple Data (SIMD) extensions and advanced vector extension intrinsics code accelerate CPU calculations, and OpenCL kernels support AMD and Nvidia GPUs. fast_protein_cluster is available under the M.I.T. license. (http://software.compbio.washington.edu/fast_protein_cluster) © The Author 2014. Published by Oxford University Press.

  2. Large Scale Tissue Morphogenesis Simulation on Heterogenous Systems Based on a Flexible Biomechanical Cell Model.

    PubMed

    Jeannin-Girardon, Anne; Ballet, Pascal; Rodin, Vincent

    2015-01-01

    The complexity of biological tissue morphogenesis makes in silico simulations of such systems very interesting in order to gain a better understanding of the underlying mechanisms ruling the development of multicellular tissues. This complexity is mainly due to two elements: firstly, biological tissues comprise a large number of cells; secondly, these cells exhibit complex interactions and behaviors. To address these two issues, we propose two tools: the first one is a virtual cell model that comprises two main elements: firstly, a mechanical structure (membrane, cytoskeleton, and cortex) and, secondly, the main behaviors exhibited by biological cells, i.e., mitosis, growth, differentiation, and molecule consumption and production, as well as the physical constraints imposed by the environment. An artificial chemistry is also included in the model. This virtual cell model is coupled to an agent-based formalism. The second tool is a simulator that relies on the OpenCL framework. It allows efficient parallel simulations on heterogeneous devices such as micro-processors or graphics processors. We present two case studies validating the implementation of our model in our simulator: cellular proliferation controlled by cell signalling and limb growth in a virtual organism.

  3. Advanced kinetic plasma model implementation for new large-scale investigations

    NASA Astrophysics Data System (ADS)

    Reddell, Noah; Shumlak, Uri

    2013-10-01

    A kinetic plasma model for one or more particle species described by the Vlasov equation and coupled to fully dynamic electromagnetic forces is presented. The model is implemented as an evolving continuous PDF (probability density function) in particle phase space (position-velocity), as opposed to particle-in-cell (PIC) methods, which discretely sample the PDF. A new boundary condition for the truncated velocity-space edge, motivated by physical properties of the PDF tail, is introduced. The hyperbolic model is evolved using the discontinuous Galerkin numerical method, conserving system mass, momentum, and energy - an advantage compared to PIC. Simulations of two- to six-dimensional phase space are computationally expensive. To maximize performance and scaling to large simulations, a new framework, WARPM, has been developed for many-core (e.g. GPU) computing architectures. WARPM supports both multi-fluid and continuum kinetic plasma models as coupled hyperbolic systems with nearest-neighbor, predictable communication. Exemplary physics results and computational performance are presented.

  4. A balanced water layer concept for subglacial hydrology in large scale ice sheet models

    NASA Astrophysics Data System (ADS)

    Goeller, S.; Thoma, M.; Grosfeld, K.; Miller, H.

    2012-12-01

    There is currently no doubt about the existence of a widespread hydrological network under the Antarctic ice sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain inspired by the Gamburtsev Mountains, Antarctica. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the water flux-basal sliding coupling and reveal the predominance of the ice loss through the resulting ice streams against the stabilizing influence of less hydrologically active areas. We point out that established balance flux schemes quantify these effects only partially, as they lack the ability to store subglacial water.

  5. Uncovering Implicit Assumptions: A Large-Scale Study on Students' Mental Models of Diffusion

    ERIC Educational Resources Information Center

    Stains, Marilyne; Sevian, Hannah

    2015-01-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in…

  6. Robust classification of protein variation using structural modelling and large-scale data integration

    PubMed Central

    Baugh, Evan H.; Simmons-Edler, Riley; Müller, Christian L.; Alford, Rebecca F.; Volfovsky, Natalia; Lash, Alex E.; Bonneau, Richard

    2016-01-01

    Existing methods for interpreting protein variation focus on annotating mutation pathogenicity rather than detailed interpretation of variant deleteriousness and frequently use only sequence-based or structure-based information. We present VIPUR, a computational framework that seamlessly integrates sequence analysis and structural modelling (using the Rosetta protein modelling suite) to identify and interpret deleterious protein variants. To train VIPUR, we collected 9477 protein variants with known effects on protein function from multiple organisms and curated structural models for each variant from crystal structures and homology models. VIPUR can be applied to mutations in any organism's proteome with improved generalized accuracy (AUROC .83) and interpretability (AUPR .87) compared to other methods. We demonstrate that VIPUR's predictions of deleteriousness match the biological phenotypes in ClinVar and provide a clear ranking of prediction confidence. We use VIPUR to interpret known mutations associated with inflammation and diabetes, demonstrating the structural diversity of disrupted functional sites and improved interpretation of mutations associated with human diseases. Lastly, we demonstrate VIPUR's ability to highlight candidate variants associated with human diseases by applying VIPUR to de novo variants associated with autism spectrum disorders. PMID:26926108

  7. Use of Standard Deviations as Predictors in Models Using Large-Scale International Data Sets

    ERIC Educational Resources Information Center

    Austin, Bruce; French, Brian; Adesope, Olusola; Gotch, Chad

    2017-01-01

    Measures of variability are successfully used in predictive modeling in research areas outside of education. This study examined how standard deviations can be used to address research questions not easily addressed using traditional measures such as group means based on index variables. Student survey data were obtained from the Organisation for…

  8. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    USDA-ARS?s Scientific Manuscript database

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  9. Breach modelling by overflow with TELEMAC 2D: Comparison with large-scale experiments

    USDA-ARS?s Scientific Manuscript database

    An erosion law has been implemented in TELEMAC 2D to represent the surface erosion process to model the breach formation of a levee. We focus on homogeneous and earth fill levee to simplify this first implementation. The first part of this study reveals the ability of this method to represent simu...

  10. Formulation of Subgrid Variability and Boundary-Layer Cloud Cover in Large-Scale Models

    DTIC Science & Technology

    1999-02-28


  11. Methods for Modeling and Decomposing Treatment Effect Variation in Large-Scale Randomized Trials

    ERIC Educational Resources Information Center

    Ding, Peng; Feller, Avi; Miratrix, Luke

    2015-01-01

    Recent literature has underscored the critical role of treatment effect variation in estimating and understanding causal effects. This approach, however, is in contrast to much of the foundational research on causal inference. Linear models, for example, classically rely on constant treatment effect assumptions, or treatment effects defined by…

  12. Toward an Aspirational Learning Model Gleaned from Large-Scale Assessment

    ERIC Educational Resources Information Center

    Diket, Read M.; Xu, Lihua; Brewer, Thomas M.

    2014-01-01

    The aspirational model resulted from the authors' secondary analysis of the Mother/Child (M/C) test block from the 2008 National Assessment of Educational Progress restricted data that examined the responses of the national sample of 8th-grade students (n = 1648). This test block presented no artmaking task and consisted of the same 13 questions…

  13. Can simple models predict large-scale surface ocean isoprene concentrations?

    NASA Astrophysics Data System (ADS)

    Booge, Dennis; Marandino, Christa A.; Schlundt, Cathleen; Palmer, Paul I.; Schlundt, Michael; Atlas, Elliot L.; Bracher, Astrid; Saltzman, Eric S.; Wallace, Douglas W. R.

    2016-09-01

    We use isoprene and related field measurements from three different ocean data sets together with remotely sensed satellite data to model global marine isoprene emissions. We show that using monthly mean satellite-derived chl a concentrations to parameterize isoprene with a constant chl a normalized isoprene production rate underpredicts the measured oceanic isoprene concentration by a mean factor of 19 ± 12. Improving the model by using phytoplankton functional type dependent production values and by decreasing the bacterial degradation rate of isoprene in the water column results in only a slight underestimation (factor 1.7 ± 1.2). We calculate global isoprene emissions of 0.21 Tg C for 2014 using this improved model, which is twice the value calculated using the original model. Nonetheless, the sea-to-air fluxes have to be at least 1 order of magnitude higher to account for measured atmospheric isoprene mixing ratios. These findings suggest that there is at least one missing oceanic source of isoprene and, possibly, other unknown factors in the ocean or atmosphere influencing the atmospheric values. The discrepancy between calculated fluxes and atmospheric observations must be reconciled in order to fully understand the importance of marine-derived isoprene as a precursor to remote marine boundary layer particle formation.
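    A minimal steady-state sketch of a chlorophyll-based isoprene estimate in the spirit of the model discussed above (production proportional to chl a balanced by a first-order loss term); the rate constants are illustrative assumptions, not the published values.

```python
# Steady state: isoprene [pmol L^-1] = production * chl_a / loss, with
# production in pmol L^-1 d^-1 per (mg chl m^-3) and loss in d^-1.
# Both rate constants below are placeholders for illustration only.
def isoprene_conc(chl_a_mg_m3, production=25.0, loss=0.1):
    return production * chl_a_mg_m3 / loss

print(isoprene_conc(0.2))  # e.g. oligotrophic surface water
```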

  14. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist

    Treesearch

    Quresh S. Latif; Victoria A. Saab; Jonathan G. Dudley; Jeff P. Hollenbeck

    2013-01-01

    To conserve habitat for disturbance specialist species, ecologists must identify where individuals will likely settle in newly disturbed areas. Habitat suitability models can predict which sites at new disturbances will most likely attract specialists. Without validation data from newly disturbed areas, however, the best approach for maximizing predictive accuracy can...

  15. Contribution Of The SWOT Mission To Large-Scale Hydrological Modeling Using Data Assimilation

    NASA Astrophysics Data System (ADS)

    Emery, C. M.; Biancamaria, S.; Boone, A. A.; Ricci, S. M.; Rochoux, M. C.; Garambois, P. A.; Paris, A.; Calmant, S.

    2016-12-01

    The purpose of this work is to improve the estimation of water fluxes over continental surfaces at interannual and interseasonal scales (from a few years to a decade). More specifically, it studies the contribution of the upcoming SWOT satellite mission to improving hydrological modelling at the global scale, using the land surface model ISBA-TRIP. This model corresponds to the continental component of the climate model of the CNRM (the French meteorological research centre). This study explores the potential of satellite data to correct either input parameters of the river routing scheme TRIP or its state variables. To do so, a data assimilation platform (using an Ensemble Kalman Filter, EnKF) has been implemented to assimilate SWOT virtual observations as well as discharges estimated from real nadir altimetry data. A series of twin experiments is used to test and validate the parameter estimation module of the platform. SWOT virtual observations of water heights along SWOT tracks (with a 10 cm white-noise model error) are assimilated to correct the river routing model parameters. To begin with, we chose to focus exclusively on the river Manning coefficient, with the possibility of easily extending to other parameters such as river widths. First results show that the platform is able to recover the "true" Manning distribution by assimilating SWOT-like water heights. The error on the coefficients goes from 35 % before assimilation to 9 % after four SWOT orbit repeat periods of 21 days. In the state estimation mode, daily assimilation cycles are performed to correct the initial state of TRIP river water storage by assimilating ENVISAT-based discharge. Those observations are derived from ENVISAT water elevation measurements, using rating curves from the MGB-IPH hydrological model (calibrated over the Amazon using in situ gauge discharges). Using this kind of observation allows going beyond idealized twin experiments and also tests the contribution of a remotely sensed discharge product, which could
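    A minimal sketch of a stochastic EnKF analysis step for parameter estimation of the kind described in the record above; the linear mapping from Manning coefficients to water heights, the ensemble size and the prior range are synthetic placeholders, not the ISBA-TRIP configuration.

```python
import numpy as np

# One EnKF analysis step: update an ensemble of parameters from ensemble
# covariances between parameters and simulated observations.
def enkf_update(params, sim_obs, obs, obs_err, rng):
    n_ens = params.shape[0]
    Pa = params - params.mean(axis=0)
    Ya = sim_obs - sim_obs.mean(axis=0)
    cov_py = Pa.T @ Ya / (n_ens - 1)                              # param-obs covariance
    cov_yy = Ya.T @ Ya / (n_ens - 1) + obs_err**2 * np.eye(obs.size)
    gain = cov_py @ np.linalg.inv(cov_yy)                         # Kalman gain
    perturbed_obs = obs + obs_err * rng.normal(size=(n_ens, obs.size))
    return params + (perturbed_obs - sim_obs) @ gain.T

rng = np.random.default_rng(1)
manning = rng.uniform(0.02, 0.06, size=(64, 1))                   # prior ensemble
heights = 2.0 + 30.0 * manning + 0.05 * rng.normal(size=(64, 1))  # toy height "model"
posterior = enkf_update(manning, heights, np.array([3.2]), 0.10, rng)
print(posterior.mean())                                           # updated Manning estimate
```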

  16. Lake Peipsi's eutrophication issue: new insights into large scale water quality modeling

    NASA Astrophysics Data System (ADS)

    Fink, Gabriel; Flörke, Martina

    2017-04-01

    The large and shallow European Lake Peipsi was polluted with phosphorus loadings from different point and diffuse sources over decades. The lake's trophic state changed from mesotrophic to eutrophic and hypertrophic. In the 1990s phosphorus pollution dropped significantly. However, more than twenty years later the lake is still eutrophic (L. Peipsi s.s.) and hypertrophic (L. Pihkva). It has been determined that internal loadings from a large nutrient pool in the lake's sediments play an important role in the current phosphorus balance. For a continued and comprehensive understanding, detailed and integrated water quality data are needed. This is necessary to assess the current state as well as the lake's more recent nutrient history. However, in-situ data are scarce and difficult to access. To overcome this data-sparse situation the global integrated modeling framework WaterGAP3 was applied (i) to test the applicability of a global scale (5 arc minutes resolution) water quality model in a local scale eutrophication study, and (ii) to provide a detailed local analysis of the eutrophication issue for Lake Peipsi. In this setting WaterGAP3 provides a detailed description of phosphorus sources, loadings and concentrations. Furthermore, the newly implemented two-box eutrophication module provides a long-term description of total phosphorus (TP) concentrations in lakes, the consequent potential for toxic algae blooms, and the TP balance components such as the sediment storage. The WaterGAP3 global results, such as river discharge, TP loads from different sectors, and TP concentration in the lake and in the catchment's river system, cover the period 1990-2010. Our model results indicate that the agricultural sector (diffuse source) is the primary source of TP pollution in the Lake Peipsi catchment (45%), followed by background sources (diffuse sources) such as atmospheric deposition and weathering (33%), and domestic point sources (19%). The model results confirm the reported

  17. Constructing Model of Relationship among Behaviors and Injuries to Products Based on Large Scale Text Data on Injuries

    NASA Astrophysics Data System (ADS)

    Nomori, Koji; Kitamura, Koji; Motomura, Yoichi; Nishida, Yoshifumi; Yamanaka, Tatsuhiro; Komatsubara, Akinori

    In Japan, childhood injury prevention is an urgent issue. Safety measures based on knowledge created from injury data are essential for preventing childhood injuries. The injury prevention approach based on product modification is especially important. Risk assessment is one of the most fundamental methods for designing safe products. Conventional risk assessment has been carried out subjectively because product makers have little data on injuries. This paper deals with evidence-based risk assessment, in which artificial intelligence technologies are strongly needed. It describes a new method for foreseeing the usage of products, which is the first step of evidence-based risk assessment, and presents a retrieval system for injury data. The system enables a product designer to foresee how children use a product and which types of injuries occur due to the product in daily environments. The developed system consists of large-scale injury data, text mining technology and probabilistic modeling technology. Large-scale text data on childhood injuries were collected from medical institutions by an injury surveillance system. Types of behavior toward a product were derived from the injury text data using text mining. The relationships among products, types of behavior, types of injuries and characteristics of children were modeled with a Bayesian network. The fundamental functions of the developed system and examples of new findings obtained with it are reported in this paper.

  18. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    PubMed

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular yet simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with the Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method incurs greater computational time in both cases due to the hybrid optimization process.
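
    For readers unfamiliar with the formalism, the sketch below shows one common discrete-time Recurrent Neural Network model of gene regulation together with the error function that a metaheuristic such as the hybrid Cuckoo Search-Flower Pollination Algorithm would minimize. The equations, parameter names, and synthetic data are assumptions for illustration, not the authors' implementation.

    import numpy as np

    def rnn_step(x, W, beta, tau, dt=1.0):
        """One time step of the RNN gene-regulation model for all genes."""
        activation = 1.0 / (1.0 + np.exp(-(W @ x + beta)))  # sigmoidal regulation
        return x + dt * (activation - x) / tau               # relax toward input

    def simulate(x0, W, beta, tau, n_steps):
        """Roll the model forward from the first observed expression vector."""
        traj = [x0]
        for _ in range(n_steps - 1):
            traj.append(rnn_step(traj[-1], W, beta, tau))
        return np.array(traj)

    def fitness(params, observed):
        """Mean squared error an optimizer (e.g. CS-FPA) would minimize over
        the regulatory weights W, biases beta, and time constants tau."""
        W, beta, tau = params
        simulated = simulate(observed[0], W, beta, tau, len(observed))
        return float(np.mean((simulated - observed) ** 2))

    # Tiny usage example on synthetic 3-gene time-series data.
    rng = np.random.default_rng(0)
    observed = rng.random((20, 3))
    params = (rng.normal(size=(3, 3)), np.zeros(3), np.ones(3))
    print(fitness(params, observed))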

  19. Splitting failure in side walls of a large-scale underground cavern group: a numerical modelling and a field study.

    PubMed

    Wang, Zhishen; Li, Yong; Zhu, Weishen; Xue, Yiguo; Yu, Song

    2016-01-01

    Vertical splitting cracks often appear in the side walls of large-scale underground caverns during excavation owing to the brittle characteristics of the surrounding rock mass, especially under conditions of high in situ stress and great overburden depth. This phenomenon greatly affects the overall safety and stability of the underground caverns. In this paper, a transverse isotropic constitutive model and a splitting failure criterion are proposed and implemented in FLAC3D through secondary development to numerically simulate the overall stability of the underground caverns during excavation at the Dagangshan hydropower station in Sichuan province, China. Meanwhile, an in situ monitoring study on the displacement of key points of the underground caverns has also been carried out, and the monitoring results are compared with the numerical results. From the comparative analysis, it can be concluded that the depths of the splitting relaxation area obtained by numerical simulation are almost consistent with the in situ monitoring values, as are the trends of the displacement curves. This shows that the transverse isotropic constitutive model, combined with the splitting failure criterion, is appropriate for investigating splitting failure in the side walls of large-scale underground caverns, and it provides helpful guidance for predicting the depth of the splitting relaxation area in the surrounding rock mass.
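
    For illustration only, the sketch below shows the kind of element-wise check a splitting failure criterion performs and how a splitting relaxation depth could be read off a monitoring section. The specific criterion (a critical extensional-strain threshold here), the threshold value, the data, and the treatment of transverse isotropy are assumptions; the paper's FLAC3D secondary development is not reproduced.

    from dataclasses import dataclass

    # Hedged sketch: flag a zone as "split" when its minor principal strain
    # (tension negative) exceeds an assumed critical extensional strain, then
    # report the deepest flagged zone behind the cavern side wall. Criterion
    # form, threshold, and data are illustrative assumptions.

    @dataclass
    class ZoneState:
        eps_3: float            # minor principal strain (extension negative)
        depth_from_wall: float  # m, distance of the zone behind the side wall

    def splitting_relaxation_depth(zones, critical_extension=-8e-4):
        """Largest wall-normal depth at which splitting is predicted."""
        failed = [z.depth_from_wall for z in zones
                  if z.eps_3 <= critical_extension]
        return max(failed) if failed else 0.0

    # Usage with made-up data: extensional strain decays away from the wall.
    zones = [ZoneState(eps_3=-1.5e-3 * (1.0 - d / 12.0), depth_from_wall=float(d))
             for d in range(13)]
    print(splitting_relaxation_depth(zones))  # predicted relaxation depth, metres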

  20. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm

    PubMed Central

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K.

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular yet simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is implemented with the Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method incurs greater computational time in both cases due to the hybrid optimization process. PMID:26989410