Science.gov

Sample records for realistic large-scale model

  1. Efficient Large-Scale Coating Microstructure Formation Using Realistic CFD Models

    NASA Astrophysics Data System (ADS)

    Wiederkehr, Thomas; Müller, Heinrich

    2015-02-01

    For the understanding of physical effects during the formation of thermally sprayed coating layers and the deduction of the macroscopic properties of a coating, microstructure modeling and simulation techniques play an important role. In this contribution, a coupled simulation framework consisting of a detailed, CFD-based single-splat simulation and a large-scale coating build-up simulation is presented that is capable of computing large-scale, three-dimensional, porous microstructures by sequential drop impingement of more than 10,000 individual particles on multicore workstation hardware. Due to the geometry-based coupling of the two simulations, the deformation, cooling, and solidification of every particle is sensitive to the surface region it hits, so pores develop naturally in the model. The single-splat simulation employs the highly parallel Lattice-Boltzmann method, which is well suited for GPU acceleration. In order to save splat calculations, the coating simulation includes a database-driven approach that re-uses already computed splats for similar underlying surface shapes at the randomly chosen impact sites. For a fast database search, three different methods for efficient pre-selection of candidates are described and compared against each other.
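
    A minimal sketch of the database-reuse idea in Python (the descriptor, tolerance, and helper names are illustrative assumptions, not the paper's three pre-selection methods): reduce the local surface patch at an impact site to a small feature vector and query a KD-tree for an already computed splat.

      # Hypothetical candidate pre-selection for splat re-use; a sketch,
      # not the paper's implementation.
      import numpy as np
      from scipy.spatial import cKDTree

      def patch_descriptor(height_patch):
          # Reduce a local height-field patch to an assumed descriptor:
          # (roughness, mean slope, curvature proxy).
          gy, gx = np.gradient(height_patch)
          curv = np.abs(np.gradient(gx)[1] + np.gradient(gy)[0]).mean()
          return np.array([height_patch.std(), np.hypot(gx, gy).mean(), curv])

      class SplatDatabase:
          def __init__(self, tolerance=0.05):
              self.keys, self.splats, self.tol = [], [], tolerance
              self.tree = None

          def lookup(self, patch):
              # Return a stored splat whose descriptor is close enough, else None.
              if self.tree is None:
                  return None
              dist, idx = self.tree.query(patch_descriptor(patch))
              return self.splats[idx] if dist < self.tol else None

          def insert(self, patch, splat):
              self.keys.append(patch_descriptor(patch))
              self.splats.append(splat)
              self.tree = cKDTree(np.vstack(self.keys))  # rebuild index

      # Usage: splat = db.lookup(local_patch), and only if that returns None
      # run the expensive LBM single-splat simulation and db.insert(...).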

  2. The composite neuron: a realistic one-compartment Purkinje cell model suitable for large-scale neuronal network simulations.

    PubMed

    Coop, A D; Reeke, G N

    2001-01-01

    We present a simple method for the realistic description of neurons that is well suited to the development of large-scale neuronal network models where the interactions within and between neural circuits are the object of study rather than the details of dendritic signal propagation in individual cells. Referred to as the composite approach, it combines in a one-compartment model elements of both the leaky integrator cell and the conductance-based formalism of Hodgkin and Huxley (1952). Composite models treat the cell membrane as an equivalent circuit that contains ligand-gated synaptic, voltage-gated, and voltage- and concentration-dependent conductances. The time dependences of these various conductances are assumed to correlate with their spatial locations in the real cell. Thus, when viewed from the soma, ligand-gated synaptic and other dendritically located conductances can be modeled as either single alpha or double exponential functions of time, whereas, with the exception of discharge-related conductances, somatic and proximal dendritic conductances can be well approximated by simple current-voltage relationships. As an example of the composite approach to neuronal modeling we describe a composite model of a cerebellar Purkinje neuron.
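
    A minimal sketch of a composite-style one-compartment cell, with illustrative parameters rather than those of the published Purkinje model: a leaky integrator with threshold-and-reset discharge plus an alpha-function synaptic conductance of the kind the abstract assigns to dendritically located inputs.

      # Composite-style point neuron: leaky integrator + alpha synapse.
      # All parameter values are illustrative assumptions.
      import numpy as np

      dt, T = 0.1, 200.0                                 # ms
      t = np.arange(0.0, T, dt)
      C, gL, EL = 1.0, 0.1, -70.0                        # nF, uS, mV
      Vth, Vreset = -55.0, -65.0                         # threshold / reset (mV)
      gmax, tau, Esyn = 0.05, 5.0, 0.0                   # alpha-synapse parameters
      t_spike_in = 50.0                                  # presynaptic spike (ms)

      def g_alpha(tt, t0):
          # g = gmax * s * exp(1 - s), s = (t - t0)/tau, zero before t0
          s = max(tt - t0, 0.0) / tau
          return gmax * s * np.exp(1.0 - s)

      V = np.empty_like(t); V[0] = EL
      spikes = []
      for k in range(1, len(t)):
          Isyn = g_alpha(t[k], t_spike_in) * (Esyn - V[k-1])
          V[k] = V[k-1] + dt * (-gL * (V[k-1] - EL) + Isyn) / C
          if V[k] >= Vth:                                # discharge: reset
              spikes.append(t[k]); V[k] = Vreset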

  3. A realistic large-scale model of the cerebellum granular layer predicts circuit spatio-temporal filtering properties.

    PubMed

    Solinas, Sergio; Nieus, Thierry; D'Angelo, Egidio

    2010-01-01

    The way the cerebellar granular layer transforms incoming mossy fiber signals into new spike patterns to be relayed to Purkinje cells is not yet clear. Here, a realistic computational model of the granular layer was developed and used to address four main functional hypotheses: center-surround organization, time-windowing, high-pass filtering in response to spike bursts and coherent oscillations in response to diffuse random activity. The model network was activated using patterns inspired by those recorded in vivo. Burst stimulation of a small mossy fiber bundle resulted in granule cell bursts delimited in time (time windowing) and space (center-surround) by network inhibition. This burst-burst transmission showed marked frequency-dependence configuring a high-pass filter with cut-off frequency around 100 Hz. The contrast between center and surround properties was regulated by the excitatory-inhibitory balance. The stronger excitation made the center more responsive to 10-50 Hz input frequencies and enhanced the granule cell output (with spikes occurring earlier and with higher frequency and number) compared to the surround. Finally, over a certain level of mossy fiber background activity, the circuit generated coherent oscillations in the theta-frequency band. All these processes were fine-tuned by NMDA and GABA-A receptor activation and neurotransmitter vesicle cycling in the cerebellar glomeruli. This model shows that available knowledge on cellular mechanisms is sufficient to unify the main functional hypotheses on the cerebellum granular layer and suggests that this network can behave as an adaptable spatio-temporal filter coordinated by theta-frequency oscillations.

  4. Modeling of the cross-beam energy transfer with realistic inertial-confinement-fusion beams in a large-scale hydrocode.

    PubMed

    Colaïtis, A; Duchateau, G; Ribeyre, X; Tikhonchuk, V

    2015-01-01

    A method for modeling realistic laser beams smoothed by kinoform phase plates is presented. The ray-based paraxial complex geometrical optics (PCGO) model with Gaussian thick rays allows one to create intensity variations, or pseudospeckles, that reproduce the beam envelope, contrast, and high-intensity statistics predicted by paraxial laser propagation codes. A steady-state cross-beam energy-transfer (CBET) model is implemented in a large-scale radiative hydrocode based on the PCGO model. It is used in conjunction with the realistic beam modeling technique to study the effects of CBET between coplanar laser beams on the target implosion. The pseudospeckle pattern imposed by PCGO produces modulations in the irradiation field and the shell implosion pressure. Cross-beam energy transfer between beams at 20° and 40° significantly degrades the irradiation symmetry by amplifying low-frequency modes and reducing the laser-capsule coupling efficiency, ultimately leading to large modulations of the shell areal density and lower convergence ratios. These results highlight the role of laser-plasma interaction and its influence on the implosion dynamics.

  5. Modeling the Internet's large-scale topology

    PubMed Central

    Yook, Soon-Hyung; Jeong, Hawoong; Barabási, Albert-László

    2002-01-01

    Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes forms a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet. PMID:12368484
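
    A toy generator in this spirit, assuming a uniform (rather than fractal) node layout for brevity: each new node attaches with probability proportional to degree divided by Euclidean distance, i.e. preferential attachment competing with linear distance dependence.

      # Toy degree/distance attachment model; fractal node placement omitted.
      import numpy as np

      rng = np.random.default_rng(0)
      N, m = 2000, 2                          # nodes; links added per new node
      pos = rng.random((N, 2))                # uniform layout (an assumption)
      deg = np.zeros(N); deg[:2] = 1.0
      edges = [(0, 1)]
      for i in range(2, N):
          d = np.linalg.norm(pos[:i] - pos[i], axis=1) + 1e-9
          w = deg[:i] / d                     # k / d attachment kernel
          targets = rng.choice(i, size=min(m, i), replace=False, p=w / w.sum())
          for j in targets:
              edges.append((i, int(j))); deg[i] += 1.0; deg[j] += 1.0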

  6. SU(3)-guided Realistic Nucleon-nucleon Interaction for Large-scale Calculations

    NASA Astrophysics Data System (ADS)

    Sargsyan, Grigor; Launey, Kristina; Baker, Robert; Dytrych, Tomas; Draayer, Jerry

    2017-01-01

    We examine nucleon-nucleon (NN) realistic interactions, such as JISP16 and N3LO, based on their SU(3) decomposition and identify components of the interactions that are sufficient to describe the structure of low-lying states in nuclei. We observe that many of the interaction components, when expressed as SU(3) tensors, become negligible. Paring the interaction down to its physically relevant terms improves the efficacy of large-scale calculations from first principles (ab initio). The work compares spectral properties for low-lying states in 12C calculated by means of the selected interaction to the results obtained when the full interaction is used and confirms the validity of the method. Supported by the U.S. NSF (OCI-0904874, ACI-1516338) and the U.S. DOE (DE-SC0005248), and benefited from computing resources provided by Blue Waters and Louisiana State University's Center for Computation & Technology.

  7. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    aerosol species up to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas... impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the... bacteria in large-scale dust storms is expected to significantly impact warm ice cloud formation, human health, and ecosystems globally. In Niemi et al

  8. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2007-09-30

    to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large... in FY08. NAAPS forecasts of CONUS dust storms and long-range dust transport to CONUS were further evaluated in collaboration with CSU. These... visibility. The regional model (COAMPS/Aerosol) became operational during OIF. The global model Navy Aerosol Analysis and Prediction System (NAAPS

  9. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision and to allow for a connection to various types of observational data, geophysical, geodetical and geological, we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian fem model, with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires a high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented Additive Schwarz-type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which yields a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10⁵ degrees of freedom convergence became too slow to solve the system within an acceptable amount of wall time (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10⁷ degrees of freedom for up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented Algebraic Multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
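
    A small SciPy sketch of the solver configuration described above, on a stand-in sparse system rather than the authors' FEM matrices: GMRES with an incomplete-LU preconditioner.

      # ILU-preconditioned GMRES; a sketch of the setup, not the FEM code.
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import spilu, LinearOperator, gmres

      n = 100_000                              # small by the paper's standards
      A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
      b = np.ones(n)

      ilu = spilu(A, fill_factor=10)           # more fill-in, better preconditioner
      M = LinearOperator((n, n), matvec=ilu.solve)
      x, info = gmres(A, b, M=M, restart=50)
      print("converged" if info == 0 else f"stopped, info={info}")

    As the abstract reports, such local preconditioners lose effectiveness as the system grows, which is what motivates the switch to a multifrontal direct solver (MUMPS) and, beyond that, algebraic multigrid.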

  10. Modeling Human Behavior at a Large Scale

    DTIC Science & Technology

    2012-01-01

    features (day of week and holiday), our models can handle an arbitrary number of additional features, such as season, predicted weather, social and... allergies, that people discuss on Twitter. In a follow-up work, (Paul and Dredze, 2011b) begin to consider the geographical patterns in the prevalence... preventative care, focusing specifically on suicide. Twitter has also been used to monitor the seasonal variation in affect around the globe (Golder and Macy

  11. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2010-09-30

    advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large deserts of the world... dust source regions in NAAPS. The DSD has been crucial for high-resolution dust forecasting in SW Asia using COAMPS (Walker et al., 2009). Fig. 2: Four-panel product used to compare multiple model forecasts of visibility in SW Asia dust storms. On the web the product is

  12. Adaptive Texture Synthesis for Large Scale City Modeling

    NASA Astrophysics Data System (ADS)

    Despine, G.; Colleu, T.

    2015-02-01

    Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian-level navigation. One solution to this problem is to use high-resolution terrestrial photos, but that requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows physical information and semantic attributes to be attached and selection requests to be executed. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.
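
    A minimal sketch of the "principal colours" step for façades, assuming a k-means palette extraction (the paper's actual description pipeline is richer):

      # Dominant facade colours via k-means; illustrative only.
      import numpy as np
      from sklearn.cluster import KMeans

      def principal_colours(rgb_image, k=3):
          # rgb_image: (H, W, 3) array. Returns a k x 3 palette,
          # most frequent colour first.
          pixels = rgb_image.reshape(-1, 3).astype(float)
          km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pixels)
          order = np.argsort(-np.bincount(km.labels_))
          return km.cluster_centers_[order]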

  13. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
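
    For comparison, a commonly used frequentist regularizer that also combats overfitting in large covariance estimation; this Ledoit-Wolf shrinkage baseline is a related point of reference, not the authors' Bayesian hierarchical model.

      # Shrinkage covariance estimate for n << p data; a baseline, not
      # the paper's method.
      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(0)
      X = rng.standard_normal((30, 500))          # n=30 samples, p=500 variables
      lw = LedoitWolf().fit(X)
      print(lw.shrinkage_, lw.covariance_.shape)  # shrinkage weight, (500, 500)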

  14. Optimization of Nanoparticle-Based SERS Substrates through Large-Scale Realistic Simulations.

    PubMed

    Solís, Diego M; Taboada, José M; Obelleiro, Fernando; Liz-Marzán, Luis M; García de Abajo, F Javier

    2017-02-15

    Surface-enhanced Raman scattering (SERS) has become a widely used spectroscopic technique for chemical identification, providing unbeaten sensitivity down to the single-molecule level. The amplification of the optical near field produced by collective electron excitations (plasmons) in nanostructured metal surfaces gives rise to a dramatic increase by many orders of magnitude in the Raman scattering intensities from neighboring molecules. This effect strongly depends on the detailed geometry and composition of the plasmon-supporting metallic structures. However, the search for optimized SERS substrates has largely relied on empirical data, due in part to the complexity of the structures, whose simulation becomes prohibitively demanding. In this work, we use state-of-the-art electromagnetic computation techniques to produce predictive simulations for a wide range of nanoparticle-based SERS substrates, including realistic configurations consisting of random arrangements of hundreds of nanoparticles with various morphologies. This allows us to derive rules of thumb for the influence of particle anisotropy and substrate coverage on the obtained SERS enhancement and optimum spectral ranges of operation. Our results provide a solid background to understand and design optimized SERS substrates.

  16. Large-Scale, Full-Wave Scattering Phenomenology Characterization of Realistic Trees: Preliminary Results

    DTIC Science & Technology

    2012-09-01

    Fig. 2: Sassafras tree model. Fig. 3: Eastern cottonwood (Populus deltoides) tree model. After the mesh has been properly processed

  17. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
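
    The contrast at the heart of the abstract, written out (notation assumed): ecological diffusion places the motility inside the Laplacian, and its homogenization yields a large-scale coefficient given by the harmonic average of the small-scale motility.

      % Ecological vs. Fickian diffusion for motility \mu(x):
      u_t = \nabla^2\!\left[\mu(x)\,u\right]            % ecological
      u_t = \nabla\cdot\!\left[\mu(x)\,\nabla u\right]  % Fickian
      % A two-scale expansion for rapidly varying \mu gives, at leading order,
      c_t = \bar{\mu}\,\nabla^2 c, \qquad
      \bar{\mu} = \left\langle \mu^{-1} \right\rangle^{-1}, \qquad
      u \approx \frac{\bar{\mu}}{\mu(x)}\,c,
      % so the large-scale coefficient is the harmonic mean of \mu, and the
      % population density concentrates where motility is low.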

  18. A model of plasma heating by large-scale flow

    NASA Astrophysics Data System (ADS)

    Pongkitiwanichakul, P.; Cattaneo, F.; Boldyrev, S.; Mason, J.; Perez, J. C.

    2015-12-01

    In this work, we study the process of energy dissipation triggered by a slow large-scale motion of a magnetized conducting fluid. Our consideration is motivated by the problem of heating the solar corona, which is believed to be governed by fast reconnection events set off by the slow motion of magnetic field lines anchored in the photospheric plasma. To elucidate the physics governing the disruption of the imposed laminar motion and the energy transfer to small scales, we propose a simplified model where the large-scale motion of magnetic field lines is prescribed not at the footpoints but rather imposed volumetrically. As a result, the problem can be treated numerically with an efficient, highly accurate spectral method, allowing us to use a resolution and statistical ensemble exceeding those of the previous work. We find that, even though the large-scale deformations are slow, they eventually lead to reconnection events that drive a turbulent state at smaller scales. The small-scale turbulence displays many of the universal features of field-guided magnetohydrodynamic turbulence like a well-developed inertial range spectrum. Based on these observations, we construct a phenomenological model that gives the scalings of the amplitude of the fluctuations and the energy-dissipation rate as functions of the input parameters. We find good agreement between the numerical results and the predictions of the model.

  19. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  20. Extending SME to Handle Large-Scale Cognitive Modeling.

    PubMed

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2016-06-20

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before.
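
    A minimal sketch of technique (a), greedy merging, with a placeholder consistency test standing in for SME's structural constraints: one O(n log n) sort of the candidate kernel matches followed by a linear absorption pass, consistent with the O(n² log n) bound quoted above when repeated across kernels.

      # Greedy construction of one interpretation; 'consistent' is a
      # hypothetical stand-in for SME's one-to-one and connectivity tests.
      def greedy_interpretation(kernels, consistent):
          # kernels: list of (score, match) pairs.
          mapping, total = [], 0.0
          for score, match in sorted(kernels, key=lambda km: -km[0]):
              if consistent(match, mapping):
                  mapping.append(match)
                  total += score
          return mapping, total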

  1. Challenges of Modeling Flood Risk at Large Scales

    NASA Astrophysics Data System (ADS)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    algorithm propagates the flows for each simulated event. The model incorporates a digital terrain model (DTM) at 10 m horizontal resolution, which is used to extract flood plain cross-sections such that a one-dimensional hydraulic model can be used to estimate the extent and elevation of flooding. In doing so, the effect of flood defenses in mitigating floods is accounted for. Finally, a suite of vulnerability relationships has been developed to estimate flood losses for a portfolio of properties that are exposed to flood hazard. Historical experience indicates that for recent floods in Great Britain more than 50% of insurance claims occur outside the flood plain, and these are primarily a result of excess surface flow, hillside flooding, and flooding due to inadequate drainage. A sub-component of the model addresses this issue by considering several parameters that best explain the variability of claims off the flood plain. The challenges of modeling such a complex phenomenon at a large scale largely dictate the choice of modeling approaches that need to be adopted for each of these model components. While detailed numerically-based physical models exist and have been used for conducting flood hazard studies, they are generally restricted to small geographic regions. In a probabilistic risk estimation framework like our current model, a blend of deterministic and statistical techniques has to be employed such that each model component is independent, physically sound and able to maintain the statistical properties of observed historical data. This is particularly important because of the highly non-linear behavior of the flooding process. With respect to vulnerability modeling, both on and off the flood plain, the challenges include the appropriate scaling of a damage relationship when applied to a portfolio of properties. This arises from the fact that the estimated hazard parameter used for damage assessment, namely maximum flood depth, has considerable uncertainty. The

  2. Performance modeling and analysis of consumer classes in large scale systems

    NASA Astrophysics Data System (ADS)

    Al-Shukri, Sh.; Lenin, R. B.; Ramaswamy, S.; Anand, A.; Narasimhan, V. L.; Abraham, J.; Varadan, Vijay

    2009-03-01

    Peer-to-Peer (P2P) networks have been used efficiently as building blocks for overlay networks in large-scale distributed network applications with Internet Protocol (IP) based bottom-layer networks. With large-scale Wireless Sensor Networks (WSNs) becoming increasingly realistic, it is important to build overlay networks with WSNs in the bottom layer. A suitable mathematical (stochastic) model for such an overlay network over WSNs is a queueing network with multi-class customers. In this paper, we discuss how these mathematical network models can be simulated using the object-oriented simulation package OMNeT++. We discuss the Graphical User Interface (GUI) which was developed to accept the input parameter files and execute the simulation. We compare the simulation results with analytical formulas available in the literature for these mathematical models.
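
    A pure-Python sketch of one such multi-class node, assuming two Poisson arrival classes sharing a first-come-first-served exponential server; because FCFS ignores class here, the simulated mean sojourn time can be checked against the analytic M/M/1 value W = 1/(mu - lambda_total), mirroring the paper's simulation-versus-formula comparison.

      # Two-class FCFS single-server queue; rates are illustrative.
      import random

      random.seed(1)
      lam = {"sensor": 0.3, "control": 0.2}     # per-class arrival rates
      mu, T = 1.0, 200_000.0                    # service rate, time horizon

      arrivals = []                             # (arrival_time, class)
      for cls, rate in lam.items():
          t = 0.0
          while t < T:
              t += random.expovariate(rate)
              arrivals.append((t, cls))
      arrivals.sort()

      free_at, sojourn = 0.0, []
      for t, cls in arrivals:
          start = max(t, free_at)               # wait for the server
          free_at = start + random.expovariate(mu)
          sojourn.append(free_at - t)
      print(sum(sojourn) / len(sojourn),        # simulated mean sojourn
            1.0 / (mu - sum(lam.values())))     # analytic M/M/1 value (2.0)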

  3. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  4. Investigation of flow fields within large scale hypersonic inlet models

    NASA Technical Reports Server (NTRS)

    Gnos, A. V.; Watson, E. C.; Seebaugh, W. R.; Sanator, R. J.; Decarlo, J. P.

    1973-01-01

    Analytical and experimental investigations were conducted to determine the internal flow characteristics in model passages representative of hypersonic inlets for use at Mach numbers to about 12. The passages were large enough to permit measurements to be made in both the core flow and boundary layers. The analytical techniques for designing the internal contours and predicting the internal flow-field development accounted for coupling between the boundary layers and inviscid flow fields by means of a displacement-thickness correction. Three large-scale inlet models, each having a different internal compression ratio, were designed to provide high internal performance with an approximately uniform static-pressure distribution at the throat station. The models were tested in the Ames 3.5-Foot Hypersonic Wind Tunnel at a nominal free-stream Mach number of 7.4 and a unit free-stream Reynolds number of 8.86 × 10⁶ per meter.

  5. Modelling large-scale halo bias using the bispectrum

    NASA Astrophysics Data System (ADS)

    Pollack, Jennifer E.; Smith, Robert E.; Porciani, Cristiano

    2012-03-01

    We study the relation between the density distribution of tracers for large-scale structure and the underlying matter distribution - commonly termed bias - in the Λ cold dark matter framework. In particular, we examine the validity of the local model of biasing at quadratic order in the matter density. This model is characterized by parameters b₁ and b₂. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales. We find that, whilst the fits are reasonably good, the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no smoothing scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo and halo-mass power spectra and from these construct estimates of the effective large-scale bias as a guide for b₁. We measure the configuration dependence of the halo bispectra B_hhh and reduced bispectra Q_hhh for very large-scale k-space triangles. From these data, we constrain b₁ and b₂, taking into account the full bispectrum covariance matrix. Using the lowest-order perturbation theory, we find that for B_hhh the best-fitting parameters are in reasonable agreement with one another as the triangle scale is varied, although the fits become poor as smaller scales are included. The same is true for Q_hhh. The best-fitting values were found to depend on the discreteness correction. This led us to consider halo-mass cross-bispectra. The results from these statistics supported our earlier findings. We then developed a test to explore whether the inconsistency in the recovered bias parameters could be attributed to missing higher-order corrections in the models. We prove that low-order expansions are not sufficiently accurate to model the data, even on scales k₁ ≈ 0.04 h Mpc⁻¹. If robust inferences concerning bias are to be drawn
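
    For reference, the quadratic local bias model and its standard tree-level consequences for the statistics used above (conventional expressions, with notation assumed):

      % Local quadratic bias:
      \delta_h = b_1\,\delta + \frac{b_2}{2}\left(\delta^2 - \langle\delta^2\rangle\right)
      % Tree-level halo bispectrum for a triangle (k_1, k_2, k_3):
      B_{hhh} = b_1^3\,B_m(k_1,k_2,k_3)
              + b_1^2 b_2\left[P_m(k_1)P_m(k_2) + \mathrm{cyc.}\right]
      % Reduced bispectrum:
      Q_{hhh} = \frac{Q_m}{b_1} + \frac{b_2}{b_1^2}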

  6. Modeling Failure Propagation in Large-Scale Engineering Networks

    NASA Astrophysics Data System (ADS)

    Schläpfer, Markus; Shapiro, Jonathan L.

    The simultaneous unavailability of several technical components within large-scale engineering systems can lead to high stress, rendering them prone to cascading events. In order to gain qualitative insights into the failure propagation mechanisms resulting from independent outages, we adopt a minimalistic model representing the components and their interdependencies by an undirected, unweighted network. The failure dynamics are modeled by an anticipated accelerated “wearout” process being dependent on the initial degree of a node and on the number of failed nearest neighbors. The results of the stochastic simulations imply that the influence of the network topology on the speed of the cascade highly depends on how the number of failed nearest neighbors shortens the life expectancy of a node. As a formal description of the decaying networks we propose a continuous-time mean field approximation, estimating the average failure rate of the nearest neighbors of a node based on the degree-degree distribution.
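
    A minimal simulation sketch of this wearout cascade, with an assumed rate law (the precise dependence on initial degree and failed neighbours is a modeling choice):

      # Gillespie-style wearout cascade on an undirected, unweighted graph.
      # The rate constants are illustrative assumptions.
      import random
      import networkx as nx

      random.seed(0)
      G = nx.gnm_random_graph(1000, 3000, seed=0)
      k0 = dict(G.degree())                          # initial degrees
      failed = set(random.sample(list(G.nodes), 5))  # independent outages
      alive = set(G.nodes) - failed

      t, history = 0.0, []
      while alive:
          # Failure rate grows with initial degree and failed neighbours.
          rates = {i: 1e-3 * k0[i] * (1 + 5 * sum(j in failed for j in G[i]))
                   for i in alive}
          R = sum(rates.values())
          if R == 0.0:                               # only isolated nodes left
              break
          t += random.expovariate(R)                 # time to next failure
          r, acc = random.uniform(0.0, R), 0.0
          for i, ri in rates.items():                # pick a node ~ its rate
              acc += ri
              if acc >= r:
                  alive.discard(i); failed.add(i); history.append((t, i))
                  break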

  7. Research on large-scale wind farm modeling

    NASA Astrophysics Data System (ADS)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault. However, we must first establish an effective WTG model. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed building process of the model. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in the power system, online parameter identification of the wind farm model based on measured output characteristics of the wind farm becomes possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  8. Surrogate population models for large-scale neural simulations.

    PubMed

    Tripp, Bryan P

    2015-06-01

    Because different parts of the brain have rich interconnections, it is not possible to model small parts realistically in isolation. However, it is also impractical to simulate large neural systems in detail. This article outlines a new approach to multiscale modeling of neural systems that involves constructing efficient surrogate models of populations. Given a population of neuron models with correlated activity and with specific, nonrandom connections, a surrogate model is constructed in order to approximate the aggregate outputs of the population. The surrogate model requires less computation than the neural model, but it has a clear and specific relationship with the neural model. For example, approximate spike rasters for specific neurons can be derived from a simulation of the surrogate model. This article deals specifically with neural engineering framework (NEF) circuits of leaky-integrate-and-fire point neurons. Weighted sums of spikes are modeled by interpolating over latent variables in the population activity, and linear filters operate on gaussian random variables to approximate spike-related fluctuations. It is found that the surrogate models can often closely approximate network behavior with orders-of-magnitude reduction in computational demands, although there are certain systematic differences between the spiking and surrogate models. Since individual spikes are not modeled, some simulations can be performed with much longer step sizes (e.g., 20 ms). Possible extensions to non-NEF networks and to more complex neuron models are discussed.
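
    A toy illustration of the surrogate idea, with assumed tuning and filter shapes: the decoded population output is an interpolation over a latent variable plus low-pass-filtered Gaussian noise standing in for spike-related fluctuations.

      # Surrogate for a decoded population output; shapes are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)
      xs = np.linspace(-1.0, 1.0, 21)
      mean_out = np.tanh(2.0 * xs)          # stand-in tabulated mean response

      def surrogate(x_t, dt=0.001, tau=0.02, sigma=0.05):
          # x_t: latent input over time; returns mean response + filtered noise.
          mean = np.interp(x_t, xs, mean_out)
          a = np.exp(-dt / tau)             # one-pole low-pass filter
          noise = np.zeros_like(x_t)
          for k in range(1, len(x_t)):
              noise[k] = (a * noise[k-1]
                          + sigma * np.sqrt(1.0 - a * a) * rng.standard_normal())
          return mean + noise

      out = surrogate(np.sin(np.linspace(0.0, 2.0 * np.pi, 1000)))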

  9. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelets modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and in answering a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
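
    A minimal PyWavelets sketch of the compress-then-query idea, as an assumed stand-in for AQSIM's pipeline: keep only the largest wavelet coefficients and answer approximate range queries from the reconstruction.

      # Threshold a wavelet decomposition, then query the approximation.
      import numpy as np
      import pywt

      x = np.random.default_rng(0).standard_normal(4096).cumsum()  # mock field
      coeffs = pywt.wavedec(x, "db4", level=6)
      arr, slices = pywt.coeffs_to_array(coeffs)
      keep = np.abs(arr) >= np.quantile(np.abs(arr), 0.95)   # top 5% only
      x_hat = pywt.waverec(
          pywt.array_to_coeffs(arr * keep, slices, output_format="wavedec"),
          "db4")

      def range_mean(lo, hi):            # approximate ad hoc query
          return x_hat[lo:hi].mean()

      print(range_mean(100, 600), x[100:600].mean())  # approx vs. exact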

  10. Numerically modelling the large scale coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magnetofrictional model, which is simpler and computationally more economical. We have developed a magnetofrictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.

  11. Dynamic route choice model of large-scale traffic network

    SciTech Connect

    Boyce, D.W.; Lee, D.H.; Janson, B.N.; Berka, S.

    1997-08-01

    Application and extensions of a dynamic network equilibrium model to the Advanced Driver and Vehicle Advisory Navigation Concept (ADVANCE) Network are described in this paper. ADVANCE is a dynamic route guidance field test designed for 800 km² in the northwestern suburbs of Chicago. The dynamic route choice model employed in this paper is solved efficiently by a modified version of Janson's DYMOD algorithm. Realistic traffic engineering-based link delay functions, instead of the simplistic Bureau of Public Roads (BPR) function, are used to estimate link travel times and intersection delays for most types of links and intersections. Further, an expanded intersection representation is utilized, resulting in a network of nearly 23,000 links and 10,000 nodes. Time-dependent link flows, travel times, speeds and queue spillbacks are generated for the ADVANCE Network. The model was solved on a CONVEX-C3880. Convergence and computational results are presented and analyzed.
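
    For reference, the simplistic BPR delay function that the realistic link-delay functions replace (standard form, with the customary alpha = 0.15 and beta = 4):

      % BPR volume-delay function:
      t(v) = t_0\left[1 + \alpha\left(\frac{v}{c}\right)^{\beta}\right]
      % t_0: free-flow travel time, v: link flow, c: link capacity.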

  12. A first large-scale flood inundation forecasting model

    SciTech Connect

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km². ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km² and at model grid resolutions up to several km². However, initial model test runs in forecast mode

  13. Symmetry-guided large-scale shell-model theory

    NASA Astrophysics Data System (ADS)

    Launey, Kristina D.; Dytrych, Tomas; Draayer, Jerry P.

    2016-07-01

    In this review, we present a symmetry-guided strategy that utilizes exact as well as partial symmetries for enabling a deeper understanding of and advancing ab initio studies for determining the microscopic structure of atomic nuclei. These symmetries expose physically relevant degrees of freedom that, for large-scale calculations with QCD-inspired interactions, allow the model space size to be reduced through a very structured selection of the basis states to physically relevant subspaces. This can guide explorations of simple patterns in nuclei and how they emerge from first principles, as well as extensions of the theory beyond current limitations toward heavier nuclei and larger model spaces. This is illustrated for the ab initio symmetry-adapted no-core shell model (SA-NCSM) and two significant underlying symmetries, the symplectic Sp(3,R) group and its deformation-related SU(3) subgroup. We review the broad scope of nuclei where these symmetries have been found to play a key role: from the light p-shell systems, such as 6Li, 8B, 8Be, 12C, and 16O, and sd-shell nuclei exemplified by 20Ne, based on first-principle explorations; through the Hoyle state in 12C and enhanced collectivity in intermediate-mass nuclei, within a no-core shell-model perspective; up to strongly deformed species of the rare-earth and actinide regions, as investigated in earlier studies. A complementary picture, driven by symmetries dual to Sp(3,R), is also discussed. We briefly review symmetry-guided techniques that prove useful in various nuclear-theory models, such as the Elliott model, ab initio SA-NCSM, symplectic model, pseudo-SU(3) and pseudo-symplectic models, ab initio hyperspherical harmonics method, ab initio lattice effective field theory, exact pairing-plus-shell model approaches, and cluster models, including the resonating-group method. Important implications of these approaches that have deepened our understanding of emergent phenomena in nuclei, such as enhanced

  14. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large-scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end-user requirements from the discovery process. Our work contrasts existing research, which applies wavelets to range querying, change detection, and clustering problems, by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.

  15. Noise transmission characteristics of a large scale composite fuselage model

    NASA Technical Reports Server (NTRS)

    Beyer, Todd B.; Silcox, Richard J.

    1990-01-01

    Results from an experimental test undertaken to study the basic noise transmission characteristics of a realistic, built-up composite fuselage model are presented. The floor-equipped stiffened composite cylinder was exposed to a number of different exterior noise source configurations in a large anechoic chamber. These exterior source configurations included two point sources located in the same plane on opposite sides of the cylinder, a single point source and a propeller simulator. The results indicate that the interior source field is affected strongly by exterior noise source phasing. Sidewall treatment is seen to reduce the overall interior sound pressure levels and dampen dominant acoustic resonances so that other acoustic modes can affect interior noise distribution.

  16. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using
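
    The Cournot first-order conditions that such mixed LCP formulations encode, stated in generic complementarity form (notation assumed; this is the textbook statement, not the dissertation's full network model):

      % For each firm i with output q_i, cost c_i, inverse demand p(Q):
      0 \le q_i \;\perp\; c_i'(q_i) - p(Q) - q_i\,p'(Q) \ge 0,
      \qquad Q = \sum_i q_i
      % Either firm i produces and marginal cost equals marginal revenue,
      % or it produces nothing and marginal profit is nonpositive.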

  17. Modeling emergent large-scale structures of barchan dune fields

    NASA Astrophysics Data System (ADS)

    Worman, S. L.; Murray, A. B.; Littlewood, R.; Andreotti, B.; Claudin, P.

    2013-10-01

    In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work and from field observations: (1) Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; (2) when dunes become sufficiently large, small dunes are born on their downwind sides ('calving'); and (3) when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first-order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
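
    A one-dimensional toy implementation of the three interaction rules, with illustrative coefficients (the published model is richer and calibrated against theory and observation):

      # Discrete dunes: flux exchange, calving, merging. Coefficients
      # are assumptions for illustration.
      import random

      random.seed(2)
      dunes = [{"x": random.uniform(0.0, 100.0), "v": random.uniform(1.0, 4.0)}
               for _ in range(60)]             # position x, volume v

      for step in range(2000):
          newborn = []
          for d in dunes:
              d["x"] += 0.5 / d["v"]           # small dunes migrate faster
              leak = 0.01 * d["v"]             # rule 1: flux leaks downwind...
              d["v"] -= leak
              down = min((e for e in dunes if e["x"] > d["x"]),
                         key=lambda e: e["x"], default=None)
              if down is not None:
                  down["v"] += leak            # ...and is captured upstream-side
              if d["v"] > 8.0:                 # rule 2: calving
                  d["v"] -= 1.0
                  newborn.append({"x": d["x"] + 1.0, "v": 1.0})
          dunes.extend(newborn)
          merged = []                          # rule 3: merge on collision
          for d in sorted(dunes, key=lambda e: e["x"]):
              if merged and d["x"] - merged[-1]["x"] < 0.2:
                  merged[-1]["v"] += d["v"]
              else:
                  merged.append(d)
          dunes = [d for d in merged if d["v"] > 0.05]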

  18. Modeling emergent large-scale structures of barchan dune fields

    NASA Astrophysics Data System (ADS)

    Worman, S. L.; Murray, A.; Littlewood, R. C.; Andreotti, B.; Claudin, P.

    2013-12-01

    In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work, and from field observations: Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; when dunes become sufficiently large, small dunes are born on their downwind sides ('calving'); and when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.

  19. Functional models for large-scale gene regulation networks: realism and fiction.

    PubMed

    Lagomarsino, Marco Cosentino; Bassetti, Bruno; Castellani, Gastone; Remondini, Daniel

    2009-04-01

    High-throughput experiments are shedding light on the topology of large regulatory networks and at the same time their functional states, namely the states of activation of the nodes (for example transcript or protein levels) in different conditions, times, environments. We now possess a certain amount of information about these two levels of description, stored in libraries, databases and ontologies. A current challenge is to bridge the gap between topology and function, i.e. developing quantitative models aimed at characterizing the expression patterns of large sets of genes. However, approaches that work well for small networks become impossible to master at large scales, mainly because parameters proliferate. In this review we discuss the state of the art of large-scale functional network models, addressing the issue of what can be considered as "realistic" and what the main limitations may be. We also show some directions for future work, trying to set the goals that future models should try to achieve. Finally, we will emphasize the possible benefits in the understanding of biological mechanisms underlying complex multifactorial diseases, and in the development of novel strategies for the description and the treatment of such pathologies.

  20. A Large Scale, High Resolution Agent-Based Insurgency Model

    DTIC Science & Technology

    2013-09-30

    for understanding and analyzing human behavior in a civil violence paradigm. This model employed two types of agents: an agent that can become... cognitions and behaviors. Unlike previous agent-based models of civil violence, this work includes the use of a hidden Markov process for simulating... these models can portray real insurgent environments. Keywords: simulation · agent based model · insurgency · civil violence · graphics processing

  1. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  2. A robust and quick method to validate large scale flood inundation modelling with SAR remote sensing

    NASA Astrophysics Data System (ADS)

    Schumann, G. J.; Neal, J. C.; Bates, P. D.

    2011-12-01

    With flood frequency likely to increase as a result of altered precipitation patterns triggered by climate change, there is a growing demand for more data and, at the same time, improved flood inundation modeling. The aim is to develop more reliable flood forecasting systems over large scales that account for errors and inconsistencies in observations, modeling, and output. Over the last few decades, there have been major advances in the fields of remote sensing, particularly microwave remote sensing, and flood inundation modeling. At the same time both research communities are attempting to roll out their products on a continental to global scale. In a first attempt to harmonize both research efforts on a very large scale, a two-dimensional flood model has been built for the Niger Inland Delta basin in northwest Africa on a 700 km reach of the Niger River, an area similar in size to the UK. This scale demands a different approach to traditional 2D model structuring, and we have implemented a simplified version of the shallow water equations as developed in [1], complementing this formulation with a sub-grid structure for simulating flows in a channel much smaller than the actual grid resolution of the model. This integration makes it possible to model flood flows in two dimensions at efficient computational speeds without losing channel resolution when moving to coarse model grids. Using gaged daily flows, the model was applied to simulate the wetting and drying of the Inland Delta floodplain for 7 years from 2002 to 2008, taking less than 30 minutes to simulate 365 days at 1 km resolution. In these rather data-poor regions of the world and at this type of scale, verification of flood modeling is realistically only feasible with wide swath or global mode remotely sensed imagery. Validation of the Niger model was carried out using sequential global mode SAR images over the period 2006/7. This scale not only requires different types of models and
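
    The simplified shallow water formulation of [1] is not reproduced in the record; for orientation, widely used "inertial" simplifications of this general kind update the unit-width discharge q explicitly from the water surface slope and a Manning friction term, schematically

        q^{t+\Delta t} = \frac{q^{t} - g\,h^{t}\,\Delta t\;\partial(h+z)/\partial x}
                              {1 + g\,h^{t}\,\Delta t\;n^{2}\,|q^{t}|\,/\,(h^{t})^{10/3}},

    where h is the flow depth, z the bed elevation, and n Manning's roughness coefficient; a sub-grid channel can then carry this update at a width smaller than the grid cell.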

  3. Investigation of models for large scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1982-01-01

    Long-range numerical prediction and climate simulation experiments with various global atmospheric general circulation models are reported. A chronological listing of the titles of all publications and technical reports already distributed is presented together with an account of the most recent research. Several reports on a series of perpetual January climate simulations with the GISS coarse mesh climate model are listed. A set of perpetual July climate simulations with the same model is presented and the results are described.

  4. Oscillations and Synchrony in Large-scale Cortical Network Models

    DTIC Science & Technology

    2008-06-17

    Intrinsic neuronal and circuit properties control the responses of large ensembles of neurons by creating spatiotemporal patterns of ...map-based models) to simulate the intrinsic dynamics of biological neurons. These phenomenological models were designed to capture the main response...function of parameters that affect synaptic interactions and intrinsic states of the neurons. Keywords

  5. Coordinated reset stimulation in a large-scale model of the STN-GPe circuit

    PubMed Central

    Ebert, Martin; Hauptmann, Christian; Tass, Peter A.

    2014-01-01

    Synchronization of populations of neurons is a hallmark of several brain diseases. Coordinated reset (CR) stimulation is a model-based stimulation technique which specifically counteracts abnormal synchrony by desynchronization. Electrical CR stimulation, e.g., for the treatment of Parkinson's disease (PD), is administered via depth electrodes. In order to get a deeper understanding of this technique, we extended the top-down approach of previous studies and constructed a large-scale computational model of the respective brain areas. Furthermore, we took into account the spatial anatomical properties of the simulated brain structures and incorporated a detailed numerical representation of 2 · 10^4 simulated neurons. We simulated the subthalamic nucleus (STN) and the globus pallidus externus (GPe). Connections within the STN were governed by spike-timing dependent plasticity (STDP). In this way, we modeled the physiological and pathological activity of the considered brain structures. In particular, we investigated how plasticity could be exploited and how the model could be shifted from strongly synchronized (pathological) activity to strongly desynchronized (healthy) activity of the neuronal populations via CR stimulation of the STN neurons. Furthermore, we investigated the impact of specific stimulation parameters, especially the electrode position, on the stimulation outcome. Our model provides a step forward toward a biophysically realistic model of the brain areas relevant to the emergence of pathological neuronal activity in PD. Furthermore, our model constitutes a test bench for the optimization of both stimulation parameters and novel electrode geometries for efficient CR stimulation. PMID:25505882
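
    The abstract does not give the plasticity rule explicitly; a standard pair-based STDP kernel of the kind commonly used in such models (the amplitudes A and time constants τ below are generic placeholders, not the authors' values) is

        \Delta w(\Delta t) =
          \begin{cases}
            +A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \\
            -A_{-}\, e^{+\Delta t/\tau_{-}}, & \Delta t < 0
          \end{cases}
        \qquad \Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}},

    so that causally ordered pre-before-post spike pairs strengthen a synapse and the reverse ordering weakens it.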

  6. Large-scale measurement and modeling of backbone Internet traffic

    NASA Astrophysics Data System (ADS)

    Roughan, Matthew; Gottlieb, Joel

    2002-07-01

    There is a brewing controversy in the traffic modeling community concerning how to model backbone traffic. The fundamental work on self-similarity in data traffic appears to be contradicted by recent findings that suggest that backbone traffic is smooth. The traffic analysis work to date has focused on high-quality but limited-scope packet trace measurements; this limits its applicability to high-speed backbone traffic. This paper uses more than one year's worth of SNMP traffic data covering an entire Tier 1 ISP backbone to address the question of how backbone network traffic should be modeled. Although the limitations of SNMP measurements do not permit us to comment on the fine timescale behavior of the traffic, careful analysis of the data suggests that irrespective of the variation at fine timescales, we can construct a simple traffic model that captures key features of the observed traffic. Furthermore, the model's parameters are measurable using existing network infrastructure, making this model practical in a present-day operational network. In addition to its practicality, the model verifies basic statistical multiplexing results, and thus sheds light on how smooth backbone traffic really is.

  7. Multilevel method for modeling large-scale networks.

    SciTech Connect

    Safro, I. M.

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address questions related to the understanding of network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data when they have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  8. Large-scale spherical fixed bed reactors: Modeling and optimization

    SciTech Connect

    Hartig, F.; Keil, F.J.

    1993-03-01

    Iterative dynamic programming (IDP) according to Luus was used for the optimization of the methanol production in a cascade of spherical reactors. The system of three spherical reactors was compared to an externally cooled tubular reactor and a quench reactor. The reactors were modeled by the pseudohomogeneous and heterogeneous approach. The effectiveness factors of the heterogeneous model were calculated by the dusty gas model. The IDP method was compared with sequential quadratic programming (SQP) and the Box complex method. The optimized distributions of catalyst volume with the pseudohomogeneous and heterogeneous model led to different results. The IDP method finds the global optimum with high probability. A combination of IDP and SQP provides a reliable optimization procedure that needs minimum computing time.
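
    For readers unfamiliar with Luus-type IDP, the sketch below shows its core loop (stage-wise random search with region contraction) on a toy objective in Python; the stand-in simulate function, stage count, and contraction factor are illustrative assumptions, not the reactor model of the paper.

        import random

        def simulate(controls):
            # toy stand-in for the reactor cascade objective: rewards controls near 0.6
            return -sum((u - 0.6) ** 2 for u in controls)

        def idp(n_stages=3, candidates=20, passes=30, contraction=0.9):
            best = [0.5] * n_stages          # initial control policy
            radius = 0.5                     # initial search region half-width
            for _ in range(passes):
                for stage in range(n_stages):
                    trial_best, trial_val = best[stage], simulate(best)
                    for _ in range(candidates):
                        u = best[stage] + random.uniform(-radius, radius)
                        cand = best[:stage] + [u] + best[stage + 1:]
                        val = simulate(cand)
                        if val > trial_val:
                            trial_best, trial_val = u, val
                    best[stage] = trial_best
                radius *= contraction        # contract the search region each pass
            return best

        print(idp())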

  9. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language, which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
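
    The essence of a null-event scheme of this general kind is that randomly selected components may simply fail to react within a time step, in which case the step is "wasted" but time still advances. The Python sketch below illustrates the idea on a toy unimolecular system; the rule encoding and rate handling here are simplifying assumptions, not BioNetGen/DYNSTOC semantics.

        import random

        def null_event_run(counts, rules, dt, t_end, p_max=1.0):
            """counts: dict species -> copy number;
            rules: list of (reactant, product, rate) tuples."""
            t = 0.0
            while t < t_end:
                t += dt
                species = random.choices(list(counts), weights=counts.values())[0]
                applicable = [r for r in rules if r[0] == species and counts[species] > 0]
                if not applicable:
                    continue                          # null event: nothing fires this step
                reactant, product, rate = random.choice(applicable)
                if random.random() < rate * dt / p_max:
                    counts[reactant] -= 1             # fire the selected rule
                    counts[product] += 1
            return counts

        print(null_event_run({"A": 100, "B": 0}, [("A", "B", 0.5)], dt=0.01, t_end=10.0))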

  10. Modeling and simulation of large scale stirred tank

    NASA Astrophysics Data System (ADS)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants that had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington; West Valley in New York; and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham Plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the

  11. Modelling large scale human activity in San Francisco

    NASA Astrophysics Data System (ADS)

    Gonzalez, Marta

    2010-03-01

    Our cities today are composed of diverse groups of people with a wide variety of schedules, activities, and travel needs. This represents a big challenge for modeling travel behaviors in urban environments; those models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means to obtain knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, have intrinsic limitations in covering large numbers of individuals, and suffer some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size, containing information on the frequency of visits to different locations. Additionally we use a complementary data set given by smart subway fare cards, offering information about the exact time each passenger enters or exits a subway station, together with the station's coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns of each visited location in further detail. Integrating the two data sets, we provide a dynamical model of human travel that incorporates the different aspects observed empirically.
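
    Two of the ingredients mentioned above, agents placed proportionally to population density and visit frequencies concentrated on a few locations, can be sketched in a few lines of Python. The density values, the exploration probability, and the preferential-return rule are hypothetical illustrations, not the model fitted in the talk.

        import random

        density = {"downtown": 0.5, "mission": 0.3, "sunset": 0.2}  # toy densities

        def make_agent():
            home = random.choices(list(density), weights=density.values())[0]
            return {"home": home, "visits": {home: 1}}

        def move(agent, p_new=0.2):
            if random.random() < p_new:                 # occasionally explore a new place
                loc = random.choice(list(density))
            else:                                       # otherwise return preferentially
                loc = random.choices(list(agent["visits"]),
                                     weights=agent["visits"].values())[0]
            agent["visits"][loc] = agent["visits"].get(loc, 0) + 1

        agents = [make_agent() for _ in range(1000)]
        for agent in agents:
            for _ in range(50):
                move(agent)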

  12. Large-Scale Modeling of Wordform Learning and Representation

    ERIC Educational Resources Information Center

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn…

  13. Parameterization of Fire Injection Height in Large Scale Transport Model

    NASA Astrophysics Data System (ADS)

    Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.

    2012-12-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al. (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here the Freitas model is modified; in particular we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e., avoiding large fire clusters where individual plumes might interact), (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e., to minimize the effect of the time period needed for the plume to

  14. Parameterization of Fire Injection Height in Large Scale Transport Model

    NASA Astrophysics Data System (ADS)

    Paugam, r.; Wooster, m.; Freitas, s.; Gonzi, s.; Palmer, p.

    2012-04-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al. (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modelled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here the Freitas model is modified. Major modifications are implemented in its initialisation module: (i) CHF and the Active Fire area are directly forced from FRP data derived from a modified version of the Dozier algorithm applied to the MOD12 product, and (ii) a new module for the buoyancy flux calculation is implemented in place of the original module based on the Morton, Taylor and Turner equation. Furthermore the dynamical core of the plume model is also modified with a new entrainment scheme inspired by the latest results from shallow convection parameterization. Optimization and validation of this new version of the Freitas PRM are based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set is (i) built up to keep only fires where plume height and FRP can be easily linked (i.e., avoiding large fire clusters where individual plumes might interact) and (ii) split per fire land cover type to tune the constant of the buoyancy flux module and the entrainment scheme to different fire regimes. Results show that the new PRM is

  15. GIS for large-scale watershed observational data model

    NASA Astrophysics Data System (ADS)

    Patino-Gomez, Carlos

    Because integrated management of a river basin requires the development of models that are used for many purposes, e.g., to assess risks and possible mitigation of droughts and floods, manage water rights, assess water quality, and simply to understand the hydrology of the basin, the development of a relational database from which models can access the various data needed to describe the systems being modeled is fundamental. In order for this concept to be useful and widely applicable, however, it must have a standard design. The recently developed ArcHydro data model facilitates the organization of data according to the "basin" principle and allows access to hydrologic information by models. The development of a basin-scale relational database for the Rio Grande/Bravo basin implemented in a Geographic Information System is one of the contributions of this research. This geodatabase represents the first major attempt to establish a more complete understanding of the basin as a whole, including spatial and temporal information obtained from the United States of America and Mexico. Difficulties in processing raster datasets over large regions are studied in this research. One of the most important contributions is the application of a Raster-Network Regionalization technique, which utilizes raster-based analysis at the subregional scale in an efficient manner and combines the resulting subregional vector datasets into a regional database. Another important contribution of this research is focused on implementing a robust structure for handling huge temporal data sets related to monitoring points such as hydrometric and climatic stations, reservoir inlets and outlets, water rights, etc. For the Rio Grande study area, the ArcHydro format is applied to the historical information collected in order to include and relate these time series to the monitoring points in the geodatabase. Its standard time series format is changed to include a relationship to the agency from

  16. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    SciTech Connect

    Ghattas, Omar

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.
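
    The report does not reproduce the formulas; under the standard Laplace (linearized Gaussian) construction that Hessian-based Bayesian methods of this kind typically use, the posterior covariance of the inverted parameter field m is approximated by the inverse Hessian of the negative log-posterior at the MAP point,

        \Gamma_{\mathrm{post}} \approx \mathbf{H}(m_{\mathrm{MAP}})^{-1},
        \qquad
        \mathbf{H} \approx \mathbf{J}^{T}\,\Gamma_{\mathrm{noise}}^{-1}\,\mathbf{J} + \Gamma_{\mathrm{prior}}^{-1},

    where J is the parameter-to-observable Jacobian (a Gauss-Newton approximation); this covariance is then propagated forward to quantities of interest such as ice mass flux to the ocean.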

  17. Multistability in Large Scale Models of Brain Activity

    PubMed Central

    Golos, Mathieu; Jirsa, Viktor; Daucé, Emmanuel

    2015-01-01

    Noise-driven exploration of a brain network’s dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network’s capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain’s dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system’s attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low-temperature condition, with a larger number of attractors than previously reported. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. These different modes share similarities with the fMRI independent components observed in the “resting state” condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors. PMID:26709852
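
    The attractor-enumeration procedure described above can be sketched compactly in Python. Here a random symmetric coupling matrix stands in for the connectome-derived weights, and the gain, integration step, and convergence tolerance are illustrative choices, not the paper's values.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 64
        W = rng.standard_normal((N, N)) / np.sqrt(N)
        W = (W + W.T) / 2                      # symmetric couplings (toy connectome)
        np.fill_diagonal(W, 0.0)

        def relax(x, gain=4.0, dt=0.1, steps=2000):
            """Integrate the graded-response dynamics dx/dt = -x + tanh(gain * W x)."""
            for _ in range(steps):
                x = x + dt * (-x + np.tanh(gain * (W @ x)))
            return x

        attractors = []
        for _ in range(200):                   # uniform sampling of initial conditions
            x = relax(rng.uniform(-1, 1, N))
            if not any(np.allclose(x, a, atol=1e-3) for a in attractors):
                attractors.append(x)
        print(len(attractors), "distinct fixed-point attractors found")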

  18. Renormalizing a viscous fluid model for large scale structure formation

    SciTech Connect

    Führer, Florian; Rigopoulos, Gerasimos E-mail: gerasimos.rigopoulos@ncl.ac.uk

    2016-02-01

    Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry as the full system of continuity+Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to the standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean invariance, higher-order vertices related to the viscosity and the noise must then be added, and we explicitly show at one loop that these terms act as counterterms for vertex diagrams. The Ward Identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring fewer free parameters for its renormalization.
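
    Schematically, the adhesion-type equation of motion being renormalized augments the pressureless fluid equations with a viscosity term and stochastic forcing (written here in generic form; sign and time-variable conventions vary between treatments):

        \partial_{\tau}\mathbf{u} + (\mathbf{u}\cdot\nabla)\,\mathbf{u}
          = \nu\,\nabla^{2}\mathbf{u} + \boldsymbol{\eta},

    with η the stochastic noise; it is the perturbative treatment of the ν and η terms that generates the non-local-in-time vertices discussed above.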

  19. Large Scale Simulations of the Kinetic Ising Model

    NASA Astrophysics Data System (ADS)

    Münkel, Christian

    We present Monte Carlo simulation results for the dynamical critical exponent z of the two- and three-dimensional kinetic Ising model. The z-values were calculated from the magnetization relaxation from an ordered state into the equilibrium state at T_c for very large systems with up to (169984)^2 and (3072)^3 spins. To our knowledge, these are the largest Ising systems simulated to date. We also report the successful simulation of very large lattices on a massively parallel MIMD computer with high speedups of approximately 1000 and an efficiency of about 0.93.
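
    The quoted procedure is the standard nonequilibrium-relaxation route to z: quenching a fully ordered system to the critical temperature, the magnetization decays algebraically,

        M(t) \sim t^{-\beta/(\nu z)} \qquad \text{at } T = T_{c},

    so z follows from the measured decay exponent once the static exponents β and ν are known (exactly in 2D, from series or simulation estimates in 3D).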

  20. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-07-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that were part of the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation, i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  1. Modeling of Cloud/Radiation Processes for Large-Scale Clouds and Tropical Anvils

    DTIC Science & Technology

    1994-05-31

    three-dimensional, large-scale cloud model has been developed for the prediction of cloud cover, cloud liquid /ice water content (LWC/IWC), precipitation...specific humidity and temperature. Partial cloudiness is allowed to form when large-scale relative humidity is less than 100%. Both liquid and ice...phases are included in the model. The liquid phase processes consist of evaporation, condensation, autoconversion and precipitation. The ice phase

  2. Interaction of a cumulus cloud ensemble with the large-scale environment. IV - The discrete model

    NASA Technical Reports Server (NTRS)

    Lord, S. J.; Chao, W. C.; Arakawa, A.

    1982-01-01

    The Arakawa-Schubert (1974) parameterization is applied to a prognostic model of large-scale atmospheric circulations and used to analyze data in a general circulation model (GCM). The vertical structure of the large-scale model and the solution for the cloud subensemble thermodynamical properties are examined to choose cloud levels and representative regions. A mass flux distribution equation is adapted to formulate algorithms for calculating the large-scale forcing and the mass flux kernel, using either direct solution or linear programming. Finally, the feedback of the cumulus ensemble on the large-scale environment for a given subensemble mass flux is calculated. All cloud subensemble properties were determined from the conservation of mass, moist static energy, and total water.
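
    For reference, the mass flux distribution equation mentioned above takes, in the Arakawa-Schubert quasi-equilibrium form, the shape of an integral constraint on the cloud-base mass flux m_B(λ) of each subensemble λ,

        \int_{0}^{\lambda_{\max}} K(\lambda,\lambda')\, m_{B}(\lambda')\, d\lambda'
          + F(\lambda) = 0,
        \qquad m_{B}(\lambda) \ge 0,

    where K is the mass flux kernel and F the large-scale forcing; the non-negativity constraint on m_B is what makes linear programming one of the two solution algorithms mentioned.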

  3. Using large-scale neural models to interpret connectivity measures of cortico-cortical dynamics at millisecond temporal resolution

    PubMed Central

    Banerjee, Arpan; Pillai, Ajay S.; Horwitz, Barry

    2012-01-01

    Over the last two decades numerous functional imaging studies have shown that higher order cognitive functions are crucially dependent on the formation of distributed, large-scale neuronal assemblies (neurocognitive networks), often for very short durations. This has fueled the development of a vast number of functional connectivity measures that attempt to capture the spatiotemporal evolution of neurocognitive networks. Unfortunately, interpreting the neural basis of goal-directed behavior using connectivity measures on neuroimaging data is highly dependent on the assumptions underlying the development of the measure, the nature of the task, and the modality of the neuroimaging technique that was used. This paper has two main purposes. The first is to provide an overview of some of the different measures of functional/effective connectivity that deal with high temporal resolution neuroimaging data. We will include some results that come from a recent approach that we have developed to identify the formation and extinction of task-specific, large-scale neuronal assemblies from electrophysiological recordings at a ms-by-ms temporal resolution. The second purpose of this paper is to indicate how to partially validate the interpretations drawn from this (or any other) connectivity technique by using simulated data from large-scale, neurobiologically realistic models. Specifically, we applied our recently developed method to realistic simulations of MEG data during a delayed match-to-sample (DMS) task condition and a passive viewing of stimuli condition using a large-scale neural model of the ventral visual processing pathway. Simulated MEG data using simple head models were generated from sources placed in V1, V4, IT, and prefrontal cortex (PFC) for the passive viewing condition. The results show how closely the conclusions obtained from the functional connectivity method match with what actually occurred at the neuronal network level. PMID:22291621

  4. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  5. Realistic models of paracrystalline silicon

    NASA Astrophysics Data System (ADS)

    Nakhmanson, S. M.; Voyles, P. M.; Mousseau, Normand; Barkema, G. T.; Drabold, D. A.

    2001-06-01

    We present a procedure for the preparation of physically realistic models of paracrystalline silicon based on a modification of the bond-switching method of Wooten, Winer, and Weaire. The models contain randomly oriented c-Si grains embedded in a disordered matrix. Our technique creates interfaces between the crystalline and disordered phases of Si with an extremely low concentration of coordination defects. The resulting models possess structural and vibrational properties comparable with those of good continuous random network models of amorphous silicon and display realistic optical properties, correctly reproducing the electronic band gap of amorphous silicon. The largest of our models also shows the best agreement of any atomistic model structure that we tested with fluctuation microscopy experiments, indicating that this model has a degree of medium-range order closest to that of the real material.
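
    The core move of a Wooten-Winer-Weaire style bond-switching scheme can be sketched as Metropolis-accepted bond transpositions. In the Python skeleton below the energy function is a toy placeholder; the actual method uses a Keating-type valence force field with local relaxation after every switch, and the chain-like starting topology here is purely illustrative.

        import math
        import random

        def toy_energy(bonds):
            return 0.01 * len({a for a, b in bonds})   # placeholder energy only

        def bond_switch(bonds, kT=0.1):
            (a, b), (c, d) = random.sample(sorted(bonds), 2)
            if len({a, b, c, d}) < 4:
                return bonds                           # bonds share an atom; skip move
            new = (bonds - {(a, b), (c, d)}) | {(a, d), (c, b)}
            dE = toy_energy(new) - toy_energy(bonds)
            if dE <= 0 or random.random() < math.exp(-dE / kT):
                return new                             # accept the transposition
            return bonds                               # reject; keep old topology

        bonds = {(i, i + 1) for i in range(100)}
        for _ in range(1000):
            bonds = bond_switch(bonds)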

  6. Analytical model of the statistical properties of contrast of large-scale ionospheric inhomogeneities.

    NASA Astrophysics Data System (ADS)

    Vsekhsvyatskaya, I. S.; Evstratova, E. A.; Kalinin, Yu. K.; Romanchuk, A. A.

    1989-08-01

    A new analytical model is proposed for the distribution of variations of the relative electron-density contrast of large-scale ionospheric inhomogeneities. The model is characterized by nonzero skewness and kurtosis. It is shown that the model is applicable in the interval of horizontal dimensions of inhomogeneities from hundreds to thousands of kilometers.
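
    The record gives no formula; purely as an illustration of the kind of distribution meant, one standard analytical family with nonzero skewness γ_1 and excess kurtosis γ_2 is the Gram-Charlier A series around a Gaussian φ(x),

        p(x) \approx \varphi(x)\left[\,1 + \frac{\gamma_{1}}{6}\,\mathrm{He}_{3}(x)
          + \frac{\gamma_{2}}{24}\,\mathrm{He}_{4}(x)\right],

    with He_n the probabilists' Hermite polynomials.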

  7. Software System Design for Large Scale, Spatially-explicit Agroecosystem Modeling

    SciTech Connect

    Wang, Dali; Nichols, Dr Jeff A; Kang, Shujiang; Post, Wilfred M; Liu, Sumang

    2012-01-01

    Recently, site-based agroecosystem models have been applied at the regional and state levels to enable comprehensive analyses of the environmental sustainability of food and biofuel production. These large-scale, spatially-explicit simulations present computational challenges in software system design. Herein, we describe our software system design for large-scale, spatially-explicit agroecosystem modeling and data analysis. First, we describe the software design principles in three major phases: data preparation, high performance simulation, and data management and analysis. Then, we use a case study at a regional intensive modeling area (RIMA) to demonstrate our system implementation and capability.

  8. Identification of large-scale genomic variation in cancer genomes using in silico reference models

    PubMed Central

    Killcoyne, Sarah; del Sol, Antonio

    2016-01-01

    Identifying large-scale structural variation in cancer genomes continues to be a challenge to researchers. Current methods rely on genome alignments based on a reference that can be a poor fit to highly variant and complex tumor genomes. To address this challenge we developed a method that uses available breakpoint information to generate models of structural variations. We use these models as references to align previously unmapped and discordant reads from a genome. By using these models to align unmapped reads, we show that our method can help to identify large-scale variations that have been previously missed. PMID:26264669

  9. CytoModeler: a tool for bridging large-scale network analysis and dynamic quantitative modeling

    PubMed Central

    Xia, Tian; Van Hemert, John; Dickerson, Julie A.

    2011-01-01

    Summary: CytoModeler is an open-source Java application based on the Cytoscape platform. It integrates large-scale network analysis and quantitative modeling by combining omics analysis on the Cytoscape platform, access to deterministic and stochastic simulators, and static and dynamic network context visualizations of simulation results. Availability: Implemented in Java, CytoModeler runs with Cytoscape 2.6 and 2.7. Binaries, documentation and video walkthroughs are freely available at http://vrac.iastate.edu/~jlv/cytomodeler/. Contact: julied@iastate.edu; netscape@iastate.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:21511714

  10. Non-Gaussianity and large-scale structure in a two-field inflationary model

    SciTech Connect

    Tseliakhovich, Dmitriy; Hirata, Christopher

    2010-08-15

    Single-field inflationary models predict nearly Gaussian initial conditions, and hence a detection of non-Gaussianity would be a signature of the more complex inflationary scenarios. In this paper we study the effect on the cosmic microwave background and on large-scale structure from primordial non-Gaussianity in a two-field inflationary model in which both the inflaton and curvaton contribute to the density perturbations. We show that in addition to the previously described enhancement of the galaxy bias on large scales, this setup results in large-scale stochasticity. We provide joint constraints on the local non-Gaussianity parameter f̃_NL and the ratio ξ of the amplitude of primordial perturbations due to the inflaton and curvaton using WMAP and Sloan Digital Sky Survey data.
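
    For context, the local non-Gaussianity parameter referred to above is conventionally defined through the Bardeen potential,

        \Phi(\mathbf{x}) = \phi(\mathbf{x})
          + f_{NL}\left[\phi^{2}(\mathbf{x}) - \langle \phi^{2} \rangle\right],

    where φ is a Gaussian field; it is this quadratic term that produces the scale-dependent enhancement of the galaxy bias mentioned in the abstract.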

  11. Non-Gaussianity and Large Scale Structure in a two-field Inflationary model

    SciTech Connect

    Tseliakhovich, D.; Slosar, A.; Hirata, C.

    2010-08-30

    Single-field inflationary models predict nearly Gaussian initial conditions, and hence a detection of non-Gaussianity would be a signature of the more complex inflationary scenarios. In this paper we study the effect on the cosmic microwave background and on large-scale structure from primordial non-Gaussianity in a two-field inflationary model in which both the inflaton and curvaton contribute to the density perturbations. We show that in addition to the previously described enhancement of the galaxy bias on large scales, this setup results in large-scale stochasticity. We provide joint constraints on the local non-Gaussianity parameter f̃_NL and the ratio ζ of the amplitude of primordial perturbations due to the inflaton and curvaton using WMAP and Sloan Digital Sky Survey data.

  12. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  13. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    ERIC Educational Resources Information Center

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  14. An Alternative Way to Model Population Ability Distributions in Large-Scale Educational Surveys

    ERIC Educational Resources Information Center

    Wetzel, Eunike; Xu, Xueli; von Davier, Matthias

    2015-01-01

    In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…
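
    For orientation, the conventional latent regression referred to above models examinee proficiency θ_i given background covariates x_i (operationally, principal components of the background data) as

        \theta_{i} \mid \mathbf{x}_{i} \sim
          \mathcal{N}\!\left(\boldsymbol{\Gamma}^{\top}\mathbf{x}_{i},\; \boldsymbol{\Sigma}\right);

    it is this covariate construction step whose disadvantages the paper's alternative approach addresses.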

  15. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1994-01-01

    We analyze the consequences of models of structure formation for higher order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ≈ 20 h^-1 Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
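
    The hierarchical amplitudes used above are the standard ratios of connected correlation functions,

        Q_{3} = \frac{\zeta_{123}}{\xi_{12}\xi_{23} + \xi_{23}\xi_{31} + \xi_{31}\xi_{12}},
        \qquad
        S_{3} = \frac{\langle \delta^{3} \rangle}{\langle \delta^{2} \rangle^{2}},

    where ζ is the three-point and ξ the two-point correlation function; hierarchical scaling predicts these ratios to be roughly constant with scale, which is why a rapidly scale-dependent bias shows up as a decline of Q_J at large r.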

  16. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  17. PLATO: data-oriented approach to collaborative large-scale brain system modeling.

    PubMed

    Kannon, Takayuki; Inagaki, Keiichiro; Kamiji, Nilton L; Makimura, Kouji; Usui, Shiro

    2011-11-01

    The brain is a complex information processing system, which can be divided into sub-systems, such as the sensory organs, functional areas in the cortex, and motor control systems. In this sense, most of the mathematical models developed in the field of neuroscience have mainly targeted a specific sub-system. In order to understand the details of the brain as a whole, such sub-system models need to be integrated toward the development of a neurophysiologically plausible large-scale system model. In the present work, we propose a model integration library where models can be connected by means of a common data format. Here, the common data format should be portable so that models written in any programming language, computer architecture, and operating system can be connected. Moreover, the library should be simple so that models can be adapted to use the common data format without requiring any detailed knowledge on its use. Using this library, we have successfully connected existing models reproducing certain features of the visual system, toward the development of a large-scale visual system model. This library will enable users to reuse and integrate existing and newly developed models toward the development and simulation of a large-scale brain system model. The resulting model can also be executed on high performance computers using Message Passing Interface (MPI).
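
    The portability argument above rests on a data format simple enough to read and write from any language. Purely as a hypothetical illustration of that idea (this is not PLATO's actual format), a fixed little-endian header followed by float64 samples suffices in Python:

        import struct

        def write_frame(fh, channel_id, samples):
            # header: channel id and sample count, then the samples themselves
            fh.write(struct.pack("<ii", channel_id, len(samples)))
            fh.write(struct.pack(f"<{len(samples)}d", *samples))

        def read_frame(fh):
            header = fh.read(8)
            if not header:
                return None
            channel_id, n = struct.unpack("<ii", header)
            samples = struct.unpack(f"<{n}d", fh.read(8 * n))
            return channel_id, list(samples)

        with open("frames.bin", "wb") as fh:
            write_frame(fh, 1, [0.1, 0.2, 0.3])
        with open("frames.bin", "rb") as fh:
            print(read_frame(fh))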

  18. Influence of a compost layer on the attenuation of 28 selected organic micropollutants under realistic soil aquifer treatment conditions: insights from a large scale column experiment.

    PubMed

    Schaffer, Mario; Kröger, Kerrin Franziska; Nödler, Karsten; Ayora, Carlos; Carrera, Jesús; Hernández, Marta; Licha, Tobias

    2015-05-01

    Soil aquifer treatment is widely applied to improve the quality of treated wastewater in its reuse as an alternative source of water. To gain a deeper understanding of the fate of the organic micropollutants introduced in this way, the attenuation of 28 compounds was investigated in column experiments using two large scale column systems in duplicate. The influence of increasing proportions of solid organic matter (0.04% vs. 0.17%) and decreasing redox potentials (denitrification vs. iron reduction) was studied by introducing a layer of compost. Secondary effluent from a wastewater treatment plant was used as the water matrix for simulating soil aquifer treatment. For neutral and anionic compounds, sorption generally increases with the compound hydrophobicity and the solid organic matter in the column system. Organic cations showed the highest attenuation. Among them, breakthroughs were registered only for the cationic beta-blockers atenolol and metoprolol. An enhanced degradation in the columns with an organic infiltration layer was observed for the majority of the compounds, suggesting an improved degradation for higher levels of biodegradable dissolved organic carbon. Only the degradation of sulfamethoxazole could clearly be attributed to redox effects (when reaching iron reducing conditions). The study provides valuable insights into the attenuation potential for a wide spectrum of organic micropollutants under realistic soil aquifer treatment conditions. Furthermore, the introduction of the compost layer generally showed positive effects on the removal of compounds preferentially degraded under reducing conditions and also increased the residence times in the soil aquifer treatment system via sorption.

  19. Aspects of investigating STOL noise using large scale wind tunnel models

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.; Soderman, P. T.

    1972-01-01

    The applicability of the NASA Ames 40- by 80-ft wind tunnel for acoustic research on STOL concepts has been investigated. The acoustic characteristics of the wind tunnel test section have been studied with calibrated acoustic sources. Acoustic characteristics of several large-scale STOL models have been studied both in the free-field and wind tunnel acoustic environments. The results indicate that the acoustic characteristics of large-scale STOL models can be measured in the wind tunnel if the test section acoustic environment and model acoustic similitude are taken into consideration. The reverberant field of the test section must be determined with an acoustically similar noise source. Directional microphones and extrapolation of near-field data to the far field are some of the techniques being explored as possible solutions to the directivity loss in a reverberant field. The model sound pressure levels must be of sufficient magnitude to be discernible from the wind tunnel background noise.

  20. Large-scale peculiar velocity field in flat models of the universe

    SciTech Connect

    Vittorio, N.; Turner, M.S.

    1987-05-01

    The inflationary universe scenario predicts a flat universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Flat models with two components of mass density, where one of the components is smoothly distributed, are examined, and the large-scale peculiar velocity field for these models is computed. For the smooth component the authors consider relativistic particles, a relic cosmological term, and light strings. At present the observational situation is unsettled, but, in principle, the large-scale peculiar velocity field is a very powerful discriminator between these different models. 66 references.

  1. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    NASA Astrophysics Data System (ADS)

    Otterå, O. H.; Bentsen, M.; Bethke, I.; Kvamstø, N. G.

    2009-11-01

    The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  2. Evaluation of variational principle based model for LDPE large scale film blowing process

    NASA Astrophysics Data System (ADS)

    Kolarik, Roman; Zatloukal, Martin

    2013-04-01

    In this work, a variational principle based film blowing model, combined with the Pearson and Petrie formulation, considering non-isothermal processing conditions and a novel generalized Newtonian model capable of capturing steady shear and uniaxial extensional viscosities, has been validated using experimentally determined bubble shapes and velocity profiles for an LDPE sample on a large-scale film blowing line. It has been revealed that a minute change in the flow activation energy can significantly influence the film stretching level.

  3. Interactions among Radiation, Convection, and Large-Scale Dynamics in a General Circulation Model.

    NASA Astrophysics Data System (ADS)

    Randall, David A.; Harshvardhan; Dazlich, Donald A.; Corsetti, Thomas G.

    1989-07-01

    We have analyzed the effects of radiatively active clouds on the climate simulated by the UCLA/GLA GCM, with particular attention to the effects of the upper tropospheric stratiform clouds associated with deep cumulus convection, and the interactions of these clouds with convection and the large-scale circulation. Several numerical experiments have been performed to investigate the mechanisms through which the clouds influence the large-scale circulation. In the `NODETLQ' experiment, no liquid water or ice was detrained from cumulus clouds into the environment; all of the condensate was rained out. Upper level supersaturation cloudiness was drastically reduced, the atmosphere dried, and tropical outgoing longwave radiation increased. In the `NOANVIL' experiment, the radiative effects of the optically thick upper-level cloud sheets associated with deep cumulus convection were neglected. The land surface received more solar radiation in regions of convection, leading to enhanced surface fluxes and a dramatic increase in precipitation. In the `NOCRF' experiment, the longwave atmospheric cloud radiative forcing (ACRF) was omitted, paralleling the recent experiment of Slingo and Slingo. The results suggest that the ACRF enhances deep penetrative convection and precipitation, while suppressing shallow convection. They also indicate that the ACRF warms and moistens the tropical troposphere. The results of this experiment are somewhat ambiguous, however; for example, the ACRF suppresses precipitation in some parts of the tropics, and enhances it in others. To isolate the effects of the ACRF in a simpler setting, we have analyzed the climate of an ocean-covered Earth, which we call Seaworld. The key simplicities of Seaworld are the fixed boundary temperature with no land points, the lack of mountains, and the zonal uniformity of the boundary conditions. Results are presented from two Seaworld simulations. The first includes a full suite of physical parameterizations, while

  5. Remote control and telemetry system for large-scale model test at sea

    NASA Astrophysics Data System (ADS)

    Sun, Shu-Zheng; Li, Ji-De; Zhao, Xiao-Dong; Luan, Jing-Lei; Wang, Chang-Tao

    2010-09-01

    Physical testing of large-scale ship models at sea is a new experimental method. It is a cheap and reliable way to research the environmental adaptability of a ship in complex and extreme wave conditions. It is necessary to have a stable experimental system for the test. Since the experimental area is large, a remote control system and a telemetry system are essential, and were designed by the authors. An experiment was conducted on the Songhuajiang River to test the systems. The relationship between the model’s speed and its electromotor’s revolutions was also measured during the model test. The results showed that the two systems make it possible to carry out large-scale model tests at sea.

  6. Evolution of Large-Scale Circulation during TOGA COARE: Model Intercomparison and Basic Features.

    NASA Astrophysics Data System (ADS)

    Lau, K.-M.; Sheu, P. J.; Schubert, S.; Ledvina, D.; Weng, H.

    1996-05-01

    An intercomparison study of the evolution of large-scale circulation features during TOGA COARE has been carried out using data from three 4D assimilation systems: the National Meteorological Center (NMC, currently known as the National Center for Environmental Prediction), the Navy Fleet Numerical Oceanography Center, and the NASA Goddard Space Flight Center. Results show that the preliminary assimilation products, though somewhat crude, can provide important information concerning the evolution of the large-scale atmospheric circulation over the tropical western Pacific during TOGA COARE. Large-scale features such as sea level pressure, rotational wind field, and temperature are highly consistent among models. However, the rainfall and wind divergence distributions show poor agreement among models, even though some useful information can still be derived. All three models show a continuous background rain over the Intensive Flux Area (IFA), even during periods with suppressed convection, in contrast to the radar-estimated rainfall that is more episodic. This may reflect a generic deficiency in the oversimplified representation of large-scale rain in all three models. Based on the comparative model diagnostics, a consistent picture of large-scale evolution and multiscale interaction during TOGA COARE emerges. The propagation of the Madden and Julian Oscillation (MJO) from the equatorial Indian Ocean region into the western Pacific foreshadows the establishment of westerly wind events over the COARE region. The genesis and maintenance of the westerly wind (WW) events during TOGA COARE are related to the establishment of a large-scale east-west pressure dipole between the Maritime Continent and the equatorial central Pacific. This pressure dipole could be identified in part with the ascending (low pressure) and descending (high pressure) branches of the MJO and in part with the fluctuations of the austral summer monsoon. Accompanying the development of WW over the

  7. Simplified radiation and convection treatments for large- scale tropical atmospheric modeling

    NASA Astrophysics Data System (ADS)

    Chou, Chia

    1997-05-01

    A physical parameterization package is developed for intermediate tropical atmospheric models, i.e., models slightly less complex than full general circulation models (GCMs). This package includes a linearized longwave radiation scheme, a simplified parameterization for surface solar radiation, and a cloudiness prediction scheme. A quantity that measures the net large-scale vertical stratification in deep convective regions, the gross moist stability, is estimated from observations. Using a Green's function method, the longwave radiation scheme is linearized from a fully nonlinear scheme used in GCMs. This includes the radiative flux dependence on large-scale variables, such as temperature, moisture, cloud fraction, and cloud top. A comparison with the fully nonlinear scheme in simulating tropical climatology, seasonal variations, and interannual variability is carried out using the observed large-scale variables as input. For these applications, the linearized scheme accurately reproduces the nonlinear results, and it can be easily applied in atmospheric models. The simplified solar radiation scheme is used to calculate surface solar irradiance as a function of cloud fraction and solar zenith angle. Cloud optical thickness is fixed for each cloud type, and cloud albedo is assumed to depend linearly on solar zenith angle. Comparison is made with two satellite-derived data sets. The cloudiness prediction scheme consists of empirical relations for cloudiness associated with deep convection, and is appropriate for long Reynolds-averaging intervals. Deep cloud can be estimated by large-scale precipitation in the tropics. Deep cloud and cirrostratus/cirrocumulus corresponding to tower and anvil clouds have a linear relation. Cirrus cloud fraction is calculated by a 2-D prognostic cloud ice budget equation. A deep-cloud-top-temperature postulate is used for parameterizing the cirrus source. The data analysis yields the physical hypothesis that deep cloud top temperature

  8. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-10-20

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in a L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
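
    As a rough illustration of the filtering step described above, the following sketch applies a scale-dependent linear bias to a random density field in Fourier space and maps the result to a spatially varying reionization-redshift field. The bias form b(k) = b0/(1 + k/k0)^alpha, the parameter values, and the final linear mapping to z_re are illustrative assumptions, not the paper's fitted model.

      import numpy as np

      # Toy stand-ins: grid size, box size, and bias parameters are NOT the
      # paper's fitted values; the final mapping to z_re is also schematic.
      n, box = 64, 100.0                  # cells per side, box size (Mpc/h)
      b0, k0, alpha = 1.0, 0.2, 0.6       # assumed bias parameters
      z_mean = 8.0                        # assumed mean reionization redshift

      rng = np.random.default_rng(0)
      delta = rng.normal(0.0, 1.0, (n, n, n))   # stand-in overdensity field

      # Scale-dependent bias applied in Fourier space:
      # delta_z(k) = b(k) * delta_m(k)
      k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
      kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
      kmag = np.sqrt(kx**2 + ky**2 + kz**2)
      bias = b0 / (1.0 + kmag / k0) ** alpha

      delta_z = np.fft.ifftn(bias * np.fft.fftn(delta)).real

      # Overdense regions reionize earlier (higher z_re); scaling arbitrary.
      z_re = z_mean + 0.05 * (1.0 + z_mean) * delta_z
      print(z_re.mean(), z_re.std())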

  9. Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations

    NASA Astrophysics Data System (ADS)

    Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila

    2016-07-01

    A numerical investigation of a self-consistent small parametric model (SPM) for regional large-scale cyclogenesis (RLSC) is performed, using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios of the temporal dynamics of a powerful atmospheric vortex during its full life cycle. The numerical calculations have shown that a relevant choice of the SPM's input parameters allows the seasonal behavior of regional large-scale cyclogenesis dynamics to be described for a given number of TCs during the active season. It is shown that the SPM can also describe the variable wind speed variations inside the TC. Thus, using the nonlinear small parametric model it is possible to study the features of the RLSC's temporal dynamics during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and different external factors like space weather, including the solar activity level and cosmic ray variations.

  10. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect

    RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exists, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. We present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  11. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large-scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large-scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10⁶ cores and sustained performance of over ~2 PFlops are demonstrated, opening the way for large-scale modelling of LWFA scenarios.

  12. Computational fluid dynamics simulations of particle deposition in large-scale, multigenerational lung models.

    PubMed

    Walters, D Keith; Luke, William H

    2011-01-01

    Computational fluid dynamics (CFD) has emerged as a useful tool for the prediction of airflow and particle transport within the human lung airway. Several published studies have demonstrated the use of Eulerian finite-volume CFD simulations coupled with Lagrangian particle tracking methods to determine local and regional particle deposition rates in small subsections of the bronchopulmonary tree. However, the simulation of particle transport and deposition in large-scale models encompassing more than a few generations is less common, due in part to the sheer size and complexity of the human lung airway. Highly resolved, fully coupled flowfield solution and particle tracking in the entire lung, for example, is currently an intractable problem and will remain so for the foreseeable future. This paper adopts a previously reported methodology for simulating large-scale regions of the lung airway (Walters, D. K., and Luke, W. H., 2010, "A Method for Three-Dimensional Navier-Stokes Simulations of Large-Scale Regions of the Human Lung Airway," ASME J. Fluids Eng., 132(5), p. 051101), which was shown to produce results similar to fully resolved geometries using approximate, reduced geometry models. The methodology is extended here to particle transport and deposition simulations. Lagrangian particle tracking simulations are performed in combination with Eulerian simulations of the airflow in an idealized representation of the human lung airway tree. Results using the reduced models are compared with those using the fully resolved models for an eight-generation region of the conducting zone. The agreement between fully resolved and reduced geometry simulations indicates that the new method can provide an accurate alternative for large-scale CFD simulations while potentially reducing the computational cost of these simulations by several orders of magnitude.
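
    Since the abstract centers on Lagrangian particle tracking through an Eulerian flow solution, a minimal sketch of that coupling is given below: particles relax toward a prescribed air velocity with a Stokes drag time scale. The analytic channel-like velocity profile, particle properties, and step size are illustrative stand-ins for an interpolated CFD flow field; gravity and Brownian motion are omitted.

      import numpy as np

      # One-way coupled Lagrangian tracking sketch: the 'CFD solution' is a
      # fixed analytic profile, and particles follow Stokes drag,
      # dv/dt = (u_air - v) / tau_p, integrated with the exact exponential
      # update for piecewise-constant u_air. All values are illustrative.
      rho_p, d_p = 1000.0, 5e-6            # particle density (kg/m^3), diameter (m)
      mu = 1.8e-5                          # air dynamic viscosity (Pa s)
      tau_p = rho_p * d_p**2 / (18.0 * mu) # Stokes relaxation time (s)

      def air_velocity(x):
          # Stand-in for velocity interpolated from an Eulerian CFD grid:
          # a parabolic profile in a 2 cm wide channel along x.
          return np.array([0.5 * (1.0 - (x[1] / 0.01) ** 2), 0.0, 0.0])

      def step(x, v, dt):
          u = air_velocity(x)
          v_new = u + (v - u) * np.exp(-dt / tau_p)  # exact for constant u
          return x + dt * v_new, v_new

      x, v = np.array([0.0, 0.005, 0.0]), np.zeros(3)
      for _ in range(1000):
          x, v = step(x, v, dt=1e-5)
      print(x)   # particle position after 10 ms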

  13. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    NASA Astrophysics Data System (ADS)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population increases and economic developments would substantially affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, there are few studies that dynamically account for changes in water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Also, irrigation management in response to subseasonal variability in weather and crop growth varies for each region and each crop. To deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimations. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model was based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of
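
    The MCMC step mentioned above can be illustrated with a toy Metropolis-Hastings sampler: one region-specific crop parameter is sampled against synthetic "observed" yields. The linear stand-in crop model, the prior, and all constants are assumptions for illustration, not parts of the coupled crop/H08 system.

      import numpy as np

      # Toy Metropolis-Hastings sketch of region-specific parameter
      # estimation. The 'crop model' is a linear stand-in; prior, noise
      # level, and proposal width are illustrative assumptions.
      rng = np.random.default_rng(1)
      years = np.arange(1, 11)
      theta_true = 1.5
      obs = theta_true * years + rng.normal(0.0, 0.5, years.size)  # fake yields

      def log_post(theta):
          pred = theta * years                       # stand-in crop model
          log_like = -0.5 * np.sum((obs - pred) ** 2 / 0.5**2)
          log_prior = -0.5 * (theta - 1.0) ** 2      # weak N(1, 1) prior
          return log_like + log_prior

      theta, chain = 1.0, []
      for _ in range(5000):
          prop = theta + rng.normal(0.0, 0.1)        # random-walk proposal
          if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
              theta = prop                           # accept
          chain.append(theta)
      print(np.mean(chain[1000:]), np.std(chain[1000:]))  # posterior summary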

  14. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper employ computational algorithms and procedures implemented in Matlab to simulate agent based models, using computing clusters as a high-performance platform to run the programs in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  15. A Novel Large-Scale Temperature Dominated Model for Predicting the End of the Growing Season

    PubMed Central

    Fu, Yang; Zheng, Zeyu; Shi, Haibo; Xiao, Rui

    2016-01-01

    Vegetation phenology regulates many ecosystem processes and is an indicator of the biological responses to climate change. It is important to model the timing of leaf senescence accurately, since the canopy duration and carbon assimilation are strongly determined by the timings of leaf senescence. However, the existing phenology models are unlikely to accurately predict the end of the growing season (EGS) on large scales, resulting in the misrepresentation of the seasonality and interannual variability of biosphere–atmosphere feedbacks and interactions in coupled global climate models. In this paper, we presented a novel large-scale temperature dominated model integrated with the physiological adaptation of plants to the local temperature to assess the spatial pattern and interannual variability of the EGS. Our model was validated in all temperate vegetation types over the Northern Hemisphere. The results indicated that our model showed better performance in representing the spatial and interannual variability of leaf senescence, compared with the original phenology model in the Integrated Biosphere Simulator (IBIS). Our model explained approximately 63% of the EGS variations, whereas the original model explained much lower variations (coefficient of determination R² = 0.01–0.18). In addition, the differences between the EGS reproduced by our model and the MODIS EGS at 71.3% of the pixels were within 10 days. For the original model, it is only 26.1%. We also found that the temperature threshold (Tcrit) of grassland was lower than that of woody species in the same latitudinal zone. PMID:27893828

  16. Cooling biogeophysical effect of large-scale tropical deforestation in three Earth System models

    NASA Astrophysics Data System (ADS)

    Brovkin, V.; Pugh, T.; Robertson, E.; Bathiany, S.; Jones, C.; Arneth, A.

    2015-12-01

    Vegetation cover in the tropics is limited by moisture availability. Since transpiration from forests is generally greater than from grasslands, the sensitivity of precipitation in the Amazon to large-scale deforestation has long been seen as a critical parameter of climate-vegetation interactions. Most Amazon deforestation experiments to date have been performed with interactive land-atmosphere models but prescribed sea surface temperatures (SSTs). They reveal a strong reduction in evapotranspiration and precipitation, and an increase in global air surface temperature due to reduced latent heat flux. We performed large-scale tropical deforestation experiments with three Earth system models (ESMs) including interactive ocean models, which participated in the FP7 project EMBRACE. In response to tropical deforestation, all models simulate a significant reduction in tropical precipitation, similar to the experiments with prescribed SSTs. However, all three models suggest that the response of global temperature to the deforestation is a cooling or no change, differing from the result of a global warming in prescribed SSTs runs. Presumably, changes in the hydrological cycle and in the water vapor feedback due to deforestation operate in the direction of a global cooling. In addition, one of the models simulates a local cooling over the deforested tropical region. This is opposite to the local warming in the other models. This suggests that the balance between warming due to latent heat flux decrease and cooling due to albedo increase is rather subtle and model-dependent. Last but not least, we suggest using large-scale deforestation as a standard biogeophysical experiment for model intercomparison within the CMIP6 framework.

  17. Image fusion for remote sensing using fast, large-scale neuroscience models

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.

    2011-05-01

    We present results with large-scale neuroscience-inspired models for feature detection using multi-spectral visible/infrared satellite imagery. We describe a model using an artificial neural network architecture and learning rules to build sparse scene representations over an adaptive dictionary, fusing spectral and spatial textural characteristics of the objects of interest. Our results with fast codes implemented on clusters of graphical processor units (GPUs) suggest that visual cortex models are a promising approach to practical pattern recognition problems in remote sensing, even for datasets using spectral bands not found in natural visual systems.

  18. Modeling dynamic functional information flows on large-scale brain networks.

    PubMed

    Lv, Peili; Guo, Lei; Hu, Xintao; Li, Xiang; Jin, Changfeng; Han, Junwei; Li, Lingjiang; Liu, Tianming

    2013-01-01

    Growing evidence from the functional neuroimaging field suggests that human brain functions are realized via dynamic functional interactions on large-scale structural networks. Even in resting state, functional brain networks exhibit remarkable temporal dynamics. However, it has been rarely explored to computationally model such dynamic functional information flows on large-scale brain networks. In this paper, we present a novel computational framework to explore this problem using multimodal resting state fMRI (R-fMRI) and diffusion tensor imaging (DTI) data. Basically, recent literature reports including our own studies have demonstrated that the resting state brain networks dynamically undergo a set of distinct brain states. Within each quasi-stable state, functional information flows from one set of structural brain nodes to other sets of nodes, which is analogous to the message package routing on the Internet from the source node to the destination. Therefore, based on the large-scale structural brain networks constructed from DTI data, we employ a dynamic programming strategy to infer functional information transition routines on structural networks, based on which hub routers that most frequently participate in these routines are identified. It is interesting that a majority of those hub routers are located within the default mode network (DMN), revealing a possible mechanism of the critical functional hub roles played by the DMN in resting state. Also, application of this framework on a post trauma stress disorder (PTSD) dataset demonstrated interesting difference in hub router distributions between PTSD patients and healthy controls.
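
    The routing idea can be caricatured with standard all-pairs shortest paths: compute routes on a weighted structural graph and count how often each node appears as an intermediate "router". The random graph below stands in for a DTI-derived network, and plain Floyd-Warshall stands in for the paper's dynamic programming strategy.

      import itertools
      import numpy as np

      # Shortest routes between all node pairs of a weighted graph
      # (Floyd-Warshall with next-hop reconstruction), then a count of how
      # often each node serves as an intermediate router. The graph is
      # random; the paper infers routes from DTI networks and fMRI states.
      rng = np.random.default_rng(2)
      n = 12
      w = rng.uniform(1.0, 5.0, (n, n))
      w = np.minimum(w, w.T); np.fill_diagonal(w, 0.0)

      dist = w.copy()
      nxt = np.tile(np.arange(n), (n, 1))   # nxt[i, j]: next hop from i to j
      for k, i, j in itertools.product(range(n), repeat=3):
          if dist[i, k] + dist[k, j] < dist[i, j]:
              dist[i, j] = dist[i, k] + dist[k, j]
              nxt[i, j] = nxt[i, k]

      hub_count = np.zeros(n, dtype=int)
      for i, j in itertools.permutations(range(n), 2):
          node = i
          while node != j:                  # walk the reconstructed route
              node = nxt[node, j]
              if node != j:
                  hub_count[node] += 1      # interior node acts as router
      print(hub_count)                      # frequent routers ~ hub routers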

  19. UDEC-AUTODYN Hybrid Modeling of a Large-Scale Underground Explosion Test

    NASA Astrophysics Data System (ADS)

    Deng, X. F.; Chen, S. G.; Zhu, J. B.; Zhou, Y. X.; Zhao, Z. Y.; Zhao, J.

    2015-03-01

    In this study, numerical modeling of a large-scale decoupled underground explosion test with 10 tons of TNT in Älvdalen, Sweden is performed by combining DEM and FEM with codes UDEC and AUTODYN. AUTODYN is adopted to model the explosion process, blast wave generation, and its action on the explosion chamber surfaces, while UDEC modeling is focused on shock wave propagation in jointed rock masses surrounding the explosion chamber. The numerical modeling results with the hybrid AUTODYN-UDEC method are compared with empirical estimations, purely AUTODYN modeling results, and the field test data. It is found that in terms of peak particle velocity, empirical estimations are much smaller than the measured data, while purely AUTODYN modeling results are larger than the test data. The UDEC-AUTODYN numerical modeling results agree well with the test data. Therefore, the UDEC-AUTODYN method is appropriate in modeling a large-scale explosive detonation in a closed space and the following wave propagation in jointed rock masses. It should be noted that joint mechanical and spatial properties adopted in UDEC-AUTODYN modeling are determined with empirical equations and available geological data, and they may not be sufficiently accurate.

  20. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel in different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related with the central carbon metabolism of E. coli which include different regulatory levels (metabolic and transcriptional) are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended
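
    A heavily simplified sketch of the cooperation idea follows: several search "threads" (plain stochastic hill-climbers standing in for eSS) optimize the same cost function and periodically share their best point. The Rosenbrock cost, the restart rule, and all settings are toy assumptions; real CeSS shares only selected information so that threads keep diverse search behaviors.

      import numpy as np

      # Toy cooperative search: hill-climbing 'threads' stand in for eSS;
      # cooperation = periodic sharing of the best point. All settings are
      # illustrative; real CeSS preserves more diversity between threads.
      rng = np.random.default_rng(3)

      def cost(x):  # stand-in calibration cost (Rosenbrock function)
          return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2)

      n_threads, dim = 4, 5
      pts = [rng.uniform(-2.0, 2.0, dim) for _ in range(n_threads)]
      best = min(pts, key=cost)

      for epoch in range(20):
          for t in range(n_threads):          # independent local search
              step = 0.5 / (1.0 + epoch)
              for _ in range(200):
                  cand = pts[t] + rng.normal(0.0, step, dim)
                  if cost(cand) < cost(pts[t]):
                      pts[t] = cand
          best = min(pts + [best], key=cost)  # cooperation: share best point
          pts = [best + rng.normal(0.0, 0.1, dim) for _ in range(n_threads)]

      print(cost(best))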

  1. Large-scale growth evolution in the Szekeres inhomogeneous cosmological models with comparison to growth data

    NASA Astrophysics Data System (ADS)

    Peel, Austin; Ishak, Mustapha; Troxel, M. A.

    2012-12-01

    We use the Szekeres inhomogeneous cosmological models to study the growth of large-scale structure in the universe including nonzero spatial curvature and a cosmological constant. In particular, we use the Goode and Wainwright formulation of the solution, as in this form the models can be considered to represent exact nonlinear perturbations of an averaged background. We identify a density contrast in both classes I and II of the models, for which we derive growth evolution equations. By including Λ, the time evolution of the density contrast as well as kinematic quantities of interest can be tracked through the matter- and Λ-dominated cosmic eras up to the present and into the future. In class I, we consider a localized cosmic structure representing an overdensity neighboring a central void, surrounded by an almost Friedmann-Lemaître-Robertson-Walker background, while for class II, the exact perturbations exist globally. In various models of class I and class II, the growth rate is found to be stronger in the matter-dominated era than that of the standard lambda-cold dark matter (ΛCDM) cosmology, and it is suppressed at later times due to the presence of the cosmological constant. We find that there are Szekeres models able to provide a growth history similar to that of ΛCDM while requiring less matter content and nonzero spatial curvature, which speaks to the importance of including the effects of large-scale inhomogeneities in analyzing the growth of large-scale structure. Using data for the growth factor f from redshift space distortions and the Lyman-α forest, we obtain best fit parameters for class II models and compare their ability to match observations with ΛCDM. We find that there is negligible difference between best fit Szekeres models with no priors and those for ΛCDM, both including and excluding Lyman-α data. We also find that the standard growth index γ parametrization cannot be applied in a simple way to the growth in Szekeres models, so

  2. Comparison of void strengthening in fcc and bcc metals : large-scale atomic-level modelling.

    SciTech Connect

    Osetskiy, Yury N; Bacon, David J

    2005-01-01

    Strengthening due to voids can be a significant radiation effect in metals. Treatment of this by elasticity theory of dislocations is difficult when atomic structure of the obstacle and dislocation is influential. In this paper, we report results of large-scale atomic-level modelling of edge dislocation-void interaction in fcc (copper) and bcc (iron) metals. Voids of up to 5 nm diameter were studied over the temperature range from 0 to 600 K. We demonstrate that atomistic modelling is able to reveal important effects, which are beyond the continuum approach. Some arise from features of the dislocation core and crystal structure, others involve dislocation climb and temperature effects.

  3. Remarks on discrete and continuous large-scale models of DNA dynamics.

    PubMed Central

    Klapper, I; Qian, H

    1998-01-01

    We present a comparison of the continuous versus discrete models of large-scale DNA conformation, focusing on issues of relevance to molecular dynamics. Starting from conventional expressions for elastic potential energy, we derive elastic dynamic equations in terms of Cartesian coordinates of the helical axis curve, together with a twist function representing the helical or excess twist. It is noted that the conventional potential energies for the two models are not consistent. In addition, we derive expressions for random Brownian forcing for the nonlinear elastic dynamics and discuss the nature of such forces in a continuous system. PMID:9591677

  4. Formation and disruption of tonotopy in a large-scale model of the auditory cortex.

    PubMed

    Tomková, Markéta; Tomek, Jakub; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, Cyril

    2015-10-01

    There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100 000 Izhikevich's spiking neurons of 17 different types, almost 21 million synapses, which are evolved according to Spike-Timing-Dependent Plasticity (STDP) and have an architecture akin to existing observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena via analysing the activity of neuronal subtypes and testing different causal interventions into the simulation. Our model is able to produce experimental predictions on a cell type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulations during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As the STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available.
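
    The model's two main ingredients, Izhikevich dynamics and STDP, can be sketched in a few lines. Below, two neurons are connected by one plastic synapse; the neuron parameters are the standard regular-spiking values from Izhikevich (2003), while the coupling form and the STDP constants are illustrative assumptions rather than the paper's 17-cell-type setup.

      import numpy as np

      # Two Izhikevich regular-spiking neurons with one plastic synapse
      # (0 -> 1) updated by pair-based STDP. Coupling form and STDP
      # constants are illustrative; the paper's network is far larger.
      a, b, c, d = 0.02, 0.2, -65.0, 8.0
      v = np.array([-65.0, -65.0]); u = b * v
      w = 1.0                                  # plastic synaptic weight
      last_spike = np.array([-1e9, -1e9])      # last spike times (ms)
      A_plus, A_minus, tau = 0.1, 0.12, 20.0   # STDP constants (assumed)

      dt = 1.0                                 # ms
      for t in np.arange(0.0, 500.0, dt):
          # crude coupling: neuron 1 gets extra drive for 5 ms after a
          # presynaptic spike, scaled by the weight w
          I = np.array([10.0, 2.0 + 8.0 * w * (t - last_spike[0] < 5.0)])
          for _ in range(2):                   # two half-steps for stability
              v += 0.5 * dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
          u += dt * a * (b * v - u)
          for i in np.where(v >= 30.0)[0]:     # spikes: reset and apply STDP
              v[i], u[i] = c, u[i] + d
              last_spike[i] = t
              if i == 1:                       # post after pre: potentiate
                  w += A_plus * np.exp(-(t - last_spike[0]) / tau)
              else:                            # pre after post: depress
                  w -= A_minus * np.exp(-(t - last_spike[1]) / tau)
              w = min(max(w, 0.0), 5.0)        # clamp weight
      print(w)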

  5. Lateral Flow across Multi-parallel Columns and Their Implications on Large-Scale Evapotranspiration Modeling

    NASA Astrophysics Data System (ADS)

    Sun, D.; Zhu, J.

    2011-12-01

    Evapotranspiration (ET, i.e., evaporation and plant transpiration) is an important component of the hydrological cycle, especially in semi-arid and arid environments. The representation of soil hydrologic processes and parameters at scales different from the scale at which observations and measurements are made is a major challenge. Large scale evapotranspiration is often quantified through simulation of multiple columns of independent one-dimensional local scale vertical flow. The soil column used in each simulation is considered homogeneous for the purpose of modeling over short depths. A main limitation is that this purely one-dimensional modeling approach does not consider interaction between columns. Lateral flows might be significant for long and narrow tubes and heterogeneous hydraulic properties and plant characteristics. This study quantifies the significance of lateral flow and examines, using a three-dimensional modeling approach, whether the one-dimensional approach may introduce unacceptable errors in large-scale evapotranspiration simulations. Instead of using convenient parallel column models of independent hydrologic processes, this study simulates three-dimensional transpiration and evaporation in multiple columns which allow lateral interactions. Specifically, we examined the impact of plant rooting density, depth, pattern and other characteristics on the accuracy of this commonly used one-dimensional approximation of hydrological processes. In addition, the influence of spatial variability of hydraulic properties on the validity of the one-dimensional approach and the difference between wetting and drying processes are discussed. The results provide applicable guidance for applications of the one-dimensional approach to simulate large scale evapotranspiration in a heterogeneous landscape.

  6. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  8. A Large-Scale, Energetic Model of Cardiovascular Homeostasis Predicts Dynamics of Arterial Pressure in Humans

    PubMed Central

    Roytvarf, Alexander; Shusterman, Vladimir

    2008-01-01

    The energetic balance of forces in the cardiovascular system is vital to the stability of blood flow to all physiological systems in mammals. Yet, a large-scale, theoretical model summarizing the energetic balance of major forces in a single, mathematically closed system has not been described. Although a number of computer simulations have been successfully performed with the use of analog models, the analysis of the energetic balance of forces in such models is obscured by the large number of interacting elements. Hence, the goal of our study was to develop a theoretical model that represents the large-scale, energetic balance in the cardiovascular system, including the energies of the arterial pressure wave, blood flow, and the smooth muscle tone of arterial walls. Because the emphasis of our study was on tracking beat-to-beat changes in the balance of forces, we used a simplified representation of the blood pressure wave as a trapezoidal pressure-pulse with a strong-discontinuity leading front. This allowed significant reduction in the number of required parameters. Our approach has been validated using theoretical analysis, and its accuracy has been confirmed experimentally. The model predicted the dynamics of arterial pressure in human subjects undergoing physiological tests and provided insights into the relationships between arterial pressure and pressure wave velocity. PMID:18269976

  9. Design of a V/STOL propulsion system for a large-scale fighter model

    NASA Technical Reports Server (NTRS)

    Willis, W. S.

    1981-01-01

    Modifications were made to the existing large-scale STOL fighter model to simulate a V/STOL configuration. Modifications included the substitution of two-dimensional lift/cruise exhaust nozzles in the nacelles and the addition of a third J97 engine in the fuselage to supply a remote exhaust nozzle simulating a Remote Augmented Lift System. A preliminary design of the inlet and exhaust ducting for the third engine was developed, and a detailed design was completed of the hot exhaust ducting and remote nozzle.

  10. Aerodynamic characteristics of a large scale model with a swept wing and augmented jet flap

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.

    1971-01-01

    Data from tests of a large-scale swept augmentor wing model in the 40- by 80-foot wind tunnel are presented. The data include longitudinal characteristics with and without a horizontal tail, as well as results of a preliminary investigation of lateral-directional characteristics. The augmentor flap deflection was varied from 0 deg to 70.6 deg at isentropic jet thrust coefficients of 0 to 1.47. The tests were made at Reynolds numbers from 2.43 to 4.1 million.

  11. Sensitivity analysis of key components in large-scale hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods on key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We performed our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  12. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaptation measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and local detail that influences their temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry, and the representation of hydrologic processes influencing the surface water balance, such as open water evaporation from inundated areas and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation, including its spatio-temporal variations at the large scale, yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D-FM being capable of representing the model domain as a flexible mesh, model accuracy is only improved at relevant locations (river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is thereby schematized by 2D floodplains derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with one-way coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to two-way coupling to fully represent evaporation
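
    The coupling architecture described above can be outlined as a simple exchange loop. The classes and method names below are hypothetical placeholders, not the actual interfaces of PCR-GLOBWB or DELFT3D-FM, and the water-balance arithmetic is purely schematic.

      # Schematic two-way coupling loop between a hydrologic and a
      # hydrodynamic model. All classes, methods, and numbers are
      # hypothetical stand-ins for the real model interfaces.

      class HydrologicModel:          # stand-in for PCR-GLOBWB
          def step(self, re_infiltration):
              # water balance update; returns runoff volume to be routed
              return max(0.0, 10.0 - 0.5 * re_infiltration)

      class HydrodynamicModel:        # stand-in for DELFT3D-FM
          def step(self, lateral_inflow):
              # route inflow; return open-water losses fed back to hydrology
              depth = 0.1 * lateral_inflow
              return {"evaporation": 0.02 * depth, "infiltration": 0.05 * depth}

      hydro, dyn = HydrologicModel(), HydrodynamicModel()
      feedback = 0.0
      for day in range(365):
          runoff = hydro.step(re_infiltration=feedback)   # land surface
          losses = dyn.step(lateral_inflow=runoff)        # routing
          feedback = losses["infiltration"]               # two-way feedback
      print(runoff, feedback)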

  13. Fast 3-D large-scale gravity and magnetic modeling using unstructured grids and an adaptive multilevel fast multipole method

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Tang, Jingtian; Kalscheuer, Thomas; Maurer, Hansruedi

    2017-01-01

    A novel fast and accurate algorithm is developed for large-scale 3-D gravity and magnetic modeling problems. An unstructured grid discretization is used to approximate sources with arbitrary mass and magnetization distributions. A novel adaptive multilevel fast multipole (AMFM) method is developed to reduce the modeling time. An observation octree is constructed on a set of arbitrarily distributed observation sites, while a source octree is constructed on a source tetrahedral grid. A novel characteristic is the independence between the observation octree and the source octree, which simplifies the implementation of different survey configurations such as airborne and ground surveys. Two synthetic models, a cubic model and a half-space model with mountain-valley topography, are tested. As compared to analytical solutions of gravity and magnetic signals, excellent agreements of the solutions verify the accuracy of our AMFM algorithm. Finally, our AMFM method is used to calculate the terrain effect on an airborne gravity data set for a realistic topography model represented by a triangular surface retrieved from a digital elevation model. Using 16 threads, more than 5800 billion interactions between 1,002,001 observation points and 5,839,830 tetrahedral elements are computed in 453.6 s. A traditional first-order Gaussian quadrature approach requires 3.77 days. Hence, our new AMFM algorithm not only can quickly compute the gravity and magnetic signals for complicated problems but also can substantially accelerate the solution of 3-D inversion problems.
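
    For orientation, the baseline that the AMFM algorithm accelerates is the direct O(N x M) summation of source contributions at every observation site. The sketch below does this for point masses (each tetrahedral element reduced to a point at its centroid); the sizes, masses, and softening constant are illustrative.

      import numpy as np

      # Brute-force gravity: the O(N*M) pairwise summation that fast
      # multipole methods approximate hierarchically. Sources are point
      # masses; all values and the softening length are illustrative.
      G = 6.674e-11
      rng = np.random.default_rng(4)
      src = rng.uniform(0.0, 1000.0, (5000, 3))   # source positions (m)
      mass = rng.uniform(1e6, 1e8, 5000)          # element masses (kg)
      obs = rng.uniform(0.0, 1000.0, (200, 3))    # observation sites (m)

      def gravity(obs, src, mass):
          g = np.zeros_like(obs)
          for k, p in enumerate(obs):             # loop over sites
              r = src - p                         # vectors toward sources
              d3 = (np.sum(r * r, axis=1) + 1e-6) ** 1.5  # softened |r|^3
              g[k] = G * np.sum(mass[:, None] * r / d3[:, None], axis=0)
          return g

      print(gravity(obs, src, mass)[0])           # acceleration (m/s^2)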

  14. Changes of traffic characteristics after large-scale aggregation in 3Tnet: modeling, analysis, and evaluation

    NASA Astrophysics Data System (ADS)

    Yuan, Chi; Huang, Junbin; Li, Zhengbin; He, Yongqi; Xu, Anshi

    2007-11-01

    Understanding network traffic behavior is essential for all aspects of network design and operation, e.g. component design, protocol design, provisioning, operations, administration and maintenance (OAM). A careful study of traffic behavior can lead to improvements in underlying protocols to attain greater efficiencies and higher performance. Many studies have shown that traffic in Ethernet and other networks, whether local or wide area, exhibits properties of self-similarity. Several empirical studies on network traffic indicate that this traffic is self-similar in nature. However, the network modeling methods used in current networks have been primarily designed and analyzed under the assumption of the traditional Poisson arrival process. These "Poisson-like" models suggest that the network traffic is smooth, and are inherently unable to capture the self-similar characteristic of traffic. In this paper, after introducing China's high performance broadband information network (3Tnet), an aggregation model at the access convergence router (ACR) is proposed and analyzed. We studied the impact of large-scale aggregation applied at the edge of 3Tnet in terms of the self-similarity level observed in the output traffic in the presence of self-similar input traffic. Two formulas are presented to describe the changes of the Hurst parameter. Using the OPNET software simulator, the changes in traffic characteristics after large-scale aggregation in 3Tnet were studied extensively. The theoretical analysis results were consistent with the simulation results.
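
    The Hurst parameter tracked by the two formulas can be estimated from a trace with the classic aggregated-variance method, sketched below on synthetic uncorrelated counts (for which H should come out near 0.5). The trace, block sizes, and fitting range are illustrative; 3Tnet traffic measurements would replace the random input.

      import numpy as np

      # Aggregated-variance estimate of the Hurst parameter H. For
      # self-similar traffic, Var(X^(m)) ~ m^(2H-2), so the slope of
      # log Var vs log m gives H. Input is synthetic uncorrelated noise.
      rng = np.random.default_rng(5)
      x = rng.normal(size=2**16)               # stand-in packet counts/slot

      ms = [2**k for k in range(1, 9)]         # aggregation block sizes
      var = [np.var(x[: len(x) // m * m].reshape(-1, m).mean(axis=1))
             for m in ms]

      slope = np.polyfit(np.log(ms), np.log(var), 1)[0]
      H = 1.0 + slope / 2.0
      print(f"estimated Hurst parameter: {H:.2f}")  # ~0.5 for this input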

  15. An assembly model for simulation of large-scale ground water flow and transport.

    PubMed

    Huang, Junqi; Christ, John A; Goltz, Mark N

    2008-01-01

    When managing large-scale ground water contamination problems, it is often necessary to model flow and transport using finely discretized domains--for instance (1) to simulate flow and transport near a contamination source area or in the area where a remediation technology is being implemented; (2) to account for small-scale heterogeneities; (3) to represent ground water-surface water interactions; or (4) some combination of these scenarios. A model with a large domain and fine-grid resolution will need extensive computing resources. In this work, a domain decomposition-based assembly model implemented in a parallel computing environment is developed, which will allow efficient simulation of large-scale ground water flow and transport problems using domain-wide grid refinement. The method employs common ground water flow (MODFLOW) and transport (RT3D) simulators, enabling the solution of almost all commonly encountered ground water flow and transport problems. The basic approach partitions a large model domain into any number of subdomains. Parallel processors are used to solve the model equations within each subdomain. Schwarz iteration is applied to match the flow solution at the subdomain boundaries. For the transport model, an extended numerical array is implemented to permit the exchange of dispersive and advective flux information across subdomain boundaries. The model is verified using a conventional single-domain model. Model simulations demonstrate that the proposed model operated in a parallel computing environment can result in considerable savings in computer run times (between 50% and 80%) compared with conventional modeling approaches and may be used to simulate grid discretizations that were formerly intractable.
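
    The Schwarz matching used at the subdomain boundaries can be demonstrated on the simplest possible problem: a 1D Poisson equation split into two overlapping blocks that are solved alternately, each taking its boundary values from the other's latest iterate. The grid, overlap, and sweep count below are arbitrary choices for illustration.

      import numpy as np

      # Overlapping Schwarz iteration for u'' = 1 on [0, 1], u(0)=u(1)=0,
      # whose exact solution is u = x(x-1)/2. Two overlapping subdomains
      # are solved alternately with Dirichlet transmission data.
      n = 101
      x = np.linspace(0.0, 1.0, n); h = x[1] - x[0]
      f = np.ones(n)
      u = np.zeros(n)
      left, right = (0, 60), (40, n - 1)      # overlapping index ranges

      def solve_block(u, lo, hi):
          # Direct solve on interior nodes lo+1..hi-1 with boundary values
          # u[lo], u[hi] held fixed (the Schwarz transmission data).
          m = hi - lo - 1
          A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
               + np.diag(np.ones(m - 1), -1)) / h**2
          rhs = f[lo + 1:hi].copy()
          rhs[0] -= u[lo] / h**2
          rhs[-1] -= u[hi] / h**2
          u[lo + 1:hi] = np.linalg.solve(A, rhs)

      for _ in range(30):                      # alternating Schwarz sweeps
          solve_block(u, *left)
          solve_block(u, *right)

      print(np.max(np.abs(u - 0.5 * x * (x - 1.0))))  # error vs exact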

  16. LipidWrapper: An Algorithm for Generating Large-Scale Membrane Models of Arbitrary Geometry

    PubMed Central

    Durrant, Jacob D.; Amaro, Rommie E.

    2014-01-01

    As ever larger and more complex biological systems are modeled in silico, approximating physiological lipid bilayers with simple planar models becomes increasingly unrealistic. In order to build accurate large-scale models of subcellular environments, models of lipid membranes with carefully considered, biologically relevant curvature will be essential. In the current work, we present a multi-scale utility called LipidWrapper capable of creating curved membrane models with geometries derived from various sources, both experimental and theoretical. To demonstrate its utility, we use LipidWrapper to examine an important mechanism of influenza virulence. A copy of the program can be downloaded free of charge under the terms of the open-source FreeBSD License from http://nbcr.ucsd.edu/lipidwrapper. LipidWrapper has been tested on all major computer operating systems. PMID:25032790

  17. A New Statistically based Autoconversion rate Parameterization for use in Large-Scale Models

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Zhang, Junhua; Lohmann, Ulrike

    2002-01-01

    The autoconversion rate is a key process for the formation of precipitation in warm clouds. In climate models, physical processes such as autoconversion rate, which are calculated from grid mean values, are biased, because they do not take subgrid variability into account. Recently, statistical cloud schemes have been introduced in large-scale models to account for partially cloud-covered grid boxes. However, these schemes do not include the in-cloud variability in their parameterizations. In this paper, a new statistically based autoconversion rate considering the in-cloud variability is introduced and tested in three cases using the Canadian Single Column Model (SCM) of the global climate model. The results show that the new autoconversion rate improves the model simulation, especially in terms of liquid water path in all three case studies.
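
    The bias the new scheme corrects follows from Jensen's inequality: a convex rate averaged over subgrid variability exceeds the rate evaluated at the mean. The sketch below uses a power-law rate and a gamma distribution of cloud water as stand-ins; the exponent, constant, and shape parameter are illustrative, not the paper's parameterization.

      import numpy as np

      # Grid-mean bias of a nonlinear autoconversion rate A(q) = c * q**p:
      # averaging A over a subgrid PDF of cloud water differs from A at
      # the grid-mean value. Constants and the gamma PDF are illustrative.
      rng = np.random.default_rng(6)
      c, p = 1350.0, 2.47              # assumed convex power-law rate
      q_mean = 5e-4                    # grid-mean cloud water (kg/kg)
      shape = 2.0                      # gamma shape: in-cloud variability
      q = rng.gamma(shape, q_mean / shape, 100000)  # subgrid samples

      rate_of_mean = c * q_mean**p
      mean_of_rate = np.mean(c * q**p)
      # Ratio > 1: ignoring subgrid variability underestimates the rate.
      print(mean_of_rate / rate_of_mean)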

  18. Calibration of a large-scale semi-distributed hydrological model for the continental United States

    NASA Astrophysics Data System (ADS)

    Li, S.; Lohmann, D.

    2011-12-01

    Recent major flood losses have raised the awareness of flood risk worldwide. In large-scale (e.g., country-wide) flood simulation, a semi-distributed hydrological model shows its advantage in capturing the spatial heterogeneity of hydrological characteristics within a basin at relatively low computational cost. However, it is still very challenging to calibrate such a model over large scales and a wide variety of hydro-climatic conditions. The objectives of this study are (1) to compare the effectiveness of state-of-the-art evolutionary multiobjective algorithms in calibrating a semi-distributed hydrological model used in the RMS flood loss model; and (2) to calibrate the model over the entire continental United States. First, the computational efficiency of the following four algorithms is evaluated: the Non-Dominated Sorted Genetic Algorithm II (NSGAII), the Strength Pareto Evolutionary Algorithm 2 (SPEA2), the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII), and the Epsilon-Dominance Multi-Objective Evolutionary Algorithm (ɛMOEA). The test was conducted on four river basins with a wide variety of hydro-climatic conditions in the US. The optimization objectives include RMSE and high-flow RMSE. Results of the analysis indicate that NSGAII has the best performance in terms of effectiveness and stability. We then applied the modified version of NSGAII to calibrate the hydrological model over the entire continental US. Comparison with observations and published data shows that the performance of the calibrated model is good overall. This well-calibrated model allows more accurate modeling of flood risk and loss in the continental United States. Furthermore, it will allow underwriters to better manage the exposure.
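
    All four algorithms compared above rest on the same Pareto-dominance test over the two objectives (RMSE and high-flow RMSE). A minimal version is sketched below, with synthetic objective values standing in for actual calibration runs.

      import numpy as np

      # Pareto dominance and non-dominated filtering, the core test shared
      # by NSGAII, SPEA2, and the epsilon-dominance variants. Objective
      # pairs here are synthetic stand-ins for (RMSE, high-flow RMSE).
      def dominates(a, b):
          # a dominates b: no worse in every objective, better in at least one
          return np.all(a <= b) and np.any(a < b)

      def pareto_front(points):
          return [p for p in points
                  if not any(dominates(q, p) for q in points if q is not p)]

      rng = np.random.default_rng(7)
      objs = [rng.uniform(0.0, 1.0, 2) for _ in range(50)]
      front = pareto_front(objs)
      print(len(front), "non-dominated parameter sets")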

  19. Acoustic characteristics of large-scale STOL model at forward speed

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Aoyagi, K.; Koenig, D. G.

    1972-01-01

    Wind-tunnel investigations of the acoustic characteristics of the externally blown jet flap (EBF) and augmentor wing STOL concepts are discussed. The large-scale EBF model was equipped with a triple-slotted flap blown by four JT15D turbofan engines with circular, coannular exhaust nozzles. The large-scale augmentor wing model was equipped with an unlined augmentor blown by a slot primary nozzle. The effects of airspeed and angle of attack on the acoustics of the EBF were small. At a forward speed of 60 knots, the impingement noise of the landing flap was approximately 2 dB lower than in the static tests. Angle of attack increased the impingement noise by approximately 0.1 dB per degree. Flap deflection had a greater effect on the acoustics of the augmentor wing than did airspeed. For a nozzle pressure ratio of 1.9, the peak perceived noise level of the landing flap was 3 to 5 PNdB higher than that of the takeoff flap. The total sound power was also significantly higher for landing, indicating that turning in the augmentor generated acoustic energy. Airspeed produced a small aft shift in acoustic directivity with no significant change in the peak perceived noise levels or sound power levels.

  20. Acoustic characteristics of large-scale STOL models at forward speed

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Aoyagi, K.; Koenig, D. G.

    1972-01-01

    Wind-tunnel investigations of the acoustic characteristics of the externally blown jet flap (EBF) and augmentor wing STOL concepts are discussed. The large-scale EBF model was equipped with a triple-slotted flap blown by four JT15D turbofan engines with circular, coannular exhaust nozzles. The large-scale augmentor wing model was equipped with an unlined augmentor blown by a slot primary nozzle. The effects of airspeed and angle of attack on the acoustics of the EBF were small. Flap deflection had a greater effect on the acoustics of the augmentor wing than did airspeed. The total sound power was also significantly higher for landing, indicating that turning in the augmentor generated acoustic energy. Airspeed produced a small aft shift in acoustic directivity with no significant change in the peak perceived noise levels or sound power levels. Small-scale acoustic research on the augmentor wing has shown that by blowing an acoustically treated augmentor with a lobed primary nozzle, the 95-PNdB noise level goal can be achieved or surpassed.

  1. Comparison of the KAMELEON fire model to large-scale open pool fire data

    SciTech Connect

    Nicolette, V.F.; Gritzo, L.A.; Holen, J.; Magnussen, B.F.

    1994-06-01

    A comparison of the KAMELEON fire model to large-scale open pool fire experimental data is presented. The model was used to calculate large-scale JP-4 pool fires with and without wind, and with and without large objects in the fire. The effect of wind and large objects on the fire environment is clearly seen. For the pool fire calculations without any object in the fire, excellent agreement is seen in the location of the oxygen-starved region near the pool center. Calculated flame temperatures are about 200-300 K higher than measured. This results in higher heat fluxes back to the fuel pool and higher fuel evaporation rates (by a factor of 2). Fuel concentrations at lower elevations and peak soot concentrations are in good agreement with data. For pool fire calculations with objects, similar trends in the fire environment are observed. Excellent agreement is seen in the distribution of the heat flux around a cylindrical calorimeter in a rectangular pool with wind effects. The magnitude of the calculated heat flux to the object is too high by a factor of 2 relative to the test data, due to the higher calculated temperatures. For the case of a large flat plate adjacent to a circular pool, excellent qualitative agreement is seen between the predicted and measured flame shapes as a function of wind.

  2. Generative models of rich clubs in Hebbian neuronal networks and large-scale human brain networks

    PubMed Central

    Vértes, Petra E.; Alexander-Bloch, Aaron; Bullmore, Edward T.

    2014-01-01

    Rich clubs arise when nodes that are ‘rich’ in connections also form an elite, densely connected ‘club’. In brain networks, rich clubs incur high physical connection costs but also appear to be especially valuable to brain function. However, little is known about the selection pressures that drive their formation. Here, we take two complementary approaches to this question: firstly we show, using generative modelling, that the emergence of rich clubs in large-scale human brain networks can be driven by an economic trade-off between connection costs and a second, competing topological term. Secondly we show, using simulated neural networks, that Hebbian learning rules also drive the emergence of rich clubs at the microscopic level, and that the prominence of these features increases with learning time. These results suggest that Hebbian learning may provide a neuronal mechanism for the selection of complex features such as rich clubs. The neural networks that we investigate are explicitly Hebbian, and we argue that the topological term in our model of large-scale brain connectivity may represent an analogous connection rule. This putative link between learning and rich clubs is also consistent with predictions that integrative aspects of brain network organization are especially important for adaptive behaviour. PMID:25180309
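
    A toy generative model in the spirit described above, not the authors' exact formulation: starting from spatially embedded nodes, edges are drawn with probability proportional to a distance-based cost penalty times a degree-based value term, and a sufficiently strong value exponent lets well-connected nodes keep attracting links to each other. The exponents eta and gamma and all sizes are illustrative assumptions:

        import numpy as np

        def economical_network(coords, n_edges, eta=2.0, gamma=1.5, seed=0):
            rng = np.random.default_rng(seed)
            n = len(coords)
            dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
            adj = np.zeros((n, n), dtype=bool)
            deg = np.zeros(n)
            rows, cols = np.triu_indices(n, k=1)
            for _ in range(n_edges):
                # cost term penalizes long edges; value term rewards rich ends
                score = dist[rows, cols]**-eta * (deg[rows] * deg[cols] + 1.0)**gamma
                score[adj[rows, cols]] = 0.0      # each pair connects only once
                pick = rng.choice(len(score), p=score / score.sum())
                i, j = rows[pick], cols[pick]
                adj[i, j] = adj[j, i] = True
                deg[i] += 1.0
                deg[j] += 1.0
            return adj

        coords = np.random.default_rng(1).uniform(size=(60, 2))
        adj = economical_network(coords, n_edges=300)
        print("highest degrees:", np.sort(adj.sum(axis=0))[-5:])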

  3. Modeling the interdecadal eurasian snow cover variations influenced by large-scale atmospheric modes

    NASA Astrophysics Data System (ADS)

    Shmakin, A. B.; Popova, V. V.

    2003-04-01

    The variations of snow water equivalent (SWE) in Eurasia during the last 100 years have been evaluated using a simplified model of heat and water exchange at the land surface. The model is designed for a monthly time step, and its equations are written in terms of deviations from the average climatic regime. The forcing anomalies of meteorological parameters for the 20th century at each grid cell were specified according to large-scale atmospheric indices (such as the NAO, PNA, etc.) and regressions between the indices and the meteorological variables. The results were tested against data observed in Russia over several decades at regular stations and in typical environments in their vicinity. The observed data, the Former Soviet Union Hydrological Snow Surveys, were obtained from the National Snow and Ice Data Center (NSIDC), University of Colorado at Boulder. The main features of the SWE spatial distribution and its interdecadal variance were reproduced satisfactorily, but the errors were greater in regions with poorer correlation between atmospheric variables and circulation indices. Regions located closer to the Atlantic and, to a lesser extent, the Pacific coast demonstrated better agreement with observed data. The large-scale atmospheric modes most responsible for the Eurasian SWE variations at the decadal time scale are the NAO and the intensity of the Aleutian Low. The study was supported by the Russian Foundation for Basic Research (grants 01-05-64707 and 01-05-64395).

  4. Large-scale Monte Carlo simulations for the depinning transition in Ising-type lattice models

    NASA Astrophysics Data System (ADS)

    Si, Lisha; Liao, Xiaoyun; Zhou, Nengji

    2016-12-01

    With the developed "extended Monte Carlo" (EMC) algorithm, we have studied the depinning transition in Ising-type lattice models by extensive numerical simulations, taking the random-field Ising model with a driving field and the driven bond-diluted Ising model as examples. In comparison with the usual Monte Carlo method, the EMC algorithm exhibits greater efficiency of the simulations. Based on the short-time dynamic scaling form, both the transition field and critical exponents of the depinning transition are determined accurately via the large-scale simulations with the lattice size up to L = 8912, significantly refining the results in earlier literature. In the strong-disorder regime, a new universality class of the Ising-type lattice model is unveiled with the exponents β = 0.304(5), ν = 1.32(3), z = 1.12(1), and ζ = 0.90(1), quite different from that of the quenched Edwards-Wilkinson equation.

  5. A case study of large-scale structure in a 'hot' model universe

    NASA Technical Reports Server (NTRS)

    Centrella, Joan M.; Gallagher, John S., III; Melott, Adrian L.; Bushouse, Howard A.

    1988-01-01

    Large-scale structure is studied in an Omega(0) = 1 model universe filled with 'hot' dark matter. A particle mesh computer code is used to calculate the development of gravitational instabilities in 64-cubed mass clouds on a 64-cubed three-dimensional grid over an expansion factor of about 1000. The present epoch is identified by matching the slope of the model particle-particle two-point correlation function with that obtained from observations of galaxies, and the model then corresponds to a cubical sample of the universe of about 105/h Mpc on a side. Properties of the simulated universe are investigated by casting the model quantities into observer's coordinates and comparing the results with observations of the spatial and velocity distributions of luminous matter. It is concluded based on simple arguments that current limits on the time of galaxy formation do not rule out 'hot' dark matter.

  6. Large-scale shell-model calculations on the spectroscopy of N <126 Pb isotopes

    NASA Astrophysics Data System (ADS)

    Qi, Chong; Jia, L. Y.; Fu, G. J.

    2016-07-01

    Large-scale shell-model calculations are carried out in the model space including the neutron-hole orbitals 2p1/2, 1f5/2, 2p3/2, 0i13/2, 1f7/2, and 0h9/2 to study the structure and electromagnetic properties of neutron-deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in the isotopes 194-206Pb. The lighter isotopes are calculated with an importance-truncation approach constructed based on the monopole Hamiltonian. The full shell-model results also agree well with our generalized-seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying 0+ and 2+ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configurations in this region.

  7. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    SciTech Connect

    Jakob, Christian

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  8. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-03-01

    The Prediction in Ungauged Basins (PUB) scientific initiative (2003-2012, by the IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines for making predictions in catchments without observed runoff data. At present, there is increased interest in applying catchment models to large domains and large data samples in a multi-basin manner. However, such modelling involves several sources of uncertainty, caused by imperfect input data, particularly regional and global databases. This may lead to inaccurate model parameterisation and incomplete process understanding. In order to bridge the gap between the best practices for single catchments and large-scale hydrology, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). Using examples from a recent HYPE hydrological model set-up on the Indian subcontinent, named India-HYPE v1.0, we explore the recommendations, indicate challenges and recommend quality checks to avoid erroneous assumptions. We identify the obstacles and ways to overcome them, and describe the work process related to: (a) errors and inconsistencies in global databases, unknown human impacts, and poor data quality; (b) robust approaches to identify parameters using a stepwise calibration approach, remote sensing data, expert knowledge and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong hydro-climatic gradient over the subcontinent, a single model can adequately describe the spatial variability in dominant hydrological processes at the catchment scale. Eventually, during calibration of India-HYPE, the median Kling-Gupta Efficiency for
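
    The truncated final sentence cites the Kling-Gupta Efficiency; for reference, a minimal implementation of that standard metric (Gupta et al., 2009), with synthetic flows standing in for India-HYPE output:

        import numpy as np

        def kling_gupta_efficiency(sim, obs):
            """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2); 1 is perfect."""
            r = np.corrcoef(sim, obs)[0, 1]        # linear correlation
            alpha = np.std(sim) / np.std(obs)      # variability ratio
            beta = np.mean(sim) / np.mean(obs)     # bias ratio
            return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

        rng = np.random.default_rng(0)
        obs = rng.gamma(2.0, 5.0, size=1000)       # synthetic observed flows
        print(kling_gupta_efficiency(1.1 * obs, obs))   # mild bias -> KGE < 1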

  9. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity

    PubMed Central

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high-performance computing technology in the last decade, the simulation of large-scale neural networks able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data are more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data are incomplete or have large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a postsynaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large-scale networks. We show and discuss the results of simulations on simple two-population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272
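
    A toy sketch of the homeostatic growth rule that drives such models, not the NEST implementation or its API: free synaptic elements grow while a neuron's activity is below its target, and paired elements become synapses that raise activity. The constants nu and gain, the pairing rule, and the omission of synapse deletion for overshooting neurons are all simplifying assumptions:

        import numpy as np

        def homeostatic_growth(rates, target, nu=0.05, gain=0.5, steps=1000):
            """Grow free elements at d(elements)/dt = nu*(1 - rate/target);
            pair them into synapses, each of which nudges activity upward."""
            rates = np.asarray(rates, dtype=float).copy()
            elements = np.zeros_like(rates)
            for _ in range(steps):
                elements = np.clip(elements + nu * (1.0 - rates / target), 0.0, None)
                formed = np.minimum(elements, 1.0)   # elements that find partners
                elements -= formed
                rates += gain * formed               # new synapses raise activity
            return rates

        # neurons below the set point of 8.0 drift toward it; the neuron above
        # it stays put, since this toy version omits synapse deletion
        print(homeostatic_growth([1.0, 4.0, 12.0], target=8.0))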

  10. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, which caused 1126 deaths and displaced around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy for coping with floods is to reduce the risk of the hazard through flood defence structures, such as dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Adaptive behaviour towards flood risk reduction and the interaction between governments, insurers, and individuals have, however, hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European member states, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model, allowing a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution towards overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.

  11. Towards large scale modelling of wetland water dynamics in northern basins.

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Sapriza, G.; Stone, L.; Davison, B.; Pietroniro, A.; Quinton, W. L.; Spence, C.; Wheater, H. S.

    2015-12-01

    Understanding the hydrological behaviour of low-topography, wetland-dominated subarctic areas is a major issue for the improvement of large-scale hydrological models. These wet organic soils cover a large extent of northern North America and have a considerable impact on the rainfall-runoff response of a catchment. Moreover, their strong interactions with the lower atmosphere and the carbon cycle make these areas a noteworthy component of the regional climate system. In the framework of the Changing Cold Regions Network (CCRN), this study aims to provide a model for wetland water dynamics that can be used for large-scale applications in cold regions. The modelling system has two main components: (a) the simulation of surface runoff using the Modélisation Environmentale Communautaire - Surface and Hydrology (MESH) land surface model driven with several gridded atmospheric datasets, and (b) the routing of surface runoff using the WATROUTE channel scheme. As a preliminary study, we focus on two small representative study basins in northern Canada: Scotty Creek, in the lower Liard River valley of the Northwest Territories, and Baker Creek, located a few kilometres north of Yellowknife. Both areas present characteristic landscapes dominated by a series of peat plateaus, channel fens, small lakes and bogs. Moreover, they constitute important fieldwork sites with detailed data to support our modelling study. The challenge for our new wetland model is to represent the hydrological functioning of the various landscape units encountered in these watersheds, and their interactions, using simple numerical formulations that can later be extended to larger basins such as the Mackenzie River basin. Using observed datasets, the ability of the model to simulate the temporal evolution of hydrological variables such as water table depth, frost table depth and discharge is assessed.

  12. Pangolin v1.0, a conservative 2-D transport model for large scale parallel calculation

    NASA Astrophysics Data System (ADS)

    Praga, A.; Cariolle, D.; Giraud, L.

    2014-07-01

    To exploit the possibilities of parallel computers, we designed a large-scale, two-dimensional atmospheric transport model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach was chosen both for mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude-longitude grid, Pangolin uses a quasi-area-preserving reduced latitude-longitude grid. The features of the regular grid are exploited to improve parallel performance, and a custom domain decomposition algorithm is presented. To assess the validity of the transport scheme, its results are compared with state-of-the-art models on analytical test cases. Finally, parallel performance is shown in terms of strong scaling and confirms efficient scalability up to a few hundred cores.
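
    A minimal one-dimensional illustration of why the finite-volume (flux-form) choice preserves mass, using first-order upwind fluxes on a periodic grid; Pangolin's actual scheme and reduced grid are considerably more elaborate:

        import numpy as np

        def advect_upwind(q, u, dx, dt, steps):
            """Conservative update q_i += dt/dx * (F_{i-1/2} - F_{i+1/2}) with
            upwind flux F_{i+1/2} = u*q_i for u > 0. Because every flux leaves
            one cell and enters its neighbour, total mass is conserved."""
            q = q.copy()
            for _ in range(steps):
                flux = u * q
                q += dt / dx * (np.roll(flux, 1) - flux)
            return q

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        q0 = np.exp(-200.0 * (x - 0.3)**2)
        q1 = advect_upwind(q0, u=1.0, dx=x[1] - x[0], dt=0.002, steps=100)
        print(abs(q1.sum() - q0.sum()))    # ~0: mass preserved to round-off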

  13. A simple simulation model of tuberculosis epidemiology for use without large-scale computers.

    PubMed

    Azuma, Y

    1975-01-01

    A large-scale computing service is not always available in the many countries with tuberculosis problems needing epidemiological analysis. To facilitate work in such countries, a simple epidemiological model was developed to calculate annual trends in the prevalence and incidence of tuberculosis and its infection, in tuberculosis mortality, and in BCG coverage, using average parameter values not specific to age groups or birth-year cohorts. To test its approximation capabilities and limits, the model was applied to epidemiological data from Japan, where sufficient information was available from repeated nationwide sample surveys and national statistics. The approximation was found to be satisfactory within certain limits. The model is best used with a desk-top computer, but the calculations can be performed with a small calculator or even by hand.

  14. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel.

    PubMed

    Yuan, Liming; Smith, Alex C

    2015-05-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect.

  15. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel

    PubMed Central

    Yuan, Liming; Smith, Alex C.

    2015-01-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect. PMID:26190905

  16. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large-scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and the time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining localization accuracy in large-scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are treated as particles with masses, connected to their neighbor nodes by virtual springs. Starting from randomly set positions, the virtual springs force the particles to move towards their original positions, and correspondingly the node positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the spring forces with the neighbor nodes. The computational and communication complexity is O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network size. Three patches are proposed to avoid local optima, kick out bad nodes, and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite increases in the network size. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network size.
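
    A minimal sketch of the spring relaxation described above, for a single blind two-dimensional node with range measurements to three neighbours; the spring constant, iteration count, and noise-free ranges are illustrative assumptions:

        import numpy as np

        def lasm_localize(pos, anchors, ranges, k=0.1, n_iter=2000):
            """Move a blind node along the resultant virtual-spring force until
            its distances to the neighbours match the measured ranges."""
            pos = pos.astype(float).copy()
            for _ in range(n_iter):
                force = np.zeros(2)
                for anchor, rest in zip(anchors, ranges):
                    vec = anchor - pos
                    dist = np.linalg.norm(vec)
                    # Hooke's law: stretched springs pull, compressed ones push
                    force += k * (dist - rest) * vec / dist
                pos += force
            return pos

        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        true_pos = np.array([4.0, 3.0])
        ranges = np.linalg.norm(anchors - true_pos, axis=1)   # noise-free
        print(lasm_localize(np.array([9.0, 9.0]), anchors, ranges))  # ~(4, 3)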

  17. Meta-Analysis in Human Neuroimaging: Computational Modeling of Large-Scale Databases

    PubMed Central

    Fox, Peter T.; Lancaster, Jack L.; Laird, Angela R.; Eickhoff, Simon B.

    2016-01-01

    Spatial normalization - applying standardized coordinates as anatomical addresses within a reference space - was introduced to human neuroimaging research nearly 30 years ago. Over these three decades, an impressive series of methodological advances has adopted, extended, and popularized this standard. Collectively, this work has generated a methodologically coherent literature of unprecedented rigor, size, and scope. Large-scale online databases have compiled these observations and their associated meta-data, stimulating the development of meta-analytic methods to exploit this expanding corpus. Coordinate-based meta-analytic methods have emerged and evolved in rigor and utility. Early methods computed cross-study consensus in a manner roughly comparable to traditional (nonimaging) meta-analysis. Recent advances now compute coactivation-based connectivity, connectivity-based functional parcellation, and complex network models powered by data sets representing tens of thousands of subjects. Meta-analyses of human neuroimaging data in large-scale databases now stand at the forefront of computational neurobiology. PMID:25032500

  18. Inclusive constraints on unified dark matter models from future large-scale surveys

    SciTech Connect

    Camera, Stefano; Carbone, Carmelita; Moscardini, Lauro E-mail: carmelita.carbone@unibo.it

    2012-03-01

    In recent years, cosmological models in which the properties of the dark components of the Universe - dark matter and dark energy - are accounted for by a single 'dark fluid' have drawn increasing attention and interest. Amongst many proposals, Unified Dark Matter (UDM) cosmologies are promising candidates as effective theories. In these models, a scalar field with a non-canonical kinetic term in its Lagrangian mimics both the accelerated expansion of the Universe at late times and the clustering properties of the large-scale structure of the cosmos. However, UDM models also present peculiar behaviours, the most interesting being that the perturbations in the dark-matter component of the scalar field have a non-negligible speed of sound. This gives rise to an effective Jeans scale for the Newtonian potential, below which the dark fluid no longer clusters. This implies a growth of structure fairly different from that of the concordance ΛCDM model. In this paper, we demonstrate that forthcoming large-scale surveys will be able to discriminate between viable UDM models and ΛCDM to a good degree of accuracy. For this purpose, the planned Euclid satellite will be a powerful tool, since it will provide very accurate data on galaxy clustering and the weak lensing effect of cosmic shear. Finally, we also exploit the constraining power of the ongoing CMB Planck experiment. Although our approach is conservative, including only well-understood linear dynamics, we also show what could be done if some amount of non-linear information were included.

  19. Large-scale shell-model calculations of nuclei around mass 210

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Higashiyama, K.; Yoshinaga, N.

    2016-06-01

    Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For the phenomenological effective two-body interaction, one set of monopole-pairing and quadrupole-quadrupole interactions, including multipole-pairing interactions, is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.

  20. A review of large-scale LNG spills: experiments and modeling.

    PubMed

    Luketa-Hanlin, Anay

    2006-05-20

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  1. A review of large-scale LNG spills : experiment and modeling.

    SciTech Connect

    Luketa-Hanlin, Anay Josephine

    2005-04-01

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  2. Large scale landslide mud flow modeling, simulation, and comparison with observations

    NASA Astrophysics Data System (ADS)

    Liu, F.; Shao, X.; Zhang, B.

    2012-12-01

    A landslide is a catastrophic natural event. Modeling, simulation, and early warning of landslide events can protect lives and property; the study of landslides therefore has important scientific and practical value. In this research, we constructed a high-performance parallel fluid dynamics model to study the large-scale landslide transport and evolution process. The model solves the shallow-water equations derived from the three-dimensional Euler equations in a Cartesian coordinate system. Based on bottom topography, initial conditions, bottom friction, mudflow viscosity, density, and other parameters, the model predicts the landslide transport process and deposition distribution. Using three-dimensional bottom topography data from a digital elevation model of the Zhou Qu area, the model reproduces the onset, transport, and deposition processes of the Zhou Qu landslide. It also calculates the spatial and temporal distribution of the mudflow transport route, deposition depth, and kinetic energy of the event. This model, together with an early warning system, can lead to significant improvements in construction planning in landslide-susceptible areas. (Figures: Zhou Qu topography from the digital elevation model; modeling results from PLM, the parallel landslide model.)

  3. A Data-driven Analytic Model for Proton Acceleration by Large-scale Solar Coronal Shocks

    NASA Astrophysics Data System (ADS)

    Kozarev, Kamen A.; Schwadron, Nathan A.

    2016-11-01

    We have recently studied the development of an eruptive filament-driven, large-scale off-limb coronal bright front (OCBF) in the low solar corona, using remote observations from the Solar Dynamics Observatory's Atmospheric Imaging Assembly (AIA) EUV telescopes. In that study, we obtained high-temporal-resolution estimates of the OCBF parameters regulating the efficiency of charged particle acceleration within the theoretical framework of diffusive shock acceleration (DSA). These parameters include the time-dependent front size, speed, and strength, as well as the upstream coronal magnetic field orientations with respect to the front's surface normal direction. Here we present an analytical particle acceleration model specifically developed to incorporate the coronal shock/compressive front properties described above, derived from remote observations. We verify the model's performance through a grid of idealized case runs using input parameters typical for large-scale coronal shocks, and demonstrate that the results approach the expected DSA steady-state behavior. We then apply the model to the event of 2011 May 11 using the OCBF time-dependent parameters derived by Kozarev et al. We find that the compressive front likely produced energetic particles as low as 1.3 solar radii in the corona. Comparing the modeled and observed fluences near Earth, we also find that the bulk of the acceleration during this event must have occurred above 1.5 solar radii. With this study we have taken a first step in using direct observations of shocks and compressions in the innermost corona to predict the onsets and intensities of solar energetic particle events.

  4. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For the MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize an agent's pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agent's beliefs in its prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agent's attitude towards the fluctuation of crop profits. Notice that these behavioral parameters, as inputs to the MAS model, are highly uncertain and not even measurable. Thus, we estimate the influence of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days it would take to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
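
    The paper's PCE-based variance decomposition is not reproduced here; as a stand-in, the quantity GSA ultimately reports (a first-order Sobol index) can be estimated crudely by binning, with synthetic stand-ins for the behavioural parameters and the profit response:

        import numpy as np

        def first_order_sobol(x, y, bins=20):
            """S_i = Var(E[Y|X_i]) / Var(Y), with the conditional expectation
            estimated by averaging Y inside equal-probability bins of X_i."""
            edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
            idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
            cond_means = np.array([y[idx == b].mean() for b in range(bins)])
            return cond_means.var() / y.var()

        rng = np.random.default_rng(0)
        n = 100_000
        kappa = rng.normal(size=n)      # stand-in for kappa_pr
        lam = rng.normal(size=n)        # stand-in for lambda
        profit = 3.0 * kappa + lam + 0.5 * rng.normal(size=n)
        print(first_order_sobol(kappa, profit))   # ~9/10.25: dominant factor
        print(first_order_sobol(lam, profit))     # ~1/10.25: minor factor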

  5. Excavating the Genome: Large Scale Mutagenesis Screening for the Discovery of New Mouse Models

    PubMed Central

    Sundberg, John P.; Dadras, Soheil S.; Silva, Kathleen A.; Kennedy, Victoria E.; Murray, Stephen A.; Denegre, James; Schofield, Paul N.; King, Lloyd E.; Wiles, Michael; Pratt, C. Herbert

    2016-01-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis. While not automated to the level of the physiological phenotyping, histopathology provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being developed. PMID:26551941

  6. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
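
    A bare-bones two-dimensional version of the basic ICP loop that the enhancements above build on, pairing nearest-neighbour matching with the closed-form SVD (Kabsch) alignment; the hierarchical octree search, early-warning mechanism, and escape scheme are not sketched:

        import numpy as np

        def icp_step(source, target):
            """Match each source point to its nearest target point, then apply
            the best-fit rigid transform between the matched sets."""
            d2 = ((source[:, None] - target[None, :])**2).sum(-1)
            matched = target[d2.argmin(axis=1)]
            mu_s, mu_t = source.mean(0), matched.mean(0)
            H = (source - mu_s).T @ (matched - mu_t)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:      # guard against reflections
                Vt[-1] *= -1.0
                R = Vt.T @ U.T
            t = mu_t - R @ mu_s
            return source @ R.T + t

        rng = np.random.default_rng(0)
        target = rng.uniform(size=(100, 2))
        a = 0.2                                        # misalignment angle
        R_true = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        source = (target - 0.1) @ R_true.T
        for _ in range(30):
            source = icp_step(source, target)
        print(np.abs(source - target).max())           # small once aligned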

  7. Large-scale shell model study of the newly found isomer in 136La

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Nishibata, H.; Odahara, A.; Shimoda, T.

    2016-07-01

    The doubly odd nucleus 136La is studied theoretically in terms of a large-scale shell model. The energy spectrum and transition rates are calculated and compared with the most recent experimental data. The isomerism is investigated for the first 14+ state, which was found to be an isomer in a previous study [Phys. Rev. C 91, 054305 (2015), 10.1103/PhysRevC.91.054305]. It is found that the 14+ state becomes an isomer due to a band crossing of two bands with completely different configurations. The yrast band with the (ν h11/2^-1 ⊗ π h11/2) configuration is investigated, revealing a staggering pattern in the M1 transition rates.

  8. Computational framework for modeling the dynamic evolution of large-scale multi-agent organizations

    NASA Astrophysics Data System (ADS)

    Lazar, Alina; Reynolds, Robert G.

    2002-07-01

    A multi-agent system model of the origins of an archaic state is developed. Agent interaction is mediated by a collection of rules. The rules are mined from a related large-scale database using two different techniques, one based on decision trees and the other on rough sets. The latter was used because the data collection techniques were associated with a certain degree of uncertainty. The generation of the rough-set rules was guided by genetic algorithms. Since the rules mediate agent interaction, the rule set with fewer rules and conditionals to check makes scaling up the simulation easier. The results suggest that explicitly dealing with uncertainty in rule formation can produce simpler rules than ignoring that uncertainty in situations where uncertainty is a factor in the measurement process.

  9. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, and to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).
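
    A toy chain-binomial SIR model showing the dependence of the final attack rate on R0, which motivates the combined-strategy conclusion above; it is vastly simpler than the 281-million-agent simulation, and the population size and infectious period are illustrative assumptions:

        import numpy as np

        def chain_binomial_sir(n, i0, r0, infectious_days=4.0, seed=0):
            """Discrete-day stochastic SIR: each susceptible is infected with
            probability 1 - exp(-beta*I/N); cases recover at rate 1/D."""
            rng = np.random.default_rng(seed)
            beta = r0 / infectious_days
            s, i, total = n - i0, i0, i0
            while i > 0:
                new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / n))
                new_rec = rng.binomial(i, 1.0 / infectious_days)
                s, i, total = s - new_inf, i + new_inf - new_rec, total + new_inf
            return total

        for r0 in (0.9, 1.3, 1.6, 2.0):   # attack rate rises steeply with R0
            print(r0, chain_binomial_sir(100_000, 10, r0) / 100_000)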

  10. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study.

    PubMed

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-02-15

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method.

  11. Modeling of the dielectrophoretic conveyer-belt assembling microparticles into large-scale structures

    NASA Astrophysics Data System (ADS)

    Khusid, Boris; Jacqmin, David; Kumar, Anil; Acrivos, Andreas

    2007-11-01

    A dielectrophoretic conveyor-belt method for assembling negatively polarized microparticles into large-scale structures was recently developed (APL 90, 154104, 2007). First, an array of microelectrodes is energized to generate a spatially periodic AC electric field that causes the particles to aggregate into boluses at the field-intensity minima, which are located midway along the height of the channel. The minima and their associated boluses are then moved by periodically grounding and energizing the electrode array so as to generate an electric field moving along the array. We simulate this experiment numerically via a two-dimensional electro-hydrodynamic model (PRE 69, 021402, 2004). The numerical results are in qualitative agreement with experiments in that they show similar particle aggregation rates, bolus sizes, and bolus transport speeds.

  12. Investigation of airframe noise for a large-scale wing model with high-lift devices

    NASA Astrophysics Data System (ADS)

    Kopiev, V. F.; Zaytsev, M. Yu.; Belyaev, I. V.

    2016-01-01

    The acoustic characteristics of a large-scale model of a wing with high-lift devices in the landing configuration have been studied in the DNW-NWB wind tunnel with an anechoic test section. For the first time in domestic practice, data on airframe noise at high Reynolds numbers ((1.1-1.8) × 10^6) have been obtained, which can be used for the assessment of wing noise levels in aircraft certification tests. The scaling factor for recalculating the measurement results to natural conditions has been determined from the condition of collapsing the dimensionless noise spectra obtained at various flow velocities. The beamforming technique has been used to localize noise sources and rank them with respect to intensity. For flap side-edge noise, which is an important noise component, a noise reduction method has been proposed. The efficiency of this method has been confirmed in the DNW-NWB experiments.

  13. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for

  14. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plans. The emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and it has been examined in a case study involving evacuation from a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrian movement routes, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model of Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavioral characteristics of customers and clerks in normal and emergency evacuation situations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
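
    A minimal static floor-field update in the spirit described above; the event-driven layers, purchase intentions, and dynamic field are omitted. Each pedestrian hops to a free 4-neighbour cell with probability weighted by exp(-k_s*S), where S is a distance-to-exit field; the sensitivity k_s and the grid layout are illustrative assumptions:

        import numpy as np

        def evacuation_step(occupied, field, k_s=2.0, rng=None):
            """One sequential CA update: lower field values (closer to the
            exit) receive exponentially larger movement probabilities."""
            rng = rng or np.random.default_rng()
            new = occupied.copy()
            for r, c in zip(*np.nonzero(occupied)):
                moves = [(r, c)]                       # staying is allowed
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < new.shape[0] and 0 <= cc < new.shape[1] \
                            and not new[rr, cc]:
                        moves.append((rr, cc))
                w = np.array([np.exp(-k_s * field[m]) for m in moves])
                r2, c2 = moves[rng.choice(len(moves), p=w / w.sum())]
                new[r, c], new[r2, c2] = False, True
            return new

        shape = (10, 10)
        field = np.indices(shape)[1].astype(float)     # exit along left wall
        occ = np.zeros(shape, dtype=bool)
        occ[4:7, 6:9] = True                           # a small crowd
        for _ in range(50):
            occ = evacuation_step(occ, field)
            occ[:, 0] = False                          # pedestrians at the exit leave
        print(occ.sum())                               # typically 0: all evacuated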

  15. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-11-01

    The scientific initiative Prediction in Ungauged Basins (PUB) (2003-2012, by the IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines for making predictions in catchments without observed runoff data. At present, there is increased interest in applying catchment models to large domains and large data samples in a multi-basin manner, to explore emerging spatial patterns or learn from comparative hydrology. However, such modelling involves additional sources of uncertainty caused by inconsistencies between input data sets, particularly regional and global databases. This may lead to inaccurate model parameterisation and erroneous process understanding. In order to bridge the gap between the best practices for flow predictions in single catchments and in multi-basins at the large scale, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). Using examples from a recent HYPE (Hydrological Predictions for the Environment) hydrological model set-up across 6000 subbasins for the Indian subcontinent, named India-HYPE v1.0, we explore the PUB recommendations, identify challenges and recommend ways to overcome them. We describe the work process related to (a) errors and inconsistencies in global databases, unknown human impacts, and poor data quality; (b) robust approaches to identify model parameters using a stepwise calibration approach, remote sensing data, expert knowledge, and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong physiographical gradient over the subcontinent, a single model can describe the spatial variability in dominant hydrological processes at the

  16. Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement

    ERIC Educational Resources Information Center

    Zheng, Xiaohui

    2009-01-01

    The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…

  17. Multi-variate spatial explicit constraining of a large scale hydrological model

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    Increased availability and quality of near-real-time data should lead to a better understanding of the predictive skill of distributed hydrological models. Nevertheless, the prediction of regional-scale water fluxes and states remains a great challenge for the scientific community. Large-scale hydrological models are used for the prediction of soil moisture, evapotranspiration and other related water states and fluxes. They are usually constrained against river discharge, which is an integral variable. Rakovec et al. (2016) recently demonstrated that constraining model parameters against river discharge is necessary but not sufficient. Therefore, we further aim at scrutinizing the appropriate incorporation of readily available information into a hydrological model that may help to improve the realism of simulated hydrological processes. It is important to analyze how complementary datasets, besides observed streamflow and related signature measures, can improve the skill of internal model variables during parameter estimation. Among the products suitable for further scrutiny are, for example, the GRACE satellite observations. Recent developments in using this dataset in a multivariate fashion to complement traditionally used streamflow data within the distributed model mHM (www.ufz.de/mhm) are presented. The study domain consists of 80 European basins, which cover a wide range of distinct physiographic and hydrologic regimes. A first-order data quality check ensures that heavily human-influenced basins are eliminated. For river discharge simulations we show that model performance remains unchanged when complemented by information from the GRACE product (at both daily and monthly time steps). Moreover, the complementary GRACE data lead to consistent and statistically significant improvements in evapotranspiration estimates, which are evaluated using an independent gridded FLUXNET product. We also show that the choice of the objective function used to estimate

  18. Some cases of machining large-scale parts: Characterization and modelling of heavy turning, deep drilling and broaching

    NASA Astrophysics Data System (ADS)

    Haddag, B.; Nouari, M.; Moufki, A.

    2016-10-01

    Machining large-scale parts involves extreme loading at the cutting zone. This paper presents an overview of some cases of machining large-scale parts: heavy turning, deep drilling and broaching processes. It focuses on experimental characterization and modelling methods of these processes. Observed phenomena and/or measured cutting forces are reported. The paper also discusses the predictive ability of the proposed models to reproduce experimental data.

  19. Non-intrusive Ensemble Kalman filtering for large scale geophysical models

    NASA Astrophysics Data System (ADS)

    Amour, Idrissa; Kauranne, Tuomo

    2016-04-01

    Advanced data assimilation techniques, such as variational assimilation methods, often present challenging implementation issues for large-scale models, both because of computational cost and because of implementation complexity. We present a non-intrusive wrapper library that addresses this problem by isolating the direct model and the linear algebra employed in data assimilation from each other completely. In this approach we have adopted a hybrid Variational Ensemble Kalman filter that combines ensemble propagation with a 3DVAR analysis stage. The inverse problem of state and covariance propagation from prior to posterior estimates is thereby turned into a time-independent problem. This feature allows the linear algebra and minimization steps required in the variational step to be conducted outside the direct model, and no tangent linear or adjoint codes are required. Communication between the model and the assimilation module is conducted exclusively via the standard input and output files of the model. This non-intrusive approach is tested with the comprehensive 3D lake and shallow-sea model COHERENS, which is used to forecast and assimilate turbidity in lake Säkylän Pyhäjärvi in Finland, using both sparse satellite images and continuous real-time point measurements as observations.
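
    A compact sketch of the stochastic EnKF analysis step that such a wrapper performs outside the model: it needs only the forecast ensemble read from the model's output files, never the model code itself. The observation operator, error level, and synthetic truth below are illustrative assumptions:

        import numpy as np

        def enkf_analysis(ensemble, obs, H, obs_err, rng):
            """Update a forecast ensemble (n_state x n_members) toward the
            observations using ensemble-estimated covariances."""
            n_obs, n_ens = len(obs), ensemble.shape[1]
            A = ensemble - ensemble.mean(axis=1, keepdims=True)
            HX = H @ ensemble
            HA = HX - HX.mean(axis=1, keepdims=True)
            R = obs_err**2 * np.eye(n_obs)
            K = A @ HA.T @ np.linalg.inv(HA @ HA.T + (n_ens - 1) * R)
            # perturbed observations keep the analysis spread consistent
            Y = obs[:, None] + rng.normal(0.0, obs_err, size=(n_obs, n_ens))
            return ensemble + K @ (Y - HX)

        rng = np.random.default_rng(0)
        truth = np.sin(np.linspace(0.0, 2.0 * np.pi, 50))
        forecast = 0.5 + rng.normal(0.0, 0.5, size=(50, 100))  # biased ensemble
        H = np.eye(50)[::5]                       # observe every fifth variable
        y = H @ truth + rng.normal(0.0, 0.1, size=10)
        analysis = enkf_analysis(forecast, y, H, 0.1, rng)
        print(np.abs(forecast.mean(1) - truth).mean(),
              np.abs(analysis.mean(1) - truth).mean())  # analysis error is lower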

  20. Structure-preserving model reduction of large-scale logistics networks. Applications for supply chains

    NASA Astrophysics Data System (ADS)

    Scholz-Reiter, B.; Wirth, F.; Dashkovskiy, S.; Makuschewitz, T.; Schönlein, M.; Kosmykov, M.

    2011-12-01

    We investigate the problem of model reduction with a view to large-scale logistics networks, specifically supply chains. Such networks are modeled by means of graphs, which describe the structure of material flow. An aim of the proposed model reduction procedure is to preserve important features within the network. As a new methodology we introduce the LogRank, a measure for the importance of locations that is based on the structure of the flows within the network. We argue that this measure reflects the relative importance of locations. Based on the LogRank we identify subgraphs of the network that can be neglected or aggregated. The effect of this is discussed for a few motifs. Using this approach we present a meta algorithm for structure-preserving model reduction that can be adapted to different mathematical modeling frameworks. The capabilities of the approach are demonstrated with a test case, where a logistics network is modeled as a Jackson network, i.e., a particular type of queueing network.
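
    The abstract does not spell out the LogRank formula, but a flow-based importance measure in this spirit can be sketched as a PageRank-style power iteration on the row-normalized material-flow matrix; this is an illustrative assumption, not the authors' definition:

      import numpy as np

      def flow_rank(F, d=0.85, tol=1e-10):
          """Importance scores on a flow matrix F, where F[i, j] is the
          material flow from location i to location j."""
          n = F.shape[0]
          out = F.sum(axis=1, keepdims=True)
          P = np.where(out > 0, F / np.where(out == 0, 1.0, out), 1.0 / n)
          r = np.full(n, 1.0 / n)
          while True:
              r_new = (1 - d) / n + d * (P.T @ r)     # redistribute along flows
              if np.abs(r_new - r).sum() < tol:
                  return r_new
              r = r_new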

  1. Major historical droughts in Europe as simulated by an ensemble of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Tallaksen, L. M.; Stahl, K.

    2012-04-01

    As drought is regional by nature, it should preferably be studied at the large scale to consistently address the spatial and temporal characteristics of drought and the related drought-causing processes. Nevertheless, there is high spatial variability within a drought-affected region, caused by a combination of small-scale climate variability and catchment properties, which influences our ability to identify a particular event in a consistent way. Several studies have addressed the occurrence of major drought events in Europe in the last century, yet no thorough analysis exists that compares across the different methods, variables and time periods employed. Thus, there is a need for a comprehensive pan-European study of historical events, including their definition, cause, characteristics and major impacts. Important to consider in this respect are the type of data to be analysed and the choice of methodology for drought identification, as well as the drought indices best suited for the task. In this study the focus is on hydrological drought, i.e. streamflow drought, and the main aim is to analyse key characteristics of major historical droughts in Europe over the period 1963-2000, including affected area, severity and persistence. The variable analysed is simulated daily total runoff for each grid cell in Europe (4425 land grids), derived from the WATCH multi-model ensemble of nine large-scale hydrological models. A grid cell is defined to be in drought if the runoff is below q20 (the 20% non-exceedance frequency of the empirical runoff distribution on the respective day). Spatial continuity is accounted for by the introduction of a drought cluster, defined as a minimum of 10 spatially contiguous grid cells in drought on a given day. The results revealed two major dry periods in terms of the mean annual drought area, namely 1975-76 and 1989-90, during which consistency among the models was also high. On the other hand, daily time series during these events depicted a high model
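
    The cluster criterion used here maps naturally onto connected-component labelling; a minimal sketch, assuming a 2D runoff grid and a per-cell q20 threshold array, could look like:

      import numpy as np
      from scipy import ndimage

      def drought_clusters(runoff, q20, min_cells=10):
          """Flag grid cells in drought (runoff below the day's q20) and
          keep only clusters of at least min_cells contiguous cells
          (4-connectivity, the scipy default)."""
          in_drought = runoff < q20
          labels, n = ndimage.label(in_drought)
          sizes = ndimage.sum(in_drought, labels, index=np.arange(1, n + 1))
          keep = np.where(sizes >= min_cells)[0] + 1   # label ids to retain
          return np.isin(labels, keep)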

  2. Fermi Observations of Resolved Large-Scale Jets: Testing the IC/CMB Model

    NASA Astrophysics Data System (ADS)

    Breiding, Peter; Meyer, Eileen T.; Georganopoulos, Markos

    2017-01-01

    It has been observed with the Chandra X-ray Observatory since the early 2000s that many powerful quasar jets show X-ray emission on the kpc scale (Harris & Krawczynski, 2006). In many cases these X-rays cannot be explained by the extension of the radio-optical spectrum produced by synchrotron-emitting electrons in the jet, since the observed X-ray flux is too high and the X-ray spectral index too hard. A widely accepted model for the X-ray emission, first proposed by Celotti et al. 2001 and Tavecchio et al. 2000, posits that the X-rays are produced when relativistic electrons in the jet up-scatter ambient cosmic microwave background (CMB) photons via inverse Compton scattering from microwave to X-ray energies (the IC/CMB model). However, explaining the X-ray emission of these jets with the IC/CMB model requires high levels of IC/CMB γ-ray emission (Georganopoulos et al., 2006), which we are looking for using the Fermi/LAT γ-ray space telescope. Another viable model for the large-scale jet X-ray emission, favored by the results of Meyer et al. 2015 and Meyer & Georganopoulos 2014, is an alternate population of synchrotron-emitting electrons. In contrast with the second synchrotron interpretation, the IC/CMB model requires jets with high kinetic powers, which can exceed the Eddington luminosity (Dermer & Atoyan 2004 and Atoyan & Dermer 2004), and which must be very fast on the kpc scale, with Γ~10 (Celotti et al. 2001 and Tavecchio et al. 2000). New results from data obtained with the Fermi/LAT will be shown for several quasars not in the Fermi/LAT 3FGL catalog whose large-scale X-ray jets are attributed to IC/CMB. Additionally, recent work on the γ-ray bright blazar AP Librae will be shown, which helps to constrain some models attempting to explain the high-energy component of its SED, which extends from X-ray to TeV energies (e.g., Zacharias & Wagner 2016 and Petropoulou et al. 2016).

  3. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist.

    PubMed

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-11-01

    managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists.

  4. Morphotectonic evolution of passive margins undergoing active surface processes: large-scale experiments using numerical models.

    NASA Astrophysics Data System (ADS)

    Beucher, Romain; Huismans, Ritske S.

    2016-04-01

    Extension of the continental lithosphere can lead to the formation of a wide range of rifted margin styles with contrasting tectonic and geomorphological characteristics. It is now understood that many of these characteristics depend on the manner in which extension is distributed, which in turn depends on (among other factors) rheology, structural inheritance, thermal structure and surface processes. The relative importance and the possible interactions of these controlling factors are still largely unknown. Here we investigate the feedbacks between tectonics and the transfer of material at the surface resulting from erosion, transport, and sedimentation. We use large-scale (1200 x 600 km), high-resolution (~1 km) numerical experiments coupling a 2D upper-mantle-scale thermo-mechanical model with a plan-form 2D surface processes model (SPM). We test the sensitivity of the coupled models to varying crust-lithosphere rheology and to erosional efficiency ranging from no erosion to very efficient erosion. We discuss when and how fast the topography of the continents evolves and how it compares to actual passive-margin escarpment morphologies. We show that although tectonics is the main factor controlling the rift geometry, mass transfer at the surface affects the timing of faulting and the initiation of sea-floor spreading. We discuss how such models may help to understand the evolution of high-elevation passive margins around the world.

  5. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    PubMed Central

    Llor, Jesús; Malumbres, Manuel Perez

    2013-01-01

    In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the received signal strength to deviate from the nominal value predicted by a deterministic propagation model. To facilitate large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By repeatedly computing the acoustic field with ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. Reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.). PMID:23396190
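
    The inference step implied by this approach can be sketched as a least-squares fit of a log-distance mean with a constant residual spread; an illustrative reduction, not the authors' processing chain:

      import numpy as np

      def fit_log_distance(d, tl_db):
          """Fit TL(d) = TL0 + 10*k*log10(d/d0) + X, with X ~ N(0, sigma^2),
          to an ensemble of ray-traced transmission losses. d0 is taken as
          the first (reference) distance. Returns (TL0, k, sigma)."""
          x = 10 * np.log10(d / d[0])
          k, tl0 = np.polyfit(x, tl_db, 1)            # slope k, intercept TL0
          sigma = np.std(tl_db - (tl0 + k * x))       # residual spread
          return tl0, k, sigma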

  6. Mathematical framework for large-scale brain network modeling in The Virtual Brain.

    PubMed

    Sanz-Leon, Paula; Knock, Stuart A; Spiegler, Andreas; Jirsa, Viktor K

    2015-05-01

    In this article, we describe the mathematical framework of the computational model at the core of The Virtual Brain (TVB), a tool designed to simulate collective whole-brain dynamics by virtualizing brain structure and function, allowing simultaneous outputs of a number of experimental modalities such as electro- and magnetoencephalography (EEG, MEG) and functional magnetic resonance imaging (fMRI). The implementation allows for a systematic exploration and manipulation of every underlying component of a large-scale brain network model (BNM), such as the neural mass model governing the local dynamics or the structural connectivity constraining the space-time structure of the network couplings. Here, a consistent notation for the generalized BNM is given, so that the equations represent a direct link between the mathematical description of BNMs and the components of the numerical implementation in TVB. Finally, we summarize the forward models implemented for mapping simulated neural activity (EEG, MEG, stereotactic electroencephalography (sEEG), fMRI), identifying their advantages and limitations.
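
    A generic delayed-network form consistent with this description is (the notation here is illustrative; the TVB papers define the precise operators):

      \dot{\Psi}_k(t) = N\big(\Psi_k(t)\big)
          + G \sum_{l=1}^{M} c_{kl}\, S\big(\Psi_l(t - \tau_{kl})\big) + \xi_k(t)

    where Ψ_k is the state of node k, N(·) the local neural-mass dynamics, c_{kl} the structural connectivity weights, τ_{kl} the conduction delays, S(·) a coupling function, G a global coupling strength, and ξ_k a noise term.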

  7. Revisiting the EC/CMB model for extragalactic large scale jets

    NASA Astrophysics Data System (ADS)

    Lucchini, M.; Tavecchio, F.; Ghisellini, G.

    2016-12-01

    One of the most outstanding results of the Chandra X-ray Observatory was the discovery that AGN jets are bright X-ray emitters on very large scales, up to hundreds of kpc. Of these, the powerful and beamed jets of Flat Spectrum Radio Quasars are particularly interesting, as the X-ray emission cannot be explained by an extrapolation of the lower frequency synchrotron spectrum. Instead, the most common model invokes inverse Compton scattering of photons of the Cosmic Microwave Background (EC/CMB) as the mechanism responsible for the high energy emission. The EC/CMB model has recently come under criticism, particularly because it should predict a significant steady flux in the MeV-GeV band which has not been detected by the Fermi/LAT telescope for two of the best studied jets (PKS 0637-752 and 3C273). In this work we revisit some aspects of the EC/CMB model and show that electron cooling plays an important part in shaping the spectrum. This can solve the overproduction of γ-rays by suppressing the high energy end of the emitting particle population. Furthermore, we show that cooling in the EC/CMB model predicts a new class of extended jets that are bright in X-rays but silent in the radio and optical bands. These jets are more likely to lie at intermediate redshifts, and would have been missed in all previous X-ray surveys due to selection effects.

  8. Large-Scale Patterns in a Minimal Cognitive Flocking Model: Incidental Leaders, Nematic Patterns, and Aggregates

    NASA Astrophysics Data System (ADS)

    Barberis, Lucas; Peruani, Fernando

    2016-12-01

    We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
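
    A minimal sketch of one update step of such a vision-cone model follows; the relaxation rule, noise term and parameter values are assumptions for illustration, not the authors' exact equations:

      import numpy as np

      def step(pos, theta, v0=0.05, radius=1.0, half_angle=np.pi / 2,
               gamma=0.1, noise=0.05, rng=None):
          """Each particle turns toward the centroid of the neighbours it
          sees inside its vision cone, then moves at constant speed v0."""
          rng = np.random.default_rng(0) if rng is None else rng
          n = len(theta)
          heading = np.stack([np.cos(theta), np.sin(theta)], axis=1)
          new_theta = theta.copy()
          for i in range(n):
              r = pos - pos[i]
              dist = np.linalg.norm(r, axis=1)
              cosang = (r @ heading[i]) / np.where(dist == 0, 1.0, dist)
              seen = (dist > 0) & (dist < radius) & (cosang > np.cos(half_angle))
              if seen.any():
                  cx, cy = r[seen].mean(axis=0)        # centroid of visible others
                  diff = np.angle(np.exp(1j * (np.arctan2(cy, cx) - theta[i])))
                  new_theta[i] += gamma * diff         # turn toward the centroid
          new_theta += noise * rng.standard_normal(n)  # angular noise
          vel = np.stack([np.cos(new_theta), np.sin(new_theta)], axis=1)
          return pos + v0 * vel, new_theta

    Because only particles inside the cone feel a pull while the observed particle feels none in return, the interaction is non-reciprocal, which is the sense in which the vision cone breaks Newton's third law.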

  9. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    NASA Astrophysics Data System (ADS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale-invariant under certain conditions. One interesting aspect of this model is that it is possible to derive, from the same theoretical model, the dynamical field equations for the tensor metric fluctuations valid not just at cosmological scales but also at astrophysical scales. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates where gravity becomes repulsive in nature.

  11. Forcing the statistical regionalization method WETTREG with large scale models of different resolution: A sensitivity study

    NASA Astrophysics Data System (ADS)

    Spekat, A.; Baumgart, S.; Kreienkamp, F.; Enke, W.

    2010-09-01

    The statistical regionalization method WETTREG makes use of the assumption that future climate changes are linked to changes in large-scale atmospheric patterns. The frequency distributions of those patterns and their time dependency are identified in the output fields of dynamical climate models and applied to force WETTREG. Thus, the magnitude and the time evolution of high-resolution climate signals for time horizons far into the 21st century can be computed. The model results employed to force WETTREG include the GCMs ECHAM5C, HadCM3C and CNRM. Additionally, results from the dynamical regional models CLM, DMI, HadRM, RACMO and REMO, nested into one or more of these global models, are used in their pattern-generating capacity to force WETTREG. The study yields insight into the forcing-dependent sensitivity of WETTREG as well as the bandwidth of climate change signals. Recent results for the German State of Hesse will be presented in an intercomparison study.

  12. Modelling potential changes in marine biogeochemistry due to large-scale offshore wind farms

    NASA Astrophysics Data System (ADS)

    van der Molen, Johan; Rees, Jon; Limpenny, Sian

    2013-04-01

    Large-scale renewable energy generation by offshore wind farms may lead to changes in marine ecosystem processes through the following mechanism: 1) wind-energy extraction leads to a reduction in local surface wind speeds; 2) this leads to a reduction in the local wind wave height; 3) as a consequence there is a reduction in SPM resuspension and concentrations; 4) this results in an improvement in the underwater light regime, which 5) may lead to increased primary production, which subsequently 6) cascades through the ecosystem. A three-dimensional coupled hydrodynamics-biogeochemistry model (GETM_ERSEM) was used to investigate this process for a hypothetical wind farm in the central North Sea, by running a reference scenario and a scenario with a 10% reduction (as was found in a case study of a small farm in Danish waters) in surface wind velocities in the area of the wind farm. The ERSEM model included both pelagic and benthic processes. The results showed that, within the farm area, the physical mechanisms were as expected, but with variations in the magnitude of the response depending on the ecosystem variable or the exchange rate between two ecosystem variables (3-28%, depending on variable/rate). Benthic variables tended to be more sensitive to the changes than pelagic variables. Reduced but noticeable changes also occurred for some variables in a region of up to two farm diameters surrounding the wind farm. An additional model run, in which the 10% reduction in surface wind speed was applied only for wind speeds below the generally used threshold of 25 m/s for operational shut-down, showed only minor differences from the run in which all wind speeds were reduced. These first results indicate that there is potential for measurable effects of large-scale offshore wind farms on the marine ecosystem, mainly within the farm but for some variables up to two farm diameters away. However, the wave and SPM parameterisations currently used in the model are crude and need to be

  13. A large-scale stochastic spatiotemporal model for Aedes albopictus-borne chikungunya epidemiology

    PubMed Central

    Chandra, Nastassya L.; Proestos, Yiannis; Lelieveld, Jos; Christophides, George K.; Parham, Paul E.

    2017-01-01

    Chikungunya is a viral disease transmitted to humans primarily via the bites of infected Aedes mosquitoes. The virus caused a major epidemic in the Indian Ocean in 2004, affecting millions of inhabitants, while cases have also been observed in Europe since 2007. We developed a stochastic spatiotemporal model of Aedes albopictus-borne chikungunya transmission based on our recently developed environmentally-driven vector population dynamics model. We designed an integrated modelling framework incorporating large-scale gridded climate datasets to investigate disease outbreaks on Reunion Island and in Italy. We performed Bayesian parameter inference on the surveillance data, and investigated the validity and applicability of the underlying biological assumptions. The model successfully represents the outbreak and measures of containment in Italy, suggesting wider applicability in Europe. In its current configuration, the model implies two different viral strains, thus two different outbreaks, for the two-stage Reunion Island epidemic. Characterisation of the posterior distributions indicates a possible relationship between the second larger outbreak on Reunion Island and the Italian outbreak. The model suggests that vector control measures, with different modes of operation, are most effective when applied in combination: adult vector intervention has a high impact but is short-lived, larval intervention has a low impact but is long-lasting, and quarantining infected territories, if applied strictly, is effective in preventing large epidemics. We present a novel approach in analysing chikungunya outbreaks globally using a single environmentally-driven mathematical model. Our study represents a significant step towards developing a globally applicable Ae. albopictus-borne chikungunya transmission model, and introduces a guideline for extending such models to other vector-borne diseases. PMID:28362820

  14. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    SciTech Connect

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J.

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study was used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.
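
    The similitude argument behind centrifuge testing is standard: self-weight stress is σ = ρgh, so a 1/N-scale model spun at N g reproduces prototype stresses. A small helper makes the textbook scaling factors explicit (general centrifuge-modeling relations, not values from this report):

      def centrifuge_scales(N):
          """Model-to-prototype scale factors for a 1/N geometric model
          tested at N g (standard similitude for dynamic events)."""
          return {
              "length": 1.0 / N,        # model dimensions shrink by N
              "acceleration": N,        # required g-level
              "stress": 1.0,            # preserved by design
              "time_dynamic": 1.0 / N,  # inertial events run N times faster
          }

      # Example: a 10 m deep bunker at 1/50 scale becomes a 0.2 m model
      # spun at 50 g, with the same soil stresses as the prototype.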

  15. Large-scale functional models of visual cortex for remote sensing

    SciTech Connect

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E; Swaminarayan, Sriram; Bettencourt, Luis; Landecker, Will

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massive opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete' along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  16. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    NASA Astrophysics Data System (ADS)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
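
    One elementary building block of such GIS-derived flow paths is a D8 flow-direction grid, in which each cell drains to its steepest downhill neighbour; a toy version (not the authors' toolchain) is:

      import numpy as np

      def d8_flow_direction(dem):
          """Return, per interior cell, the index of the steepest-descent
          neighbour (0-7 into `offsets`), or -1 for pits."""
          ny, nx = dem.shape
          direc = np.full((ny, nx), -1)
          offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                     (0, 1), (1, -1), (1, 0), (1, 1)]
          for i in range(1, ny - 1):
              for j in range(1, nx - 1):
                  drops = [dem[i, j] - dem[i + di, j + dj] for di, dj in offsets]
                  k = int(np.argmax(drops))
                  if drops[k] > 0:
                      direc[i, j] = k
          return direc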

  17. Influenza epidemic spread simulation for Poland — a large scale, individual based model study

    NASA Astrophysics Data System (ADS)

    Rakowski, Franciszek; Gruziel, Magdalena; Bieniasz-Krzywiec, Łukasz; Radomski, Jan P.

    2010-08-01

    In this work the construction of an agent-based model for studying the effects of an influenza epidemic in large-scale (38 million individuals) stochastic simulations, together with the resulting various scenarios of disease spread in Poland, is reported. Simple transportation rules were employed to mimic individuals’ travels in dynamic route-changing schemes, allowing for infection spread during a journey. Parameter space was checked for stable behaviour, especially with respect to the variability of the effective infection transmission rate. Although the model reported here is based on quite simple assumptions, it allowed us to observe two different types of epidemic scenario: one characteristic of urban areas and one of rural areas. This differentiates it from the results obtained in analogous studies for the UK or US, where settlement and daily commuting patterns are both substantially different and more diverse. The resulting epidemic scenarios from these ABM simulations were compared with simple, differential-equation-based SIR models, with both types of results displaying strong similarities. The pDYN software platform developed here is currently used in the next stage of the project to study various epidemic mitigation strategies.
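
    The differential-equation benchmark referred to here is the classic SIR model; a minimal fixed-step integration (illustrative parameter handling) is:

      import numpy as np

      def sir(beta, gamma, s0, i0, days, dt=0.1):
          """Integrate dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
          dR/dt = gamma*I with forward Euler; S, I, R are population fractions."""
          steps = int(days / dt)
          s, i, r = s0, i0, 1.0 - s0 - i0
          out = np.empty((steps, 3))
          for t in range(steps):
              new_inf = beta * s * i * dt
              new_rec = gamma * i * dt
              s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
              out[t] = (s, i, r)
          return out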

  18. A method to search for large-scale concavities in asteroid shape models

    NASA Astrophysics Data System (ADS)

    Devogèle, M.; Rivet, J. P.; Tanga, P.; Bendjoya, Ph.; Surdej, J.; Bartczak, P.; Hanus, J.

    2015-11-01

    Photometric light-curve inversion of minor planets has proven to produce a unique model solution only under the hypothesis that the asteroid is convex. However, it has been suggested that the resulting shape model, in the case of a non-convex asteroid, is the convex hull of the true non-convex shape. While a convex shape is already useful to provide the overall aspect of the target, much information about real shapes is missed, as we know that asteroids are very irregular. It is commonly accepted that large flat areas sometimes appearing on shapes derived from light curves correspond to concave areas, but this information has not been further explored and exploited so far. We present in this paper a method that allows one to predict the presence of concavities from such flat regions. This method analyses the distribution of the local normals to the facets composing shape models to detect abnormally large flat surfaces. In order to test our approach, we consider here its application to a large family of synthetic asteroid shapes, and to real asteroids with large-scale concavities whose detailed shape is known from other kinds of observations (radar and spacecraft encounters). The method that we propose has proven to be reliable and capable of providing a qualitative indication of the relevance of concavities on well-constrained asteroid shapes derived from purely photometric data sets.
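
    The core operation, checking whether an abnormally large fraction of a model's surface area shares one normal direction, can be sketched as follows (a simplified illustration of the idea, not the published algorithm):

      import numpy as np

      def flat_area_fraction(vertices, faces, cos_tol=0.999):
          """Largest fraction of total surface area whose facet normals are
          nearly parallel; a high value on a convex-inversion model hints
          at a concavity flattened by the convexity constraint."""
          v = vertices[faces]                          # (F, 3, 3) triangle corners
          n = np.cross(v[:, 1] - v[:, 0], v[:, 2] - v[:, 0])
          area = 0.5 * np.linalg.norm(n, axis=1)
          n_hat = n / (2.0 * area)[:, None]            # unit normals
          best = 0.0
          for k in range(len(faces)):                  # O(F^2), fine for small models
              parallel = (n_hat @ n_hat[k]) > cos_tol
              best = max(best, area[parallel].sum())
          return best / area.sum()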

  19. Large scale cratering of the lunar highlands - Some Monte Carlo model considerations

    NASA Technical Reports Server (NTRS)

    Hoerz, F.; Gibbons, R. V.; Hill, R. E.; Gault, D. E.

    1976-01-01

    In an attempt to understand the scale and intensity of the moon's early, large-scale meteoritic bombardment, a Monte Carlo computer model simulated the effects of all lunar craters greater than 800 m in diameter, tracking, for example, the number of times, and the depths to which, specific fractions of the entire lunar surface were cratered. The model used observed crater size frequencies and crater geometries compatible with the suggestions of Pike (1974) and Dence (1973); it simulated bombardment histories up to a factor of 10 more intense than those reflected by the present-day crater number density of the lunar highlands. For the present-day cratering record the model yields the following: approximately 25% of the entire lunar surface has not been cratered deeper than 100 m; 50% may have been cratered to 2-3 km depth; less than 5% of the surface has been cratered deeper than about 15 km. A typical highland site has suffered 1-2 impacts. Corresponding values for more intense bombardment histories are also presented, though it must remain uncertain what the absolute intensity of the moon's early meteorite bombardment was.
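
    A toy version of such a Monte Carlo resurfacing model, with power-law crater diameters, random impact sites and per-cell depth bookkeeping (all parameters illustrative and in grid-cell units), is:

      import numpy as np

      def crater_coverage(n_craters, grid=256, d_min=4.0, alpha=2.0, seed=0):
          """Track how often, and how deeply, each surface cell is cratered;
          excavation depth is taken as ~0.2 * diameter, a common rule of thumb."""
          rng = np.random.default_rng(seed)
          hits = np.zeros((grid, grid), dtype=int)
          depth = np.zeros((grid, grid))
          yy, xx = np.mgrid[0:grid, 0:grid]
          for _ in range(n_craters):
              d = d_min * (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto diameter
              cx, cy = rng.random(2) * grid
              inside = (xx - cx) ** 2 + (yy - cy) ** 2 < (d / 2.0) ** 2
              hits[inside] += 1
              depth[inside] = np.maximum(depth[inside], 0.2 * d)
          return hits, depth   # (hits == 0).mean() gives the uncratered fraction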

  20. A Model for Managing Large-Scale Change: A Higher Education Perspective.

    ERIC Educational Resources Information Center

    Bruyns, H. J.

    2001-01-01

    Discusses key components and critical issues related to managing large-scale change in higher education. Explores reasons for inappropriate change patterns and suggests guidelines for establishing appropriate change paradigms. (EV)

  1. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we address the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases for QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level.
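
    The best-performing compilation rule, restricting a training set to records measured with a single assay method and material, can be sketched with pandas; the column names are hypothetical, as real ChEMBL or Integrity exports are structured differently:

      import pandas as pd

      def single_assay_training_set(df, target="HIV-1 RT"):
          """Keep only records for one target measured under the single
          most populous (assay_type, assay_material) combination."""
          sub = df[(df["target"] == target) & df["activity_nM"].notna()]
          top = sub.groupby(["assay_type", "assay_material"]).size().idxmax()
          return sub[(sub["assay_type"] == top[0])
                     & (sub["assay_material"] == top[1])]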

  2. Modeling long-term, large-scale sediment storage using a simple sediment budget approach

    NASA Astrophysics Data System (ADS)

    Naipal, Victoria; Reick, Christian; Van Oost, Kristof; Hoffmann, Thomas; Pongratz, Julia

    2016-05-01

    Currently, the anthropogenic perturbation of the biogeochemical cycles remains unquantified due to the poor representation of lateral fluxes of carbon and nutrients in Earth system models (ESMs). This lateral transport of carbon and nutrients between terrestrial ecosystems is strongly affected by accelerated soil erosion rates. However, the quantification of global soil erosion by rainfall and runoff, and the resulting redistribution is missing. This study aims at developing new tools and methods to estimate global soil erosion and redistribution by presenting and evaluating a new large-scale coarse-resolution sediment budget model that is compatible with ESMs. This model can simulate spatial patterns and long-term trends of soil redistribution in floodplains and on hillslopes, resulting from external forces such as climate and land use change. We applied the model to the Rhine catchment using climate and land cover data from the Max Planck Institute Earth System Model (MPI-ESM) for the last millennium (here AD 850-2005). Validation is done using observed Holocene sediment storage data and observed scaling between sediment storage and catchment area. We find that the model reproduces the spatial distribution of floodplain sediment storage and the scaling behavior for floodplains and hillslopes as found in observations. After analyzing the dependence of the scaling behavior on the main parameters of the model, we argue that the scaling is an emergent feature of the model and mainly dependent on the underlying topography. Furthermore, we find that land use change is the main contributor to the change in sediment storage in the Rhine catchment during the last millennium. Land use change also explains most of the temporal variability in sediment storage in floodplains and on hillslopes.
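
    At its simplest, a sediment budget of this kind is a reservoir balance between erosion input and downstream export; a deliberately minimal sketch, far simpler than the paper's coupled hillslope-floodplain model, is:

      def sediment_budget(storage0, erosion_per_step, export_frac, steps):
          """Storage trajectory when each step adds eroded sediment and
          exports a fixed fraction of what is currently stored."""
          s, series = storage0, []
          for _ in range(steps):
              s = s + erosion_per_step - export_frac * s
              series.append(s)
          return series

      # Storage approaches the equilibrium erosion_per_step / export_frac,
      # so land-use-driven changes in erosion shift the stored mass over time.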

  3. Evaluation of large-scale meteorological patterns associated with temperature extremes in the NARCCAP regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Waliser, Duane E.; Lee, Huikyo; Neelin, J. David; Lintner, Benjamin R.; McGinnis, Seth; Mearns, Linda O.; Kim, Jinwon

    2015-12-01

    Large-scale meteorological patterns (LSMPs) associated with temperature extremes are evaluated in a suite of regional climate model (RCM) simulations contributing to the North American Regional Climate Change Assessment Program. LSMPs are characterized through composites of surface air temperature, sea level pressure, and 500 hPa geopotential height anomalies concurrent with extreme temperature days. Six of the seventeen RCM simulations are driven by boundary conditions from reanalysis while the other eleven are driven by one of four global climate models (GCMs). Four illustrative case studies are analyzed in detail. Model fidelity in LSMP spatial representation is high for cold winter extremes near Chicago. Winter warm extremes are captured by most RCMs in northern California, with some notable exceptions. Model fidelity is lower for cool summer days near Houston and extreme summer heat events in the Ohio Valley. Physical interpretation of these patterns and identification of well-simulated cases, such as for Chicago, boosts confidence in the ability of these models to simulate days in the tails of the temperature distribution. Results appear consistent with the expectation that the ability of an RCM to reproduce a realistically shaped frequency distribution for temperature, especially at the tails, is related to its fidelity in simulating LSMPs. Each ensemble member is ranked for its ability to reproduce LSMPs associated with observed warm and cold extremes, identifying systematically high performing RCMs and the GCMs that provide superior boundary forcing. The methodology developed here provides a framework for identifying regions where further process-based evaluation would improve the understanding of simulation error and help guide future model improvement and downscaling efforts.
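
    The compositing operation at the heart of this evaluation is straightforward; a sketch follows, using a single time-mean climatology for brevity where a daily climatology would be more appropriate:

      import numpy as np

      def lsmp_composite(field, dates, extreme_dates):
          """Mean anomaly of a (time, lat, lon) field, e.g. SLP or Z500,
          over the days flagged as temperature extremes."""
          anom = field - field.mean(axis=0)        # remove the time mean
          mask = np.isin(dates, extreme_dates)
          return anom[mask].mean(axis=0)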

  4. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details on the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project and the related data products for the SPP and SO missions, as well as the supporting global heliospheric simulations, will be discussed.

  5. A Fractal Model for the Shear Behaviour of Large-Scale Opened Rock Joints

    NASA Astrophysics Data System (ADS)

    Li, Y.; Oh, J.; Mitra, R.; Canbulat, I.

    2017-01-01

    This paper presents a joint constitutive model that represents the shear behaviour of a large-scale opened rock joint. Evaluation of the degree of opening is made by considering the ratio between the joint wall aperture and the joint amplitude. The scale dependence of the surface roughness is investigated by approximating a natural joint profile by a self-affine fractal curve. The developed scaling laws show that the slopes of critical waviness and critical unevenness tend to flatten with increasing sampling length. Geometrical examination of four 400-mm joint profiles agrees well with the suggested formulations involving multi-order asperities and fractal descriptors. Additionally, a fractal-based formulation is proposed to estimate the peak shear displacements of rock joints at varying scales, which shows a good correlation with experimental data taken from the literature. Parameters involved in the constitutive law can be acquired by inspecting roughness features of sampled rock joints. Thus, the model can be implemented in numerical software for the stability analysis of rock masses with opened joints.

  6. Prospective Large-Scale Field Study Generates Predictive Model Identifying Major Contributors to Colony Losses

    PubMed Central

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J. R.; Ballam, Joan M.

    2015-01-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  8. Improving urban streamflow forecasting using a high-resolution large scale modeling framework

    NASA Astrophysics Data System (ADS)

    Read, Laura; Hogue, Terri; Gochis, David; Salas, Fernando

    2016-04-01

    Urban flood forecasting is a critical component in effective water management, emergency response, regional planning, and disaster mitigation. As populations across the world continue to move to cities (~1.8% growth per year), and studies indicate that significant flood damages are occurring outside the floodplain in urban areas, the ability to model and forecast flow over the urban landscape becomes critical to maintaining infrastructure and society. In this work, we use the Weather Research and Forecasting-Hydrological (WRF-Hydro) modeling framework as a platform for testing improvements to the representation of urban land cover, impervious surfaces, and urban infrastructure. The three improvements we evaluate are: updating the land cover to the latest 30-meter National Land Cover Dataset, routing flow over a high-resolution 30-meter grid, and testing a methodology for integrating an urban drainage network into the routing regime. We evaluate the performance of these improvements in the WRF-Hydro model for specific flood events in the Denver-Metro Colorado domain, comparing to historic gaged streamflow for retrospective forecasts. Denver-Metro provides an interesting case study as it is a rapidly growing urban/peri-urban region with an active history of flooding events that have caused significant loss of life and property. Considering that the WRF-Hydro model will soon be implemented nationally in the U.S. to provide flow forecasts on the National Hydrography Dataset Plus river reaches (increasing capability from 3,600 forecast points to 2.7 million), we anticipate that this work will support validation of this service in urban areas for operational forecasting. Broadly, this research aims to provide guidance for integrating complex urban infrastructure with a large-scale, high-resolution coupled land-surface and distributed hydrologic model.

  9. Acoustic characteristics of a large-scale augmentor wing model at forward speed

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.

    1973-01-01

    The augmentor wing concept is being studied as one means of attaining short takeoff and landing (STOL) performance in turbofan powered aircraft. Because of the stringent noise requirements for STOL operation, the acoustics of the augmentor wing are undergoing extensive research. The results of a wind tunnel investigation of a large-scale swept augmentor model at forward speed are presented. The augmentor was not acoustically treated, although the compressor supplying the high pressure primary air was treated to allow the measurement of only the augmentor noise. Installing the augmentor flap and shroud on the slot primary nozzle caused the acoustic dependence on jet velocity to change from eighth power to sixth power. Deflecting the augmentor at constant power increased the perceived noise level in the forward quadrant. The effect of airspeed was small. A small aft shift in perceived noise directivity was experienced with no significant change in sound power. Sealing the lower augmentor slot at a flap deflection of 70 deg reduced the perceived noise level in the aft quadrant. The seal prevented noise from propagating through the slot.

  10. Acoustic characteristics of a large scale wind-tunnel model of a jet flap aircraft

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Aiken, T. N.; Aoyagi, K.

    1975-01-01

    The expanding-duct jet flap (EJF) concept is studied to determine STOL performance in turbofan-powered aircraft. The EJF is used to solve the problem of ducting the required volume of air into the wing by providing an expanding cavity between the upper and lower surfaces of the flap. Results are presented from an investigation of the acoustic characteristics of the EJF concept on a large-scale aircraft model powered by JT15D engines. The noise of the EJF is generated by acoustic dipoles, as shown by the sixth-power dependence of the noise on jet velocity. These sources result from the interaction of the flow turbulence with the flap's internal and external surfaces and the trailing edges. Increasing the trailing-edge jet from 70 percent span to 100 percent span increased the noise 2 dB for the equivalent nozzle area. Blowing at the knee of the flap rather than the trailing edge reduced the noise 5 to 10 dB by displacing the jet from the trailing edge and providing shielding from high-frequency noise. Deflecting the flap and varying the angle of attack modified the directivity of the underwing noise but did not affect the peak noise. A forward speed of 33.5 m/sec (110 ft/sec) reduced the dipole noise less than 1 dB.
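
    The velocity power laws quoted here and in the previous test translate directly into decibel changes, since level differences scale as 10·p·log10(V2/V1) for an intensity law I ∝ V^p:

      import math

      def delta_db(v1, v2, power=6):
          """Level change for an acoustic power law I ~ V**power;
          power=6 is the dipole case, power=8 the classic jet-mixing case."""
          return 10 * power * math.log10(v2 / v1)

      # delta_db(1.0, 0.5) is about -18: halving jet velocity buys ~18 dB
      # for dipole sources, versus ~24 dB for an eighth-power law.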

  11. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    SciTech Connect

    Vishnu, Abhinav; Agarwal, Khushbu

    2015-09-08

    In this paper, we propose a work-stealing runtime, Library for Work Stealing (LibWS), using the MPI one-sided model for designing a scalable FP-Growth, the de facto frequent pattern mining algorithm, on large-scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.

  12. Extremely large-scale simulation of a Kardar-Parisi-Zhang model using graphics cards.

    PubMed

    Kelling, Jeffrey; Ódor, Géza

    2011-12-01

    The octahedron model introduced recently has been implemented on graphics cards, which permits extremely large-scale simulations via binary lattice gases and bit-coded algorithms. We confirm scaling behavior belonging to the two-dimensional Kardar-Parisi-Zhang universality class and find a surface growth exponent β = 0.2415(15) on 2^17 × 2^17 systems, ruling out the β = 1/4 suggested by field theory. The maximum speedup with respect to a single CPU is 240. The steady state has been analyzed by finite-size scaling and a roughness exponent α = 0.393(4) is found. Correction-to-scaling exponents are computed and the power-spectrum density of the steady state is determined. We calculate the universal scaling functions and cumulants and show that the limit distribution can be obtained for the sizes considered. We provide numerical fitting for the small- and large-tail behavior of the steady-state scaling function of the interface width.
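
    The growth-exponent measurement quoted here is, in essence, a log-log slope of interface width against time in the growth regime; the GPU lattice-gas simulation itself is far more involved:

      import numpy as np

      def growth_exponent(t, width):
          """Estimate beta from W(t) ~ t**beta by least squares in log-log
          space; t and width are sampled inside the growth regime."""
          beta, _ = np.polyfit(np.log(t), np.log(width), 1)
          return beta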

  13. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) Climate change impact assessments on water resources and dynamics; (ii) The European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) Design variables for infrastructure constructions; (iv) Spatial water-resource mapping; (v) Operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) Input to oceanographic models for operational forecasts and marine status assessments; (vii) Research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The HYPE web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover, the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  14. Large-scale flow phenomena in axial compressors: Modeling, analysis, and control with air injectors

    NASA Astrophysics Data System (ADS)

    Hagen, Gregory Scott

    This thesis presents a large-scale model of axial compressor flows that is detailed enough to describe the modal and spike stall inception processes, and is also amenable to dynamical systems analysis and control design. The research presented here is based on the model derived by Mezic, which shows that the flows are dominated by the competition between the blade forcing of the compressor and the overall pressure differential created by the compressor. This model describes the modal stall inception process in a manner similar to the Moore-Greitzer model, but also describes the cross-sectional flow velocities, and exhibits full-span and part-span stall. All of these flow patterns described by the model agree with experimental data. Furthermore, the initial model is altered in order to describe the effects of three-dimensional spike disturbances, which can destabilize the compressor at otherwise stable operating points. The three-dimensional model exhibits flow patterns during spike stall inception that also appear in experiments. The second part of this research focuses on the dynamical systems analysis of, and control design with, the PDE model of the axial flow in the compressor. We show that the axial flow model can be written as a gradient system and illustrate some stability properties of the stalled flow. This also reveals that flows with multiple stall cells correspond to higher energy states in the compressor. The model is derived with air injection actuation, and globally stabilizing distributed controls are designed. We first present a locally optimal controller for the linearized system, and then use Lyapunov analysis to show sufficient conditions for global stability. The concept of sector nonlinearities is applied to the problem of distributed parameter systems, and by analyzing the sector property of the compressor characteristic function, completely decentralized controllers are derived. Finally, the modal decomposition and Lyapunov analysis used in

  15. Large-scale 3D EM modeling with a Block Low-Rank multifrontal direct solver

    NASA Astrophysics Data System (ADS)

    Shantsev, Daniil V.; Jaysaval, Piyoosh; de la Kethulle de Ryhove, Sébastien; Amestoy, Patrick R.; Buttari, Alfredo; L'Excellent, Jean-Yves; Mary, Theo

    2017-03-01

    We put forward the idea of using a Block Low-Rank (BLR) multifrontal direct solver to efficiently solve the linear systems of equations arising from a finite-difference discretization of the frequency-domain Maxwell equations for 3D electromagnetic (EM) problems. The solver uses a low-rank representation for the off-diagonal blocks of the intermediate dense matrices arising in the multifrontal method to reduce the computational load. A numerical threshold, the so-called BLR threshold, controlling the accuracy of the low-rank representations was optimized by balancing errors in the computed EM fields against savings in floating point operations (flops). Simulations were carried out over large-scale 3D resistivity models representing typical scenarios for marine controlled-source EM surveys, in particular the SEG SEAM model, which contains an irregular salt body. The flop count, size of factor matrices and elapsed run time for matrix factorization are reduced dramatically by using BLR representations and can go down to, respectively, 10%, 30% and 40% of their full-rank values for our largest system with N = 20.6 million unknowns. The reductions are almost independent of the number of MPI tasks and threads, at least up to 90 × 10 = 900 cores. The BLR savings increase for larger systems, which reduces the factorization flop complexity from O(N^2) for the full-rank solver to O(N^m) with m = 1.4-1.6. The BLR savings are significantly larger for deep-water environments that exclude the highly resistive air layer from the computational domain. A study in a scenario where simulations are required at multiple source locations shows that the BLR solver can become competitive in comparison to iterative solvers as an engine for 3D CSEM Gauss-Newton inversion that requires forward modelling for a few thousand right-hand sides.
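
    The essence of a BLR representation, truncating off-diagonal blocks to their numerical rank at a chosen threshold, can be illustrated with an SVD-based compressor (production solvers use cheaper rank-revealing compressions inside the multifrontal fronts):

      import numpy as np

      def blr_compress(block, eps):
          """Return low-rank factors (U*s, V) of an off-diagonal block,
          keeping singular values above eps times the largest one."""
          U, s, Vt = np.linalg.svd(block, full_matrices=False)
          r = max(1, int(np.sum(s > eps * s[0])))
          return U[:, :r] * s[:r], Vt[:r]   # storage ~ r*(m+n) instead of m*n

      # A 1000 x 1000 block of numerical rank 30 stores ~60,000 numbers
      # instead of 1,000,000, mirroring the flop and memory savings above.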

  16. Large-scale mapping and predictive modeling of submerged aquatic vegetation in a shallow eutrophic lake.

    PubMed

    Havens, Karl E; Harwell, Matthew C; Brady, Mark A; Sharfstein, Bruce; East, Therese L; Rodusky, Andrew J; Anson, Daniel; Maki, Ryan P

    2002-04-09

    A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August-September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.
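
    A toy version of the model comparison reported above, predicting Chara occurrence from depth alone versus depth plus sediment type, is sketched below. The data, depth thresholds and prevalence values are invented for illustration; only the structure of the two classifiers and the accuracy bookkeeping follow the abstract.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000
        depth = rng.uniform(0.0, 2.0, n)                          # m
        sediment = rng.choice(["mud", "sand", "rock", "peat"], n)

        # Hypothetical truth: Chara favours peat and shallow water, with a
        # sediment-specific maximum depth of occurrence.
        max_depth = {"mud": 0.8, "sand": 1.2, "rock": 0.6, "peat": 1.8}
        present = np.array([depth[i] < max_depth[s]
                            and (s == "peat" or rng.random() < 0.3)
                            for i, s in enumerate(sediment)])

        # Model 1: depth only.  Model 2: depth plus sediment type.
        pred_depth = depth < 1.0
        pred_both = np.array([depth[i] < max_depth[s] and s == "peat"
                              for i, s in enumerate(sediment)])

        print("depth-only accuracy:    ", np.mean(pred_depth == present))
        print("depth+sediment accuracy:", np.mean(pred_both == present))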

  17. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    NASA Astrophysics Data System (ADS)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues posing risks to food security, the economy, and public health, and is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context not only connection to a sewage system is of high importance, but also treatment, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g. bathing and washing. Water utilization and water sewerage are thus inseparably connected. In this study, large-scale water quality modelling is used to point out hotspots of water pollution and to gain insight into potential environmental impacts, in particular in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus of this study is on developing countries, i.e. Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the main loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to dilution potential, especially in terms of seasonal variability. Highly varying shares of BOD sources across
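
    The basic bookkeeping behind such loading calculations, summing point and diffuse loads and diluting them by river discharge, can be sketched as below. The loads, discharges and unit conversions are illustrative only; WorldQual itself resolves many more processes and sources.

        # In-stream BOD concentration from point and diffuse loadings with
        # simple dilution: concentration = load / discharge.
        def bod_concentration(point_load_kg_day, diffuse_load_kg_day, discharge_m3s):
            load_mg_s = (point_load_kg_day + diffuse_load_kg_day) * 1e6 / 86400.0
            q_l_s = discharge_m3s * 1000.0           # m3/s -> litres/s
            return load_mg_s / q_l_s                 # mg/l

        # Seasonal variability: the same loading diluted by low vs high flow.
        for season, q in (("dry", 50.0), ("wet", 500.0)):
            c = bod_concentration(20000.0, 5000.0, discharge_m3s=q)
            print(f"{season} season: {c:.1f} mg/l BOD")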

  18. Large-scale modeling of reactive solute transport in fracture zones of granitic bedrocks.

    PubMed

    Molinero, Jorge; Samper, Javier

    2006-01-10

    Final disposal of high-level radioactive waste in deep repositories located in fractured granite formations is being considered by several countries. The assessment of the safety of such repositories requires using numerical models of groundwater flow, solute transport and chemical processes. These models are being developed from data and knowledge gained from in situ experiments such as the Redox Zone Experiment carried out at the underground laboratory of Aspö in Sweden. This experiment aimed at evaluating the effects of the construction of the access tunnel on the hydrogeological and hydrochemical conditions of a fracture zone intersected by the tunnel. Most chemical species showed dilution trends except for bicarbonate and sulphate, which unexpectedly increased with time. Molinero and Samper [Molinero, J. and Samper, J. Groundwater flow and solute transport in fracture zones: an improved model for a large-scale field experiment at Aspö (Sweden). J. Hydraul. Res., 42, Extra Issue, 157-172] presented a two-dimensional water flow and solute transport finite element model which reproduced measured drawdowns and dilution curves of conservative species. Here we extend their model with a reactive transport model which accounts for aqueous complexation, acid-base, redox processes, dissolution-precipitation of calcite, quartz, hematite and pyrite, and cation exchange between Na+ and Ca2+. The model provides field-scale estimates of the cation exchange capacity of the fracture zone and the redox potential of groundwater recharge. It serves also to identify the mineral phases controlling the solubility of iron. In addition, the model is useful to test the relevance of several geochemical processes. Model results rule out calcite dissolution as the process causing the increase in bicarbonate concentration and reject the following possible sources of sulphate: (1) pyrite dissolution, (2) leaching of alkaline sulphate-rich waters from a nearby rock landfill and (3) dissolution of

  19. Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.

    1997-01-01

    The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from

  20. Metabolic Flux Elucidation for Large-Scale Models Using 13C Labeled Isotopes

    PubMed Central

    Suthers, Patrick F.; Burgard, Anthony P.; Dasika, Madhukar S.; Nowroozi, Farnaz; Van Dien, Stephen; Keasling, Jay D.; Maranas, Costas D.

    2007-01-01

    A key consideration in metabolic engineering is the determination of fluxes of the metabolites within the cell. This determination provides an unambiguous description of metabolism before and/or after engineering interventions. Here, we present a computational framework that combines a constraint-based modeling framework with isotopic label tracing on a large scale. When cells are fed a growth substrate with certain carbon positions labeled with 13C, the distribution of this label in the intracellular metabolites can be calculated based on the known biochemistry of the participating pathways. Most labeling studies focus on skeletal representations of central metabolism and ignore many flux routes that could contribute to the observed isotopic labeling patterns. In contrast, our approach investigates the importance of carrying out isotopic labeling studies using a more comprehensive reaction network consisting of 350 fluxes and 184 metabolites in Escherichia coli, including global metabolite balances on cofactors such as ATP, NADH, and NADPH. The proposed procedure is demonstrated on an E. coli strain engineered to produce amorphadiene, a precursor to the anti-malarial drug artemisinin. The cells were grown in continuous culture on glucose containing 20% [U-13C]glucose; the measurements were made using GC-MS performed on 13 amino acids extracted from the cells. We identify flux distributions for which the calculated labeling patterns agree well with the measurements, pointing to the accuracy of the network reconstruction. Furthermore, we explore the robustness of the flux calculations to variability in the experimental MS measurements, as well as highlight the key experimental measurements necessary for flux determination. Finally, we discuss the effect of reducing the model, as well as shed light on the customization of the developed computational framework to other systems. PMID:17632026
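
    At its core, 13C flux elucidation adjusts unknown fluxes until simulated labeling patterns reproduce the measured ones. The toy below collapses this to a single flux split between two pathways with known label signatures, estimated by least squares; the framework in the paper balances isotopomer distributions over a 350-flux network. All numbers are invented.

        import numpy as np
        from scipy.optimize import least_squares

        # A labeled substrate is split between two pathways with flux
        # fractions f and 1 - f that imprint different label patterns on a
        # measured product (a stand-in for GC-MS mass-isotopomer fractions).
        pattern_a = np.array([0.10, 0.60, 0.30])   # labeling via pathway A
        pattern_b = np.array([0.50, 0.30, 0.20])   # labeling via pathway B
        f_true = 0.7
        measured = f_true * pattern_a + (1 - f_true) * pattern_b
        measured += np.random.default_rng(2).normal(0.0, 0.01, 3)  # MS noise

        def residuals(params):
            f = params[0]
            return f * pattern_a + (1 - f) * pattern_b - measured

        fit = least_squares(residuals, x0=[0.5], bounds=(0.0, 1.0))
        print(f"estimated pathway-A flux fraction: {fit.x[0]:.2f} (true {f_true})")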

  1. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    PubMed Central

    Jarosch, Robert

    2008-01-01

    This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force-deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large-scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i.e., latency relaxation, the Fenn effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc., are explained and interpreted with the help of the model experiments. PMID:19330099

  2. A Photohadronic Model of the Large-scale Jet of PKS 0637-752

    NASA Astrophysics Data System (ADS)

    Kusunose, Masaaki; Takahara, Fumio

    2017-01-01

    Strong X-ray emission from the large-scale jets of radio-loud quasars remains an open problem. Models based on inverse Compton scattering off cosmic microwave background photons by relativistically beamed jets have recently been ruled out, since Fermi LAT observations for 3C 273 and PKS 0637-752 give upper limits far below the model prediction. Synchrotron emission from a separate electron population with multi-hundred TeV energies remains a possibility, although its origin is not well known. We examine a photo-hadronic origin of such high energy electrons/positrons, assuming that protons are accelerated up to 10^19 eV and produce electrons/positrons through a Bethe-Heitler process and photo-pion production. These secondary electrons/positrons are injected at sufficiently high energies and produce X-rays and γ-rays by synchrotron radiation without conflicting with the Fermi LAT upper limits. We find that the resultant spectrum reproduces the X-ray observations of PKS 0637-752 well, if the proton power is at least 10^49 erg s^-1, which is highly super-Eddington. It is noted that the X-ray emission originates primarily from leptons of Bethe-Heitler origin, while leptons of photo-pion origin lose energy directly through synchrotron emission of multi-TeV photons rather than cascading. To avoid overproduction of the optical flux, the optical emission must be primarily due to synchrotron emission of secondary leptons rather than primary electrons, or a mild degree of beaming of the jet is needed if it is due to the primary electrons. The proton synchrotron luminosity is a few orders of magnitude smaller.

  3. Climate Impacts of Large-scale Wind Farms as Parameterized in a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Fitch, Anna

    2015-04-01

    The local, regional and global climate impacts of a large-scale global deployment of wind power in regionally high densities over land are investigated for a 60-year period. Wind farms are represented as elevated momentum sinks, as well as enhanced turbulence to represent turbine blade mixing, in a global climate model, the Community Atmosphere Model version 5 (CAM5). For a total installed capacity of 2.5 TW, to provide 16% of the world's projected electricity demand in 2050, minimal impacts are found, both regionally and globally, on temperature, sensible and latent heat fluxes, cloud and precipitation. A mean near-surface warming of 0.12 ± 0.07 K is seen within the wind farms. Impacts on wind speed and turbulence are more pronounced, but largely confined to within the wind farm areas. Increasing the wind farm areas to provide an installed capacity of 10 TW, or 65% of the 2050 electricity demand, causes further impacts; however, they remain slight overall. Maximum temperature changes are less than 0.5 K in the wind farm areas. Impacts, both within the wind farms and beyond, become more pronounced with a doubling in turbine density, to provide 20 TW of installed capacity, or 130% of the 2050 electricity demand. However, maximum temperature changes remain less than 0.7 K. Representing wind farms instead as an increase in surface roughness generally produces similar mean results; however, maximum changes increase and influences on wind and turbulence are exaggerated. Overall, wind farm impacts are much weaker than those expected from greenhouse gas emissions, with global mean climate impacts very slight.

  4. Analytical modeling of the statistical properties of the contrast of large-scale irregularities of the ionosphere

    NASA Astrophysics Data System (ADS)

    Vsekhsviatskaia, I. S.; Evstratova, E. A.; Kalinin, Iu. K.; Romanchuk, A. A.

    1989-08-01

    An analytical model is proposed for the distribution of variations of the relative contrast of the electron density of large-scale ionospheric irregularities. The model is characterized by nonzero asymmetry and excess. It is shown that the model can be applied to horizontal irregularity scales from hundreds to thousands of kilometers.

  5. Using Multiple Soil Carbon Maps Facilitates Better Comparisons with Large Scale Modeled Outputs

    NASA Astrophysics Data System (ADS)

    Johnson, K. D.; D'Amore, D. V.; Pastick, N. J.; Genet, H.; Mishra, U.; Wylie, B. K.; Bliss, N. B.

    2015-12-01

    The choice of method applied for mapping soil carbon is an important source of uncertainty when comparing observed soil carbon stocks to modeled outputs. Large-scale soil mapping often relies on non-random and opportunistically collected soils data to make predictions over remote areas where few observations are available for independent validation. Addressing model choice and non-random sampling is problematic when models use the data for the calibration and validation of historical outputs. One potential way to address this uncertainty is to compare the modeled outputs to a range of soil carbon observations from different soil carbon maps that are more likely to capture the true soil carbon value than one map alone. The current analysis demonstrates this approach in Alaska, which, despite suffering from a non-random sample, still has one of the richest datasets among the northern circumpolar regions. The outputs from 11 ESMs (from the fifth phase of the Coupled Model Intercomparison Project, CMIP5) and the Dynamic Organic Soil version of the Terrestrial Ecosystem Model (DOS-TEM) were compared to 4 different soil carbon maps. In the most detailed comparison, DOS-TEM simulated total profile soil carbon stocks that were within the range of the 4 maps for 18 of 23 Alaskan ecosystems, whereas the results fell within the 95% confidence interval of only 8 when compared to just one commonly used soil carbon map (NCSCDv2). At the ecoregion level, the range of soil carbon map estimates overlapped the range of ESM outputs in every ecoregion, although the mean value of the soil carbon maps was between 17% (Southern Interior) and 63% (Arctic) higher than the mean of the ESM outputs. For the whole state of Alaska, the DOS-TEM output and 3 of the 11 ESM outputs fell within the range of the 4 soil carbon map estimates. However, when compared to only one map and its 95% confidence interval (NCSCDv2), the DOS-TEM result fell outside the interval and only two ESMs fell within the observed interval.

  6. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on 74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and

  7. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than the chosen reference data. In aggregate, the simulations of land-surface latent and sensible
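
    The aggregation described above, a root-mean-square difference of each simulation against a chosen reference, with the spread among alternative observational datasets serving as a yardstick for observational uncertainty, reduces to a few lines. The fields below are synthetic stand-ins for the AMIP II variables.

        import numpy as np

        rng = np.random.default_rng(3)
        truth = rng.normal(15.0, 8.0, (12, 500))    # monthly field, 12 x gridpoints

        reference = truth + rng.normal(0.0, 1.0, truth.shape)   # chosen reference
        alt_obs = truth + rng.normal(0.5, 1.2, truth.shape)     # alternative dataset
        models = [truth + rng.normal(b, s, truth.shape)         # AMIP-like runs
                  for b, s in ((0.2, 1.5), (-1.0, 2.5), (0.5, 3.0))]

        def rmse(a, b):
            return float(np.sqrt(np.mean((a - b) ** 2)))

        print(f"observational uncertainty proxy: {rmse(reference, alt_obs):.2f}")
        for i, m in enumerate(models, 1):
            print(f"model {i}: RMSE vs reference = {rmse(m, reference):.2f}")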

  8. Model and controller reduction of large-scale structures based on projection methods

    NASA Astrophysics Data System (ADS)

    Gildin, Eduardo

    The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques yield large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent in the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that
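
    Of the reduction schemes compared, balanced truncation is the most compact to sketch. The square-root formulation below is a generic textbook version, not the dissertation's implementation: solve the two Lyapunov equations for the Gramians, then project onto the subspace of dominant Hankel singular values.

        import numpy as np
        from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

        def balanced_truncation(A, B, C, r):
            """Square-root balanced truncation of a stable system (A, B, C)."""
            Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A' = -B B'
            Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A' Wo + Wo A = -C' C
            Zc = cholesky(Wc, lower=True)
            Zo = cholesky(Wo, lower=True)
            U, s, Vt = svd(Zo.T @ Zc)                      # Hankel singular values
            S = np.diag(s[:r] ** -0.5)
            Tl = S @ U[:, :r].T @ Zo.T                     # r x n left projector
            Tr = Zc @ Vt[:r, :].T @ S                      # n x r right projector
            return Tl @ A @ Tr, Tl @ B, C @ Tr, s

        # Reduce a stable random 50-state model to 6 states.
        rng = np.random.default_rng(4)
        n = 50
        A = rng.normal(size=(n, n))
        A -= (np.abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)  # make stable
        B = rng.normal(size=(n, 1))
        C = rng.normal(size=(1, n))
        Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=6)
        print("last retained / first neglected Hankel SV:", hsv[5], hsv[6])

    As the abstract notes, reductions that are optimal in this open-loop sense need not preserve closed-loop stability, which motivates the dissertation's focus on controller-oriented reduction.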

  9. Isospin symmetry breaking and large-scale shell-model calculations with the Sakurai-Sugiura method

    NASA Astrophysics Data System (ADS)

    Mizusaki, Takahiro; Kaneko, Kazunari; Sun, Yang; Tazaki, Shigeru

    2015-05-01

    Recently, isospin symmetry breaking in the mass 60-70 region has been investigated based on large-scale shell-model calculations in terms of mirror energy differences (MED), Coulomb energy differences (CED) and triplet energy differences (TED). In the course of these investigations, we encountered a subtle problem in numerical calculations for odd-odd N = Z nuclei with large-scale shell-model calculations. Here we focus on how to solve this problem with the Sakurai-Sugiura (SS) method, which has recently been proposed as a new diagonalization method and has been successfully applied to nuclear shell-model calculations.

  10. Using stochastically-generated subcolumns to represent cloud structure in a large-scale model

    SciTech Connect

    Pincus, R; Hemler, R; Klein, S A

    2005-12-08

    A new method for representing subgrid-scale cloud structure, in which each model column is decomposed into a set of subcolumns, has been introduced into the Geophysical Fluid Dynamics Laboratory's global climate model AM2. Each subcolumn in the decomposition is homogeneous but the ensemble reproduces the initial profiles of cloud properties including cloud fraction, internal variability (if any) in cloud condensate, and arbitrary overlap assumptions that describe vertical correlations. These subcolumns are used in radiation and diagnostic calculations, and have allowed the introduction of more realistic overlap assumptions. This paper describes the impact of these new methods for representing cloud structure in instantaneous calculations and long-term integrations. Shortwave radiation computed using subcolumns and the random overlap assumption differs in the global annual average by more than 4 W/m^2 from the operational radiation scheme in instantaneous calculations; much of this difference is counteracted by a change in the overlap assumption to one in which overlap varies continuously with the separation distance between layers. Internal variability in cloud condensate, diagnosed from the mean condensate amount and cloud fraction, has about the same effect on radiative fluxes as does the ad hoc tuning accounting for this effect in the operational radiation scheme. Long simulations with the new model configuration show little difference from the operational model configuration, while statistical tests indicate that the model does not respond systematically to the sampling noise introduced by the approximate radiative transfer techniques introduced to work with the subcolumns.
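
    A minimal generator in this spirit, producing homogeneous binary subcolumns whose ensemble mean reproduces each layer's cloud fraction while the overlap assumption sets the vertical correlation, might look as follows. This is a simplified sketch, not the AM2 scheme; the paper's continuous separation-distance-dependent overlap is reduced here to two limiting cases.

        import numpy as np

        def make_subcolumns(cloud_frac, n_sub=1000, overlap="random", seed=0):
            """Binary cloud masks (n_sub x n_lev) whose ensemble mean
            reproduces each layer's cloud fraction."""
            rng = np.random.default_rng(seed)
            mask = np.zeros((n_sub, len(cloud_frac)), dtype=bool)
            u = rng.random(n_sub)
            for k, cf in enumerate(cloud_frac):
                if overlap == "random" or (k > 0 and cloud_frac[k - 1] == 0.0):
                    u = rng.random(n_sub)   # decorrelate from the layer above
                mask[:, k] = u < cf         # reused ranks -> maximum overlap
            return mask

        profile = np.array([0.0, 0.3, 0.5, 0.5, 0.2, 0.0, 0.4])
        for ovl in ("random", "maximum-random"):
            m = make_subcolumns(profile, overlap=ovl)
            print(ovl, "total cloud cover:", round(float(np.mean(m.any(axis=1))), 2))

    For the same profile, random overlap yields a larger total cloud cover than maximum-random overlap; the radiative consequence of that difference is the sensitivity the paper quantifies in W/m^2.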

  11. Large Scale Numerical Modelling to Study the Dispersion of Persistent Toxic Substances Over Europe

    NASA Astrophysics Data System (ADS)

    Aulinger, A.; Petersen, G.

    2003-12-01

    For the past two decades, environmental research at the GKSS Research Centre has been concerned with airborne pollutants with adverse effects on human health. The research was mainly focused on investigating the dispersion and deposition of heavy metals like lead and mercury over Europe by means of numerical modelling frameworks. Lead, in particular, served as a model substance to study the relationship between emissions and human exposure. The major source of airborne lead in Germany was fuel combustion until the 1980s, when its use as a gasoline additive declined due to political decisions. Since then, the concentration of lead in ambient air and the deposition rates have decreased in the same way as the consumption of leaded fuel. These observations could further be related to the decrease of lead concentrations in human blood measured during medical studies in several German cities. Based on the experience with models for heavy metal transport and deposition, we have now started to turn our research focus to organic substances, e.g. PAHs. PAHs have been recognized as significant airborne carcinogens for several decades. However, it is not yet possible to precisely quantify the risk of human exposure to these compounds. Physical and chemical data known from the literature, describing the partitioning of the compounds between particle and gas phase and their degradation in the gas phase, are implemented in a tropospheric chemistry module. In this way, the fate of PAHs in the atmosphere due to different particle types and sizes and different meteorological conditions is tested before carrying out large-scale and long-term studies. First model runs have been carried out for benzo(a)pyrene as one of the principal carcinogenic PAHs. Up to now, nearly nothing is known about degradation reactions of particle-bound BaP. Thus, they could not be taken into account in the model so far. On the other hand, the proportion of BaP in the gas phase has to be considered at higher ambient

  12. Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…

  13. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…

  14. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    NASA Astrophysics Data System (ADS)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing an extreme precipitation event can be difficult to disentangle. Here we examine the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the column quasi-geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating on large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  15. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with the Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  16. Using remote sensing for validation of a large scale hydrologic and hydrodynamic model in the Amazon

    NASA Astrophysics Data System (ADS)

    Paiva, R. C.; Bonnet, M.; Buarque, D. C.; Collischonn, W.; Frappart, F.; Mendes, C. B.

    2011-12-01

    We present the validation of the large-scale, catchment-based hydrological model MGB-IPH in the Amazon River basin. In this model, physically-based equations are used to simulate the hydrological processes, such as the Penman-Monteith method to estimate evapotranspiration, or the Moore and Clarke infiltration model. A new feature recently introduced in the model is a 1D hydrodynamic module for river routing. It uses the full Saint-Venant equations and a simple floodplain storage model. River and floodplain geometry parameters are extracted from the SRTM DEM using specially developed GIS algorithms that provide catchment discretization, estimation of river cross-section geometry and water storage volume variations in the floodplains. The model was forced using satellite-derived daily rainfall TRMM 3B42, calibrated against discharge data and first validated using daily discharges and water levels from 111 and 69 stream gauges, respectively. Then, we performed a validation against remote sensing derived hydrological products, including (i) monthly Terrestrial Water Storage (TWS) anomalies derived from GRACE, (ii) river water levels derived from ENVISAT satellite altimetry data (212 virtual stations from Santos da Silva et al., 2010) and (iii) a multi-satellite monthly global inundation extent dataset at ~25 x 25 km spatial resolution (Papa et al., 2010). Validation against river discharges shows good performance of the MGB-IPH model. For 70% of the stream gauges, the Nash-Sutcliffe efficiency index (ENS) is higher than 0.6, and at Óbidos, close to the Amazon River outlet, ENS equals 0.9 and the model bias equals -4.6%. The largest errors are located in drainage areas outside Brazil, and we speculate that this is due to the poor quality of rainfall datasets over these poorly monitored and/or mountainous areas. Validation against water levels shows that the model performs well in the major tributaries. For 60% of the virtual stations, ENS is higher than 0.6. But, similarly, largest
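
    The two skill scores quoted above are standard; a compact reference implementation on synthetic discharge data (assumed values, not the Amazon gauge records) is given below.

        import numpy as np

        def nash_sutcliffe(obs, sim):
            """ENS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def percent_bias(obs, sim):
            return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

        rng = np.random.default_rng(5)
        t = np.linspace(0.0, 4.0 * np.pi, 730)
        obs = 100000.0 + 30000.0 * np.sin(t)                 # "observed" m3/s
        sim = 0.954 * obs + rng.normal(0.0, 5000.0, t.size)  # a skilful run
        print(f"ENS  = {nash_sutcliffe(obs, sim):.2f}")
        print(f"bias = {percent_bias(obs, sim):.1f}%")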

  17. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  18. How do parcellation size and short-range connectivity affect dynamics in large-scale brain network models?

    PubMed

    Proix, Timothée; Spiegler, Andreas; Schirner, Michael; Rothmeier, Simon; Ritter, Petra; Jirsa, Viktor K

    2016-11-15

    Recent efforts to model human brain activity on the scale of the whole brain rest on connectivity estimates of large-scale networks derived from diffusion magnetic resonance imaging (dMRI). This type of connectivity describes white matter fiber tracts. The number of short-range cortico-cortical white-matter connections is, however, underrepresented in such large-scale brain models. It is still unclear, on the one hand, which scale of representation of white matter fibers is optimal to describe brain activity on a large scale, such as recorded with magneto- or electroencephalography (M/EEG) or functional magnetic resonance imaging (fMRI), and, on the other hand, to what extent short-range connections, which are typically local, should be taken into account. In this article we quantified the effect of connectivity upon large-scale brain network dynamics by (i) systematically varying the number of brain regions before computing the connectivity matrix, and by (ii) adding generic short-range connections. We used dMRI data from the Human Connectome Project. We developed a suite of preprocessing modules called SCRIPTS to prepare these imaging data for The Virtual Brain, a neuroinformatics platform for large-scale brain modeling and simulations. We performed simulations under different connectivity conditions and quantified the spatiotemporal dynamics in terms of Shannon entropy, dwell time and principal component analysis. For the reconstructed connectivity, our results show that the major white matter fiber bundles play an important role in shaping slow dynamics in large-scale brain networks (e.g. in fMRI). Faster dynamics such as gamma oscillations (around 40 Hz) are sensitive to the short-range connectivity if transmission delays are considered.
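
    Two of the summary statistics used above to quantify the simulated spatiotemporal dynamics, Shannon entropy and dwell time, can be computed from a discretized state sequence as follows. This is a generic sketch on synthetic labels, not the SCRIPTS / The Virtual Brain pipeline.

        import numpy as np

        def state_statistics(labels):
            """Shannon entropy (bits) of state occupancy and mean dwell time
            (samples) of a discrete state sequence."""
            labels = np.asarray(labels)
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            entropy = -np.sum(p * np.log2(p))
            switches = np.flatnonzero(labels[1:] != labels[:-1])
            runs = np.diff(np.concatenate(([0], switches + 1, [labels.size])))
            return entropy, runs.mean()

        rng = np.random.default_rng(6)
        seq = np.repeat(rng.integers(0, 4, 200), rng.integers(5, 50, 200))
        H, dwell = state_statistics(seq)
        print(f"entropy {H:.2f} bits, mean dwell {dwell:.1f} samples")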

  19. Development of Large-Scale Forcing Data for GoAmazon2014/5 Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Tang, S.; Xie, S.; Zhang, Y.; Schumacher, C.; Upton, H. M.; Ahlgrimm, M.; Feng, Z.

    2015-12-01

    The Observations and Modeling of the Green Ocean 2014-2015 (GoAmazon2014/5) field campaign is an international collaborative experiment conducted near Manaus, Brazil, from January 2014 through December 2015. The experiment is designed to enable the study of aerosols, tropical clouds, convection and their interactions. To support modeling studies of these processes with data collected from the GoAmazon2014/5 campaign, we have developed large-scale forcing data (e.g., vertical velocities and advective tendencies) for the second intensive operational period (IOP) of GoAmazon2014/5, from 1 September to 10 October 2014. The method used in this study is the constrained variational analysis method, in which the large-scale state fields are constrained by surface and top-of-atmosphere observations (e.g. surface precipitation and outgoing longwave radiation) to conserve column-integrated mass, moisture and dry static energy. To address potential uncertainties in the derived forcing data due to uncertainties in surface precipitation, two sets of large-scale forcing data were developed based on the ECMWF analysis constrained by two precipitation products, from the SIPAM radar and from TRMM 3B42, respectively. Our initial analysis shows large differences between these two precipitation products, which cause considerable differences in the derived large-scale forcing data. The sensitivity of the large-scale forcing data to other surface constraints, such as surface latent and sensible heat fluxes, will be explored. The characteristics of the large-scale forcing structures for selected cases will be discussed.
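
    The heart of the constrained variational method is to adjust a first-guess state as little as possible, in a weighted least-squares sense, so that the column budgets match the surface and top-of-atmosphere observations. The single-constraint sketch below illustrates the mechanics on an invented column moisture budget; the actual analysis applies several coupled mass, moisture and energy constraints.

        import numpy as np

        def constrained_adjust(x_b, a, c, w=None):
            """Minimally adjust first guess x_b (weights w) so that a.x = c:
            x = x_b + W^-1 a (c - a.x_b) / (a.W^-1 a)."""
            w = np.ones_like(x_b) if w is None else w
            lam = (c - a @ x_b) / (a @ (a / w))
            return x_b + (a / w) * lam

        # Toy budget: layer moisture-flux convergences, weighted by layer
        # mass dp/g, must integrate to an observed column value (e.g. P - E).
        dp_g = np.array([30.0, 25.0, 20.0, 15.0, 10.0])   # layer weights
        conv_fg = np.array([2.0, 1.5, 1.0, 0.5, 0.2])     # first guess
        obs_net = 90.0                                    # observed budget
        conv = constrained_adjust(conv_fg, dp_g, obs_net)
        print("budget before:", dp_g @ conv_fg, " after:", round(dp_g @ conv, 6))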

  20. LARGE-SCALE CYCLOGENESIS, FRONTAL WAVES AND DUST ON MARS: MODELING AND DIAGNOSTIC CONSIDERATIONS

    NASA Astrophysics Data System (ADS)

    Hollingsworth, J.; Kahre, M.

    2009-12-01

    During late autumn through early spring, Mars’ northern middle and high latitudes exhibit very strong equator-to-pole mean temperature contrasts (i.e., baroclinicity). From data collected during the Viking era and recent observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) missions, this strong baroclinicity supports vigorous large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These systems also have accompanying sub-synoptic scale ramifications on the atmospheric environment through cyclonic/anticyclonic winds, intense deformations and contractions/dilations in temperatures, and sharp perturbations amongst atmospheric tracers (e.g., dust and volatiles/condensates). Mars’ northern-hemisphere frontal waves can exhibit extended meridional structure, and appear to be active agents in the planet’s dust cycle. Their parent cyclones tend to develop, travel eastward, and decay preferentially within certain geographic regions (i.e., storm zones). We adapt a version of the NASA Ames Mars general circulation model (GCM) at high horizontal resolution that includes the lifting, transport and sedimentation of radiatively-active dust to investigate the nature of cyclogenesis and frontal-wave circulations (both horizontally and vertically), and regional dust transport and concentration within the atmosphere. Near late winter and early spring (Ls ~ 320-350°), high-resolution simulations indicate that the predominant dust lifting occurs through wind-stress lifting, in particular over the Tharsis highlands of the western hemisphere and to a lesser extent over the Arabia highlands of the eastern hemisphere. The former region also indicates considerable interaction with regards to upslope/downslope (i.e., nocturnal) flows and the synoptic/subsynoptic-scale circulations associated with cyclogenesis, whereby dust can be readily “focused” within a frontal-wave disturbance and carried downstream both

  1. Path2Models: large-scale generation of computational models from biochemical pathway maps

    PubMed Central

    2013-01-01

    Background Systems biology projects and omics technologies have led to a growing number of biochemical pathway models and reconstructions. However, the majority of these models are still created de novo, based on literature mining and the manual processing of pathway data. Results To increase the efficiency of model creation, the Path2Models project has automatically generated mathematical models from pathway representations using a suite of freely available software. Data sources include KEGG, BioCarta, MetaCyc and SABIO-RK. Depending on the source data, three types of models are provided: kinetic, logical and constraint-based. Models from over 2 600 organisms are encoded consistently in SBML, and are made freely available through BioModels Database at http://www.ebi.ac.uk/biomodels-main/path2models. Each model contains the list of participants, their interactions, the relevant mathematical constructs, and initial parameter values. Most models are also available as easy-to-understand graphical SBGN maps. Conclusions To date, the project has resulted in more than 140 000 freely available models. Such a resource can tremendously accelerate the development of mathematical models by providing initial starting models for simulation and analysis, which can be subsequently curated and further parameterized. PMID:24180668

  2. A large scale microwave emission model for forests. Contribution to the SMOS algorithm

    NASA Astrophysics Data System (ADS)

    Rahmoune, R.; Della Vecchia, A.; Ferrazzoli, P.; Guerriero, L.; Martin-Porqueras, F.

    2009-04-01

    It is well known that surface soil moisture plays an important role in the water cycle and the global climate. SMOS is an L-band multi-angle dual-polarization microwave radiometer for global monitoring of this variable. In the areas covered by forests, the opacity is relatively high, and the knowledge of moisture remains problematic. A significant percentage of SMOS pixels at the global scale is affected by fractional forest cover. Whereas the effect of the vegetation can be corrected thanks to a simple radiative model, in the case of dense forests the wave penetration is limited and the sensitivity to variations of soil moisture is poor. However, most of the pixels are mixed, and a reliable estimate of forest emissivity is important to retrieve the soil moisture of the areas less affected by forest cover. Moreover, there are many sparse woodlands, where the sensitivity to variations of soil moisture is still acceptable. At the scale of spaceborne radiometers, it is difficult to have detailed knowledge of the variables which affect the overall emissivity. In order to manage these problems effectively, the electromagnetic model developed at Tor Vergata University was combined with information available from the forest literature. Using allometric equations and other information, the geometrical and dielectric inputs required by the model were related to global variables available at large scale, such as the Leaf Area Index. This procedure is necessarily approximate. In a first version of the model, forest variables were assumed to be constant in time, and were simply related to the maximum yearly value of Leaf Area Index. Moreover, a unique sparse distribution of trunk diameters was assumed. Finally, the temperature distribution within the crown canopy was assumed to be uniform. The model is being refined, in order to consider seasonal variations of foliage cover, subdivided into arboreous foliage and understory contributions. Different distributions of trunk diameter

  3. Large scale 3-D modeling by integration of resistivity models and borehole data through inversion

    NASA Astrophysics Data System (ADS)

    Foged, N.; Marker, P. A.; Christansen, A. V.; Bauer-Gottwein, P.; Jørgensen, F.; Høyer, A.-S.; Auken, E.

    2014-02-01

    We present an automatic method for parameterization of a 3-D model of the subsurface, integrating lithological information from boreholes with resistivity models through an inverse optimization, with the objective of further detailing of geological models, or as direct input to groundwater models. The parameter of interest is the clay fraction, expressed as the relative length of clay units in a depth interval. The clay fraction is obtained from lithological logs, and the clay fraction from the resistivity is obtained by establishing a simple petrophysical relationship, a translator function, between resistivity and the clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we get a 3-D clay fraction model, which holds information from the resistivity dataset and the borehole dataset in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the concept to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey in the parameterization of the 3-D model covering 156 km2. The final five-cluster 3-D model differentiates between clay materials and different high-resistivity materials from information held in the resistivity model and borehole observations, respectively.

  4. Large-scale 3-D modeling by integration of resistivity models and borehole data through inversion

    NASA Astrophysics Data System (ADS)

    Foged, N.; Marker, P. A.; Christansen, A. V.; Bauer-Gottwein, P.; Jørgensen, F.; Høyer, A.-S.; Auken, E.

    2014-11-01

    We present an automatic method for parameterization of a 3-D model of the subsurface, integrating lithological information from boreholes with resistivity models through an inverse optimization, with the objective of further detailing of geological models, or as direct input into groundwater models. The parameter of interest is the clay fraction, expressed as the relative length of clay units in a depth interval. The clay fraction is obtained from lithological logs and the clay fraction from the resistivity is obtained by establishing a simple petrophysical relationship, a translator function, between resistivity and the clay fraction. Through inversion we use the lithological data and the resistivity data to determine the optimum spatially distributed translator function. Applying the translator function we get a 3-D clay fraction model, which holds information from the resistivity data set and the borehole data set in one variable. Finally, we use k-means clustering to generate a 3-D model of the subsurface structures. We apply the procedure to the Norsminde survey in Denmark, integrating approximately 700 boreholes and more than 100 000 resistivity models from an airborne survey in the parameterization of the 3-D model covering 156 km2. The final five-cluster 3-D model differentiates between clay materials and different high-resistivity materials from information held in the resistivity model and borehole observations, respectively.
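
    A schematic version of this two-step procedure, a translator function turning resistivity into clay fraction followed by k-means clustering of the merged variables, is given below. The ramp thresholds, synthetic resistivities and feature choice are illustrative only; in the papers above the translator parameters are themselves estimated through inversion against the borehole logs.

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def translator(rho, low=30.0, high=80.0):
            """Map resistivity (ohm m) to clay fraction with a linear ramp:
            1 below `low`, 0 above `high` (fixed here; spatially variable
            and estimated by inversion in the actual method)."""
            return np.clip((high - rho) / (high - low), 0.0, 1.0)

        rng = np.random.default_rng(7)
        rho = np.concatenate([rng.lognormal(3.2, 0.3, 400),    # clay-rich
                              rng.lognormal(5.0, 0.4, 600)])   # sand/gravel
        depth = rng.uniform(0.0, 100.0, rho.size)

        features = np.column_stack([translator(rho), depth / 100.0])
        np.random.seed(7)                     # kmeans2 uses the global state
        centroids, labels = kmeans2(features, 5, minit="++")
        for c in range(5):
            sel = labels == c
            print(f"cluster {c}: n={sel.sum():4d}, "
                  f"mean clay fraction {features[sel, 0].mean():.2f}")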

  5. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, E. H.; van Beek, L. P. H.; de Jong, S. M.; van Geer, F. C.; Bierkens, M. F. P.

    2011-09-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin that contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of both models are separately performed). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reasonably well reproduce the observed groundwater head time series. However, we note that there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show a promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.

  6. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    NASA Astrophysics Data System (ADS)

    Güntner, Andreas

    2002-07-01

    the framework of an integrated model which contains modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture in particular the effect of structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, as reinfiltration of surface runoff, being of particular importance in semi-arid environments, can thus be represented also within the large-scale model in a simplified form. Depending on the resolution of available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but by the distribution of sub-scale units and by statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa which respect specific features of semi-arid hydrology are: (1) A two-layer model for evapotranspiration comprises energy transfer at the soil surface (including soil evaporation), which is of importance in view of the mainly sparse vegetation cover. Additionally, vegetation parameters are differentiated in space and time in dependence on the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach respecting different reservoirs size classes and their interaction via the river network is applied. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa. (5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied

  7. Testing LTB void models without the cosmic microwave background or large scale structure: new constraints from galaxy ages

    SciTech Connect

    Putter, Roland de; Verde, Licia; Jimenez, Raul E-mail: liciaverde@icc.ub.edu

    2013-02-01

    We present new observational constraints on inhomogeneous models based on observables independent of the CMB and large-scale structure. Using Bayesian evidence we find very strong evidence for the homogeneous LCDM model, thus disfavouring inhomogeneous models. Our new constraints are based on quantities independent of the growth of perturbations and rely on cosmic clocks based on atomic physics and on the local density of matter.

  8. Using cloud resolving model simulations of deep convection to inform cloud parameterizations in large-scale models

    SciTech Connect

    Klein, Stephen A.; Pincus, Robert; Xu, Kuan-man

    2003-06-23

    Cloud parameterizations in large-scale models struggle to address the significant non-linear effects of radiation and precipitation that arise from horizontal inhomogeneity in cloud properties at scales smaller than the grid box size of the large-scale models. Statistical cloud schemes provide an attractive framework to self-consistently predict the horizontal inhomogeneity in radiation and microphysics because the probability distribution function (PDF) of total water contained in the scheme can be used to calculate these non-linear effects. Statistical cloud schemes were originally developed for boundary layer studies, so extending them to a global model with many different environments is not straightforward. For example, deep convection creates abundant cloudiness, and yet little is known about how deep convection alters the PDF of total water or how to parameterize these impacts. These issues are explored with data from a 29-day simulation by a cloud-resolving model (CRM) of the July 1997 ARM Intensive Observing Period at the Southern Great Plains site. The simulation is used to answer two questions: (a) how well can the beta distribution represent the PDFs of total water relative to saturation resolved by the CRM? (b) how can the effects of convection on the PDF be parameterized? In addition to answering these questions, additional sections more fully describe the proposed statistical cloud scheme and the CRM simulation and analysis methods.
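
    Question (a) reduces to fitting a beta distribution to the CRM's resolved total-water samples and checking what the fit implies; the sketch below fits by the method of moments and diagnoses cloud fraction as the probability of exceeding saturation. The samples and the saturation value are synthetic.

        import numpy as np
        from scipy import stats

        def beta_cloud_fraction(s, s_min, s_max, q_sat):
            """Fit a beta PDF to total water s on [s_min, s_max] by moments,
            then diagnose cloud fraction as P(s > q_sat)."""
            x = (np.asarray(s) - s_min) / (s_max - s_min)
            m, v = x.mean(), x.var()
            common = m * (1.0 - m) / v - 1.0     # method-of-moments estimate
            p, q = m * common, (1.0 - m) * common
            xc = (q_sat - s_min) / (s_max - s_min)
            return 1.0 - stats.beta.cdf(xc, p, q)

        rng = np.random.default_rng(8)
        s = 1.0 + 8.0 * rng.beta(4.0, 6.0, 5000)     # total water, g/kg
        cf = beta_cloud_fraction(s, s_min=1.0, s_max=9.0, q_sat=5.0)
        print(f"diagnosed cloud fraction {cf:.2f}, "
              f"sample fraction {np.mean(s > 5.0):.2f}")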

  9. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    PubMed

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  10. Computational Models of Consumer Confidence from Large-Scale Online Attention Data: Crowd-Sourcing Econometrics

    PubMed Central

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting. PMID:25826692

  11. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    NASA Astrophysics Data System (ADS)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a Statewide Agricultural Production model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of the water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP called SWEAP was developed, which has the Planning Area delimitations of WEAP, a maximum entropy model to estimate evenly sized steps (tranches) of the derived demand functions for water, and a translation of water tranches into cropped land. In addition, a modified version of WEAP called ECONWEAP was created, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP, as well as an assessment of the approximation of tranches. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP. [Figure: percentage difference in total agricultural revenues, ECONWEAP versus WEAP.]
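
    The step-function (tranche) approximation described above can be illustrated in a few lines. The sketch below discretizes an invented inverse demand curve into equal-width tranches whose heights are the average willingness to pay; the curve, units and tranche count are assumptions, not values from SWAP.

    ```python
    import numpy as np

    def demand_price(q):                       # hypothetical inverse demand curve
        return 120.0 * np.exp(-0.001 * q)      # marginal value of water ($/af)

    def step_tranches(q_max, n=5):
        """Approximate the demand curve by n equal-width steps (tranches)."""
        edges = np.linspace(0.0, q_max, n + 1)
        mids = 0.5 * (edges[:-1] + edges[1:])
        return list(zip(edges[1:], demand_price(mids)))

    for q_upper, value in step_tranches(2000.0):
        print(f"tranche up to {q_upper:6.0f} af -> priority value {value:6.2f}")
    ```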

  12. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  13. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp(2)/sp(3) hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  14. Seismic Modelling of the Earth's Large-Scale Three-Dimensional Structure

    NASA Astrophysics Data System (ADS)

    Woodhouse, J. H.; Dziewonski, A. M.

    1989-07-01

    Several different kinds of seismological data, spanning more than three orders of magnitude in frequency, have been employed in the study of the Earth's large-scale three-dimensional structure. These yield different but overlapping information, which is leading to a coherent picture of the Earth's internal heterogeneity. In this article we describe several methods of seismic inversion and intercompare the resulting models. Models of upper-mantle shear velocity based upon mantle waveforms (Woodhouse & Dziewonski (J. geophys. Res. 89, 5953-5986 (1984))) (f ≲ 7 mHz) and long-period body waveforms (f ≲ 20 mHz; Woodhouse & Dziewonski (Eos, Wash. 67, 307 (1986))) show the mid-oceanic ridges to be the major low-velocity anomalies in the uppermost mantle, together with regions in the western Pacific, characterized by back-arc volcanism. High velocities are associated with the continents, and in particular with the continental shields, extending to depths in excess of 300 km. By assuming a given ratio between density and wave velocity variations, and a given mantle viscosity structure, such models have been successful in explaining some aspects of observed plate motion in terms of thermal convection in the mantle (Forte & Peltier (J. geophys. Res. 92, 3645-3679 (1987))). An important qualitative conclusion from such analysis is that the magnitude of the observed seismic anomalies is of the order expected in a convecting system having the viscosity, temperature derivatives and flow rates which characterize the mantle. Models of the lower mantle based upon P-wave arrival times (f ≈ 1 Hz; Dziewonski (J. geophys. Res. 89, 5929-5952 (1984)); Morelli & Dziewonski (Eos, Wash. 67, 311 (1986))), SH waveforms (f ≈ 20 mHz; Woodhouse & Dziewonski (1986)) and free oscillations (Giardini et al. (Nature, Lond. 325, 405-411 (1987); J. geophys. Res. 93, 13716-13742 (1988))) (f ≈ 0.5-5 mHz) show a very long wavelength pattern, largely contained in spherical harmonics of

  15. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in a large-scale disaster-affected area. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
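
    As a rough illustration of the solution approach (not the authors' code), the sketch below runs a mutation-only evolutionary search for which depot sites to open so as to minimize a total response-time proxy; the instance data, objective and settings are invented.

    ```python
    import random

    random.seed(7)
    SITES, OPEN = 12, 4                        # candidate depots, depots to open
    DEMANDS = [(random.random(), random.random()) for _ in range(30)]
    COORDS = [(random.random(), random.random()) for _ in range(SITES)]

    def response_time(sol):
        """Total distance from each demand point to its nearest open depot."""
        depots = [COORDS[i] for i, g in enumerate(sol) if g]
        return sum(min(((dx - x) ** 2 + (dy - y) ** 2) ** 0.5 for dx, dy in depots)
                   for x, y in DEMANDS)

    def random_solution():
        genes = [1] * OPEN + [0] * (SITES - OPEN)
        random.shuffle(genes)
        return genes

    def mutate(sol):
        child = sol[:]                         # swap one open and one closed site
        i = random.choice([k for k, g in enumerate(child) if g == 1])
        j = random.choice([k for k, g in enumerate(child) if g == 0])
        child[i], child[j] = 0, 1
        return child

    pop = [random_solution() for _ in range(40)]
    for _ in range(200):                       # keep the best half, mutate to refill
        pop.sort(key=response_time)
        pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]
    print(round(min(response_time(s) for s in pop), 3))
    ```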

  16. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in a large-scale disaster-affected area. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  17. Geodynamic models of a Yellowstone plume and its interaction with subduction and large-scale mantle circulation

    NASA Astrophysics Data System (ADS)

    Steinberger, B. M.

    2012-12-01

    Yellowstone is a site of intra-plate volcanism, with many traits of a classical "hotspot" (chain of age-progressive volcanics with active volcanism on one end; associated with flood basalt), yet it is atypical, as it is located near an area of Cenozoic subduction zones. Tomographic images show a tilted plume conduit in the upper mantle beneath Yellowstone; a similar tilt is predicted by simple geodynamic models: In these models, an initially (at the time when the corresponding Large Igneous Province erupted, ~15 Myr ago) vertical conduit gets tilted while it is advected in and buoyantly rising through large-scale flow: Generally eastward flow in the upper mantle in these models yields a predicted eastward tilt (i.e., the conduit is coming up from the west). In these models, mantle flow is derived from density anomalies, which are either inferred from seismic tomography or from subduction history. One drawback of these models is, that the initial plume location is chosen "ad hoc" such that the present-day position of Yellowstone is matched. Therefore, in another set of models, we study how subducted slabs (inferred from 300 Myr of subduction history) shape a basal chemically distinct layer into thermo-chemical piles, and create plumes along its margins. Our results show the formation of a Pacific pile. As subduction approaches this pile, the models frequently show part of the pile being separated off, with a plume rising above this part. This could be an analog to the formation and dynamics of the Yellowstone plume, yet there is a mismatch in location of about 30 degrees. It is therefore a goal to devise a model that combines the advantages of both models, i.e. a fully dynamic plume model, that matches the present-day position of Yellowstone. This will probably require "seeding" a plume through a thermal anomaly at the core-mantle boundary and possibly other modifications. Also, for a realistic model, the present-day density anomaly derived from subduction should

  18. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Service. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale intensive data requirements, and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region as well as to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  19. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  20. Maintaining Realistic Uncertainty in Model and Forecast

    DTIC Science & Technology

    1999-09-30

    Maintaining Realistic Uncertainty in Model and Forecast. Leonard Smith, Pembroke College, Oxford University, St Aldates, Oxford OX1 3LB, England.

  1. Maintaining Realistic Uncertainty in Model and Forecast

    DTIC Science & Technology

    2000-09-30

    Maintaining Realistic Uncertainty in Model and Forecast. Leonard Smith, Pembroke College, Oxford University, St. Aldates, Oxford OX1 1DW, United Kingdom.

  2. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
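
    A minimal fixed-cost location ILP of the kind described can be written down with the PuLP modeling library, as in the hedged sketch below; the toy instance, costs and variable names are assumptions, not the authors' formulation.

    ```python
    import pulp

    CELLS, DEMANDS = range(4), range(6)             # toy grid instance
    FIXED = {j: 10.0 + j for j in CELLS}            # fixed cost of opening cell j
    SERVE = {(i, j): abs(i - j) + 1.0 for i in DEMANDS for j in CELLS}

    prob = pulp.LpProblem("gblp", pulp.LpMinimize)
    y = pulp.LpVariable.dicts("open", CELLS, cat="Binary")
    x = pulp.LpVariable.dicts("assign", SERVE, cat="Binary")
    prob += (pulp.lpSum(FIXED[j] * y[j] for j in CELLS)
             + pulp.lpSum(SERVE[i, j] * x[i, j] for (i, j) in SERVE))
    for i in DEMANDS:                               # each demand cell served once
        prob += pulp.lpSum(x[i, j] for j in CELLS) == 1
    for (i, j) in SERVE:                            # only open cells may serve
        prob += x[i, j] <= y[j]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.LpStatus[prob.status], pulp.value(prob.objective))
    ```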

  3. Multi-scale Modeling of Radiation Damage: Large Scale Data Analysis

    NASA Astrophysics Data System (ADS)

    Warrier, M.; Bhardwaj, U.; Bukkuru, S.

    2016-10-01

    Modification of materials in nuclear reactors due to neutron irradiation is a multiscale problem. These neutrons pass through materials creating several energetic primary knock-on atoms (PKA) which cause localized collision cascades creating damage tracks, defects (interstitials and vacancies) and defect clusters depending on the energy of the PKA. These defects diffuse and recombine throughout the whole duration of operation of the reactor, thereby changing the micro-structure of the material and its properties. It is therefore desirable to develop predictive computational tools to simulate the micro-structural changes of irradiated materials. In this paper we describe how statistical averages of the collision cascades from thousands of MD simulations are used to provide inputs to Kinetic Monte Carlo (KMC) simulations which can handle larger sizes, more defects and longer time durations. Use of unsupervised learning and graph optimization in handling and analyzing large scale MD data will be highlighted.

  4. The Large-Scale Debris Avalanche From The Tancitaro Volcano (Mexico): Characterization And Modeling

    NASA Astrophysics Data System (ADS)

    Morelli, S.; Gigli, G.; Falorni, G.; Garduno Monroy, V. H.; Arreygue, E.

    2008-12-01

    until they disappear entirely in the most distal reaches. The granulometric analysis and the comparison between the debris avalanche of the Tancitaro and other collapses with similar morphometric features (vertical relief during runout, travel distance, volume and area of the deposit) indicate that the collapse was most likely not primed by any type of eruption, but rather triggered by a strong seismic shock that could have induced the failure of a portion of the edifice, already deeply altered by intense hydrothermal fluid circulation. It is also possible to hypothesize that mechanical fluidization may have been the mechanism controlling the long runout of the avalanche, as has been determined for other well-known events. The behavior of the Tancitaro debris avalanche was numerically modeled using the DAN-W code. By suitably modifying the rheological parameters of the different models selectable within DAN, it was determined that the two-parameter 'Voellmy model' provides the best approximation of the avalanche movement. The Voellmy model produces the most realistic results in terms of runout distance, velocity and spatial distribution of the failed mass. Since the Tancitaro event was not witnessed directly, it is possible to infer approximate velocities only from comparisons with similar and documented events, namely the Mt. St. Helens debris avalanche of May 18, 1980.
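
    For reference, the two-parameter Voellmy rheology mentioned above combines a frictional term and a velocity-squared turbulence term. The sketch below evaluates the commonly cited form tau = mu*sigma + rho*g*v^2/xi; the parameter values are illustrative assumptions, not those calibrated for the Tancitaro event.

    ```python
    RHO, G = 2000.0, 9.81            # assumed bulk density (kg/m3), gravity (m/s2)
    MU, XI = 0.1, 500.0              # friction coefficient (-), turbulence (m/s2)

    def voellmy_resistance(depth_m, speed_ms):
        """Basal flow resistance tau = mu*sigma + rho*g*v**2/xi (flat bed)."""
        sigma = RHO * G * depth_m            # bed-normal stress
        return MU * sigma + RHO * G * speed_ms ** 2 / XI

    print(voellmy_resistance(depth_m=10.0, speed_ms=40.0), "Pa")
    ```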

  5. Realistic inflation models and primordial gravity waves

    NASA Astrophysics Data System (ADS)

    Rehman, Mansoor Ur

    We investigate both supersymmetric and non-supersymmetric realistic models of inflation. In non-supersymmetric models, inflation is successfully realized by employing both Coleman-Weinberg and Higgs potentials in GUTs such as SU(5) and SO(10). The quantum smearing of tree-level predictions is discussed for Higgs inflation. These quantum corrections can arise from the inflaton couplings to other particles such as GUT scalars. As a result of including these corrections, a reduction in the tensor-to-scalar ratio r, a canonical measure of gravity waves produced during inflation, is observed. In a simple φ⁴ chaotic model, we reconsider a non-minimal (ξ > 0) gravitational coupling of the inflaton φ arising from the interaction ξRφ², where R is the Ricci scalar. In estimating bounds on various inflationary parameters we also include quantum corrections. We emphasize that while working with high-precision observations such as the current Planck satellite experiment we cannot ignore these radiative and gravitational corrections in analyzing the predictions of various inflationary models. In supersymmetric hybrid inflation with a minimal Kähler potential, the soft SUSY-breaking terms are shown to play an important role in realizing inflation consistent with the latest WMAP data. The SUSY hybrid models which we consider here predict exceedingly small values of r. However, to obtain observable gravity waves the non-minimal Kähler potential turns out to be a necessary ingredient. A realistic flipped SU(5) model, which benefits from the absence of topological defects, is considered in standard SUSY hybrid inflation. We also present a discussion of shifted hybrid inflation in a realistic SUSY SU(5) GUT.
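
    For reference, a non-minimal coupling of this kind enters the action in the generic form below (conventions differ between papers; this is a standard textbook expression, not a formula quoted from this record):

    ```latex
    S \supset \int \mathrm{d}^4x\,\sqrt{-g}\,
      \left[ \frac{M_P^2 + \xi\phi^2}{2}\,R
           - \frac{1}{2}\,g^{\mu\nu}\partial_\mu\phi\,\partial_\nu\phi
           - \frac{\lambda}{4}\,\phi^4 \right]
    ```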

  6. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  7. Three-dimensional mechanical modeling of large-scale crustal deformation in China constrained by the GPS velocity field

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Ye, Zheng-Ren; He, Jian-Kun

    2008-01-01

    We present a quantitative model for the crustal movement in China with respect to the Eurasia plate by using the three-dimensional finite element code ADELI. The model consists of an elastoplastic upper lithosphere and a viscoelastic lower lithosphere. The lithosphere is supported by hydrostatic pressure at its base. The India-Eurasia collision is modeled as a velocity boundary condition. Ten large-scale faults are introduced as Coulomb-type frictional zones in the modeling. The root mean square (RMS) differences between the observed and predicted east and north velocity components (RMS(Ue) and RMS(Un)) are used as the measures to evaluate our simulations. We model the long-term crustal deformation in China by adjusting the fault frictions over the range 0.01 to 0.5 and considering the effects resulting from lithospheric viscosity variation and topographic loading. Our results suggest most of the large-scale fault frictions are not larger than 0.1, which is consistent with other large-scale faults such as the North Anatolian fault (Provost, A.S., Chery, J., Hassani, R., 2003. Three-dimensional mechanical modeling of the GPS velocity field along the North Anatolian fault. Earth Planet. Sci. Lett. 209, 361-377) and the San Andreas fault (Mount, V.S., Suppe, J., 1987. State of stress near the San Andreas fault: implications for wrench tectonics. Geology, 15, 1143-1146). Further, we examine the effects on the long-term crustal deformation in China of three factors: the large-scale faults, the lithospheric viscosity structure and topographic loading. Results indicate that the lithospheric viscosity structure and the topographic loading have important influences on the crustal deformation in China, while the influences of the large-scale faults are small. Although our simulations satisfactorily reproduce the general picture of crustal movement in China, there is a poor agreement between the model and the observed GPS
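
    The evaluation measure described above is simple to state in code: the root mean square of the differences between observed and predicted velocity components. The sketch below uses invented placeholder values for the GPS and model velocities.

    ```python
    import numpy as np

    def rms(obs, pred):
        """Root mean square of component-wise differences."""
        return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2)))

    ue_obs, ue_mod = [3.1, 5.2, 7.8], [2.9, 5.6, 7.1]   # east components (mm/yr)
    un_obs, un_mod = [1.0, 0.4, 2.2], [1.3, 0.2, 2.0]   # north components (mm/yr)
    print("RMS(Ue) =", rms(ue_obs, ue_mod), "RMS(Un) =", rms(un_obs, un_mod))
    ```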

  8. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, E. H.; van Beek, L. P. H.; de Jong, S. M.; van Geer, F. C.; Bierkens, M. F. P.

    2011-03-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data, which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, which contains groundwater head data used to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Although the method that we used to couple the land surface and MODFLOW groundwater models is an offline-coupling procedure (i.e. the simulations of both models were performed separately), the results are promising. The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we ran several groundwater model scenarios with various hydrogeological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, we note that there are still some limitations in the current approach, specifically because the current offline-coupling technique simplifies dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. Also, the current sensitivity analysis ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results of the current model show promise for large-scale groundwater modeling practices, including for data-poor environments and at the global scale.
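
    The offline (one-way) coupling described above can be caricatured in a few lines: the land surface model is run for the full period first, and the groundwater model is then forced with its output, with no feedback. The toy recharge rule and linear-reservoir stand-in below are assumptions, not PCR-GLOBWB or MODFLOW.

    ```python
    import numpy as np

    months = 120
    precip = 50 + 30 * np.sin(2 * np.pi * np.arange(months) / 12)  # mm/month

    def land_surface(precip):
        """Toy land surface model: half of rainfall above 40 mm becomes recharge."""
        return np.maximum(precip - 40.0, 0.0) * 0.5

    def groundwater(recharge, k=0.1, h0=0.0):
        """Linear-reservoir stand-in for the groundwater model, forced offline."""
        h, heads = h0, []
        for r in recharge:
            h += r / 100.0 - k * h            # recharge in, drainage out
            heads.append(h)
        return np.array(heads)

    recharge = land_surface(precip)           # step 1: run land surface model
    heads = groundwater(recharge)             # step 2: force groundwater model
    print(np.round(heads[-12:], 3))           # heads never feed back to step 1
    ```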

  9. A realistic renormalizable supersymmetric E₆ model

    SciTech Connect

    Bajc, Borut; Susič, Vasja

    2014-01-01

    A complete realistic model based on the supersymmetric version of E₆ is presented. It consists of three copies of matter 27, and a Higgs sector made of 2×(27+27⁻)+351′+351′⁻ representations. An analytic solution to the equations of motion is found which spontaneously breaks the gauge group into the Standard Model. The light fermion mass matrices are written down explicitly as non-linear functions of three Yukawa matrices. This contribution is based on Ref. [1].

  10. Large-scale 3D modeling of projectile impact damage in brittle plates

    NASA Astrophysics Data System (ADS)

    Seagraves, A.; Radovitzky, R.

    2015-10-01

    The damage and failure of brittle plates subjected to projectile impact is investigated through large-scale three-dimensional simulation using the DG/CZM approach introduced by Radovitzky et al. [Comput. Methods Appl. Mech. Eng. 2011; 200(1-4), 326-344]. Two standard experimental setups are considered: first, we simulate edge-on impact experiments on Al2O3 tiles by Strassburger and Senf [Technical Report ARL-CR-214, Army Research Laboratory, 1995]. Qualitative and quantitative validation of the simulation results is pursued by direct comparison of simulations with experiments at different loading rates and good agreement is obtained. In the second example considered, we investigate the fracture patterns in normal impact of spheres on thin, unconfined ceramic plates over a wide range of loading rates. For both the edge-on and normal impact configurations, the full field description provided by the simulations is used to interpret the mechanisms underlying the crack propagation patterns and their strong dependence on loading rate.

  11. Similarity-based modeling in large-scale prediction of drug-drug interactions

    PubMed Central

    Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P

    2015-01-01

    Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients' quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5-7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented. PMID:25122524
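
    The core similarity idea can be sketched briefly: a candidate pair is scored by how similar one drug is to drugs already known to interact with the other. The example below uses Tanimoto similarity on toy fingerprint sets; the fingerprints and known-DDI list are invented, and the published protocol combines several similarity sources rather than this single measure.

    ```python
    def tanimoto(fp1, fp2):
        """Tanimoto similarity of two fingerprint bit sets."""
        return len(fp1 & fp2) / len(fp1 | fp2)

    FPS = {"drugA": {1, 3, 5, 8}, "drugB": {2, 3, 5}, "drugC": {1, 3, 5, 9}}
    KNOWN_DDIS = {("drugA", "drugB")}          # toy reference standard

    def ddi_score(a, b):
        """Score pair (a, b) via a's similarity to known partners of b."""
        sims = [tanimoto(FPS[a], FPS[x]) for x, y in KNOWN_DDIS if y == b] + \
               [tanimoto(FPS[a], FPS[y]) for x, y in KNOWN_DDIS if x == b]
        return max(sims, default=0.0)

    print(ddi_score("drugC", "drugB"))  # drugC resembles drugA, a known partner
    ```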

  12. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; ...

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  13. Linking electronic medical records to large-scale simulation models: can we put rapid learning on turbo?

    PubMed

    Eddy, David M

    2007-01-01

    One method for rapid learning is to use data from electronic medical records (EMRs) to help build and validate large-scale, physiology-based simulation models. These models can then be used to help answer questions that cannot be addressed directly from the EMR data. Their potential uses include analyses of physiological pathways; simulation and design of clinical trials; and analyses of clinical management tools such as guidelines, performance measures, priority setting, and cost-effectiveness. Linking the models to EMR data also facilitates tailoring analyses to specific populations. The models' power and accuracy can be improved by linkage to comprehensive, person-specific, longitudinal data from EMRs.

  14. A versatile platform for multilevel modeling of physiological systems: template/instance framework for large-scale modeling and simulation.

    PubMed

    Asai, Yoshiyuki; Abe, Takeshi; Oka, Hideki; Okita, Masao; Okuyama, Tomohiro; Hagihara, Ken-Ichi; Ghosh, Samik; Matsuoka, Yukiko; Kurachi, Yoshihisa; Kitano, Hiroaki

    2013-01-01

    Building multilevel models of physiological systems is a significant and effective method for integrating a huge amount of bio-physiological data and knowledge obtained by earlier experiments and simulations. Since such models tend to be large in size and complicated in structure, appropriate software frameworks for supporting modeling activities are required. A software platform, PhysioDesigner, has been developed, which supports the process of creating multilevel models. Models developed on PhysioDesigner are established in an XML format called PHML. Every physiological entity in a model is represented as a module, and hence a model constitutes an aggregation of modules. When the number of entities of which the model is composed is large, it is difficult to manage the entities manually, and some semiautomatic assistive functions are necessary. In this article, which focuses particularly on recently developed features of the platform for building large-scale models utilizing a template/instance framework and morphological information, the PhysioDesigner platform is introduced.

  15. UAS in the NAS Project: Large-Scale Communication Architecture Simulations with NASA GRC Gen5 Radio Model

    NASA Technical Reports Server (NTRS)

    Kubat, Gregory

    2016-01-01

    This report provides a description and performance characterization of the large-scale, Relay-architecture UAS communications simulation capability developed for the NASA GRC UAS in the NAS Project. The system uses a validated model of the GRC Gen5 CNPC Flight-Test Radio. Contained in the report are a description of the simulation system and its model components, recent changes made to the system to improve performance, descriptions and objectives of sample simulations used for test and verification, and a sampling of results and performance data, with observations.

  16. Physical characteristics of the Gulf Stream as an indicator of the quality of large-scale circulation modeling

    NASA Astrophysics Data System (ADS)

    Sarkisyan, A. S.; Nikitin, O. P.; Lebedev, K. V.

    2016-12-01

    The general idea of this work is to show that the quality of modeled boundary currents (compared with the results of observations) can serve as an indicator of the correctness of the modeling of the entire large-scale ocean circulation. The results of calculations of the mean surface currents in the Gulf Stream area based on direct measurements from drifters are presented together with the results of numerical modeling of the variability of the Gulf Stream transport at 33°N over the period 2005-2014 based on data from Argo profiling buoys.

  17. Modeling and Analysis of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.

    2015-01-01

    An accidental fire inside a spacecraft is an unlikely, but very real emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport are unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard spacecraft. This one-dimensional model simplifies the heat and mass transfer as well as the toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).

  18. Can key vegetation parameters be retrieved at the large-scale using LAI satellite products and a generic modelling approach ?

    NASA Astrophysics Data System (ADS)

    Dewaele, Helene; Calvet, Jean-Christophe; Carrer, Dominique; Laanaia, Nabil

    2016-04-01

    In the context of climate change, the need to assess and predict the impact of droughts on vegetation and water resources increases. Generic approaches permitting the modelling of continental surfaces at the large scale have progressed in recent decades towards land surface models able to couple the cycles of water, energy and carbon. A major source of uncertainty in these generic models is the maximum available water content of the soil (MaxAWC) usable by plants, which is constrained by the rooting depth parameter and is unobservable at the large scale. In this study, vegetation products derived from SPOT/VEGETATION satellite data available since 1999 are used to optimize the model rooting depth over rainfed croplands and permanent grasslands at 1 km x 1 km resolution. The inter-annual variability of the Leaf Area Index (LAI) is simulated over France using the Interactions between Soil, Biosphere and Atmosphere, CO2-reactive (ISBA-A-gs) generic land surface model and a two-layer force-restore (FR-2L) soil profile scheme. The leaf nitrogen concentration directly impacts the modelled value of the maximum annual LAI. In a first step, this parameter is estimated for the last 15 years by using an iterative procedure that matches the maximum values of LAI modelled by ISBA-A-gs to the highest satellite-derived LAI values. The Root Mean Square Error (RMSE) is used as the cost function to be minimized. In a second step, the model rooting depth is optimized in order to reproduce the inter-annual variability resulting from the drought impact on the vegetation. The evaluation of the retrieved soil rooting depth is achieved using the French agricultural statistics of Agreste. Retrieved leaf nitrogen concentrations are compared with values from previous studies. The preliminary results show a good potential of this approach for estimating these two vegetation parameters (leaf nitrogen concentration, MaxAWC) at the large scale over grassland areas. Besides, a marked impact of the
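
    The second calibration step lends itself to a compact illustration: pick the MaxAWC value minimizing the RMSE between modelled and satellite-derived annual-maximum LAI. The toy water-limitation "model" and synthetic series below are assumptions, not ISBA-A-gs.

    ```python
    import numpy as np

    years = np.arange(2000, 2015)
    rain = 400 + 150 * np.sin(years)             # synthetic annual rainfall (mm)
    lai_obs = np.minimum(rain / 200.0, 2.5)      # synthetic satellite LAI maxima

    def lai_model(max_awc):
        """Toy water limitation: LAI capped by available water (not ISBA-A-gs)."""
        return np.minimum(rain / 200.0, max_awc / 50.0)

    candidates = np.linspace(50.0, 250.0, 41)    # candidate MaxAWC values (mm)
    rmse = [np.sqrt(np.mean((lai_model(c) - lai_obs) ** 2)) for c in candidates]
    print("best MaxAWC:", candidates[int(np.argmin(rmse))], "mm")
    ```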

  19. Representation of drought propagation in large-scale models: a test on global scale and catchment scale

    NASA Astrophysics Data System (ADS)

    van Huijgevoort, Marjolein; van Loon, Anne; van Lanen, Henny

    2013-04-01

    Drought development has increasingly been studied using large-scale models, although the suitability of these models for analysing hydrological drought is still unclear. Drought events propagate through the terrestrial hydrological cycle from meteorological drought to hydrological drought. We investigated to what extent large-scale models can reproduce this propagation. An ensemble of ten large-scale models, run within the WATCH project, and their forcing data (WATCH forcing data) were used to identify droughts using a threshold level method. Propagation features (pooling, attenuation, lag, lengthening) were assessed on a global scale and, in more detail, for a selection of five case study areas in Europe. On a global scale, propagation features were reproduced by the multi-model ensemble, resulting in longer and fewer drought events in runoff than in precipitation. Spatial patterns of extreme drought events (e.g. the 1976 drought event in Europe) derived from monthly runoff data resembled more the spatial patterns derived from 3-monthly precipitation data than patterns derived from monthly precipitation data. There were differences between the individual models; some models showed a faster response in runoff than others. In general, modelled runoff showed a too fast response to rainfall, which led to deviations from historical drought events reported for slowly responding systems. Also in the selected case study areas, drought events became fewer and longer when moving through the hydrological cycle. For drought events moving from precipitation via soil moisture to subsurface runoff, the number of droughts decreased from 3-5 per year to 0.5-1.5 per year and the average duration increased from around 15 days to 50-120 days. Fast and slowly responding systems, however, did not show much differentiation. Also in the selected case study areas the simulated runoff reacted too fast to precipitation, especially in catchments with a cold climate, a semi-arid climate, or large
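
    The threshold level method mentioned above is straightforward to sketch: days below a (here fixed, percentile-based) threshold form deficits, and consecutive deficit days are pooled into drought events. The daily series below is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    runoff = rng.gamma(2.0, 1.0, 365)            # synthetic daily runoff
    threshold = np.percentile(runoff, 20)        # Q80-style fixed threshold

    events, start = [], None
    for day, q in enumerate(runoff):
        if q < threshold and start is None:
            start = day                          # drought onset
        elif q >= threshold and start is not None:
            events.append((start, day - start))  # (onset, duration in days)
            start = None
    if start is not None:
        events.append((start, len(runoff) - start))

    print(len(events), "events; mean duration",
          round(sum(d for _, d in events) / len(events), 1), "days")
    ```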

  20. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L. |; Rickert, M. |

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
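
    A single-bit, vehicle-based simulation of the kind reviewed here is typically a cellular automaton. The sketch below is a minimal single-lane model in the style of Nagel-Schreckenberg (accelerate, brake to the gap, randomize, move); road length, density and the slowdown probability are toy assumptions.

    ```python
    import random

    L, N, VMAX, P_SLOW, STEPS = 100, 20, 5, 0.3, 50
    cells = random.sample(range(L), N)           # car positions on a ring road
    speed = {x: 0 for x in cells}

    for _ in range(STEPS):
        order = sorted(speed)
        gaps = {x: (order[(i + 1) % N] - x - 1) % L for i, x in enumerate(order)}
        new = {}
        for x in order:
            v = min(speed[x] + 1, VMAX, gaps[x])      # accelerate, brake to gap
            if v > 0 and random.random() < P_SLOW:    # random slowdown
                v -= 1
            new[(x + v) % L] = v                      # move (parallel update)
        speed = new

    print(sum(speed.values()) / N, "mean speed after", STEPS, "steps")
    ```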

  1. Improved Large-Scale Inundation Modelling by 1D-2D Coupling and Consideration of Hydrologic and Hydrodynamic Processes - a Case Study in the Amazon

    NASA Astrophysics Data System (ADS)

    Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.

    2015-12-01

    Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the world's population and the associated implications, sound knowledge of flood hazard and the related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation due to the lack of either detailed channel and floodplain geometry or hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of errors and improvements, and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model, which has its own simpler 1D routing scheme (DynRout) that has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) using a FM 2D flexible mesh forced with PCR output and (2) as in (1) but discriminating between 1D channels and 2D floodplains, and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory. The present research constitutes a first step towards a globally applicable approach to fully couple

  2. Growth Mixture Modeling: Application to Reading Achievement Data from a Large-Scale Assessment

    ERIC Educational Resources Information Center

    Bilir, Mustafa Kuzey; Binici, Salih; Kamata, Akihito

    2008-01-01

    The popularity of growth modeling has increased in psychological and cognitive development research as a means to investigate patterns of changes and differences between observation units over time. Random coefficient modeling, such as multilevel modeling and latent growth curve modeling as a special application of structural equation modeling are…

  3. Large-scale hydrodynamic modeling of the middle Yangtze River Basin with complex river-lake interactions

    NASA Astrophysics Data System (ADS)

    Lai, Xijun; Jiang, Jiahu; Liang, Qiuhua; Huang, Qun

    2013-06-01

    The flow regime in the middle Yangtze River Basin is experiencing rapid changes due to intensive human activities and ongoing climate change. The middle reach of the Yangtze River and the associated water system are extremely difficult to model reliably due to highly complex interactions between the main stream and many tributaries and lakes. This paper presents a new Coupled Hydrodynamic Analysis Model (CHAM) designed for simulating the large-scale water system in the middle Yangtze River Basin, featuring complex river-lake interactions. CHAM dynamically couples a one-dimensional (1-D) unsteady flow model and a two-dimensional (2-D) hydrodynamic model using a new coupling algorithm that is particularly suitable for large-scale water systems. Numerical simulations are carried out to reproduce the flow regime in the region in 1998, when a severe flood event occurred, and in 2006, an extremely dry year. The model is able to satisfactorily reproduce the major physical processes, featuring seasonal wetting and drying controlled by strong river-lake interactions. This indicates that the present model provides a promising tool for predicting complex flow regimes with marked seasonal changes and strong river-lake interactions.

  4. Mining and state-space modeling and verification of sub-networks from large-scale biomolecular networks

    PubMed Central

    Hu, Xiaohua; Wu, Fang-Xiang

    2007-01-01

    Background: Biomolecular networks dynamically respond to stimuli and implement cellular function. Understanding these dynamic changes is the key challenge for cell biologists. As biomolecular networks grow in size and complexity, the model of a biomolecular network must become more rigorous to keep track of all the components and their interactions. In general this presents the need for computer simulation to manipulate and understand the biomolecular network model. Results: In this paper, we present a novel method to model the regulatory system which executes a cellular function and can be represented as a biomolecular network. Our method consists of two steps. First, a novel scale-free network clustering approach is applied to the large-scale biomolecular network to obtain various sub-networks. Second, a state-space model is generated for the sub-networks and simulated to predict their behavior in the cellular context. The modeling results represent hypotheses that are tested against high-throughput data sets (microarrays and/or genetic screens) for both the natural system and perturbations. Notably, the dynamic modeling component of this method depends on the automated network structure generation of the first component and the sub-network clustering, which are both essential to make the solution tractable. Conclusion: Experimental results on time series gene expression data for the human cell cycle indicate our approach is promising for sub-network mining and simulation from large-scale biomolecular networks. PMID:17764552
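
    The second step can be illustrated with a minimal linear state-space model x(t+1) = A x(t) + w(t), y(t) = C x(t) + v(t) simulated for a small sub-network; the matrices and noise levels below are invented for illustration and are not fitted to any data set.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[0.9, 0.1], [-0.2, 0.8]])             # internal-state transition
    C = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])  # 3 genes read 2 states

    x = np.array([1.0, 0.0])
    expression = []
    for _ in range(20):
        x = A @ x + 0.01 * rng.standard_normal(2)       # state update with noise
        expression.append(C @ x + 0.05 * rng.standard_normal(3))

    print(np.round(expression[-1], 3))  # predicted expression at final time point
    ```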

  5. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    NASA Astrophysics Data System (ADS)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are
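
    Composite analysis, one of the two identification methods named above, reduces to averaging the circulation field over the days on which the local temperature exceeds a high percentile. The sketch below does this on synthetic stand-in fields.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    days, ny, nx = 1000, 10, 12
    z500 = rng.standard_normal((days, ny, nx))       # geopotential height anomalies
    t_local = 0.8 * z500[:, 5, 6] + 0.6 * rng.standard_normal(days)

    hot = t_local > np.percentile(t_local, 95)       # extreme-day mask
    composite = z500[hot].mean(axis=0)               # mean pattern on hot days
    print(round(composite[5, 6], 3))                 # ridge over the target point
    ```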

  6. Nanostructure modeling in oxide ceramics using large scale parallel molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Campbell, Timothy J.

    1998-12-01

    The purpose of this dissertation is to investigate the properties and processes in nanostructured oxide ceramics using molecular-dynamics (MD) simulations. These simulations are based on realistic interatomic potentials and require scalable and portable multiresolution algorithms implemented on parallel computers. The dynamics of oxidation of aluminum nanoclusters is studied with an MD scheme that can simultaneously treat metallic and oxide systems. Dynamic charge transfer between anions and cations, which gives rise to a compute-intensive Coulomb interaction, is treated by the O(N) Fast Multipole Method. Structural and dynamical correlations and local stresses reveal significant charge transfer and stress variations which cause rapid diffusion of Al and O on the nanocluster surface. At a constant temperature, the formation of an amorphous surface-oxide layer is observed during the first 100 picoseconds. A subsequent sharp decrease in O diffusion normal to the cluster surface arrests the growth of the oxide layer at a saturation thickness of 4 nanometers; this is in excellent agreement with experiments. Analyses of the oxide scale reveal significant charge transfer and variations in local structure. When the heat is not extracted from the cluster, the oxidizing reaction becomes explosive. Sintering, structural correlations, vibrational properties, and mechanical behavior of nanophase silica glasses are also studied using the MD approach based on an empirical interatomic potential that consists of both two- and three-body interactions. Nanophase silica glasses with densities ranging from 76 to 93% of the bulk glass density are obtained using an isothermal-isobaric MD approach. During the sintering process, the pore sizes and distribution change without any discernible change in the pore morphology. The height and position of the first sharp diffraction peak (the signature of intermediate-range order) in the neutron static structure factor show significant differences

  7. The HyperHydro (H2) experiment for comparing different large-scale models at various resolutions

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin

    2016-04-01

    HyperHydro (http://www.hyperhydro.org/) is an open network of scientists with the objective of simulating large-scale terrestrial hydrology and water resources at hyper-resolution (Wood et al., 2011, DOI: 10.1029/2010WR010090; Bierkens et al., 2014, DOI: 10.1002/hyp.10391). Within the HyperHydro network, a modeling workshop was held at Utrecht University, the Netherlands, on 9-12 June 2015. The goal of the workshop was to start the HyperHydro (H^2) experiment for comparing different large-scale hydrological models at different spatial resolutions, from 50 km to 1 km. Model simulation results (e.g. discharge, soil moisture, evaporation, snow, groundwater depth, etc.) are evaluated against available observation data and compared across various models and resolutions. At EGU 2016, we would like to present the latest results of this inter-comparison experiment. We also invite participation from the hydrology community in this experiment. Up to now, the models compared are CLM, LISFLOOD, mHM, ParFlow-CLM, PCR-GLOBWB, TerrSysMP, VIC, WaterGAP, and wflow. As initial test-beds, we mainly focus on two river basins: San Joaquin/California (82000 km^2) and Rhine (185000 km^2). Moreover, comparison over a larger region, such as the CONUS (contiguous US) domain, is also explored and presented.

  8. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    SciTech Connect

    Mendon, Vrushali V.; Taylor, Zachary T.

    2014-09-10

    Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges in developing residential building models to characterize new residential building stock is allowing for flexibility to address variability in house features such as geometry, configuration and HVAC systems. Researchers solved this problem in a novel way by creating a simulation structure capable of generating fully functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent the majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.
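    A minimal sketch of the combinatorial template expansion described above; the 2 x 4 x 4 = 32 count matches the abstract, but the specific category labels are assumptions for illustration:

        import itertools

        # Illustrative category labels (assumptions, not the report's exact lists):
        building_types = ["single-family", "multifamily"]
        foundations = ["slab", "crawlspace", "heated-basement", "unheated-basement"]
        heating = ["gas-furnace", "electric-resistance", "heat-pump", "oil-furnace"]

        # 2 x 4 x 4 = 32 prototype models, matching the count described above
        for b, f, h in itertools.product(building_types, foundations, heating):
            idf_name = f"prototype_{b}_{f}_{h}.idf"
            # a real system would expand an EnergyPlus template into this IDF,
            # then queue it as one run in a large batch simulation
            print(idf_name)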

  9. The HyperHydro (H^2) experiment for comparing different large-scale models at various resolutions

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, E.; Bosmans, J.; Chaney, N.; Clark, M. P.; Condon, L. E.; David, C. H.; De Roo, A. P. J.; Doll, P. M.; Drost, N.; Eisner, S.; Famiglietti, J. S.; Floerke, M.; Gilbert, J. M.; Gochis, D. J.; Hut, R.; Keune, J.; Kollet, S. J.; Maxwell, R. M.; Pan, M.; Rakovec, O.; Reager, J. T., II; Samaniego, L. E.; Mueller Schmied, H.; Trautmann, T.; Van Beek, L. P.; Van De Giesen, N.; Wood, E. F.; Bierkens, M. F.; Kumar, R.

    2015-12-01

    HyperHydro (http://www.hyperhydro.org/) is an open network of scientists with the objective of simulating large-scale terrestrial hydrology and water resources at hyper-resolution (Bierkens et al., 2014, DOI: 10.1002/hyp.10391). Within the HyperHydro network, a modeling workshop was held at Utrecht University, the Netherlands, on 9-12 June 2015. The goal of the workshop was to start the HyperHydro (H^2) experiment for comparing different large-scale hydrological models at different spatial resolutions, from 50 km to 1 km. Model simulation results (e.g. discharge, soil moisture, evaporation, snow, groundwater depth, etc.) are evaluated against available observation data and compared across various models and resolutions. At AGU 2015, we would like to present the results of this inter-comparison experiment. During the workshop in Utrecht, the models compared were CLM, LISFLOOD, mHM, ParFlow-CLM, PCR-GLOBWB, TerrSysMP, VIC and WaterGAP. We invite participation from the hydrology community in this experiment. As test-beds, we focus on two river basins: San Joaquin (~82000 km2) and Rhine (~185000 km2). In the near future, we will extend this experiment to the CONUS and CORDEX-EU domains.

  10. Modeling Cultural/ecological Impacts of Large-scale Mining and Industrial Development in the Yukon-Kuskokwim Basin

    NASA Astrophysics Data System (ADS)

    Bunn, J. T.; Sparck, A.

    2004-12-01

    We are developing a methodology for predicting the cultural impact of large-scale mineral resource development in the Yukon-Kuskokwim (Y-K) basin. The Yup'ik/Cup'ik/Dene people of the Y-K basin currently practice a mixed-market subsistence economy, in which native subsistence traditions and social structures are largely intact. Large-scale mining and industrial-infrastructure developments are being planned that will constitute a significant expansion of the market economy and will also significantly affect the physical environment that is central to the subsistence way of life. To explore the impact that these changes are likely to have on native culture we use a systems modeling approach, considering "culture" to be a system that encompasses the physical, biological and verbal realms. We draw upon Alaska Department of Fish and Game technical reports, anthropological studies, Yup'ik cultural visioning exercises, and personal experience to identify the components of our cultural model. We use structural equation modeling to determine causal relationships between system components. The resulting model is used to predict changes that are likely to occur as a result of planned developments.

  11. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
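    A rough sketch of how a genus-versus-threshold curve of the kind described above can be computed for a smoothed random-phase field; it assumes the convention that the genus equals minus the Euler characteristic of the excursion set (conventions vary), and the grid size and smoothing length are arbitrary choices:

        import numpy as np
        from scipy.ndimage import gaussian_filter
        from skimage.measure import euler_number

        rng = np.random.default_rng(0)
        field = gaussian_filter(rng.normal(size=(64, 64, 64)), sigma=4)  # random-phase field
        field = (field - field.mean()) / field.std()

        for nu in np.linspace(-2.0, 2.0, 9):                   # threshold in units of sigma
            excursion = field > nu                             # region above the density threshold
            genus = -euler_number(excursion, connectivity=3)   # holes minus isolated parts
            print(f"nu = {nu:+.1f}   genus = {genus}")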

  12. St. Louis Initiative for Integrated Care Excellence (SLI(2)CE): integrated-collaborative care on a large scale model.

    PubMed

    Brawer, Peter A; Martielli, Richard; Pye, Patrice L; Manwaring, Jamie; Tierney, Anna

    2010-06-01

    The primary care health setting is in crisis. Increasing demand for services, with dwindling numbers of providers, has resulted in decreased access and decreased satisfaction for both patients and providers. Moreover, the overwhelming majority of primary care visits are for behavioral and mental health concerns rather than issues of a purely medical etiology. Integrated-collaborative models of health care delivery offer possible solutions to this crisis. The purpose of this article is to review the data available after 2 years of the St. Louis Initiative for Integrated Care Excellence, an example of integrated-collaborative care on a large-scale model within a regional Veterans Affairs Health Care System. There is clear evidence that the SLI(2)CE initiative rather dramatically increased access to health care and modified primary care practitioners' willingness to address mental health issues within the primary care setting. In addition, the data suggest strong fidelity to a model of integrated-collaborative care which has been successful in the past. Integrated-collaborative care offers unique advantages over the traditional view and practice of medical care. Through careful implementation and practice, success is possible on a large scale.

  13. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; Kamae, Y.; Lohmann, G.; Lunt, D. J.; Abe-Ouchi, A.; Pickering, S. J.; Ramstein, G.; Rosenbloom, N. A.; Salzmann, U.; Sohl, L.; Stepanek, C.; Ueda, H.; Yan, Q.; Zhang, Z.

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and model-data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data-model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  14. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
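    As a rough illustration of the Volterra functional power series used above, the sketch below evaluates a discrete second-order Volterra expansion on a spike-train input; the kernels and all parameter values are invented for illustration and are not those identified in the paper:

        import numpy as np

        def volterra_2nd_order(x, k0, k1, k2):
            # discrete Volterra series up to 2nd order:
            # y[t] = k0 + sum_i k1[i] x[t-i] + sum_{i,j} k2[i,j] x[t-i] x[t-j]
            M = len(k1)
            y = np.full(len(x), k0, dtype=float)
            xp = np.concatenate([np.zeros(M - 1), x])   # zero-pad the past
            for t in range(len(x)):
                window = xp[t:t + M][::-1]              # x[t], x[t-1], ..., x[t-M+1]
                y[t] += k1 @ window + window @ k2 @ window
            return y

        rng = np.random.default_rng(0)
        M = 8
        x = (rng.random(200) < 0.1).astype(float)       # sparse spike-train input
        k1 = np.exp(-np.arange(M) / 3.0)                # decaying 1st-order kernel
        k2 = 0.01 * rng.normal(size=(M, M))             # small 2nd-order kernel
        print(volterra_2nd_order(x, 0.0, k1, k2)[:10])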

  15. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
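    A toy example of the parameter sensitivity analysis technique, applied to a simple logistic population model (not the report's thermoregulatory model); the normalized finite-difference sensitivity d(ln x)/d(ln p) is one common formulation:

        import numpy as np

        def population(r, K, x0=10.0, t=np.linspace(0.0, 10.0, 101)):
            # closed-form logistic growth trajectory
            return K / (1.0 + (K / x0 - 1.0) * np.exp(-r * t))

        def relative_sensitivity(name, params, step=0.01):
            # normalized finite-difference sensitivity d(ln x)/d(ln p)
            bumped = dict(params)
            bumped[name] *= 1.0 + step
            base = population(**params)
            return (population(**bumped) - base) / (base * step)

        params = {"r": 0.8, "K": 100.0}
        for name in params:
            print(name, relative_sensitivity(name, params)[-1])  # at final time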

  16. Large-scale in silico modeling of metabolic interactions between cell types in the human brain.

    PubMed

    Lewis, Nathan E; Schramm, Gunnar; Bordbar, Aarash; Schellenberger, Jan; Andersen, Michael P; Cheng, Jeffrey K; Patel, Nilam; Yee, Alex; Lewis, Randall A; Eils, Roland; König, Rainer; Palsson, Bernhard Ø

    2010-12-01

    Metabolic interactions between multiple cell types are difficult to model using existing approaches. Here we present a workflow that integrates gene expression data, proteomics data and literature-based manual curation to model human metabolism within and between different types of cells. Transport reactions are used to account for the transfer of metabolites between models of different cell types via the interstitial fluid. We apply the method to create models of brain energy metabolism that recapitulate metabolic interactions between astrocytes and various neuron types relevant to Alzheimer's disease. Analysis of the models identifies genes and pathways that may explain observed experimental phenomena, including the differential effects of the disease on cell types and regions of the brain. Constraint-based modeling can thus contribute to the study and analysis of multicellular metabolic processes in the human tissue microenvironment and provide detailed mechanistic insight into high-throughput data analysis.

  17. Modelling of a large-scale urban contamination situation and remediation alternatives.

    PubMed

    Thiessen, K M; Arkhipov, A; Batandjieva, B; Charnock, T W; Gaschak, S; Golikov, V; Hwang, W T; Tomás, J; Zlobenko, B

    2009-05-01

    The Urban Remediation Working Group of the International Atomic Energy Agency's EMRAS (Environmental Modelling for Radiation Safety) program was organized to address issues of remediation assessment modelling for urban areas contaminated with dispersed radionuclides. The present paper describes the first of two modelling exercises, which was based on Chernobyl fallout data in the town of Pripyat, Ukraine. Modelling endpoints for the exercise included radionuclide concentrations and external dose rates at specified locations, contributions to the dose rates from individual surfaces and radionuclides, and annual and cumulative external doses to specified reference individuals. Model predictions were performed for a "no action" situation (with no remedial measures) and for selected countermeasures. The exercise provided a valuable opportunity to compare modelling approaches and parameter values, as well as to compare the predicted effectiveness of various countermeasures with respect to short-term and long-term reduction of predicted doses to people.

  18. Development and application of a large scale river system model for National Water Accounting in Australia

    NASA Astrophysics Data System (ADS)

    Dutta, Dushmanta; Vaze, Jai; Kim, Shaun; Hughes, Justin; Yang, Ang; Teng, Jin; Lerat, Julien

    2017-04-01

    Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion and groundwater seepage/recharge, which operate at much finer resolutions. These models are therefore not suitable for producing water accounts, which have become increasingly important for water resources planning and management at regional and national scales. A continental scale river system model called the Australian Water Resource Assessment River System model (AWRA-R) has been developed and implemented for national water accounting in Australia using a node-link architecture. The model includes the major hydrological processes, anthropogenic water utilisation and storage routing that influence streamflow in both regulated and unregulated river systems. Two key components of the model are an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from the river to the floodplain and the associated floodplain fluxes and stores. The results in the Murray-Darling Basin show highly satisfactory performance of the model, with a median daily Nash-Sutcliffe Efficiency (NSE) of 0.64 and a median annual bias of less than 1% for the calibration period (1970-1991), and a median daily NSE of 0.69 and a median annual bias of 12% for the validation period (1992-2014). The results demonstrate that the performance of the model is less satisfactory when key processes such as overbank flow, groundwater seepage and irrigation diversion are switched off. The AWRA-R model, which has been operationalised by the Australian Bureau of Meteorology for continental scale water accounting, has contributed to improvements in the national water account by substantially reducing the unaccounted difference volume (gain/loss).
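    The NSE and bias scores quoted above are standard goodness-of-fit measures; a minimal sketch of how they are computed from paired simulated and observed discharge series (illustrative numbers):

        import numpy as np

        def nse(sim, obs):
            # Nash-Sutcliffe Efficiency: 1 is perfect; 0 is no better than the mean of obs
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def volume_bias(sim, obs):
            # relative bias of total simulated volume over the period
            return (np.sum(sim) - np.sum(obs)) / np.sum(obs)

        obs = [10.0, 12.0, 30.0, 22.0, 15.0]   # observed daily discharge (illustrative)
        sim = [11.0, 13.0, 26.0, 24.0, 14.0]   # simulated daily discharge
        print(nse(sim, obs), volume_bias(sim, obs))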

  19. An Approach to Large Scale Radar-Based Modeling and Simulation

    DTIC Science & Technology

    2010-03-01

    ...useful roles throughout the DoD, including operational analysis, training, and support of acquisition projects. Within the DoD there are many models and...future [3]. Likewise, new acquisition programs often develop new models to explain and support the development of new technology. When an organization...modeling and simulation, acquisition, operational, and research communities, the DoD refers to M&S as "a key enabler of DoD activities" [33].

  20. Open source large-scale high-resolution environmental modelling with GEMS

    NASA Astrophysics Data System (ADS)

    Baarsma, Rein; Alberti, Koko; Marra, Wouter; Karssenberg, Derek

    2016-04-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating the opportunity to run environmental models for every location on Earth. Collecting the necessary data and converting it into a useful format is very demanding, however, not to mention the computational demand of a model itself. We developed GEMS (Global Environmental Modelling System), an online application to run environmental models at various scales directly in your browser and share the results with other researchers. GEMS is open source and uses open-source platforms including Flask, Leaflet, GDAL, MapServer and the PCRaster-Python modelling framework to process spatio-temporal models in real time. With GEMS, users can write, run, and visualize the results of dynamic PCRaster-Python models in a browser. GEMS uses freely available global data to feed the models, and automatically converts the data to the relevant model extent and data format. Currently available data include the SRTM elevation model, a selection of monthly vegetation data from MODIS, land use classifications from GlobCover, historical climate data from WorldClim, HWSD soil information from WorldGrids, population density from SEDAC and near real-time weather forecasts, most with a ~100 m resolution. Furthermore, users can add other or their own datasets using a web coverage service or a custom data provider script. With easy access to a wide range of base datasets and without the data preparation that is usually necessary to run environmental models, building and running a model becomes a matter of hours. Furthermore, it is easy to share the resulting maps, time series data or model scenarios with other researchers through a web mapping service (WMS). GEMS can be used to provide open access to model results. Additionally, environmental models in GEMS can be employed by users with no extensive experience with writing code, which is for example valuable for using models
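    A minimal sketch of a dynamic PCRaster-Python model of the kind GEMS executes, assuming the standard pcraster.framework API and hypothetical input map names; the actual GEMS model interface may differ:

        from pcraster import setclone, readmap, accuflux
        from pcraster.framework import DynamicModel, DynamicFramework

        class RunoffModel(DynamicModel):
            def __init__(self):
                DynamicModel.__init__(self)
                setclone("clone.map")            # raster defining extent and resolution
            def initial(self):
                self.ldd = readmap("ldd.map")    # local drain directions derived from a DEM
            def dynamic(self):
                rain = self.readmap("rain")      # per-time-step rainfall maps
                self.report(accuflux(self.ldd, rain), "runoff")

        DynamicFramework(RunoffModel(), lastTimeStep=24).run()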

  1. Large-Scale Sediment Routing: Development of a One-Dimensional Model Incorporating Sand Storage

    NASA Astrophysics Data System (ADS)

    Wiele, S. M.; Wilcock, P. R.; Grams, P. E.

    2005-12-01

    Routing sediment through long reaches and networks requires a balance between model efficiency, data availability, and accurate representation of sediment flux and storage. The first two often constrain the appropriate model to one dimension, but such models are unable to capture changes in sediment storage in side-channel environments, which are typically driven by two-dimensional transport fields. Side-channel environments are especially important in canyon channels. Routing of sand in canyon channels can be further complicated by transport of sand over a cobble or boulder bed and by remote locations, which can hinder measurement of channel shape. We have produced a one-dimensional model that routes water and sand through the Colorado River below Glen Canyon Dam in Arizona. Our model differs from conventional one-dimensional models in several significant ways: (1) exchange of sand between the main downstream current and eddies, which cannot be directly represented by a one-dimensional model, is included by parameterizing predictions over a wide range of conditions from a multidimensional model; (2) suspended-sand transport over an extremely rough and sparsely sand-covered bed, which is not accurately represented in conventional sand-transport relations or boundary conditions, is calculated in our model with newly developed algorithms (see Grams and others, this meeting); (3) the channel is represented by reach-averaged properties, thereby reducing data requirements and increasing model efficiency; and (4) the model is coupled with an unsteady-flow model, thereby accounting for frequent changes in discharge produced by variations in releases in this power-producing regulated river. Numerical models can contribute to the explanation of observed changes in sand storage, extrapolate field observations to unobserved flows, and evaluate alternative dam-operation strategies for preserving the sand resource. Model applications can address several significant management

  2. Of mice, flies--and men? Comparing fungal infection models for large-scale screening efforts.

    PubMed

    Brunke, Sascha; Quintin, Jessica; Kasper, Lydia; Jacobsen, Ilse D; Richter, Martin E; Hiller, Ekkehard; Schwarzmüller, Tobias; d'Enfert, Christophe; Kuchler, Karl; Rupp, Steffen; Hube, Bernhard; Ferrandon, Dominique

    2015-05-01

    Studying infectious diseases requires suitable hosts for experimental in vivo infections. Recent years have seen the advent of many alternatives to murine infection models. However, the use of non-mammalian models is still controversial because it is often unclear how well findings from these systems predict virulence potential in humans or other mammals. Here, we compare the commonly used models, fruit fly and mouse (representing invertebrate and mammalian hosts), for their similarities and degree of correlation upon infection with a library of mutants of an important fungal pathogen, the yeast Candida glabrata. Using two indices, for fly survival time and for mouse fungal burden in specific organs, we show a good agreement between the models. We provide a suitable predictive model for estimating the virulence potential of C. glabrata mutants in the mouse from fly survival data. As examples, we found cell wall integrity mutants attenuated in flies, and mutants of a MAP kinase pathway had defective virulence in flies and reduced relative pathogen fitness in mice. In addition, mutants with strongly reduced in vitro growth generally, but not always, had reduced virulence in flies. Overall, we demonstrate that surveying Drosophila survival after infection is a suitable model to predict the outcome of murine infections, especially for severely attenuated C. glabrata mutants. Pre-screening of mutants in an invertebrate Drosophila model can, thus, provide a good estimate of the probability of finding a strain with reduced microbial burden in the mouse host.
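    A minimal sketch of the predictive relationship described above, fitting a line from a fly survival-time index to a mouse fungal-burden index; the paired index values are invented for illustration, not the paper's data:

        import numpy as np

        # Hypothetical paired indices for a handful of mutants (illustrative values only)
        fly = np.array([0.2, 0.5, 0.9, 1.1, 1.6, 2.0])     # fly survival-time index
        mouse = np.array([0.3, 0.6, 0.8, 1.2, 1.5, 1.9])   # mouse organ fungal-burden index

        slope, intercept = np.polyfit(fly, mouse, 1)       # ordinary least squares
        r = np.corrcoef(fly, mouse)[0, 1]                  # agreement between the two models
        predicted = slope * 1.4 + intercept                # predict mouse index for a new mutant
        print(f"r = {r:.2f}, predicted mouse index = {predicted:.2f}")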

  3. Exploring large-scale phenomena in composite membranes through an efficient implicit-solvent model

    NASA Astrophysics Data System (ADS)

    Laradji, Mohamed; Kumar, P. B. Sunil; Spangler, Eric J.

    2016-07-01

    Several microscopic and mesoscale models have been introduced in the past to investigate various phenomena in lipid membranes. Most of these models account for the solvent explicitly. Since in a typical molecular dynamics simulation the majority of particles belong to the solvent, much of the computational effort in these simulations is devoted to calculating forces between solvent particles. To overcome this problem, several implicit-solvent mesoscale models for lipid membranes have been proposed during the last few years. In the present article, we review an efficient coarse-grained implicit-solvent model we introduced earlier for studies of lipid membranes. In this model, lipid molecules are coarse-grained into short semi-flexible chains of beads with soft interactions. Through molecular dynamics simulations, the model is used to investigate the thermal, structural and elastic properties of lipid membranes. We also review a few studies, based on this model, of the phase behavior of nanoscale liposomes, cytoskeleton-induced blebbing in lipid membranes, and nanoparticle wrapping and endocytosis by tensionless lipid membranes.

  4. Modeling oxygen isotopes in the Pliocene: Large-scale features over the land and ocean

    NASA Astrophysics Data System (ADS)

    Tindall, Julia C.; Haywood, Alan M.

    2015-09-01

    The first isotope-enabled general circulation model (GCM) simulations of the Pliocene are used to discuss the interpretation of δ18O measurements for a warm climate. The model suggests that spatial patterns of Pliocene ocean surface δ18O (δ18Osw) were similar to those of the preindustrial period; however, Arctic and coastal regions were relatively depleted, while South Atlantic and Mediterranean regions were relatively enriched. Modeled δ18Osw anomalies are closely related to modeled salinity anomalies, which supports using δ18Osw as a paleosalinity proxy. Modeled Pliocene precipitation δ18O (δ18Op) was enriched relative to the preindustrial values (but with depletion of <2‰ over some tropical regions). While usually modest (<4‰), the enrichment can reach 25‰ over ice sheet regions. In the tropics δ18Op anomalies are related to precipitation amount anomalies, although there is usually a spatial offset between the two. This offset suggests that the location of precipitation change is more uncertain than the amplitude when interpreting δ18Op. At high latitudes δ18Op anomalies relate to temperature anomalies; however, the relationship is neither linear nor spatially coincident: a large δ18Op signal does not always translate to a large temperature signal. These results suggest that isotope modeling can lead to enhanced synergy between climate models and climate proxy data. The model can relate proxy data to climate in a physically based way even when the relationship is complex and nonlocal. The δ18O-climate relationships, identified here from a GCM, could not be determined from transfer functions or simple models.

  5. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a smaller computational load, but this negatively affects the accuracy of model results and restricts the physical realization of the problem. It is therefore imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89000 sq. km) and complex in terms of hydrologic and geomorphic conditions. The types and time scales of hydrologic processes that are dominant in different parts of the basin also differ. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradients along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated

  6. Topology of large-scale structure in seeded hot dark matter models

    NASA Technical Reports Server (NTRS)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

    The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.

  7. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Knox, Craig K.

    Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continue to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ~2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed to study the first-ever hydrated Nafion systems that are large enough (~2 million atoms in a ~30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems consisted of six of the most significant and relevant morphological models of Nafion to date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume

  8. Formulation of Subgrid Variability and Boundary-Layer Cloud Cover in Large-Scale Models

    DTIC Science & Technology

    2007-11-02

  9. Study of an engine flow diverter system for a large scale ejector powered aircraft model

    NASA Technical Reports Server (NTRS)

    Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.

    1981-01-01

    Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4 scale fighter model equipped with YJ-79 engines. Model constraints were identified and cost-effective limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.

  10. Segmented linear modeling of CHO fed-batch culture and its application to large scale production.

    PubMed

    Ben Yahia, Bassem; Gourevitch, Boris; Malphettes, Laetitia; Heinzle, Elmar

    2017-04-01

    We describe a systematic approach to modeling CHO metabolism during biopharmaceutical production across a wide range of cell culture conditions. To this end, we applied the metabolic steady state concept. We analyzed and modeled the production rates of metabolites as a function of the specific growth rate. First, the total number of metabolic steady state phases and the location of the breakpoints were determined by recursive partitioning. For this, the smoothed derivative of the metabolic rates with respect to the growth rate was used, followed by hierarchical clustering of the obtained partition. We then applied a piecewise regression to the metabolic rates with the previously determined number of phases. This allowed identifying the growth rates at which the cells underwent a metabolic shift. The resulting model, with piecewise linear relationships between metabolic rates and the growth rate, described cellular metabolism in the fed-batch cultures well. Using the model structure and parameter values from a small-scale (2 L) cell culture training dataset, it was possible to predict metabolic rates of new fed-batch cultures just using the experimental specific growth rates. Such prediction was successful both at the laboratory scale with 2 L bioreactors and at the production scale of 2000 L. This type of modeling provides a flexible framework to set a solid foundation for metabolic flux analysis and mechanistic modeling.
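    A minimal sketch of segmented (piecewise) linear fitting with a breakpoint scan, in the spirit of the approach described above; the synthetic data and breakpoint grid are illustrative assumptions:

        import numpy as np

        def fit_two_phase(mu, rate, bp):
            # continuous two-segment linear fit with a breakpoint at bp
            X = np.column_stack([np.ones_like(mu), mu, np.clip(mu - bp, 0.0, None)])
            beta, *_ = np.linalg.lstsq(X, rate, rcond=None)
            resid = rate - X @ beta
            return beta, float(resid @ resid)

        rng = np.random.default_rng(1)
        mu = np.linspace(0.005, 0.04, 50)   # specific growth rate (1/h)
        rate = np.where(mu < 0.02, 5.0 * mu, 0.1 + 2.0 * (mu - 0.02)) + 0.001 * rng.normal(size=50)

        # scan candidate breakpoints; keep the one minimizing residual sum of squares
        rss, best_bp = min((fit_two_phase(mu, rate, bp)[1], bp) for bp in np.linspace(0.01, 0.03, 21))
        print("estimated metabolic-shift growth rate:", best_bp)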

  11. Uncovering Implicit Assumptions: a Large-Scale Study on Students' Mental Models of Diffusion

    NASA Astrophysics Data System (ADS)

    Stains, Marilyne; Sevian, Hannah

    2015-12-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in the gaseous medium. A large sample of data (N = 423) from students across grade 8 (age 13) through upper-level undergraduate was subjected to a cluster analysis to determine the main mental models present. The cluster analysis resulted in a reduced data set (N = 308), and then, mental models were ascertained from robust clusters. The mental models that emerged from analysis were triangulated through interview data and characterised according to underlying implicit assumptions that guide and constrain thinking about diffusion of a solute in a gaseous medium. Impacts of students' level of preparation in science and relationships of mental models to science disciplines studied by students were examined. Implications are discussed for the value of this approach to identify typical mental models and the sets of implicit assumptions that constrain them.

  12. Implementation of large-scale landscape evolution modelling to real high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Schroeder, S.; Babeyko, A. Y.

    2012-12-01

    We have developed a surface evolution model to be naturally integrated with 3D thermomechanical codes like SLIM-3D to study coupled tectonic-climate interaction. The resolution of the surface evolution model is independent of that of the underlying continuum box. The surface model follows the concept of the cellular automaton implemented on a regular Eulerian mesh. It incorporates an effective filling algorithm that guarantees a flow direction in each cell, D8 search for flow directions, and computation of discharges and bedrock incision. Additionally, the model implements hillslope erosion in the form of non-linear, slope-dependent diffusion. The model was designed to be applied not only to synthetic topographies but also to real Digital Elevation Models (DEM). In the present work we report our experience with applying the model to the 30-meter resolution ASTER GDEM of the Pamir orogen, in particular to the segment of the Panj river. We start with calibration of the model parameters (fluvial incision and hillslope diffusion coefficients) using direct measurements of Panj incision rates and volumes of suspended sediment transport. Since the incision algorithm is independent of hillslope processes, we first adjust the incision parameters. Power-law exponents of the incision equation were evaluated from the profile curvature of the main Pamir rivers. After that, the incision coefficient was adjusted to fit the observed incision rate of 5 mm/y. Once the model results are consistent with the measured data, the calibration of hillslope processes follows. For a given critical slope, the diffusivity can be fitted to match the observed sediment discharge. Applying the surface evolution model to a real DEM reveals specific problems which do not appear when working with synthetic landscapes. One of them is the noise of the satellite-measured topography. In particular, due to the non-vertical observation perspective, the satellite may not be able to detect the bottom of the river channel, especially
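    For reference, fluvial incision in such models is commonly described by a stream-power law E = K A^m S^n; a minimal sketch with illustrative parameter values (not the calibrated Panj values) is:

        def stream_power_incision(A, S, K=1e-5, m=0.5, n=1.0, dt=100.0):
            # detachment-limited bedrock incision E = K * A**m * S**n, integrated over dt years;
            # A is upstream drainage area (m^2) from D8 routing, S the local slope
            return K * A**m * S**n * dt

        # one cell on a trunk stream (all parameter values are illustrative assumptions)
        print(stream_power_incision(A=5.0e8, S=0.02), "m of incision per 100 years")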

  13. Hydrological improvements for nutrient and pollutant emission modeling in large scale catchments

    NASA Astrophysics Data System (ADS)

    Höllering, S.; Ihringer, J.

    2012-04-01

    Estimating emissions and loads of nutrients and pollutants into European water bodies as accurately as possible depends largely on knowledge of the spatially and temporally distributed hydrological runoff patterns. An improved hydrological water balance model for the pollutant emission model MoRE (Modeling of Regionalized Emissions) (IWG, 2011) has been introduced that can form an adequate basis for simulating discharge in a hydrologically differentiated, land-use-based way and subsequently providing the required distributed discharge components. First of all, the hydrological model had to meet requirements in both space and time in order to calculate the water balance with sufficient precision on the catchment scale, spatially distributed in sub-catchments and with a higher temporal resolution. Aiming to reproduce seasonal dynamics and the characteristic hydrological regimes of river catchments, a daily (instead of a yearly) time increment was applied, allowing for a more process-oriented simulation of discharge dynamics and volume, and therefore of the water balance. The enhancement of the hydrological model also became necessary to account for the hydrological functioning of catchments under scenarios of, e.g., a changing climate or alterations of land use. As a deterministic, partly physically based, conceptual hydrological watershed and water balance model, the Precipitation Runoff Modeling System (PRMS) (USGS, 2009) was selected to improve the hydrological input for MoRE. In PRMS, the spatial discretization is implemented with sub-catchments and so-called hydrologic response units (HRUs), which are the hydrotopic, distributed, finite modeling entities, each having a homogeneous runoff reaction to hydro-meteorological events. Spatial structures and heterogeneities in sub-catchments, e.g. urbanity, land use and soil types, were identified to derive hydrological similarities and to classify them into different urban and rural HRUs. In this way the

  14. Large-scale pharmacological profiling of 3D tumor models of cancer cells

    PubMed Central

    Mathews Griner, Lesley A; Zhang, Xiaohu; Guha, Rajarshi; McKnight, Crystal; Goldlust, Ian S; Lal-Nag, Madhu; Wilson, Kelli; Michael, Sam; Titus, Steve; Shinn, Paul; Thomas, Craig J; Ferrer, Marc

    2016-01-01

    The discovery of chemotherapeutic agents for the treatment of cancer commonly uses cell proliferation assays in which cells grow as two-dimensional (2D) monolayers. Compounds identified using 2D monolayer assays often fail to advance during clinical development, most likely because these assays do not reproduce the cellular complexity of tumors and their microenvironment in vivo. The use of three-dimensional (3D) cellular systems has been explored as enabling more predictive in vitro tumor models for drug discovery. To date, small-scale screens have demonstrated that pharmacological responses tend to differ between 2D and 3D cancer cell growth models. However, the limited scope of screens using 3D models has not provided a clear delineation of the cellular pathways and processes that differentially regulate cell survival and death in the different in vitro tumor models. Here we sought to further understand the differences in pharmacological responses between cancer tumor cells grown in different conditions by profiling a large collection of 1912 chemotherapeutic agents. We compared pharmacological responses obtained from cells cultured in traditional 2D monolayer conditions with those responses obtained from cells forming spheres versus cells already in 3D spheres. The target annotation of the compound library screened enabled the identification of those key cellular pathways and processes that, when modulated by drugs, induced cell death in all growth conditions or selectively in the different cell growth models. In addition, we also show that many of the compounds targeting these key cellular functions can be combined to produce synergistic cytotoxic effects, which in many cases differ in the magnitude of their synergism depending on the cellular model and cell type. The results from this work provide a high-throughput screening framework to profile the responses of drugs both as single agents and in pairwise combinations in 3D sphere models of cancer cells.

  15. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse)

    PubMed Central

    Erguler, Kamil; Smith-Unna, Stephanie E.; Waldock, Joanna; Proestos, Yiannis; Christophides, George K.; Lelieveld, Jos; Parham, Paul E.

    2016-01-01

    The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations. PMID:26871447

  17. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys

    PubMed Central

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov–Malyshev–Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683
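    For reference, the classical FMP estimator mentioned above converts track crossings into density as D = (pi/2) x / (S d); a minimal sketch with illustrative numbers:

        import math

        def fmp_density(crossings, transect_km, daily_move_km):
            # Formozov-Malyshev-Pereleshin estimator: D = (pi/2) * x / (S * d),
            # with x track crossings counted along S km of transect and d the
            # mean daily movement distance of the animals (km/day)
            return (math.pi / 2) * crossings / (transect_km * daily_move_km)

        # e.g. 46 crossings on 120 km of transect, animals moving 3.1 km/day (illustrative)
        print(fmp_density(46, 120.0, 3.1), "individuals per km^2")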

  18. High-resolution global topographic index values for use in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Marthews, Toby; Dadson, Simon; Lehner, Bernhard; Abele, Simon; Gedney, Nicola

    2015-04-01

    Modelling land surface water flow is of critical importance for simulating land-surface fluxes, predicting runoff and water table dynamics, and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL, and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels, calculated from hydrologically conditioned HydroSHEDS data using the GA2 algorithm ('GRIDATB 2'). At 15 arc-sec resolution, these layers are four times finer than the resolution of the previously best-available topographic index layers, the Compound Topographic Index of HYDRO1k (CTI). For the largest river catchments on each continent we found that, in comparison with CTI, our revised values were up to 20% lower (e.g., in the Amazon). We found the highest catchment means for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as found from the CTI. For the majority of large catchments, however, the spread of our new GA2 index values is very similar to that of CTI, yet with more spatial variability apparent at fine scales. We believe these new index layers represent greatly improved global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.
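    For reference, the topographic index is defined as ln(a / tan(beta)), with a the specific upslope contributing area and beta the local slope; a minimal sketch (the flat-area slope floor is an implementation assumption):

        import numpy as np

        def topographic_index(specific_area, slope_rad, min_slope=1e-4):
            # TOPMODEL topographic index ln(a / tan(beta)): a is the upslope
            # contributing area per unit contour length, beta the local slope
            tanb = np.maximum(np.tan(slope_rad), min_slope)   # avoid division by zero on flats
            return np.log(specific_area / tanb)

        print(topographic_index(specific_area=500.0, slope_rad=np.deg2rad(3.0)))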

  19. The flow structure of pyroclastic density currents: evidence from particle models and large-scale experiments

    NASA Astrophysics Data System (ADS)

    Dellino, Pierfrancesco; Büttner, Ralf; Dioguardi, Fabio; Doronzo, Domenico Maria; La Volpe, Luigi; Mele, Daniela; Sonder, Ingo; Sulpizio, Roberto; Zimanowski, Bernd

    2010-05-01

    Pyroclastic flows are ground-hugging, hot, gas-particle flows. They represent the most hazardous events of explosive volcanism, one striking example being the famous AD 79 eruption of Vesuvius that destroyed Pompeii. Much of our knowledge of the mechanics of pyroclastic flows comes from theoretical models and numerical simulations. Valuable data are also stored in the geological record of past eruptions, i.e. the particles contained in pyroclastic deposits, but they are rarely used for quantifying the destructive potential of pyroclastic flows. In this paper, by means of experiments, we validate a model that is based on data from pyroclastic deposits. It allows the reconstruction of the current's fluid-dynamic behaviour. We show that our model results in likely values of dynamic pressure and particle volumetric concentration, and allows quantifying the hazard potential of pyroclastic flows.
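    One of the key hazard metrics mentioned above, dynamic pressure, follows from P = 0.5 rho u^2 with a mixture density set by the particle volumetric concentration; a minimal sketch with illustrative values (not the paper's reconstructed ones):

        def dynamic_pressure(particle_conc, rho_particle=2500.0, rho_gas=0.5, velocity=30.0):
            # dynamic pressure P = 0.5 * rho_mix * u^2 of a gas-particle current;
            # rho_mix from the particle volumetric concentration (illustrative values)
            rho_mix = particle_conc * rho_particle + (1.0 - particle_conc) * rho_gas
            return 0.5 * rho_mix * velocity**2

        print(dynamic_pressure(0.001), "Pa")  # dilute current: roughly 1.3 kPa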

  20. Norway's 2011 Terror Attacks: Alleviating National Trauma With a Large-Scale Proactive Intervention Model.

    PubMed

    Kärki, Freja Ulvestad

    2015-09-01

    After the terror attacks of July 22, 2011, Norwegian health authorities piloted a new model for municipality-based psychosocial follow-up with victims. This column describes the development of a comprehensive follow-up intervention by health authorities and others that has been implemented at the municipality level across Norway. The model's principles emphasize proactivity by service providers; individually tailored help, with each victim being assigned a contact person in the residential municipality; continuity and long-term focus; effective intersectorial collaboration; and standardized screening of symptoms during the first year. Weekend reunions were also organized for the bereaved, and one-day reunions were organized for the survivors and their families at intervals over the first 18 months. Preliminary findings indicate a high level of success in model implementation. However, the overall effect of the interventions will be a subject for future evaluations.

  1. Splitting failure in side walls of a large-scale underground cavern group: a numerical modelling and a field study.

    PubMed

    Wang, Zhishen; Li, Yong; Zhu, Weishen; Xue, Yiguo; Yu, Song

    2016-01-01

    Vertical splitting cracks often appear in the side walls of large-scale underground caverns during excavation owing to the brittle characteristics of the surrounding rock mass, especially under high in situ stress and great overburden depth. This phenomenon greatly affects the integral safety and stability of the underground caverns. In this paper, a transverse isotropic constitutive model and a splitting failure criterion are proposed and implemented in FLAC3D through secondary development to numerically simulate the integral stability of the underground caverns during excavation at the Dagangshan hydropower station in Sichuan province, China. Meanwhile, an in situ monitoring study of the displacement of key points of the underground caverns has also been carried out, and the monitoring results are compared with the numerical results. From the comparative analysis, it can be concluded that the depths of the splitting relaxation area obtained by numerical simulation are largely consistent with the in situ monitoring values, as is the trend of the displacement curves. This shows that the transverse isotropic constitutive model combined with the splitting failure criterion is appropriate for investigating splitting failure in the side walls of large-scale underground caverns, and it provides helpful guidance for predicting the depth of the splitting relaxation area in the surrounding rock mass.

  2. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm

    PubMed Central

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K.

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges of the postgenomic era. The Recurrent Neural Network is one of the most popular, yet simple, approaches to modelling network dynamics from time-series microarray data, and it has been applied successfully to derive small-scale artificial and real-world genetic networks with high accuracy. However, such models have underperformed on large-scale genetic networks. Here, a new methodology is proposed in which a hybrid Cuckoo Search-Flower Pollination Algorithm is combined with a Recurrent Neural Network: Cuckoo Search is used to search for the best combination of regulators, while the Flower Pollination Algorithm optimizes the model parameters of the Recurrent Neural Network formalism. The proposed method is first tested on a benchmark large-scale artificial network with both noiseless and noisy data. The results show that the methodology increases the inference of correct regulations and decreases false regulations to a high degree. Secondly, the methodology is validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. In both cases, however, the method sacrifices computational time due to the hybrid optimization process. PMID:26989410
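
    The following is a minimal sketch of the standard RNN formulation of gene-network dynamics that abstracts like this one typically build on, with plain random search standing in for the hybrid Cuckoo Search-Flower Pollination optimizer; the formulation and all values are assumptions for illustration, not the authors' code:

      import numpy as np

      def rnn_step(x, W, b, lam=1.0, dt=0.1):
          # x(t+1) = x(t) + dt * (sigmoid(W x + b) - lam * x)
          return x + dt * (1.0 / (1.0 + np.exp(-(W @ x + b))) - lam * x)

      def simulate(x0, W, b, steps):
          xs = [x0]
          for _ in range(steps - 1):
              xs.append(rnn_step(xs[-1], W, b))
          return np.asarray(xs)

      def fit(data, n_iter=2000, seed=0):
          # Keep the (W, b) with the lowest squared error against the
          # time-series expression data (stand-in for CS-FPA).
          rng = np.random.default_rng(seed)
          n = data.shape[1]
          best, best_err = None, np.inf
          for _ in range(n_iter):
              W, b = rng.normal(0, 1, (n, n)), rng.normal(0, 1, n)
              err = np.sum((simulate(data[0], W, b, len(data)) - data) ** 2)
              if err < best_err:
                  best, best_err = (W, b), err
          return best, best_err

      # Toy usage: 20 time points of a 4-gene system
      data = np.abs(np.random.default_rng(1).normal(0.5, 0.1, (20, 4)))
      (W, b), err = fit(data, n_iter=200)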

  4. Constructing Model of Relationship among Behaviors and Injuries to Products Based on Large Scale Text Data on Injuries

    NASA Astrophysics Data System (ADS)

    Nomori, Koji; Kitamura, Koji; Motomura, Yoichi; Nishida, Yoshifumi; Yamanaka, Tatsuhiro; Komatsubara, Akinori

    In Japan, childhood injury prevention is an urgent issue. Safety measures based on knowledge extracted from injury data are essential for preventing childhood injuries, and injury prevention through product modification is especially important. Risk assessment is one of the most fundamental methods for designing safe products, but conventional risk assessment has been carried out subjectively because product makers have little data on injuries. This paper deals with evidence-based risk assessment, for which artificial intelligence technologies are needed. It describes a new method for foreseeing the usage of products, the first step of evidence-based risk assessment, and presents a retrieval system for injury data. The system enables a product designer to foresee how children use a product and which types of injuries occur due to the product in the daily environment. The developed system consists of large-scale injury data, text-mining technology, and probabilistic modeling technology. Large-scale text data on childhood injuries were collected from medical institutions through an injury surveillance system. Types of behavior toward a product were derived from the injury text data using text mining. The relationships among products, types of behavior, types of injuries, and characteristics of children were modeled with a Bayesian network. The fundamental functions of the developed system and examples of new findings obtained with it are reported in this paper.
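
    To illustrate the kind of query such a Bayesian network supports, the sketch below forward-samples a toy two-edge network (product -> behavior -> injury) with made-up conditional probability tables; the variables and numbers are hypothetical, not from the surveillance data:

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical CPTs: P(behavior | product) and P(injury | behavior)
      p_behavior_given_product = {"chair": [0.7, 0.3],   # [climb, sit]
                                  "bed":   [0.4, 0.6]}
      p_injury_given_behavior = {"climb": [0.6, 0.4],    # [fall, bruise]
                                 "sit":   [0.1, 0.9]}

      def sample(product, n=10_000):
          # Forward-sample behavior, then injury, conditioned on the product
          behaviors = rng.choice(["climb", "sit"], size=n,
                                 p=p_behavior_given_product[product])
          injuries = np.array([rng.choice(["fall", "bruise"],
                                          p=p_injury_given_behavior[b])
                               for b in behaviors])
          return behaviors, injuries

      _, inj = sample("chair")
      print("P(fall | product=chair) ~", np.mean(inj == "fall"))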

  5. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    SciTech Connect

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-09-15

    The present study was performed to develop a robust gene-based prediction model for early assessment of the potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. A support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained for several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics of the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: we developed a toxicogenomic model to predict the hepatocarcinogenicity of chemicals; the optimized model, consisting of 9 probes, had 99% sensitivity and 97% specificity.
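
    As an illustration of wrapper-type gene selection around an SVM, the sketch below uses scikit-learn's recursive feature elimination on synthetic data; this is a generic stand-in, not the authors' exact wrapper algorithm, and the 9-probe target size is borrowed from the abstract:

      from sklearn.datasets import make_classification
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Stand-in for expression profiles: 60 dose groups x 500 probes
      X, y = make_classification(n_samples=60, n_features=500,
                                 n_informative=9, random_state=0)

      # Recursively eliminate 10% of probes per step, keeping a 9-probe signature
      selector = RFE(SVC(kernel="linear"), n_features_to_select=9, step=0.1)
      selector.fit(X, y)

      scores = cross_val_score(SVC(kernel="linear"),
                               X[:, selector.support_], y, cv=5)
      print("selected probes:", selector.support_.nonzero()[0])
      print("cross-validated accuracy:", scores.mean())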

  6. Segmented linear modeling of CHO fed‐batch culture and its application to large scale production

    PubMed Central

    Ben Yahia, Bassem; Gourevitch, Boris; Malphettes, Laetitia

    2016-01-01

    We describe a systematic approach to model CHO metabolism during biopharmaceutical production across a wide range of cell culture conditions. To this end, we applied the metabolic steady state concept. We analyzed and modeled the production rates of metabolites as a function of the specific growth rate. First, the total number of metabolic steady state phases and the location of the breakpoints were determined by recursive partitioning. For this, the smoothed derivatives of the metabolic rates with respect to the growth rate were used, followed by hierarchical clustering of the obtained partition. We then applied a piecewise regression to the metabolic rates with the previously determined number of phases. This allowed identifying the growth rates at which the cells underwent a metabolic shift. The resulting model, with piecewise linear relationships between metabolic rates and the growth rate, described cellular metabolism in the fed-batch cultures well. Using the model structure and parameter values from a small-scale (2 L) cell culture training dataset, it was possible to predict metabolic rates of new fed-batch cultures from the experimental specific growth rates alone. Such prediction was successful both at the laboratory scale with 2 L bioreactors and at the production scale of 2000 L. This type of modeling provides a flexible framework and sets a solid foundation for metabolic flux analysis and mechanistic modeling. Biotechnol. Bioeng. 2017;114: 785–797. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:27869296
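
    A minimal sketch of the core idea: fitting a continuous two-phase piecewise line to a metabolic rate versus the specific growth rate on synthetic data (breakpoint and slopes assumed). The paper's pipeline additionally detects the number of phases by recursive partitioning:

      import numpy as np
      from scipy.optimize import curve_fit

      def two_phase(mu, mu_b, q0, s1, s2):
          # Continuous piecewise line with a metabolic shift at mu_b
          return np.where(mu < mu_b,
                          q0 + s1 * mu,
                          q0 + s1 * mu_b + s2 * (mu - mu_b))

      mu = np.linspace(0.005, 0.04, 40)                  # 1/h, growth rates
      rng = np.random.default_rng(0)
      q_obs = two_phase(mu, 0.02, 1.0, 50.0, -20.0) + rng.normal(0, 0.05, mu.size)

      popt, _ = curve_fit(two_phase, mu, q_obs, p0=[0.02, 1.0, 10.0, -10.0])
      print(f"estimated metabolic shift at mu = {popt[0]:.4f} 1/h")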

  7. High-resolution global topographic index values for use in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Marthews, T. R.; Dadson, S. J.; Lehner, B.; Abele, S.; Gedney, N.

    2015-01-01

    Modelling land surface water flow is of critical importance for simulating land surface fluxes, predicting runoff and water table dynamics, and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL (TOPography-based hydrological MODEL), and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels, calculated from hydrologically conditioned HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales) data using the GA2 algorithm (GRIDATB 2). At 15 arcsec resolution, these layers are four times finer than the previously best-available topographic index layers, the compound topographic index (CTI) of HYDRO1k. For the largest river catchments on each continent we found that, in comparison with the CTI, our revised values were up to 20% lower, e.g. in the Amazon. The highest catchment means were found for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as obtained from the CTI. For the majority of large catchments, however, the spread of our new GA2 index values is very similar to that of the CTI, yet with more spatial variability apparent at fine scales. We believe these new index layers represent greatly improved global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.
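
    The quantity being mapped is the topographic index ln(a / tan(beta)). A simplified sketch, assuming the upslope contributing area has already been derived from the conditioned DEM (the GA2 algorithm obtains it with multiple-flow-direction routing):

      import numpy as np

      def topographic_index(contrib_area_m2, slope_rad, cell_size_m, eps=1e-6):
          # a = upslope contributing area per unit contour length (m);
          # eps guards against division by zero on flat cells
          a = contrib_area_m2 / cell_size_m
          return np.log(a / np.maximum(np.tan(slope_rad), eps))

      area = np.array([[1e4, 5e4], [2e5, 1e6]])      # m^2, assumed values
      slope = np.deg2rad([[5.0, 2.0], [1.0, 0.5]])   # radians
      print(topographic_index(area, slope, cell_size_m=450.0))  # ~15 arcsec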

  8. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate, as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data, or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system, which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions on a shared computer cluster.
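
    A minimal sketch of the dependency-graph style of workflow the paper builds on, written against plain Luigi rather than SciLuigi's own API; task names and the tuning parameter are hypothetical:

      import luigi

      class Preprocess(luigi.Task):
          def output(self):
              return luigi.LocalTarget("features.csv")
          def run(self):
              with self.output().open("w") as f:
                  f.write("x1,x2,y\n0.1,0.2,1\n")

      class TrainModel(luigi.Task):
          c = luigi.FloatParameter(default=1.0)  # hypothetical tuning parameter
          def requires(self):
              return Preprocess()                # dependency edge in the graph
          def output(self):
              return luigi.LocalTarget(f"model_c{self.c}.txt")
          def run(self):
              data = self.input().open().read()
              with self.output().open("w") as f:
                  f.write(f"trained on {len(data)} bytes with C={self.c}\n")

      if __name__ == "__main__":
          luigi.build([TrainModel(c=10.0)], local_scheduler=True)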

  9. Methods for Modeling and Decomposing Treatment Effect Variation in Large-Scale Randomized Trials

    ERIC Educational Resources Information Center

    Ding, Peng; Feller, Avi; Miratrix, Luke

    2015-01-01

    Recent literature has underscored the critical role of treatment effect variation in estimating and understanding causal effects. This approach, however, is in contrast to much of the foundational research on causal inference. Linear models, for example, classically rely on constant treatment effect assumptions, or treatment effects defined by…

  10. Robust classification of protein variation using structural modelling and large-scale data integration

    PubMed Central

    Baugh, Evan H.; Simmons-Edler, Riley; Müller, Christian L.; Alford, Rebecca F.; Volfovsky, Natalia; Lash, Alex E.; Bonneau, Richard

    2016-01-01

    Existing methods for interpreting protein variation focus on annotating mutation pathogenicity rather than detailed interpretation of variant deleteriousness and frequently use only sequence-based or structure-based information. We present VIPUR, a computational framework that seamlessly integrates sequence analysis and structural modelling (using the Rosetta protein modelling suite) to identify and interpret deleterious protein variants. To train VIPUR, we collected 9477 protein variants with known effects on protein function from multiple organisms and curated structural models for each variant from crystal structures and homology models. VIPUR can be applied to mutations in any organism's proteome with improved generalized accuracy (AUROC .83) and interpretability (AUPR .87) compared to other methods. We demonstrate that VIPUR's predictions of deleteriousness match the biological phenotypes in ClinVar and provide a clear ranking of prediction confidence. We use VIPUR to interpret known mutations associated with inflammation and diabetes, demonstrating the structural diversity of disrupted functional sites and improved interpretation of mutations associated with human diseases. Lastly, we demonstrate VIPUR's ability to highlight candidate variants associated with human diseases by applying VIPUR to de novo variants associated with autism spectrum disorders. PMID:26926108

  11. Breach modelling by overflow with TELEMAC 2D: Comparison with large-scale experiments

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An erosion law has been implemented in TELEMAC 2D to represent the surface erosion process when modelling breach formation in a levee. We focus on homogeneous earth-fill levees to simplify this first implementation. The first part of this study reveals the ability of this method to represent simu

  12. A balanced water layer concept for subglacial hydrology in large scale ice sheet models

    NASA Astrophysics Data System (ADS)

    Goeller, S.; Thoma, M.; Grosfeld, K.; Miller, H.

    2012-12-01

    There is currently no doubt about the existence of a widespread hydrological network under the Antarctic ice sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea-level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain inspired by the Gamburtsev Mountains, Antarctica. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the coupling between water flux and basal sliding, and reveal the predominance of ice loss through the resulting ice streams over the stabilizing influence of less hydrologically active areas. We point out that established balance-flux schemes quantify these effects only partially, as they lack the ability to store subglacial water.

  13. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  14. Uncovering Implicit Assumptions: A Large-Scale Study on Students' Mental Models of Diffusion

    ERIC Educational Resources Information Center

    Stains, Marilyne; Sevian, Hannah

    2015-01-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in…

  15. Advanced kinetic plasma model implementation for new large-scale investigations

    NASA Astrophysics Data System (ADS)

    Reddell, Noah; Shumlak, Uri

    2013-10-01

    A kinetic plasma model for one or more particle species described by the Vlasov equation and coupled to fully dynamic electromagnetic forces is presented. The model evolves a continuous PDF (probability density function) in particle phase space (position-velocity), as opposed to particle-in-cell (PIC) methods, which sample the PDF discretely. A new boundary condition for the truncated velocity-space edge, motivated by physical properties of the PDF tail, is introduced. The hyperbolic model is evolved using the discontinuous Galerkin numerical method, conserving system mass, momentum, and energy - an advantage compared to PIC. Simulations of two- to six-dimensional phase space are computationally expensive. To maximize performance and scaling to large simulations, a new framework, WARPM, has been developed for many-core (e.g. GPU) computing architectures. WARPM supports both multi-fluid and continuum kinetic plasma models as coupled hyperbolic systems with nearest-neighbor predictable communication. Exemplary physics results and computational performance are presented.

  16. Can simple models predict large-scale surface ocean isoprene concentrations?

    NASA Astrophysics Data System (ADS)

    Booge, Dennis; Marandino, Christa A.; Schlundt, Cathleen; Palmer, Paul I.; Schlundt, Michael; Atlas, Elliot L.; Bracher, Astrid; Saltzman, Eric S.; Wallace, Douglas W. R.

    2016-09-01

    We use isoprene and related field measurements from three different ocean data sets together with remotely sensed satellite data to model global marine isoprene emissions. We show that using monthly mean satellite-derived chl a concentrations to parameterize isoprene with a constant chl a normalized isoprene production rate underpredicts the measured oceanic isoprene concentration by a mean factor of 19 ± 12. Improving the model by using phytoplankton functional type dependent production values and by decreasing the bacterial degradation rate of isoprene in the water column results in only a slight underestimation (factor 1.7 ± 1.2). We calculate global isoprene emissions of 0.21 Tg C for 2014 using this improved model, which is twice the value calculated using the original model. Nonetheless, the sea-to-air fluxes have to be at least 1 order of magnitude higher to account for measured atmospheric isoprene mixing ratios. These findings suggest that there is at least one missing oceanic source of isoprene and, possibly, other unknown factors in the ocean or atmosphere influencing the atmospheric values. The discrepancy between calculated fluxes and atmospheric observations must be reconciled in order to fully understand the importance of marine-derived isoprene as a precursor to remote marine boundary layer particle formation.
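
    The improved model's balance can be caricatured as a steady-state box model in which production is offset by bacterial, chemical, and air-sea losses. A sketch with assumed, illustrative rate values, not the study's parameters:

      # Steady-state surface-ocean isoprene box model (illustrative numbers)
      prod = 30.0          # pmol L^-1 d^-1, PFT-dependent production (assumed)
      k_bact = 0.05        # d^-1, bacterial degradation (decreased in the paper)
      k_chem = 0.01        # d^-1, chemical loss (assumed)
      k_airsea = 0.10      # d^-1, piston velocity / mixed-layer depth (assumed)
      c_ss = prod / (k_bact + k_chem + k_airsea)
      print(f"steady-state isoprene ~ {c_ss:.0f} pmol/L")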

  17. HYPERstream: a multi-scale framework for streamflow routing in large-scale hydrological model

    NASA Astrophysics Data System (ADS)

    Piccolroaz, Sebastiano; Di Lazzaro, Michele; Zarlenga, Antonio; Majone, Bruno; Bellin, Alberto; Fiori, Aldo

    2016-05-01

    We present HYPERstream, an innovative streamflow routing scheme based on the width function instantaneous unit hydrograph (WFIUH) theory, which is specifically designed to facilitate coupling with weather forecasting and climate models. The proposed routing scheme preserves geomorphological dispersion of the river network when dealing with horizontal hydrological fluxes, irrespective of the computational grid size inherited from the overlying climate model providing the meteorological forcing. This is achieved by simulating routing within the river network through suitable transfer functions obtained by applying the WFIUH theory to the desired level of detail. The underlying principle is similar to the block-effective dispersion employed in groundwater hydrology, with the transfer functions used to represent the effect on streamflow of morphological heterogeneity at scales smaller than the computational grid. Transfer functions are constructed for each grid cell with respect to the nodes of the network where streamflow is simulated, by taking advantage of the detailed morphological information contained in the digital elevation model (DEM) of the zone of interest. These characteristics make HYPERstream well suited for multi-scale applications, ranging from catchment up to continental scale, and for investigating extreme events (e.g., floods) that require an accurate description of routing through the river network. The routing scheme enjoys parsimony in the adopted parametrization and computational efficiency, leading to a dramatic reduction of the computational effort with respect to full-gridded models at a comparable level of accuracy. HYPERstream is designed with a simple and flexible modular structure that allows for the selection of any rainfall-runoff model to be coupled with the routing scheme and the choice of different hillslope processes to be represented, and it makes the framework particularly suitable to massive parallelization, customization according to
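
    A sketch of the underlying idea: runoff from each grid cell is convolved with a cell-to-node transfer function (a travel-time distribution derived from the width function), which conserves mass by construction. The transfer function and runoff series below are made up for illustration:

      import numpy as np

      def route_cell(runoff, transfer_fn):
          # Linear, mass-conserving convolution of runoff with the unit response
          return np.convolve(runoff, transfer_fn)[: runoff.size]

      tf = np.array([0.10, 0.30, 0.35, 0.15, 0.07, 0.03])           # sums to 1.0
      runoff = np.array([0.0, 5.0, 12.0, 4.0, 1.0, 0.0, 0.0, 0.0])  # m^3/s

      # Streamflow at a network node is the sum of routed contributions over cells
      q_node = route_cell(runoff, tf)
      print(q_node)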

  18. North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends

    DOE PAGES

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat

    2015-05-22

    This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). Methods used to define extreme-event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated.

  20. Comparing Realistic Subthalamic Nucleus Neuron Models

    NASA Astrophysics Data System (ADS)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high-frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman, which exhibits a wide variety of firing patterns: silent, low-spiking, moderate-spiking, and intense-spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above a threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with those from the modified model at different conductance levels, we apply four different (dis)similarity measures. We observe that the Mahalanobis distance, the Victor-Purpura metric, and the interspike-interval distribution are sensitive to the different firing regimes, whereas mutual information appears undiscriminating for these functional changes.

  1. A dynamical basis for the parameterization of organized deep convection in large-scale numerical models

    NASA Technical Reports Server (NTRS)

    Moncrieff, M. W.

    1984-01-01

    A hierarchy of steady, nonlinear, semianalytic models of different types of convection was produced. These provide a theoretical framework for determining cloud outflow fluxes of both dynamic and thermodynamic quantities, which can be used to formulate dynamical transports in parameterization schemes. This was achieved by exploiting certain Lagrangian conservation properties of steady flow, from which an equation for the vertical displacement of particles can be obtained; the outflow entropy, energy, and momentum fluxes and the inflow/outflow mass fluxes can then be determined from solutions to this equation. These fluxes are expressed in terms of grid-scale parameters such as convective available potential energy (CAPE), cloud-layer shear, and horizontal pressure gradients. Five main types of system models are identified, representing archetypes of convection in zero shear, convection in large shear, midlatitude squall lines, tropical squall lines, and cellular convection. The downdraught is an important aspect of the first four of these, and the cloud-scale transport of momentum is very distinctive.

  2. A semiparametric graphical modelling approach for large-scale equity selection

    PubMed Central

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507

  3. Modeling Large Scale Circuits Using Massively Parallel Discrete-Event Simulation

    DTIC Science & Technology

    2013-06-01

    1,966,080 cores of the Sequoia Blue Gene/Q supercomputer system. For the PHOLD benchmark model, we demonstrate the ability to process 33 trillion events in 65 seconds, yielding a peak event rate in excess of 504 billion events/second using 120 racks of Sequoia.

  4. A statistical model for brain networks inferred from large-scale electrophysiological signals.

    PubMed

    Obando, Catalina; De Vico Fallani, Fabrizio

    2017-03-01

    Network science has been extensively developed to characterize the structural properties of complex systems, including brain networks inferred from neuroimaging data. As a result of the inference process, networks estimated from experimentally obtained biological data represent one instance of a larger number of realizations with similar intrinsic topology. A modelling approach is therefore needed to support statistical inference on the bottom-up local connectivity mechanisms influencing the formation of the estimated brain networks. Here, we adopted a statistical model based on exponential random graph models (ERGMs) to reproduce brain networks, or connectomes, estimated by spectral coherence between high-density electroencephalographic (EEG) signals. ERGMs are built from local graph metrics, with parameters weighting each metric's contribution in explaining the observed network. We validated this approach in a dataset of N = 108 healthy subjects during eyes-open (EO) and eyes-closed (EC) resting-state conditions. Results showed that the tendency to form triangles and stars, reflecting clustering and node centrality, explained the global properties of the EEG connectomes better than other combinations of graph metrics. In particular, the synthetic networks generated by this model configuration replicated the characteristic differences found in real brain networks, with EO eliciting significantly higher segregation in the alpha frequency band (8-13 Hz) than EC. Furthermore, the fitted ERGM parameter values provided complementary information showing that clustering connections are significantly more represented from EC to EO in the alpha range, but also in the beta band (14-29 Hz), which is known to play a crucial role in cortical processing of visual input and externally oriented attention. Taken together, these findings support the current view of the functional segregation and integration of the brain in terms of modules and hubs, and provide a
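
    The sufficient statistics such an ERGM weights can be computed directly; the model then assigns probability P(G) proportional to exp(theta . s(G)). The sketch below counts edges, triangles, and 2-stars with networkx on a random graph; it is illustrative, not the authors' estimation code:

      import networkx as nx
      import numpy as np

      def ergm_stats(G):
          # s(G) = (edges, triangles, 2-stars): density, clustering, centrality
          degs = np.array([d for _, d in G.degree()])
          edges = G.number_of_edges()
          triangles = sum(nx.triangles(G).values()) // 3  # counted once per vertex
          two_stars = int((degs * (degs - 1) // 2).sum())
          return edges, triangles, two_stars

      G = nx.erdos_renyi_graph(20, 0.2, seed=1)
      print(ergm_stats(G))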

  5. Model-Constrained Optimization Methods for Reduction of Parameterized Large-Scale Systems

    DTIC Science & Technology

    2007-05-01

    expensive to solve, e.g. for applications such as optimal design or probabilistic analyses. Model order reduction is a powerful tool that permits the

  6. Modeling of hydrodynamics of large scale atmospheric circulating fluidized bed coal combustors

    SciTech Connect

    Leretaille, P.Y.; Werther, J.; Briand, P.; Montat, D.

    1999-07-01

    A model for evaluating the hydrodynamics of gas-solid flow in the riser of a circulating fluidized bed coal boiler is proposed. The 3D fields of the gas and solid velocities and of the solid concentration in the riser are estimated from measured data of the vertical pressure profile. The model includes semi-empirical laws developed on the basis of a set of experimental data from six industrial boilers ranging from 12 MWth to 700 MWth; its relevance for laboratory-scale risers was not tested. The flow of solids near the walls was estimated with special care because of its influence on heat transfer. For the validation of the model, measurements of solid concentration with guarded capacitance probes were performed in the 250 MWe Stein Industrie-Lurgi type CFB boiler in Gardanne, France. Finally, an attempt to predict the vertical pressure profile in the riser from the operating conditions (based on an empirical evaluation of the variation of the downward solids flow from local conditions) is presented and compared to experimental data.
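
    In risers the vertical pressure gradient is dominated by the solids holdup, so a first estimate of the cross-section-averaged solids fraction follows from hydrostatics. A sketch with assumed, illustrative values:

      # Solids fraction from the measured pressure gradient, neglecting gas
      # weight, acceleration, and wall friction terms.
      g = 9.81            # m/s^2
      rho_solid = 2600.0  # kg/m^3, particle density (assumed)
      dp_dz = 800.0       # Pa/m, measured pressure gradient (assumed)
      eps_s = dp_dz / (rho_solid * g)
      print(f"solids volume fraction ~ {eps_s:.3f}")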

  7. Developing a large-scale model to predict the effects of land ...

    EPA Pesticide Factsheets

    The US EPA’s National Rivers and Streams Assessment (NRSA) uses spatially balanced sampling to estimate the proportion of streams within the continental US (CONUS) that fail to support healthy biological communities. However, to manage these systems, we also must understand how human land use alters stream communities from their natural condition and how natural factors, such as climate, interact with these effects. We used random forest modeling and data from 1353 streams that NRSA determined to be in “good” or “poor” biological condition (BC) to predict the probable BC of nearly 5.4 million km of stream (National Hydrography Dataset) within the CONUS. BC was best predicted by 5 natural factors (mean discharge, mean annual air temperature [AT], soil water content, topography, major ecoregion) and 2 riparian factors that are easily altered by humans (% riparian urbanization [%Urb], % riparian forest [%Fst] cover). The model correctly predicted BC for 74% of sites, but predicted poor BC slightly more accurately (76%) than good BC (71%). Initial results showed that probability of good BC declined rapidly with increasing %Urb, but this effect leveled off in streams with >7 %Urb. Likewise, probability of good BC increased in streams with >45 %Fst. This model can be used to generate hypotheses to guide future research and test restoration scenarios. For example, BC had a U-shaped relationship with AT, with poorest BCs predicted between 10-15°C. Plots sugge

  8. User Friendly Open GIS Tool for Large Scale Data Assimilation - a Case Study of Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Gupta, P. K.

    2012-08-01

    Open source software (OSS) development has tremendous advantages over proprietary software. These are primarily fuelled by high-level programming languages (Java, C++, Python, etc.) and open-source geospatial libraries (GDAL/OGR, GEOS, GeoTools, etc.). Quantum GIS (QGIS) is a popular open-source GIS package, which is licensed under the GNU GPL and is written in C++. It allows users to perform specialised tasks by creating plugins in C++ and Python. This research article emphasises exploiting this capability of QGIS to build and implement plugins across multiple platforms using the easy-to-learn Python programming language. In the present study, a tool has been developed to assimilate large spatio-temporal datasets such as national-level gridded rainfall, temperature, topographic (digital elevation model, slope, aspect), landuse/landcover, and multi-layer soil data for input into hydrological models. At present this tool has been developed for the Indian sub-continent. An attempt is also made to use popular scientific and numerical libraries to create custom applications for digital inclusion. In hydrological modelling, calibration and validation are important steps that are carried out repeatedly for the same study region. The developed tool is therefore user-friendly and can be used efficiently for these repetitive processes, reducing the time required for data management and handling. Moreover, it was found that the developed tool can easily assimilate large datasets in an organised manner.

  9. Large-scale application of the flood damage model RAilway Infrastructure Loss (RAIL)

    NASA Astrophysics Data System (ADS)

    Kellermann, Patric; Schönberger, Christine; Thieken, Annegret H.

    2016-11-01

    Experience has shown that river floods can significantly hamper the reliability of railway networks and cause extensive structural damage and disruption. As a result, the national railway operator in Austria had to cope with financial losses of more than EUR 100 million due to flooding in recent years. Comprehensive information on potential flood risk hot spots as well as on expected flood damage in Austria is therefore needed for strategic flood risk management. In view of this, the flood damage model RAIL (RAilway Infrastructure Loss) was applied to estimate (1) the expected structural flood damage and (2) the resulting repair costs of railway infrastructure due to a 30-, 100- and 300-year flood in the Austrian Mur River catchment. The results were then used to calculate the expected annual damage of the railway subnetwork and subsequently analysed in terms of their sensitivity to key model assumptions. Additionally, the impact of risk aversion on the estimates was investigated, and the overall results were briefly discussed against the background of climate change and possibly resulting changes in flood risk. The findings indicate that the RAIL model is capable of supporting decision-making in risk management by providing comprehensive risk information on the catchment level. It is furthermore demonstrated that an increased risk aversion of the railway operator has a marked influence on flood damage estimates for the study area and, hence, should be considered with regard to the development of risk management strategies.
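
    Expected annual damage of the kind reported here is obtained by integrating damage over annual exceedance probability. A sketch using the three return periods from the abstract with hypothetical damage figures:

      import numpy as np

      return_periods = np.array([30.0, 100.0, 300.0])   # years
      damages = np.array([2.0e6, 7.5e6, 1.8e7])         # EUR (hypothetical)
      p = 1.0 / return_periods                          # exceedance probabilities
      order = np.argsort(p)                             # integrate over increasing p
      ead = np.trapz(damages[order], p[order])          # trapezoidal rule
      print(f"expected annual damage ~ EUR {ead:,.0f}")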

  10. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    SciTech Connect

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I.; Winey, J. Michael; Gupta, Yogendra Mohan; Lane, J. Matthew D.; Ditmire, Todd; Quevedo, Hernan J.

    2011-10-01

    Molecular dynamics (MD) simulation is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp-compression experiments using fast-ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  11. Large-scale parallel lattice Boltzmann-cellular automaton model of two-dimensional dendritic growth

    NASA Astrophysics Data System (ADS)

    Jelinek, Bohumir; Eshraghi, Mohsen; Felicelli, Sergio; Peters, John F.

    2014-03-01

    An extremely scalable lattice Boltzmann (LB)-cellular automaton (CA) model for simulations of two-dimensional (2D) dendritic solidification under forced convection is presented. The model incorporates effects of phase change, solute diffusion, melt convection, and heat transport. The LB model represents the diffusion, convection, and heat transfer phenomena. The dendrite growth is driven by a difference between actual and equilibrium liquid composition at the solid-liquid interface. The CA technique is deployed to track the new interface cells. The computer program was parallelized using the Message Passing Interface (MPI) technique. Parallel scaling of the algorithm was studied and major scalability bottlenecks were identified. Efficiency loss attributable to the high memory bandwidth requirement of the algorithm was observed when using multiple cores per processor. Parallel writing of the output variables of interest was implemented in the binary Hierarchical Data Format 5 (HDF5) to improve the output performance and to simplify visualization. Calculations were carried out in single precision arithmetic without significant loss in accuracy, resulting in a 50% reduction of memory and computational time requirements. The presented solidification model shows very good scalability up to centimeter-size domains, including more than ten million dendrites. Catalogue identifier: AEQZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, UK. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 29,767. No. of bytes in distributed program, including test data, etc.: 3,131,367. Distribution format: tar.gz. Programming language: Fortran 90. Computer: Linux PC and clusters. Operating system: Linux. Has the code been vectorized or parallelized?: Yes, the program is parallelized using MPI.

  12. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale simulations

    NASA Astrophysics Data System (ADS)

    Heng, Yi; Hoffmann, Lars; Griessbach, Sabine; Rößler, Thomas; Stein, Olaf

    2016-05-01

    An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., unit simulations for the reconstruction of volcanic emissions and final forward simulations. Both types of transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric InfraRed Sounder (AIRS) satellite observations. A case study of the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final forward simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. Using the critical success index (CSI), the simulation results are evaluated against the AIRS observations. Compared to the results obtained assuming a constant flux of SO2 emissions, our inversion approach leads to an improvement
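
    A minimal sketch of the sequential-importance-resampling step at the core of such an inversion: candidate emission scenarios are reweighted by their fit to the observations and then resampled. The likelihood form and all numbers are assumptions for illustration:

      import numpy as np

      def sir_step(particles, misfit, rng):
          # Weight each candidate by a Gaussian-style likelihood of its misfit,
          # then resample with replacement to concentrate on good candidates.
          w = np.exp(-0.5 * misfit)
          w /= w.sum()
          idx = rng.choice(particles.shape[0], size=particles.shape[0], p=w)
          return particles[idx]

      rng = np.random.default_rng(0)
      # 100 candidate emission time series (10 altitude-time bins each), kt SO2
      particles = rng.uniform(0.0, 50.0, size=(100, 10))
      misfit = ((particles - 20.0) ** 2).sum(axis=1) / 100.0  # toy misfit
      particles = sir_step(particles, misfit, rng)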

  13. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    NASA Astrophysics Data System (ADS)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are highly sensitive to environmental, natural, and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models that incorporates a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change-history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, surfaces within the objects can be localized where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction, and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  14. Modelling large-scale ice-sheet-climate interactions following glacial inception

    NASA Astrophysics Data System (ADS)

    Gregory, J. M.; Browne, O. J. H.; Payne, A. J.; Ridley, J. K.; Rutt, I. C.

    2012-10-01

    We have coupled the FAMOUS global AOGCM (atmosphere-ocean general circulation model) to the Glimmer thermomechanical ice-sheet model in order to study the development of ice-sheets in north-east America (Laurentia) and north-west Europe (Fennoscandia) following glacial inception. This first use of a coupled AOGCM-ice-sheet model for a study of change on long palæoclimate timescales is made possible by the low computational cost of FAMOUS, despite its inclusion of physical parameterisations similar in complexity to higher-resolution AOGCMs. With the orbital forcing of 115 ka BP, FAMOUS-Glimmer produces ice caps on the Canadian Arctic islands, on the north-west coast of Hudson Bay and in southern Scandinavia, which grow to occupy the Keewatin region of the Canadian mainland and all of Fennoscandia over 50 ka. Their growth is eventually halted by increasing coastal ice discharge. The expansion of the ice-sheets influences the regional climate, which becomes cooler, reducing the ablation, and ice accumulates in places that initially do not have positive surface mass balance. The results suggest the possibility that the glaciation of north-east America could have begun on the Canadian Arctic islands, producing a regional climate change that caused or enhanced the growth of ice on the mainland. The increase in albedo (due to snow and ice cover) is the dominant feedback on the area of the ice-sheets and acts rapidly, whereas the feedback of topography on SMB does not become significant for several centuries, but eventually has a large effect on the thickening of the ice-sheets. These two positive feedbacks are mutually reinforcing. In addition, the change in topography perturbs the tropospheric circulation, producing some reduction of cloud, and mitigating the local cooling along the margin of the Laurentide ice-sheet. Our experiments demonstrate the importance and complexity of the interactions between ice-sheets and local climate.

  15. Large-Scale Wind-Tunnel Tests of Inverting Flaps on a STOL Utility Aircraft Model.

    DTIC Science & Technology

    1980-06-01

    the same basic wing contour for cruise and have been tested in the Ames 40- by 80-Foot Wind Tunnel using this same STOL utility aircraft model with...inverting flap are seen to be quite evenly matched at a descent angle of approximately 13° to 14°, corresponding to a theoretical "no-flare" landing distance...to a T of 2.4, with a maneuvering reserve capability of about 0.6 rad/sec2. A slightly larger horizontal tail would be required to provide adequate

  16. Stochastic and recursive calibration for operational, large-scale, agricultural land and water use management models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Kimball, J. S.; Jencso, K. G.

    2015-12-01

    Managing the impact of climatic cycles on agricultural production, on land allocation, and on the state of active and projected water sources is challenging. This is because, in addition to the uncertainties associated with climate projections, it is difficult to anticipate how farmers will respond to climatic change or to economic and policy incentives. Some sophisticated decision support systems available to water managers consider farmers' adaptive behavior, but they are data intensive and difficult to apply operationally over large regions. Satellite-based observational technologies, in conjunction with models and assimilation methods, create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents at seasonal scales. We present an integrated modeling framework that can be driven by satellite remote sensing to enable robust regional assessment and prediction of climatic and policy impacts on agricultural production, water resources, and management decisions. The core of this framework is a widely used model of agricultural production and resource allocation, adapted to be used in conjunction with remote sensing inputs to quantify the amount of land and water farmers allocate to each crop they choose to grow on a seasonal basis in response to reduced or enhanced access to water due to climatic or policy restrictions. A recursive Bayesian update method is used to adjust the model parameters by assimilating information on crop acreage, production, and crop evapotranspiration (as a proxy for water use) that can be estimated from high-spatial-resolution satellite remote sensing. The data assimilation framework blends new and old information to avoid over-calibration to the specific conditions of a single year and permits the updating of parameters to track gradual changes in the agricultural system. This integrated framework provides an operational means of monitoring and forecasting what crops will be grown
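
    For a single parameter, a recursive Bayesian update of this kind can be sketched as a conjugate Gaussian (Kalman-style) blend of the prior estimate with the new observation, which naturally avoids over-fitting any single season; parameter meaning and values are illustrative assumptions:

      def bayes_update(mu_prior, var_prior, obs, var_obs):
          # Conjugate Gaussian update: posterior mean is a gain-weighted blend
          k = var_prior / (var_prior + var_obs)   # gain: weight on the new data
          mu_post = mu_prior + k * (obs - mu_prior)
          var_post = (1.0 - k) * var_prior
          return mu_post, var_post

      # Season-by-season updating of a hypothetical crop water-use parameter
      mu, var = 1.2, 0.25
      for obs in [1.5, 1.4, 1.45]:                # remote-sensing estimates
          mu, var = bayes_update(mu, var, obs, var_obs=0.1)
      print(f"posterior: mean={mu:.3f}, var={var:.3f}")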

  17. Developing a Massively Parallel Forward Projection Radiography Model for Large-Scale Industrial Applications

    SciTech Connect

    Bauerle, Matthew

    2014-08-01

    This project utilizes Graphics Processing Units (GPUs) to compute radiograph simulations for arbitrary objects. The generation of radiographs, also known as the forward projection imaging model, is computationally intensive and not widely utilized. The goal of this research is to develop a massively parallel algorithm that can compute forward projections for objects with a trillion voxels (3D pixels). To achieve this end, the data are divided into blocks that can each fit into GPU memory. The forward-projected image is also divided into segments to allow for future parallelization and to avoid needless computations.
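
    The blocking strategy can be sketched in a few lines for an idealized parallel-beam geometry: slabs of the volume are projected and accumulated so that each slab fits in device memory. Sizes and the voxel length below are assumed for illustration:

      import numpy as np

      voxel_len = 0.1   # cm, assumed voxel size
      volume = np.random.rand(64, 64, 256).astype(np.float32)  # stand-in object

      # Blocked parallel-beam projection along the beam axis: process 64-slice
      # slabs so each fits in (GPU) memory, accumulating into the detector image.
      radiograph = np.zeros(volume.shape[:2], dtype=np.float32)
      for z0 in range(0, volume.shape[2], 64):
          radiograph += volume[:, :, z0:z0 + 64].sum(axis=2)
      radiograph *= voxel_len   # line integral of attenuation per detector pixel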

  18. Examining tissue differentiation stability through large scale, multi-cellular pathway modeling.

    SciTech Connect

    May, Elebeoba Eni; Schiek, Richard Louis

    2005-03-01

    Using a multi-cellular, pathway model approach, we investigate the Drosophila sp. segmental differentiation network's stability as a function of initial conditions. While this network's functionality has been investigated in the absence of noise, this is the first work to specifically investigate how natural systems respond to random errors or noise. Our findings agree with earlier results that the overall network is robust in the absence of noise. However, when one includes random initial perturbations in intracellular protein WG levels, the robustness of the system decreases dramatically. The effect of noise on the system is not linear, and appears to level out at high noise levels.

  19. Characterizing and modeling the efficiency limits in large-scale production of hyperpolarized 129Xe

    PubMed Central

    Freeman, M.S.; Emami, K.; Driehuys, B.

    2014-01-01

    The ability to produce liter volumes of highly spin-polarized 129Xe enables a wide range of investigations, most notably in the fields of materials science and biomedical MRI. However, for nearly all polarizers built to date, both the peak 129Xe polarization and the rate at which it is produced fall far below those predicted by the standard model of Rb metal vapor spin-exchange optical pumping (SEOP). In this work, we comprehensively characterized a high-volume, flow-through 129Xe polarizer using three different SEOP cells with internal volumes of 100, 200 and 300 cc and two types of optical sources: a broad-spectrum 111-W laser (FWHM = 1.92 nm) and a line-narrowed 71-W laser (FWHM = 0.39 nm). By measuring 129Xe polarization as a function of gas flow rate, we extracted the peak polarization and polarization production rate across a wide range of laser absorption levels. Peak polarization for all cells consistently remained a factor of 2-3 lower than predicted at all absorption levels. Moreover, although production rates increased with laser absorption, they did so much more slowly than predicted by the standard theoretical model and basic spin-exchange efficiency arguments. Underperformance was most notable in the smallest optical cells. We propose that all these systematic deviations from theory can be explained by invoking the presence of paramagnetic Rb clusters within the vapor. Cluster formation within saturated alkali vapors is well established, and their interaction with resonant laser light was recently shown to create plasma-like conditions. Such cluster systems cause both Rb and 129Xe depolarization, as well as excess photon scattering. These effects were incorporated into the SEOP model by assuming that clusters are activated in proportion to the excited-state Rb number density and by further estimating physically reasonable values for the nanocluster-induced, velocity-averaged spin-destruction cross-section for Rb (⟨σ(cluster-Rb) v⟩ ≈ 4×10^-7 cm^3 s^-1), 129Xe
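
    The "standard model" prediction the measurements fall short of reduces, at steady state, to P_Xe = P_Rb * gamma_SE / (gamma_SE + Gamma). A sketch with assumed, illustrative rates:

      # Steady-state SEOP polarization (no cluster losses); all values assumed
      p_rb = 0.85              # Rb polarization
      gamma_se = 1 / 60.0      # s^-1, Rb-129Xe spin-exchange rate
      gamma_relax = 1 / 600.0  # s^-1, 129Xe spin-relaxation rate
      p_xe = p_rb * gamma_se / (gamma_se + gamma_relax)
      print(f"predicted 129Xe polarization: {p_xe:.2f}")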

  20. Pangolin v1.0, a conservative 2-D advection model towards large-scale parallel calculation

    NASA Astrophysics Data System (ADS)

    Praga, A.; Cariolle, D.; Giraud, L.

    2015-02-01

    To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric advection model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach for advection was chosen to ensure mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude-longitude grid, Pangolin uses a quasi-area-preserving reduced latitude-longitude grid. The features of the regular grid are exploited to reduce the memory footprint and enable effective parallel performance. In addition, a custom domain decomposition algorithm is presented. To assess the validity of the advection scheme, its results are compared with state-of-the-art models on algebraic test cases. Finally, parallel performance is shown in terms of strong scaling and confirms the efficient scalability up to a few hundred cores.
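
    The reduced grid idea can be illustrated directly: give each latitude band a number of cells roughly proportional to the cosine of latitude, so cell areas stay approximately uniform and the polar time-step restriction is relaxed. This is a hedged sketch of the general construction, not Pangolin's actual grid-generation code; `reduced_grid` and its arguments are illustrative.

```python
import numpy as np

def reduced_grid(n_bands=90, n_equator=360):
    """Quasi-area-preserving reduced latitude-longitude grid: the cell
    count per band shrinks toward the poles in proportion to cos(lat)."""
    edges = np.linspace(-90.0, 90.0, n_bands + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    cells = np.maximum(1, np.rint(n_equator * np.cos(np.radians(centers))))
    return centers, cells.astype(int)

lat, ncells = reduced_grid()
print("reduced cells:", ncells.sum(), "vs regular grid:", 90 * 360)
```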

  1. Thermal Reactor Model for Large-Scale Algae Cultivation in Vertical Flat Panel Photobioreactors.

    PubMed

    Endres, Christian H; Roth, Arne; Brück, Thomas B

    2016-04-05

    Microalgae can grow significantly faster than terrestrial plants and are a promising feedstock for sustainable value-added products encompassing pharmaceuticals, pigments, proteins, and, most prominently, biofuels. As the biomass productivity of microalgae strongly depends on the cultivation temperature, detailed information on the reactor temperature as a function of time and geographical location is essential to evaluate the true potential of microalgae as an industrial feedstock. In the present study, a temperature model for an array of vertical flat plate photobioreactors is presented. It was demonstrated that mutual shading of reactor panels has a decisive effect on the reactor temperature. By optimizing the distance and thickness of the panels, the occurrence of extreme temperatures and the amplitude of daily temperature fluctuations in the culture medium can be drastically reduced, while maintaining a high level of irradiation on the panels. The presented model was developed and applied to analyze the suitability of various climate zones for algae production in flat panel photobioreactors. Our results demonstrate that in particular Mediterranean and tropical climates represent favorable locations. Lastly, the thermal energy demand required for the case of active temperature control is determined for several locations.

  2. Incremental N-mode SVD for large-scale multilinear generative models.

    PubMed

    Lee, Minsik; Choi, Chong-Ho

    2014-10-01

    Tensor decomposition is frequently used in image processing and machine learning for its ability to express higher order characteristics of data. Among tensor decomposition methods, N-mode singular value decomposition (SVD) is widely used owing to its simplicity. However, the data dimension often becomes too large to perform N-mode SVD directly due to memory limitations. An incremental method for N-mode SVD can be used to resolve this issue, but existing approaches only provide a result that is just enough to solve discriminative problems, not the full factorization. In this paper, we present a complete derivation of the incremental N-mode SVD, which can be applied to generative models, accompanied by a technique that can reduce the computational cost by reordering calculations. The proposed incremental N-mode SVD can also be used effectively to update the current result of N-mode SVD when new training data are received. The proposed method provides a very good approximation of N-mode SVD for the experimental data, and requires much less computation in updating a multilinear model.
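
    A standard way to obtain a full factorization incrementally is a Brand-style update of a rank-k SVD when new columns arrive; the N-mode case applies such an update to each mode unfolding of the tensor. The sketch below shows the matrix building block only, under the assumption of column-wise arrival; it is not the paper's derivation or its cost-reducing reordering.

```python
import numpy as np

def svd_append_columns(U, s, Vt, B, k):
    """Update a rank-k SVD A ~ U @ diag(s) @ Vt after appending columns B."""
    C = U.T @ B                      # coefficients of B in the current basis
    Q, Rr = np.linalg.qr(B - U @ C)  # part of B orthogonal to U
    k0, p = s.size, B.shape[1]
    K = np.zeros((k0 + p, k0 + p))
    K[:k0, :k0] = np.diag(s)
    K[:k0, k0:] = C
    K[k0:, k0:] = Rr
    Uk, sk, Vkt = np.linalg.svd(K)   # small (k0+p) x (k0+p) SVD
    U_new = np.hstack([U, Q]) @ Uk[:, :k]
    n = Vt.shape[1]
    V_big = np.zeros((n + p, k0 + p))
    V_big[:n, :k0] = Vt.T
    V_big[n:, k0:] = np.eye(p)
    return U_new, sk[:k], (V_big @ Vkt.T[:, :k]).T

# Check: appending 5 columns and keeping full rank reproduces [A, B].
rng = np.random.default_rng(1)
A, B = rng.normal(size=(50, 30)), rng.normal(size=(50, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
U2, s2, Vt2 = svd_append_columns(U, s, Vt, B, k=35)
print(np.allclose(U2 @ np.diag(s2) @ Vt2, np.hstack([A, B])))
```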

  3. Large-scale Environmental Variables and Transition to Deep Convection in Cloud Resolving Model Simulations: A Vector Representation

    SciTech Connect

    Hagos, Samson M.; Leung, Lai-Yung R.

    2012-11-01

    Cloud resolving model simulations and vector analysis are used to develop a quantitative method of assessing regional variations in the relationships between various large-scale environmental variables and the transition to deep convection. Results of the CRM simulations from three tropical regions are used to cluster environmental conditions under which transition to deep convection does and does not take place. Projections of the large-scale environmental variables on the difference between these two clusters are used to quantify the roles of these variables in the transition to deep convection. While the transition to deep convection is most sensitive to moisture and vertical velocity perturbations, the details of the profiles of the anomalies vary from region to region. In comparison, the transition to deep convection is found to be much less sensitive to temperature anomalies over all three regions. The vector formulation presented in this study represents a simple general framework for quantifying various aspects of how the transition to deep convection is sensitive to environmental conditions.
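
    The projection step can be written compactly: form the mean environmental profile of the transitioning and non-transitioning clusters, normalize their difference, and project each profile's anomaly onto that direction. The sketch below assumes profiles stacked as rows; it is an illustration of the vector formulation, not the authors' code.

```python
import numpy as np

def transition_score(profiles, labels):
    """profiles: (n_samples, n_levels) environmental profiles;
    labels: 1 where transition to deep convection occurred, else 0.
    Returns each sample's projection on the cluster-difference vector."""
    d = profiles[labels == 1].mean(axis=0) - profiles[labels == 0].mean(axis=0)
    d /= np.linalg.norm(d)                  # unit "transition direction"
    anomalies = profiles - profiles.mean(axis=0)
    return anomalies @ d                    # larger score = more transition-like
```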

  4. Vertical structure of Indonesian throughflow in a large-scale model

    NASA Astrophysics Data System (ADS)

    Potemra, James T.; Hautala, Susan L.; Sprintall, Janet

    2003-07-01

    The vertical structure of the exchange of water between the Pacific and Indian Oceans via the Indonesian throughflow and its temporal variability are examined. Since there are no simultaneous, direct observations of transport variations with depth at the inflow straits (Makassar, Maluku, and Halmahera) and outflow straits (Lombok, Ombai, and Timor), numerical model results are used. Analysis of depth-integrated transport through the model straits indicates differences in the vertical structure of the flow between the inflow and outflow straits. Generally speaking, local winds affect flow in a layer above the thermocline, while remote forcing, e.g., ENSO or coastal Kelvin waves, affect flow in a subsurface layer. On the outflow side, transport occurs primarily in two vertical modes. The dominant mode is characterized by a surface intensification that decays to zero around 400 m. The second mode is characterized by flow in the upper 100 m that is of opposite direction to flow from 100 to 400 m. The vertical decomposition of transport through the model's inflow straits varies between the straits. At Makassar, the western-most inflow passage, the dominant mode is similar to the outflow straits, with a surface intensification of southward transport that decays to zero at 800 m. At Halmahera, the eastern-most inflow strait, the dominant mode is two-layer, with surface to 200 m transport in the opposite direction of transport from 200 to 700 m, similar to the second mode at the outflow straits. At Maluku, the center inflow passage, the dominant vertical mode is three-layer. At this strait, there is a layer from about 100 to 800 m within which flow is in the opposite direction to flow in a surface layer above 100 m and in a deeper layer below 800 m. Phase lags on the annual cycle suggest that during April-October, peaking in May, there is a convergence of mass in the upper 100 m of the Indonesian seas. This convergence is balanced by a mass divergence from 100 to 710 m
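
    One common way to extract such dominant vertical modes from a model time series is an EOF (SVD) decomposition of the time-depth transport anomaly matrix. The sketch below is a generic illustration under that assumption, not the decomposition actually used in the paper.

```python
import numpy as np

def vertical_modes(transport, n_modes=3):
    """transport: (n_times, n_depths) transport series at one strait.
    Returns the leading vertical structures and their variance fractions."""
    anomaly = transport - transport.mean(axis=0)
    _, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
    explained = s**2 / np.sum(s**2)
    return Vt[:n_modes], explained[:n_modes]
```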

  5. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    SciTech Connect

    Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin; van Bloemen Waanders, Bart Gustaaf; Fike, Jeffrey A.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for onthe- spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier
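
    The basic POD/Galerkin pipeline the report builds on can be sketched for a linear system dx/dt = Ax: collect snapshots, take the leading left singular vectors as a basis, and project the operator. This toy version uses a plain L2 inner product, not the report's energy-stabilizing symmetry inner product, and all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dt, steps = 200, 1e-2, 500
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)  # stable toy operator

# High-fidelity run (forward Euler) to collect snapshots.
x0 = rng.normal(size=n)
x, snaps = x0.copy(), []
for _ in range(steps):
    x = x + dt * (A @ x)
    snaps.append(x.copy())
X = np.column_stack(snaps)

# POD basis: leading left singular vectors of the snapshot matrix.
Phi = np.linalg.svd(X, full_matrices=False)[0][:, :10]

# Galerkin projection: a 10x10 reduced operator instead of 200x200.
Ar = Phi.T @ A @ Phi
xr = Phi.T @ x0
for _ in range(steps):
    xr = xr + dt * (Ar @ xr)
print("relative ROM error:", np.linalg.norm(Phi @ xr - x) / np.linalg.norm(x))
```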

  6. Measurement, Modeling, and Analysis of a Large-scale Blog Server Workload

    SciTech Connect

    Jeon, Myeongjae; Hwang, Jeaho; Kim, Youngjae; Jae-Wan, Jang; Lee, Joonwon; Seo, Euiseong

    2010-01-01

    Despite the growing popularity of Online Social Networks (OSNs), the workload characteristics of OSN servers, such as those hosting blog services, are not well understood. Understanding workload characteristics is important for optimizing and improving the performance of current systems and software based on observed trends. Thus, in this paper, we characterize the system workload of the largest blog hosting service in South Korea, Tistory. In addition to understanding the system workload of the blog hosting server, we have developed synthesized workloads and obtained the following major findings: (i) the transfer size of non-multimedia files and blog articles can be modeled by a truncated Pareto distribution and a log-normal distribution, respectively, and (ii) user accesses to blog articles do not show temporal locality, but they are strongly biased toward articles posted along with images or audio.
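
    The two reported size distributions are easy to fit once a trace is in hand. The sketch below uses synthetic stand-in data: a standard log-normal fit from scipy, plus a simple Hill-type maximum-likelihood estimate of the Pareto tail index that ignores the upper truncation point (a proper truncated-Pareto MLE is more involved).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Log-normal fit for blog-article sizes (floc=0 keeps it a pure log-normal).
article_sizes = rng.lognormal(mean=9.0, sigma=1.2, size=10_000)  # fake trace
shape, _, scale = stats.lognorm.fit(article_sizes, floc=0)
print(f"log-normal: sigma={shape:.2f}, median={scale:.0f} bytes")

# Hill-type tail-index estimate for non-multimedia file sizes.
xmin = 1_000.0
file_sizes = xmin * (1.0 + rng.pareto(1.5, size=10_000))         # fake trace
alpha = file_sizes.size / np.log(file_sizes / xmin).sum()
print(f"Pareto tail index alpha={alpha:.2f}")
```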

  7. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    SciTech Connect

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve on the algorithms so that the ever increasing problem sizes required by the physics application can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of the previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
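
    The Hermitian/skew-Hermitian splitting (HSS) iteration has a standard textbook form: split A = H + S into its Hermitian and skew-Hermitian parts and alternate two shifted solves. The sketch below is a dense toy version for clarity; the accelerator application is of course sparse, and the parameter choices here are illustrative.

```python
import numpy as np

def hss_solve(A, b, alpha=1.0, tol=1e-10, max_iter=500):
    """HSS iteration for A x = b with A non-Hermitian positive definite:
        (alpha I + H) x_{k+1/2} = (alpha I - S) x_k       + b
        (alpha I + S) x_{k+1}   = (alpha I - H) x_{k+1/2} + b"""
    n = A.shape[0]
    H = 0.5 * (A + A.conj().T)         # Hermitian part
    S = 0.5 * (A - A.conj().T)         # skew-Hermitian part
    I = np.eye(n)
    x = np.zeros_like(b)
    for _ in range(max_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x

rng = np.random.default_rng(4)
n = 50
A = 2.0 * np.eye(n) + rng.normal(size=(n, n)) / np.sqrt(n)  # PD Hermitian part
b = rng.normal(size=n)
print("residual:", np.linalg.norm(A @ hss_solve(A, b) - b))
```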

  8. North American extreme temperature events and related large scale meteorological patterns: a review of statistical methods, dynamics, modeling, and trends

    NASA Astrophysics Data System (ADS)

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Mike; Gershunov, Alexander; Gutowski, William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat

    2016-02-01

    The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.

  9. The Impacts of Armoring Our Deltas: Mapping and Modeling Large-Scale Deltaplain Aggradation

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Higgins, S.; Syvitski, J. P.; Kettner, A. J.; Brakenridge, R.

    2014-12-01

    Humans have hardened land-water boundaries in almost every delta they live on. Engineering includes stabilizing and embanking channels to protect from river floods, building dikes around islands and emerging bars to reclaim land, and putting up sea walls to protect from waves and storm surges. These measures aim to reduce the exchange of water and sediment between the distributary delta channel network and the adjacent deltaplain. To first order, armoring of deltas results in net elevation loss of the floodplain due to subsidence, compaction, and reduced aggradation. Here we ask: what are the mechanisms of aggradation in 'armored' deltas? How do aggradation patterns compare to more natural depositional patterns? We analyze 2-week aggregates of MODIS satellite data from 2000 onwards to map inundation patterns due to irrigation, river floods, and storm surges for selected deltas. Using a MODIS band ratio, we assess relative concentrations of suspended sediment in stagnant water on the floodplains. In addition, we use a simple approach to route sediment through the delta distributary network based on the relative channel geometries. A depositional process model then calculates cross-channel sediment flux as an exponential decay function and determines sediment deposition over inundated areas. Stacked inundation maps show vast areas of deltaplains have flooded between 2000 and 2014, despite armoring channels with dikes and coastlines with seawalls. Flooding is caused by overtopping of levees and, more rarely, by breaching; in the latter cases the flooded areas are often locally constrained. In Asian deltas, rice paddy irrigation with floodwater can be mapped even in the more distal floodplain. Our model predicts that inundated areas still receive significant amounts of fresh sediment, but that the pattern is more variable than in natural systems. Sparse in-situ observations of floodplain aggradation rates and storm surge deposits corroborate high, but localized
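
    The cross-channel deposition rule is simple enough to state as code: the channel-derived sediment flux decays exponentially with distance from the channel and is applied only over inundated cells. The sketch below is a schematic reading of that rule with made-up parameter values, not the authors' model.

```python
import numpy as np

def deposit(distance_m, inundated, channel_flux, e_fold_m=2_000.0):
    """Deposition on the deltaplain: exponential decay of the channel
    flux with distance, restricted to inundated cells."""
    rate = channel_flux * np.exp(-distance_m / e_fold_m)
    return np.where(inundated, rate, 0.0)

# Toy transect: 0-10 km from a levee, flooded out to 5 km.
dist = np.linspace(0.0, 10_000.0, 11)
flooded = dist <= 5_000.0
print(deposit(dist, flooded, channel_flux=10.0))  # e.g., mm per flood event
```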

  10. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    NASA Astrophysics Data System (ADS)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological, or radiological material. With the growing fear of natural, accidental, or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined together to perform the task of source characterization and forecasting. These components include Uncertainty Quantification, Optimal Information Collection, and Data Assimilation. Precise approximation of prior statistics is crucial to ensure the performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized for quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to warrant accurate source parameter estimation. The performance of source characterization is strongly affected by the sensor placement used for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that measurement of better data is guaranteed. This is achieved by maximizing the mutual information between model predictions
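
    The quadrature-based propagation step can be illustrated with Gauss-Hermite quadrature: evaluate the dispersion model at a handful of deterministically chosen source-parameter values and combine them into output moments. Here `plume_peak` is a hypothetical stand-in for the dispersion model, and the Gaussian prior on the source strength is assumed for illustration.

```python
import numpy as np

def plume_peak(emission_rate):
    """Stand-in for an expensive dispersion model: peak ground-level
    concentration as a nonlinear function of source strength."""
    return 5.0 * np.sqrt(emission_rate)

# Source strength ~ N(mu, sigma^2); for X ~ N(mu, sigma^2),
# E[g(X)] = (1/sqrt(pi)) * sum_i w_i * g(mu + sigma*sqrt(2)*x_i).
mu, sigma = 100.0, 15.0
nodes, weights = np.polynomial.hermite.hermgauss(9)
vals = plume_peak(mu + sigma * np.sqrt(2.0) * nodes)
mean = np.sum(weights * vals) / np.sqrt(np.pi)
var = np.sum(weights * vals**2) / np.sqrt(np.pi) - mean**2
print(f"peak concentration: mean={mean:.2f}, std={np.sqrt(var):.2f}")
```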

  11. Modeling the Economic Feasibility of Large-Scale Net-Zero Water Management: A Case Study.

    PubMed

    Guo, Tianjiao; Englehardt, James D; Fallon, Howard J

    While municipal direct potable water reuse (DPR) has been recommended for consideration by the U.S. National Research Council, it is unclear how to size new closed-loop DPR plants, termed "net-zero water (NZW) plants", to minimize cost and energy demand assuming upgradient water distribution. Based on a recent model optimizing the economics of plant scale for generalized conditions, the authors evaluated the feasibility and optimal scale of NZW plants for treatment capacity expansion in Miami-Dade County, Florida. Local data on population distribution and topography were input to compare projected costs for NZW vs. the current plan. Total cost was minimized at a scale of 49 NZW plants for the service population of 671,823. Total unit cost for NZW systems, which mineralize chemical oxygen demand to below normal detection limits, is projected at ~$10.83/1000 gal, approximately 13% above the current plan and less than rates reported for several significant U.S. cities.

  12. The role of soil hydrologic heterogeneity for modeling large-scale bioremediation protocols.

    NASA Astrophysics Data System (ADS)

    Romano, N.; Palladino, M.; Speranza, G.; Di Fiore, P.; Sica, B.; Nasta, P.

    2014-12-01

    The major aim of the EU-Life+ project EcoRemed (Implementation of eco-compatible protocols for agricultural soil remediation in Litorale Domizio-Agro Aversano NIPS) is the implementation of operating protocols for agriculture-based bioremediation of contaminated croplands, which also involves plants extracting pollutants being then used as biomasses for renewable energy production. The study area is the National Interest Priority Site (NIPS) called Litorale Domitio-Agro Aversano, which is located in the Campania Region (Southern Italy) and has an extent of about 200,000 hectares. In this area, high-level, spatially scattered soil contamination is mostly due to legal and illegal disposal of industrial and municipal wastes, with hazardous consequences for groundwater quality as well. An accurate determination of the soil hydraulic properties to characterize the landscape heterogeneity of the study area plays a key role within the general framework of this project, especially in view of the use of various modeling tools for water flow and solute transport simulations and to predict the effectiveness of the adopted bioremediation protocols. The present contribution is part of an ongoing study where we are investigating the following research questions: a) Which spatial aggregation schemes seem more suitable for upscaling from point to block support? b) Which effective soil hydrologic characteristic schemes better simulate the average behavior of larger-scale phytoremediation processes? c) Allowing also for questions a) and b), how does the spatial variability of soil hydraulic properties affect the variability of plant responses to hydro-meteorological forcing?

  13. Analysis of large-scale tablet coating: Modeling, simulation and experiments.

    PubMed

    Boehling, P; Toschkoff, G; Knop, K; Kleinebudde, P; Just, S; Funke, A; Rehbaum, H; Khinast, J G

    2016-07-30

    This work concerns a tablet coating process in an industrial-scale drum coater. We set up a full-scale Design of Simulation Experiment (DoSE) using the Discrete Element Method (DEM) to investigate the influence of various process parameters (the spray rate, the number of nozzles, the rotation rate and the drum load) on the coefficient of inter-tablet coating variation (cv,inter). The coater was filled with up to 290 kg of material, which is equivalent to 1,028,369 tablets. To mimic the tablet shape, the glued-sphere approach was followed, and each modeled tablet consisted of eight spheres. We simulated the process via the eXtended Particle System (XPS), proving that it is possible to accurately simulate the tablet coating process on the industrial scale. The process time required to reach a uniform tablet coating was extrapolated based on the simulated data and was in good agreement with experimental results. The results are provided at various levels of detail, from a thorough investigation of the influence that the process parameters have on the cv,inter and the number of tablets that visit the spray zone during the simulated 90 s, to the velocity in the spray zone and the spray and bed cycle times. It was found that increasing the number of nozzles and decreasing the spray rate had the highest influence on the cv,inter. Although increasing the drum load and the rotation rate increased the tablet velocity, it did not have a relevant influence on the cv,inter and the process time.
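
    The target quantity cv,inter is just the relative standard deviation of coating mass across tablets. The toy sketch below shows why longer process times drive it down: if coating accrues per spray-zone visit and visits are roughly Poisson-distributed, cv,inter falls as the mean number of visits grows. All numbers are made up for illustration.

```python
import numpy as np

def cv_inter(coating_mass):
    """Coefficient of inter-tablet coating variation: std/mean across tablets."""
    return coating_mass.std(ddof=1) / coating_mass.mean()

rng = np.random.default_rng(5)
for mean_visits in (5, 20, 80):
    mass = 0.1 * rng.poisson(lam=mean_visits, size=100_000)  # mg per tablet
    print(f"mean visits={mean_visits:3d}  cv_inter={cv_inter(mass):.3f}")
```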

  14. Sialendoscopy Training: Presentation of a Realistic Model.

    PubMed

    Pascoto, Gabriela Robaskewicz; Stamm, Aldo Cassol; Lyra, Marcos

    2017-01-01

    Introduction Several surgical training simulators have been created for residents and young surgeons to gain experience with surgical procedures. Laboratory training is fundamental for acquiring familiarity with the techniques of surgery and skill in handling instruments. Objective The aim of this study is to present a novel simulator for training sialendoscopy. Method This realistic simulator was built with a synthetic thermo-retractile, thermo-sensible rubber which, when combined with different polymers, produces more than 30 different formulas. These formulas present textures, consistencies, and mechanical resistance similar to those of many human tissues. Results The authors present a training model to practice sialendoscopy. All aspects of the procedure are simulated: mouth opening, dilatation of the papillae, insertion of the scope, visualization of stones, extraction of these stones with graspers or baskets, and finally, stone fragmentation with holmium laser. Conclusion This anatomical model for sialendoscopy training should be considerably useful to shorten the learning curve during the qualification of young surgeons while minimizing the consequences of technical errors.

  15. Sialendoscopy Training: Presentation of a Realistic Model

    PubMed Central

    Pascoto, Gabriela Robaskewicz; Stamm, Aldo Cassol; Lyra, Marcos

    2016-01-01

    Introduction Several surgical training simulators have been created for residents and young surgeons to gain experience with surgical procedures. Laboratory training is fundamental for acquiring familiarity with the techniques of surgery and skill in handling instruments. Objective The aim of this study is to present a novel simulator for training sialendoscopy. Method This realistic simulator was built with a synthetic thermo-retractile, thermo-sensible rubber which, when combined with different polymers, produces more than 30 different formulas. These formulas present textures, consistencies, and mechanical resistance similar to those of many human tissues. Results The authors present a training model to practice sialendoscopy. All aspects of the procedure are simulated: mouth opening, dilatation of the papillae, insertion of the scope, visualization of stones, extraction of these stones with graspers or baskets, and finally, stone fragmentation with holmium laser. Conclusion This anatomical model for sialendoscopy training should be considerably useful to shorten the learning curve during the qualification of young surgeons while minimizing the consequences of technical errors. PMID:28050202

  16. Low-speed wind-tunnel investigation of a large-scale VTOL lift-fan transport model

    NASA Technical Reports Server (NTRS)

    Aoyagi, K.

    1979-01-01

    An investigation was conducted in the NASA-Ames 40- by 80-Foot Wind Tunnel to determine the aerodynamic characteristics of a large-scale VTOL lift-fan jet transport model. The model had two lift fans at the forward portion of the fuselage, a lift fan at each wing tip, and two lift/cruise fans at the aft portion of the fuselage. All fans were driven by tip turbines using T-58 gas generators. Results were obtained for several lift-fan exit-vane deflections and lift/cruise-fan thrust deflections at zero sideslip. Three-component longitudinal data are presented at several fan tip speed ratios. A limited amount of six-component data were obtained with asymmetric vane settings. All of the data were obtained without a horizontal tail. Downwash angles at a typical tail location are also presented.

  17. A Preliminary Model Study of the Large-Scale Seasonal Cycle in Bottom Pressure Over the Global Ocean

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.

    1998-01-01

    Output from the primitive equation model of Semtner and Chervin is used to examine the seasonal cycle in bottom pressure (Pb) over the global ocean. Effects of the volume-conserving formulation of the model on the calculation of Pb are considered. The estimated seasonal, large-scale Pb signals have amplitudes ranging from less than 1 cm over most of the deep ocean to several centimeters over shallow, boundary regions. Variability generally increases toward the western sides of the basins, and is also larger in some Southern Ocean regions. An oscillation between subtropical and higher latitudes in the North Pacific is clear. Comparison with barotropic simulations indicates that, on basin scales, seasonal Pb variability is related to barotropic dynamics and the seasonal cycle in Ekman pumping, and results from a small, net residual in mass divergence from the balance between Ekman and Sverdrup flows.

  18. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (the controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  19. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    SciTech Connect

    Bonne, François; Bonnay, Patrick

    2014-01-29

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are subjected to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (the controllable subsystems are namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed, to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  20. A realistic molecular model of cement hydrates

    PubMed Central

    Pellenq, Roland J.-M.; Kushima, Akihiro; Shahsavari, Rouzbeh; Van Vliet, Krystyn J.; Buehler, Markus J.; Yip, Sidney; Ulm, Franz-Josef

    2009-01-01

    Despite decades of studies of calcium-silicate-hydrate (C-S-H), the structurally complex binder phase of concrete, the interplay between chemical composition and density remains essentially unexplored. Together these characteristics of C-S-H define and modulate the physical and mechanical properties of this “liquid stone” gel phase. With the recent determination of the calcium/silicon (C/S = 1.7) ratio and the density of the C-S-H particle (2.6 g/cm3) by neutron scattering measurements, there is new urgency to the challenge of explaining these essential properties. Here we propose a molecular model of C-S-H based on a bottom-up atomistic simulation approach that considers only the chemical specificity of the system as the overriding constraint. By allowing for short silica chains distributed as monomers, dimers, and pentamers, this C-S-H archetype of a molecular description of interacting CaO, SiO2, and H2O units provides not only realistic values of the C/S ratio and the density computed by grand canonical Monte Carlo simulation of water adsorption at 300 K. The model, with a chemical composition of (CaO)1.65(SiO2)(H2O)1.75, also predicts other essential structural features and fundamental physical properties amenable to experimental validation, which suggest that the C-S-H gel structure includes both glass-like short-range order and crystalline features of the mineral tobermorite. Additionally, we probe the mechanical stiffness, strength, and hydrolytic shear response of our molecular model, as compared to experimentally measured properties of C-S-H. The latter results illustrate the prospect of treating cement on equal footing with metals and ceramics in the current application of mechanism-based models and multiscale simulations to study inelastic deformation and cracking. PMID:19805265

  1. A realistic molecular model of cement hydrates.

    PubMed

    Pellenq, Roland J-M; Kushima, Akihiro; Shahsavari, Rouzbeh; Van Vliet, Krystyn J; Buehler, Markus J; Yip, Sidney; Ulm, Franz-Josef

    2009-09-22

    Despite decades of studies of calcium-silicate-hydrate (C-S-H), the structurally complex binder phase of concrete, the interplay between chemical composition and density remains essentially unexplored. Together these characteristics of C-S-H define and modulate the physical and mechanical properties of this "liquid stone" gel phase. With the recent determination of the calcium/silicon (C/S = 1.7) ratio and the density of the C-S-H particle (2.6 g/cm3) by neutron scattering measurements, there is new urgency to the challenge of explaining these essential properties. Here we propose a molecular model of C-S-H based on a bottom-up atomistic simulation approach that considers only the chemical specificity of the system as the overriding constraint. By allowing for short silica chains distributed as monomers, dimers, and pentamers, this C-S-H archetype of a molecular description of interacting CaO, SiO2, and H2O units provides not only realistic values of the C/S ratio and the density computed by grand canonical Monte Carlo simulation of water adsorption at 300 K. The model, with a chemical composition of (CaO)1.65(SiO2)(H2O)1.75, also predicts other essential structural features and fundamental physical properties amenable to experimental validation, which suggest that the C-S-H gel structure includes both glass-like short-range order and crystalline features of the mineral tobermorite. Additionally, we probe the mechanical stiffness, strength, and hydrolytic shear response of our molecular model, as compared to experimentally measured properties of C-S-H. The latter results illustrate the prospect of treating cement on equal footing with metals and ceramics in the current application of mechanism-based models and multiscale simulations to study inelastic deformation and cracking.

  2. Large-scale Flood Simulation with Rainfall-Runoff-Inundation Model in the Chao Phraya River Basin

    NASA Astrophysics Data System (ADS)

    Sayama, Takahiro; Tatebe, Yuya; Tanaka, Shigenobu

    2013-04-01

    A large amount of rainfall during the 2011 monsoon season caused an unprecedented flood disaster in the Chao Phraya River basin in Thailand. When a large-scale flood occurs, it is very important to take appropriate emergency measures by holistically understanding the characteristics of the flooding based on available information and by predicting its possible development. This paper proposes a quick-response flood simulation approach that can be conducted during a severe flooding event. The hydrologic simulation model used in this study is designed to simulate river discharges and flood inundation simultaneously for an entire river basin with satellite-based rainfall and topographic information. The model is based on two-dimensional diffusive wave equations for rainfall-runoff and inundation calculations. The model takes into account the effects of lateral subsurface flow and vertical infiltration flow, since these two types of flow are also important processes. This paper presents prediction results obtained in mid-October 2011, when the flooding in Thailand was approaching its peak. Our scientific question is how well we can predict the possible development of a large-scale flooding event with limited information, and how much we can improve the prediction with more local information. In comparison with a satellite-based flood inundation map, the study found that the quick-response simulation (Lv1) was capable of reasonably capturing the peak flood inundation extent. Our interpretation of the prediction was that the flooding might continue even until the end of November, which was also positively confirmed to some extent by the actual flooding status in late November. Nevertheless, the Lv1 simulation generally overestimated the peak water level. To address this overestimation, the input data were updated with additional local information (Lv2). Consequently, the simulation accuracy improved in the
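
    A one-dimensional caricature of the diffusive-wave scheme conveys the core update: water depth evolves under rainfall and Manning-friction fluxes driven by the water-surface slope. This is a heavily simplified sketch (1D, explicit, no infiltration or subsurface flow) with invented parameter values, not the model used in the paper.

```python
import numpy as np

nx, dx, dt = 100, 100.0, 1.0            # cells, m, s
n_mann = 0.05                            # Manning roughness
z = np.linspace(10.0, 0.0, nx)           # bed elevation: uniform downhill slope
h = np.zeros(nx)                         # water depth (m)
rain = 50.0 / 1000.0 / 3600.0            # 50 mm/h in m/s

for _ in range(3600):                    # one hour of storm
    H = z + h                            # water-surface elevation
    dHdx = np.diff(H) / dx               # surface slope at cell interfaces
    h_face = np.maximum(h[:-1], h[1:])   # simple upwind-style interface depth
    q = -(h_face**(5.0 / 3.0) / n_mann) * np.sign(dHdx) * np.sqrt(np.abs(dHdx))
    # Flux divergence per cell: closed upstream end, zero-gradient outflow.
    dq = np.diff(np.concatenate([[0.0], q, [q[-1]]]))
    h = np.maximum(h + dt * (rain - dq / dx), 0.0)

print("depth near outlet after 1 h (m):", h[-1])
```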

  3. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    SciTech Connect

    Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.; Boggs, Paul T.; Ray, Jaideep; van Bloemen Waanders, Bart Gustaaf

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This

  4. Mechanical stabilization of the Levitron's realistic model

    NASA Astrophysics Data System (ADS)

    Olvera, Arturo; De la Rosa, Abraham; Giordano, Claudia M.

    2016-11-01

    The stability of the magnetic levitation exhibited by the Levitron was studied by M.V. Berry as a six-degree-of-freedom Hamiltonian system using an adiabatic approximation. Further, H.R. Dullin found critical spin rate bounds within which the levitation persists, and R.F. Gans et al. offered numerical results regarding the manifold of initial conditions where this occurs. In line with this series of works, we first extend the equations of motion to include dissipation for a more realistic model, and then introduce a mechanical forcing to inject energy into the system in order to prevent the Levitron from falling. A systematic study of the flying time as a function of the forcing parameters is carried out, which yields detailed bifurcation diagrams showing an Arnold's tongues structure. The stability of these solutions was studied with the help of a novel method to compute the maximum Lyapunov exponent called MEGNO. The bifurcation diagrams for MEGNO reproduce the same Arnold's tongue structure.

  5. Observed and CMIP5 modeled influence of large-scale circulation on summer precipitation and drought in the South-Central United States

    NASA Astrophysics Data System (ADS)

    Ryu, Jung-Hee; Hayhoe, Katharine

    2017-02-01

    Annual precipitation in the largely agricultural South-Central United States is characterized by a primary wet season in May and June, a mid-summer dry period in July and August, and a second precipitation peak in September and October. Of the 22 CMIP5 global climate models with sufficient output available, 16 are able to reproduce this bimodal distribution (we refer to these as "BM" models), while 6 have trouble simulating the mid-summer dry period, instead producing an extended wet season ("EW" models). In BM models, the timing and amplitude of the mid-summer westward extension of the North Atlantic Subtropical High (NASH) are realistic, while the magnitude of the Great Plains Lower Level Jet (GPLLJ) tends to be overestimated, particularly in July. In EW models, temporal variations and geophysical locations of the NASH and GPLLJ appear reasonable compared to reanalysis but their magnitudes are too weak to suppress mid-summer precipitation. During warm-season droughts, however, both groups of models reproduce the observed tendency towards a stronger NASH that remains over the region through September, and an intensification and northward extension of the GPLLJ. Similarly, future simulations from both model groups under a +1 to +3 °C transient increase in global mean temperature show decreases in summer precipitation concurrent with an enhanced NASH and an intensified GPLLJ, though models differ regarding the months in which these decreases are projected to occur: early summer in the BM models, and late summer in the EW models. Overall, these results suggest that projected future decreases in summer precipitation over the South-Central region appear to be closely related to anomalous patterns of large-scale circulation already observed and modeled during historical dry years, patterns that are consistently reproduced by CMIP5 models.

  6. The integration of large-scale neural network modeling and functional brain imaging in speech motor control

    PubMed Central

    Golfinopoulos, E.; Tourville, J.A.; Guenther, F.H.

    2009-01-01

    Speech production demands a number of integrated processing stages. The system must encode the speech motor programs that command movement trajectories of the articulators and monitor transient spatiotemporal variations in auditory and somatosensory feedback. Early models of this system proposed that independent neural regions perform specialized speech processes. As technology advanced, neuroimaging data revealed that the dynamic sensorimotor processes of speech require a distributed set of interacting neural regions. The DIVA (Directions into Velocities of Articulators) neurocomputational model elaborates on early theories, integrating existing data and contemporary ideologies, to provide a mechanistic account of acoustic, kinematic, and functional magnetic resonance imaging (fMRI) data on speech acquisition and production. This large-scale neural network model is composed of several interconnected components whose cell activities and synaptic weight strengths are governed by differential equations. Cells in the model are associated with neuroanatomical substrates and have been mapped to locations in Montreal Neurological Institute stereotactic space, providing a means to compare simulated and empirical fMRI data. The DIVA model also provides a computational and neurophysiological framework within which to interpret and organize research on speech acquisition and production in fluent and dysfluent child and adult speakers. The purpose of this review article is to demonstrate how the DIVA model is used to motivate and guide functional imaging studies. We describe how model predictions are evaluated using voxel-based, region-of-interest-based parametric analyses and inter-regional effective connectivity modeling of fMRI data. PMID:19837177

  7. Stringent restriction from the growth of large-scale structure on apparent acceleration in inhomogeneous cosmological models.

    PubMed

    Ishak, Mustapha; Peel, Austin; Troxel, M A

    2013-12-20

    Probes of cosmic expansion constitute the main basis for arguments to support or refute a possible apparent acceleration due to different expansion rates in the Universe as described by inhomogeneous cosmological models. We present in this Letter a separate argument based on results from an analysis of the growth rate of large-scale structure in the Universe as modeled by the inhomogeneous cosmological models of Szekeres. We use the models with no assumptions of spherical or axial symmetry. We find that while the Szekeres models can fit the observed expansion history very well without a Λ, they fail to produce the observed late-time suppression in the growth unless Λ is added to the dynamics. A simultaneous fit to the supernova and growth factor data shows that the cold dark matter model with a cosmological constant (ΛCDM) provides consistency with the data at a confidence level of 99.65%, while the Szekeres model without Λ achieves only a 60.46% level. When the data sets are considered separately, the Szekeres model with no Λ fits the supernova data as well as the ΛCDM does, but provides a very poor fit to the growth data, with only a 31.31% consistency level compared to 99.99% for the ΛCDM. This absence of late-time growth suppression in inhomogeneous models without a Λ is consolidated by a physical explanation.

  8. Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    NASA Astrophysics Data System (ADS)

    Baraka, Suleiman

    2016-06-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 RE along the Sun-Earth line, and ≈29 RE on the dusk side. Those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ≈2 c/ωpi for ΘBn = 90° and MMS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ωpi. In the foreshock region, the thermal velocity is found to be 213 km s-1 at 15 RE, and 63 km s-1 at 12 RE (magnetosheath region). Despite the large cell size of the current version of the PIC code, it is powerful enough to retain the macrostructure of planetary magnetospheres in a very short time, so it can be used for pedagogical test purposes. It is also likely complementary with MHD in deepening our understanding of the large scale magnetosphere.

  9. Analyzing large-scale conservation interventions with Bayesian hierarchical models: a case study of supplementing threatened Pacific salmon

    PubMed Central

    Scheuerell, Mark D; Buhle, Eric R; Semmens, Brice X; Ford, Michael J; Cooney, Tom; Carmichael, Richard W

    2015-01-01

    Myriad human activities increasingly threaten the existence of many species. A variety of conservation interventions such as habitat restoration, protected areas, and captive breeding have been used to prevent extinctions. Evaluating the effectiveness of these interventions requires appropriate statistical methods, given the quantity and quality of available data. Historically, analysis of variance has been used with some form of predetermined before-after control-impact design to estimate the effects of large-scale experiments or conservation interventions. However, ad hoc retrospective study designs or the presence of random effects at multiple scales may preclude the use of these tools. We evaluated the effects of a large-scale supplementation program on the density of adult Chinook salmon Oncorhynchus tshawytscha from the Snake River basin in the northwestern United States currently listed under the U.S. Endangered Species Act. We analyzed 43 years of data from 22 populations, accounting for random effects across time and space using a form of Bayesian hierarchical time-series model common in analyses of financial markets. We found that varying degrees of supplementation over a period of 25 years increased the density of natural-origin adults, on average, by 0–8% relative to nonsupplementation years. Thirty-nine of the 43 year effects were at least two times larger in magnitude than the mean supplementation effect, suggesting common environmental variables play a more important role in driving interannual variability in adult density. Additional residual variation in density varied considerably across the region, but there was no systematic difference between supplemented and reference populations. Our results demonstrate the power of hierarchical Bayesian models to detect the diffuse effects of management interventions and to quantitatively describe the variability of intervention success. Nevertheless, our study could not address whether ecological
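
    A minimal version of such a hierarchical model, with shared year effects plus an average supplementation effect on log adult density, can be sketched with a probabilistic programming library. The snippet below assumes PyMC and synthetic placeholder data; it is a structural illustration only, not the authors' full time-series formulation.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(6)
n_years, n_pops = 43, 22
supplemented = rng.integers(0, 2, size=(n_years, n_pops))  # 0/1 design matrix
log_density = rng.normal(size=(n_years, n_pops))           # placeholder data

with pm.Model():
    # Shared year effects capture common environmental variability.
    sigma_year = pm.HalfNormal("sigma_year", 1.0)
    year_effect = pm.Normal("year_effect", 0.0, sigma_year, shape=n_years)
    # Average effect of supplementation on log density.
    beta = pm.Normal("beta", 0.0, 1.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    mu = year_effect[:, None] + beta * supplemented
    pm.Normal("obs", mu, sigma, observed=log_density)
    idata = pm.sample(1000, tune=1000, chains=2)
```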

  10. Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Bernard P., Jr.; Woodard, Brian S.

    2016-01-01

    Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20 percent semispan), Midspan (64 percent semispan) and Outboard stations (83 percent semispan) of a wing based upon a 65 percent scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 to -1.4 C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 to -6.3 C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. The data suggest

  11. Ice-Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Potapczuk, Mark G.; Lee, Sam; Malone, Adam M.; Paul, Benard P., Jr.; Woodard, Brian S.

    2016-01-01

    Icing simulation tools and computational fluid dynamics codes are reaching levels of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing conditions with increasingly less reliance on natural-icing flight testing and icing-wind-tunnel testing. Sufficient high-quality data to evaluate the performance of these tools is not currently available. The objective of this work was to generate a database of ice-accretion geometry that can be used for development and validation of icing simulation tools as well as for aerodynamic testing. Three large-scale swept wing models were built and tested at the NASA Glenn Icing Research Tunnel (IRT). The models represented the Inboard (20% semispan), Midspan (64% semispan) and Outboard stations (83% semispan) of a wing based upon a 65% scale version of the Common Research Model (CRM). The IRT models utilized a hybrid design that maintained the full-scale leading-edge geometry with a truncated afterbody and flap. The models were instrumented with surface pressure taps in order to acquire sufficient aerodynamic data to verify the hybrid model design capability to simulate the full-scale wing section. A series of ice-accretion tests were conducted over a range of total temperatures from -23.8 deg C to -1.4 deg C with all other conditions held constant. The results showed the changing ice-accretion morphology from rime ice at the colder temperatures to highly 3-D scallop ice in the range of -11.2 deg C to -6.3 deg C. Warmer temperatures generated highly 3-D ice accretion with glaze ice characteristics. The results indicated that the general scallop ice morphology was similar for all three models. Icing results were documented for limited parametric variations in angle of attack, drop size and cloud liquid-water content (LWC). The effect of velocity on ice accretion was documented for the Midspan and Outboard models for a limited number of test cases. The data suggest that

  12. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  13. Large-scale thermodynamical and dynamical controls on subtropical cloud variability in observations and CMIP3 and CMIP5 models

    NASA Astrophysics Data System (ADS)

    Myers, T. A.; Norris, J. R.

    2013-12-01

    Uncertainty in radiative feedbacks associated with marine boundary cloud changes over the eastern subtropical oceans was a dominant contributor to the spread of climate sensitivity estimates among models of phase 3 of the Coupled Model Intercomparison Project (CMIP3). The present study compares the interannual sensitivity of boundary layer clouds and overlying high-level clouds to large-scale thermodynamical and dynamical variations in observations, CMIP3 models, and CMIP5 models. In observations, greater outgoing shortwave radiation due to clouds is associated with cooler sea-surface temperature, a stronger temperature inversion above cloud top, a moister free troposphere (thermodynamics), faster surface wind speed, and weaker subsidence (dynamics). More shortwave reflection, generally caused by increased low-level cloud fraction, acts to cool the climate system. One third of CMIP3 models simulate within the range of observational uncertainty each of the observed linear regression coefficients between top-of-atmosphere shortwave cloud radiative effect and the large-scale thermodynamics and dynamics, while only one fourth of CMIP5 models do so. Fewer than half of the simulated regression coefficients of the shortwave cloud radiative effect with respect to variations in surface wind speed are within the range of observational uncertainty in both CMIP3 and CMIP5 models. Interestingly, the regression coefficients for shortwave cloud radiative effect on sea-surface temperature and inversion strength are more poorly simulated in CMIP5 than CMIP3. On this basis, the simulation of subtropical marine boundary layer cloud variability has deteriorated from CMIP3 to CMIP5. In observations, reduced outgoing longwave radiation due to clouds is associated with a moister free troposphere and weaker subsidence. Less outgoing radiation, caused by increased high-level cloud fraction, acts to warm the climate system. These regression coefficients are well simulated by CMIP3 and CMIP5
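
    The regression-coefficient comparison underlying this evaluation reduces to a standardized multiple regression of shortwave cloud radiative effect anomalies on the cloud-controlling factors. Below is a generic sketch, with the predictor list assumed from the abstract rather than taken from the paper's methods.

```python
import numpy as np

def cloud_regression(swcre, predictors):
    """swcre: (n_times,) SW cloud radiative effect anomalies;
    predictors: (n_times, n_factors), e.g. SST, inversion strength,
    free-tropospheric humidity, wind speed, subsidence.
    Returns one standardized coefficient per factor, comparable
    between observations and each model."""
    X = (predictors - predictors.mean(axis=0)) / predictors.std(axis=0)
    X = np.column_stack([np.ones(len(X)), X])     # intercept column
    coef, *_ = np.linalg.lstsq(X, swcre - swcre.mean(), rcond=None)
    return coef[1:]                                # drop the intercept
```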

  14. Calibration of a large-scale groundwater flow model using GRACE data: a case study in the Qaidam Basin, China

    NASA Astrophysics Data System (ADS)

    Hu, Litang; Jiao, Jiu Jimmy

    2015-11-01

    Traditional numerical models usually use extensive observed hydraulic-head data as calibration targets. However, this calibration process is not applicable in remote areas with limited or no monitoring data. This study presents an approach to calibrate a large-scale groundwater flow model using the monthly Gravity Recovery and Climate Experiment (GRACE) satellite data, which have been available globally on a spatial grid of 1° in the geographic coordinate system since 2002. A groundwater storage anomaly isolated from the terrestrial water storage (TWS) anomaly is converted into hydraulic head at the center of the grid, which is then used as observed data to calibrate a numerical model to estimate aquifer hydraulic conductivity. The aquifer system in the remote and hyperarid Qaidam Basin, China, is used as a case study to demonstrate the applicability of this approach. A groundwater model using FEFLOW is constructed for the Qaidam Basin, and the GRACE-derived groundwater storage anomaly over the period 2003-2012 is included to calibrate the model, which is done using an automatic estimation method (PEST). The calibrated model is then run to output hydraulic heads at three sites where long-term hydraulic head data are available. The reasonably good fit between the calculated and observed hydraulic heads, together with the very similar groundwater storage anomalies from the numerical model and GRACE data, demonstrates that this approach is generally applicable in regions of groundwater data scarcity.
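
    The storage-to-head conversion at the heart of this calibration approach is simple enough to sketch. Below is a minimal illustration in which the groundwater component is isolated by subtracting modelled soil-moisture and snow contributions from the TWS anomaly and then divided by specific yield; all numbers, including the specific-yield value, are hypothetical placeholders rather than Qaidam results.

    ```python
    def grace_head_anomaly(tws_cm, soil_moisture_cm, snow_cm, specific_yield=0.12):
        """Convert a GRACE TWS anomaly (cm water equivalent) to a head anomaly (m).

        The groundwater storage anomaly is what remains of the terrestrial
        water storage anomaly after removing the other stores; dividing by
        specific yield turns it into an equivalent hydraulic-head change at
        the grid-cell centre.
        """
        gws_cm = tws_cm - soil_moisture_cm - snow_cm
        return gws_cm / 100.0 / specific_yield

    # e.g. a -3.2 cm TWS anomaly, -0.8 cm of it from soil moisture, 0.1 cm from snow
    print(grace_head_anomaly(-3.2, -0.8, 0.1))   # -> about -0.21 m of head
    ```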

  15. Large Scale Biologically Realistic Models of Cortical Microcircuit Dynamics with Application to Novel Statistical Classifiers (Pilot Investigation)

    DTIC Science & Technology

    2007-11-02

    simulations using parallel distributed "Beowulf" clustering. Milestones included improved single-processor efficiency of 24-fold. On multiprocessor...single CPU, or roughly 6 hours on the proposed 30-CPU Beowulf system. Substantial progress was made toward a C++ implementation for subsequent research.

  16. The Interaction of Trade-Wind Clouds with the Large-Scale Flow in Observations and Models

    NASA Astrophysics Data System (ADS)

    Nuijens, L.; Medeiros, B.; Sandu, I.; Ahlgrimm, M.; Vogel, R.

    2015-12-01

    Most of the (sub)tropical oceans within the Hadley circulation experience either moderate subsidence or weak ascent. In these regions shallow trade-wind clouds prevail, whose vertical and spatial distribution have emerged as key factors determining the sensitivity of our climate in global climate models. A major unknown is how large the effect of these clouds should be. For instance, how sensitive is the radiation budget to variations in the distribution of trade-wind cloudiness in nature? How variable is trade-wind cloudiness in the first place? And do we understand the role of the large-scale flow in that variability? In this talk we present how space-borne remote sensing and reanalysis products combined with ground-based remote sensing and high-resolution modeling at a representative location start to answer these questions and help validate climate models. We show that across regimes or seasons with moderate subsidence and weak ascent the cloud radiative effect and low-level cloudiness vary remarkably little. A negative feedback mechanism of convection on cloudiness near the lifting condensation level is used to explain this insensitivity. The main difference across regimes is a moderate change in cloudiness in the upper cloud layer, whereby the presence of a trade-wind inversion and strong winds appear to be prerequisites for larger cloudiness. However, most variance in cloudiness at that level takes place on shorter time scales, with an important role for the deepening of individual clouds and local patterns in vertical motion induced by convection itself, which can significantly alter the trade-wind layer structure. Trade-wind cloudiness in climate models in turn is overly sensitive to changes in the large-scale flow, because relationships that separate cloudiness across regimes in long-term climatologies, which have inspired parameterizations, also act on shorter time scales. We discuss how these findings relate to recent explanations for the spread in modeled

  17. Large-Scale Multiphase Flow Modeling of Hydrocarbon Migration and Fluid Sequestration in Faulted Cenozoic Sedimentary Basins, Southern California

    NASA Astrophysics Data System (ADS)

    Jung, B.; Garven, G.; Boles, J. R.

    2011-12-01

    Major fault systems play a first-order role in controlling fluid migration in the Earth's crust, and also in the genesis/preservation of hydrocarbon reservoirs in young sedimentary basins undergoing deformation; understanding the geohydrology of faults is therefore essential for the successful exploration of energy resources. For actively deforming systems like the Santa Barbara Basin and Los Angeles Basin, we have found it useful to develop computational geohydrologic models to study the various coupled and nonlinear processes affecting multiphase fluid migration, including relative permeability, anisotropy, heterogeneity, capillarity, pore pressure, and phase saturation that affect hydrocarbon mobility within fault systems, and to explore the hydrogeologic conditions that enable the natural sequestration of prolific hydrocarbon reservoirs in these young basins. Subsurface geology, reservoir data (fluid pressure-temperature-chemistry), structural reconstructions, and seismic profiles provide important constraints for model geometry and parameter testing, and provide critical insight on how large-scale faults and aquifer networks influence the distribution and the hydrodynamics of liquid- and gas-phase hydrocarbon migration. For example, pore pressure changes at a methane seepage site on the seafloor have been carefully analyzed to estimate large-scale fault permeability, which helps to constrain basin-scale natural gas migration models for the Santa Barbara Basin. We have developed our own 2-D multiphase finite element/finite IMPES numerical model, and successfully modeled hydrocarbon gas/liquid movement for intensely faulted and heterogeneous basin profiles of the Los Angeles Basin. Our simulations suggest that hydrocarbon reservoirs that are today aligned with the Newport-Inglewood Fault Zone were formed by massive hydrocarbon flows from deeply buried source beds in the central synclinal region during post-Miocene time. Fault permeability, capillarity

  18. Wind tunnel investigation of a large-scale upper surface blown-flap model having four engines

    NASA Technical Reports Server (NTRS)

    Aoyagi, K.; Falarski, M. D.; Koenig, D. G.

    1975-01-01

    Investigations were conducted in the Ames 40- by 80-Foot Wind Tunnel to determine the aerodynamic characteristics of a large-scale subsonic jet transport model with an upper surface blown flap system. The model had a 25 deg swept wing of aspect ratio 7.28 and four turbofan engines. The lift of the flap system was augmented by turning the turbofan exhaust over the Coanda surface. Results were obtained for several flap deflections with several wing leading-edge configurations at jet momentum coefficients from 0 to 4.0. Three-component longitudinal data are presented with four engines operating. In addition, longitudinal and lateral data are presented with an engine out. The maximum lift and stall angle of the four-engine model were lower than those obtained with a two-engine model that was previously investigated; the addition of the outboard nacelles had an adverse effect on these values. Efforts to improve these values were successful. A maximum lift of 8.8 at an angle of attack of 27 deg was obtained with a jet thrust coefficient of 2 for the landing flap configuration.

  19. Sensitivity study of a large-scale air pollution model by using high-performance computations and Monte Carlo algorithms

    NASA Astrophysics Data System (ADS)

    Ostromsky, Tz.; Dimov, I.; Georgieva, R.; Marinov, P.; Zlatev, Z.

    2013-10-01

    In this paper we present some new results of our work on sensitivity analysis of a large-scale air pollution model, more specifically the Danish Eulerian Model (DEM). The main purpose of this study is to analyse the sensitivity of ozone concentrations with respect to the rates of some chemical reactions. The current sensitivity study considers the rates of six important chemical reactions and is done for the areas of several European cities with different geographical locations, climate, industrialization and population density. One of the most widely used variance-based techniques for sensitivity analysis, Sobol estimates and their modifications, has been used in this study. A vast number of numerical experiments with a version of the Danish Eulerian Model specially adapted for the purpose (SA-DEM) were carried out to compute global Sobol sensitivity measures. SA-DEM was implemented and run on two powerful cluster supercomputers: IBM Blue Gene/P, the most powerful parallel supercomputer in Bulgaria, and IBM MareNostrum III, the most powerful parallel supercomputer in Spain. The refined (480 × 480) mesh version of the model was used in the experiments on MareNostrum III, which is a challenging computational problem even on such a powerful machine. Some optimizations of the code with respect to parallel efficiency and memory use were performed. Tables with performance results of a number of numerical experiments on IBM Blue Gene/P and on IBM MareNostrum III are presented and analysed.
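
    To make the variance-based machinery concrete, the first-order Sobol index of each input can be estimated with a pick-freeze Monte Carlo scheme such as the Saltelli (2010) estimator sketched below. The `ozone` surrogate and its six inputs are toy stand-ins for the reaction rates, not the Danish Eulerian Model.

    ```python
    import numpy as np

    def first_order_sobol(model, n_inputs, n_samples=100_000, seed=0):
        """First-order Sobol indices via the Saltelli (2010) pick-freeze estimator.

        `model` maps an (N, n_inputs) array of inputs in [0, 1] to N outputs.
        """
        rng = np.random.default_rng(seed)
        A = rng.uniform(size=(n_samples, n_inputs))
        B = rng.uniform(size=(n_samples, n_inputs))
        yA, yB = model(A), model(B)
        var_y = np.var(np.concatenate([yA, yB]))
        S = np.empty(n_inputs)
        for i in range(n_inputs):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                 # vary only the i-th input
            S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
        return S

    # Toy surrogate for the ozone response to six reaction-rate factors:
    ozone = lambda x: 2.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2:].sum(axis=1)
    print(first_order_sobol(ozone, n_inputs=6))
    ```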

  20. A statistical model for Windstorm Variability over the British Isles based on Large-scale Atmospheric and Oceanic Mechanisms

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Befort, Daniel J.; Wild, Simon B.; Ulbrich, Uwe; Leckebusch, Gregor C.

    2016-04-01

    Time-clustered winter storms are responsible for a majority of the wind-induced losses in Europe. In recent years, different atmospheric and oceanic large-scale mechanisms such as the North Atlantic Oscillation (NAO) or the Meridional Overturning Circulation (MOC) have been shown to drive a significant portion of the windstorm variability over Europe. In this work we systematically investigate the influence of different large-scale natural variability modes: more than 20 indices related to those mechanisms with proven or potential influence on windstorm frequency variability over Europe - mostly SST- or pressure-based - are derived from the ECMWF ERA-20C reanalysis for the last century (1902-2009) and compared to the windstorm variability for the European winter (DJF). Windstorms are defined and tracked as in Leckebusch et al. (2008). The derived indices are then employed to develop a statistical procedure including a stepwise Multiple Linear Regression (MLR) and an Artificial Neural Network (ANN), aiming to hindcast the inter-annual (DJF) regional windstorm frequency variability in a case study for the British Isles. This case study reveals 13 indices with a statistically significant coupling with seasonal windstorm counts. The Scandinavian Pattern (SCA) showed the strongest correlation (0.61), followed by the NAO (0.48) and the Polar/Eurasia Pattern (0.46). The obtained indices (standard-normalised) are selected as predictors for a windstorm variability hindcast model applied to the British Isles. First, a stepwise linear regression is performed to identify which mechanisms can explain windstorm variability best. Finally, the indices retained by the stepwise regression are used to develop a multilayer perceptron-based ANN that hindcasts seasonal windstorm frequency and clustering. Eight indices (SCA, NAO, EA, PDO, W.NAtl.SST, AMO (unsmoothed), EA/WR and Trop.N.Atl SST) are retained by the stepwise regression. Among them, SCA showed the highest linear
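
    A minimal sketch of the two-stage statistical procedure described here, with synthetic data standing in for the ERA-20C indices and storm counts; the gain threshold, network size and other settings are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPRegressor

    def forward_stepwise(X, y, min_gain=0.01, cv=5):
        """Greedy forward selection of predictors by cross-validated R^2 gain."""
        selected, best = [], -np.inf
        while True:
            candidates = [j for j in range(X.shape[1]) if j not in selected]
            if not candidates:
                break
            scores = {j: cross_val_score(LinearRegression(),
                                         X[:, selected + [j]], y, cv=cv).mean()
                      for j in candidates}
            j_best = max(scores, key=scores.get)
            if scores[j_best] - best < min_gain:
                break
            best = scores[j_best]
            selected.append(j_best)
        return selected

    # Placeholder data: 108 winters (1902-2009) x 20 standardised indices;
    # the real predictors would be SCA, NAO, etc. derived from ERA-20C.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(108, 20))
    y = 1.5 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(size=108)   # synthetic storm counts
    keep = forward_stepwise(X, y)
    ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    ann.fit(X[:, keep], y)   # multilayer-perceptron hindcast on the retained indices
    ```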

  1. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

    A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. The resulting impulse responses or 'Green's functions' then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre is shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to the use of spatially correlated errors, and to other data types.
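
    Once the impulse responses are computed, the inverse step reduces to ordinary linear algebra: stack the Green's functions as columns of a kernel and solve a damped least-squares problem for their amplitudes. The shapes, damping weight and random placeholder data below are illustrative.

    ```python
    import numpy as np

    # Hypothetical sizes: m model/data-difference observations, k Green's
    # functions (impulse responses of the GCM to isolated balanced vortices).
    m, k = 5000, 200
    rng = np.random.default_rng(0)
    G = rng.normal(size=(m, k))    # column j: model response sampled like the data
    d = rng.normal(size=m)         # altimetric data minus model-estimated field

    # Tapered (Tikhonov-regularised) least squares: min ||G a - d||^2 + mu ||a||^2
    mu = 0.1
    a = np.linalg.solve(G.T @ G + mu * np.eye(k), G.T @ d)
    correction = G @ a             # linear correction to the model state
    ```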

  2. Large-scale atmospheric processes in the Arctic region reproduced by Sl-AV model and reanalysis data

    NASA Astrophysics Data System (ADS)

    Kulikova, Irina; Kruglova, Ekaterina; Khan, Valentina; Kiktev, Dmitry; Tischenko, Vladimir

    2015-04-01

    The variability of large-scale atmospheric processes in the Arctic region was analyzed on the basis of the NCEP/DOE reanalysis data and seasonal hindcasts from the global semi-Lagrangian model (SL-AV), developed by the Hydrometeorological Centre of Russia in collaboration with the Institute of Numerical Mathematics. Using factor analysis, it was shown that the model reproduces well the first major variability modes, which explain 85-90% of the cumulative variance. Teleconnection indices, as quantitative characteristics of low-frequency variability, are used to identify zonal and meridional flow regimes. Composite maps indicating the spatial distribution of anomalies of the main meteorological variables (500 hPa geopotential height, sea-level atmospheric pressure, temperature at 850 hPa, 2 m air temperature, precipitation, and the zonal and meridional wind components) for the positive and negative phases of each index of atmospheric circulation are created. Average values of the composite maps are accompanied by their statistical significance, assessed using the bootstrap technique. The main characteristics of the field configuration of the above meteorological parameters in the Arctic region, corresponding to the positive and negative phases of the circulation indices, are analyzed and discussed. The ability of the SL-AV model to reproduce these characteristics at monthly and seasonal time scales is discussed as well. The results of this study are aimed at improving the quality of long-range forecasts and increasing the "limit of predictability", and can be useful in practice for developing monthly and seasonal weather forecasts for the Arctic region.
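
    The bootstrap assessment of composite significance can be sketched compactly: resample the anomalies entering a composite, recentre the resampled means under the null hypothesis of zero anomaly, and count exceedances. The sample below is synthetic.

    ```python
    import numpy as np

    def bootstrap_pvalue(anomalies, n_boot=10_000, seed=0):
        """Two-sided bootstrap p-value for a composite-mean anomaly vs. zero."""
        rng = np.random.default_rng(seed)
        obs = anomalies.mean()
        boots = np.array([rng.choice(anomalies, size=anomalies.size).mean()
                          for _ in range(n_boot)])
        # recentre the bootstrap distribution on zero (the null hypothesis)
        return float(np.mean(np.abs(boots - obs) >= abs(obs)))

    # e.g. 500 hPa height anomalies (gpm) at one grid point for positive-phase winters
    sample = np.random.default_rng(2).normal(loc=12.0, scale=30.0, size=40)
    print(bootstrap_pvalue(sample))
    ```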

  3. Large-scale effects of migration and conflict in pre-agricultural groups: Insights from a dynamic model

    PubMed Central

    Bagarello, Fabio

    2017-01-01

    The debate on the causes of conflict in human societies has deep roots. In particular, the extent of conflict in hunter-gatherer groups remains unclear. Some authors suggest that large-scale violence only arose with the spreading of agriculture and the building of complex societies. To shed light on this issue, we developed a model based on operatorial techniques simulating population-resource dynamics within a two-dimensional lattice, with humans and natural resources interacting in each cell of the lattice. The model outcomes under different conditions were compared with recently available demographic data for prehistoric South America. Only under conditions that include migration among cells and conflict was the model able to consistently reproduce the empirical data at a continental scale. We argue that the interplay between resource competition, migration, and conflict drove the population dynamics of South America after the colonization phase and before the introduction of agriculture. The relation between population and resources indeed emerged as a key factor leading to migration and conflict once the carrying capacity of the environment has been reached. PMID:28273114

  4. Large-scale effects of migration and conflict in pre-agricultural groups: Insights from a dynamic model.

    PubMed

    Gargano, Francesco; Tamburino, Lucia; Bagarello, Fabio; Bravo, Giangiacomo

    2017-01-01

    The debate on the causes of conflict in human societies has deep roots. In particular, the extent of conflict in hunter-gatherer groups remains unclear. Some authors suggest that large-scale violence only arose with the spreading of agriculture and the building of complex societies. To shed light on this issue, we developed a model based on operatorial techniques simulating population-resource dynamics within a two-dimensional lattice, with humans and natural resources interacting in each cell of the lattice. The model outcomes under different conditions were compared with recently available demographic data for prehistoric South America. Only under conditions that include migration among cells and conflict was the model able to consistently reproduce the empirical data at a continental scale. We argue that the interplay between resource competition, migration, and conflict drove the population dynamics of South America after the colonization phase and before the introduction of agriculture. The relation between population and resources indeed emerged as a key factor leading to migration and conflict once the carrying capacity of the environment has been reached.

  5. A Limited-Memory BFGS Algorithm Based on a Trust-Region Quadratic Model for Large-Scale Nonlinear Equations

    PubMed Central

    Li, Yong; Yuan, Gonglin; Wei, Zengxin

    2015-01-01

    In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method. PMID:25950725
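
    The overall iteration can be illustrated with a generic trust-region skeleton for F(x) = 0 on the merit function 0.5*||F(x)||^2. For brevity the subproblem is solved crudely at the Cauchy point, where the cited paper instead builds a limited-memory BFGS model; the constants and the test system are illustrative.

    ```python
    import numpy as np

    def trust_region_solve(F, J, x, delta=1.0, delta_max=100.0, tol=1e-8, eta=0.1):
        """Generic trust-region iteration for F(x) = 0 on the merit 0.5*||F||^2."""
        for _ in range(500):
            Fx = F(x)
            if np.linalg.norm(Fx) < tol:
                break
            Jx = J(x)
            g = Jx.T @ Fx                                   # merit-function gradient
            Jg = Jx @ g
            t = min((g @ g) / (Jg @ Jg), delta / np.linalg.norm(g))
            p = -t * g                                      # Cauchy step in the region
            pred = t * (g @ g) - 0.5 * t**2 * (Jg @ Jg)     # predicted merit decrease
            ared = 0.5 * (Fx @ Fx - F(x + p) @ F(x + p))    # actual merit decrease
            rho = ared / pred
            if rho > eta:
                x = x + p                                   # accept the step
            if rho > 0.75:
                delta = min(2.0 * delta, delta_max)         # expand the region
            elif rho < 0.25:
                delta *= 0.5                                # shrink the region
        return x

    # Toy 2-D system: intersection of the unit circle with the line x0 = x1
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
    J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
    print(trust_region_solve(F, J, np.array([2.0, 0.5])))   # -> ~[0.7071, 0.7071]
    ```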

  6. HydroSCAPE: a multi-scale framework for streamflow routing in large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Piccolroaz, S.; Di Lazzaro, M.; Zarlenga, A.; Majone, B.; Bellin, A.; Fiori, A.

    2015-09-01

    We present HydroSCAPE, a large-scale hydrological model with an innovative streamflow routing scheme based on the Width Function Instantaneous Unit Hydrograph (WFIUH) theory, which is designed to facilitate coupling with weather forecasting and climate models. HydroSCAPE preserves the geomorphological dispersion of the river network when dealing with horizontal hydrological fluxes, irrespective of the adopted grid size, which is typically inherited from the overlying weather forecast or climate model. This is achieved through a separate treatment of hillslope processes and routing within the river network, with the latter simulated by suitable transfer functions constructed by applying the WFIUH theory to the desired level of detail. Transfer functions are constructed for each grid cell and for the nodes of the network where water discharge is desired, taking advantage of the detailed morphological information contained in the Digital Elevation Model of the zone of interest. These characteristics render HydroSCAPE well suited for multi-scale applications, ranging from catchment up to continental scale, and for investigating extreme events (e.g. floods) that require an accurate description of routing through the river network. The model combines reliability and robustness with a parsimonious parametrization and computational efficiency, leading to a dramatic reduction of computational effort with respect to fully gridded models at a comparable level of routing accuracy. Additionally, HydroSCAPE is designed with a simple and flexible modular structure, which makes it particularly suitable for massive parallelization, customization according to specific user needs and preferences (e.g. choice of rainfall-runoff model), and continuous development and improvement.
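
    The core of the routing scheme, building a travel-time transfer function from the width function and convolving it with hillslope runoff, can be sketched as follows under the simplifying assumption of a single constant channel celerity; the width function and runoff series are hypothetical.

    ```python
    import numpy as np

    def wfiuh(width_function, dx, celerity, dt):
        """Transfer function from the width function W(x), assuming pure advection.

        W(x) counts channel links at flow distance x from the outlet; with a
        constant celerity c, mass at distance x arrives at time x / c, so the
        travel-time density is W rebinned onto the time axis and normalised.
        """
        x = np.arange(len(width_function)) * dx
        arrival = x / celerity
        h = np.zeros(int(arrival.max() / dt) + 1)
        np.add.at(h, (arrival / dt).astype(int), width_function)
        return h / (h.sum() * dt)                  # unit-volume transfer function

    # Hypothetical width function (links per 1 km distance bin) and runoff series
    W = np.array([0, 2, 5, 9, 12, 10, 6, 3, 1], dtype=float)
    h = wfiuh(W, dx=1000.0, celerity=1.0, dt=3600.0)           # c = 1 m/s, hourly steps
    runoff = np.array([0, 4, 8, 3, 1, 0, 0, 0], dtype=float)   # effective rainfall
    discharge = np.convolve(runoff, h * 3600.0)                # routed hydrograph
    ```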

  7. Study of materials and machines for 3D printed large-scale, flexible electronic structures using fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Hwang, Seyeon

    The 3 dimensional printing (3DP), called to additive manufacturing (AM) or rapid prototyping (RP), is emerged to revolutionize manufacturing and completely transform how products are designed and fabricated. A great deal of research activities have been carried out to apply this new technology to a variety of fields. In spite of many endeavors, much more research is still required to perfect the processes of the 3D printing techniques especially in the area of the large-scale additive manufacturing and flexible printed electronics. The principles of various 3D printing processes are briefly outlined in the Introduction Section. New types of thermoplastic polymer composites aiming to specified functional applications are also introduced in this section. Chapter 2 shows studies about the metal/polymer composite filaments for fused deposition modeling (FDM) process. Various metal particles, copper and iron particles, are added into thermoplastics polymer matrices as the reinforcement filler. The thermo-mechanical properties, such as thermal conductivity, hardness, tensile strength, and fracture mechanism, of composites are tested to figure out the effects of metal fillers on 3D printed composite structures for the large-scale printing process. In Chapter 3, carbon/polymer composite filaments are developed by a simple mechanical blending process with an aim of fabricating the flexible 3D printed electronics as a single structure. Various types of carbon particles consisting of multi-wall carbon nanotube (MWCNT), conductive carbon black (CCB), and graphite are used as the conductive fillers to provide the thermoplastic polyurethane (TPU) with improved electrical conductivity. The mechanical behavior and conduction mechanisms of the developed composite materials are observed in terms of the loading amount of carbon fillers in this section. Finally, the prototype flexible electronics are modeled and manufactured by the FDM process using Carbon/TPU composite filaments and

  8. Experimental results and numerical modeling of a high-performance large-scale cryopump. I. Test particle Monte Carlo simulation

    SciTech Connect

    Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos

    2011-07-15

    For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa·m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields, which holds for higher throughputs, is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.
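
    A toy version of the test-particle approach conveys the idea: trace particles ballistically through a cylinder in the free molecular regime, re-emit them diffusely (cosine law) from the walls, and count captures on a sticking end panel. The geometry and sticking coefficient are illustrative, and this is far simpler than the ProVac3D treatment of the real pump.

    ```python
    import numpy as np

    R, L, STICK = 0.5, 2.0, 0.7          # cylinder radius, length (m), panel sticking
    rng = np.random.default_rng(1)

    def cosine_dir(normal):
        """Diffuse (cosine-law) emission direction about a unit surface normal."""
        t = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        u = np.cross(normal, t); u /= np.linalg.norm(u)
        v = np.cross(normal, u)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        st = np.sqrt(rng.uniform())      # sin(theta) = sqrt(xi) gives the cosine law
        return st * np.cos(phi) * u + st * np.sin(phi) * v + np.sqrt(1.0 - st * st) * normal

    def capture_coefficient(n=20_000):
        captured = escaped = 0
        for _ in range(n):
            r, a = R * np.sqrt(rng.uniform()), rng.uniform(0.0, 2.0 * np.pi)
            p = np.array([r * np.cos(a), r * np.sin(a), 0.0])   # inlet plane z = 0
            d = cosine_dir(np.array([0.0, 0.0, 1.0]))
            while True:
                A = d[0]**2 + d[1]**2                           # wall intersection
                B = p[0] * d[0] + p[1] * d[1]
                C = p[0]**2 + p[1]**2 - R * R
                tw = (-B + np.sqrt(max(B * B - A * C, 0.0))) / A if A > 1e-12 else np.inf
                tz = (L - p[2]) / d[2] if d[2] > 0 else -p[2] / d[2]
                if tz < tw:
                    if d[2] > 0:                                # panel at z = L
                        if rng.uniform() < STICK:
                            captured += 1
                            break
                        p = p + tz * d
                        d = cosine_dir(np.array([0.0, 0.0, -1.0]))
                    else:                                       # back out the inlet
                        escaped += 1
                        break
                else:                                           # diffuse wall bounce
                    p = p + tw * d
                    d = cosine_dir(-np.array([p[0], p[1], 0.0]) / R)
        return captured / (captured + escaped)

    print("capture coefficient ~", capture_coefficient())
    ```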

  9. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    NASA Astrophysics Data System (ADS)

    Defourny, P.

    2013-12-01

    such as the Green Area Index (GAI), fAPAR and fcover, usually retrieved from MODIS, MERIS and SPOT-Vegetation, described the quality of the green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN (Russia) projects improved the standard products and were demonstrated at large scale. The GAI retrieved from MODIS time series using a purity index criterion successfully depicted the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimate over the years. These results showed that GAI assimilation works best at the district or provincial level. In the context of the GEO Ag., the Joint Experiment of Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  10. The Climate Potentials and Side-Effects of Large-Scale terrestrial CO2 Removal - Insights from Quantitative Model Assessments

    NASA Astrophysics Data System (ADS)

    Boysen, L.; Heck, V.; Lucht, W.; Gerten, D.

    2015-12-01

    Terrestrial carbon dioxide removal (tCDR) through dedicated biomass plantations is considered a climate engineering (CE) option if implemented at large scale. While the risks and costs are supposed to be small, the effectiveness depends strongly on the spatial and temporal scales of implementation. Based on simulations with a dynamic global vegetation model (LPJmL), we comprehensively assess the effectiveness, biogeochemical side-effects and tradeoffs from an earth-system-analytic perspective. We analyzed systematic land-use scenarios in which all, 25%, or 10% of natural and/or agricultural areas are converted to tCDR plantations, under the assumption that biomass plantations are established once the 2°C target is crossed in a business-as-usual climate change trajectory. The resulting tCDR potentials in year 2100 include the net accumulated annual biomass harvests and changes in all land carbon pools. We find that only the most spatially excessive, and thus undesirable, scenario would be capable of restoring the 2°C target by 2100 under continuing high emissions (with a cooling of 3.02°C). Large-scale biomass plantations covering areas between 1.1-4.2 Gha would produce a climate reduction potential of 0.8-1.4°C. tCDR plantations at smaller scales do not build up enough biomass over the considered period, and the potentials to achieve global warming reductions are substantially lowered, to no more than 0.5-0.6°C. Finally, we demonstrate that the (non-economic) costs for the Earth system include negative impacts on the water cycle and on ecosystems, which are already under pressure due to both land-use change and climate change. Overall, tCDR may lead to a further transgression of land- and water-related planetary boundaries while not being able to set back the crossing of the planetary boundary for climate change. tCDR could still be considered in the near-future mitigation portfolio if implemented on small scales on wisely chosen areas.

  11. Troposphere-stratosphere response to large-scale North Atlantic Ocean variability in an atmosphere/ocean coupled model

    NASA Astrophysics Data System (ADS)

    Omrani, N.-E.; Bader, Jürgen; Keenlyside, N. S.; Manzini, Elisa

    2016-03-01

    The instrumental records indicate that basin-wide wintertime North Atlantic warm conditions are accompanied by a pattern resembling the negative North Atlantic Oscillation (NAO), and cold conditions by a pattern resembling the positive NAO. This relation is well reproduced in a control simulation by the stratosphere-resolving atmosphere-ocean coupled Max Planck Institute Earth System Model (MPI-ESM). Further analyses of the MPI-ESM simulation show that the large-scale warm North Atlantic conditions are associated with a stratospheric precursory signal that propagates down into the troposphere, preceding the wintertime negative NAO. Additional experiments using only the atmospheric component of MPI-ESM (ECHAM6) indicate that these stratospheric and tropospheric changes are forced by the warm North Atlantic conditions. The basin-wide warming excites a wave-induced stratospheric vortex weakening, stratosphere/troposphere coupling, and a high-latitude tropospheric warming. The induced high-latitude tropospheric warming is associated with a reduction of the growth rate of low-level baroclinic waves over the North Atlantic region, contributing to the negative NAO pattern. For the cold North Atlantic conditions, the strengthening of the westerlies in the coupled model is confined to the troposphere and lower stratosphere. Comparing the coupled and uncoupled models shows that in the cold phase the tropospheric changes seen in the coupled model are not well reproduced by the standalone atmospheric configuration. Our experiments provide further evidence that North Atlantic Ocean variability (NAV) impacts the coupled stratosphere/troposphere system. As NAV has been shown to be predictable on seasonal-to-decadal timescales, these results have important implications for the predictability of the extra-tropical atmospheric circulation on these time scales.

  12. Afterslip and viscoelastic relaxation model inferred from the large-scale post-seismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-06-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades), significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5 yr time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (2010 February 27) over the whole South American continent. With the first 2 yr of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a low-viscosity channel along the deepest part of the plate interface and no additional low-viscosity wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10^18 Pa s; and (ii) a low-viscosity channel along the plate interface extending from depths of 55-135 km with viscosities below 10^18 Pa s.
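
    For reference, the Burgers rheology invoked here places a Maxwell and a Kelvin element in series; its creep compliance under constant stress takes the standard textbook form below (a generic expression, not the study's calibrated parameters), where G_M, eta_M are the steady-state Maxwell modulus and viscosity and G_K, eta_K the transient Kelvin pair; the steady-state viscosities quoted in the abstract correspond to eta_M.

    ```latex
    J(t) \;=\; \frac{1}{G_{\mathrm M}} \;+\; \frac{t}{\eta_{\mathrm M}}
    \;+\; \frac{1}{G_{\mathrm K}}\left(1 - e^{-G_{\mathrm K}\, t/\eta_{\mathrm K}}\right)
    ```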

  13. Local and large-scale atmospheric responses to reduced Arctic sea ice and ocean warming in the WRF model

    NASA Astrophysics Data System (ADS)

    Porter, David F.; Cassano, John J.; Serreze, Mark C.

    2012-06-01

    The Weather Research and Forecasting (WRF) model is used to explore the sensitivity of the large-scale atmospheric energy and moisture budgets to prescribed changes in Arctic sea ice and sea surface temperatures (SSTs). Observed sea ice fractions and SSTs from 1996 and 2007, representing years of high and low sea ice extent, are used as lower boundary conditions. A pan-Arctic domain extending into the North Pacific and Atlantic Oceans is used. ERA-Interim reanalysis data from 1994 to 2008 are employed as initial and lateral forcing data for each high and low sea ice simulation. The addition of a third ensemble, with a mixed SST field between years 1996 and 2007 (using 2007 SSTs above 66°N and 1996 values below), results in a total of three 15-member ensembles. Results of the simulations show both local and remote responses to reduced sea ice. The local polar cap averaged response is largest in October and November, dominated by increased turbulent heat fluxes resulting in vertically deep heating and moistening of the Arctic atmosphere. This warmer and moister atmosphere is associated with an increase in cloud cover, affecting the surface and atmospheric energy budgets. There is an enhancement of the hydrologic cycle, with increased evaporation in areas of sea ice loss paired with increased precipitation. Most of the Arctic climate response results from within-Arctic changes, although some changes in the hydrologic cycle reflect circulation responses to midlatitude SST forcing, highlighting the general sensitivity of the Arctic climate.

  14. Sensitivity and foreground modelling for large-scale cosmic microwave background B-mode polarization satellite missions

    NASA Astrophysics Data System (ADS)

    Remazeilles, M.; Dickinson, C.; Eriksen, H. K. K.; Wehus, I. K.

    2016-05-01

    The measurement of the large-scale B-mode polarization in the cosmic microwave background (CMB) is a fundamental goal of future CMB experiments. However, because of their unprecedented sensitivity, future CMB experiments will be much more sensitive to any imperfect modelling of the Galactic foreground polarization in the reconstruction of the primordial B-mode signal. We compare the sensitivity to B-modes of different concepts of CMB satellite missions (LiteBIRD, COrE, COrE+, PRISM, EPIC, PIXIE) in the presence of Galactic foregrounds. In particular, we quantify the impact on the tensor-to-scalar parameter of incorrect foreground modelling in the component separation process. Using Bayesian fitting and Gibbs sampling, we perform the separation of the CMB and Galactic foreground B-modes. The recovered CMB B-mode power spectrum is used to compute the likelihood distribution of the tensor-to-scalar ratio. We focus the analysis on the very large angular scales that can be probed only by CMB space missions, i.e. the reionization bump, where primordial B-modes dominate over spurious B-modes induced by gravitational lensing. We find that fitting a single modified blackbody component for thermal dust where the 'real' sky consists of two dust components strongly biases the estimation of the tensor-to-scalar ratio by more than 5σ for the most sensitive experiments. Neglecting in the parametric model the curvature of the synchrotron spectral index may bias the estimated tensor-to-scalar ratio by more than 1σ. For sensitive CMB experiments, omitting in the foreground modelling a 1 per cent polarized spinning dust component may induce a non-negligible bias in the estimated tensor-to-scalar ratio.
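
    The bias mechanism, fitting one modified blackbody where the sky contains two, can be reproduced in a few lines. The channels, temperatures, amplitudes and units below are all hypothetical toy values, not the mission configurations studied in the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    H_OVER_K = 0.04799   # h/k in kelvin per gigahertz

    def mbb(nu_ghz, amp, beta, T=19.6):
        """Modified blackbody in arbitrary units: amp * nu^beta * B_nu(T)."""
        x = H_OVER_K * nu_ghz / T
        return amp * nu_ghz**beta * nu_ghz**3 / np.expm1(x)

    nu = np.array([70.0, 100.0, 143.0, 217.0, 353.0])      # GHz channels
    # 'true' sky: two dust components; the fitted model: a single MBB
    sky = mbb(nu, 1.0, 1.6, T=9.0) + mbb(nu, 0.2, 2.8, T=22.0)
    popt, _ = curve_fit(lambda n, a, b: mbb(n, a, b), nu, sky, p0=[1.0, 1.6])
    residual = sky - mbb(nu, *popt)
    print(popt, residual / sky)   # the frequency-dependent residual is what leaks
                                  # into the recovered CMB B-mode signal
    ```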

  15. The eHabitat R library: Large scale modelling of habitat uniqueness for the management and assessment of protected areas

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Martínez-López, Javier; Dubois, Gregoire

    2014-05-01

    There are over 100,000 protected areas in the world that need to be assessed systematically according to their ecological values in order to support decision making and fund allocation processes. Ecological modelling has become an important tool for conservation and biodiversity studies. Moreover, linking remote sensing with ecological modelling can help overcome some typical limitations of ecological studies related to conservation, such as the sampling-effort bias of biodiversity inventories. Habitats offer refuge for species and can be mapped at ecoregion scale by means of remote sensing. Large-scale ecological models are thus needed to make progress on important conservation challenges, and the adoption of an open-source community approach is crucial for their implementation. R is Free and Open Source Software (FOSS) which allows the analysis of large amounts of remote sensing data through multivariate statistics and GIS capabilities, offers interoperability with other models and tools, and can be implemented and used within a web processing service as well as in a local desktop environment. The eHabitat R library, one of the Web Processing Services (WPS) supporting DOPA, the Digital Observatory for Protected Areas (http://dopa.jrc.ec.europa.eu/), computes habitat similarities and proposes a habitat replaceability index (HRI) which can be used for characterizing each protected area worldwide. More precisely, eHabitat computes for each protected area a map of probabilities of finding areas presenting ecological characteristics that are similar to those found in the selected protected area. The library is available online for use and extension by the research and end-user communities. This paper presents the eHabitat library as an example of a successful development and application of FOSS tools for geoscientific tasks, in particular for delivering critical services in relation to the conservation of protected areas. Some methodological aspects, such
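
    The similarity mapping can be sketched with a Mahalanobis-distance approach of the kind eHabitat builds on: characterize the protected area's pixels by their centroid and covariance in indicator space, then score every pixel in the ecoregion by the chi-square upper tail of its squared distance. This is a schematic, not the library's code, and the data are random placeholders.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def habitat_similarity(pa_pixels, region_pixels):
        """Similarity of every region pixel to a protected area's ecological niche.

        Rows are pixels, columns ecological indicators (e.g. NDVI, elevation,
        rainfall). The score is the chi-square upper tail of the squared
        Mahalanobis distance to the protected-area centroid.
        """
        mu = pa_pixels.mean(axis=0)
        inv_cov = np.linalg.pinv(np.cov(pa_pixels, rowvar=False))
        diff = region_pixels - mu
        d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)
        return chi2.sf(d2, df=pa_pixels.shape[1])     # near 1 = very similar pixel

    # Placeholder data: 500 pixels inside the protected area, 10^5 in the ecoregion
    rng = np.random.default_rng(0)
    pa = rng.normal(size=(500, 4))
    region = rng.normal(scale=2.0, size=(100_000, 4))
    similarity_map = habitat_similarity(pa, region)
    ```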

  16. Conceptual Numerical Modeling of Large-Scale Footwall Behavior at the Kiirunavaara Mine, and Implications for Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Svartsjaern, M.; Saiang, D.; Nordlund, E.; Eitzenberger, A.

    2016-03-01

    Over the last 30 years, the Kiirunavaara mine has experienced a slow but progressive fracturing and movement in the footwall rock mass, which is directly related to the sublevel caving (SLC) method utilized by Luossavaara-Kiirunavaara Aktiebolag (LKAB). As part of ongoing work, this paper focuses on describing and explaining a likely evolution path of large-scale fracturing in the Kiirunavaara footwall. The trace of this fracturing was based on a series of damage mapping campaigns carried out over the last 2 years, accompanied by numerical modeling. Data collected from the damage mapping between mine levels 320 and 907 m were used to create a 3D surface representing a conceptual boundary for the extent of the damaged volume. The extent boundary surface was used as the basis for calibrating conceptual numerical models created in UDEC. The mapping data, in combination with the numerical models, indicated a plausible evolution path of the footwall fracturing, which is subsequently described. Between levels 320 and 740 m, the extent of fracturing into the footwall appears to be controlled by natural pre-existing discontinuities, while below 740 m there are indications of a curved shear or step-path failure. The step-path is hypothesized to be activated by rock mass heave into the SLC zone above the current extraction level. Above the 320 m level, the fracturing seems to intersect a subvertical structure that daylights in the old open-pit slope. Identification of these probable damage mechanisms was an important step in determining the requirements for a monitoring system for tracking footwall damage. This paper describes the background work for the design of the system currently being installed.

  17. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    SciTech Connect

    Bonne, François; Bonnay, Patrick; Bradu, Benjamin

    2014-01-29

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to control every pressure precisely in normal operation, or to stabilize and control the cryoplant under high variation of thermal loads (such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors, e.g. the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA)). The paper details how to set up the WCS model to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller was implemented on a Schneider PLC and fully tested, first on CERN's real-time simulator. It was then experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of starts and stops of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  18. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick; Bradu, Benjamin

    2014-01-01

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast rejection of disturbances such as those induced by a turbine or compressor stop, a key aspect in the case of large-scale cryogenic refrigeration. The proposed control scheme can be used to control every pressure precisely in normal operation, or to stabilize and control the cryoplant under high variation of thermal loads (such as the pulsed heat loads expected in the cryogenic cooling systems of future fusion reactors, e.g. the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced fusion experiment (JT-60SA)). The paper details how to set up the WCS model to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller was implemented on a Schneider PLC and fully tested, first on CERN's real-time simulator. It was then experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of starts and stops of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
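
    Both records describe replacing the PID loops by a single Linear Quadratic optimal feedback gain; stripped to its core, that synthesis is a Riccati solve. The two-state surrogate below is purely illustrative, not the CERN warm-compression-station model.

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # Toy 2-state, 1-input surrogate of a compression-station pressure loop
    A = np.array([[-0.5, 0.2],
                  [ 0.0, -1.0]])          # open-loop dynamics x' = A x + B u
    B = np.array([[0.0],
                  [0.8]])
    Q = np.diag([10.0, 1.0])              # weight on pressure deviations
    R = np.array([[0.1]])                 # weight on actuator effort

    P = solve_continuous_are(A, B, Q, R)  # algebraic Riccati equation
    K = np.linalg.solve(R, B.T @ P)       # u = -K x is the LQ optimal feedback
    print(K)
    ```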

  19. Parameterization of large scale snow redistribution models using high-resolution information: tests in an alpine catchment (Invited)

    NASA Astrophysics Data System (ADS)

    MacDonald, M. K.; Pomeroy, J. W.; Pietroniro, A.

    2009-12-01

    Snowcover development in alpine environments is highly variable due to the heterogeneous arrangements of terrain and vegetation cover. The interactions between wind flow and surface aerodynamic characteristics produce complex blowing snow redistribution regimes. The snowcover distribution is also influenced by ablation, which varies with surface energetics over complex terrain. For medium to large scale hydrological and atmospheric calculations it is necessary to estimate blowing snow fluxes over incremental land units no smaller than hydrological response units (HRU) or landscape tiles. Blowing snow process algorithms exist and can be deployed, though a robust method to obtain HRU-scale wind speed forcing does not. In this study, snow redistribution by wind was simulated over HRUs in a mountain tundra catchment in western Canada. The HRUs and their aerodynamic properties were delineated using wind speeds derived from a high-resolution empirical terrain-based wind flow model. The wind flow model, based on Ryan (1977), uses a digital elevation model (DEM), reference wind direction and reference wind speed to calculate wind ratios (the ratio of simulated grid cell wind speed to reference wind speed) at 10 m cell resolution, based on terrain aspect, curvature and slope. A high resolution LiDAR DEM of the catchment was available for this. Three parameters are required by the model: the curvature length scale and the weights that control the influence of curvature and slope on calculated wind ratios. These three parameters were estimated via calibration on approximately 1,000 wind speed measurements from each of three meteorological stations located within the Marmot Creek Research Basin. Snow depths estimated from subtraction of summer from winter LiDAR-derived DEMs were used to analyze the relationships between snow depth, calculated wind ratios and terrain variables such as aspect, curvature, elevation, slope, and vegetation height. Snow depth was most strongly
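
    A compact way to see how such a terrain-based model turns a DEM plus a reference wind into a wind-ratio field is the Liston and Sturm (1998)-style weighting sketched below; the exact Ryan (1977) formulation and the calibrated weights in this study may differ, and the DEM, weights and sign conventions here are illustrative.

    ```python
    import numpy as np

    def wind_ratio(dem, dx, wind_dir_deg, gamma_s=0.6, gamma_c=0.4, curv_len=300.0):
        """Terrain-based wind ratios from a DEM and a reference wind direction."""
        theta = np.deg2rad(wind_dir_deg)
        dz_dy, dz_dx = np.gradient(dem, dx)
        # slope component along the wind vector (sign convention illustrative)
        slope_wind = dz_dx * np.sin(theta) + dz_dy * np.cos(theta)
        # curvature from elevation differences at the curvature length scale
        n = max(int(curv_len / dx), 1)
        z = np.pad(dem, n, mode='edge')
        curv = (4.0 * z[n:-n, n:-n] - z[:-2*n, n:-n] - z[2*n:, n:-n]
                - z[n:-n, :-2*n] - z[n:-n, 2*n:]) / (4.0 * curv_len)
        # scale both terms to [-0.5, 0.5] and combine into a ratio around 1
        s = 0.5 * slope_wind / np.abs(slope_wind).max()
        c = 0.5 * curv / np.abs(curv).max()
        return 1.0 + gamma_s * s + gamma_c * c

    yy, xx = np.mgrid[0:100, 0:100]
    dem = 500.0 * np.exp(-((xx - 50.0)**2 + (yy - 50.0)**2) / 800.0)  # toy hill, 10 m cells
    ratios = wind_ratio(dem, dx=10.0, wind_dir_deg=270.0)
    ```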

  20. Self-sustaining non-repetitive activity in a large scale neuronal-level model of the hippocampal circuit

    PubMed Central

    Scorcioni, Ruggero; Hamilton, David J.; Ascoli, Giorgio A.

    2008-01-01

    fail to reproduce the full behavioral complexity of the large-scale model. Thus network size, cell class diversity, and connectivity details may all be critical to generate self-sustained non-repetitive activity patterns. PMID:18595658

  1. Self-sustaining non-repetitive activity in a large scale neuronal-level model of the hippocampal circuit.

    PubMed

    Scorcioni, Ruggero; Hamilton, David J; Ascoli, Giorgio A

    2008-10-01

    to reproduce the full behavioral complexity of the large-scale model. Thus network size, cell class diversity, and connectivity details may all be critical to generate self-sustained non-repetitive activity patterns.

  2. A model of large-scale instabilities in the Jovian troposphere. I - Linear model. II - Quasi-linear model

    NASA Astrophysics Data System (ADS)

    Orsolini, Y.; Leovy, C. B.

    1993-12-01

    A quasi-geostrophic midlatitude beta-plane linear model is here used to study whether the decay with height and meridional circulations of near-steady jets in the tropospheric circulation of Jupiter arise as a means of stabilizing a deep zonal flow that extends into the upper troposphere. The model results obtained are analogous to the stabilizing effect of meridional shear on baroclinic instabilities. In the second part of this work, a quasi-linear model is used to investigate how an initially barotropically unstable flow develops a quasi-steady shear zone in the lower scale heights of the model domain, due to the action of the eddy fluxes.

  3. Ice Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy; Potapczuk, Mark; Lee, Sam; Malone, Adam; Paul, Ben; Woodard, Brian

    2016-01-01

    The design and certification of modern transport airplanes for flight in icing conditions increasingly relies on three-dimensional numerical simulation tools for ice accretion prediction. There is currently no publicly available, high-quality ice-accretion database upon which to evaluate the performance of icing simulation tools for large-scale swept wings that are representative of modern commercial transport airplanes. The purpose of this presentation is to report the results of a series of icing wind tunnel test campaigns whose aim was to provide an ice-accretion database for large-scale, swept wings.

  4. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    PubMed

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a PACS. This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect the patient clinical data. However, these techniques are not systematically applied to the picture archiving and communication system (PACS) in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that implements the controls of the ISO/IEC 27002:2013 standard and the security and privacy criteria from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities.

  5. The Philosophical Aspects of IRT Equating: Modeling Drift to Evaluate Cohort Growth in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Taherbhai, Husein; Seo, Daeryong

    2013-01-01

    Calibration and equating is the quintessential necessity for most large-scale educational assessments. However, there are instances when no consideration is given to the equating process in terms of context and substantive realization, and the methods used in its execution. In the view of the authors, equating is not merely an exhibit of the…

  6. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    SciTech Connect

    Baldick, Ross; Webber, Michael; King, Carey; Garrison, Jared; Cohen, Stuart; Lee, Duehee

    2012-12-21

    This study's objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interest is in the factors that will affect both the implementation of high levels of wind power penetration (>20% of generation) and the installation of large-scale storage.

  7. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    NASA Technical Reports Server (NTRS)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is inappropriate for them. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = 0.5⟨u'_i u'_i⟩, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets ⟨·⟩ denote the grid-scale horizontal averaging operator of the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes
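
    In display form, the scale separation and the MKE definition quoted in this abstract read as follows (a reconstruction of the abstract's inline formula; the full prognostic equation for Ẽ carries the additional flux, pressure-correlation and dissipation terms the text enumerates):

    ```latex
    \phi \;=\; \widetilde{\phi} \;+\; \phi' \;+\; \phi'' ,
    \qquad
    \widetilde{E} \;=\; \tfrac{1}{2}\,\langle u_i'\, u_i' \rangle
    ```

    where φ is any atmospheric variable split into large-scale, mesoscale and turbulent parts, ⟨·⟩ is the grid-scale horizontal average of the large-scale model, and u'_i are the Cartesian mesoscale velocity components.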

  8. Validation of a simple model to predict the performance of methane oxidation systems, using field data from a large scale biocover test field.

    PubMed

    Geck, Christoph; Scharff, Heijo; Pfeiffer, Eva-Maria; Gebert, Julia

    2016-10-01

    On a large-scale test field (1060 m²), methane emissions were monitored over a period of 30 months. During this period, the test field was loaded at rates between 14 and 46 g CH₄ m⁻² d⁻¹. The total area was subdivided into 60 monitoring grid fields of 17.7 m² each, which were individually surveyed for methane emissions and methane oxidation efficiency. The latter was calculated both from the direct methane mass balance and from the shift of the carbon dioxide/methane ratio between the base of the methane oxidation layer and the emitted gas. The base flux to each grid field was back-calculated from the data on methane oxidation efficiency and emission. Resolution to grid-field scale allowed the analysis of the spatial heterogeneity of all considered fluxes. Higher emissions were measured in the upslope area of the test field. This was attributed to the capillary barrier integrated into the test field, resulting in a higher diffusivity and gas permeability in the upslope area. Predictions of the methane oxidation potential were made with the simple model Methane Oxidation Tool (MOT), using soil temperature, air-filled porosity and water tension as input parameters. It was found that the test field could oxidize 84% of the injected methane. The MOT predictions appeared realistic, albeit the upper range of the predicted oxidation potentials could not be tested because the load to the field was too low. Spatial and temporal emission patterns were found, indicating heterogeneity of fluxes and efficiencies in the test field. No constant share of direct emissions was found, as proposed by the MOT, albeit the mean share of emissions throughout the monitoring period was in the range of the expected emissions.
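
    The two efficiency estimates named here can be written down directly. The gas-ratio variant below assumes all oxidised CH₄ reappears as CO₂ (no dissolution or biomass uptake), a simplification of the study's carbon-balance approach; the example numbers are chosen only to match the 84% figure.

    ```python
    def efficiency_mass_balance(ch4_in, ch4_out):
        """Oxidation efficiency from the direct methane mass balance."""
        return 1.0 - ch4_out / ch4_in

    def efficiency_gas_ratio(r_base, r_emitted):
        """Oxidation efficiency from the shift of the CO2/CH4 ratio.

        If a fraction f of the CH4 flux is oxidised to CO2, the ratio changes
        from r_base = CO2/CH4 to r_emitted = (r_base + f) / (1 - f), hence
        f = (r_emitted - r_base) / (1 + r_emitted).
        """
        return (r_emitted - r_base) / (1.0 + r_emitted)

    # Grid field loaded at 30 g CH4/(m2 d) and emitting 4.8 g CH4/(m2 d):
    print(efficiency_mass_balance(30.0, 4.8))     # -> 0.84
    # Gas arriving at CO2/CH4 = 1.0 and leaving enriched in CO2 at 11.5:
    print(efficiency_gas_ratio(1.0, 11.5))        # -> 0.84
    ```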

  9. Experimental validation of computational models for large-scale nonlinear ultrasound simulations in heterogeneous, absorbing fluid media

    NASA Astrophysics Data System (ADS)

    Martin, Elly; Treeby, Bradley E.

    2015-10-01

    To increase the effectiveness of high-intensity focused ultrasound (HIFU) treatments, prediction of ultrasound propagation in biological tissues is essential, particularly where bones are present in the field. This requires complex full-wave computational models that account for nonlinearity, absorption, and heterogeneity. These models must be properly validated, but there is a lack of analytical solutions that apply under these conditions. Experimental validation of the models is therefore essential. However, accurate measurement of HIFU fields is not trivial. Our aim is to establish rigorous methods for obtaining reference data sets with which to validate tissue-realistic simulations of ultrasound propagation. Here, we present preliminary measurements which form an initial validation of simulations performed using the k-Wave MATLAB toolbox. Acoustic pressure was measured on a plane in the field of a focused ultrasound transducer in free-field conditions, to be used as a Dirichlet boundary condition for simulations. Rectangular and wedge-shaped olive oil scatterers were placed in the field, and further pressure measurements were made in the far field for comparison with simulations. Good qualitative agreement was observed between the measured and simulated nonlinear pressure fields.

  10. Which spatial discretization for distributed hydrological models? Proposition of a methodology and illustration for medium to large-scale catchments

    NASA Astrophysics Data System (ADS)

    Dehotin, J.; Braud, I.

    2008-05-01

    discretization). The first part of the paper presents a review of catchment discretization in hydrological models, from which we derive the principles of our general methodology. The second part of the paper focuses on the derivation of hydro-landscape units for medium- to large-scale catchments. For this sub-catchment discretization, we propose the use of principles borrowed from landscape classification. These principles are independent of the catchment size. They allow suitable features required in the catchment description to be retained in order to fulfil a specific modelling objective. The method leads to unstructured and homogeneous areas within the sub-catchments, which can be used to derive modelling meshes. It avoids the map smoothing that suppresses the smallest units, whose role can be very important in hydrology, and provides a confidence map (the distance map) for the classification. The confidence map can be used for further uncertainty analysis of modelling results. The final discretization remains consistent with the resolution of the input data and that of the source maps. The last part of the paper illustrates the method using available data for the upper Saône catchment in France. The value of the method for an efficient representation of landscape heterogeneity is illustrated by a comparison with more traditional mapping approaches. Examples of possible models, which can be built on this spatial discretization, are finally given as perspectives for the work.

  11. Water consumption and allocation strategies along the river oases of Tarim River based on large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Disse, Markus; Yu, Ruide

    2016-04-01

    With a main stem of 1,321 km, located in an arid area of northwest China, the Tarim River is China's longest inland river. The Tarim basin, on the northern edge of the Taklamakan desert, is an extremely arid region. In this region, agricultural water consumption and allocation management are crucial to address the conflicts among irrigation water users from upstream to downstream. In 2011, the German Federal Ministry of Education and Research (BMBF) established the Sino-German SuMaRiO project for the sustainable management of river oases along the Tarim River. The project aims to contribute to a sustainable land management which explicitly takes into account ecosystem functions and ecosystem services. SuMaRiO will identify realizable management strategies, considering social, economic and ecological criteria. This will have positive effects for nearly 10 million inhabitants of different ethnic groups. The modelling of water consumption and allocation strategies is a core block in the SuMaRiO cluster. A large-scale hydrological model (MIKE HYDRO Basin) was established for the purpose of sustainable agricultural water management in the main stem of the Tarim River. MIKE HYDRO Basin is an integrated, multipurpose, map-based decision support tool for river basin analysis, planning and management. It provides detailed simulation results concerning water resources and land use in the catchment areas of the river. Calibration data and future predictions based on a large amount of data were acquired. The results of model calibration indicated a close correlation between simulated and observed values. Scenarios with changes in irrigation strategies and land-use distributions were investigated. Irrigation scenarios revealed that the available irrigation water has significant and varying effects on the yields of different crops. Irrigation water saving could reach up to 40% in the water-saving irrigation scenario. Land use scenarios illustrated that an increase of farmland area in the

  12. A Parallel Sliding Region Algorithm to Make Agent-Based Modeling Possible for a Large-Scale Simulation: Modeling Hepatitis C Epidemics in Canada.

    PubMed

    Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla

    2016-11-01

    Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost of large-scale simulations. To improve the computational efficiency of large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on real demographic data from Saskatchewan, Canada. The first simulation used the SRA, processing each postal-code subregion in turn. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time savings with comparable results in a province-wide simulation. Using the same method, the SRA can be generalized to country-wide simulations. Thus, this parallel algorithm makes it possible to use ABMs for large-scale simulation with limited computational resources.
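    The abstract does not spell out the SRA itself; the sketch below only illustrates the general pattern it describes -- agents grouped by postal-code subregion, with each subregion's local interactions processed independently and therefore in parallel. All names and the toy transmission rule are illustrative, not the paper's implementation:

```python
import random
from collections import defaultdict
from multiprocessing import Pool

def step_region(args):
    """Advance one subregion by one time step: only interactions among
    agents sharing a postal-code region are simulated, so regions can be
    processed independently of one another."""
    region, agents = args
    n_inf = sum(a["infected"] for a in agents)
    for a in agents:
        if not a["infected"]:
            # toy within-region transmission rule, standing in for the
            # full agent-interaction logic of a real ABM
            a["infected"] = random.random() < 0.001 * n_inf
    return agents

def sliding_region_step(population, processes=4):
    """One time step processed region by region (the parallelizable part);
    any cross-region mixing would need a separate pass."""
    regions = defaultdict(list)
    for a in population:
        regions[a["region"]].append(a)
    with Pool(processes) as pool:
        chunks = pool.map(step_region, list(regions.items()))
    return [a for chunk in chunks for a in chunk]
```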

  13. Modeling of Giant Impact into a Differentiated Asteroid and Implications for the Large-Scale Troughs on Vesta

    NASA Astrophysics Data System (ADS)

    Buczkowski, D.; Iyer, K.; Raymond, C. A.; Wyrick, D. Y.; Kahn, E.; Nathues, A.; Gaskell, R. W.; Roatsch, T.; Preusker, F.; Russell, C. T.

    2012-12-01

    Linear structures have been identified in a concentric orientation around impact craters on several asteroids (e.g. Ida [1], Eros [2], Lutetia [3]), and their formation has been tied to the impact event [1,2]. Images of Vesta taken by the Dawn spacecraft reveal large-scale linear structural features in a similar orientation around the Rheasilvia and Veneneia basins [4]. However, the dimensions and shape of these features suggest that they are graben similar to those observed on terrestrial planets, not fractures or grooves such as are found on Ida, Eros and Lutetia [5]. Although the fault plane analysis [4] implies that impact may have been responsible for triggering the formation of these features, as on the smaller asteroids, we suggest the significantly different morphology implies that some other component must also have been involved in their development. It has been established that Vesta is a differentiated body with a core [6]. This differentiated interior could be a factor in the troughs' resemblance to planetary faults rather than asteroidal fractures, as it is predicted that the stresses resulting from impact would be amplified and reoriented compared to a similar impact on an undifferentiated body. Preliminary CTH hydrocode [7] models of a 530 km sphere composed of a basalt analog with a 220 km iron core [6] show that the impact of a 50 km object results in different patterns of tensile stress and pressure compared to an undifferentiated sphere of the same material and diameter. While these first-order models have yet to fully mimic the observations we've made on Vesta, they do demonstrate that the density contrast in Vesta's differentiated interior affects the stresses resulting from the Rheasilvia and Veneneia impacts. It is this impedance mismatch that we suggest is responsible for the development of Vesta's planet-like troughs. Thus, future identification of planetary-style tectonic features on small solar system bodies may then imply a differentiated

  14. Large-Scale Exploratory Analysis, Cleaning, and Modeling for Event Detection in Real-World Power Systems Data

    DTIC Science & Technology

    2013-11-01

    experiences using a non-traditional Hadoop distributed computing setup on top of an HPC computing cluster. ... In application areas involving large-scale distributed sensor networks, prior to deploying algorithms over high... executed on a Hadoop cluster. This provides the flexible rapid development and iterative analysis capabilities required for our analysis as well as the

  15. Shoreline Response to Climate Change and Human Manipulations in a Model of Large-Scale Coastal Change

    NASA Astrophysics Data System (ADS)

    Slott, J. M.; Murray, A. B.; Valvo, L.; Ashton, A.

    2005-12-01

    ) show that large-scale coastal features (e.g. capes and cuspate spits) may self-organize as smaller coastal features grow and merge by interacting over large distances through wave shadowing. Our current work extends this model by including the effects of beach nourishment and seawalls. These simulations start with a cape-like shoreline, resembling the Carolina coastline, which we generated using the one-line model driven by the statistical average of 20 years of hindcast wave data measured off Cape Lookout, NC (WIS Station 509). In our experiments, we explored the effects of shoreline stabilization under four different wave climate scenarios: (a) unchanged, (b) increased winter storms, (c) increased tropical storms, and (d) decreased storminess. For each of these four scenarios, we ran three simulations: a control run with no shoreline stabilization, a run with a 10 km beach nourishment project, and a run with a 10 km seawall. We identified the effects of shoreline stabilization by comparing each of the latter two simulations to the control run. In each experiment, shoreline stabilization had a large effect on shoreline position--on the order of a few kilometers--within tens of kilometers of the stabilization area. We also saw sizable effects on adjacent capes nearly 100 kilometers away. Analysis of the simulations indicates that these distant impacts occurred because shoreline stabilization altered the extent to which the stabilized cape shadowed other parts of the coast. We thank the National Science Foundation and the Duke Center on Global Change for supporting our work.

  16. The importance of mineral physics and a free surface in large-scale numerical models of mantle convection and plate tectonics

    NASA Astrophysics Data System (ADS)

    Tackley, Paul; Nakagawa, Takashi; Crameri, Fabio; Connolly, James; Deschamps, Frédéric; Kaus, Boris; Gerya, Taras

    2010-05-01

    Here, our recent progress in understanding the large-scale dynamics of the mantle convection - plate tectonics system is summarised, with particular focus on the influence of realistic mineral physics and a free surface. High pressure and temperature experiments and calculations of the properties of mantle minerals show that many different mineral phases exist as a function of pressure, temperature and composition [e.g. Irifune and Ringwood, EPSL 1987], and that these have a first-order influence on density (which has a large effect on the dynamics) and elastic moduli (which influence seismic velocity). Numerical models of global thermo-chemical mantle convection have typically used a simple approximation such as the extended Boussinesq approximation to treat these complex variations in material properties. Instead, we calculate composition-dependent mineral assemblages and their physical properties using the code Perple_X, which minimizes free energy for a given combination of oxides as a function of temperature and pressure [Connolly, EPSL 2005], and use this in a numerical model of thermo-chemical mantle convection in a three-dimensional spherical shell, to calculate three-dimensionally-varying physical properties. In this presentation we compare the results obtained with this new, self-consistently-calculated treatment with results using our old, approximate treatment, focusing particularly on thermo-chemical-phase structures and seismic anomalies in the transition zone and core-mantle boundary (CMB) region [Nakagawa and Tackley, G3 2009], which are strongly influenced by the coupling between compositional variations and phase transitions. The numerical models treat the evolution of a planet over billions of years, including self-consistent plate tectonics arising from plastic yielding, melting-induced differentiation, and a parameterised model of core evolution based on heat extracted by mantle convection. Self-consistent plate tectonics-like behaviour may be

  17. Expanded Large-Scale Forcing Properties Derived from the Multiscale Data Assimilation System and Its Application to Single-Column Models

    NASA Astrophysics Data System (ADS)

    Feng, S.; Li, Z.; Liu, Y.; Lin, W.; Toto, T.; Vogelmann, A. M.; Fridlind, A. M.

    2013-12-01

    We present an approach to derive large-scale forcing that is used to drive single-column models (SCMs) and cloud-resolving models (CRMs)/large-eddy simulations (LES) for evaluating fast physics parameterizations in climate models. The forcing fields are derived by use of a newly developed multi-scale data assimilation (MS-DA) system. This DA system is developed on top of the NCEP Gridpoint Statistical Interpolation (GSI) System and is implemented in the Weather Research and Forecasting (WRF) model at a cloud-resolving resolution of 2 km. This approach has been applied to the generation of large-scale forcing for a set of Intensive Operation Periods (IOPs) over the Atmospheric Radiation Measurement (ARM) Climate Research Facility's Southern Great Plains (SGP) site. The dense ARM in-situ observations and high-resolution satellite data effectively constrain the WRF model. The evaluation shows that the derived forcing displays accuracies comparable to the existing continuous forcing product and, overall, a better dynamic consistency with observed cloud and precipitation. One important application of this approach is to derive large-scale hydrometeor forcing and multiscale forcing, which is not provided in the existing continuous forcing product. It is shown that the hydrometeor forcing has an appreciable impact on cloud and precipitation fields in the single-column model simulations. The large-scale forcing exhibits a significant dependency on the domain size, which represents the SCM grid size. Subgrid processes often contribute a significant component to the large-scale forcing, and this contribution is sensitive to the grid size and cloud regime.

  18. Biophysically realistic minimal model of dopamine neuron

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel

    2008-03-01

    We proposed and studied a new biophysically relevant computational model of dopaminergic neurons. Midbrain dopamine neurons are involved in motivation and the control of movement, and have been implicated in various pathologies such as Parkinson's disease, schizophrenia, and drug abuse. The model we developed is a single-compartment Hodgkin-Huxley (HH)-type parallel conductance membrane model. The model captures the essential mechanisms underlying the slow oscillatory potentials and plateau potential oscillations. The main currents involved are: 1) a voltage-dependent fast calcium current, 2) a small-conductance potassium current that is modulated by the cytosolic concentration of calcium, and 3) a slow voltage-activated potassium current. We developed multidimensional bifurcation diagrams and extracted the effective domains of sustained oscillations. The model includes a calcium balance, owing to the fundamental importance of calcium influx as demonstrated by simultaneous electrophysiology and calcium imaging. Although there is significant evidence for a partially electrogenic calcium pump, previous models have not accounted for pump electrogenicity. We investigated the effect of the electrogenic calcium pump on the bifurcation diagram of the model and compared our findings against the experimental results.
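    A minimal sketch in the spirit of the model described -- one compartment with a fast Ca2+ current, a Ca2+-gated SK-type potassium current, a leak, a calcium balance, and a (partially electrogenic) pump current -- can be written with a standard ODE solver. All parameter values below are illustrative, not those of the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's values)
C, gCa, gKCa, gL = 1.0, 0.25, 1.0, 0.05   # capacitance and conductances
ECa, EK, EL = 100.0, -90.0, -50.0         # reversal potentials [mV]
kCa, Kd, tauCa = 0.01, 0.2, 500.0         # Ca influx factor, SK half-activation, Ca decay [ms]
rho = 0.1                                 # strength of the electrogenic pump current

def rhs(t, y):
    v, ca = y
    minf = 1.0 / (1.0 + np.exp(-(v + 50.0) / 5.0))    # fast Ca activation
    ica = gCa * minf * (v - ECa)                       # fast Ca2+ current
    ikca = gKCa * ca**4 / (ca**4 + Kd**4) * (v - EK)   # SK-type K(Ca) current
    il = gL * (v - EL)                                 # leak
    ipump = rho * ca / tauCa                           # electrogenic pump term
    dv = -(ica + ikca + il + ipump) / C
    dca = -kCa * ica - ca / tauCa                      # Ca balance with extrusion
    return [dv, dca]

sol = solve_ivp(rhs, (0.0, 5000.0), [-60.0, 0.05], max_step=1.0)
# sol.y[0] exhibits slow oscillatory potentials for suitable parameter choices
```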

  19. Realistic Modeling of Wireless Network Environments

    DTIC Science & Technology

    2015-03-01

    the FPGA. The memory can be used for a number of tasks, including capturing samples, storing samples for replay, and storing parameters for channel...models. We also increased the size of the memory available on the DSP card so longer traces can be stored and replayed. • We replaced the...the channel state. We also added large memories to the SCM and DSP card, allowing us to accurately model interference from various types of devices

  20. Improved wave transformation in a large-scale coastline model to explore the role of wave climate change in driving coastal erosion

    NASA Astrophysics Data System (ADS)

    Whitley, A. E.; McNamara, D.

    2013-12-01

    According to the 2010 U.S. Census, over one third of the United States population lives near the eastern coastline. With such a significant investment in human agency along the coast, it is critical to understand how large-scale coastal morphology will evolve in the coming decades in response to rising sea level and changing storm climates. Previous work has shown that potential changes in wave climate can give rise to a larger coastal erosion signal than that expected due to sea level rise alone. This work utilized a large-scale coastal change model that simulated deep-water wave transformation assuming bathymetric contours were parallel to the shoreline, and the model did not incorporate wave crest convergence or divergence. Linear stability analyses of large-scale coastline evolution that do not assume parallel bathymetric contours, and that account for wave convergence and divergence, were found to be sensitive to the offshore extent of shore-parallel contours. This study incorporates wave ray tracing into an existing coastline change model to explore finite-amplitude development and evolution of large-scale coastal morphology. We will present results that explore the relative contributions of wave climate change and sea level rise to coastal erosion.
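    For context, the one-line modelling framework referred to above reduces to a conservation law for shoreline position driven by gradients in alongshore sediment flux. A minimal sketch, assuming a CERC-type flux, a spatially uniform wave angle, and a simple explicit update, and omitting the wave transformation and ray tracing that are the actual subject of the study:

```python
import numpy as np

def one_line_step(eta, dx, dt, wave_angle, K=100.0, D=10.0):
    """One explicit step of a one-line shoreline model.
    eta        : shoreline position [m] on an alongshore grid
    wave_angle : wave approach angle [rad], here uniform alongshore
    K, D       : CERC-type flux coefficient [m^3/s] and closure depth [m]
    Alongshore flux Q ~ K * sin(2 * (wave angle relative to the local
    shoreline)); shoreline change follows sediment conservation,
    deta/dt = -(1/D) dQ/dx."""
    shore_angle = np.arctan(np.gradient(eta, dx))   # local orientation
    Q = K * np.sin(2.0 * (wave_angle - shore_angle))  # CERC-type flux
    return eta - dt / D * np.gradient(Q, dx)
```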

  1. Improved wave transformation in a large-scale coastline model to explore the role of wave climate change in driving coastal erosion

    NASA Astrophysics Data System (ADS)

    Whitley, A. E.; McNamara, D.; Murray, A.

    2012-12-01

    According to the 2010 U.S. Census, over one third of the United States population lives near the eastern coastline. With such a significant investment in human agency along the coast, it is critical to understand how large-scale coastal morphology will evolve in the coming decades in response to rising sea level and changing storm climates. Previous work has shown that potential changes in wave climate can give rise to a larger coastal erosion signal than that expected due to sea level rise alone. This work utilized a large-scale coastal change model that simulated deep-water wave transformation assuming bathymetric contours were parallel to the shoreline, and the model did not incorporate wave crest convergence or divergence. Linear stability analyses of large-scale coastline evolution that do not assume parallel bathymetric contours, and that account for wave convergence and divergence, were found to be sensitive to the offshore extent of shore-parallel contours. This study incorporates wave ray tracing into an existing coastline change model to explore finite-amplitude development and evolution of large-scale coastal morphology. We will present results that explore the relative contributions of wave climate change and sea level rise to coastal erosion.

  2. Constructing a large-scale 3D Geologic Model for Analysis of the Non-Proliferation Experiment

    SciTech Connect

    Wagoner, J; Myers, S

    2008-04-09

    -wave studies. For regional seismic simulations we convert this realistic geologic model into elastic parameters. Upper crustal units are treated as seismically homogeneous while the lower crust and upper mantle are parameterized by a smoothly varying velocity profile. In order to mitigate spurious reflections, the lower crust and upper mantle are treated as velocity gradients as a function of depth.

  3. Realistic Real World Contexts: Model Eliciting Activities

    ERIC Educational Resources Information Center

    Doruk, Bekir Kürsat

    2016-01-01

    Researchers have proposed a variety of methods to make a connection between real life and mathematics so that it can be learned in a practical way and enable people to utilise mathematics in their daily lives. Model-eliciting activities (MEAs) were developed to fulfil this need and are very capable of serving this purpose. The reason MEAs are so…

  4. Realistic modeling of complex oxide materials

    NASA Astrophysics Data System (ADS)

    Solovyev, I. V.

    2011-01-01

    Since electronic and magnetic properties of many transition-metal oxides can be efficiently controlled by external factors such as the temperature, pressure, electric or magnetic field, they are regarded as promising materials for various applications. From the viewpoint of the electronic structure, these phenomena are frequently related to the behavior of a small group of states located near the Fermi level. The basic idea of this project is to construct a model for the low-energy states, derive all the parameters rigorously on the basis of density functional theory (DFT), and study this model using modern techniques. After a brief review of the method, the abilities of this approach are illustrated with a number of examples, including multiferroic manganites and spin-orbital-lattice coupled phenomena in RVO3 (where R is a trivalent element).

  5. Towards Realistic Modeling of Massive Star Clusters

    NASA Astrophysics Data System (ADS)

    Gnedin, O.; Li, H.

    2016-06-01

    Cosmological simulations of galaxy formation are rapidly advancing towards smaller scales. Current models can now resolve giant molecular clouds in galaxies and predict basic properties of star clusters forming within them. I will describe new theoretical simulations of the formation of the Milky Way throughout cosmic time, with the adaptive mesh refinement code ART. However, many challenges - physical and numerical - still remain. I will discuss how observations of massive star clusters and star forming regions can help us overcome some of them. Video of the talk is available at https://goo.gl/ZoZOfX

  6. Recent developments for realistic solar models

    NASA Astrophysics Data System (ADS)

    Serenelli, Aldo M.

    2014-05-01

    The "solar abundance problem" has triggered a renewed interest in revising the concept of SSM from different perspectives: 1) constituent microphysics: equation of state, nuclear rates, radiative opacities; 2) constituent macrophysics: the physical processes impact the evolution of the Sun and its present-day structure, e.g. dynamical processes induced by rotation, presence of magnetic fields; 3) challenge the hypothesis that the young Sun was chemically homogeneous: the possible interaction of the young Sun with its protoplanetary disk. Here, I briefly review and then present a (personal) view on recent advances and developments on solar modeling, part of them carried out as attempts to solve the solar abundance problem.

  7. [Realistic surgical training. The Aachen model].

    PubMed

    Krones, C J; Binnebösel, M; Stumpf, M; Schumpelick, V

    2010-01-01

    The Aachen model is a practical model for teaching and advanced surgical training, closely geared to academic learning and training. During medical education, optional student courses with structured curricula offer practical points of contact with the surgical department at all times. Besides improving manual skills, the aims are to foster interest and identify talent. This guided structure is intensified as trainees progress into advanced education. Alongside the formal guidelines of the curriculum, the education logbook, and progression interviews, particular emphasis is placed on quality, transparency, and reliability. An evaluation of both the reforms and the surgical trainers is still to be made. In addition, conveying a positive image of the profession is essential.

  8. Large scale and cloud scale dynamics and microphysics in the formation and evolution of a TTL cirrus : a case modelling study

    NASA Astrophysics Data System (ADS)

    Podglajen, Aurélien; Plougonven, Riwal; Hertzog, Albert; Legras, Bernard

    2015-04-01

    Cirrus clouds in the tropical tropopause layer (TTL) control dehydration of air masses entering the stratosphere and strongly contribute to the local radiative heating. In this study, we aim at understanding, through a real-case simulation, the dynamics controlling the formation and life cycle of a cirrus cloud event in the TTL. We also aim at quantifying the chemical and radiative impacts of the clouds. To do this, we use the Weather Research and Forecasting (WRF) model to simulate a large-scale TTL cirrus event that occurred on 27-29 January 2009 over the eastern Pacific, and which has been extensively described through satellite observations (Taylor et al., 2011). Comparison of simulated and observed high clouds shows fair agreement and validates the reference simulation regarding cloud extent, location and lifetime. The simulation and Lagrangian trajectories within it are then used to characterize the evolution of the cloud: displacement, Lagrangian lifetime and links with the dynamics. The efficiency of dehydration by such clouds is also examined. Sensitivity tests were performed to evaluate the importance of different microphysics schemes and of initial and boundary conditions for accurately simulating the cirrus. As expected, both were found to have strong impacts. In particular, there were substantial differences between simulations using different initial and boundary conditions from atmospheric analyses (NCEP CFSR and ECMWF). This illustrates the essential role of accurate water vapour and dynamics for realistic cirrus modelling, on top of the need for appropriate microphysics. Last, we examined the effects of cloud radiative heating. Long-wave radiative heating in cirrus clouds has been invoked to induce a cloud-scale circulation that would lengthen the cloud lifetime and increase the size of its dehydration area (Dinh et al. 2010). To diagnose this, we carried out simulations using different radiative schemes, including or suppressing the

  9. Evolution of Large-scale Solar Magnetic Fields in a Flux-Transport Model Including a Multi-cell Meridional Flow

    NASA Astrophysics Data System (ADS)

    McDonald, E.; Dikpati, M.

    2003-12-01

    Advances in helioseismology over the past decade have enabled us to detect subsurface meridional flows in the Sun. Some recent helioseismological analysis (Giles 1999, Haber et al. 2002) has indicated a submerged, reverse flow cell occurring at high latitudes of the Sun's northern hemisphere between 1998 and 2001. Meridional circulation plays an important role in the operation of a class of large-scale solar dynamos, the so-called "flux-transport" dynamos. In such dynamo models, the poleward drift of the large-scale solar magnetic fields and the polar reversal process are explained by the advective-diffusive transport of magnetic flux by a meridional circulation with a poleward surface flow component. Any temporal and spatial variations in the meridional flow pattern are expected to greatly influence the evolution of large-scale magnetic fields in a flux-transport dynamo. The aim of this paper is to explore the implications of a steady, multi-cell flow on the advection of weak, large-scale magnetic flux. We present a simple, two-cell flux transport model operating in an r-theta cross-section of the northern hemisphere. Azimuthal symmetry is assumed. Performing numerical flux-transport simulations with a reverse flow cell at various latitudes, we demonstrate the effect of this cell on the evolutionary pattern of the large-scale diffuse fields. We also show how a flux concentration may occur at the latitude where the radial flows of the two cells are sinking downward. This work is supported by NASA grants W-19752, W-10107, and W-10175. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
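    A much-simplified, 1-D illustration of the flux-transport idea -- the radial field on a latitude strip, advected by a prescribed meridional flow (optionally with a reversed high-latitude cell) and diffused -- is sketched below. The flow profile and parameters are illustrative and far simpler than the r-theta model of the abstract:

```python
import numpy as np

R_sun, eta = 6.96e8, 6.0e11                   # m, m^2/s (typical values)
N = 90
lam = np.linspace(0.01, np.pi / 2 - 0.01, N)  # latitude [rad], avoiding the pole
dlam = lam[1] - lam[0]
cosl = np.cos(lam)

def meridional_flow(lam, v0=15.0, lam_rev=1.2):
    """Poleward surface flow of amplitude v0 [m/s], reversed poleward of
    lam_rev [rad] to mimic a high-latitude counter-cell (illustrative)."""
    v = v0 * np.sin(2.0 * lam)
    v[lam > lam_rev] *= -1.0
    return v

def step(B, v, dt):
    """One explicit step of dB/dt = -(1/(R cos)) d(v B cos)/dlam
    + (eta/R^2)(1/cos) d(cos dB/dlam)/dlam, with centered differences;
    dt must satisfy the explicit diffusion stability limit."""
    adv = -np.gradient(v * B * cosl, dlam) / (R_sun * cosl)
    dif = eta / R_sun**2 * np.gradient(cosl * np.gradient(B, dlam), dlam) / cosl
    return B + dt * (adv + dif)
```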

  10. Large Scale Modeling of Floodplain Inundation; Calibration and Forecast Based on Lisflood-FP Model and Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Najafi, M.; Durand, M. T.; Neal, J. C.; Moritz, M.

    2013-12-01

    The Logone floodplain, located in the Chad basin in north Cameroon, Africa, experiences seasonal flooding as the result of Logone River overbank flow. The seasonal and inter-annual variability of flood depths and extents has significant impacts on the socio-economics as well as the eco-hydrology of the basin. Recent human interventions in the hydraulic characteristics of the basin have caused serious concerns about the future behavior of the system. Construction of the Maga dam and hundreds of fish canals, along with the impact of climate change, are potential factors that alter the floodplain characteristics. To understand the hydraulics of the basin and predict future changes in flood inundation, we calibrate the LISFLOOD-FP numerical model using historical records of river discharge as well as satellite observations of flood depths and extents. LISFLOOD-FP is a distributed 2D model that efficiently simulates large basins. Because of data limitations, the Shuttle Radar Topography Mission (SRTM) is used to extract the DEM data. The LISFLOOD-FP subgrid 2D model is applied, which allows river channel widths smaller than the DEM resolution to be defined. River widths are extracted from a Landsat 4 image obtained in February 1999. Model parameters, including the roughness coefficient and river bathymetry, are then calibrated. The results demonstrate the potential application of the proposed model to simulate future changes in the floodplain. The sub-grid model has been shown to improve hydraulic connectivity within the inundated area. DEM errors are major sources of uncertainty in model prediction.

  11. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
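    The combinatorial contrast mentioned above is easiest to see against the simplest baseline: a greedy nearest-neighbour tracker with constant-velocity Kalman filters, sketched below with illustrative noise settings. A multi-hypothesis tracker replaces the single hard assignment per scan with a growing tree of assignment hypotheses, which is what explodes as closely spaced targets multiply:

```python
import numpy as np

F = np.array([[1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q, R = np.eye(4) * 0.01, np.eye(2) * 0.5   # illustrative process/measurement noise

class Track:
    def __init__(self, z):
        self.x = np.array([z[0], z[1], 0.0, 0.0])  # position + velocity state
        self.P = np.eye(4)

    def predict(self):   # constant-velocity prediction
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z):  # standard Kalman measurement update
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

def associate(tracks, detections, gate=3.0):
    """Greedy nearest-neighbour assignment: each track takes the closest
    unused detection within the gate; returns unmatched detections."""
    used = set()
    for t in tracks:
        t.predict()
        d = [np.linalg.norm(z - H @ t.x) for z in detections]
        j = int(np.argmin(d)) if d else -1
        if j >= 0 and j not in used and d[j] < gate:
            t.update(detections[j])
            used.add(j)
    return [z for k, z in enumerate(detections) if k not in used]
```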

  12. Research project on CO2 geological storage and groundwater resources: Large-scale hydrological evaluation and modeling of impact on groundwater systems

    SciTech Connect

    Birkholzer, Jens; Zhou, Quanlin; Rutqvist, Jonny; Jordan, Preston; Zhang, K.; Tsang, Chin-Fu

    2007-10-24

    If carbon dioxide capture and storage (CCS) technologies are implemented on a large scale, the amounts of CO2 injected and sequestered underground could be extremely large. The stored CO2 then replaces large volumes of native brine, which can cause considerable pressure perturbation and brine migration in the deep saline formations. If hydraulically communicating, either directly via updipping formations or through interlayer pathways such as faults or imperfect seals, these perturbations may impact shallow groundwater or even surface water resources used for domestic or commercial water supply. Possible environmental concerns include changes in pressure and water table, changes in discharge and recharge zones, as well as changes in water quality. In compartmentalized formations, issues related to large-scale pressure buildup and brine displacement may also cause storage capacity problems, because significant pressure buildup can be produced. To address these issues, a three-year research project was initiated in October 2006, the first part of which is summarized in this annual report.

  13. Large-scale modeling of the Antarctic ice sheet using a massively-parallelized finite element model (CIELO).

    NASA Astrophysics Data System (ADS)

    Larour, E.; Rignot, E.; Morlighem, M.; Seroussi, H.

    2008-12-01

    We implemented a fully three-dimensional, thermo-mechanical, finite element model of the Antarctic Ice Sheet, with a spatial resolution varying from 10 km inland to 2 km along the coast, on a massively parallelized architecture named CIELO, developed at JPL. The model is based on a "Pattyn"-type formulation for ice sheets and a "MacAyeal shelf-stream" formulation for ice shelves. The two formulations are coupled using penalty methods, which enables a considerable reduction of the computational load. Using a simple law of basal friction (based on locally computed balanced velocities), the model is able to replicate the location and order-of-magnitude speed of major ice streams and ice shelves. We then coupled the model with observations of ice motion from SAR interferometry to refine the pattern of basal friction using an inverse control method (MacAyeal 1993). The result provides excellent agreement with observations and a first complete mapping of the pattern of basal friction along the coast of Antarctica at a resolution compatible with the size of its glaciers and ice streams.

  14. Multiphase flow modelling using non-orthogonal collocated finite volumes: application to fluid catalytic cracking and large scale geophysical flows.

    NASA Astrophysics Data System (ADS)

    Martin, R. M.; Nicolas, A. N.

    2003-04-01

    A modeling approach for gas-solid flow, taking into account different physical phenomena such as gas turbulence and inter-particle interactions, is presented. Moment transport equations are derived for the second-order fluctuating velocity tensor, which allow practical closures based on single-phase turbulence modeling on one hand and the kinetic theory of granular media on the other hand. The model is applied to fluid catalytic cracking processes and explosive volcanism. In industry as well as in the geophysical community, multiphase flows are modeled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions for each phase. Pressures and velocities are generally staggered by half a mesh step from each other, following the staggered-grid approach. This ensures stability and prevents oscillations in pressure. It can treat almost all Reynolds-number ranges, for all speeds and viscosities. The disadvantages appear when we want to treat more complex geometries or if a generalized curvilinear formulation of the conservation equations is considered. Too many interpolations have to be done and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and the Rhie and Chow (1983) interpolation of the collocated variables, essentially the velocities at the interfaces. The Rhie and Chow interpolation of the velocities at the finite volume interfaces suppresses pressure oscillations and checkerboard effects and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd- and 3rd-order shock-capturing schemes of MUSCL/TVD or Van Leer type, and the orthogonal stress components are treated implicitly while cross viscous/diffusion terms are treated explicitly. Pentadiagonal linear systems are solved in each geometrical direction (the so

  15. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    NASA Astrophysics Data System (ADS)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study from March 3rd, 2000, at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite-observed clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analyses/reanalyses, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observations specified, we compare SCM results and observations and find that models have large biases in cloud properties that cannot be fully explained by the uncertainty from the large-scale forcing

  16. Large-Scale Computations Leading to a First-Principles Approach to Nuclear Structure

    SciTech Connect

    Ormand, W E; Navratil, P

    2003-08-18

    We report on large-scale applications of the ab initio, no-core shell model with the primary goal of achieving an accurate description of nuclear structure from the fundamental inter-nucleon interactions. In particular, we show that realistic two-nucleon interactions are inadequate to describe the low-lying structure of ¹⁰B, and that realistic three-nucleon interactions are essential.

  17. Large-Scale Mass Spectrometry Imaging Investigation of Consequences of Cortical Spreading Depression in a Transgenic Mouse Model of Migraine

    NASA Astrophysics Data System (ADS)

    Carreira, Ricardo J.; Shyti, Reinald; Balluff, Benjamin; Abdelmoula, Walid M.; van Heiningen, Sandra H.; van Zeijl, Rene J.; Dijkstra, Jouke; Ferrari, Michel D.; Tolner, Else A.; McDonnell, Liam A.; van den Maagdenberg, Arn M. J. M.

    2015-06-01

    Cortical spreading depression (CSD) is the electrophysiological correlate of migraine aura. Transgenic mice carrying the R192Q missense mutation in the Cacna1a gene, which in patients causes familial hemiplegic migraine type 1 (FHM1), exhibit increased propensity to CSD. Herein, mass spectrometry imaging (MSI) was applied for the first time to an animal cohort of transgenic and wild-type mice to study the biomolecular changes following CSD in the brain. Ninety-six coronal brain sections from 32 mice were analyzed by MALDI-MSI. All MSI datasets were registered to the Allen Brain Atlas reference atlas of the mouse brain so that the molecular signatures of distinct brain regions could be compared. A number of metabolites and peptides showed substantial changes in the brain associated with CSD. Among those, different mass spectral features showed significant (t-test, P < 0.05) changes in the cortex (146 and 377 Da) and in the thalamus (1820 and 1834 Da) of the CSD-affected hemisphere of FHM1 R192Q mice. Our findings reveal CSD- and genotype-specific molecular changes in the brain of FHM1 transgenic mice that may further our understanding of the role of CSD in migraine pathophysiology. The results also demonstrate the utility of aligning MSI datasets to a common reference atlas for large-scale MSI investigations.

  18. Data Analysis, Pre-Ignition Assessment, and Post-Ignition Modeling of the Large-Scale Annular Cookoff Tests

    SciTech Connect

    G. Terrones; F.J. Souto; R.F. Shea; M.W. Burkett; E.S. Idar

    2005-09-30

    In order to understand the implications that cookoff of plastic-bonded explosive 9501 (PBX 9501) could have on safety assessments, we analyzed the available data from the large-scale annular cookoff (LSAC) assembly series of experiments. In addition, we examined recent data regarding hypotheses about pre-ignition that may be relevant to post-ignition behavior. Based on the post-ignition data from Shot 6, which had the most complete set of data, we developed an approximate equation of state (EOS) for the gaseous products of deflagration. Implementation of this EOS into the multimaterial hydrodynamics computer program PAGOSA yielded good agreement with the inner-liner collapse sequence for Shot 6 and with other data, such as velocity interferometer system for any reflector (VISAR) and resistance-wire measurements. A metric to establish the degree of symmetry, based on the concept of time of arrival at pin locations, was used to compare numerical simulations with experimental data. Several simulations were performed to elucidate the mode of ignition in the LSAC and to determine the possible compression levels that the metal assembly could have been subjected to during post-ignition.

  19. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence

    NASA Astrophysics Data System (ADS)

    Dogan, Eda; Hearst, R. Jason; Ganapathisubramani, Bharathram

    2017-03-01

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales.

  20. A SPATIALLY REALISTIC MODEL FOR INFORMING FOREST MANAGEMENT DECISIONS

    EPA Science Inventory

    Spatially realistic population models (SRPMs) address a fundamental problem commonly confronted by wildlife managers - predicting the effects of landscape-scale habitat management on an animal population. SRPMs typically consist of three submodels: (1) a habitat submodel...

  1. Modelling disease outbreaks in realistic urban social networks

    NASA Astrophysics Data System (ADS)

    Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan

    2004-05-01

    Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
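    The two projections described above are easy to reproduce on synthetic data. A minimal networkx sketch (synthetic person/location visits standing in for the census- and mobility-derived data) builds the bipartite graph, projects it, and ranks candidate sensor sites by location degree:

```python
import random
import networkx as nx
from networkx.algorithms import bipartite

# Synthetic person-location visits standing in for the urban traffic data
random.seed(1)
people = [f"p{i}" for i in range(500)]
places = [f"loc{i}" for i in range(80)]
G = nx.Graph()
G.add_nodes_from(people, bipartite=0)
G.add_nodes_from(places, bipartite=1)
for p in people:
    # skewed location popularity mimics the hub-dominated locations graph
    for loc in random.choices(places, weights=range(1, len(places) + 1), k=4):
        G.add_edge(p, loc)

contacts = bipartite.projected_graph(G, people)   # person-person contact network
locations = bipartite.projected_graph(G, places)  # location-location graph

# Candidate sensor sites: the hubs of the locations network
hubs = sorted(places, key=G.degree, reverse=True)[:5]
print("top locations by visits:", hubs)
```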

  2. Assessing the weighted multi-objective adaptive surrogate model optimization to derive large-scale reservoir operating rules with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Wang, Xu; Liu, Pan; Lei, Xiaohui; Li, Zejun; Gong, Wei; Duan, Qingyun; Wang, Hao

    2017-01-01

    The optimization of large-scale reservoir systems is time-consuming due to their intrinsic characteristics of non-commensurable objectives and high dimensionality. One way to solve the problem is to employ an efficient multi-objective optimization algorithm in the derivation of large-scale reservoir operating rules. In this study, the Weighted Multi-Objective Adaptive Surrogate Model Optimization (WMO-ASMO) algorithm is used. It consists of three steps: (1) simplifying the large-scale reservoir operating rules by the aggregation-decomposition model, (2) identifying the most sensitive parameters through multivariate adaptive regression splines (MARS) for dimensional reduction, and (3) reducing computational cost and speeding up the search process by WMO-ASMO, embedded with the weighted non-dominated sorting genetic algorithm II (WNSGAII). An intercomparison of the non-dominated sorting genetic algorithm (NSGAII), WNSGAII and WMO-ASMO is conducted for the large-scale reservoir system of the Xijiang river basin in China. Results indicate that (1) WNSGAII surpasses NSGAII in the median annual power generation, increased by 1.03% (from 523.29 to 528.67 billion kW h), and in the median ecological index, improved by 3.87% (from 1.879 to 1.809), with 500 simulations, owing to the weighted crowding distance; and (2) WMO-ASMO outperforms NSGAII and WNSGAII in terms of better solutions (annual power generation of 530.032 billion kW h and ecological index of 1.675) with 1000 simulations, and computational time reduced by 25% (from 10 h to 8 h) with 500 simulations. Therefore, the proposed method is shown to be more efficient and provides a better Pareto frontier.
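    The abstract attributes part of WNSGAII's gain to a weighted crowding distance. One plausible reading -- the standard NSGA-II crowding distance with each objective's contribution scaled by a weight -- is sketched below; the exact weighting used in the paper may differ:

```python
import numpy as np

def weighted_crowding_distance(F, w):
    """Crowding distance over a set of non-dominated objective vectors.
    F : (n_points x n_objectives) array; w : per-objective weights.
    Each objective's normalized neighbour gap is scaled by w[m] -- one
    plausible form of a 'weighted crowding distance'."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        span = F[order[-1], j] - F[order[0], j] or 1.0
        d[order[0]] = d[order[-1]] = np.inf   # always keep boundary solutions
        for k in range(1, n - 1):
            d[order[k]] += w[j] * (F[order[k + 1], j] - F[order[k - 1], j]) / span
    return d
```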

  3. FERMI RULES OUT THE INVERSE COMPTON/CMB MODEL FOR THE LARGE-SCALE JET X-RAY EMISSION OF 3C 273

    SciTech Connect

    Meyer, Eileen T.; Georganopoulos, Markos

    2014-01-10

    The X-ray emission mechanism in large-scale jets of powerful radio quasars has been a source of debate in recent years, with two competing interpretations: either the X-rays are of synchrotron origin, arising from a different electron energy distribution than that producing the radio to optical synchrotron component, or they are due to inverse Compton scattering of cosmic microwave background photons (IC/CMB) by relativistic electrons in a powerful relativistic jet with bulk Lorentz factor Γ ∼ 10-20. These two models imply radically different conditions in the large-scale jet in terms of jet speed, kinetic power, and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the large-scale environment. A large part of the X-ray origin debate has centered on the well-studied source 3C 273. Here we present new observations from Fermi which put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that violates, at a confidence greater than 99.9%, the flux expected from the IC/CMB X-ray model found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source when combined with previous work. Further, this upper limit from Fermi constrains the Doppler beaming factor to δ < 9, assuming equipartition fields, and possibly as low as δ < 5, assuming no major deceleration of the jet from knots A through D1.

  4. Comparison between realistic and spherical approaches in EEG forward modelling.

    PubMed

    Meneghini, Fabio; Vatta, Federica; Esposito, Fabrizio; Mininel, Stefano; Di Salle, Francesco

    2010-06-01

    In electroencephalography (EEG), a valid conductor model of the head (forward model) is necessary for predicting measurable scalp voltages from intra-cranial current distributions. All inverse models capable of inferring the spatial distribution of the neural sources generating measurable electrical and magnetic signals outside the brain are normally formulated in terms of a pre-estimated forward model, which implies considering one (or more) current dipole(s) inside the head and computing the electrical potentials generated at the electrode sites on the scalp surface. Therefore, the accuracy of the forward model strongly affects the reliability of the source reconstruction process, independently of the specific inverse model. So far, it is unclear which brain regions are more sensitive to the choice of model geometry, from both quantitative and qualitative points of view. In this paper, we compare the finite-difference-method-based realistic model with the four-layer sensor-fitted spherical model using simulated cortical sources in the MNI152 standard space. We focused on the investigation of the spatial variation of the lead fields produced by simulated cortical sources, which were placed on the reconstructed mesh of the neocortex, with surface electrodes in a 62-channel configuration. This comparison is carried out by evaluating a point spread function over the whole brain cortex, with the aim of finding the lead-field mismatch between realistic and spherical geometries. Realistic geometry turns out to be a relevant factor of improvement, particularly important for sources placed in the temporal or occipital cortex. In these situations, using a realistic head model will allow a better spatial discrimination of neural sources when compared to the spherical model.
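    Lead-field mismatch of the kind examined here is commonly quantified per source by the relative difference measure (RDM) and magnitude ratio (MAG); the sketch below computes both for two lead-field matrices (the paper's point-spread-function metric may differ in detail):

```python
import numpy as np

def leadfield_mismatch(L_real, L_sph):
    """Per-source comparison of two forward models.
    L_real, L_sph : (n_electrodes x n_sources) lead-field matrices.
    RDM = 0 means identical topography; MAG = 1 means identical magnitude.
    Both are standard forward-model comparison metrics."""
    a = L_real / np.linalg.norm(L_real, axis=0)   # unit-norm topographies
    b = L_sph / np.linalg.norm(L_sph, axis=0)
    rdm = np.linalg.norm(a - b, axis=0)
    mag = np.linalg.norm(L_sph, axis=0) / np.linalg.norm(L_real, axis=0)
    return rdm, mag
```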

  5. Large-scale structure and microwave-background anisotropies in cosmological models and stellar photometry techniques with wide-field/planetary camera of the Hubble space telescope

    SciTech Connect

    Holtzman, J.A.

    1989-01-01

    This dissertation consists of two separate parts. The first presents calculations of microwave background anisotropies at various angular scales and of expected large-scale bulk velocities and mass correlation functions for a variety of models which include baryons, radiation, cold dark matter (CDM), and massive and massless neutrinos. Free parameters include Ω, H₀, the mass fractions of each component, and the initial conditions; nearly 100 different models are considered. Open and flat models with both adiabatic and isocurvature initial conditions are calculated for models without massive neutrinos. A set of flat models with both massive neutrinos and CDM with adiabatic initial conditions is also considered. Fitting functions for the mass transfer function and small-angle radiation correlation function are provided for all of the models. A discussion of the evolution of the perturbations is presented. Results are compared with some recent observations of large-scale velocities and limits on microwave background anisotropies. CDM and baryon models have difficulty satisfying observational limits, although they are not completely ruled out. Hybrid models with massive neutrinos and CDM satisfy current observational data. The second part of the dissertation is a discussion of stellar photometry techniques with the Wide Field/Planetary Camera (WF/PC) of the Hubble Space Telescope (HST). Detailed simulations are used to determine optimum techniques to use and to assess the expected accuracy of such techniques.

  6. Natural inflation: Particle physics models, power-law spectra for large-scale structure, and constraints from the Cosmic Background Explorer

    NASA Astrophysics Data System (ADS)

    Adams, Fred C.; Bond, J. Richard; Freese, Katherine; Frieman, Joshua A.; Olinto, Angela V.

    1993-01-01

    We discuss the particle physics basis for models of natural inflation with pseudo Nambu-Goldstone bosons and study the consequences for large-scale structure of the non-scale-invariant density fluctuation spectra that arise in natural inflation and other models. A pseudo Nambu-Goldstone boson, with a potential of the form V(φ) = Λ⁴[1 ± cos(φ/f)], can naturally give rise to an epoch of inflation in the early Universe, if f ~ M_Pl and Λ ~ M_GUT. Such mass scales arise in particle physics models with a gauge group that becomes strongly interacting at the grand unified theory scale. We work out a specific particle physics example based on the multiple gaugino condensation scenario in superstring theory. We then study the cosmological evolution of and constraints upon these inflation models numerically and analytically. To obtain sufficient inflation with a probability of order 1 and a high enough post-inflation reheat temperature for baryogenesis, we require f ≳ 0.3 M_Pl. The primordial density fluctuation spectrum generated by quantum fluctuations in φ is a non-scale-invariant power law P(k) ∝ k^(n_s), with n_s ≈ 1 − M_Pl²/(8πf²), leading to more power on large length scales than the n_s = 1 Harrison-Zeldovich spectrum. (For the reader primarily interested in large-scale structure, the discussion of this topic is presented in Sec. IV and is intended to be nearly self-contained.) We pay special attention to the prospects of using the enhanced power to explain the otherwise puzzling large-scale clustering of galaxies and clusters and their flows. We find that the standard cold dark matter (CDM) model with 0 ≲ n_s ≲ 0.6 could in principle explain these data. However, the microwave background anisotropies recently detected by the Cosmic Background Explorer (COBE) imply such low primordial amplitudes for these CDM models (that is, bias factors b₈ ≳ 2 for n_s ≲ 0.6) that galaxy formation would occur too late to be viable and the large-scale galaxy velocities would be too small. In fact, combining the
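    The quoted tilt follows directly from the slow-roll parameters; a short check (a sketch, in the abstract's G = M_Pl⁻² conventions, with ε negligible near the top of the potential):

```latex
% Slow-roll check of the quoted spectral tilt for natural inflation
V(\phi) = \Lambda^4\bigl[1+\cos(\phi/f)\bigr], \qquad
\eta \equiv \frac{M_{\rm Pl}^2}{8\pi}\,\frac{V''}{V}
  \;\xrightarrow{\ \phi \ll f\ }\; -\frac{M_{\rm Pl}^2}{16\pi f^2},
\qquad
n_s \simeq 1 + 2\eta - 6\epsilon \;\approx\; 1 - \frac{M_{\rm Pl}^2}{8\pi f^2}.
```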

  7. Large Scale System Defense

    DTIC Science & Technology

    2008-10-01

    ...gram models, which further raises the bar and makes it more difficult for attackers to build precise packet structures to evade Anagram even if they... ...pealing because of the need to modify source code. Since source-level annotations serve as a vestigial policy, we articulated a way to augment self... ...is ideally suited to the problem of detecting when constraints on a system's behavior and information structures have been violated. The CW model

  8. Coupling a basin erosion and river sediment transport model into a large scale hydrological model: an application in the Amazon basin

    NASA Astrophysics Data System (ADS)

    Buarque, D. C.; Collischonn, W.; Paiva, R. C. D.

    2012-04-01

    This study presents the first application and preliminary results of the large-scale hydrodynamic/hydrological model MGB-IPH with a new module that predicts the spatial distribution of basin erosion and river sediment transport at a daily time step. MGB-IPH is a large-scale, distributed, process-based hydrological model that uses a catchment-based discretization and the Hydrological Response Units (HRU) approach. It uses physically based equations to simulate the hydrological processes, such as the Penman-Monteith model for evapotranspiration, and uses the Muskingum-Cunge approach and a full 1D hydrodynamic model for river routing, including backwater effects and seasonal flooding. The sediment module of the MGB-IPH model is divided into two components: 1) prediction of erosion over the basin and sediment yield to the river network; 2) sediment transport along the river channels. Both MGB-IPH and the sediment module use GIS tools to display relevant maps and to extract parameters from the SRTM DEM (a 15" resolution was adopted). Using the catchment discretization, the sediment module applies the Modified Universal Soil Loss Equation to predict soil loss from each HRU, considering three sediment classes defined according to soil texture: sand, silt and clay. The effects of topography on soil erosion are estimated by a two-dimensional slope-length (LS) factor based on the contributing-area approach and by a local slope steepness (S) factor, both estimated for each DEM pixel using GIS algorithms. The amount of sediment released to the catchment river reach each day is calculated using a linear reservoir. Once the sediments reach the river, they are transported along the river channel using an advection equation for silt and clay and a sediment continuity equation for sand. A sediment balance based on the Yang sediment transport capacity, which allows the amount of erosion and deposition along the rivers to be computed, is performed for sand particles as bed load, whilst no
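
    As a rough sketch of the two components above, the following code (our own illustration, not MGB-IPH source; all parameter values are invented) applies the standard MUSLE event form, Y = 11.8 (Q·q_p)^0.56 · K · LS · C · P, and then one daily step of a linear reservoir that buffers the eroded material before releasing it to the river reach.

```python
import math

def musle_soil_loss(runoff_vol_m3, peak_flow_m3s, K, LS, C, P):
    """Modified Universal Soil Loss Equation (Williams, 1975):
    event sediment yield in metric tons,
    Y = 11.8 * (Q * q_p)**0.56 * K * LS * C * P."""
    return 11.8 * (runoff_vol_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P

def linear_reservoir_step(storage_t, inflow_t, residence_days, dt_days=1.0):
    """One daily step of a linear sediment reservoir: eroded material is
    buffered and released exponentially with the given residence time.
    Returns (new storage, mass released to the reach)."""
    storage = storage_t + inflow_t
    release = storage * (1.0 - math.exp(-dt_days / residence_days))
    return storage - release, release

# Invented event and HRU parameters, purely for illustration:
yield_t = musle_soil_loss(runoff_vol_m3=5.0e4, peak_flow_m3s=12.0,
                          K=0.3, LS=1.2, C=0.1, P=1.0)
storage, release = linear_reservoir_step(0.0, yield_t, residence_days=5.0)
print(f"event soil loss: {yield_t:.0f} t; released to reach today: {release:.0f} t")
```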

  9. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process level": parameterized physics are studied in isolation from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which can aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model intercomparison studies that have been organized by initiatives such as GCSS/GASS. A common thread in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary-layer cloud fields, there certainly remains room for improvement in many areas. All too often, boundary-layer parameterizations are still found to be at the heart of problems in large-scale models, negatively affecting the forecast skill of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  10. Mechanisms controlling primary and new production in a global ecosystem model Part I: The role of the large-scale upper mixed layer variability

    NASA Astrophysics Data System (ADS)

    Popova, E. E.; Coward, A. C.; Nurser, G. A.; de Cuevas, B.; Fasham, M. J. R.; Anderson, T. R.

    2006-07-01

    A global general circulation model coupled to a simple six-compartment ecosystem model is used to study the extent to which global variability in primary and export production can be realistically predicted on the basis of advanced parameterizations of upper mixed layer physics, without recourse to introducing extra complexity in model biology. The "K profile parameterization" (KPP) scheme employed, combined with 6-hourly external forcing, is able to capture short-term periodic and episodic events such as diurnal cycling and storm-induced deepening. The model realistically reproduces various features of global ecosystem dynamics that have been problematic in previous global modelling studies, using a single generic parameter set. The realistic simulation of deep convection in the North Atlantic, and the lack of it in the North Pacific and Southern Oceans, leads to good predictions of chlorophyll and primary production in these contrasting areas. Realistic levels of primary production are predicted in the oligotrophic gyres due to high-frequency external forcing of the upper mixed layer (accompanying paper, Popova et al., 2006) and novel parameterizations of zooplankton excretion. Good agreement is shown between model and observations at various JGOFS time-series sites: BATS, KERFIX, Papa and Station India. One exception is that the high zooplankton grazing rates required to maintain low chlorophyll in high-nutrient low-chlorophyll and oligotrophic systems lessened agreement between model and data in the northern North Atlantic, where mesozooplankton with lower grazing rates may be dominant. The model is therefore not globally robust, in the sense that additional parameterizations were needed to realistically simulate ecosystem dynamics in the North Atlantic. Nevertheless, the work emphasises the need to pay particular attention to the parameterization of mixed layer physics in global ocean ecosystem modelling as a prerequisite to increasing the complexity of ecosystem
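
    The grazing control invoked above can be illustrated with a generic Holling type-III grazing term of the kind used in simple NPZD-style ecosystem models (the abstract does not give the model's actual functional forms, and every parameter value below is invented). The stable standing stock at which phytoplankton growth balances grazing falls as the maximum grazing rate rises, which is how high grazing rates keep chlorophyll low in high-nutrient low-chlorophyll regions.

```python
import math

def grazing_type_iii(P, Z, g_max, k_P):
    """Generic Holling type-III grazing: G = g_max * Z * P^2 / (k_P^2 + P^2)."""
    return g_max * Z * P**2 / (k_P**2 + P**2)

def stable_stock(mu, Z, g_max, k_P):
    """Smaller (stable) root of mu * P = G(P): the phytoplankton standing
    stock at which linear growth is balanced by type-III grazing."""
    disc = (g_max * Z) ** 2 - 4.0 * mu**2 * k_P**2
    if disc < 0.0:
        return None  # grazing too weak to cap growth at any stock level
    return (g_max * Z - math.sqrt(disc)) / (2.0 * mu)

# Invented parameters: growth rate mu (1/d), zooplankton Z and half-saturation
# k_P (mmol N m^-3). Larger g_max pins the standing stock lower.
for g_max in (0.8, 1.2, 2.0):
    P = stable_stock(mu=0.6, Z=1.0, g_max=g_max, k_P=0.5)
    G = grazing_type_iii(P, 1.0, g_max, 0.5)
    print(f"g_max = {g_max:.1f}/d -> stable P = {P:.3f} "
          f"(growth {0.6 * P:.3f} = grazing {G:.3f})")
```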

  11. Development and Validation of a One-Dimensional Co-Electrolysis Model for Use in Large-Scale Process Modeling Analysis

    SciTech Connect

    J. E. O'Brien; M. G. McKellar; G. L. Hawkes; C. M. Stoots

    2007-07-01

    A one-dimensional chemical equilibrium model has been developed for analysis of simultaneous high-temperature electrolysis of steam and carbon dioxide (co-electrolysis) for the direct production of syngas, a mixture of hydrogen and carbon monoxide. The model assumes local chemical equilibrium among the four process-gas species via the shift reaction. For adiabatic or specified-heat-transfer conditions, the electrolyzer model allows for the determination of co-electrolysis outlet temperature, composition (anode and cathode sides), mean Nernst potential, operating voltage and electrolyzer power based on specified inlet gas flow rates, heat loss or gain, current density, and cell area-specific resistance. Alternatively, for isothermal operation, it allows for determination of outlet composition, mean Nernst potential, operating voltage, electrolyzer power, and the isothermal heat requirement for specified inlet gas flow rates, operating temperature, current density and area-specific resistance. This model has been developed for incorporation into a system-analysis code from which the overall performance of large-scale co-electrolysis plants can be evaluated. The one-dimensional co-electrolysis model has been validated by comparison with results obtained from a 3-D computational fluid dynamics model and by comparison with experimental results.
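
    The equilibrium closure described above is easy to sketch. For the shift reaction CO + H2O ⇌ CO2 + H2, the outlet composition follows from the single reaction extent that satisfies the equilibrium constant at the outlet temperature. The code below is a minimal stand-in for the model in the abstract (not the authors' code): it uses Moe's (1962) correlation for K(T), and the inlet flows are invented.

```python
import math

def k_shift(T_kelvin):
    """Water-gas shift equilibrium constant, Moe (1962) correlation:
    K = exp(4577.8 / T - 4.33)."""
    return math.exp(4577.8 / T_kelvin - 4.33)

def shift_equilibrium(n_co, n_h2o, n_co2, n_h2, T_kelvin, tol=1e-12):
    """Bisection on the extent xi of CO + H2O -> CO2 + H2 so that
    (n_CO2+xi)(n_H2+xi) = K * (n_CO-xi)(n_H2O-xi). Mole numbers can be
    used directly because total moles are conserved by the shift reaction."""
    K = k_shift(T_kelvin)

    def residual(xi):
        return (n_co2 + xi) * (n_h2 + xi) - K * (n_co - xi) * (n_h2o - xi)

    lo = -min(n_co2, n_h2) + tol   # bracket keeping every mole number positive
    hi = min(n_co, n_h2o) - tol
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if residual(mid) > 0.0 else (mid, hi)
    xi = 0.5 * (lo + hi)
    return n_co - xi, n_h2o - xi, n_co2 + xi, n_h2 + xi

# Invented cathode-side inlet flows (mol/s) at 1100 K:
co, h2o, co2, h2 = shift_equilibrium(0.10, 0.50, 0.30, 0.10, 1100.0)
print(f"outlet: CO={co:.3f}, H2O={h2o:.3f}, CO2={co2:.3f}, H2={h2:.3f} mol/s")
```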

  12. Realistic modeling of chamber transport for heavy-ion fusion

    SciTech Connect

    Sharp, W.M.; Grote, D.P.; Callahan, D.A.; Tabak, M.; Henestroza, E.; Yu, S.S.; Peterson, P.F.; Welch, D.R.; Rose, D.V.

    2003-05-01

    Transport of intense heavy-ion beams to an inertial-fusion target after final focus is simulated here using a realistic computer model. It is found that passing the beam through a rarefied plasma layer before it enters the fusion chamber can largely neutralize the beam space charge and lead to a usable focal spot for a range of ion species and input conditions.

  13. A realistic model for charged strange quark stars

    NASA Astrophysics Data System (ADS)

    Thirukkanesh, S.; Ragel, F. C.

    2017-01-01

    We report a general approach to solving an Einstein-Maxwell system to describe a static, spherically symmetric, anisotropic strange matter distribution with a linear equation of state in terms of two generating functions. The approach is demonstrated by choosing a Tolman IV type potential for one of the gravitational potentials and a physically reasonable form for the electric field. The generated model satisfies all the major physical requirements of a realistic star, and the effect of electric charge on its physical properties is highlighted.
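
    For orientation, the linear equation of state referred to above is conventionally written as below; the abstract does not quote the coefficients, so the MIT bag-model values are shown only as the familiar strange-matter special case.

```latex
% Linear equation of state for the radial pressure p_r and density rho.
% The MIT bag model is the special case alpha = 1/3, beta = 4B/3,
% where B is the bag constant.
p_r = \alpha\,\rho - \beta,
\qquad
p_r^{\mathrm{MIT}} = \tfrac{1}{3}\left(\rho - 4B\right)
```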

  14. Wind-tunnel investigation of the thrust augmentor performance of a large-scale swept wing model. [in the Ames 40 by 80 foot wind tunnel

    NASA Technical Reports Server (NTRS)

    Koenig, D. G.; Falarski, M. D.

    1979-01-01

    Tests were made in the Ames 40- by 80-foot wind tunnel to determine the forward speed effects on wing-mounted thrust augmentors. The large-