Science.gov

Sample records for realistic large-scale model

  1. Towards a large-scale biologically realistic model of the hippocampus.

    PubMed

    Hendrickson, Phillip J; Yu, Gene J; Robinson, Brian S; Song, Dong; Berger, Theodore W

    2012-01-01

    Real neurobiological systems in the mammalian brain have a complicated and detailed structure, being composed of 1) large numbers of neurons with intricate, branching morphologies--complex morphology brings with it complex passive membrane properties; 2) active membrane properties--nonlinear sodium, potassium, calcium, etc. conductances; 3) non-uniform distributions throughout the dendritic and somal membrane surface of these non-linear conductances; 4) non-uniform and topographic connectivity between pre- and post-synaptic neurons; and 5) activity-dependent changes in synaptic function. One of the essential, and as yet unanswered questions in neuroscience is the role of these fundamental structural and functional features in determining "neural processing" properties of a given brain system. To help answer that question, we're creating a large-scale biologically realistic model of the intrinsic pathway of the hippocampus, which consists of the projection from layer II entorhinal cortex (EC) to dentate gyrus (DG), EC to CA3, DG to CA3, and CA3 to CA1. We describe the computational hardware and software tools the model runs on, and demonstrate its viability as a modeling platform with an EC-to-DG model. PMID:23366951

  2. A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling

    PubMed Central

    Hoang, Roger V.; Tanna, Devyani; Jayet Bray, Laurence C.; Dascalu, Sergiu M.; Harris, Frederick C.

    2013-01-01

    Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across eight machines with each having two video cards. PMID:24106475
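
    The built-in neuron models mentioned here (LIF and Izhikevich) are standard; the NumPy sketch below shows one update step for each, with illustrative parameters, to give a sense of the per-neuron arithmetic a simulator like NCS6 distributes across CPUs and GPUs. It is not NCS6 code.

```python
import numpy as np

def lif_step(v, I, dt=0.1, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0):
    """One Euler step of a leaky integrate-and-fire neuron (illustrative parameters)."""
    v = v + dt * (-(v - v_rest) + I) / tau
    spiked = v >= v_thresh
    return np.where(spiked, v_reset, v), spiked

def izhikevich_step(v, u, I, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One Euler step of the Izhikevich model (regular-spiking parameters)."""
    v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    spiked = v >= 30.0
    return np.where(spiked, c, v), np.where(spiked, u + d, u), spiked

# Advance 1,000 neurons of each type by one 0.1 ms step with random drive.
rng = np.random.default_rng(0)
v_lif, _ = lif_step(np.full(1000, -65.0), rng.uniform(0, 20, 1000))
v_izh, u_izh, _ = izhikevich_step(np.full(1000, -65.0), np.full(1000, -13.0),
                                  rng.uniform(0, 10, 1000))
```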

  3. A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling.

    PubMed

    Hoang, Roger V; Tanna, Devyani; Jayet Bray, Laurence C; Dascalu, Sergiu M; Harris, Frederick C

    2013-01-01

    Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across eight machines with each having two video cards.

  4. Modeling of the cross-beam energy transfer with realistic inertial-confinement-fusion beams in a large-scale hydrocode.

    PubMed

    Colaïtis, A; Duchateau, G; Ribeyre, X; Tikhonchuk, V

    2015-01-01

    A method for modeling realistic laser beams smoothed by kinoform phase plates is presented. The ray-based paraxial complex geometrical optics (PCGO) model with Gaussian thick rays allows one to create intensity variations, or pseudospeckles, that reproduce the beam envelope, contrast, and high-intensity statistics predicted by paraxial laser propagation codes. A steady-state cross-beam energy-transfer (CBET) model is implemented in a large-scale radiative hydrocode based on the PCGO model. It is used in conjunction with the realistic beam modeling technique to study the effects of CBET between coplanar laser beams on the target implosion. The pseudospeckle pattern imposed by PCGO produces modulations in the irradiation field and the shell implosion pressure. Cross-beam energy transfer between beams at 20° and 40° significantly degrades the irradiation symmetry by amplifying low-frequency modes and reducing the laser-capsule coupling efficiency, ultimately leading to large modulations of the shell areal density and lower convergence ratios. These results highlight the role of laser-plasma interaction and its influence on the implosion dynamics.
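
    For orientation, the steady-state coupling that a CBET model of this kind resolves can be written schematically as two coupled intensity equations along the beam paths; this generic textbook form is our illustration, not the specific PCGO implementation described in the abstract:

```latex
\frac{dI_1}{ds_1} \simeq -\,G_{12}\, I_1 I_2, \qquad
\frac{dI_2}{ds_2} \simeq +\,G_{12}\, I_1 I_2 ,
```

    where the gain coefficient G12 depends on the local plasma density, flow velocity, and the frequency and wavenumber mismatch between the two beams, and its sign determines which beam is amplified.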

  5. Modeling of the cross-beam energy transfer with realistic inertial-confinement-fusion beams in a large-scale hydrocode.

    PubMed

    Colaïtis, A; Duchateau, G; Ribeyre, X; Tikhonchuk, V

    2015-01-01

    A method for modeling realistic laser beams smoothed by kinoform phase plates is presented. The ray-based paraxial complex geometrical optics (PCGO) model with Gaussian thick rays allows one to create intensity variations, or pseudospeckles, that reproduce the beam envelope, contrast, and high-intensity statistics predicted by paraxial laser propagation codes. A steady-state cross-beam energy-transfer (CBET) model is implemented in a large-scale radiative hydrocode based on the PCGO model. It is used in conjunction with the realistic beam modeling technique to study the effects of CBET between coplanar laser beams on the target implosion. The pseudospeckle pattern imposed by PCGO produces modulations in the irradiation field and the shell implosion pressure. Cross-beam energy transfer between beams at 20° and 40° significantly degrades the irradiation symmetry by amplifying low-frequency modes and reducing the laser-capsule coupling efficiency, ultimately leading to large modulations of the shell areal density and lower convergence ratios. These results highlight the role of laser-plasma interaction and its influence on the implosion dynamics. PMID:25679718

  6. The role of topography in the transformation of spatiotemporal patterns by a large-scale, biologically realistic model of the rat dentate gyrus.

    PubMed

    Yu, Gene J; Hendrickson, Phillip J; Robinson, Brian S; Song, Dong; Berger, Theodore W

    2013-01-01

    A large-scale, biologically realistic, computational model of the rat hippocampus is being constructed to study the input-output transformation that the hippocampus performs. In the initial implementation, the layer II entorhinal cortex neurons, which provide the major input to the hippocampus, and the granule cells of the dentate gyrus, which receive the majority of the input, are modeled. In a previous work, the topography, or the wiring diagram, connecting these two populations had been derived and implemented. This paper explores the consequences of two features of the topography, the distribution of the axons and the size of the neurons' axon terminal fields. The topography converts streams of independently generated random Poisson trains into structured spatiotemporal patterns through spatiotemporal convergence achievable by overlapping axon terminal fields. Increasing the axon terminal field lengths allowed input to converge over larger regions of space resulting in granule activation across a greater area but did not increase the total activity as a function of time as the number of targets per input remained constant. Additional simulations demonstrated that the total distribution of spikes in space depends not on the distribution of the presynaptic axons but the distribution of the postsynaptic population. Analyzing spike counts emphasizes the importance of the postsynaptic distribution, but it ignores the fact that each individual input may be carrying unique information. Therefore, a metric should be created that relates and tracks individual inputs as they are propagated and integrated through hippocampus.
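
    To make the input transformation concrete, here is a small hypothetical sketch (not the authors' code): independently generated Poisson spike trains acquire spatial structure once each input contacts a fixed number of granule cells drawn from a terminal field centered on its topographic position, and widening the field spreads activation over a larger region without changing the number of targets per input.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_granule = 200, 1000
rate_hz, dt, t_steps = 10.0, 0.001, 1000      # 1 s of 10 Hz Poisson input

# Independent Poisson spike trains, one row per entorhinal input.
spikes = rng.random((n_inputs, t_steps)) < rate_hz * dt

def terminal_field_targets(i, field_len=100, n_targets=50):
    """Targets drawn from a terminal field centered on the input's position."""
    center = int(i * n_granule / n_inputs)
    lo, hi = max(0, center - field_len // 2), min(n_granule, center + field_len // 2)
    return rng.choice(np.arange(lo, hi), size=min(n_targets, hi - lo), replace=False)

# Convergent drive: summed input spikes arriving at each granule cell.
drive = np.zeros((n_granule, t_steps))
for i in range(n_inputs):
    drive[terminal_field_targets(i)] += spikes[i]

print("granule cells receiving any input:", np.count_nonzero(drive.sum(axis=1)))
```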

  7. Population density methods for large-scale modelling of neuronal networks with realistic synaptic kinetics: cutting the dimension down to size.

    PubMed

    Haskell, E; Nykamp, D Q; Tranchina, D

    2001-05-01

    Population density methods provide promising time-saving alternatives to direct Monte Carlo simulations of neuronal network activity, in which one tracks the state of thousands of individual neurons and synapses. A population density method has been found to be roughly a hundred times faster than direct simulation for various test networks of integrate-and-fire model neurons with instantaneous excitatory and inhibitory post-synaptic conductances. In this method, neurons are grouped into large populations of similar neurons. For each population, one calculates the evolution of a probability density function (PDF) which describes the distribution of neurons over state space. The population firing rate is then given by the total flux of probability across the threshold voltage for firing an action potential. Extending the method beyond instantaneous synapses is necessary for obtaining accurate results, because synaptic kinetics play an important role in network dynamics. Embellishments incorporating more realistic synaptic kinetics for the underlying neuron model increase the dimension of the PDF, which was one-dimensional in the instantaneous synapse case. This increase in dimension causes a substantial increase in computation time to find the exact PDF, decreasing the computational speed advantage of the population density method over direct Monte Carlo simulation. We report here on a one-dimensional model of the PDF for neurons with arbitrary synaptic kinetics. The method is more accurate than the mean-field method in the steady state, where the mean-field approximation works best, and also under dynamic-stimulus conditions. The method is much faster than direct simulations. Limitations of the method are demonstrated, and possible improvements are discussed. PMID:11405420
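
    In its simplest (instantaneous-synapse) form, the approach evolves a one-dimensional density over membrane potential and reads the population firing rate off the probability flux through threshold; schematically, in our notation rather than the paper's:

```latex
\frac{\partial \rho(v,t)}{\partial t} = -\frac{\partial J(v,t)}{\partial v},
\qquad
r(t) = J(v_{\mathrm{th}}, t),
```

    where J is the probability flux generated by leak and synaptic input. The embellishments discussed above add synaptic state variables, and hence dimensions, to the density, which is the cost the one-dimensional approximation is designed to avoid.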

  8. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision and to allow for a connection to various types of observational data, geophysical, geodetic and geological, we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model, with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires a high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented Additive Schwarz-type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and stems from the local character of the ILU preconditioner, which yields a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of wall time (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented Algebraic Multigrid type methods (AMG) from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
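
    For readers unfamiliar with the solver combination discussed above (ILU-preconditioned GMRES), a minimal SciPy sketch on a small Poisson-like system illustrates the setup; the mantle-convection systems in question are of course far larger and rely on parallel libraries such as MUMPS and ML rather than SciPy.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small 2-D Poisson-like sparse matrix standing in for a FEM system matrix.
n = 50
A = sp.diags([-1.0, -1.0, 4.0, -1.0, -1.0], [-n, -1, 0, 1, n],
             shape=(n * n, n * n), format="csc")
b = np.ones(n * n)

# Incomplete LU factorization wrapped as a preconditioner for GMRES.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"gmres flag {info}",
      "residual:", np.linalg.norm(A @ x - b))
```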

  9. Large-Scale Simulations of Realistic Fluidized Bed Reactors using Novel Numerical Methods

    NASA Astrophysics Data System (ADS)

    Capecelatro, Jesse; Desjardins, Olivier; Pepiot, Perrine; National Renewable Energy Lab Collaboration

    2011-11-01

    Turbulent particle-laden flows in the form of fluidized bed reactors display good mixing properties, low pressure drops, and a fairly uniform temperature distribution. Understanding and predicting the flow dynamics within the reactor is necessary for improving efficiency and providing technologies for large-scale industrialization. A numerical strategy based on an Eulerian representation of the gas phase and Lagrangian tracking of the particles is developed in the framework of NGA, a high-order, fully conservative parallel code tailored for turbulent flows. The particles are accounted for using a point-particle assumption. Once the gas-phase quantities are mapped to the particle locations, a conservative, implicit diffusion operation smooths the field. Normal and tangential collisions are handled via a soft-sphere model, modified to allow the bed to reach close packing at rest. The pressure drop across the bed is compared with theory to accurately predict the minimum fluidization velocity. 3D simulations of the National Renewable Energy Lab's 4-inch reactor are then conducted. Tens of millions of particles are tracked. The reactor's geometry is modeled using an immersed boundary scheme. Statistics for volume fraction, velocities, bed expansion, and bubble characteristics are analyzed and compared with experimental data.
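
    The soft-sphere collision treatment mentioned above is commonly implemented as a spring-dashpot normal force; a hypothetical minimal version (not the NGA implementation, with made-up stiffness and damping values) looks like this:

```python
import numpy as np

def soft_sphere_normal_force(x1, x2, v1, v2, r1, r2, k=1.0e4, eta=5.0):
    """Spring-dashpot normal contact force on particle 1 from particle 2."""
    d = x1 - x2
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:                        # no contact
        return np.zeros_like(d)
    n = d / dist                              # unit normal pointing from 2 to 1
    vn = np.dot(v1 - v2, n)                   # normal relative velocity
    return (k * overlap - eta * vn) * n       # repulsive spring plus dissipative dashpot

# Two slightly overlapping 1 mm particles approaching each other.
f = soft_sphere_normal_force(np.zeros(3), np.array([0.0019, 0.0, 0.0]),
                             np.array([0.01, 0.0, 0.0]), np.array([-0.01, 0.0, 0.0]),
                             r1=0.001, r2=0.001)
print(f)
```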

  10. Potential and issues in large scale flood inundation modelling

    NASA Astrophysics Data System (ADS)

    Di Baldassarre, Giuliano; Brandimarte, Luigia; Dottori, Francesco; Mazzoleni, Maurizio; Yan, Kun

    2015-04-01

    Recent years have seen growing research interest in large-scale flood inundation modelling. Nowadays, modelling tools and datasets allow for analyzing flooding processes at regional, continental and even global scale with an increasing level of detail. As a result, several research works have already addressed this topic using different methodologies of varying complexity. The potential of these studies is certainly enormous. Large-scale flood inundation modelling can provide valuable information in areas where little information and few studies were previously available. These studies can provide a consistent framework for a comprehensive assessment of flooding processes in the river basins of the world's large rivers, as well as of the impacts of future climate scenarios. To make the most of this potential, we believe it is necessary, on the one hand, to understand the strengths and limitations of the existing methodologies, and on the other hand, to discuss the possibilities and implications of using large-scale flood models for operational flood risk assessment and management. Where should researchers put their effort in order to develop useful and reliable methodologies and outcomes? How can the information coming from large-scale flood inundation studies be used by stakeholders? How should we use this information where previous higher-resolution studies exist, or where official studies are available?

  11. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
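
    The distinction drawn here between Fickian and ecological diffusion can be written compactly; the notation below is generic rather than the authors' exact symbols:

```latex
\text{Fickian: } \frac{\partial u}{\partial t} = \nabla\cdot\big(\mu(\mathbf{x})\,\nabla u\big),
\qquad
\text{Ecological: } \frac{\partial u}{\partial t} = \Delta\big(\mu(\mathbf{x})\,u\big).
```

    Homogenization then replaces the rapidly varying motility μ(x) with an effective coefficient, obtained by averaging over the 10-100 m scale, in a large-scale equation for a slowly varying density, which is what makes 10-100 km predictions tractable.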

  12. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA shows appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  13. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
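
    A toy sketch of the three modelers, with hypothetical function names and none of AQSim's infrastructure: partition means, an Anderson-Darling statistic per partition, and greedy cosine-similarity grouping.

```python
import numpy as np
from scipy import stats
from scipy.spatial.distance import cosine

def partition_means(data, n_parts):
    """Univariate mean modeler: unbiased mean of systematic partitions."""
    return [part.mean() for part in np.array_split(data, n_parts)]

def partition_gof(data, n_parts):
    """Univariate goodness-of-fit modeler: Anderson-Darling statistic per partition."""
    return [stats.anderson(part).statistic for part in np.array_split(data, n_parts)]

def cosine_groups(vectors, threshold=0.1):
    """Multivariate clusterer: greedily group vectors by cosine distance to a seed."""
    groups = []
    for v in vectors:
        for g in groups:
            if cosine(v, g[0]) < threshold:
                g.append(v)
                break
        else:
            groups.append([v])
    return groups

data = np.random.default_rng(1).normal(size=10_000)
print(partition_means(data, 4), partition_gof(data, 4), len(cosine_groups(np.random.rand(20, 3))))
```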

  14. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. PMID:26595397

  15. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  16. Challenges of Modeling Flood Risk at Large Scales

    NASA Astrophysics Data System (ADS)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    algorithm propagates the flows for each simulated event. The model incorporates a digital terrain model (DTM) at 10 m horizontal resolution, which is used to extract flood plain cross-sections such that a one-dimensional hydraulic model can be used to estimate extent and elevation of flooding. In doing so, the effect of flood defenses in mitigating floods is accounted for. Finally, a suite of vulnerability relationships has been developed to estimate flood losses for a portfolio of properties that are exposed to flood hazard. Historical experience indicates that for recent floods in Great Britain more than 50% of insurance claims occur outside the flood plain, and these are primarily a result of excess surface flow, hillside flooding, and flooding due to inadequate drainage. A sub-component of the model addresses this issue by considering several parameters that best explain the variability of claims off the flood plain. The challenges of modeling such a complex phenomenon at a large scale largely dictate the choice of modeling approaches that need to be adopted for each of these model components. While detailed numerically-based physical models exist and have been used for conducting flood hazard studies, they are generally restricted to small geographic regions. In a probabilistic risk estimation framework like our current model, a blend of deterministic and statistical techniques has to be employed such that each model component is independent, physically sound and is able to maintain the statistical properties of observed historical data. This is particularly important because of the highly non-linear behavior of the flooding process. With respect to vulnerability modeling, both on and off the flood plain, the challenges include the appropriate scaling of a damage relationship when applied to a portfolio of properties. This arises from the fact that the estimated hazard parameter used for damage assessment, namely maximum flood depth, has considerable uncertainty. The

  17. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  18. Performance modeling and analysis of consumer classes in large scale systems

    NASA Astrophysics Data System (ADS)

    Al-Shukri, Sh.; Lenin, R. B.; Ramaswamy, S.; Anand, A.; Narasimhan, V. L.; Abraham, J.; Varadan, Vijay

    2009-03-01

    Peer-to-Peer (P2P) networks have been used efficiently as building blocks of overlay networks for large-scale distributed network applications with Internet Protocol (IP) based bottom-layer networks. With large-scale Wireless Sensor Networks (WSNs) becoming increasingly realistic, it is important to build overlay networks with WSNs as the bottom layer. A suitable mathematical (stochastic) model for such overlay networks over WSNs is a queueing network with multi-class customers. In this paper, we discuss how these mathematical network models can be simulated using the object-oriented simulation package OMNeT++. We discuss the Graphical User Interface (GUI) that was developed to accept the input parameter files and execute the simulation. We compare the simulation results with analytical formulas available in the literature for these mathematical models.

  19. Modelling large-scale halo bias using the bispectrum

    NASA Astrophysics Data System (ADS)

    Pollack, Jennifer E.; Smith, Robert E.; Porciani, Cristiano

    2012-03-01

    We study the relation between the density distribution of tracers for large-scale structure and the underlying matter distribution - commonly termed bias - in the Λ cold dark matter framework. In particular, we examine the validity of the local model of biasing at quadratic order in the matter density. This model is characterized by parameters b1 and b2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales. We find that, whilst the fits are reasonably good, the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no smoothing scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo and halo-mass power spectra and from these construct estimates of the effective large-scale bias as a guide for b1. We measure the configuration dependence of the halo bispectra Bhhh and reduced bispectra Qhhh for very large-scale k-space triangles. From these data, we constrain b1 and b2, taking into account the full bispectrum covariance matrix. Using the lowest order perturbation theory, we find that for Bhhh the best-fitting parameters are in reasonable agreement with one another as the triangle scale is varied, although the fits become poor as smaller scales are included. The same is true for Qhhh. The best-fitting values were found to depend on the discreteness correction. This led us to consider halo-mass cross-bispectra. The results from these statistics supported our earlier findings. We then developed a test to explore whether the inconsistency in the recovered bias parameters could be attributed to missing higher order corrections in the models. We prove that low-order expansions are not sufficiently accurate to model the data, even on scales k1 ~ 0.04 h Mpc^-1. If robust inferences concerning bias are to be drawn
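
    The quadratic local bias model being tested writes the tracer overdensity as a Taylor expansion in the local matter overdensity; in the usual notation,

```latex
\delta_h(\mathbf{x}) = b_1\,\delta(\mathbf{x}) + \frac{b_2}{2}\,\delta^2(\mathbf{x}) + \dots,
```

    so that on large scales the halo power spectrum scales as b1^2 times the matter power spectrum, while the bispectrum picks up a b2-dependent contribution, which is why the bispectrum is the natural statistic for constraining both parameters.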

  20. Modeling temporal relationships in large scale clinical associations

    PubMed Central

    Hanauer, David A; Ramakrishnan, Naren

    2013-01-01

    Objective: We describe an approach for modeling temporal relationships in a large scale association analysis of electronic health record data. The addition of temporal information can inform hypothesis generation and help to explain the relationships. We applied this approach on a dataset containing 41.2 million time-stamped International Classification of Diseases, Ninth Revision (ICD-9) codes from 1.6 million patients. Methods: We performed two independent analyses including a pairwise association analysis using a χ2 test and a temporal analysis using a binomial test. Data were visualized using network diagrams and reviewed for clinical significance. Results: We found nearly 400 000 highly associated pairs of ICD-9 codes with varying numbers of strong temporal associations ranging from ≥1 day to ≥10 years apart. Most of the findings were not considered clinically novel, although some, such as an association between Helicobacter pylori infection and diabetes, have recently been reported in the literature. The temporal analysis in our large cohort, however, revealed that diabetes usually preceded the diagnoses of H pylori, raising questions about possible cause and effect. Discussion: Such analyses have significant limitations, some of which are due to known problems with ICD-9 codes and others to potentially incomplete data even at a health system level. Nevertheless, large scale association analyses with temporal modeling can help provide a mechanism for novel discovery in support of hypothesis generation. Conclusions: Temporal relationships can provide an additional layer of meaning in identifying and interpreting clinical associations. PMID:23019240
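
    The two tests can be illustrated on a single hypothetical code pair (the counts below are made up): a chi-squared test on the 2x2 co-occurrence table asks whether the codes are associated at all, and a binomial test on the ordered occurrences asks whether one code precedes the other more often than chance.

```python
from scipy.stats import chi2_contingency, binomtest

# 2x2 patient counts: rows = code A present/absent, columns = code B present/absent.
table = [[1200, 4800],
         [3000, 91000]]
chi2, p_assoc, dof, _ = chi2_contingency(table)

# Among 900 patients with both codes and an unambiguous ordering,
# code A preceded code B 610 times; test against a 50/50 null.
temporal = binomtest(610, n=900, p=0.5)

print(f"chi2={chi2:.1f}, association p={p_assoc:.2e}, temporal p={temporal.pvalue:.2e}")
```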

  1. A large-scale model of the locust antennal lobe.

    PubMed

    Patel, Mainak; Rangan, Aaditya V; Cai, David

    2009-12-01

    The antennal lobe (AL) is the primary structure within the locust's brain that receives information from olfactory receptor neurons (ORNs) within the antennae. Different odors activate distinct subsets of ORNs, implying that neuronal signals at the level of the antennae encode odors combinatorially. Within the AL, however, different odors produce signals with long-lasting dynamic transients carried by overlapping neural ensembles, suggesting a more complex coding scheme. In this work we use a large-scale point neuron model of the locust AL to investigate this shift in stimulus encoding and potential consequences for odor discrimination. Consistent with experiment, our model produces stimulus-sensitive, dynamically evolving populations of active AL neurons. Our model relies critically on the persistence time-scale associated with ORN input to the AL, sparse connectivity among projection neurons, and a synaptic slow inhibitory mechanism. Collectively, these architectural features can generate network odor representations of considerably higher dimension than would be generated by a direct feed-forward representation of stimulus space.

  2. Numerically modelling the large scale coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magnetofrictional model, which is simpler and computationally more economical. We have developed a magnetofrictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.
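
    In the magnetofrictional approach the velocity is not evolved from the full momentum equation; it is instead taken proportional to the Lorentz force, so that the field relaxes toward a force-free state as the induction equation is advanced. One common form (our sketch; normalizations and boundary driving omitted) is

```latex
\mathbf{v} = \frac{1}{\nu}\,\frac{(\nabla\times\mathbf{B})\times\mathbf{B}}{B^{2}},
\qquad
\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}),
```

    with ν a frictional coefficient and the observed photospheric motions entering through the lower boundary condition.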

  3. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analysis of tera-scale data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
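
    As a rough illustration of the idea (compress with a wavelet transform, then answer approximate queries from the retained coefficients), here is a minimal PyWavelets sketch on a 1-D signal; AQSIM itself operates on multivariate spatio-temporal fields and its query machinery is far more involved.

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 20, 4096)) + 0.05 * rng.normal(size=4096)

# Multi-level wavelet decomposition; keep only the largest 5% of coefficients.
coeffs = pywt.wavedec(signal, "db4", level=6)
arr, slices = pywt.coeffs_to_array(coeffs)
cutoff = np.quantile(np.abs(arr), 0.95)
compressed = np.where(np.abs(arr) >= cutoff, arr, 0.0)

# Approximate query: mean over an index range, answered from the compressed model.
approx = pywt.waverec(pywt.array_to_coeffs(compressed, slices, output_format="wavedec"), "db4")
print("exact mean:", signal[1000:2000].mean(), "approximate mean:", approx[1000:2000].mean())
```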

  4. Towards a self-consistent halo model for the nonlinear large-scale structure

    NASA Astrophysics Data System (ADS)

    Schmidt, Fabian

    2016-03-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model (EHM) that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed on large scales, and results of the perturbation theory and the effective field theory can, in principle, be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here, this approach still does not describe the transition regime between perturbation theory and halo scales realistically, which is left as an open problem. We also show explicitly that, when implemented consistently, halo model predictions do not depend on any properties of low-mass halos that are smaller than the scales of interest.
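
    For reference, the standard halo-model power spectrum that this formulation builds on splits into one- and two-halo terms, written as integrals over the halo mass function n(m), linear halo bias b(m), and Fourier-space halo profile u(k|m):

```latex
P(k) = P_{1h}(k) + P_{2h}(k), \qquad
P_{1h}(k) = \int dm\, n(m)\, \frac{m^{2}}{\bar{\rho}^{2}}\, \lvert u(k|m)\rvert^{2}, \qquad
P_{2h}(k) = \left[\int dm\, n(m)\, \frac{m}{\bar{\rho}}\, b(m)\, u(k|m)\right]^{2} P_{\mathrm{lin}}(k).
```

    The halo stochasticity covariance emphasized in the abstract enters when the discrete halo field is related to these continuous ingredients.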

  5. A first large-scale flood inundation forecasting model

    SciTech Connect

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  6. Symmetry-guided large-scale shell-model theory

    NASA Astrophysics Data System (ADS)

    Launey, Kristina D.; Dytrych, Tomas; Draayer, Jerry P.

    2016-07-01

    In this review, we present a symmetry-guided strategy that utilizes exact as well as partial symmetries for enabling a deeper understanding of and advancing ab initio studies for determining the microscopic structure of atomic nuclei. These symmetries expose physically relevant degrees of freedom that, for large-scale calculations with QCD-inspired interactions, allow the model space size to be reduced through a very structured selection of the basis states to physically relevant subspaces. This can guide explorations of simple patterns in nuclei and how they emerge from first principles, as well as extensions of the theory beyond current limitations toward heavier nuclei and larger model spaces. This is illustrated for the ab initio symmetry-adapted no-core shell model (SA-NCSM) and two significant underlying symmetries, the symplectic Sp(3,R) group and its deformation-related SU(3) subgroup. We review the broad scope of nuclei, where these symmetries have been found to play a key role, from the light p-shell systems, such as 6Li, 8B, 8Be, 12C, and 16O, and sd-shell nuclei exemplified by 20Ne, based on first-principle explorations; through the Hoyle state in 12C and enhanced collectivity in intermediate-mass nuclei, within a no-core shell-model perspective; up to strongly deformed species of the rare-earth and actinide regions, as investigated in earlier studies. A complementary picture, driven by symmetries dual to Sp(3,R), is also discussed. We briefly review symmetry-guided techniques that prove useful in various nuclear-theory models, such as the Elliott model, ab initio SA-NCSM, symplectic model, pseudo-SU(3) and pseudo-symplectic models, ab initio hyperspherical harmonics method, ab initio lattice effective field theory, exact pairing-plus-shell model approaches, and cluster models, including the resonating-group method. Important implications of these approaches that have deepened our understanding of emergent phenomena in nuclei, such as enhanced

  7. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach tera-bytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets, we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers, a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts with existing research, which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.

  8. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using

  9. Stable Isotope Tracers in Large Scale Hydrological Models

    NASA Astrophysics Data System (ADS)

    Fekete, B. M.; Aggarwal, P.

    2004-05-01

    Stable isotopes of oxygen and hydrogen (deuterium and oxygen-18) have been shown to be effective tracers for characterizing hydrological processes in small river basins. Their application in large river basins has lagged behind due to the lack of sufficient isotope data. The recent availability of isotope data from most US rivers and subsequent efforts by the International Atomic Energy Agency (IAEA) to collect comprehensive global information on isotope compositions of river runoff are changing this situation. These data sets offer new opportunities to utilize stable isotopes in studies of large river basins. Recent work carried out jointly by the Water Systems Analysis Group of the University of New Hampshire and the Isotope Hydrology Section of the IAEA applied isotope-enabled global water balance and transport models to assess the feasibility of using isotope data for improving water balance estimations at large scales. The model implemented simple mixing in the various storage pools (e.g. snow pack, soil moisture, groundwater, and river channel) and fractionation during evapotranspiration. Sensitivity tests show that spatial and temporal distributions of isotopes in precipitation and their mixing in the various storage pools are the most important factors affecting the isotopic composition of river discharge. The groundwater storage pool plays a key role in the seasonal dynamics of stable isotope composition of river discharge. Fractionation during phase changes appears to have a less pronounced impact. These findings are consistent with those in small scale catchments where ``old water'' and ``new water'' (i.e. pre-event water and storm runoff) can be easily separated by using isotopes. Model validation using available data from US rivers showed remarkable performance considering the inconsistencies in the temporal sampling of precipitation and runoff isotope composition records. The good model performance suggests that seasonal variations of the isotopic
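
    The "simple mixing" in each storage pool amounts to flux-weighted mass balance; for a pool receiving inflows Q_i with isotopic compositions δ_i, the outflow composition is approximately

```latex
\delta_{\mathrm{out}} \;\approx\; \frac{\sum_i Q_i\, \delta_i}{\sum_i Q_i},
```

    with an additional enrichment term applied where evapotranspiration fractionates the remaining water. The notation here is generic, not the model's own.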

  10. Double-step truncation procedure for large-scale shell-model calculations

    NASA Astrophysics Data System (ADS)

    Coraggio, L.; Gargano, A.; Itaco, N.

    2016-06-01

    We present a procedure that is helpful to reduce the computational complexity of large-scale shell-model calculations, by preserving as much as possible the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, in order to locate the relevant degrees of freedom to describe a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that retains effectively the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven proton and five neutron single-particle orbitals outside 88Sr. We study the dependence of shell-model results upon different truncations of the original model space for the Zr, Mo, Ru, Pd, Cd, and Sn isotopic chains, showing the reliability of this truncation procedure.

  11. Modeling parametric scattering instabilities in large-scale expanding plasmas

    NASA Astrophysics Data System (ADS)

    Masson-Laborde, P. E.; Hüller, S.; Pesme, D.; Casanova, M.; Loiseau, P.; Labaune, Ch.

    2006-06-01

    We present results from two-dimensional simulations of long scale-length laser-plasma interaction experiments performed at LULI. With the goal of predictive modeling of such experiments with our code Harmony2D, we take into account realistic plasma density and velocity profiles, the propagation of the laser light beam and the scattered light, as well as the coupling with the ion acoustic waves in order to describe Stimulated Brillouin Scattering (SBS). Laser pulse shaping is taken into account to follow the evolution of the SBS reflectivity as closely as possible to the experiment. The light reflectivity is analyzed by distinguishing the backscattered light confined in the solid angle defined by the aperture of the incident light beam and the scattered light outside this cone. As in the experiment, it is observed that the aperture of the scattered light tends to increase with the mean intensity of the RPP-smoothed laser beam. A further common feature between simulations and experiments is the observed localization of the SBS-driven ion acoustic waves (IAW) in the front part of the target (with respect to the incoming laser beam).

  12. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    NASA Astrophysics Data System (ADS)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment

  13. A new mixed-mode fracture criterion for large-scale lattice models

    NASA Astrophysics Data System (ADS)

    Sachau, T.; Koehn, D.

    2014-01-01

    Reasonable fracture criteria are crucial for the modeling of dynamic failure in computational lattice models. Successful criteria exist for experiments on the micro- and on the mesoscale, which are based on the stress that a bond experiences. In this paper, we test the applicability of these failure criteria to large-scale models, where gravity plays an important role in addition to the externally applied deformation. Brittle structures, resulting from these criteria, do not resemble the outcome predicted by fracture mechanics and by geological observations. For this reason we derive an elliptical fracture criterion, which is based on the strain energy stored in a bond. Simulations using the new criterion result in realistic structures. It is another great advantage of this fracture model that it can be combined with classic geological material parameters: the tensile strength σ0 and the shear cohesion τ0. The proposed fracture criterion is much more robust with regard to numerical strain increments than fracture criteria based on stress (e.g., Drucker-Prager). While we tested the fracture model only for large-scale structures, there is strong reason to believe that the model is equally applicable to lattice simulations on the micro- and on the mesoscale.
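
    An elliptical mixed-mode criterion of the kind described combines the normal and shear loading on a bond against the tensile strength and shear cohesion; a plausible generic form (the paper's own expression is derived from the stored strain energy and may differ in detail) is

```latex
\left(\frac{\sigma_n}{\sigma_0}\right)^{2} + \left(\frac{\tau}{\tau_0}\right)^{2} \;\ge\; 1
\quad\Longrightarrow\quad \text{bond failure},
```

    so that purely tensile loading fails at σ_n = σ_0, purely shear loading at τ = τ_0, and mixed-mode loading fails along the ellipse between these end members.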

  14. Large scale model tests of a new technology V/STOL concept

    NASA Technical Reports Server (NTRS)

    Whittley, D. C.; Koenig, D. G.

    1980-01-01

    An ejector design concept for V/STOL aircraft, featuring a double-delta configuration with two large chordwise ejector slots adjacent to the fuselage side and a tailplane or canard for longitudinal control is examined. Large scale model tests of the concept have shown that ejector systems are capable of significant thrust augmentation at realistic supply pressures and temperatures, so that power plant size and weight can be reduced accordingly. A thrust augmentation of at least 1.75 can be achieved for the isolated ejector, not making allowance for duct and nozzle losses. Substantial reductions in velocity, temperature and noise of the lifting jet are assured due to mixing within the ejector - this lessens the severity of ground erosion and the thrust loss associated with reingestion. Consideration is also given to the effect of ground proximity, longitudinal aerodynamic characteristics, transition performance, and lateral stability.

  15. An empirical model relating U.S. monthly hail occurrence to large-scale meteorological environment

    NASA Astrophysics Data System (ADS)

    Allen, John T.; Tippett, Michael K.; Sobel, Adam H.

    2015-03-01

    An empirical model relating monthly hail occurrence to the large-scale environment has been developed and tested for the United States (U.S.). Monthly hail occurrence for each 1°×1° grid box is defined as the number of hail events that occur there during a month; a hail event consists of a 3 h period with at least one report of hail larger than 1 in. The model is derived using climatological annual cycle data only. Environmental variables are taken from the North American Regional Reanalysis (NARR; 1979-2012). The model includes four environmental variables: convective precipitation, convective available potential energy, storm relative helicity, and mean surface to 90 hPa specific humidity. The model differs in its choice of variables and their relative weighting from existing severe weather indices. The model realistically matches the annual cycle of hail occurrence both regionally and for the contiguous U.S. (CONUS). The modeled spatial distribution is also consistent with the observed hail climatology. However, the westward shift of maximum hail frequency during the summer months is delayed in the model relative to observations, and the model has a lower frequency of hail just east of the Rocky Mountains compared to observations. Year-to-year variability provides an independent test of the model. On monthly and annual time scales, the model reproduces observed hail frequencies. Overall model trends are small compared to observed changes, suggesting that further analysis is necessary to differentiate between physical and nonphysical trends. The empirical hail model provides a new tool for exploration of connections between large-scale climate and severe weather.
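
    As a purely schematic illustration of how such an index can be assembled from reanalysis fields, the sketch below combines the four ingredients multiplicatively; the functional form, the exponents, and the scaling constant are placeholders, not the fitted values of the published model.

    ```python
    import numpy as np

    def monthly_hail_index(cprcp, cape, srh, q_mean,
                           exponents=(1.0, 1.0, 1.0, 1.0), scale=1.0e-3):
        """Toy monthly hail-occurrence index for a 1x1 degree grid box.

        cprcp  -- monthly mean convective precipitation
        cape   -- monthly mean convective available potential energy
        srh    -- monthly mean storm relative helicity
        q_mean -- mean surface-to-90-hPa specific humidity

        All exponents and the overall scale are illustrative placeholders
        standing in for coefficients fitted to the climatological annual cycle.
        """
        a, b, c, d = exponents
        return scale * (cprcp ** a) * (cape ** b) * (np.maximum(srh, 0.0) ** c) * (q_mean ** d)
    ```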

  16. Insights into the Large-Scale Organization of Convection Through Statistical Models of Cloud Regimes

    NASA Astrophysics Data System (ADS)

    Tan, J.; Jakob, C.; Lane, T. P.

    2013-12-01

    Tropical convection is a critical process in the climate system, one that cannot be explicitly resolved in global-scale climate models. As a consequence parametrization schemes are employed that use the resolved large-scale variables of the model to drive the sub-grid scale behavior of ensembles of convective clouds. The representation of convection in models in this way has had only limited success and many model shortcomings, ranging from the mean distribution of tropical precipitation to errors in major modes of tropical variability, have been ascribed to limitations in the parametrization of convection. The exact reasons as to why convection parametrizations fail in some of their basic tasks remain unclear, thereby hindering the development of improvements. In this study we develop and apply simple statistical models of tropical convection based on observations at a resolution comparable to global models to explore the implications of some of the assumptions made in model representations of tropical convection. In particular, we investigate the potential consequences of the commonly-used diagnostic approach to cumulus parametrization, i.e., one where the existence and behavior of convection in one grid box at one model time step is diagnosed without information from previous time steps or neighboring grid boxes. We exploit the relationships between large-scale variables and cloud regimes, which are proxies for different states of convection, to design statistical models of tropical convection with various degrees of sophistication. In particular, we vary the model from a purely diagnostic approach to one adding memory in time to one that also adds information from surrounding points. All models rely on probabilistic rules in which the regimes are assigned based on the large-scale environment. We find that the statistical models fail to reproduce the observed spatiotemporal coherence of the convective regimes. In the purely diagnostic approach the model regimes
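
    The contrast between a purely diagnostic assignment and one with memory can be made concrete in a few lines of code; the regime labels, probabilities, and persistence parameter below are invented for the illustration and are not the regimes or rules used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    REGIMES = ["suppressed", "shallow", "deep"]   # hypothetical regime labels

    def diagnose_regime(large_scale_state):
        """Diagnostic rule: probabilities depend only on the current
        grid-box environment (reduced here to a single moisture value)."""
        q = large_scale_state["moisture"]
        p_deep = min(max((q - 0.4) / 0.4, 0.0), 1.0)
        probs = np.array([1.0 - p_deep, 0.5 * p_deep, 0.5 * p_deep])
        probs /= probs.sum()
        return rng.choice(REGIMES, p=probs)

    def regime_with_memory(large_scale_state, previous_regime, persistence=0.6):
        """Same rule, but a persistence term keeps the previous regime with
        some probability, adding memory in time."""
        if rng.random() < persistence:
            return previous_regime
        return diagnose_regime(large_scale_state)
    ```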

  17. Advancing Software Architecture Modeling for Large Scale Heterogeneous Systems

    SciTech Connect

    Gorton, Ian; Liu, Yan

    2010-11-07

    In this paper we describe how incorporating technology-specific modeling at the architecture level can help reduce risks and produce better designs for large, heterogeneous software applications. We draw an analogy with established modeling approaches in scientific domains, using groundwater modeling as an example, to help illustrate gaps in current software architecture modeling approaches. We then describe the advances in modeling, analysis and tooling that are required to bring sophisticated modeling and development methods within reach of software architects.

  18. Incorporating microbes into large-scale biogeochemical models

    NASA Astrophysics Data System (ADS)

    Allison, S. D.; Martiny, J. B.

    2008-12-01

    Micro-organisms, including Bacteria, Archaea, and Fungi, control major processes throughout the Earth system. Recent advances in microbial ecology and microbiology have revealed an astounding level of genetic and metabolic diversity in microbial communities. However, a framework for interpreting the meaning of this diversity has lagged behind the initial discoveries. Microbial communities have yet to be included explicitly in any major biogeochemical models in terrestrial ecosystems, and have only recently broken into ocean models. Although simplification of microbial communities is essential in complex systems, omission of community parameters may seriously compromise model predictions of biogeochemical processes. Two key questions arise from this tradeoff: 1) When and where must microbial community parameters be included in biogeochemical models? 2) If microbial communities are important, how should they be simplified, aggregated, and parameterized in models? To address these questions, we conducted a meta-analysis to determine if microbial communities are sensitive to four environmental disturbances that are associated with global change. In all cases, we found that community composition changed significantly following disturbance. However, the implications for ecosystem function were unclear in most of the published studies. Therefore, we developed a simple model framework to illustrate the situations in which microbial community changes would affect rates of biogeochemical processes. We found that these scenarios could be quite common, but powerful predictive models cannot be developed without much more information on the functions and disturbance responses of microbial taxa. Small-scale models that explicitly incorporate microbial communities also suggest that process rates strongly depend on microbial interactions and disturbance responses. The challenge is to scale up these models to make predictions at the ecosystem and global scales based on measurable

  19. Multilevel method for modeling large-scale networks.

    SciTech Connect

    Safro, I. M.

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data when they have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  20. Statistical Modeling of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T; Abdulla, G

    2002-02-22

    With the advent of fast computer systems, scientists are now able to generate terabytes of simulation data. Unfortunately, the sheer size of these data sets has made efficient exploration of them impossible. To aid scientists in gathering knowledge from their simulation data, we have developed an ad-hoc query infrastructure. Our system, called AQSim (short for Ad-hoc Queries for Simulation), reduces the data storage requirements and access times in two stages. First, it creates and stores mathematical and statistical models of the data. Second, it evaluates queries on the models of the data instead of on the entire data set. In this paper, we present two simple but highly effective statistical modeling techniques for simulation data. Our first modeling technique computes the true mean of systematic partitions of the data. It makes no assumptions about the distribution of the data and uses a variant of the root mean square error to evaluate a model. In our second statistical modeling technique, we use the Anderson-Darling goodness-of-fit method on systematic partitions of the data. This second method evaluates a model by how well it passes the normality test on the data. Both of our statistical models summarize the data so as to answer range queries in the most effective way. We calculate precision on an answer to a query by scaling the one-sided Chebyshev inequality with the original mesh's topology. Our experimental evaluations on two scientific simulation data sets illustrate the value of using these statistical modeling techniques on large simulation data sets.
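
    A toy version of the first technique, partition means scored with an RMSE-style error, together with the one-sided Chebyshev (Cantelli) bound that motivates the precision estimate; the partitioning scheme and the way AQSim scales the bound by the mesh topology are not reproduced here.

    ```python
    import numpy as np

    def partition_models(data, n_parts):
        """Model each systematic partition of a 1-D array by its true mean,
        and report a root-mean-square error for the fit."""
        parts = np.array_split(np.asarray(data, dtype=float), n_parts)
        means = np.array([p.mean() for p in parts])
        rmse = np.sqrt(np.mean([np.mean((p - m) ** 2) for p, m in zip(parts, means)]))
        return means, rmse

    def cantelli_bound(k):
        """One-sided Chebyshev (Cantelli) bound: P(X - mu >= k*sigma) <= 1/(1+k^2)."""
        return 1.0 / (1.0 + k * k)

    data = np.random.default_rng(1).normal(5.0, 2.0, 10_000)
    means, rmse = partition_models(data, n_parts=16)
    print(means[:4], rmse, cantelli_bound(2.0))
    ```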

  1. Propagating waves in visual cortex: a large-scale model of turtle visual cortex.

    PubMed

    Nenadic, Zoran; Ghosh, Bijoy K; Ulinski, Philip

    2003-01-01

    This article describes a large-scale model of turtle visual cortex that simulates the propagating waves of activity seen in real turtle cortex. The cortex model contains 744 multicompartment models of pyramidal cells, stellate cells, and horizontal cells. Input is provided by an array of 201 geniculate neurons modeled as single compartments with spike-generating mechanisms and axons modeled as delay lines. Diffuse retinal flashes or presentation of spots of light to the retina are simulated by activating groups of geniculate neurons. The model is limited in that it does not have a retina to provide realistic input to the geniculate and the cortex, and it does not incorporate all of the biophysical details of real cortical neurons. However, the model does reproduce the fundamental features of planar propagating waves. Activation of geniculate neurons produces a wave of activity that originates at the rostrolateral pole of the cortex at the point where a high density of geniculate afferents enter the cortex. Waves propagate across the cortex with velocities of 4 μm/ms to 70 μm/ms and occasionally reflect from the caudolateral border of the cortex. PMID:12567015

  2. Coordinated reset stimulation in a large-scale model of the STN-GPe circuit

    PubMed Central

    Ebert, Martin; Hauptmann, Christian; Tass, Peter A.

    2014-01-01

    Synchronization of populations of neurons is a hallmark of several brain diseases. Coordinated reset (CR) stimulation is a model-based stimulation technique which specifically counteracts abnormal synchrony by desynchronization. Electrical CR stimulation, e.g., for the treatment of Parkinson's disease (PD), is administered via depth electrodes. In order to get a deeper understanding of this technique, we extended the top-down approach of previous studies and constructed a large-scale computational model of the respective brain areas. Furthermore, we took into account the spatial anatomical properties of the simulated brain structures and incorporated a detailed numerical representation of 2 · 10^4 simulated neurons. We simulated the subthalamic nucleus (STN) and the globus pallidus externus (GPe). Connections within the STN were governed by spike-timing-dependent plasticity (STDP). In this way, we modeled the physiological and pathological activity of the considered brain structures. In particular, we investigated how plasticity could be exploited and how the model could be shifted from strongly synchronized (pathological) activity to strongly desynchronized (healthy) activity of the neuronal populations via CR stimulation of the STN neurons. Furthermore, we investigated the impact of specific stimulation parameters, especially the electrode position, on the stimulation outcome. Our model provides a step toward a biophysically realistic model of the brain areas relevant to the emergence of pathological neuronal activity in PD. Furthermore, our model constitutes a test bench for the optimization of both stimulation parameters and novel electrode geometries for efficient CR stimulation. PMID:25505882

  3. Geometric algorithms for electromagnetic modeling of large scale structures

    NASA Astrophysics Data System (ADS)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multipole Methods (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  4. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language, which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
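
    A stripped-down null-event loop conveys the idea: at every fixed time step a candidate rule is drawn at random and fires only with a probability proportional to its rate, otherwise a null event occurs and only time advances. The two-rule reversible binding system and the rate constants are invented for the example and are not DYNSTOC's actual BNGL machinery.

    ```python
    import random

    # Hypothetical two-rule system: A + B -> AB (k1), AB -> A + B (k2)
    state = {"A": 500, "B": 500, "AB": 0}
    k1, k2 = 1e-3, 5e-2
    t, t_end, dt = 0.0, 10.0, 1e-3          # fixed time step per attempted event
    rng = random.Random(42)

    while t < t_end:
        t += dt
        if rng.random() < 0.5:                        # pick a rule at random
            prob = k1 * state["A"] * state["B"] * dt   # acceptance probability (kept << 1)
            if rng.random() < prob and state["A"] > 0 and state["B"] > 0:
                state["A"] -= 1; state["B"] -= 1; state["AB"] += 1
            # otherwise: null event, nothing changes
        else:
            prob = k2 * state["AB"] * dt
            if rng.random() < prob and state["AB"] > 0:
                state["AB"] -= 1; state["A"] += 1; state["B"] += 1

    print(state)
    ```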

  5. Modeling and simulation of large scale stirred tank

    NASA Astrophysics Data System (ADS)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. There were seven numerical models constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants that had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington, at West Valley in New York, and at the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham Plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the

  6. Modelling large scale human activity in San Francisco

    NASA Astrophysics Data System (ADS)

    Gonzalez, Marta

    2010-03-01

    Diverse groups of people with a wide variety of schedules, activities, and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behaviors in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, have intrinsic limitations in covering large numbers of individuals, and raise some reliability problems. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally, we use a complementary data set from smart subway fare cards, which provides the exact time at which each passenger enters or leaves a subway station and the coordinates of that station. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns at each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travel that incorporates different empirically observed aspects.

  7. Large-Scale Modeling of Wordform Learning and Representation

    ERIC Educational Resources Information Center

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn nearly…

  8. Complex nuclear spectra in a large scale shell model approach

    NASA Astrophysics Data System (ADS)

    Bianco, D.; Andreozzi, F.; Lo Iudice, N.; Porrino, A.; Knapp, F.

    2012-05-01

    We report on a shell model implementation of an iterative matrix diagonalization algorithm in the spin uncoupled scheme. A new importance sampling is adopted which brings the eigenvalues to convergence with about 10% of the basis states. The method is shown to be able to provide an exhaustive description of the low-energy spectroscopic properties of 132-134Xe isotopes and of the spectrum of 130Xe.

  9. Multistability in Large Scale Models of Brain Activity

    PubMed Central

    Golos, Mathieu; Jirsa, Viktor; Daucé, Emmanuel

    2015-01-01

    Noise driven exploration of a brain network’s dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network’s capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain’s dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system’s attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a larger number of attractors than previously reported. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially segregated components, with a centro-medial core and several well-delineated regional patches. Those different modes share similarity with the fMRI independent components observed in the “resting state” condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors. PMID:26709852
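
    A compact sketch of a deterministic graded-response Hopfield relaxation on a connectome-like coupling matrix, with attractors collected by uniform sampling of initial conditions; the random symmetric matrix, the gain (inverse "temperature") value, and the convergence test are illustrative stand-ins for the connectome-based setup of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 66                                    # number of regions (illustrative)
    W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    W = 0.5 * (W + W.T)                       # stand-in for a symmetric connectome
    np.fill_diagonal(W, 0.0)

    def relax(x0, beta=5.0, dt=0.1, steps=5000, tol=1e-8):
        """Deterministic graded-response dynamics:
           dx/dt = -x + tanh(beta * W @ x).  Lower 'temperature' = larger beta."""
        x = x0.copy()
        for _ in range(steps):
            x_new = x + dt * (-x + np.tanh(beta * (W @ x)))
            if np.max(np.abs(x_new - x)) < tol:
                return x_new
            x = x_new
        return x

    # Uniform sampling of initial conditions to collect distinct fixed points
    attractors = []
    for _ in range(200):
        fp = relax(rng.uniform(-1, 1, N))
        if not any(np.allclose(fp, a, atol=1e-3) for a in attractors):
            attractors.append(fp)
    print(len(attractors), "distinct fixed points found")
    ```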

  10. Renormalizing a viscous fluid model for large scale structure formation

    NASA Astrophysics Data System (ADS)

    Führer, Florian; Rigopoulos, Gerasimos

    2016-02-01

    Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry as the full system of continuity+Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to the standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean invariance, higher order vertices related to the viscosity and the noise must then be added, and we explicitly show at one loop that these terms act as counterterms for vertex diagrams. The Ward identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring fewer free parameters for its renormalization.

  11. Large scale molecular dynamics modeling of materials fabrication processes

    SciTech Connect

    Belak, J.; Glosli, J.N.; Boercker, D.B.; Stowers, I.F.

    1994-02-01

    An atomistic molecular dynamics model of materials fabrication processes is presented. Several material removal processes are shown to be within the domain of this simulation method. Results are presented for orthogonal cutting of copper and silicon and for crack propagation in silica glass. Both copper and silicon show ductile behavior, but the atomistic mechanisms that allow this behavior are significantly different in the two cases. The copper chip remains crystalline while the silicon chip transforms into an amorphous state. The critical stress for crack propagation in silica glass was found to be in reasonable agreement with experiment and a novel stick-slip phenomenon was observed.

  12. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-07-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that were part of the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation, i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  13. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
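
    The large-scale step can be illustrated in a few lines: draw a Gaussian random field with an anisotropic correlation, then threshold it at the quantile that leaves a prescribed fraction of the domain raining. Grid size, correlation lengths, and the occupation rate below are arbitrary example values rather than the ARAMIS-derived statistics.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(3)
    nx = ny = 512                      # grid cells over a ~1000 x 1000 km domain (illustrative)
    occupation_rate = 0.15             # target fraction of the domain that is raining

    # Anisotropic Gaussian field: smooth white noise with different
    # correlation lengths along the two axes (e.g. along/across a front).
    field = gaussian_filter(rng.standard_normal((ny, nx)), sigma=(40, 10))

    # Threshold at the quantile that leaves exactly the desired rain fraction.
    threshold = np.quantile(field, 1.0 - occupation_rate)
    rain_mask = field > threshold      # True where midscale rain-cell modeling is applied
    print(rain_mask.mean())            # ~0.15
    ```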

  14. Soil hydrologic characterization for modeling large scale soil remediation protocols

    NASA Astrophysics Data System (ADS)

    Romano, Nunzio; Palladino, Mario; Di Fiore, Paola; Sica, Benedetto; Speranza, Giuseppe

    2014-05-01

    In the Campania Region (Italy), the Ministry of the Environment identified a National Interest Priority Site (NIPS) with a surface of about 200,000 ha, characterized by different levels and sources of pollution. This area, called Litorale Domitio-Agro Aversano, includes some polluted agricultural land belonging to more than 61 municipalities in the Naples and Caserta provinces. In this area, high and spatially spotted soil contamination is moreover due to legal and illegal dumping of industrial and municipal wastes, with hazardous consequences also for the quality of the water table. The EU-Life+ project ECOREMED (Implementation of eco-compatible protocols for agricultural soil remediation in Litorale Domizio-Agro Aversano NIPS) has the major aim of defining an operating protocol for agriculture-based bioremediation of contaminated agricultural soils, also including the use of pollutant-extracting crops as biomass for renewable energy production. In the framework of this project, soil hydrologic characterization plays a key role, and modeling water flow and solute transport raises two main challenges on which we focus. A first question is related to the fate of contaminants infiltrated from stormwater runoff and the potential for groundwater contamination. Another question is the quantification of the fluxes and spatial extent of root water uptake by the plant species employed to extract pollutants from the uppermost soil horizons. Given the high spatial variability of the pollutant distribution, we use soil characterization at different scales, from the field scale when addressing the root water uptake process to the regional scale when simulating the interaction between soil hydrology and groundwater fluxes.

  15. Large Scale Modelling of Glow Discharges or Non - Plasmas

    NASA Astrophysics Data System (ADS)

    Shankar, Sadasivan

    The Electron Velocity Distribution Function (EVDF) in the cathode fall of a DC helium glow discharge was evaluated from a numerical solution of the Boltzmann Transport Equation (BTE). The numerical technique was based on a Petrov-Galerkin technique and a unique combination of streamline upwinding with self-consistent feedback-based shock-capturing. The EVDF for the cathode fall was solved at 1 Torr, as a function of position x, axial velocity v_x, radial velocity v_r, and time t. The electron-neutral collisions consisted of elastic, excitation, and ionization processes. The algorithm was optimized and vectorized to speed execution by more than a factor of 10 on a CRAY X-MP. Efficient storage schemes were used to save the memory allocation required by the algorithm. The analysis of the solution of the BTE was done in terms of the eight moments that were evaluated. Higher moments were found necessary to study the momentum and energy fluxes. The time and length scales were estimated and used as a basis for the characterization of DC glow discharges. Based on an exhaustive study of Knudsen numbers, it was observed that the electrons in the cathode fall were in the transition or Boltzmann regime. The shortest relaxation time was that for momentum, and the longest were the ionization and energy relaxation times. The other time scales were those for plasma reaction, diffusion, convection, transit, entropy relaxation, and the mean free flight time between collisions. Different models were classified based on the moments, time scales, and length scales in their applicability to glow discharges. These consisted of the BTE with different numbers of phase and configuration dimensions, the Bhatnagar-Gross-Krook equation, moment equations (e.g. Drift-Diffusion, Drift-Diffusion-Inertia), and spherical harmonic expansions.

  16. Numerical models for ac loss calculation in large-scale applications of HTS coated conductors

    NASA Astrophysics Data System (ADS)

    Quéval, Loïc; Zermeño, Víctor M. R.; Grilli, Francesco

    2016-02-01

    Numerical models are powerful tools to predict the electromagnetic behavior of superconductors. In recent years, a variety of models have been successfully developed to simulate high-temperature-superconducting (HTS) coated conductor tapes. While the models work well for the simulation of individual tapes or relatively small assemblies, their direct applicability to devices involving hundreds or thousands of tapes, e.g., coils used in electrical machines, is questionable. Indeed, the simulation time and memory requirement can quickly become prohibitive. In this paper, we develop and compare two different models for simulating realistic HTS devices composed of a large number of tapes: (1) the homogenized model simulates the coil using an equivalent anisotropic homogeneous bulk with specifically developed current constraints to account for the fact that each turn carries the same current; (2) the multi-scale model parallelizes and reduces the computational problem by simulating only several individual tapes at significant positions of the coil’s cross-section using appropriate boundary conditions to account for the field generated by the neighboring turns. Both methods are used to simulate a coil made of 2000 tapes, and compared against the widely used H-formulation finite-element model that includes all the tapes. Both approaches allow faster simulations of large numbers of HTS tapes by 1-3 orders of magnitude, while maintaining good accuracy of the results. Both models can therefore be used to design and optimize large-scale HTS devices. This study provides key advancement with respect to previous versions of both models. The homogenized model is extended from simple stacks to large arrays of tapes. For the multi-scale model, the importance of the choice of the current distribution used to generate the background field is underlined; the error in ac loss estimation resulting from the most obvious choice of starting from a uniform current distribution is revealed.

  17. Using large-scale neural models to interpret connectivity measures of cortico-cortical dynamics at millisecond temporal resolution

    PubMed Central

    Banerjee, Arpan; Pillai, Ajay S.; Horwitz, Barry

    2012-01-01

    Over the last two decades numerous functional imaging studies have shown that higher order cognitive functions are crucially dependent on the formation of distributed, large-scale neuronal assemblies (neurocognitive networks), often for very short durations. This has fueled the development of a vast number of functional connectivity measures that attempt to capture the spatiotemporal evolution of neurocognitive networks. Unfortunately, interpreting the neural basis of goal-directed behavior using connectivity measures on neuroimaging data is highly dependent on the assumptions underlying the development of the measure, the nature of the task, and the modality of the neuroimaging technique that was used. This paper has two main purposes. The first is to provide an overview of some of the different measures of functional/effective connectivity that deal with high temporal resolution neuroimaging data. We will include some results that come from a recent approach that we have developed to identify the formation and extinction of task-specific, large-scale neuronal assemblies from electrophysiological recordings at a ms-by-ms temporal resolution. The second purpose of this paper is to indicate how to partially validate the interpretations drawn from this (or any other) connectivity technique by using simulated data from large-scale, neurobiologically realistic models. Specifically, we applied our recently developed method to realistic simulations of MEG data during a delayed match-to-sample (DMS) task condition and a passive viewing of stimuli condition using a large-scale neural model of the ventral visual processing pathway. Simulated MEG data using simple head models were generated from sources placed in V1, V4, IT, and prefrontal cortex (PFC) for the passive viewing condition. The results show how closely the conclusions obtained from the functional connectivity method match with what actually occurred at the neuronal network level. PMID:22291621

  18. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  19. A parallel implementation of the Lattice Solid Model for large scale simulation of earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Abe, S.; Place, D.; Mora, P.

    2001-12-01

    The particle-based lattice solid model has been used successfully as a virtual laboratory to simulate the dynamics of faults, earthquakes and gouge processes. The phenomena investigated with the lattice solid model range from the stick-slip behavior of faults, localization phenomena in gouge and the evolution of stress correlation in multi-fault systems, to the influence of rate and state-dependent friction laws on the macroscopic behavior of faults. However, the results from those simulations also show that in order to make a next step towards more realistic simulations it will be necessary to use three-dimensional models containing a large number of particles with a range of sizes, thus requiring a significantly increased amount of computing resources. Whereas the computing power provided by a single processor can be expected to double every 18 to 24 months, parallel computers which provide hundreds of times the computing power are available today and there are several efforts underway to construct dedicated parallel computers and associated simulation software systems for large-scale earth science simulation (e.g. the Australian Computational Earth Systems Simulator [1] and the Japanese Earth Simulator [2]). In order to use the computing power made available by those large parallel computers, a parallel version of the lattice solid model has been implemented. In order to guarantee portability over a wide range of computer architectures, a message passing approach based on MPI has been used in the implementation. Particular care has been taken to eliminate serial bottlenecks in the program, thus ensuring high scalability on systems with a large number of CPUs. Measures taken to achieve this objective include the use of asynchronous communication between the parallel processes and the minimization of communication with and work done by a central "master" process. Benchmarks using models with up to 6 million particles on a parallel computer with 128 CPUs show that the

  20. Non-Gaussianity and large-scale structure in a two-field inflationary model

    SciTech Connect

    Tseliakhovich, Dmitriy; Hirata, Christopher

    2010-08-15

    Single-field inflationary models predict nearly Gaussian initial conditions, and hence a detection of non-Gaussianity would be a signature of the more complex inflationary scenarios. In this paper we study the effect on the cosmic microwave background and on large-scale structure from primordial non-Gaussianity in a two-field inflationary model in which both the inflaton and curvaton contribute to the density perturbations. We show that in addition to the previously described enhancement of the galaxy bias on large scales, this setup results in large-scale stochasticity. We provide joint constraints on the local non-Gaussianity parameter f̃_NL and the ratio ξ of the amplitude of primordial perturbations due to the inflaton and curvaton using WMAP and Sloan Digital Sky Survey data.

  1. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    ERIC Educational Resources Information Center

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  2. On the Estimation of Hierarchical Latent Regression Models for Large-Scale Assessments

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas; Jiang, Yanlin

    2009-01-01

    To find population proficiency distributions, a two-level hierarchical linear model may be applied to large-scale survey assessments such as the National Assessment of Educational Progress (NAEP). The model and parameter estimation are developed and a simulation was carried out to evaluate parameter recovery. Subsequently, both a hierarchical and…

  3. An Alternative Way to Model Population Ability Distributions in Large-Scale Educational Surveys

    ERIC Educational Resources Information Center

    Wetzel, Eunike; Xu, Xueli; von Davier, Matthias

    2015-01-01

    In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…

  4. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  5. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p ≈ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
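
    For reference, the hierarchical three-point amplitudes discussed in this record are conventionally defined as follows (standard definitions, not quoted from the paper):

    ```latex
    Q_3(r_{12}, r_{23}, r_{31}) =
      \frac{\zeta(r_{12}, r_{23}, r_{31})}
           {\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})},
    \qquad
    S_3 = \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}},
    ```

    where ζ is the connected three-point correlation function, ξ the two-point correlation function, and δ the smoothed density contrast.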

  6. The three-point function as a probe of models for large-scale structure

    SciTech Connect

    Frieman, J.A.; Gaztanaga, E.

    1993-06-19

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ~ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  7. On the sensitivity of large scale sea-ice models to snow thermal conductivity

    NASA Astrophysics Data System (ADS)

    Lecomte, O.; Fichefet, T.; Vancoppenolle, M.; Massonnet, F.

    2012-04-01

    In both hemispheres, the sea-ice snow cover is a key element in the local climate system and particularly in the processes driving the sea-ice thickness evolution. Because of its high reflectance and thermal insulating properties, the snow pack inhibits or delays the sea-ice summer surface melt. In winter, however, snow acts as a blanket that curtails the heat loss from the sea ice to the atmosphere and therefore reduces the basal growth rate. Among the snow thermo-physical properties, snow thermal conductivity is known to be one of the most important with regard to the sea-ice-related thermodynamical processes. In the literature, both model and observational studies parameterize the snow thermal conductivity as a function of density, and several different relationships are used. For the purpose of large scale modelling, one issue is then to have the snow density correctly represented while, for computational cost reasons, a comprehensive snow scheme can generally not be used in such models. Since it is known by observationalists that one of the key atmospheric parameters that affect snow thermal conductivity and density is the wind speed, one way to get around the problem is to try to have a realistic representation of the snow density profiles on the sea ice directly, using observations or simple wind-speed-dependent parameterizations. In this study, we analyze the importance of the snow density profile and thermal conductivity in the thermodynamic-dynamic Louvain-la-Neuve Sea-Ice Model (LIM3), which is part of the ocean modelling platform NEMO (Nucleus for European Modelling of the Ocean, IPSL, Paris). In order to do this, a new snow thermodynamic scheme was developed and implemented into LIM3. This scheme is multilayer with varying snow thermo-physical properties. For memory and computational cost reasons, it includes only 3 layers but the vertical grid is refined in thermodynamic routines. Although snow density is time- and space-dependent in the model, it is not
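
    A sketch of the kind of density-based conductivity relationship such schemes rely on; the piecewise quadratic form mirrors commonly cited regressions of the Sturm type, but the coefficients below should be read as illustrative placeholders rather than the values used in LIM3.

    ```python
    def snow_conductivity(rho_snow):
        """Effective snow thermal conductivity [W m-1 K-1] as a function of
        snow density rho_snow [kg m-3].

        Piecewise density regression; the coefficients are illustrative
        placeholders standing in for a fitted relationship of the
        Sturm et al. type.
        """
        rho = rho_snow / 1000.0          # convert to g cm-3
        if rho < 0.156:
            return 0.023 + 0.234 * rho
        return 0.138 - 1.01 * rho + 3.233 * rho ** 2

    # Wind-packed snow (~330 kg m-3) conducts heat noticeably better
    # than low-density fresh snow (~100 kg m-3):
    print(snow_conductivity(100.0), snow_conductivity(330.0))
    ```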

  8. Modeling haboob dust storms in large-scale weather and climate models

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Marsham, John H.; Panitz, Hans-Jürgen; Bischoff-Gauss, Ingeborg

    2016-03-01

    Recent field campaigns have shown that haboob dust storms, formed by convective cold pool outflows, contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in situ observations are sparse and haboobs are frequently concealed by clouds in satellite imagery. Furthermore, most large-scale weather and climate models lack haboobs, because they do not explicitly represent convection. Here, a 1-year-long model run with explicit representation of convection delivers the first full seasonal cycle of haboobs over northern Africa. Using conservative estimates, the model suggests that haboobs contribute one fifth of the annual dust-generating winds over northern Africa, one fourth between May and October, and one third over the western Sahel during this season. A simple parameterization of haboobs has recently been developed for models with parameterized convection, based on the downdraft mass flux of convection schemes. It is applied here to two model runs with different horizontal resolutions and assessed against the explicit run. The parameterization succeeds in capturing the geographical distribution of haboobs and their seasonal cycle over the Sahara and Sahel. It can be tuned to the different horizontal resolutions, and different formulations are discussed with respect to the frequency of extreme events. The results show that the parameterization is reliable and may solve a major and long-standing issue in simulating dust storms in large-scale weather and climate models.

  9. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  10. Influence of a compost layer on the attenuation of 28 selected organic micropollutants under realistic soil aquifer treatment conditions: insights from a large scale column experiment.

    PubMed

    Schaffer, Mario; Kröger, Kerrin Franziska; Nödler, Karsten; Ayora, Carlos; Carrera, Jesús; Hernández, Marta; Licha, Tobias

    2015-05-01

    Soil aquifer treatment is widely applied to improve the quality of treated wastewater in its reuse as an alternative source of water. To gain a deeper understanding of the fate of organic micropollutants introduced in this way, the attenuation of 28 compounds was investigated in column experiments using two large scale column systems in duplicate. The influence of increasing proportions of solid organic matter (0.04% vs. 0.17%) and decreasing redox potentials (denitrification vs. iron reduction) was studied by introducing a layer of compost. Secondary effluent from a wastewater treatment plant was used as the water matrix for simulating soil aquifer treatment. For neutral and anionic compounds, sorption generally increases with the compound hydrophobicity and the solid organic matter in the column system. Organic cations showed the highest attenuation. Among them, breakthroughs were only registered for the cationic beta-blockers atenolol and metoprolol. An enhanced degradation in the columns with an organic infiltration layer was observed for the majority of the compounds, suggesting improved degradation at higher levels of biodegradable dissolved organic carbon. Only the degradation of sulfamethoxazole could be clearly attributed to redox effects (when reaching iron-reducing conditions). The study provides valuable insights into the attenuation potential for a wide spectrum of organic micropollutants under realistic soil aquifer treatment conditions. Furthermore, the introduction of the compost layer generally showed positive effects on the removal of compounds preferentially degraded under reducing conditions and also increased the residence times in the soil aquifer treatment system via sorption.

  11. A low-dimensional model for large-scale coherent structures

    NASA Astrophysics Data System (ADS)

    Bai, Kunlun; Ji, Dandan; Brown, Eric

    2015-11-01

    We demonstrate a methodology to predict the dynamics of the large-scale coherent structures in turbulence using a simple low dimensional stochastic model proposed by Brown and Ahlers (Phys. Fluids, 2008). The model terms are derived from the Navier-Stokes equations, including a potential term depending on the geometry of the system. The model has previously described several dynamical modes of the large-scale circulation (LSC) in turbulent Rayleigh-Bénard convection. Here we test a model prediction for the existence of a new mode where the LSC stochastically changes direction to align with different diagonals of a cubic container. The model successfully predicts the switching rate of the LSC at different tilting conditions. The success of the prediction of the switching mode demonstrates that a low-dimensional turbulent model can quantitatively predict the existence and properties of different dynamical states that result from boundary geometry.
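
    The low-dimensional model referred to here is a stochastic ordinary differential equation with a geometry-dependent potential term. A minimal sketch of such a model, assuming a four-well potential whose minima sit on the diagonals of a cubic cell and integrating it with Euler-Maruyama (all parameter values are illustrative):

```python
import numpy as np

def simulate_lsc_orientation(n_steps=100_000, dt=1.0, A=1e-4, D=5e-5, seed=0):
    """Euler-Maruyama integration of a Langevin-type equation for the LSC
    orientation theta in a cubic cell. V(theta) = A*cos(4*theta) is an assumed
    four-well potential with minima on the cell diagonals; A, D and dt are
    illustrative numbers, not fitted model parameters."""
    rng = np.random.default_rng(seed)
    theta = np.empty(n_steps)
    theta[0] = np.pi / 4                                  # start in one of the wells
    for i in range(1, n_steps):
        drift = 4.0 * A * np.sin(4.0 * theta[i - 1])      # -dV/dtheta
        theta[i] = theta[i - 1] + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    return theta

theta = simulate_lsc_orientation()
wells = np.floor((theta % (2 * np.pi)) / (np.pi / 2)).astype(int)
print("barrier crossings (proxy for diagonal switches):", np.count_nonzero(np.diff(wells)))
```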

  12. An Efficient Simulation Environment for Modeling Large-Scale Cortical Processing

    PubMed Central

    Richert, Micah; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L.

    2011-01-01

    We have developed a spiking neural network simulator, which is both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models. The simulator implements current- or conductance-based Izhikevich neuron networks with spike-timing-dependent plasticity and short-term plasticity. It uses a standard network construction interface. The simulator allows for execution on either GPUs or CPUs. The simulator, which is written in C/C++, allows for both fine-grain and coarse-grain specificity of a host of parameters. We demonstrate the ease of use and computational efficiency of this model by implementing a large-scale model of cortical areas V1, V4, and area MT. The complete model, which has 138,240 neurons and approximately 30 million synapses, runs in real time on an off-the-shelf GPU. The simulator source code, as well as the source code for the cortical model examples, is publicly available. PMID:22007166
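
    The neuron model named in this abstract is the Izhikevich model. A minimal single-neuron sketch of its update rule with regular-spiking parameters is given below; the network construction, STDP, and GPU execution described in the abstract are not shown, and the crude 1 ms Euler step is a simplification.

```python
def izhikevich(I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0, n_steps=1000):
    """Single Izhikevich neuron (regular-spiking parameters) driven by a constant
    current I [arbitrary units]; returns the spike times in ms."""
    v, u = -65.0, b * -65.0
    spikes = []
    for n in range(n_steps):
        v += dt * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike detected: record and reset
            spikes.append(n * dt)
            v, u = c, u + d
    return spikes

print("spikes in 1 s:", len(izhikevich(I=10.0)))
```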

  13. Modelling Convective Dust Storms in Large-Scale Weather and Climate Models

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Marsham, John H.; Panitz, Hans-Jürgen; Bischoff-Gauss, Ingeborg

    2016-04-01

    Recent field campaigns have shown that convective dust storms - also known as haboobs or cold pool outflows - contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in-situ observations are sparse and convective dust storms are frequently concealed by clouds in satellite imagery. Therefore numerical models are often the only available source of information over the area. Here a regional climate model with explicit representation of convection delivers the first full seasonal cycle of convective dust storms over North Africa. The model suggests that they contribute one fifth of the annual dust uplift over North Africa, one fourth between May and October, and one third over the western Sahel during this season. In contrast, most large-scale weather and climate models do not explicitly represent convection and thus lack such storms. A simple parameterization of convective dust storms has recently been developed, based on the downdraft mass flux of convection schemes. The parameterization is applied here to a set of regional climate runs with different horizontal resolutions and convection schemes, and assessed against the explicit run and against sparse station observations. The parameterization succeeds in capturing the geographical distribution and seasonal cycle of convective dust storms. It can be tuned to different horizontal resolutions and convection schemes, although the details of the geographical distribution and seasonal cycle depend on the representation of the monsoon in the parent model. Different versions of the parameterization are further discussed with respect to differences in the frequency of extreme events. The results show that the parameterization is reliable and can therefore solve a long-standing problem in simulating dust storms in large-scale weather and climate models.

  14. Modelling Convective Dust Storms in Large-Scale Weather and Climate Models

    NASA Astrophysics Data System (ADS)

    Pantillon, F.; Knippertz, P.; Marsham, J. H.; Panitz, H. J.; Bischoff-Gauss, I.

    2015-12-01

    Recent field campaigns have shown that convective dust storms - also known as haboobs or cold pool outflows - contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in-situ observations are sparse and convective dust storms are frequently concealed by clouds in satellite imagery. Therefore numerical models are often the only available source of information over the area. Here a regional climate model with explicit representation of convection delivers the first full seasonal cycle of convective dust storms over North Africa. The model suggests that they contribute one fifth of the annual dust uplift over North Africa, one fourth between May and October, and one third over the western Sahel during this season. In contrast, most large-scale weather and climate models do not explicitly represent convection and thus lack such storms. A simple parameterization of convective dust storms has recently been developed, based on the downdraft mass flux of convection schemes. The parameterization is applied here to a set of regional climate runs with different horizontal resolutions and convection schemes, and assessed against the explicit run and against sparse station observations. The parameterization succeeds in capturing the geographical distribution and seasonal cycle of convective dust storms. It can be tuned to different horizontal resolutions and convection schemes, although the details of the geographical distribution and seasonal cycle depend on the representation of the monsoon in the parent model. Different versions of the parameterization are further discussed with respect to differences in the frequency of extreme events. The results show that the parameterization is reliable and can therefore solve a long-standing problem in simulating dust storms in large-scale weather and climate models.

  15. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    SciTech Connect

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-10-20

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in a L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions.
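
    The core of the method is a scale-dependent linear bias that filters a large-scale density field into a reionization-redshift field. The sketch below applies such a bias in Fourier space to a toy Gaussian field; the parametric form b(k) = b0/(1 + k/k0)^alpha follows the description above, but the parameter values used here are placeholders rather than the fitted ones.

```python
import numpy as np

def density_to_zre(delta, box_size, b0=0.6, k0=0.2, alpha=0.6, z_mean=8.0):
    """Filter a 3-D overdensity field with a scale-dependent linear bias to obtain
    a reionization-redshift field; b0, k0 [h/Mpc], alpha and z_mean are illustrative."""
    n = delta.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)      # angular wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k = np.sqrt(kx ** 2 + ky ** 2 + kz ** 2)
    bias = b0 / (1.0 + k / k0) ** alpha
    delta_z = np.fft.ifftn(bias * np.fft.fftn(delta)).real    # delta_z = (z - z_mean)/(1 + z_mean)
    return z_mean + (1.0 + z_mean) * delta_z

# Example on a toy Gaussian random field in a (100 Mpc/h)^3 box
rng = np.random.default_rng(1)
delta = rng.normal(scale=0.1, size=(64, 64, 64))
zre = density_to_zre(delta, box_size=100.0)
print(round(zre.mean(), 2), round(zre.std(), 2))
```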

  16. Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations

    NASA Astrophysics Data System (ADS)

    Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila

    2016-07-01

    A numerical investigation of a self-consistent small parametric model (SPM) for regional large-scale cyclogenesis (RLSC) is performed using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios of the temporal dynamics of a powerful atmospheric vortex during its full life cycle. The numerical calculations show that a relevant choice of the SPM's input parameters allows the seasonal behavior of regional large-scale cyclogenesis dynamics to be described for a given number of TCs during the active season. It is shown that the SPM can also describe the wind speed variations inside the TC. Thus, using the nonlinear small parametric model, it is possible to study the features of RLSC temporal dynamics during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and different external factors such as space weather, including the solar activity level and cosmic ray variations.

  17. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect

    RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which renders the scholarly process amenable to statistical analysis and computational support. The authors present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  18. Modeled large-scale warming impacts on summer California coastal-cooling trends

    NASA Astrophysics Data System (ADS)

    Lebassi-Habtezion, Bereket; GonzáLez, Jorge; Bornstein, Robert

    2011-10-01

    Regional Atmospheric Modeling System (RAMS) meso-meteorological model simulations with a horizontal grid resolution of 4 km on an inner grid over the South Coast Air Basin of California were used to investigate effects from long-term (i.e., past 35 years) large-scale warming impacts on coastal flows. Comparison of present- and past-climate simulations showed significant increases in summer daytime sea breeze activity by up to 1.5 m s-1 (in the onshore component) and a concurrent coastal cooling of average-daily peak temperatures of up to -1.6°C, both of which support observations that the latter is an indirect "reverse reaction" to the large-scale warming of inland areas.

  19. Impacts of Large-Scale Circulation on Convection: A 2-D Cloud Resolving Model Study

    NASA Technical Reports Server (NTRS)

    Li, X; Sui, C.-H.; Lau, K.-M.

    1999-01-01

    Studies of the impacts of large-scale circulation on convection, and of the roles of convection in heat and water balances over tropical regions, are fundamentally important for understanding global climate change. Heat and water budgets over a warm pool (SST = 29.5 C) and a cold pool (SST = 26 C) were analyzed based on simulations with a two-dimensional cloud-resolving model. Here the sensitivity of the heat and water budgets to different sizes of the warm and cold pools is examined.

  20. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent-Based Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper rely on computational algorithms and procedures implemented in Matlab to simulate agent-based models, run on computing clusters that provide high-performance parallel execution. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  1. Understanding dynamics of large-scale atmospheric vortices with moist-convective shallow water model

    NASA Astrophysics Data System (ADS)

    Rostami, M.; Zeitlin, V.

    2016-08-01

    Atmospheric jets and vortices which, together with inertia-gravity waves, constitute the principal dynamical entities of large-scale atmospheric motions, are well described in the framework of one- or multi-layer rotating shallow water models, which are obtained by vertical averaging of the full “primitive” equations. There is a simple and physically consistent way to include moist convection in these models by adding a relaxational parameterization of precipitation and coupling precipitation with convective fluxes with the help of moist enthalpy conservation. We recall the construction of the moist-convective rotating shallow water (mcRSW) model and give an example of its application to upper-layer atmospheric vortices.

  2. Large-scale multi-configuration electromagnetic induction: a promising tool to improve hydrological models

    NASA Astrophysics Data System (ADS)

    von Hebel, Christian; Rudolph, Sebastian; Mester, Achim; Huisman, Johan A.; Montzka, Carsten; Weihermüller, Lutz; Vereecken, Harry; van der Kruk, Jan

    2015-04-01

    Large-scale multi-configuration electromagnetic induction (EMI) measurements use different coil configurations, i.e., coil offsets and coil orientations, to sense coil-specific depth volumes. The obtained apparent electrical conductivity (ECa) maps can be related to soil properties such as clay content, soil water content, and pore water conductivity, which are important characteristics that influence hydrological processes. Here, we use large-scale EMI measurements to investigate changes in soil texture that drive the available water supply, causing the crop development patterns that were observed in leaf area index (LAI) maps obtained from RapidEye satellite images taken after a drought period. The 20 ha test site is situated within the Ellebach catchment (Germany) and consists of a sand-and-gravel dominated upper terrace (UT) and a loamy lower terrace (LT). The large-scale multi-configuration EMI measurements were calibrated using electrical resistivity tomography (ERT) measurements at selected transects, and soil samples were taken at representative locations where changes in the electrical conductivity were observed and therefore changing soil properties were expected. By analyzing all the data, the observed LAI patterns could be attributed to buried paleo-river channel systems that contained a higher silt and clay content and provided a higher water holding capacity than the surrounding coarser material. Moreover, the measured EMI data showed the highest correlation with LAI for the deepest sensing coil offset (up to 1.9 m), which indicates that the deeper subsoil is responsible for root water uptake especially under drought conditions. To obtain a layered subsurface electrical conductivity model that shows the subsurface structures more clearly, a novel EMI inversion scheme was applied to the field data. The obtained electrical conductivity distributions were validated with soil probes and ERT transects that confirmed the inverted lateral and vertical large-scale electrical

  3. UDEC-AUTODYN Hybrid Modeling of a Large-Scale Underground Explosion Test

    NASA Astrophysics Data System (ADS)

    Deng, X. F.; Chen, S. G.; Zhu, J. B.; Zhou, Y. X.; Zhao, Z. Y.; Zhao, J.

    2015-03-01

    In this study, numerical modeling of a large-scale decoupled underground explosion test with 10 tons of TNT in Älvdalen, Sweden is performed by combining DEM and FEM with codes UDEC and AUTODYN. AUTODYN is adopted to model the explosion process, blast wave generation, and its action on the explosion chamber surfaces, while UDEC modeling is focused on shock wave propagation in jointed rock masses surrounding the explosion chamber. The numerical modeling results with the hybrid AUTODYN-UDEC method are compared with empirical estimations, purely AUTODYN modeling results, and the field test data. It is found that in terms of peak particle velocity, empirical estimations are much smaller than the measured data, while purely AUTODYN modeling results are larger than the test data. The UDEC-AUTODYN numerical modeling results agree well with the test data. Therefore, the UDEC-AUTODYN method is appropriate in modeling a large-scale explosive detonation in a closed space and the following wave propagation in jointed rock masses. It should be noted that joint mechanical and spatial properties adopted in UDEC-AUTODYN modeling are determined with empirical equations and available geological data, and they may not be sufficiently accurate.

  4. Oscillations in large-scale cortical networks: map-based model.

    PubMed

    Rulkov, N F; Timofeev, I; Bazhenov, M

    2004-01-01

    We develop a new computationally efficient approach for the analysis of complex large-scale neurobiological networks. Its key element is the use of a new phenomenological model of a neuron capable of replicating important spike pattern characteristics and designed in the form of a system of difference equations (a map). We developed a set of map-based models that replicate spiking activity of cortical fast spiking, regular spiking and intrinsically bursting neurons. Interconnected with synaptic currents these model neurons demonstrated responses very similar to those found with Hodgkin-Huxley models and in experiments. We illustrate the efficacy of this approach in simulations of one- and two-dimensional cortical network models consisting of regular spiking neurons and fast spiking interneurons to model sleep and activated states of the thalamocortical system. Our study suggests that map-based models can be widely used for large-scale simulations and that such models are especially useful for tasks where the modeling of specific firing patterns of different cell classes is important. PMID:15306740
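
    The map-based neuron referred to above is a two-variable difference equation. Below is a generic Rulkov-type map with a fast and a slow variable, iterated with illustrative parameters; it is not necessarily the exact piecewise variant used in the cited cortical network study.

```python
import numpy as np

def rulkov_map(n_steps=5000, alpha=4.3, mu=0.001, sigma=0.1, x0=-1.0, y0=-3.0):
    """Iterate a classic two-variable Rulkov map. x is the fast (spiking) variable,
    y the slow variable; parameter values are illustrative only."""
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]
        y[n + 1] = y[n] - mu * (x[n] + 1.0) + mu * sigma
    return x, y

x, y = rulkov_map()
spikes = np.count_nonzero((x[1:] > 0.0) & (x[:-1] <= 0.0))   # upward threshold crossings
print("spike-like events:", spikes)
```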

  5. A cooperative strategy for parameter estimation in large scale systems biology models

    PubMed Central

    2012-01-01

    Background Mathematical models play a key role in systems biology: they summarize the currently available knowledge in a way that allows experimentally verifiable predictions to be made. Model calibration consists of finding the parameters that give the best fit to a set of experimental data, which entails minimizing a cost function that measures the goodness of this fit. Most mathematical models in systems biology present three characteristics which make this problem very difficult to solve: they are highly non-linear, they have a large number of parameters to be estimated, and the information content of the available experimental data is frequently scarce. Hence, there is a need for global optimization methods capable of solving this problem efficiently. Results A new approach for parameter estimation of large scale models, called Cooperative Enhanced Scatter Search (CeSS), is presented. Its key feature is the cooperation between different programs (“threads”) that run in parallel on different processors. Each thread implements a state-of-the-art metaheuristic, the enhanced Scatter Search algorithm (eSS). Cooperation, meaning information sharing between threads, modifies the systemic properties of the algorithm and speeds up performance. Two parameter estimation problems involving models related to the central carbon metabolism of E. coli, which include different regulatory levels (metabolic and transcriptional), are used as case studies. The performance and capabilities of the method are also evaluated using benchmark problems of large-scale global optimization, with excellent results. Conclusions The cooperative CeSS strategy is a general purpose technique that can be applied to any model calibration problem. Its capability has been demonstrated by calibrating two large-scale models of different characteristics, improving the performance of previously existing methods in both cases. The cooperative metaheuristic presented here can be easily extended
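
    The key idea of CeSS is parallel metaheuristic threads that periodically share information. The toy sketch below illustrates only that cooperation pattern, replacing the enhanced Scatter Search threads with simple stochastic local searches on a benchmark function; everything in it is a stand-in for the actual algorithm.

```python
import numpy as np

def rosenbrock(p):
    """Benchmark cost function standing in for a model-calibration objective."""
    return float(np.sum(100.0 * (p[1:] - p[:-1] ** 2) ** 2 + (1.0 - p[:-1]) ** 2))

def cooperative_search(n_threads=4, n_rounds=20, n_local_steps=200, dim=10, seed=0):
    """Toy cooperative strategy: independent stochastic searches that periodically
    adopt the best solution found by any 'thread' (the cooperation step)."""
    rng = np.random.default_rng(seed)
    bests = [rng.uniform(-2.0, 2.0, dim) for _ in range(n_threads)]
    for _ in range(n_rounds):
        for t in range(n_threads):
            p = bests[t].copy()
            for _ in range(n_local_steps):
                cand = p + rng.normal(scale=0.1, size=dim)
                if rosenbrock(cand) < rosenbrock(p):
                    p = cand
            bests[t] = p
        global_best = min(bests, key=rosenbrock)          # cooperation: share the best
        bests = [global_best.copy() for _ in range(n_threads)]
    return global_best, rosenbrock(global_best)

best, cost = cooperative_search()
print("best cost:", round(cost, 4))
```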

  6. Formation and disruption of tonotopy in a large-scale model of the auditory cortex.

    PubMed

    Tomková, Markéta; Tomek, Jakub; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, Cyril

    2015-10-01

    There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100 000 Izhikevich spiking neurons of 17 different types and almost 21 million synapses, which are evolved according to Spike-Timing-Dependent Plasticity (STDP), and has an architecture akin to existing observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena via analysing the activity of neuronal subtypes and testing different causal interventions into the simulation. Our model is able to produce experimental predictions on a cell type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulation during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available. PMID:26344164

  7. Hyper-Resolution Large Scale Flood Inundation Modeling: Development of AutoRAPID Model

    NASA Astrophysics Data System (ADS)

    Tavakoly, A. A.; Follum, M. L.; Wahl, M.; Snow, A.

    2015-12-01

    Streamflow and the resultant flood inundation are defining elements in large scale flood analyses. High-fidelity prediction of flood inundation risk requires hydrologic and hydrodynamic modeling at hyper-resolution (<100 m) scales. Using spatiotemporal data from climate models as the driver, we couple a continental scale river routing model known as Routing Application for Parallel ComputatIon of Discharge (RAPID) with a regional scale flood delineation model called AutoRoute to estimate flood extents. We demonstrate how the coupled tool, referred to as AutoRAPID, can quickly and efficiently simulate flood extents using a high resolution dataset (~10 m) at the regional scale (> 100,000 km2). The AutoRAPID framework is implemented over 230,000 km2 in the Midwestern United States (between latitude 38°N and 44°N, and longitude 86°W to 91°W, approximately 8% of the Mississippi River Basin) using a 10 m DEM. We generate the flood inundation map over the entire area for a June 2008 flood event. The model is compared with observed data at five select locations: Spencer, IN; Newberry, IN; Gays Mills, WI; Ft. Atkinson, WI; and Janesville, WI. We show that the model results generally agree well with observed flow and flood inundation data and suggest that the AutoRAPID model can be considered for several potential applications, such as forecasting flow and flood inundation information, generating flood recurrence maps using high resolution vector river data, and supporting emergency management applications to protect or evacuate large areas when time is limited and data are sparse.

  8. Parameterization of plume chemistry into large-scale atmospheric models: Application to aircraft NOx emissions

    NASA Astrophysics Data System (ADS)

    Cariolle, D.; Caro, D.; Paoli, R.; Hauglustaine, D. A.; CuéNot, B.; Cozic, A.; Paugam, R.

    2009-10-01

    A method is presented to parameterize, for large-scale models, the impact of the nonlinear chemical reactions occurring in the plume generated by concentrated NOx sources. The resulting plume parameterization is implemented into global models and used to evaluate the impact of aircraft emissions on atmospheric chemistry. Compared to previous approaches that rely on corrected emissions or corrective factors to account for the nonlinear chemical effects, the present parameterization is based on the representation of the plume effects via a fuel tracer and a characteristic lifetime during which the nonlinear interactions between species are important, operating via rates of conversion for the NOx species and an effective reaction rate for O3. The implementation of this parameterization ensures mass conservation and allows the transport of emissions at high concentrations in plume form by the model dynamics. Results from the model simulations of the impact on atmospheric ozone of aircraft NOx emissions are in rather good agreement with previous work. It is found that ozone production is decreased by 10 to 25% in the Northern Hemisphere, with the largest effects in the north Atlantic flight corridor, when the plume effects on the global-scale chemistry are taken into account. These figures are consistent with evaluations made with corrected emissions, but regional differences are noticeable owing to the possibility offered by this parameterization to transport emitted species in plume form prior to their dilution at large scale. This method could be further improved by making the parameters used by the parameterization functions of the local temperature, humidity and turbulence properties diagnosed by the large-scale model. Further extensions of the method can also be considered to account for multistep dilution regimes during the plume dissipation. Furthermore, the present parameterization can be adapted to other types of point-source NOx emissions that have to be
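
    The parameterization keeps emitted NOx in a plume tracer with a characteristic lifetime and applies conversion rates while the plume persists. The sketch below is a highly simplified, single-box illustration of that bookkeeping; the lifetime, conversion rate, and time step are assumed values, not those of the published scheme.

```python
import numpy as np

def plume_tracer_step(plume_nox, grid_nox, dt, tau=3600.0, k_conv=2.0e-5):
    """One time step of a simplified plume-in-grid treatment (all values assumed):
    plume NOx is released to the grid-scale NOx with an e-folding lifetime tau,
    while a fraction is converted to reservoir species at rate k_conv in the plume."""
    released = plume_nox * (1.0 - np.exp(-dt / tau))
    converted = plume_nox * k_conv * dt
    plume_nox = max(plume_nox - released - converted, 0.0)
    grid_nox = grid_nox + released
    return plume_nox, grid_nox, converted

plume, grid, lost = 1.0, 0.0, 0.0            # one unit of emitted NOx in plume form
for _ in range(48):                          # 48 half-hour steps = 1 day
    plume, grid, conv = plume_tracer_step(plume, grid, dt=1800.0)
    lost += conv
print(round(plume, 3), round(grid, 3), round(lost, 3))
```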

  9. Large-scale circulation patterns, instability factors and global precipitation modeling as influenced by external forcing

    NASA Astrophysics Data System (ADS)

    Bundel, A.; Kulikova, I.; Kruglova, E.; Muravev, A.

    2003-04-01

    The scope of the study is to estimate the relationship between large-scale circulation regimes, various instability indices and global precipitation under different boundary conditions, considered as external forcing. The experiments were carried out in the ensemble-prediction framework of the dynamic-statistical monthly forecast scheme run in the Hydrometeorological Research Center of Russia every ten days. The extension to seasonal intervals makes it necessary to investigate the role of slowly changing boundary conditions, among which the sea surface temperature (SST) may be defined as the most effective factor. Continuous integrations of the global spectral T41L15 model for the whole year 2000 (starting from January 1) were performed with the climatic SST and the Reynolds Archive SSTs. Monthly values of the SST were projected onto the days of the year using a spline interpolation technique. First, the global precipitation values in the experiments were compared to the GPCP (Global Precipitation Climate Program) daily observation data. Although the global mean precipitation is underestimated by the model, some large-scale regional amounts correspond to the real ones (e.g. for Europe) fairly well. On the whole, however, anomaly phases failed to be reproduced. The precipitation averaged over all land areas revealed a greater sensitivity to the SSTs than that over the oceans. Wavelet analysis was applied to separate the low- and high-frequency signals of the SST influence on the large-scale circulation and precipitation. A derivative of the Wallace-Gutzler teleconnection index for the East-Atlantic oscillation was taken as the circulation characteristic. The daily oscillation index values and precipitation amounts averaged over Europe were decomposed using a wavelet approach with different “mother wavelets” up to approximation level 3. It was demonstrated that an increase in the precipitation amount over Europe was associated with the zonal flow intensification over the

  10. A Large-Scale, Energetic Model of Cardiovascular Homeostasis Predicts Dynamics of Arterial Pressure in Humans

    PubMed Central

    Roytvarf, Alexander; Shusterman, Vladimir

    2008-01-01

    The energetic balance of forces in the cardiovascular system is vital to the stability of blood flow to all physiological systems in mammals. Yet, a large-scale, theoretical model summarizing the energetic balance of major forces in a single, mathematically closed system has not been described. Although a number of computer simulations have been successfully performed with the use of analog models, the analysis of the energetic balance of forces in such models is obscured by a large number of interacting elements. Hence, the goal of our study was to develop a theoretical model that represents the large-scale, energetic balance in the cardiovascular system, including the energies of the arterial pressure wave, blood flow, and the smooth muscle tone of arterial walls. Because the emphasis of our study was on tracking beat-to-beat changes in the balance of forces, we used a simplified representation of the blood pressure wave as a trapezoidal pressure pulse with a strong-discontinuity leading front. This allowed a significant reduction in the number of required parameters. Our approach has been validated using theoretical analysis, and its accuracy has been confirmed experimentally. The model predicted the dynamics of arterial pressure in human subjects undergoing physiological tests and provided insights into the relationships between arterial pressure and pressure wave velocity. PMID:18269976

  11. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  12. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    PubMed

    Nogaret, Alain; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  13. Design of a V/STOL propulsion system for a large-scale fighter model

    NASA Technical Reports Server (NTRS)

    Willis, W. S.

    1981-01-01

    Modifications were made to the existing Large-Scale STOL fighter model to simulate a V/STOL configuration. Modifications include the substitution of two-dimensional lift/cruise exhaust nozzles in the nacelles, and the addition of a third J97 engine in the fuselage to supply a remote exhaust nozzle simulating a Remote Augmented Lift System. A preliminary design of the inlet and exhaust ducting for the third engine was developed, and a detailed design was completed of the hot exhaust ducting and remote nozzle.

  14. Sensitivity analysis of key components in large-scale hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods in key components of hydro-economic models such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate and its effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  15. An assembly model for simulation of large-scale ground water flow and transport.

    PubMed

    Huang, Junqi; Christ, John A; Goltz, Mark N

    2008-01-01

    When managing large-scale ground water contamination problems, it is often necessary to model flow and transport using finely discretized domains--for instance (1) to simulate flow and transport near a contamination source area or in the area where a remediation technology is being implemented; (2) to account for small-scale heterogeneities; (3) to represent ground water-surface water interactions; or (4) some combination of these scenarios. A model with a large domain and fine-grid resolution will need extensive computing resources. In this work, a domain decomposition-based assembly model implemented in a parallel computing environment is developed, which will allow efficient simulation of large-scale ground water flow and transport problems using domain-wide grid refinement. The method employs common ground water flow (MODFLOW) and transport (RT3D) simulators, enabling the solution of almost all commonly encountered ground water flow and transport problems. The basic approach partitions a large model domain into any number of subdomains. Parallel processors are used to solve the model equations within each subdomain. Schwarz iteration is applied to match the flow solution at the subdomain boundaries. For the transport model, an extended numerical array is implemented to permit the exchange of dispersive and advective flux information across subdomain boundaries. The model is verified using a conventional single-domain model. Model simulations demonstrate that the proposed model operated in a parallel computing environment can result in considerable savings in computer run times (between 50% and 80%) compared with conventional modeling approaches and may be used to simulate grid discretizations that were formerly intractable.
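
    The assembly model matches subdomain solutions at their boundaries with Schwarz iteration. The sketch below shows the idea on a toy problem: overlapping alternating Schwarz for steady one-dimensional confined flow between two fixed heads, with exact subdomain solves standing in for MODFLOW/RT3D; the sizes, overlap, and boundary heads are arbitrary.

```python
import numpy as np

def solve_subdomain(h_left, h_right, n):
    """Exact steady 1-D head profile between two fixed heads (stand-in for a solver)."""
    return np.linspace(h_left, h_right, n)

def schwarz_two_subdomains(n=101, overlap=10, h0=10.0, h1=2.0, tol=1e-10):
    """Toy overlapping (alternating) Schwarz iteration on two subdomains of a 1-D grid."""
    mid = n // 2
    left = np.full(mid + overlap, h0)            # initial guesses on each subdomain
    right = np.full(n - mid + overlap, h1)
    for it in range(1, 1001):
        # left subdomain takes its right boundary value from the right subdomain
        new_left = solve_subdomain(h0, right[2 * overlap - 1], left.size)
        # right subdomain takes its left boundary value from the updated left solution
        new_right = solve_subdomain(new_left[-2 * overlap], h1, right.size)
        change = max(np.abs(new_left - left).max(), np.abs(new_right - right).max())
        left, right = new_left, new_right
        if change < tol:
            break
    return left, right, it

left, right, iters = schwarz_two_subdomains()
print("converged after", iters, "Schwarz iterations")
```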

  16. Development of a coupled soil erosion and large-scale hydrology modeling system

    NASA Astrophysics Data System (ADS)

    Mao, Dazhi; Cherkauer, Keith A.; Flanagan, Dennis C.

    2010-08-01

    Soil erosion models are usually limited in their application to the field scale; however, the management of land resources requires information at the regional scale. Large-scale physically based land surface schemes (LSS) provide estimates of regional scale hydrologic processes that contribute to erosion. If scaling issues are adequately addressed, coupling an LSS to a physically based erosion model can provide a tool to study the regional impact of soil erosion. A coupling scheme was developed using the Variable Infiltration Capacity (VIC) model to produce hydrologic inputs for the stand-alone Water Erosion Prediction Project-Hillslope Erosion (WEPP-HE) program, accounting for both temporal and spatial scaling issues. Precipitation events were disaggregated from daily to hourly and used with the VIC model to generate hydrologic fluxes. Slope profiles were downscaled from 30 arc second to 30 m hillslopes. Additionally, soil texture and erodibility were adjusted with simplified assumptions based on the full WEPP model. Soil erosion at the large scale was represented on a VIC model grid cell basis by applying WEPP-HE to subsamples of 30 m hillslopes. On an average annual basis, results showed that the coupled model was comparable with full WEPP model predictions. On an event basis, the coupled model system captured more small erosion events, with erodibility adjustments of the same magnitude as from the full WEPP model simulations. Differences in results can be attributed to discrepancies in hydrologic data calculations and simplified assumptions in vegetation and soil erodibility. Overall, the coupled model demonstrated the feasibility of erosion prediction for large river basins.

  17. Management and services for large-scale virtual 3D urban model data based on network

    NASA Astrophysics Data System (ADS)

    He, Zhengwei; Chen, Jing; Wu, Huayi

    2008-10-01

    The buildings in a modern city are complex and diverse, and their quantity is huge. This poses a major challenge for constructing 3D GIS in a network environment and, eventually, for realizing the Digital Earth. After analyzing the characteristics of network services for massive 3D urban building model data, this paper focuses on the organization and management of spatial data and the network service strategy, and proposes a progressive network transmission scheme based on spatial resolution and the component elements of 3D building model data. Next, the paper puts forward a multistage-link three-dimensional spatial data organization model and a spatial index encoding method based on a full-level quadtree structure. Then, a virtual earth platform, called GeoGlobe, was developed using the above theory. Experimental results show that the above 3D spatial data management model and service theory can effectively provide network services for large-scale 3D urban model data. The application results and user experience are good.
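
    The paper relies on a full-level quadtree spatial index for organizing and progressively transmitting 3D city model tiles. A minimal sketch of one way to compute such a quadtree cell key from a longitude/latitude position follows; the digit layout and the example coordinates are assumptions, not the paper's exact encoding.

```python
def quadtree_code(lon, lat, level, west=-180.0, east=180.0, south=-90.0, north=90.0):
    """Encode a point as a full-level quadtree key (Morton-style quadrant digits).
    Tiles sharing a key prefix are spatially nested, which supports progressive
    transmission from coarse to fine resolution."""
    digits = []
    for _ in range(level):
        mid_lon = 0.5 * (west + east)
        mid_lat = 0.5 * (south + north)
        q = 0
        if lon >= mid_lon:
            q |= 1
            west = mid_lon
        else:
            east = mid_lon
        if lat >= mid_lat:
            q |= 2
            south = mid_lat
        else:
            north = mid_lat
        digits.append(str(q))
    return "".join(digits)

print(quadtree_code(114.3, 30.6, level=8))   # example key for a hypothetical building model
```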

  18. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaptation measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and local detail that influences their temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry, and the representation of hydrologic processes influencing the surface water balance, such as open water evaporation from inundated areas and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation, including its spatio-temporal variations at the large scale, yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D-FM being capable of representing the model domain as a flexible mesh, model accuracy is only improved at relevant locations (river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is thereby schematized by 2D floodplains, derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with one-way coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to two-way coupling to fully represent evaporation

  19. Comparison of the KAMELEON fire model to large-scale open pool fire data

    SciTech Connect

    Nicolette, V.F.; Gritzo, L.A.; Holen, J.; Magnussen, B.F.

    1994-06-01

    A comparison of the KAMELEON Fire model to large-scale open pool fire experimental data is presented. The model was used to calculate large-scale JP-4 pool fires with and without wind, and with and without large objects in the fire. The effect of wind and large objects on the fire environment is clearly seen. For the pool fire calculations without any object in the fire, excellent agreement is seen in the location of the oxygen-starved region near the pool center. Calculated flame temperatures are about 200--300 K higher than measured. This results in higher heat fluxes back to the fuel pool and higher fuel evaporation rates (by a factor of 2). Fuel concentrations at lower elevations and peak soot concentrations are in good agreement with data. For pool fire calculations with objects, similar trends in the fire environment are observed. Excellent agreement is seen in the distribution of the heat flux around a cylindrical calorimeter in a rectangular pool with wind effects. The magnitude of the calculated heat flux to the object is high by a factor of 2 relative to the test data, due to the higher temperatures calculated. For the case of a large flat plate adjacent to a circular pool, excellent qualitative agreement is seen in the predicted and measured flame shapes as a function of wind.

  20. Toward large-scale computational fluid-solid-growth models of intracranial aneurysms.

    PubMed

    Di Achille, Paolo; Humphrey, Jay D

    2012-06-01

    Complementary advances in medical imaging, vascular biology, genetics, biomechanics, and computational methods promise to enable the development of mathematical models of the enlargement and possible rupture of intracranial aneurysms that can help inform clinical decisions. Nevertheless, this ultimate goal is extremely challenging given the many diverse and complex factors that control the natural history of these lesions. As it should be expected, therefore, predictive models continue to develop in stages, with new advances incorporated as data and computational methods permit. In this paper, we submit that large-scale, patient-specific, fluid-solid interaction models of the entire circle of Willis and included intracranial aneurysm are both computationally tractable and necessary as a critical step toward fluid-solid-growth (FSG) models that can address the evolution of a lesion while incorporating information on the genetically and mechanobiologically determined microstructure of the wall.

  1. Aerodynamic force measurement on a large-scale model in a short duration test facility

    SciTech Connect

    Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.

    2005-03-01

    A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
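
    The measurement principle is to recover the rigid-body acceleration of the wire-suspended model from two accelerometers, so that the force follows from F = m·a while the natural-vibration component cancels. The sketch below illustrates this with a synthetic signal in which the vibration appears with opposite sign at the two sensor locations; the weights, mass, and vibration frequency are invented for illustration.

```python
import numpy as np

def drag_from_accelerometers(a1, a2, mass, w1=0.5, w2=0.5):
    """Combine two accelerometer traces into a rigid-body acceleration and a force.
    Equal weights are an assumption; in practice they would be chosen from the
    model's vibration mode shape."""
    a_rigid = w1 * np.asarray(a1) + w2 * np.asarray(a2)
    return mass * a_rigid

# Synthetic 1 ms record: constant drag deceleration plus a 3 kHz elastic vibration
t = np.linspace(0.0, 1.0e-3, 100)
a_drag = 2.0 * np.ones_like(t)                      # m/s^2 (assumed)
vibration = 0.5 * np.sin(2 * np.pi * 3000.0 * t)    # opposite phase at the two sensors
force = drag_from_accelerometers(a_drag + vibration, a_drag - vibration, mass=400.0)
print(round(force.mean(), 1), "N")                  # vibration cancels, leaving m*a
```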

  2. Large-scale shell-model calculations on the spectroscopy of N <126 Pb isotopes

    NASA Astrophysics Data System (ADS)

    Qi, Chong; Jia, L. Y.; Fu, G. J.

    2016-07-01

    Large-scale shell-model calculations are carried out in the model space including the neutron-hole orbitals 2p1/2, 1f5/2, 2p3/2, 0i13/2, 1f7/2, and 0h9/2 to study the structure and electromagnetic properties of neutron-deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in the isotopes 206-194Pb. The lighter isotopes are calculated with an importance-truncation approach constructed based on the monopole Hamiltonian. The full shell-model results also agree well with our generalized seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying 0+ and 2+ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configurations in this region.

  3. The relationship between large-scale and convective states in the tropics - Towards an improved representation of convection in large-scale models

    SciTech Connect

    Jakob, Christian

    2015-02-26

    This report summarises an investigation into the relationship of tropical thunderstorms to the atmospheric conditions they are embedded in. The study is based on the use of radar observations at the Atmospheric Radiation Measurement site in Darwin run under the auspices of the DOE Atmospheric Systems Research program. Linking the larger scales of the atmosphere with the smaller scales of thunderstorms is crucial for the development of the representation of thunderstorms in weather and climate models, which is carried out by a process termed parametrisation. Through the analysis of radar and wind profiler observations the project made several fundamental discoveries about tropical storms and quantified the relationship of the occurrence and intensity of these storms to the large-scale atmosphere. We were able to show that the rainfall averaged over an area the size of a typical climate model grid-box is largely controlled by the number of storms in the area, and less so by the storm intensity. This allows us to completely rethink the way we represent such storms in climate models. We also found that storms occur in three distinct categories based on their depth and that the transition between these categories is strongly related to the larger scale dynamical features of the atmosphere more so than its thermodynamic state. Finally, we used our observational findings to test and refine a new approach to cumulus parametrisation which relies on the stochastic modelling of the area covered by different convective cloud types.

  4. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-03-01

    The Prediction in Ungauged Basins (PUB) scientific initiative (2003-2012 by IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines to make predictions in catchments without observed runoff data. At present, there is increased interest in applying catchment models over large domains and large data samples in a multi-basin manner. However, such modelling involves several sources of uncertainty, which may be caused by imperfect input data, in particular regional and global databases. This may lead to inaccurate model parameterisation and incomplete process understanding. In order to bridge the gap between the best practices for single catchments and large-scale hydrology, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). By using examples from a recent HYPE hydrological model set-up on the Indian subcontinent, named India-HYPE v1.0, we explore the recommendations, indicate challenges and recommend quality checks to avoid erroneous assumptions. We identify the obstacles, ways to overcome them and describe the work process related to: (a) errors and inconsistencies in global databases, unknown human impacts, poor data quality; (b) robust approaches to identify parameters using a stepwise calibration approach, remote sensing data, expert knowledge and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong hydro-climatic gradient over the subcontinent, a single model can adequately describe the spatial variability in dominant hydrological processes at the catchment scale. Eventually, during calibration of India-HYPE, the median Kling-Gupta Efficiency for

  5. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    PubMed

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272
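
    The structural-plasticity model grows and deletes synaptic elements under a homeostatic rule until a target activity level is reached. The sketch below shows a bare-bones version of such a rule in plain NumPy; the linear growth law and all numbers are illustrative stand-ins and deliberately avoid mimicking NEST's actual API.

```python
import numpy as np

def update_synaptic_elements(activity, elements, target=0.05, growth_rate=0.1):
    """Homeostatic growth rule: grow free synaptic elements while a neuron's activity
    proxy is below the target level, retract them when above (values illustrative)."""
    return np.maximum(elements + growth_rate * (target - activity), 0.0)

rng = np.random.default_rng(0)
activity = rng.uniform(0.0, 0.1, size=100)      # per-neuron activity proxy
pre_elements = np.zeros(100)
post_elements = np.zeros(100)
for _ in range(1000):                           # structural plasticity update steps
    pre_elements = update_synaptic_elements(activity, pre_elements)
    post_elements = update_synaptic_elements(activity, post_elements)
# free pre- and post-synaptic elements would then be paired at random to create synapses
print(round(pre_elements.sum(), 1), round(post_elements.sum(), 1))
```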

  6. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity

    PubMed Central

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272

  7. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between the government, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution towards overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
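
    To make the notion of individual adaptive behaviour concrete, the sketch below shows one very simple household decision rule of the kind such agent-based models may employ: each agent compares the perceived expected flood losses over a planning horizon with and without a protective measure, optionally subsidised by the government. All names and numbers are illustrative assumptions, not the behaviour model used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_households = 1000
flood_prob = 0.01                     # annual flood probability faced by each agent
damage = rng.uniform(2e4, 2e5, n_households)    # potential flood damage (EUR)
measure_cost = 5e3                    # up-front cost of a protective measure (EUR)
damage_reduction = 0.4                # fraction of damage the measure avoids
subsidy = 0.2                         # share of the cost covered by the government
horizon = 20                          # planning horizon in years

# Boundedly rational agents: each perceives the flood probability with noise.
perceived_prob = flood_prob * rng.lognormal(0.0, 0.8, n_households)

loss_without = horizon * perceived_prob * damage
loss_with = (horizon * perceived_prob * damage * (1 - damage_reduction)
             + measure_cost * (1 - subsidy))
adopts = loss_with < loss_without

print(f"households adopting the measure: {adopts.mean():.1%}")
```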

  8. Pangolin v1.0, a conservative 2-D transport model for large scale parallel calculation

    NASA Astrophysics Data System (ADS)

    Praga, A.; Cariolle, D.; Giraud, L.

    2014-07-01

    To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric transport model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach was chosen both for mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude-longitude grid, Pangolin uses a quasi-area-preserving reduced latitude-longitude grid. The features of the regular grid are exploited to improve parallel performance, and a custom domain decomposition algorithm is presented. To assess the validity of the transport scheme, its results are compared with state-of-the-art models on analytical test cases. Finally, parallel performance is shown in terms of strong scaling and confirms efficient scalability up to a few hundred cores.
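
    The sketch below illustrates one common way to build a quasi-area-preserving reduced latitude-longitude grid: the number of longitude cells per latitude band is scaled with the cosine of latitude so that cell areas stay roughly constant toward the poles. This is a generic illustration of the idea, not Pangolin's actual grid-generation algorithm.

```python
import numpy as np

def reduced_grid(n_lat=90, n_lon_equator=360):
    """Number of longitude cells per latitude band, scaled with cos(latitude)
    so that cell areas stay roughly constant from equator to pole."""
    lat_edges = np.linspace(-90.0, 90.0, n_lat + 1)
    lat_centers = 0.5 * (lat_edges[:-1] + lat_edges[1:])
    n_lon = np.maximum(
        1, np.rint(n_lon_equator * np.cos(np.deg2rad(lat_centers))).astype(int))
    return lat_centers, n_lon

lats, nlon = reduced_grid()
print("cells in band nearest the pole:", nlon[0])
print("cells in band nearest the equator:", nlon[len(nlon) // 2])
print("total cells:", int(nlon.sum()), "vs. regular grid:", 90 * 360)
```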

  9. Meta-Analysis in Human Neuroimaging: Computational Modeling of Large-Scale Databases

    PubMed Central

    Fox, Peter T.; Lancaster, Jack L.; Laird, Angela R.; Eickhoff, Simon B.

    2016-01-01

    Spatial normalization—applying standardized coordinates as anatomical addresses within a reference space—was introduced to human neuroimaging research nearly 30 years ago. Over these three decades, an impressive series of methodological advances have adopted, extended, and popularized this standard. Collectively, this work has generated a methodologically coherent literature of unprecedented rigor, size, and scope. Large-scale online databases have compiled these observations and their associated meta-data, stimulating the development of meta-analytic methods to exploit this expanding corpus. Coordinate-based meta-analytic methods have emerged and evolved in rigor and utility. Early methods computed cross-study consensus, in a manner roughly comparable to traditional (nonimaging) meta-analysis. Recent advances now compute coactivation-based connectivity, connectivity-based functional parcellation, and complex network models powered from data sets representing tens of thousands of subjects. Meta-analyses of human neuroimaging data in large-scale databases now stand at the forefront of computational neurobiology. PMID:25032500

  10. Inclusive constraints on unified dark matter models from future large-scale surveys

    NASA Astrophysics Data System (ADS)

    Camera, Stefano; Carbone, Carmelita; Moscardini, Lauro

    2012-03-01

    In recent years, cosmological models where the properties of the dark components of the Universe — dark matter and dark energy — are accounted for by a single "dark fluid" have drawn increasing attention and interest. Amongst many proposals, Unified Dark Matter (UDM) cosmologies are promising candidates as effective theories. In these models, a scalar field with a non-canonical kinetic term in its Lagrangian mimics both the accelerated expansion of the Universe at late times and the clustering properties of the large-scale structure of the cosmos. However, UDM models also present peculiar behaviours, the most interesting one being the fact that the perturbations in the dark-matter component of the scalar field do have a non-negligible speed of sound. This gives rise to an effective Jeans scale for the Newtonian potential, below which the dark fluid does not cluster any more. This implies a growth of structures fairly different from that of the concordance ΛCDM model. In this paper, we demonstrate that forthcoming large-scale surveys will be able to discriminate between viable UDM models and ΛCDM to a good degree of accuracy. To this purpose, the planned Euclid satellite will be a powerful tool, since it will provide very accurate data on galaxy clustering and the weak lensing effect of cosmic shear. Finally, we also exploit the constraining power of the ongoing CMB Planck experiment. Although our approach is the most conservative, with the inclusion of only well-understood, linear dynamics, in the end we also show what could be done if some amount of non-linear information were included.

  11. Inclusive constraints on unified dark matter models from future large-scale surveys

    SciTech Connect

    Camera, Stefano; Carbone, Carmelita; Moscardini, Lauro (e-mail: carmelita.carbone@unibo.it)

    2012-03-01

    In recent years, cosmological models where the properties of the dark components of the Universe — dark matter and dark energy — are accounted for by a single "dark fluid" have drawn increasing attention and interest. Amongst many proposals, Unified Dark Matter (UDM) cosmologies are promising candidates as effective theories. In these models, a scalar field with a non-canonical kinetic term in its Lagrangian mimics both the accelerated expansion of the Universe at late times and the clustering properties of the large-scale structure of the cosmos. However, UDM models also present peculiar behaviours, the most interesting one being the fact that the perturbations in the dark-matter component of the scalar field do have a non-negligible speed of sound. This gives rise to an effective Jeans scale for the Newtonian potential, below which the dark fluid does not cluster any more. This implies a growth of structures fairly different from that of the concordance ΛCDM model. In this paper, we demonstrate that forthcoming large-scale surveys will be able to discriminate between viable UDM models and ΛCDM to a good degree of accuracy. To this purpose, the planned Euclid satellite will be a powerful tool, since it will provide very accurate data on galaxy clustering and the weak lensing effect of cosmic shear. Finally, we also exploit the constraining power of the ongoing CMB Planck experiment. Although our approach is the most conservative, with the inclusion of only well-understood, linear dynamics, in the end we also show what could be done if some amount of non-linear information were included.

  12. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    SciTech Connect

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the μsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
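
    The rollback mechanism described here, reverse computation combined with a small amount of incremental state saving, can be illustrated with a toy event that moves individuals between compartments. The sketch below is a conceptual illustration only, unrelated to the actual μsik implementation; all names are hypothetical.

```python
import random

class InfectionEvent:
    """Toy reversible event: move some susceptibles (S) to infected (I) in one
    region. forward() saves only what cannot be recomputed (incremental state
    saving: the RNG state); reverse() applies the exact inverse update."""

    def __init__(self, region, max_infections):
        self.region = region
        self.max_infections = max_infections

    def forward(self, state, rng):
        self.saved_rng_state = rng.getstate()        # incremental state saving
        draw = rng.randint(1, self.max_infections)
        self.moved = min(draw, state["S"][self.region])
        state["S"][self.region] -= self.moved
        state["I"][self.region] += self.moved

    def reverse(self, state, rng):
        # Reverse computation: undo the forward update exactly.
        state["S"][self.region] += self.moved
        state["I"][self.region] -= self.moved
        rng.setstate(self.saved_rng_state)           # restore the RNG

rng = random.Random(42)
state = {"S": [100, 80], "I": [1, 0]}
ev = InfectionEvent(region=0, max_infections=5)
ev.forward(state, rng)       # optimistic execution ...
ev.reverse(state, rng)       # ... rolled back on a causality violation
assert state == {"S": [100, 80], "I": [1, 0]}
print("state restored:", state)
```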

  13. A review of large-scale LNG spills : experiment and modeling.

    SciTech Connect

    Luketa-Hanlin, Anay Josephine

    2005-04-01

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided.

  14. A review of large-scale LNG spills: experiments and modeling.

    PubMed

    Luketa-Hanlin, Anay

    2006-05-20

    The prediction of the possible hazards associated with the storage and transportation of liquefied natural gas (LNG) by ship has motivated a substantial number of experimental and analytical studies. This paper reviews the experimental and analytical work performed to date on large-scale spills of LNG. Specifically, experiments on the dispersion of LNG, as well as experiments of LNG fires from spills on water and land are reviewed. Explosion, pool boiling, and rapid phase transition (RPT) explosion studies are described and discussed, as well as models used to predict dispersion and thermal hazard distances. Although there have been significant advances in understanding the behavior of LNG spills, technical knowledge gaps to improve hazard prediction are identified. Some of these gaps can be addressed with current modeling and testing capabilities. A discussion of the state of knowledge and recommendations to further improve the understanding of the behavior of LNG spills on water is provided. PMID:16271829

  15. GPU-Based Parallelized Solver for Large Scale Vascular Blood Flow Modeling and Simulations.

    PubMed

    Santhanam, Anand P; Neylon, John; Eldredge, Jeff; Teran, Joseph; Dutson, Erik; Benharash, Peyman

    2016-01-01

    Cardiovascular blood flow simulations are essential in understanding blood flow behavior during normal and disease conditions. To date, such blood flow simulations have only been done at a macro-scale level due to computational limitations. In this paper, we present a GPU-based large-scale solver that enables modeling the flow even in the smallest arteries. A mechanical equivalent of the circuit-based flow modeling system is first developed to employ the GPU computing framework. Numerical studies were performed using a set of 10 million connected vascular elements. Run-time flow analyses were performed to simulate vascular blockages, as well as arterial cut-off. Our results showed that we can achieve ~100 FPS using a GTX 680M and ~40 FPS using a Tegra K1 computing platform. PMID:27046603
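
    The circuit analogy mentioned above can be illustrated at a very small scale: treating vessel segments as hydraulic resistances, nodal pressures follow from a conductance (Laplacian) system and segment flows from pressure differences. The sketch below solves such a toy four-node network; it is a conceptual illustration only, not the authors' GPU solver, and all values are assumptions.

```python
import numpy as np

# Toy vessel network: 4 nodes, edges with hydraulic resistances (Poiseuille-like).
# Node 0 is the inlet (fixed pressure), node 3 the outlet (fixed pressure).
edges = [(0, 1, 1.0), (1, 2, 2.0), (1, 3, 4.0), (2, 3, 1.0)]   # (i, j, resistance)
n_nodes = 4
G = np.zeros((n_nodes, n_nodes))            # conductance (Laplacian) matrix
for i, j, r in edges:
    g = 1.0 / r
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

fixed = {0: 100.0, 3: 0.0}                  # boundary pressures (arbitrary units)
free = [k for k in range(n_nodes) if k not in fixed]

# Solve G_ff * p_free = -G_fc * p_fixed for the unknown interior pressures.
p = np.zeros(n_nodes)
p[list(fixed)] = list(fixed.values())
A = G[np.ix_(free, free)]
b = -G[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
p[free] = np.linalg.solve(A, b)

flows = {(i, j): (p[i] - p[j]) / r for i, j, r in edges}    # flow per vessel
print("nodal pressures:", p.round(2))
print("segment flows:", {e: round(q, 2) for e, q in flows.items()})
```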

  16. Large-scale shell-model calculations of nuclei around mass 210

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Higashiyama, K.; Yoshinaga, N.

    2016-06-01

    Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For a phenomenological effective two-body interaction, one set of the monopole-pairing and quadrupole-quadrupole interactions including the multipole-pairing interactions is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.

  17. Large scale landslide mud flow modeling, simulation, and comparison with observations

    NASA Astrophysics Data System (ADS)

    Liu, F.; Shao, X.; Zhang, B.

    2012-12-01

    Landslides are catastrophic natural events. Modeling, simulation, and early warning of landslide events can protect lives and property; the study of landslides therefore has important scientific and practical value. In this research, we constructed a high-performance parallel fluid dynamics model to study the large-scale landslide transport and evolution process. This model solves the shallow water equations derived from the 3-dimensional Euler equations in a Cartesian coordinate system. Based on bottom topography, initial conditions, bottom friction, mudflow viscosity coefficient, density and other parameters, this model predicts the landslide transport process and deposition distribution. Using 3-dimensional bottom topography data from a digital elevation model of the Zhou Qu area, the model reproduces the onset, transport and deposition processes of the Zhou Qu landslide. It also calculates the spatial and temporal distribution of the mudflow transport route, deposition depth, and kinetic energy of the event. This model, together with an early warning system, can lead to significant improvements in construction planning in landslide-susceptible areas. [Figures: Zhou Qu topography from digital elevation model; modeling result from PLM (parallel landslide model)]

  18. Microbranching in mode-I fracture using large-scale simulations of amorphous and perturbed-lattice models

    NASA Astrophysics Data System (ADS)

    Heizler, Shay I.; Kessler, David A.

    2015-07-01

    We study the high-velocity regime of the mode-I fracture instability wherein small microbranches start to appear near the main crack, using large-scale simulations. Some of the features of those microbranches have been reproduced qualitatively in smaller-scale studies [using O(10^4) atoms] on both a model of an amorphous material (via the continuous random network model) and using perturbed-lattice models. In this study, larger-scale simulations [O(10^6) atoms] were performed using multithreading computing on a GPU device, in order to achieve more physically realistic results. First, we find that the microbranching pattern appears to be converging with the lattice width. Second, the simulations reproduce the growth of the size of a microbranch as a function of the crack velocity, as well as the increase of the amplitude of the derivative of the electrical-resistance root-mean square with respect to time as a function of the crack velocity. In addition, the simulations yield the correct branching angle of the microbranches, and the power-law exponent governing the shape of the microbranches seems to be lower than unity, so that the side cracks turn over in the direction of propagation of the main crack as seen in experiment.

  19. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides us with the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For the MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agent's beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ is used to describe the agent's attitude towards the fluctuation of crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and may not even be measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based Cloud Computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than about 28 days if we run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
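
    The quantity estimated by the PCE-based variance decomposition above is the set of Sobol sensitivity indices. The sketch below estimates first-order Sobol indices with a plain Monte Carlo (Saltelli-style) estimator instead of PCE, applied to a stand-in function with five inputs named after the behavioral parameters; the function and all numbers are illustrative assumptions, not the coupled MAS-groundwater model.

```python
import numpy as np

def toy_model(x):
    """Stand-in for the coupled MAS-groundwater model: 'profit' as a function of
    five inputs named after the behavioral parameters in the abstract."""
    k_pr, n_pr, k_prep, n_prep, lam = x.T
    return 3.0 * k_pr + 0.5 * n_pr + 1.0 * k_prep + 0.1 * n_prep - 2.0 * lam ** 2

def first_order_sobol(model, n_params, n_samples=20000, rng=None):
    """Monte Carlo (Saltelli-style) estimate of first-order Sobol indices."""
    rng = rng or np.random.default_rng(0)
    A = rng.uniform(0.0, 1.0, (n_samples, n_params))
    B = rng.uniform(0.0, 1.0, (n_samples, n_params))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # re-sample only parameter i
        indices.append(np.mean(yB * (model(ABi) - yA)) / var_y)
    return np.array(indices)

print(first_order_sobol(toy_model, 5).round(2))   # kappa_pr and lambda dominate
```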

  20. Large-scale shell model calculations for even-even 62-66Fe isotopes

    NASA Astrophysics Data System (ADS)

    Srivastava, P. C.; Mehrotra, I.

    2009-10-01

    The recently measured experimental data of Legnaro National Laboratories on neutron-rich even isotopes of 62-66Fe with A = 62, 64, 66 have been interpreted in the framework of a large-scale shell model. Calculations have been performed with a newly derived effective interaction GXPF1A in full fp space without truncation. The experimental data are very well explained for 62Fe, satisfactorily reproduced for 64Fe and poorly fitted for 66Fe. The increasing collectivity reflected in experimental data when approaching N = 40 is not reproduced in calculated values. This indicates that whereas the considered valence space is adequate for 62Fe, inclusion of higher orbits from the sdg shell is required for describing 66Fe.

  1. A simple model to relate ionogram signatures to large-scale wave structure

    NASA Astrophysics Data System (ADS)

    Tsunoda, Roland T.

    2012-09-01

    The development of plasma structure in the nighttime equatorial F region, known as equatorial spread F (ESF), appears to be controlled by the preceding presence of large-scale wave structure (LSWS). To understand this process, knowledge of the properties of LSWS is crucial. Information about LSWS appears to reside in two ionogram signatures, multi-reflected echoes (MREs) and the so-called “satellite” traces (STs). However, how LSWS is related to MREs and STs is not yet clear. To gain insight, a tilted, linear reflector, modulated by LSWS, is described and shown to be capable of explaining even the most puzzling forms of MREs and STs. With this kind of model, ionogram signatures can be used to infer the nature of LSWS.

  2. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study.

    PubMed

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
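
    As background for the enhancements described above, the sketch below implements the baseline point-to-point ICP loop (nearest-neighbour correspondences followed by an SVD/Kabsch rigid alignment) that the hierarchical searching, early-warning and escape schemes build upon. It is a generic textbook variant under simple assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation aligning src to dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, n_iter=30):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(n_iter):
        _, idx = tree.query(src)          # nearest-neighbour correspondences
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# Toy usage: recover a small rotation and translation of a random point cloud.
rng = np.random.default_rng(0)
target = rng.uniform(-1.0, 1.0, (500, 3))
a = np.deg2rad(10.0)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.2, -0.1, 0.05])
aligned = icp(source, target)
print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())
```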

  3. LARGE SCALE DISTRIBUTED PARAMETER MODEL OF MAIN MAGNET SYSTEM AND FREQUENCY DECOMPOSITION ANALYSIS

    SciTech Connect

    ZHANG,W.; MARNERIS, I.; SANDBERG, J.

    2007-06-25

    A large accelerator main magnet system consists of hundreds, even thousands, of dipole magnets. They are linked together under selected configurations to provide highly uniform dipole fields when powered. Distributed capacitance, insulation resistance, coil resistance, magnet inductance, and coupling inductance of upper and lower pancakes make each magnet a complex network. When all dipole magnets are chained together in a circle, they become a coupled pair of very high order complex ladder networks. In this study, a network of more than a thousand inductive, capacitive or resistive elements is used to model an actual system. The circuit is a large-scale network. Its equivalent polynomial form has a degree of several hundred. Analysis of this high-order circuit and simulation of the response of any or all components is often computationally infeasible. We present methods that use a frequency decomposition approach to effectively simulate and analyze magnet configuration and power supply topologies.

  4. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, and to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).

  5. Excavating the Genome: Large Scale Mutagenesis Screening for the Discovery of New Mouse Models

    PubMed Central

    Sundberg, John P.; Dadras, Soheil S.; Silva, Kathleen A.; Kennedy, Victoria E.; Murray, Stephen A.; Denegre, James; Schofield, Paul N.; King, Lloyd E.; Wiles, Michael; Pratt, C. Herbert

    2016-01-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis. While not automated to the level of the physiological phenotyping, histopathology provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being developed. PMID:26551941

  6. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298

  7. Large-scale shell model study of the newly found isomer in 136La

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Nishibata, H.; Odahara, A.; Shimoda, T.

    2016-07-01

    The doubly odd nucleus 136La is theoretically studied in terms of a large-scale shell model. The energy spectrum and transition rates are calculated and compared with the most up-to-date experimental data. The isomerism is investigated for the first 14+ state, which was found to be an isomer in the previous study [Phys. Rev. C 91, 054305 (2015), 10.1103/PhysRevC.91.054305]. It is found that the 14+ state becomes an isomer due to a band crossing of two bands with completely different configurations. The yrast band with the (νh11/2^-1 ⊗ πh11/2) configuration is investigated, revealing a staggering nature in the M1 transition rates.

  8. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    NASA Astrophysics Data System (ADS)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that faces the problem of overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale

  9. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for

  10. Unifying Algebraic and Large-Scale Shell-Model Approaches in Nuclear Structure Calculations

    NASA Astrophysics Data System (ADS)

    Draayer, Jerry P.

    1997-04-01

    The shell model is the most robust theory for addressing nuclear structure questions. Unfortunately, it is only as good as the input hamiltonian and the appropriateness of the selected model space, and both of these elements usually prove to be a significant challenge. There are three basic theories: 1) algebraic models, boson and fermion, which focus on symmetries, exact and approximate, of a hamiltonian and usually use model spaces that are severely truncated; 2) numerically oriented schemes that accommodate larger spaces but rely on special techniques and algorithms for producing convergent results; and 3) models that employ statistical concepts, like the statistical spectroscopy of the 70s and 80s and the Monte Carlo methods of the 90s, schemes that are not limited by the usual dimensionality considerations. These three approaches and their various realizations and extensions, with their pluses and minuses, will be considered. In addition, opportunities that exist for defining a scheme that employs the best of all three approaches to yield a symmetry-adapted theory that is not limited to simplified spaces and hamiltonians and yet remains tractable even for large-scale calculations of the type that are required for testing a theory against experimental data and for predicting new physical phenomena will be explored. Special attention will be focused on unifying themes linking the shell model with the simpler and yet highly successful mean-field and collective-model theories. As an example of the latter, some recent results using the symplectic shell model will be presented.

  11. Estimating the impact of SWOT observations on the predictability of large-scale hydraulic models

    NASA Astrophysics Data System (ADS)

    Schumann, G. J.; Andreadis, K.

    2012-12-01

    The proposed NASA/CNES Surface Water Ocean Topography (SWOT) satellite mission would provide unprecedented measurements of hydraulic variables globally. This paper investigates the impact of different SWOT-like observations on the capability to model and predict hydrodynamics over large scales. In order to achieve this, the Ensemble Sensitivity (ET) method was adopted, examining the cost functional between two 'models' run on a 40,000 km2 area of the Ohio basin. The ET method is similar to the adjoint method but uses an ensemble of model perturbations to calculate the sensitivity to observations. The experiment consists of two configurations of the LISFLOOD-FP hydraulic model. The first (baseline) simulation represents a calibrated 'best effort' model based on a sub-grid channel structure using observations for parameters and boundary conditions, whereas the second (background) simulation consists of estimated parameters and SRTM-based boundary conditions. Using accurate SWOT-like observations such as water level, water surface width and slope in an Ensemble Sensitivity framework allowed us to assess the true impact of SWOT observables over different temporal and spatial scales on our current capabilities to model and predict hydrodynamic characteristics at a potentially global scale. Estimating the model sensitivity to observations could also allow the identification of errors in the model structure and parameterizations, as well as facilitate the derivation of a SWOT data product with optimal characteristics (e.g. reach-averaging).
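
    The core of the Ensemble Sensitivity idea used above can be reduced to a regression of a scalar cost function on ensemble perturbations of a candidate observation. The sketch below shows that calculation on synthetic numbers; the observation, metric and values are purely illustrative and are unrelated to the LISFLOOD-FP experiment.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members = 50

# Ensemble of SWOT-like "observations" (e.g., water level at one river reach)
# and the corresponding scalar forecast metric J (e.g., downstream flood volume).
obs = rng.normal(10.0, 0.5, n_members)                # synthetic ensemble values
J = 3.0 * obs + rng.normal(0.0, 0.3, n_members)       # metric correlated with obs

# Ensemble sensitivity: dJ/dy estimated as ensemble covariance over variance.
sensitivity = np.cov(J, obs)[0, 1] / np.var(obs, ddof=1)
print(f"estimated dJ/dy ~ {sensitivity:.2f} (true slope used here: 3.0)")
```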

  12. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plans. The emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model is proposed, and the methodology is examined within the context of a case study involving evacuation from a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, customer layer, clerk layer and trajectory layer. For the simulation of pedestrian movement routes, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavioral characteristics of customers and clerks in normal situations and during emergency evacuation. The distribution of individual evacuation times as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
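
    The sketch below shows a minimal static floor-field Cellular Automaton of the kind the abstract builds on: each pedestrian steps to an empty neighbouring cell with probability weighted by exp(-k_S * S), where S is a distance-to-exit floor field. It is a generic illustration under assumed parameters, not the authors' four-layer model, and it omits the dynamic floor field and the event-driven scheduling.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 15, 25
exit_cell = (7, 24)                        # single exit on the right wall

# Static floor field: Euclidean distance of every cell to the exit.
yy, xx = np.mgrid[0:H, 0:W]
field = np.hypot(yy - exit_cell[0], xx - exit_cell[1])

# Place 60 pedestrians on distinct cells in the left half of the room.
cells = [(i, j) for i in range(H) for j in range(W // 2)]
peds = [cells[t] for t in rng.choice(len(cells), size=60, replace=False)]
occ = np.zeros((H, W), dtype=bool)
for p in peds:
    occ[p] = True

k_s, evacuated = 2.0, 0                    # sensitivity to the static field
for step in range(300):
    still_inside = []
    for i, j in (peds[t] for t in rng.permutation(len(peds))):
        moves = [(i + di, j + dj) for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                 if 0 <= i + di < H and 0 <= j + dj < W and not occ[i + di, j + dj]]
        if not moves:
            still_inside.append((i, j))
            continue
        w = np.array([np.exp(-k_s * field[m]) for m in moves])
        ni, nj = moves[rng.choice(len(moves), p=w / w.sum())]
        occ[i, j] = False
        if (ni, nj) == exit_cell:
            evacuated += 1                 # pedestrian leaves through the exit
        else:
            occ[ni, nj] = True
            still_inside.append((ni, nj))
    peds = still_inside

print("evacuated after 300 steps:", evacuated, "of 60")
```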

  13. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-11-01

    The scientific initiative Prediction in Ungauged Basins (PUB) (2003-2012 by the IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines to make predictions in catchments without observed runoff data. At present, there is increased interest in applying catchment models to large domains and large data samples in a multi-basin manner, to explore emerging spatial patterns or learn from comparative hydrology. However, such modelling involves additional sources of uncertainty caused by inconsistencies between input data sets, particularly regional and global databases. This may lead to inaccurate model parameterisation and erroneous process understanding. In order to bridge the gap between the best practices for flow predictions in single catchments and in multi-basin modelling at the large scale, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). By using examples from a recent HYPE (Hydrological Predictions for the Environment) hydrological model set-up across 6000 subbasins for the Indian subcontinent, named India-HYPE v1.0, we explore the PUB recommendations, identify challenges and recommend ways to overcome them. We describe the work process related to (a) errors and inconsistencies in global databases, unknown human impacts, and poor data quality; (b) robust approaches to identify model parameters using a stepwise calibration approach, remote sensing data, expert knowledge, and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong physiographical gradient over the subcontinent, a single model can describe the spatial variability in dominant hydrological processes at the

  14. Spatial and temporal patterns of large-scale droughts in Europe: model dispersion and performance

    NASA Astrophysics Data System (ADS)

    Tallaksen, Lena M.; Stahl, Kerstin

    2014-05-01

    Droughts are regional events that have a wide range of environmental and socio-economic impacts and thus, it is vital that models correctly simulate drought characteristics in a future climate. In this study we explore the performance of a suite of off-line, global hydrological and land surface models in mapping spatial and temporal patterns of large-scale hydrological droughts. The model ensemble consists of seven global models run with the same simulation setup (developed in a joint effort within the WATCH project). Daily total runoff (sum of fast and slow component) simulated for each grid cell in Europe for the period 1963-2000 constitute the basis for the analysis. Simulated and observed daily (7-day backward-smoothed) runoff series for each grid cell were first transformed into nonparametric anomalies, and a grid cell is considered to be in drought if the runoff is below q20, i.e., the 20% non-exceedance frequency of that day. The mean annual drought area, i.e., the average of the daily total area in drought, is used to characterize the overall dryness of a year. The annual maximum drought cluster area, i.e., the area of the largest cluster of spatially contiguous cells in drought within a year, is chosen as a measure of the severity of a given drought. The total number of drought events is defined as runs of consecutive days in drought over the entire record. Consistent model behavior was found for inter-annual variability in mean drought area, whereas high model dispersion was revealed in the weekly evolution of contiguous area in drought and its annual maximum. Comparison with nearly three hundred catchment-scale streamflow observations showed an overall tendency to overestimate the number of drought events and hence, underestimate drought duration, whereas persistence in drought affected area (weekly mean) was underestimated, noticeable for one group of models. The high model dispersion in temporal and spatial persistence of drought identified implies
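
    The drought diagnostics described above (a day-of-year q20 threshold and the daily area in drought) can be reproduced on synthetic data as in the sketch below. For brevity it skips the 7-day backward smoothing and uses the fraction of cells in drought rather than the area of the largest spatially contiguous cluster; all data are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_days, n_cells = 38, 365, 500            # 1963-2000, grid cells
# Synthetic daily runoff: seasonal cycle times random anomalies per cell and year.
doy = np.arange(n_days)
season = 1.0 + 0.5 * np.sin(2 * np.pi * doy / 365.0)
runoff = season[None, :, None] * rng.lognormal(0.0, 0.4, (n_years, n_days, n_cells))

# Day-of-year varying drought threshold: q20 across years for each day and cell.
q20 = np.percentile(runoff, 20, axis=0)             # shape (n_days, n_cells)
in_drought = runoff < q20[None, :, :]               # boolean (year, day, cell)

# Mean annual drought area: average daily fraction of cells in drought, per year.
mean_annual_area = in_drought.mean(axis=(1, 2))
# Annual maximum daily drought area (a simple stand-in for the cluster measure).
max_daily_area = in_drought.mean(axis=2).max(axis=1)

print("mean annual drought area:", mean_annual_area[:5].round(3))
print("annual max daily drought area:", max_daily_area[:5].round(3))
```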

  15. Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement

    ERIC Educational Resources Information Center

    Zheng, Xiaohui

    2009-01-01

    The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…

  16. Multi-variate spatial explicit constraining of a large scale hydrological model

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    Increased availability and quality of near real-time data should lead to a better understanding of the predictive skills of distributed hydrological models. Nevertheless, prediction of regional-scale water fluxes and states remains a great challenge for the scientific community. Large-scale hydrological models are used for the prediction of soil moisture, evapotranspiration and other related water states and fluxes. They are usually properly constrained against river discharge, which is an integral variable. Rakovec et al. (2016) recently demonstrated that constraining model parameters against river discharge is a necessary, but not a sufficient condition. Therefore, we further aim at scrutinizing the appropriate incorporation of readily available information into a hydrological model that may help to improve the realism of hydrological processes. It is important to analyze how complementary datasets, besides observed streamflow and related signature measures, can improve model skill for internal model variables during parameter estimation. Among the products suitable for further scrutiny are, for example, the GRACE satellite observations. Recent developments in using this dataset in a multivariate fashion to complement traditionally used streamflow data within the distributed model mHM (www.ufz.de/mhm) are presented. The study domain consists of 80 European basins, which cover a wide range of distinct physiographic and hydrologic regimes. A first-order data quality check ensures that heavily human-influenced basins are eliminated. For river discharge simulations we show that model performance of discharge remains unchanged when complemented by information from the GRACE product (both daily and monthly time steps). Moreover, the GRACE complementary data lead to consistent and statistically significant improvements in evapotranspiration estimates, which are evaluated using an independent gridded FLUXNET product. We also show that the choice of the objective function used to estimate

  17. Vertical Distributions of Sulfur Species Simulated by Large Scale Atmospheric Models in COSAM: Comparison with Observations

    SciTech Connect

    Lohmann, U.; Leaitch, W. R.; Barrie, Leonard A.; Law, K.; Yi, Y.; Bergmann, D.; Bridgeman, C.; Chin, M.; Christensen, J.; Easter, Richard C.; Feichter, J.; Jeuken, A.; Kjellstrom, E.; Koch, D.; Land, C.; Rasch, P.; Roelofs, G.-J.

    2001-11-01

    A comparison of large-scale models simulating atmospheric sulfate aerosols (COSAM) was conducted to increase our understanding of global distributions of sulfate aerosols and precursors. Earlier model comparisons focused on wet deposition measurements and sulfate aerosol concentrations in source regions at the surface. They found that different models simulated the observed sulfate surface concentrations mostly within a factor of two, but that the simulated column burdens and vertical profiles were very different amongst different models. In the COSAM exercise, one aspect is the comparison of sulfate aerosol and precursor gases above the surface. Vertical profiles of SO2, SO4^2-, oxidants and cloud properties were measured by aircraft during the North Atlantic Regional Experiment (NARE) in August/September 1993 off the coast of Nova Scotia and during the Second Eulerian Model Evaluation Field Study (EMEFS II) in central Ontario in March/April 1990. While no single model stands out as being best or worst, the general tendency is that those models simulating the full oxidant chemistry tend to agree best with observations, although differences in transport and treatment of clouds are important as well.

  18. A modelling case study of a large-scale cirrus in the tropical tropopause layer

    NASA Astrophysics Data System (ADS)

    Podglajen, Aurélien; Plougonven, Riwal; Hertzog, Albert; Legras, Bernard

    2016-03-01

    We use the Weather Research and Forecast (WRF) model to simulate a large-scale tropical tropopause layer (TTL) cirrus in order to understand the formation and life cycle of the cloud. This cirrus event has been previously described through satellite observations by Taylor et al. (2011). Comparisons of the simulated and observed cirrus show a fair agreement and validate the reference simulation regarding cloud extension, location and life time. The validated simulation is used to understand the causes of cloud formation. It is shown that several cirrus clouds successively form in the region due to adiabatic cooling and large-scale uplift rather than from convective anvils. The structure of the uplift is tied to the equatorial response (equatorial wave excitation) to a potential vorticity intrusion from the midlatitudes. Sensitivity tests are then performed to assess the relative importance of the choice of the microphysics parameterization and of the initial and boundary conditions. The initial dynamical conditions (wind and temperature) essentially control the horizontal location and area of the cloud. However, the choice of the microphysics scheme influences the ice water content and the cloud vertical position. Last, the fair agreement with the observations allows to estimate the cloud impact in the TTL in the simulations. The cirrus clouds have a small but not negligible impact on the radiative budget of the local TTL. However, for this particular case, the cloud radiative heating does not significantly influence the simulated dynamics. This result is due to (1) the lifetime of air parcels in the cloud system, which is too short to significantly influence the dynamics, and (2) the fact that induced vertical motions would be comparable to or smaller than the typical mesoscale motions present. Finally, the simulation also provides an estimate of the vertical redistribution of water by the cloud and the results emphasize the importance in our case of both

  19. A modelling case study of a large-scale cirrus in the tropical tropopause layer

    NASA Astrophysics Data System (ADS)

    Podglajen, A.; Plougonven, R.; Hertzog, A.; Legras, B.

    2015-11-01

    We use the Weather Research and Forecast (WRF) model to simulate a large-scale tropical tropopause layer (TTL) cirrus, in order to understand the formation and life cycle of the cloud. This cirrus event has been previously described through satellite observations by Taylor et al. (2011). Comparisons of the simulated and observed cirrus show a fair agreement, and validate the reference simulation regarding cloud extension, location and life time. The validated simulation is used to understand the causes of cloud formation. It is shown that several cirrus clouds successively form in the region due to adiabatic cooling and large-scale uplift rather than from ice lofting from convective anvils. The equatorial response (equatorial wave excitation) to a midlatitude potential vorticity (PV) intrusion structures the uplift. Sensitivity tests are then performed to assess the relative importance of the choice of the microphysics parametrisation and of the initial and boundary conditions. The initial dynamical conditions (wind and temperature) essentially control the horizontal location and area of the cloud. On the other hand, the choice of the microphysics scheme influences the ice water content and the cloud vertical position. Last, the fair agreement with the observations allows us to estimate the cloud impact in the TTL in the simulations. The cirrus clouds have a small but not negligible impact on the radiative budget of the local TTL. However, the cloud radiative heating does not significantly influence the simulated dynamics. The simulation also provides an estimate of the vertical redistribution of water by the cloud, and the results emphasize the importance in our case of both re- and dehydration in the vicinity of the cirrus.

  20. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist.

    PubMed

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-11-01

    managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists.

  1. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist

    PubMed Central

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-01-01

    help guide managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists. PMID:24340177

  2. A Comparison of Large-Scale Atmospheric Sulphate Aerosol Models (COSAM): Overview and Highlights

    SciTech Connect

    Barrie, Leonard A.; Yi, Y.; Leaitch, W. R.; Lohmann, U.; Kasibhatla, P.; Roelofs, G.-J.; Wilson, J.; Mcgovern, F.; Benkovitz, C.; Melieres, M. A.; Law, K.; Prospero, J.; Kritz, M.; Bergmann, D.; Bridgeman, C.; Chin, M.; Christiansen, J.; Easter, Richard C.; Feichter, J.; Land, C.; Jeuken, A.; Kjellstrom, E.; Koch, D.; Rasch, P.

    2001-11-01

    The comparison of large-scale sulphate aerosol models study (COSAM) compared the performance of atmospheric models with each other and observations. It involved: (i) design of a standard model experiment for the world wide web, (ii) 10 model simulations of the cycles of sulphur and 222Rn/210Pb conforming to the experimental design, (iii) assemblage of the best available observations of atmospheric SO4=, SO2 and MSA and (iv) a workshop in Halifax, Canada to analyze model performance and future model development needs. The analysis presented in this paper and two companion papers by Roelofs, and Lohmann and co-workers examines the variance between models and observations, discusses the sources of that variance and suggests ways to improve models. Variations between models in the export of SOx from Europe or North America are not sufficient to explain an order of magnitude variation in spatial distributions of SOx downwind in the northern hemisphere. On average, models predicted surface level seasonal mean SO4= aerosol mixing ratios better (most within 20%) than SO2 mixing ratios (over-prediction by factors of 2 or more). Results suggest that vertical mixing from the planetary boundary layer into the free troposphere in source regions is a major source of uncertainty in predicting the global distribution of SO4= aerosols in climate models today. For improvement, it is essential that globally coordinated research efforts continue to address emissions of all atmospheric species that affect the distribution and optical properties of ambient aerosols in models and that a global network of observations be established that will ultimately produce a world aerosol chemistry climatology.

  3. A Polymer Model for Large-scale Chromatin Organization in Lower Eukaryotes

    PubMed Central

    Ostashevsky, Joseph

    2002-01-01

    A quantitative model of large-scale chromatin organization was applied to nuclei of fission yeast Schizosaccharomyces pombe (meiotic prophase and G2 phase), budding yeast Saccharomyces cerevisiae (young and senescent cells), Drosophila (embryonic cycles 10 and 14, and polytene tissues) and Caenorhabditis elegans (G1 phase). The model is based on the coil-like behavior of chromosomal fibers and the tight packing of discrete chromatin domains in a nucleus. Intrachromosomal domains are formed by chromatin anchoring to nuclear structures (e.g., the nuclear envelope). The observed sizes for confinement of chromatin diffusional motion are similar to the estimated sizes of corresponding domains. The model correctly predicts chromosome configurations (linear, Rabl, loop) and chromosome associations (homologous pairing, centromere and telomere clusters) on the basis of the geometrical constraints imposed by nuclear size and shape. Agreement between the model predictions and literature observations supports the notion that the average linear density of the 30-nm chromatin fiber is ∼4 nucleosomes per 10 nm contour length. PMID:12058077

  4. Morphotectonic evolution of passive margins undergoing active surface processes: large-scale experiments using numerical models.

    NASA Astrophysics Data System (ADS)

    Beucher, Romain; Huismans, Ritske S.

    2016-04-01

    Extension of the continental lithosphere can lead to the formation of a wide range of rifted margin styles with contrasting tectonic and geomorphological characteristics. It is now understood that many of these characteristics depend on the manner in which extension is distributed, which in turn depends on (among other factors) rheology, structural inheritance, thermal structure and surface processes. The relative importance and possible interactions of these controlling factors are still largely unknown. Here we investigate the feedbacks between tectonics and the transfers of material at the surface resulting from erosion, transport, and sedimentation. We use large-scale (1200 x 600 km) and high-resolution (~1 km) numerical experiments coupling a 2D upper-mantle-scale thermo-mechanical model with a plan-form 2D surface processes model (SPM). We test the sensitivity of the coupled models to varying crust-lithosphere rheology and erosional efficiency, ranging from no erosion to very efficient erosion. We discuss how fast, when and how the topography of the continents evolves and how it compares to actual passive margin escarpment morphologies. We show that although tectonics is the main factor controlling the rift geometry, transfers of mass at the surface affect the timing of faulting and the initiation of sea-floor spreading. We discuss how such models may help to understand the evolution of high-elevation passive margins around the world.

  5. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    NASA Astrophysics Data System (ADS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves produced during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  6. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    PubMed Central

    Llor, Jesús; Malumbres, Manuel Perez

    2013-01-01

    In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate a large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By repeatedly computing the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.). PMID:23396190
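
    A minimal sketch of the statistical picture described above, with illustrative parameters only (the reference loss, path-loss exponent, and spread below are hypothetical, not values fitted in the cited study): the dB transmission loss at a given range is drawn around a log-distance mean with constant variance, an ensemble is compiled at several inter-node distances, and the model parameters are recovered from that ensemble.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical parameters, for illustration only
        L0, d0 = 40.0, 1.0   # reference loss [dB] at reference distance d0 [km]
        n_exp = 1.8          # path-loss exponent
        sigma = 3.0          # standard deviation of the dB-scale fluctuation

        def transmission_loss_db(d, n_samples=1000):
            """Draw an ensemble of dB transmission losses at distance d [km]."""
            mean_db = L0 + 10.0 * n_exp * np.log10(d / d0)  # log-distance mean
            return mean_db + rng.normal(0.0, sigma, n_samples)

        # Compile ensembles at several inter-node distances and infer the parameters
        distances = np.array([1.0, 2.0, 5.0, 10.0])
        ensembles = [transmission_loss_db(d) for d in distances]
        means = np.array([e.mean() for e in ensembles])
        slope, intercept = np.polyfit(10.0 * np.log10(distances / d0), means, 1)
        print(f"estimated exponent ~ {slope:.2f}, reference loss ~ {intercept:.1f} dB")
        print("per-distance std [dB]:", np.round([e.std() for e in ensembles], 2))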

  7. Development of a realistic human airway model.

    PubMed

    Lizal, Frantisek; Elcner, Jakub; Hopke, Philip K; Jedelsky, Jan; Jicha, Miroslav

    2012-03-01

    Numerous models of human lungs with various levels of idealization have been reported in the literature; consequently, results acquired using these models are difficult to compare to in vivo measurements. We have developed a set of model components based on realistic geometries, which permits the analysis of the effects of subsequent model simplification. A realistic digital upper airway geometry, except for the lack of an oral cavity, has been created, which proved suitable both for computational fluid dynamics (CFD) simulations and for the fabrication of physical models. Subsequently, an oral cavity was added to the tracheobronchial geometry. The airway geometry including the oral cavity was adjusted to enable fabrication of a semi-realistic model. Five physical models were created based on these three digital geometries. Two optically transparent models, one with and one without the oral cavity, were constructed for flow velocity measurements; two realistic segmented models, one with and one without the oral cavity, were constructed for particle deposition measurements; and a semi-realistic model with glass cylindrical airways was developed for optical measurements of flow velocity and in situ particle size measurements. One-dimensional phase Doppler anemometry measurements were made and compared to the CFD calculations for this model, and good agreement was obtained. PMID:22558834

  8. Modelling potential changes in marine biogeochemistry due to large-scale offshore wind farms

    NASA Astrophysics Data System (ADS)

    van der Molen, Johan; Rees, Jon; Limpenny, Sian

    2013-04-01

    Large-scale renewable energy generation by offshore wind farms may lead to changes in marine ecosystem processes through the following mechanism: 1) wind-energy extraction leads to a reduction in local surface wind speeds; 2) these lead to a reduction in the local wind wave height; 3) as a consequence there is a reduction in the resuspension and concentration of suspended particulate matter (SPM); 4) this results in an improvement in the under-water light regime, which 5) may lead to increased primary production, which subsequently 6) cascades through the ecosystem. A three-dimensional coupled hydrodynamics-biogeochemistry model (GETM_ERSEM) was used to investigate this process for a hypothetical wind farm in the central North Sea, by running a reference scenario and a scenario with a 10% reduction (as was found in a case study of a small farm in Danish waters) in surface wind velocities in the area of the wind farm. The ERSEM model included both pelagic and benthic processes. The results showed that, within the farm area, the physical mechanisms were as expected, but with variations in the magnitude of the response depending on the ecosystem variable or exchange rate between two ecosystem variables (3-28%, depending on variable/rate). Benthic variables tended to be more sensitive to the changes than pelagic variables. Reduced but noticeable changes also occurred for some variables in a region of up to two farm diameters surrounding the wind farm. An additional model run in which the 10% reduction in surface wind speed was applied only for wind speeds below the generally used threshold of 25 m/s for operational shut-down showed only minor differences from the run in which all wind speeds were reduced. These first results indicate that there is potential for measurable effects of large-scale offshore wind farms on the marine ecosystem, mainly within the farm but for some variables up to two farm diameters away. However, the wave and SPM parameterisations currently used in the model are crude and need to be

  9. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model

    PubMed Central

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s of real-world time, with temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain. PMID:26973472

  10. Aerodynamic characteristics of a large-scale hybrid upper surface blown flap model having four engines

    NASA Technical Reports Server (NTRS)

    Carros, R. J.; Boissevain, A. G.; Aoyagi, K.

    1975-01-01

    Data are presented from an investigation of the aerodynamic characteristics of a large-scale wind tunnel aircraft model that utilized a hybrid upper-surface blown flap to augment lift. The hybrid concept of this investigation used a portion of the turbofan exhaust air for blowing over the trailing edge flap to provide boundary layer control. The model, tested in the Ames 40- by 80-foot Wind Tunnel, had a 27.5 deg swept wing of aspect ratio 8 and 4 turbofan engines mounted on the upper surface of the wing. The lift of the model was augmented by turbofan exhaust impingement on the wing upper-surface and flap system. Results were obtained for three flap deflections, for some variation of engine nozzle configuration and for jet thrust coefficients from 0 to 3.0. Six-component longitudinal and lateral data are presented with four-engine operation and with the critical engine out. In addition, a limited number of cross-plots of the data are presented. All of the tests were made with a downwash rake installed instead of a horizontal tail. Some of these downwash data are also presented.

  11. Large-scale functional models of visual cortex for remote sensing

    SciTech Connect

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E; Swaminarayan, Sriram; Bettencourt, Luis; Landecker, Will

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to vast opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  12. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model.

    PubMed

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s of real-world time, with temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain.

  13. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    SciTech Connect

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J.

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study was used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.

  14. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model.

    PubMed

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s of real-world time, with temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain. PMID:26973472

  15. Influenza epidemic spread simulation for Poland — a large scale, individual based model study

    NASA Astrophysics Data System (ADS)

    Rakowski, Franciszek; Gruziel, Magdalena; Bieniasz-Krzywiec, Łukasz; Radomski, Jan P.

    2010-08-01

    In this work the construction of an agent-based model for studying the effects of an influenza epidemic in large-scale (38 million individuals) stochastic simulations, together with the resulting various scenarios of disease spread in Poland, is reported. Simple transportation rules were employed to mimic individuals’ travels in dynamic route-changing schemes, allowing for infection spread during a journey. Parameter space was checked for stable behaviour, especially towards the effective infection transmission rate variability. Although the model reported here is based on quite simple assumptions, it allowed us to observe two different types of epidemic scenarios: characteristic for urban and for rural areas. This differentiates it from the results obtained in analogous studies for the UK or US, where settlement and daily commuting patterns are both substantially different and more diverse. The resulting epidemic scenarios from these ABM simulations were compared with simple SIR models based on differential equations, with both types of results displaying strong similarities. The pDYN software platform developed here is currently being used in the next stage of the project to study various epidemic mitigation strategies.
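
    For comparison, the differential-equation SIR model mentioned above reduces to three coupled rates. The sketch below integrates them with a simple Euler step; the transmission and recovery rates are arbitrary placeholders, not values calibrated against the pDYN agent-based simulations.

        import numpy as np

        # Classic SIR model: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I
        N = 38_000_000            # population size (Poland-scale, as in the study)
        beta, gamma = 0.30, 0.10  # illustrative transmission and recovery rates [1/day]
        S, I, R = N - 100.0, 100.0, 0.0
        dt, days = 0.1, 200

        infectious = []
        for _ in range(int(days / dt)):
            new_inf = beta * S * I / N * dt
            new_rec = gamma * I * dt
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
            infectious.append(I)

        peak_day = np.argmax(infectious) * dt
        print(f"epidemic peak around day {peak_day:.0f} with ~{max(infectious):.0f} infectious")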

  16. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    NASA Astrophysics Data System (ADS)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.

  17. Large scale cratering of the lunar highlands - Some Monte Carlo model considerations

    NASA Technical Reports Server (NTRS)

    Hoerz, F.; Gibbons, R. V.; Hill, R. E.; Gault, D. E.

    1976-01-01

    In an attempt to understand the scale and intensity of the moon's early, large scale meteoritic bombardment, a Monte Carlo computer model simulated the effects of all lunar craters greater than 800 m in diameter, tracking, for example, how many times and to what depths specific fractions of the entire lunar surface were cratered. The model used observed crater size frequencies and crater geometries compatible with the suggestions of Pike (1974) and Dence (1973); it simulated bombardment histories up to a factor of 10 more intense than those reflected by the present-day crater number density of the lunar highlands. For the present-day cratering record the model yields the following: approximately 25% of the entire lunar surface has not been cratered deeper than 100 m; 50% may have been cratered to 2-3 km depth; less than 5% of the surface has been cratered deeper than about 15 km. A typical highland site has suffered 1-2 impacts. Corresponding values for more intense bombardment histories are also presented, though it must remain uncertain what the absolute intensity of the moon's early meteorite bombardment was.
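
    A toy version of the kind of bookkeeping such a Monte Carlo model performs (the grid size, crater size-frequency exponent, and assumed depth-to-diameter ratio below are illustrative, not those of the published model): craters drawn from a power-law size distribution are placed at random on a surface grid, and each cell records how often and how deeply it has been excavated.

        import numpy as np

        rng = np.random.default_rng(1)

        grid, km_per_cell = 256, 4.0
        n_craters = 3000
        times_cratered = np.zeros((grid, grid), dtype=int)
        max_depth_km = np.zeros((grid, grid))

        # Diameters [km] from a truncated power law (illustrative exponent and range)
        d_min, d_max, b = 0.8, 100.0, 2.0
        u = rng.random(n_craters)
        diam = (d_min**(1 - b) + u * (d_max**(1 - b) - d_min**(1 - b)))**(1.0 / (1 - b))

        xs = rng.integers(0, grid, n_craters)
        ys = rng.integers(0, grid, n_craters)
        rows, cols = np.indices((grid, grid))

        for x0, y0, D in zip(xs, ys, diam):
            r_cells = 0.5 * D / km_per_cell
            depth = 0.2 * D          # assumed depth-to-diameter ratio of 0.2
            mask = (cols - x0)**2 + (rows - y0)**2 <= r_cells**2
            times_cratered[mask] += 1
            max_depth_km[mask] = np.maximum(max_depth_km[mask], depth)

        print("fraction never cratered deeper than 0.1 km:", round(np.mean(max_depth_km < 0.1), 3))
        print("fraction cratered deeper than 15 km:", round(np.mean(max_depth_km > 15.0), 3))
        print("mean impacts per cell:", round(times_cratered.mean(), 2))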

  18. Modeling long-term, large-scale sediment storage using a simple sediment budget approach

    NASA Astrophysics Data System (ADS)

    Naipal, Victoria; Reick, Christian; Van Oost, Kristof; Hoffmann, Thomas; Pongratz, Julia

    2016-05-01

    Currently, the anthropogenic perturbation of the biogeochemical cycles remains unquantified due to the poor representation of lateral fluxes of carbon and nutrients in Earth system models (ESMs). This lateral transport of carbon and nutrients between terrestrial ecosystems is strongly affected by accelerated soil erosion rates. However, the quantification of global soil erosion by rainfall and runoff, and the resulting redistribution is missing. This study aims at developing new tools and methods to estimate global soil erosion and redistribution by presenting and evaluating a new large-scale coarse-resolution sediment budget model that is compatible with ESMs. This model can simulate spatial patterns and long-term trends of soil redistribution in floodplains and on hillslopes, resulting from external forces such as climate and land use change. We applied the model to the Rhine catchment using climate and land cover data from the Max Planck Institute Earth System Model (MPI-ESM) for the last millennium (here AD 850-2005). Validation is done using observed Holocene sediment storage data and observed scaling between sediment storage and catchment area. We find that the model reproduces the spatial distribution of floodplain sediment storage and the scaling behavior for floodplains and hillslopes as found in observations. After analyzing the dependence of the scaling behavior on the main parameters of the model, we argue that the scaling is an emergent feature of the model and mainly dependent on the underlying topography. Furthermore, we find that land use change is the main contributor to the change in sediment storage in the Rhine catchment during the last millennium. Land use change also explains most of the temporal variability in sediment storage in floodplains and on hillslopes.

  19. Modeling human mobility responses to the large-scale spreading of infectious diseases

    NASA Astrophysics Data System (ADS)

    Meloni, Sandro; Perra, Nicola; Arenas, Alex; Gómez, Sergio; Moreno, Yamir; Vespignani, Alessandro

    2011-08-01

    Current modeling of infectious diseases allows for the study of realistic scenarios that include population heterogeneity, social structures, and mobility processes down to the individual level. The advances in the realism of epidemic description call for the explicit modeling of individual behavioral responses to the presence of disease within modeling frameworks. Here we formulate and analyze a metapopulation model that incorporates several scenarios of self-initiated behavioral changes into the mobility patterns of individuals. We find that prevalence-based travel limitations do not alter the epidemic invasion threshold. Strikingly, we observe in both synthetic and data-driven numerical simulations that when travelers decide to avoid locations with high levels of prevalence, this self-initiated behavioral change may enhance disease spreading. Our results point out that the real-time availability of information on the disease and the ensuing behavioral changes in the population may produce a negative impact on disease containment and mitigation.

  20. Evaluation of large-scale meteorological patterns associated with temperature extremes in the NARCCAP regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Waliser, Duane E.; Lee, Huikyo; Neelin, J. David; Lintner, Benjamin R.; McGinnis, Seth; Mearns, Linda O.; Kim, Jinwon

    2015-12-01

    Large-scale meteorological patterns (LSMPs) associated with temperature extremes are evaluated in a suite of regional climate model (RCM) simulations contributing to the North American Regional Climate Change Assessment Program. LSMPs are characterized through composites of surface air temperature, sea level pressure, and 500 hPa geopotential height anomalies concurrent with extreme temperature days. Six of the seventeen RCM simulations are driven by boundary conditions from reanalysis while the other eleven are driven by one of four global climate models (GCMs). Four illustrative case studies are analyzed in detail. Model fidelity in LSMP spatial representation is high for cold winter extremes near Chicago. Winter warm extremes are captured by most RCMs in northern California, with some notable exceptions. Model fidelity is lower for cool summer days near Houston and extreme summer heat events in the Ohio Valley. Physical interpretation of these patterns and identification of well-simulated cases, such as for Chicago, boosts confidence in the ability of these models to simulate days in the tails of the temperature distribution. Results appear consistent with the expectation that the ability of an RCM to reproduce a realistically shaped frequency distribution for temperature, especially at the tails, is related to its fidelity in simulating LSMPs. Each ensemble member is ranked for its ability to reproduce LSMPs associated with observed warm and cold extremes, identifying systematically high performing RCMs and the GCMs that provide superior boundary forcing. The methodology developed here provides a framework for identifying regions where further process-based evaluation would improve the understanding of simulation error and help guide future model improvement and downscaling efforts.

  1. Predictions of a non-Gaussian model for large scale structure

    SciTech Connect

    Fan, Z.H.; Bardeen, J.M.

    1992-06-26

    A modified CDM model for the origin of structure in the universe, based on an inflation model with two interacting scalar fields, is analyzed to make predictions for the statistical properties of the density and velocity fields and the microwave background anisotropy. The initial gauge-invariant potential ζ, which is defined as ζ = δρ/(ρ + p) + 3φ, where φ is the curvature perturbation amplitude and p is the pressure, is the sum of a Gaussian field φ1 and the square of a Gaussian field φ2. A Harrison-Zel'dovich scale-invariant power spectrum is assumed for φ1, and a log-normal 'peak' power spectrum for φ2. The location and the width of the peak are described by parameters k_c and a, respectively. The model is motivated to some extent by inflation models with two interacting scalar fields, but is mainly interesting as an example of a model whose statistical properties change with scale. On small scales, it is almost identical to a standard scale-invariant Gaussian CDM model. On scales near the location of the peak of the non-Gaussian field, the distributions have long tails at high positive values of the density and velocity fields. Thus, it is easier to get large-scale streaming velocities than in the standard CDM model. The quadrupole amplitude of fluctuations of the cosmic microwave background radiation and the rms variation of the temperature field smoothed with a 10° FWHM Gaussian are calculated; a reasonable agreement is found with the new COBE results.
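
    A small numerical sketch of the construction described above, with made-up spectra rather than the Harrison-Zel'dovich and log-normal peak spectra of the paper: the potential is the sum of one Gaussian random field and the square of a second, which produces the long positive tail (non-zero skewness) in the one-point distribution.

        import numpy as np

        rng = np.random.default_rng(2)

        def gaussian_field(n, power):
            """2D Gaussian random field with an isotropic power spectrum P(k)."""
            kx = np.fft.fftfreq(n)[:, None]
            ky = np.fft.fftfreq(n)[None, :]
            k = np.sqrt(kx**2 + ky**2)
            k[0, 0] = 1.0  # avoid the k = 0 mode
            amp = np.sqrt(power(k))
            phases = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
            field = np.fft.ifft2(amp * phases).real
            return field / field.std()

        n = 256
        # Illustrative spectra only: a scale-free spectrum for phi1 and a narrow
        # "peak" spectrum for phi2 centred on a hypothetical k_c
        phi1 = gaussian_field(n, lambda k: k**-1.0)
        k_c, width = 0.05, 0.02
        phi2 = gaussian_field(n, lambda k: np.exp(-(np.log(k) - np.log(k_c))**2 / (2 * width)))

        zeta = phi1 + 0.5 * phi2**2  # Gaussian field plus the square of a Gaussian field
        skew = float(((zeta - zeta.mean())**3).mean() / zeta.std()**3)
        print(f"skewness of zeta: {skew:.2f}")  # clearly non-zero, unlike a Gaussian model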

  2. Large-scale model-based assessment of deer-vehicle collision risk.

    PubMed

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer-vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on >74,000 deer-vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer-vehicle collisions and to investigate the relationship between deer-vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer-vehicle collisions, which allows nonlinear environment-deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new "deer-vehicle collision index" for deer management. We show that the risk of deer-vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer-vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer-vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and defining hunting quota. Open

  3. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    PubMed

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J R; Ballam, Joan M

    2015-04-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  4. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details on the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project and the related data products for the SPP and SO missions, as well as the supporting global heliospheric simulations, will be discussed.

  5. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    PubMed

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J R; Ballam, Joan M

    2015-04-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health.

  6. Prospective Large-Scale Field Study Generates Predictive Model Identifying Major Contributors to Colony Losses

    PubMed Central

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J. R.; Ballam, Joan M.

    2015-01-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  7. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference

    PubMed Central

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-01-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high dimensional, as they reflect the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling process of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources into a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η² (eta squared) that is derived by a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research.

  8. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference.

    PubMed

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-12-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high dimensional, as they reflect the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling process of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources into a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η² (eta squared) that is derived by a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research.
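
    The η² dependency score mentioned above is the share of target-expression variance explained by grouping samples according to regulator expression. The sketch below computes a simplified one-way version on synthetic data (COGERE derives it from a two-way analysis of variance, and its exact binning is not reproduced here); it illustrates why η² can detect a nonlinear dependency that a linear correlation misses.

        import numpy as np

        def eta_squared(regulator, target, n_bins=5):
            """Fraction of target variance explained by bins of regulator expression."""
            edges = np.quantile(regulator, np.linspace(0.0, 1.0, n_bins + 1))
            groups = np.digitize(regulator, edges[1:-1])
            grand_mean = target.mean()
            ss_total = ((target - grand_mean) ** 2).sum()
            ss_between = sum(
                (groups == g).sum() * (target[groups == g].mean() - grand_mean) ** 2
                for g in np.unique(groups)
            )
            return ss_between / ss_total

        # Synthetic example: a quadratic regulator-target relation plus noise
        rng = np.random.default_rng(3)
        reg = rng.normal(size=500)
        tgt = reg**2 + 0.5 * rng.normal(size=500)
        print(f"eta squared = {eta_squared(reg, tgt):.2f}")        # high
        print(f"Pearson r   = {np.corrcoef(reg, tgt)[0, 1]:.2f}")  # near zero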

  9. Towards a large-scale scalable adaptive heart model using shallow tree meshes

    NASA Astrophysics Data System (ADS)

    Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf

    2015-10-01

    Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.

  10. A mass-flux cumulus parameterization scheme for large-scale models: description and test with observations

    NASA Astrophysics Data System (ADS)

    Wu, Tongwen

    2012-02-01

    A simple mass-flux cumulus parameterization scheme suitable for large-scale atmospheric models is presented. The scheme is based on a bulk-cloud approach and has the following properties: (1) Deep convection is launched at the level of maximum moist static energy above the top of the boundary layer. It is triggered if there is positive convective available potential energy (CAPE) and the relative humidity of the air at the lifting level of the convection cloud is greater than 75%; (2) Convective updrafts for mass, dry static energy, moisture, cloud liquid water and momentum are parameterized by a one-dimensional entrainment/detrainment bulk-cloud model. The lateral entrainment of the environmental air into the unstable ascending parcel before it rises to the lifting condensation level is considered. The entrainment/detrainment amount for the updraft cloud parcel is separately determined according to the increase/decrease of updraft parcel mass with altitude, and the mass change for the adiabatically ascending cloud parcel with altitude is derived from a total energy conservation equation of the whole adiabatic system, which involves the updraft cloud parcel and the environment; (3) The convective downdraft is assumed to be saturated and to originate from the level of minimum environmental saturated equivalent potential temperature within the updraft cloud; (4) The mass flux at the base of the convective cloud is determined by a closure scheme suggested by Zhang (J Geophys Res 107(D14), doi:10.1029/2001JD001005, 2002) in which the increase/decrease of CAPE due to changes of the thermodynamic states in the free troposphere resulting from convection approximately balances the decrease/increase resulting from large-scale processes. Evaluation of the proposed convection scheme is performed by using a single column model (SCM) forced by the Atmospheric Radiation Measurement Program
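
    As a small illustration of property (1), the deep-convection trigger reduces to two threshold checks at the launching level; the sketch below assumes CAPE and the lifting-level relative humidity have already been diagnosed from the host model's thermodynamic profile.

        def deep_convection_triggered(cape_j_per_kg: float, rel_humidity: float) -> bool:
            """Trigger test at the level of maximum moist static energy above the
            boundary-layer top: positive CAPE and relative humidity above 75%."""
            return cape_j_per_kg > 0.0 and rel_humidity > 0.75

        print(deep_convection_triggered(cape_j_per_kg=150.0, rel_humidity=0.80))  # True
        print(deep_convection_triggered(cape_j_per_kg=150.0, rel_humidity=0.60))  # False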

  11. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale ensemble simulations

    NASA Astrophysics Data System (ADS)

    Heng, Y.; Hoffmann, L.; Griessbach, S.; Rößler, T.; Stein, O.

    2015-10-01

    An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., large-scale ensemble simulations for the reconstruction of volcanic emissions and final transport simulations. The transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric Infrared Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final transport simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. The SO2 column densities from the simulations are in good qualitative agreement with the AIRS observations. Our new inverse modeling and simulation system is expected to become a useful tool to also study other volcanic
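
    A generic sketch of the sequential importance resampling idea behind such an inversion, heavily simplified: candidate altitude-resolved emission profiles are weighted by how well their simulated signal matches the observations and then resampled in proportion to those weights. The scalar forward operator, noise level, and dimensions below are hypothetical stand-ins; the actual system evaluates candidates with full MPTRAC transport simulations against AIRS retrievals.

        import numpy as np

        rng = np.random.default_rng(4)

        n_alt, n_obs, n_particles = 10, 25, 2000
        forward = rng.random((n_obs, n_alt))         # stand-in for the transport model
        true_emission = np.maximum(rng.normal(1.0, 0.5, n_alt), 0.0)
        obs = forward @ true_emission + rng.normal(0.0, 0.1, n_obs)

        # Ensemble of candidate emission profiles (the "large-scale ensemble simulations")
        particles = np.maximum(rng.normal(1.0, 1.0, (n_particles, n_alt)), 0.0)

        for _ in range(5):
            residuals = obs - particles @ forward.T
            log_w = -0.5 * (residuals**2).sum(axis=1) / 0.1**2
            weights = np.exp(log_w - log_w.max())
            weights /= weights.sum()
            # Resample in proportion to the weights, then jitter to keep diversity
            idx = rng.choice(n_particles, size=n_particles, p=weights)
            particles = np.maximum(particles[idx] + rng.normal(0.0, 0.05, (n_particles, n_alt)), 0.0)

        estimate = particles.mean(axis=0)
        rmse = float(np.sqrt(((estimate - true_emission) ** 2).mean()))
        print(f"rms error of the reconstructed emission profile: {rmse:.3f}")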

  12. Improving urban streamflow forecasting using a high-resolution large scale modeling framework

    NASA Astrophysics Data System (ADS)

    Read, Laura; Hogue, Terri; Gochis, David; Salas, Fernando

    2016-04-01

    Urban flood forecasting is a critical component in effective water management, emergency response, regional planning, and disaster mitigation. As populations across the world continue to move to cities (~1.8% growth per year), and studies indicate that significant flood damages are occurring outside the floodplain in urban areas, the ability to model and forecast flow over the urban landscape becomes critical to maintaining infrastructure and society. In this work, we use the Weather Research and Forecasting- Hydrological (WRF-Hydro) modeling framework as a platform for testing improvements to representation of urban land cover, impervious surfaces, and urban infrastructure. The three improvements we evaluate include: updating the land cover to the latest 30-meter National Land Cover Dataset, routing flow over a high-resolution 30-meter grid, and testing a methodology for integrating an urban drainage network into the routing regime. We evaluate performance of these improvements in the WRF-Hydro model for specific flood events in the Denver-Metro Colorado domain, comparing to historic gaged streamflow for retrospective forecasts. Denver-Metro provides an interesting case study as it is a rapidly growing urban/peri-urban region with an active history of flooding events that have caused significant loss of life and property. Considering that the WRF-Hydro model will soon be implemented nationally in the U.S. to provide flow forecasts on the National Hydrography Dataset Plus river reaches - increasing capability from 3,600 forecast points to 2.7 million, we anticipate that this work will support validation of this service in urban areas for operational forecasting. Broadly, this research aims to provide guidance for integrating complex urban infrastructure with a large-scale, high resolution coupled land-surface and distributed hydrologic model.

  13. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    SciTech Connect

    Vishnu, Abhinav; Agarwal, Khushbu

    2015-09-08

    In this paper, we propose a work-stealing runtime, Library for Work Stealing (LibWS), using the MPI one-sided model for designing a scalable FP-Growth (the de facto frequent pattern mining algorithm) on large scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.

  14. Excavating the Genome: Large-Scale Mutagenesis Screening for the Discovery of New Mouse Models.

    PubMed

    Sundberg, John P; Dadras, Soheil S; Silva, Kathleen A; Kennedy, Victoria E; Murray, Stephen A; Denegre, James M; Schofield, Paul N; King, Lloyd E; Wiles, Michael V; Pratt, C Herbert

    2015-11-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been fully studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create further novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis, and while not automated to the level of the physiological phenotyping, histopathology still provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being characterized and developed. PMID:26551941

  15. Statistical model for large-scale peptide identification in databases from tandem mass spectra using SEQUEST.

    PubMed

    López-Ferrer, Daniel; Martínez-Bartolomé, Salvador; Villar, Margarita; Campillos, Mónica; Martín-Maroto, Fernando; Vázquez, Jesús

    2004-12-01

    Recent technological advances have made multidimensional peptide separation techniques coupled with tandem mass spectrometry the method of choice for high-throughput identification of proteins. Due to these advances, the development of software tools for large-scale, fully automated, unambiguous peptide identification is highly necessary. In this work, we have used as a model the nuclear proteome from Jurkat cells and present a processing algorithm that allows accurate predictions of random matching distributions, based on the two SEQUEST scores Xcorr and DeltaCn. Our method permits a very simple and precise calculation of the probabilities associated with individual peptide assignments, as well as of the false discovery rate among the peptides identified in any experiment. A further mathematical analysis demonstrates that the score distributions are highly dependent on database size and precursor mass window and suggests that the probability associated with SEQUEST scores depends on the number of candidate peptide sequences available for the search. Our results highlight the importance of adjusting the filtering criteria to discriminate between correct and incorrect peptide sequences according to the circumstances of each particular experiment.
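
    A schematic illustration of the probability and false-discovery-rate bookkeeping described above, not the authors' fitted distributions: if the score distribution of random (incorrect) matches is known, the chance that an assignment exceeds a given Xcorr threshold, and the expected false discovery rate among accepted identifications, follow directly. The null parameters and score mixture below are invented for the example.

        import numpy as np

        rng = np.random.default_rng(5)

        # Assumed null model for random-match Xcorr scores (hypothetical parameters)
        null_mean, null_std = 1.2, 0.35
        null_scores = rng.normal(null_mean, null_std, 100_000)

        # Simulated experiment: a mixture of incorrect and correct assignments
        observed = np.concatenate([
            rng.normal(null_mean, null_std, 9_000),   # incorrect matches
            rng.normal(2.8, 0.5, 1_000),              # correct matches
        ])

        threshold = 2.2
        accepted = observed[observed >= threshold]

        # Per-assignment probability of a random match above the threshold, and a
        # conservative FDR estimate (treating every assignment as potentially random)
        p_random = float(np.mean(null_scores >= threshold))
        fdr = p_random * len(observed) / max(len(accepted), 1)
        print(f"P(random match >= {threshold}) = {p_random:.4f}")
        print(f"accepted: {len(accepted)}, estimated FDR: {fdr:.2%}")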

  16. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) Climate change impact assessments on water resources and dynamics; (ii) The European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) Design variables for infrastructure constructions; (iv) Spatial water-resource mapping; (v) Operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) Input to oceanographic models for operational forecasts and marine status assessments; (vii) Research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The Hype web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  17. A large-scale methane model by incorporating the surface water transport

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoliang; Zhuang, Qianlai; Liu, Yaling; Zhou, Yuyu; Aghakouchak, Amir

    2016-06-01

    The effect of surface water movement on methane emissions is not explicitly considered in most of the current methane models. In this study, a surface water routing scheme was coupled into our previously developed large-scale methane model. The revised methane model was then used to simulate global methane emissions during 2006-2010. From our simulations, the global mean annual maximum inundation extent is 10.6 ± 1.9 km2 and the methane emission is 297 ± 11 Tg C/yr in the study period. In comparison to the currently used TOPMODEL-based approach, we found that the incorporation of surface water routing leads to a 24.7% increase in the annual maximum inundation extent and a 30.8% increase in the methane emissions at the global scale for the study period. The effect of surface water transport on methane emissions varies in different regions: (1) the largest difference occurs in flat and moist regions, such as Eastern China; (2) high-latitude regions, hot spots in methane emissions, show a small increase in both inundation extent and methane emissions with the consideration of surface water movement; and (3) in arid regions, the new model yields significantly larger maximum flooded areas and a relatively small increase in the methane emissions. Although surface water is a small component in the terrestrial water balance, it plays an important role in determining inundation extent and methane emissions, especially in flat regions. This study indicates that future quantification of methane emissions should consider the effects of surface water transport.

  18. Large scale nutrient modelling using globally available datasets: A test for the Rhine basin

    NASA Astrophysics Data System (ADS)

    Loos, Sibren; Middelkoop, Hans; van der Perk, Marcel; van Beek, Rens

    2009-05-01

    Nutrient discharge to coastal waters from rivers draining populated areas can cause vast algal blooms. Changing conditions in the drainage basin, such as land use change or climate-induced changes in hydrology, may alter riverine nitrogen (N) and phosphorus (P) fluxes and further increase the pressure on coastal water quality. Several large scale models have been employed to quantify riverine nutrient fluxes on a yearly to decadal timescale. Seasonal variation of these fluxes, governed by internal nutrient transformations and attenuation, is often larger than the inter-annual variation and may contain crucial information on nutrient transfer through river basins, and should therefore not be overlooked. In the last decade the increasing availability of global datasets at fine resolutions has enabled the modelling of multiple basins using a coherent dataset. Furthermore, the use of global datasets will aid global change impact assessment. We developed a new model, RiNUX, to adequately simulate present and future river nutrient loads in large river basins. The RiNUX model captures the intra-annual variation at the basin scale in order to provide more accurate estimates of future nutrient loads in response to global change. With an incorporated dynamic sediment flux model, the particulate nutrient loads can be assessed. It is concluded that the RiNUX model provides a powerful, spatially and temporally explicit tool to estimate intra-annual variations in riverine nutrient loads in large river basins. The model was calibrated using the detailed RHIN dataset and its overall efficiency was tested using a coarser dataset, GLOB, for the Rhine basin. Using the RHIN dataset, the seasonally variable nutrient load at the river outlet can be satisfactorily modelled for both total N (E = 0.50) and total P (E = 0.47). The largest prediction errors occur in estimating high TN loads. When using the GLOB dataset, the model efficiency is lower for TN (E = 0.12), due to overestimated
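
    The efficiencies quoted above (E = 0.50, 0.47, 0.12) are presumably Nash-Sutcliffe efficiencies, the usual skill score for simulated load or discharge series; a minimal sketch of that computation on made-up monthly loads (the numbers below are illustrative only):

        import numpy as np

        def nash_sutcliffe(observed, simulated):
            """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
            E = 1 is a perfect fit; E <= 0 means the model is no better than the
            mean of the observations."""
            observed = np.asarray(observed, dtype=float)
            simulated = np.asarray(simulated, dtype=float)
            return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

        # Made-up monthly total-N loads [kt/month], for illustration only
        obs = [12.1, 10.4, 9.8, 7.5, 6.2, 5.9, 5.1, 5.6, 6.8, 8.9, 10.7, 11.6]
        sim = [11.0, 10.9, 8.7, 8.1, 6.6, 5.0, 5.5, 5.2, 7.4, 8.1, 11.5, 12.3]
        print(f"E = {nash_sutcliffe(obs, sim):.2f}")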

  19. Large-scale mapping and predictive modeling of submerged aquatic vegetation in a shallow eutrophic lake.

    PubMed

    Havens, Karl E; Harwell, Matthew C; Brady, Mark A; Sharfstein, Bruce; East, Therese L; Rodusky, Andrew J; Anson, Daniel; Maki, Ryan P

    2002-04-01

    A spatially intensive sampling program was developed for mapping the submerged aquatic vegetation (SAV) over an area of approximately 20,000 ha in a large, shallow lake in Florida, U.S. The sampling program integrates Geographic Information System (GIS) technology with traditional field sampling of SAV and has the capability of producing robust vegetation maps under a wide range of conditions, including high turbidity, variable depth (0 to 2 m), and variable sediment types. Based on sampling carried out in August-September 2000, we measured 1,050 to 4,300 ha of vascular SAV species and approximately 14,000 ha of the macroalga Chara spp. The results were similar to those reported in the early 1990s, when the last large-scale SAV sampling occurred. Occurrence of Chara was strongly associated with peat sediments, and maximal depths of occurrence varied between sediment types (mud, sand, rock, and peat). A simple model of Chara occurrence, based only on water depth, had an accuracy of 55%. It predicted occurrence of Chara over large areas where the plant actually was not found. A model based on sediment type and depth had an accuracy of 75% and produced a spatial map very similar to that based on observations. While this approach needs to be validated with independent data in order to test its general utility, we believe it may have application elsewhere. The simple modeling approach could serve as a coarse-scale tool for evaluating effects of water level management on Chara populations.
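
    A minimal sketch of the kind of rule-based occurrence model and accuracy check described above is given below. The survey records, sediment classes and depth limits are invented for illustration and are not the values reported in the study.

        import numpy as np

        # Hypothetical survey records: (water depth in m, sediment type, Chara observed?)
        records = [
            (0.4, "peat", True), (0.8, "peat", True), (1.6, "peat", False),
            (0.5, "mud",  False), (1.1, "sand", False), (0.3, "rock", False),
            (0.9, "peat", True), (1.9, "mud",  False),
        ]

        # Illustrative maximum depth of occurrence per sediment type (not the paper's values).
        max_depth = {"peat": 1.4, "mud": 0.6, "sand": 0.8, "rock": 0.4}

        def predict_chara(depth, sediment):
            """Rule-based prediction: Chara present if shallower than the
            sediment-specific depth limit (a depth-only model would use one limit)."""
            return depth <= max_depth[sediment]

        predictions = [predict_chara(d, s) for d, s, _ in records]
        observed = [obs for _, _, obs in records]
        accuracy = np.mean([p == o for p, o in zip(predictions, observed)])
        print(f"accuracy = {accuracy:.0%}")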

  20. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    NASA Astrophysics Data System (ADS)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues; it poses risks to food security, the economy, and public health, and is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water has increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context, not only connection to a sewage system is of high importance, but also treatment, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g. bathing and washing. It is thus clear that water utilization and sewerage are inseparably connected. In this study, large scale water quality modelling is used to point out hotspots of water pollution and to gain insight into potential environmental impacts, in particular in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus of this study is on developing countries, i.e. Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and the main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the initial loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to its dilution potential, especially in terms of seasonal variability. Highly varying shares of BOD sources across
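
    The abstract stresses that in-stream concentrations reflect loadings diluted by river discharge. The sketch below illustrates that mass balance in its simplest form; it is a dilution calculation, not the WorldQual routing scheme, and all names and numbers are illustrative.

        def instream_bod_concentration(point_load_kg_day, diffuse_load_kg_day,
                                       discharge_m3_s, decay_fraction=0.0):
            """First-cut in-stream BOD concentration (mg/L) from loads and discharge.

            Concentration = remaining load / water volume per day; the optional
            decay_fraction crudely stands in for in-stream degradation.
            """
            total_load_mg = (point_load_kg_day + diffuse_load_kg_day) * 1e6   # kg -> mg
            remaining_mg = total_load_mg * (1.0 - decay_fraction)
            volume_l = discharge_m3_s * 86_400 * 1_000                        # m3/s over a day -> litres
            return remaining_mg / volume_l

        # A densely populated reach in the dry season versus the wet season:
        print(instream_bod_concentration(5_000, 1_000, discharge_m3_s=20))   # low flow, high concentration
        print(instream_bod_concentration(5_000, 1_000, discharge_m3_s=400))  # high flow, strong dilution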

  1. Large Scale Terrestrial Modeling: A Discussion of Technical and Conceptual Challenges and Solution Approaches

    NASA Astrophysics Data System (ADS)

    Rahman, M.; Aljazzar, T.; Kollet, S.; Maxwell, R.

    2012-04-01

    A number of simulation platforms have been developed to study the spatiotemporal variability of hydrologic responses to global change. Sophisticated terrestrial models demand large data sets and considerable computing resources as they attempt to include detailed physics for all relevant processes involving the feedbacks between subsurface, land surface and atmospheric processes. Scarcity, error and uncertainty of the required data; allocation of computing resources; and post-processing/analysis are some of the well-known challenges, and have been discussed in previous studies dealing with catchments ranging from plot-scale research (10^2 m2) to small experimental catchments (0.1-10 km2), and occasionally medium-sized catchments (10^2-10^3 km2). However, there is still a lack of knowledge about large-scale simulations of the coupled terrestrial mass and energy balance over long time scales (years to decades). In this study, the interactions between the subsurface, land surface, and the atmosphere are simulated in two large scale (>10^4 km2) river catchments: the Luanhe catchment in the North China Plain and the Rur catchment in Germany. As a simulation platform, a fully coupled model (ParFlow.CLM) that links a three-dimensional variably-saturated groundwater flow model (ParFlow) with a land surface model (CLM) is used. The Luanhe and the Rur catchments have areas of 54,000 and 28,224 km2, respectively, and are being simulated using spatial resolutions on the order of 10^2 to 10^3 m in the horizontal and 10^-2 to 10^-1 m in the vertical direction. ParFlow.CLM was configured over computational domains well beyond the actual watershed boundaries to account for cross-watershed flow. The resulting catchment models consist of up to 10^8 cells, which were implemented over more than 1,000 processors, each with 512 MB of memory, on JUGENE hosted by the Juelich Supercomputing Centre, Germany. Consequently, large numbers of input and output files were produced for each parameter such as soil

  2. Large-scale modeling of reactive solute transport in fracture zones of granitic bedrocks.

    PubMed

    Molinero, Jorge; Samper, Javier

    2006-01-10

    Final disposal of high-level radioactive waste in deep repositories located in fractured granite formations is being considered by several countries. The assessment of the safety of such repositories requires using numerical models of groundwater flow, solute transport and chemical processes. These models are being developed from data and knowledge gained from in situ experiments such as the Redox Zone Experiment carried out at the underground laboratory of Aspö in Sweden. This experiment aimed at evaluating the effects of the construction of the access tunnel on the hydrogeological and hydrochemical conditions of a fracture zone intersected by the tunnel. Most chemical species showed dilution trends except for bicarbonate and sulphate, which unexpectedly increased with time. Molinero and Samper [Molinero, J. and Samper, J. Groundwater flow and solute transport in fracture zones: an improved model for a large-scale field experiment at Aspö (Sweden). J. Hydraul. Res., 42, Extra Issue, 157-172] presented a two-dimensional water flow and solute transport finite element model which reproduced measured drawdowns and dilution curves of conservative species. Here we extend their model by using a reactive transport model which accounts for aqueous complexation, acid-base, redox processes, dissolution-precipitation of calcite, quartz, hematite and pyrite, and cation exchange between Na+ and Ca2+. The model provides field-scale estimates of cation exchange capacity of the fracture zone and redox potential of groundwater recharge. It also serves to identify the mineral phases controlling the solubility of iron. In addition, the model is useful to test the relevance of several geochemical processes. Model results rule out calcite dissolution as the process causing the increase in bicarbonate concentration and reject the following possible sources of sulphate: (1) pyrite dissolution, (2) leaching of alkaline sulphate-rich waters from a nearby rock landfill and (3) dissolution of

  3. Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.

    1997-01-01

    The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from

  4. Scale-down model to simulate spatial pH variations in large-scale bioreactors.

    PubMed

    Amanullah, A; McFarlane, C M; Emery, A N; Nienow, A W

    2001-06-01

    For the first time a laboratory-scale two-compartment system was used to investigate the effects on microorganisms of pH fluctuations consequent to large scales of operation. pH fluctuations can develop in production-scale fermenters as a consequence of the combined effects of poor mixing and adding concentrated reagents at the liquid surface for control of the bulk pH. Bacillus subtilis was used as a model culture since, in addition to its sensitivity to dissolved oxygen levels, the production of the metabolites acetoin and 2,3-butanediol is sensitive to pH values between 6.5 and 7.2. The scale-down model consisted of a stirred tank reactor (STR) and a recycle loop containing a plug flow reactor (PFR), with the pH in the stirred tank being maintained at 6.5 by addition of alkali in the loop. Different residence times in the loop simulated the exposure time of fluid elements to high values of pH in the vicinity of the addition point in large bioreactors, and tracer experiments were performed to characterise the residence time distribution in the loop. Since the culture was sensitive to dissolved oxygen, for each experiment with pH control by adding base into the PFR, equivalent experiments were conducted with pH control by addition of base into the STR, thus ensuring that any dissolved oxygen effects were common to both types of experiments. The present study indicates that although biomass concentration remained unaffected by pH variations, product formation was influenced by residence times in the PFR of 60 s or longer. These changes in metabolism are thought to be linked to both the sensitivity of the acetoin and 2,3-butanediol-forming enzymes to pH and to the inducing effects of dissociated acetate on the acetolactate synthase enzyme.
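
    The residence time distribution of the recycle loop mentioned above is normally characterised from a pulse-tracer response. The sketch below shows the standard normalisation of C(t) into E(t) and the resulting mean residence time; the tracer curve is synthetic and not data from the study.

        import numpy as np

        def residence_time_distribution(t, tracer_conc):
            """Normalise a pulse-tracer response C(t) into the RTD E(t) and return
            E(t) together with the mean residence time tau = integral of t*E(t) dt."""
            t = np.asarray(t, dtype=float)
            c = np.asarray(tracer_conc, dtype=float)
            dt = np.diff(t)
            trapz = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * dt))  # trapezoidal rule
            e = c / trapz(c)
            return e, trapz(t * e)

        # Synthetic pulse response of the recycle loop (illustrative numbers only).
        t = np.linspace(0.0, 300.0, 301)                   # s
        c = np.exp(-0.5 * ((t - 60.0) / 12.0) ** 2)        # tracer concentration, a.u.
        _, tau = residence_time_distribution(t, c)
        print(f"mean residence time ~ {tau:.0f} s")        # ~60 s exposure to high pH near the feed point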

  5. Poyang Lake basin: a successful, large-scale integrated basin management model for developing countries.

    PubMed

    Chen, Meiqiu; Wei, Xiaohua; Huang, Hongsheng; Lü, Tiangui

    2011-01-01

    Protecting the water environment while developing the socio-economy is a challenging task for lake regions of many developing countries. Poyang Lake is the largest freshwater lake in China, with a total drainage area of 160,000 km2. In spite of the rapid socio-economic development of the Poyang Lake region in the past several decades, water in Poyang Lake is of good quality and is known as the "last pot of clear water" of the Yangtze River Basin in China. In this paper, the reasons why Poyang Lake remains the "last pot of clear water" were analysed to demonstrate how economic development and environmental protection can be coordinated. There are three main reasons contributing to this coordinated development: 1) the unique geomorphologic features of Poyang Lake and the short water residence time; 2) the matching of the basin physical boundary with the administrative boundary; and 3) the implementation of the "Mountain-River-Lake Program" (MRL), with the ecosystem concept of "mountain as source, river as connection flow, and lake as storage". In addition, a series of actions have been taken to coordinate development, utilisation, management and protection in the Poyang Lake basin. Our key experiences are: considering all basin components when focusing on lake environment protection is a guiding principle; raising the living standard of people through implementation of various eco-economic projects or models in the basin is the most important strategy; preventing soil and water erosion is critical for protecting water sources; and establishing an effective governance mechanism for basin management is essential. This successful, large-scale basin management model can be extended to any basin or lake regions of developing countries where both environmental protection and economic development are needed and coordinated.

  6. Metabolic Flux Elucidation for Large-Scale Models Using 13C Labeled Isotopes

    PubMed Central

    Suthers, Patrick F.; Burgard, Anthony P.; Dasika, Madhukar S.; Nowroozi, Farnaz; Van Dien, Stephen; Keasling, Jay D.; Maranas, Costas D.

    2007-01-01

    A key consideration in metabolic engineering is the determination of fluxes of the metabolites within the cell. This determination provides an unambiguous description of metabolism before and/or after engineering interventions. Here, we present a computational framework that combines a constraint-based modeling framework with isotopic label tracing on a large-scale. When cells are fed a growth substrate with certain carbon positions labeled with 13C, the distribution of this label in the intracellular metabolites can be calculated based on the known biochemistry of the participating pathways. Most labeling studies focus on skeletal representations of central metabolism and ignore many flux routes that could contribute to the observed isotopic labeling patterns. In contrast, our approach investigates the importance of carrying out isotopic labeling studies using a more comprehensive reaction network consisting of 350 fluxes and 184 metabolites in Escherichia coli including global metabolite balances on cofactors such as ATP, NADH, and NADPH. The proposed procedure is demonstrated on an E. coli strain engineered to produce amorphadiene, a precursor to the anti-malarial drug artemisinin. The cells were grown in continuous culture on glucose containing 20% [U-13C]glucose; the measurements are made using GC-MS performed on 13 amino acids extracted from the cells. We identify flux distributions for which the calculated labeling patterns agree well with the measurements alluding to the accuracy of the network reconstruction. Furthermore, we explore the robustness of the flux calculations to variability in the experimental MS measurements, as well as highlight the key experimental measurements necessary for flux determination. Finally, we discuss the effect of reducing the model, as well as shed light onto the customization of the developed computational framework to other systems. PMID:17632026

  7. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    PubMed Central

    Jarosch, Robert

    2008-01-01

    This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force-deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i. e. latency relaxation, Fenn-effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc. are explained and interpreted with the help of the model experiments. PMID:19330099

  8. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set are then quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there is also considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and sensible
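
    The validation metric described above is a root-mean-square error aggregated over space and time. A minimal sketch of a latitude-weighted RMSE between a simulated and a reference field is given below; it illustrates the idea only and is not the subproject's exact aggregation procedure.

        import numpy as np

        def area_weighted_rmse(model_field, reference_field, lat_deg):
            """Latitude-weighted RMSE between two (lat, lon) fields.

            Grid boxes are weighted by cos(latitude) so that high-latitude rows,
            which cover less area, contribute proportionally less to the score.
            """
            model = np.asarray(model_field, dtype=float)
            ref = np.asarray(reference_field, dtype=float)
            w = np.cos(np.deg2rad(np.asarray(lat_deg, dtype=float)))[:, None]
            w = np.broadcast_to(w, model.shape)
            mse = np.sum(w * (model - ref) ** 2) / np.sum(w)
            return float(np.sqrt(mse))

        # Tiny synthetic example: a 3-degree bias in surface air temperature at high latitudes.
        lat = np.array([-60.0, 0.0, 60.0])
        ref = np.full((3, 4), 288.0)
        sim = ref.copy()
        sim[[0, 2], :] += 3.0
        print(area_weighted_rmse(sim, ref, lat))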

  9. Using stochastically-generated subcolumns to represent cloud structure in a large-scale model

    SciTech Connect

    Pincus, R; Hemler, R; Klein, S A

    2005-12-08

    A new method for representing subgrid-scale cloud structure, in which each model column is decomposed into a set of subcolumns, has been introduced into the Geophysical Fluid Dynamics Laboratory's global climate model AM2. Each subcolumn in the decomposition is homogeneous but the ensemble reproduces the initial profiles of cloud properties including cloud fraction, internal variability (if any) in cloud condensate, and arbitrary overlap assumptions that describe vertical correlations. These subcolumns are used in radiation and diagnostic calculations, and have allowed the introduction of more realistic overlap assumptions. This paper describes the impact of these new methods for representing cloud structure in instantaneous calculations and long-term integrations. Shortwave radiation computed using subcolumns and the random overlap assumption differs in the global annual average by more than 4 W/m2 from the operational radiation scheme in instantaneous calculations; much of this difference is counteracted by a change in the overlap assumption to one in which overlap varies continuously with the separation distance between layers. Internal variability in cloud condensate, diagnosed from the mean condensate amount and cloud fraction, has about the same effect on radiative fluxes as does the ad hoc tuning accounting for this effect in the operational radiation scheme. Long simulations with the new model configuration show little difference from the operational model configuration, while statistical tests indicate that the model does not respond systematically to the sampling noise introduced by the approximate radiative transfer techniques introduced to work with the subcolumns.
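
    A minimal sketch of the subcolumn idea under the random overlap assumption is given below: each layer of each subcolumn is set cloudy or clear independently with probability equal to the layer cloud fraction, so the ensemble reproduces the input profile. The AM2 generator additionally handles condensate variability and separation-distance-dependent overlap, which this sketch omits.

        import numpy as np

        def generate_subcolumns_random_overlap(cloud_fraction, n_subcolumns, seed=0):
            """Draw binary cloudy/clear subcolumns from a cloud-fraction profile.

            Under random overlap each layer is sampled independently, so the
            ensemble mean reproduces the input cloud fraction layer by layer,
            but vertical correlations between layers are zero.  Returns an
            array of shape (n_subcolumns, n_layers) of 0/1 cloud flags.
            """
            rng = np.random.default_rng(seed)
            cf = np.asarray(cloud_fraction, dtype=float)
            return (rng.random((n_subcolumns, cf.size)) < cf).astype(int)

        profile = np.array([0.0, 0.1, 0.4, 0.7, 0.3])   # cloud fraction per model layer
        cols = generate_subcolumns_random_overlap(profile, n_subcolumns=10_000)
        print(cols.mean(axis=0))   # converges to the input profile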

  10. Large Scale Numerical Modelling to Study the Dispersion of Persistent Toxic Substances Over Europe

    NASA Astrophysics Data System (ADS)

    Aulinger, A.; Petersen, G.

    2003-12-01

    For the past two decades environmental research at the GKSS Research Centre has been concerned with airborne pollutants with adverse effects on human health. The research was mainly focused on investigating the dispersion and deposition of heavy metals like lead and mercury over Europe by means of numerical modelling frameworks. Lead, in particular, served as a model substance to study the relationship between emissions and human exposure. The major source of airborne lead in Germany was fuel combustion until the 1980s, when its use as a gasoline additive declined due to political decisions. Since then, the concentration of lead in ambient air and the deposition rates decreased in the same way as the consumption of leaded fuel. These observations could further be related to the decrease of lead concentrations in human blood measured during medical studies in several German cities. Based on the experience with models for heavy metal transport and deposition, we have now started to turn our research focus to organic substances, e.g. PAHs. PAHs have been recognized as significant airborne carcinogens for several decades. However, it is not yet possible to precisely quantify the risk of human exposure to those compounds. Physical and chemical data, known from literature, describing the partitioning of the compounds between particle and gas phase and their degradation in the gas phase are implemented in a tropospheric chemistry module. In this way, the fate of PAHs in the atmosphere due to different particle type and size and different meteorological conditions is tested before carrying out large-scale and long-time studies. First model runs have been carried out for Benzo(a)Pyrene as one of the principal carcinogenic PAHs. Up to now, nearly nothing is known about degradation reactions of particle-bound BaP. Thus, they could not be taken into account in the model so far. On the other hand, the proportion of BaP in the gas phase has to be considered at higher ambient
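
    Gas-particle partitioning of semi-volatile PAHs is often described in the literature with the Junge-Pankow relation; the sketch below shows that relation as one plausible way such partitioning data can be implemented. Whether the GKSS chemistry module uses this particular formulation is an assumption, and the parameter values are illustrative.

        def junge_pankow_particle_fraction(p_l_pa, theta_cm2_per_cm3, c_junge=17.2):
            """Fraction of a semi-volatile compound bound to particles.

            Junge-Pankow: phi = c*theta / (p_L + c*theta), with p_L the sub-cooled
            liquid vapour pressure (Pa), theta the aerosol surface area per volume
            of air (cm2/cm3), and c an empirical constant (~17.2 Pa cm).
            """
            return c_junge * theta_cm2_per_cm3 / (p_l_pa + c_junge * theta_cm2_per_cm3)

        # Benzo(a)pyrene has a very low vapour pressure, so it is predicted to be
        # almost entirely particle-bound for typical background aerosol loadings.
        print(junge_pankow_particle_fraction(p_l_pa=7e-7, theta_cm2_per_cm3=1.0e-6))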

  11. Comparing wave shoaling methods used in large-scale coastal evolution modeling

    NASA Astrophysics Data System (ADS)

    Limber, P. W.; Adams, P. N.; Murray, A.

    2013-12-01

    output where wave height is approximately one-half of the water depth (a standard wave breaking threshold). The goal of this modeling exercise is to understand under what conditions a simple wave model is sufficient for simulating coastline evolution, and when using a more complex shoaling routine can optimize a coastline model. The Coastline Evolution Model (CEM; Ashton and Murray, 2006) is used to show how different shoaling routines affect modeled coastline behavior. The CEM currently includes the most basic wave shoaling approach to simulate cape and spit formation. We will instead couple it to SWAN, using the insight from the comprehensive wave model (above) to guide its application. This will allow waves transformed over complex bathymetry, such as cape-associated shoals and ridges, to be input for the CEM so that large-scale coastline behavior can be addressed in less idealized environments. Ashton, A., and Murray, A.B., 2006, High-angle wave instability and emergent shoreline shapes: 1. Modeling of sand waves, flying spits, and capes: Journal of Geophysical Research, v. 111, p. F04011, doi:10.1029/2005JF000422.
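
    A minimal sketch of the "most basic" shoaling approach referred to above, linear-theory shoaling by conservation of energy flux with waves flagged as broken once their height reaches half the local depth, is given below. It is not the CEM or SWAN implementation, and the example wave parameters are illustrative.

        import numpy as np

        GRAV = 9.81

        def wavenumber(period_s, depth_m, n_iter=30):
            """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) with Newton's method."""
            omega = 2.0 * np.pi / period_s
            k = omega ** 2 / GRAV                              # deep-water first guess
            for _ in range(n_iter):
                th = np.tanh(k * depth_m)
                f = GRAV * k * th - omega ** 2
                dfdk = GRAV * (th + k * depth_m * (1.0 - th ** 2))
                k -= f / dfdk
            return k

        def group_velocity(period_s, depth_m):
            """Linear-theory group velocity cg = 0.5*c*(1 + 2kh/sinh(2kh))."""
            k = wavenumber(period_s, depth_m)
            c = (2.0 * np.pi / period_s) / k
            kh = k * depth_m
            return 0.5 * c * (1.0 + 2.0 * kh / np.sinh(2.0 * kh))

        def shoal_height(h0_m, period_s, depth0_m, depth_m, gamma=0.5):
            """Shoal a wave height by conserving energy flux (H^2 * cg = const.),
            capping it at the breaking limit H = gamma * depth."""
            h = h0_m * np.sqrt(group_velocity(period_s, depth0_m) / group_velocity(period_s, depth_m))
            return min(h, gamma * depth_m)

        # A 1.5 m, 10 s swell shoaling from 50 m depth toward the breakpoint:
        for depth in (50.0, 20.0, 10.0, 5.0, 3.0, 2.0):
            print(f"depth {depth:4.0f} m -> H = {shoal_height(1.5, 10.0, 50.0, depth):.2f} m")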

  12. Transport and fate of the herbicide diclofop-methyl in a large-scale physical model

    NASA Astrophysics Data System (ADS)

    Lawrence, J. R.; Hendry, M. J.; Zanyk, B. N.; Wolfaardt, G. M.

    1995-07-01

    penetration of diclofop below the rooting zone. Further, the diclofop rapidly dissipated and degraded at all depths in the unsaturated zone. In addition, the results show that these large-scale physical models can exhibit variability similar to that observed at field scale.

  13. Mathematical model of influenza A virus production in large-scale microcarrier culture.

    PubMed

    Möhler, Lars; Flockerzi, Dietrich; Sann, Heiner; Reichl, Udo

    2005-04-01

    A mathematical model that describes the replication of influenza A virus in animal cells in large-scale microcarrier culture is presented. The virus is produced in a two-step process, which begins with the growth of adherent Madin-Darby canine kidney (MDCK) cells. After several washing steps serum-free virus maintenance medium is added, and the cells are infected with equine influenza virus (A/Equi 2 (H3N8), Newmarket 1/93). A time-delayed model is considered that has three state variables: the number of uninfected cells, infected cells, and free virus particles. It is assumed that uninfected cells adsorb the virus added at the time of infection. The infection rate is proportional to the number of uninfected cells and free virions. Depending on multiplicity of infection (MOI), not necessarily all cells are infected by this first step leading to the production of free virions. Newly produced viruses can infect the remaining uninfected cells in a chain reaction. To follow the time course of virus replication, infected cells were stained with fluorescent antibodies. Quantitation of influenza viruses by a hemagglutination assay (HA) enabled the estimation of the total number of new virions produced, which is relevant for the production of inactivated influenza vaccines. It takes about 4-6 h before visibly infected cells can be identified on the microcarriers followed by a strong increase in HA titers after 15-16 h in the medium. Maximum virus yield Vmax was about 1x10^10 virions/mL (2.4 log HA units/100 microL), which corresponds to a burst size ratio of about 18,755 virus particles produced per cell. The model tracks the time course of uninfected and infected cells as well as virus production. It suggests that small variations (<10%) in initial values and specific rates do not have a significant influence on Vmax. The main parameters relevant for the optimization of virus antigen yields are specific virus replication rate and specific cell death rate due to infection
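
    The model described above is time-delayed; the sketch below shows a simplified, non-delayed version of the same three-state structure (uninfected cells, infected cells, free virions) with an infection rate proportional to the product of uninfected cells and free virions. Parameter values are illustrative and are not the fitted values from the study.

        from scipy.integrate import solve_ivp

        def rhs(t, y, k_inf, k_rel, delta, c_clear):
            """Simplified (non-delayed) three-state model: uninfected cells U,
            infected cells I and free virions V, with infection rate k_inf*U*V."""
            u, i, v = y
            infection = k_inf * u * v
            return [-infection,
                    infection - delta * i,
                    k_rel * i - infection - c_clear * v]

        # Illustrative rate constants (per hour); not the paper's fitted values.
        params = (1e-9, 50.0, 0.05, 0.1)          # k_inf, k_rel, delta, c_clear
        y0 = [5e9, 0.0, 5e8]                      # U, I, V at the time of infection (MOI ~ 0.1)
        sol = solve_ivp(rhs, (0.0, 72.0), y0, args=params, max_step=0.1)
        print(f"peak free virion count ~ {sol.y[2].max():.2e}")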

  14. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…
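
    The small-world statistics named above (sparse connectivity, short average path lengths, strong clustering) can be computed with standard graph tools. The sketch below does so for a toy graph; it is a stand-in only and not the word-association, WordNet or Roget's networks analysed in the paper.

        import networkx as nx

        # Toy stand-in for a semantic network (the real networks have 10^4-10^5 nodes).
        g = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.1, seed=42)

        print(f"average clustering   = {nx.average_clustering(g):.3f}")
        print(f"average path length  = {nx.average_shortest_path_length(g):.2f}")
        print(f"edge density         = {nx.density(g):.4f}")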

  15. Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…

  16. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    NASA Astrophysics Data System (ADS)

    Güntner, Andreas

    2002-07-01

    the framework of an integrated model which contains modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture in particular the effect of structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, such as reinfiltration of surface runoff, which is of particular importance in semi-arid environments, can thus also be represented within the large-scale model in a simplified form. Depending on the resolution of available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but by the distribution of sub-scale units and by statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa that reflect specific features of semi-arid hydrology are: (1) A two-layer model for evapotranspiration comprises energy transfer at the soil surface (including soil evaporation), which is of importance in view of the mainly sparse vegetation cover. Additionally, vegetation parameters are differentiated in space and time depending on the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach that accounts for different reservoir size classes and their interaction via the river network is applied. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa. (5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied

  17. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling at Kitware Inc. in collaboration with Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  18. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  19. Modeling the MJO rain rates using parameterized large scale dynamics: vertical structure, radiation, and horizontal advection of dry air

    NASA Astrophysics Data System (ADS)

    Wang, S.; Sobel, A. H.; Nie, J.

    2015-12-01

    Two Madden Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics --- the conventional weak temperature gradient (WTG) approximation, vertical mode based spectral WTG (SWTG), and damped gravity wave coupling (DGW) --- are employed. The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother with a peak in midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of large-scale vertical motion during the MJO active phases, while experiments in which the effect of clouds on radiation is disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments in which interactive radiation is used produce poorer agreement with observations than those with imposed time-varying radiative heating. Our results highlight the

  20. LARGE-SCALE CYCLOGENESIS, FRONTAL WAVES AND DUST ON MARS: MODELING AND DIAGNOSTIC CONSIDERATIONS

    NASA Astrophysics Data System (ADS)

    Hollingsworth, J.; Kahre, M.

    2009-12-01

    During late autumn through early spring, Mars’ northern middle and high latitudes exhibit very strong equator-to-pole mean temperature contrasts (i.e., baroclinicity). Data collected during the Viking era and recent observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) missions show that this strong baroclinicity supports vigorous large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These systems also have accompanying sub-synoptic scale ramifications on the atmospheric environment through cyclonic/anticyclonic winds, intense deformations and contractions/dilations in temperatures, and sharp perturbations amongst atmospheric tracers (e.g., dust and volatiles/condensates). Mars’ northern-hemisphere frontal waves can exhibit extended meridional structure, and appear to be active agents in the planet’s dust cycle. Their parent cyclones tend to develop, travel eastward, and decay preferentially within certain geographic regions (i.e., storm zones). We adapt a version of the NASA Ames Mars general circulation model (GCM) at high horizontal resolution that includes the lifting, transport and sedimentation of radiatively-active dust to investigate the nature of cyclogenesis and frontal-wave circulations (both horizontally and vertically), and regional dust transport and concentration within the atmosphere. Near late winter and early spring (Ls ˜ 320-350°), high-resolution simulations indicate that the predominant dust lifting occurs through wind-stress lifting, in particular over the Tharsis highlands of the western hemisphere and to a lesser extent over the Arabia highlands of the eastern hemisphere. The former region also indicates considerable interaction with regard to upslope/downslope (i.e., nocturnal) flows and the synoptic/subsynoptic-scale circulations associated with cyclogenesis whereby dust can be readily “focused” within a frontal-wave disturbance and carried downstream both

  1. Development of Large-Scale Forcing Data for GoAmazon2014/5 Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Tang, S.; Xie, S.; Zhang, Y.; Schumacher, C.; Upton, H. M.; Ahlgrimm, M.; Feng, Z.

    2015-12-01

    The Observations and Modeling of the Green Ocean 2014-2015 (GoAmazon2014/5) field campaign is an international collaborative experiment conducted near Manaus, Brazil from January 2014 through December 2015. This experiment is designed to enable the study of aerosols, tropical clouds, convection, and their interactions. To support modeling studies of these processes with data collected from the GoAmazon2014/5 campaign, we have developed large-scale forcing data (e.g., vertical velocities and advective tendencies) for the second intensive operational period (IOP) of GoAmazon2014/5 from 1 Sep to 10 Oct 2014. The method used in this study is the constrained variational analysis method, in which the large-scale state fields are constrained by the surface and top-of-atmosphere observations (e.g. surface precipitation and outgoing longwave radiation) to conserve column-integrated mass, moisture and dry static energy. To address potential uncertainties in the derived forcing data due to uncertainties in surface precipitation, two sets of large-scale forcing data are developed based on the ECMWF analysis constrained by two precipitation products, from the SIPAM radar and from TRMM 3B42, respectively. Our initial analysis shows large differences in these two precipitation products, which cause considerable differences in the derived large-scale forcing data. The sensitivity of the large-scale forcing data to other surface constraints, such as surface latent and sensible heat fluxes, will also be explored. The characteristics of the large-scale forcing structures for selected cases will be discussed.

  2. Path2Models: large-scale generation of computational models from biochemical pathway maps

    PubMed Central

    2013-01-01

    Background Systems biology projects and omics technologies have led to a growing number of biochemical pathway models and reconstructions. However, the majority of these models are still created de novo, based on literature mining and the manual processing of pathway data. Results To increase the efficiency of model creation, the Path2Models project has automatically generated mathematical models from pathway representations using a suite of freely available software. Data sources include KEGG, BioCarta, MetaCyc and SABIO-RK. Depending on the source data, three types of models are provided: kinetic, logical and constraint-based. Models from over 2,600 organisms are encoded consistently in SBML, and are made freely available through BioModels Database at http://www.ebi.ac.uk/biomodels-main/path2models. Each model contains the list of participants, their interactions, the relevant mathematical constructs, and initial parameter values. Most models are also available as easy-to-understand graphical SBGN maps. Conclusions To date, the project has resulted in more than 140,000 freely available models. Such a resource can tremendously accelerate the development of mathematical models by providing initial starting models for simulation and analysis, which can be subsequently curated and further parameterized. PMID:24180668
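
    The generated models are distributed as SBML files, so they can be inspected with the python-libsbml bindings as sketched below. The file name is a placeholder for any model downloaded from BioModels Database.

        import libsbml  # pip install python-libsbml

        # Placeholder file name: any SBML model downloaded from BioModels Database.
        doc = libsbml.readSBML("path2models_example.xml")
        if doc.getNumErrors() > 0:
            doc.printErrors()

        model = doc.getModel()
        print(model.getId(), ":", model.getNumSpecies(), "species,",
              model.getNumReactions(), "reactions")

        # List each reaction as "reactants -> products".
        for i in range(model.getNumReactions()):
            reaction = model.getReaction(i)
            reactants = [reaction.getReactant(j).getSpecies() for j in range(reaction.getNumReactants())]
            products = [reaction.getProduct(j).getSpecies() for j in range(reaction.getNumProducts())]
            print(reaction.getId(), ":", " + ".join(reactants), "->", " + ".join(products))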

  3. CALF/JAST X-32 test program: the LSPM (Large Scale Powered Model), Lockheed's concept for a

    NASA Technical Reports Server (NTRS)

    1996-01-01

    CALF/JAST X-32 test program: the LSPM (Large Scale Powered Model), Lockheed's concept for a tri-service aircraft (Air Force, Navy, Marines) CALF (Common Affordable Lightweight Fighter) as part of the Department of Defense's Joint Advanced Strike Technology (JAST) is being tested in the 80x120ft w.t. test-930 with rear horizontal stabilizer

  4. Applying Multidimensional Item Response Theory Models in Validating Test Dimensionality: An Example of K-12 Large-Scale Science Assessment

    ERIC Educational Resources Information Center

    Li, Ying; Jiao, Hong; Lissitz, Robert W.

    2012-01-01

    This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…

  5. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp(2)/sp(3) hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  6. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  7. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp(2)/sp(3) hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms. PMID:26828313

  8. A large scale microwave emission model for forests. Contribution to the SMOS algorithm

    NASA Astrophysics Data System (ADS)

    Rahmoune, R.; Della Vecchia, A.; Ferrazzoli, P.; Guerriero, L.; Martin-Porqueras, F.

    2009-04-01

    It is well known that surface soil moisture plays an important role in the water cycle and the global climate. SMOS is an L-band multi-angle dual-polarization microwave radiometer for global monitoring of this variable. In the areas covered by forests, the opacity is relatively high, and the knowledge of moisture remains problematic. A significant percentage of SMOS pixels at global scale is affected by fractional forest. Whereas the effect of the vegetation can be corrected for thanks to a simple radiative model, in the case of dense forests the wave penetration is limited and the sensitivity to variations of soil moisture is poor. However, most of the pixels are mixed, and a reliable estimate of forest emissivity is important to retrieve the soil moisture of the areas less affected by forest cover. Moreover, there are many sparse woodlands, where the sensitivity to variations of soil moisture is still acceptable. At the scale of spaceborne radiometers, it is difficult to have a detailed knowledge of the variables which affect the overall emissivity. In order to manage these problems effectively, the electromagnetic model developed at Tor Vergata University was combined with information available from forest literature. Using allometric equations and other information, the geometrical and dielectric inputs required by the model were related to global variables available at large scale, such as the Leaf Area Index. This procedure is necessarily approximate. In a first version of the model, forest variables were assumed to be constant in time, and were simply related to the maximum yearly value of Leaf Area Index. Moreover, a single sparse distribution of trunk diameters was assumed. Finally, the temperature distribution within the crown canopy was assumed to be uniform. The model is being refined, in order to consider seasonal variations of foliage cover, subdivided into arboreous foliage and understory contributions. Different distributions of trunk diameter

  9. Comparing large-scale computational approaches to epidemic modeling: agent based versus structured metapopulation models

    NASA Astrophysics Data System (ADS)

    Gonçalves, Bruno; Ajelli, Marco; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José; Merler, Stefano; Vespignani, Alessandro

    2010-03-01

    We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the evolution of a baseline pandemic event in Italy. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high resolution census data worldwide, and integrating airline travel flow data with short range human mobility patterns at the global scale. Both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing of the order of a few days. The age breakdown analysis shows that similar attack rates are obtained for the younger age classes.
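
    A minimal sketch of a stochastic, mobility-coupled SIR metapopulation step in the spirit of the structured metapopulation approach is given below: chain-binomial epidemic transitions within each subpopulation followed by multinomial redistribution of individuals between subpopulations. The mobility matrix and rates are illustrative and are not GLEaM's data or parameters.

        import numpy as np

        rng = np.random.default_rng(1)

        def sir_metapop_step(S, I, R, beta, gamma, travel_prob):
            """One day of a stochastic SIR metapopulation model.

            S, I, R      : integer arrays, one entry per subpopulation
            beta, gamma  : transmission and recovery rates (per day)
            travel_prob  : row-stochastic matrix; travel_prob[i, j] is the daily
                           probability of an individual moving from i to j
            """
            N = np.maximum(S + I + R, 1)
            # Chain-binomial epidemic transitions within each subpopulation.
            new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
            new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

            def move(counts):
                out = np.zeros_like(counts)
                for i, n in enumerate(counts):
                    out += rng.multinomial(int(n), travel_prob[i])
                return out

            return move(S), move(I), move(R)

        # Two coupled subpopulations, the epidemic seeded in the first one.
        S = np.array([999_000, 500_000]); I = np.array([1_000, 0]); R = np.zeros(2, dtype=int)
        P = np.array([[0.99, 0.01], [0.02, 0.98]])
        for _ in range(60):
            S, I, R = sir_metapop_step(S, I, R, beta=0.4, gamma=0.2, travel_prob=P)
        print("infectious per subpopulation after 60 days:", I)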

  10. The Nature of Global Large-scale Sea Level Variability in Relation to Atmospheric Forcing: A Modeling Study

    NASA Technical Reports Server (NTRS)

    Fukumori, I.; Raghunath, R.; Fu, L. L.

    1996-01-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equaiton model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability from periods of days to a year, are examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.

  11. Testing LTB void models without the cosmic microwave background or large scale structure: new constraints from galaxy ages

    SciTech Connect

    Putter, Roland de; Verde, Licia; Jimenez, Raul E-mail: liciaverde@icc.ub.edu

    2013-02-01

    We present new observational constraints on inhomogeneous models based on observables independent of the CMB and large-scale structure. Using Bayesian evidence we find very strong evidence for the homogeneous LCDM model, thus disfavouring inhomogeneous models. Our new constraints are based on quantities independent of the growth of perturbations and rely on cosmic clocks based on atomic physics and on the local density of matter.

  12. Using cloud resolving model simulations of deep convection to inform cloud parameterizations in large-scale models

    SciTech Connect

    Klein, Stephen A.; Pincus, Robert; Xu, Kuan-man

    2003-06-23

    Cloud parameterizations in large-scale models struggle to address the significant non-linear effects of radiation and precipitation that arise from horizontal inhomogeneity in cloud properties at scales smaller than the grid box size of the large-scale models. Statistical cloud schemes provide an attractive framework to self-consistently predict the horizontal inhomogeneity in radiation and microphysics because the probability distribution function (PDF) of total water contained in the scheme can be used to calculate these non-linear effects. Statistical cloud schemes were originally developed for boundary layer studies so extending them to a global model with many different environments is not straightforward. For example, deep convection creates abundant cloudiness and yet little is known about how deep convection alters the PDF of total water or how to parameterize these impacts. These issues are explored with data from a 29 day simulation by a cloud resolving model (CRM) of the July 1997 ARM Intensive Observing Period at the Southern Great Plains site. The simulation is used to answer two questions: (a) how well can the beta distribution represent the PDFs of total water relative to saturation resolved by the CRM? (b) how can the effects of convection on the PDF be parameterized? In addition to answering these questions, additional sections more fully describe the proposed statistical cloud scheme and the CRM simulation and analysis methods.
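
    A minimal sketch of the diagnostic posed in question (a) is given below: fit a beta distribution to grid-point total water relative to saturation and compare the cloud fraction it implies (the probability of exceeding saturation) with the fraction resolved directly. The sample is synthetic, not the ARM Southern Great Plains CRM output.

        import numpy as np
        from scipy import stats

        # Synthetic stand-in for CRM grid-point total water relative to saturation
        # (r = q_t / q_sat) in one large-scale grid box; values > 1 are cloudy.
        rng = np.random.default_rng(7)
        r = np.clip(rng.normal(loc=0.9, scale=0.15, size=5000), 0.0, 1.6)

        # Fit a beta distribution on a bounded support covering the sample.
        lo, hi = 0.0, 1.6
        a, b, loc, scale = stats.beta.fit(r, floc=lo, fscale=hi - lo)

        cloud_fraction_crm = np.mean(r > 1.0)
        cloud_fraction_fit = 1.0 - stats.beta.cdf(1.0, a, b, loc=loc, scale=scale)
        print(f"CRM cloud fraction : {cloud_fraction_crm:.3f}")
        print(f"beta-PDF estimate  : {cloud_fraction_fit:.3f}")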

  13. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    NASA Astrophysics Data System (ADS)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production Model for California) and WEAP (Water Evaluation and Planning System), a climate-driven hydrological model. The integration of the models is performed using a step function approximation of water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. In order to do so, a modified version of SWAP was developed called SWEAP that has the Planning Area delimitations of WEAP, a Maximum Entropy Model to estimate evenly sized steps (tranches) of the derived water demand functions, and the translation of water tranches into crop land. In addition, a modified version of WEAP was created called ECONWEAP with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP vs. ECONWEAP as well as by an assessment of the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value compared to priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
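
    A minimal sketch of the step-function approximation mentioned above is given below: an inverse demand curve is split into equal-volume tranches, each assigned its average willingness to pay, so the tranches can be mapped onto a priority scheme. The demand curve and numbers are illustrative, not SWAP output.

        import numpy as np

        def demand_curve_to_tranches(price_fn, total_volume, n_tranches):
            """Discretise an inverse demand curve p(q) into equal-volume tranches.

            Returns (volume, average_price) pairs ordered from the highest-value
            tranche (served first under a priority scheme) to the lowest.
            """
            edges = np.linspace(0.0, total_volume, n_tranches + 1)
            tranches = []
            for q0, q1 in zip(edges[:-1], edges[1:]):
                q_mid = np.linspace(q0, q1, 50)
                tranches.append((q1 - q0, float(np.mean(price_fn(q_mid)))))
            return tranches

        # Illustrative inverse demand for irrigation water: value drops as more is applied.
        price = lambda q: 120.0 * np.exp(-q / 400.0)          # $/ML at cumulative use q (ML)
        for volume, avg_price in demand_curve_to_tranches(price, total_volume=1000.0, n_tranches=5):
            print(f"{volume:6.0f} ML at ~${avg_price:6.1f}/ML")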

  14. COST MINIMIZATION MODEL OF OCEANGOING CARRIERS ON A LARGE-SCALE INTERNATIONAL MARITIME CONTAINER SHIPPING NETWORK CONSIDERING CHARACTERISTICS OF PORTS

    NASA Astrophysics Data System (ADS)

    Shibasaki, Ryuichi; Watanabe, Tomihiro; Ieda, Hitoshi

    This paper deals with a cost minimization problem of oceangoing carriers on a large-scale network of the international maritime container shipping industry, in order to measure the impact of port policies for each country, including Japan. Specifically, the authors develop a model that decides which ports to call at and the containership size on each route for each ocean-going carrier group, taking into consideration the construction of deeper berths to accommodate containership enlargement, the decrease of various port charges per cargo achieved by concentrating cargo in one port, and congestion caused by excessive aggregation. The developed model is applied to the actual large-scale international maritime container shipping network in Eastern Asia, and its performance is validated. The sensitivity of the model output is also examined with respect to the economies and diseconomies of scale included in the model.

  15. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    PubMed

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting.

  16. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    PubMed

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting. PMID:25826692

  17. Computational Models of Consumer Confidence from Large-Scale Online Attention Data: Crowd-Sourcing Econometrics

    PubMed Central

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting. PMID:25826692
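
    As a rough sketch of how such a behavioral index can be assembled, the snippet below z-scores several search-volume time series, combines them with equal weights, and correlates the result with a survey-based confidence series. The equal weighting, the synthetic series, and the variable names are assumptions; the published C3I is constructed differently from real Google Trends data.

```python
import numpy as np

def confidence_index(search_volumes, weights=None):
    """Combine z-scored online search series into a single behavioral index.
    `search_volumes` has shape (n_terms, n_months). Weights are assumed equal
    here; the published index derives them from the data."""
    z = (search_volumes - search_volumes.mean(axis=1, keepdims=True)) \
        / search_volumes.std(axis=1, keepdims=True)
    if weights is None:
        weights = np.full(z.shape[0], 1.0 / z.shape[0])
    return weights @ z

# Hypothetical monthly search volumes for three query terms
rng = np.random.default_rng(1)
trend = np.linspace(0, 1, 36)
volumes = np.vstack([50 + 10 * trend + rng.normal(0, 2, 36) for _ in range(3)])
index = confidence_index(volumes)

# Correlate with a (hypothetical) survey-based consumer confidence series
survey = 100 + 5 * trend + rng.normal(0, 1, 36)
print("correlation with survey index:", np.corrcoef(index, survey)[0, 1])
```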

  18. Proposed damage evolution model for large-scale finite element modeling of the dual coolant US-ITER TBM

    NASA Astrophysics Data System (ADS)

    Sharafat, S.; El-Awady, J.; Liu, S.; Diegele, E.; Ghoniem, N. M.

    2007-08-01

    Large-scale finite element modeling (FEM) of the US Dual Coolant Lead Lithium ITER Test Blanket Module including damage evolution is under development. A comprehensive rate-theory based radiation damage creep deformation code was integrated with the ABAQUS FEM code. The advantage of this approach is that time-dependent in-reactor deformations and radiation damage can now be directly coupled with the 'material properties' of FEM analyses. The coupled FEM-creep damage model successfully simulated the simultaneous microstructure and stress evolution in small tensile test-bar structures. Applying the integrated creep/FEM code to large structures is still computationally prohibitive. Instead, for thermo-structural analysis of the DCLL TBM structure, the integrated FEM-creep damage model was used to develop the true stress-strain behavior of F82H ferritic steel. Based on this integrated damage evolution-FEM approach, it is proposed to use large-scale FEM analysis to identify and isolate critical stress areas for follow-up analysis using the detailed, fully integrated creep-FEM approach.
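
    A minimal stand-in for the creep side of such a coupling is a power-law (Norton) creep update integrated explicitly in time, as sketched below. The constants are placeholders rather than calibrated F82H values, and the rate-theory damage physics of the actual code is not represented.

```python
import numpy as np

def norton_creep_strain(stress_mpa, hours, A=1.0e-20, n=5.0, dt=1.0):
    """Explicit time integration of a Norton power-law creep rate,
    d(eps)/dt = A * sigma^n, under a constant stress history. The constants
    are placeholders, not calibrated material data."""
    eps = 0.0
    for _ in np.arange(0.0, hours, dt):
        eps += A * stress_mpa ** n * dt
    return eps

for sigma in (100.0, 150.0, 200.0):
    print(f"creep strain after 10,000 h at {sigma:.0f} MPa: "
          f"{norton_creep_strain(sigma, 10_000.0):.3e}")
```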

  19. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty across the large-scale area affected by a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; that the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; and that, depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes while emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  20. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty across the large-scale area affected by a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; that the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; and that, depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes while emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.
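
    A toy version of the optimization component is sketched below: a small genetic algorithm assigns rescue-demand locations to depots so that the worst-case travel time (plus a load-balance penalty) is minimized. Distances, population sizes, and the fitness function are invented for illustration; the paper's model additionally includes queuing delays and vehicle routing.

```python
import random

# Toy genetic algorithm assigning rescue-demand locations to depots so that the
# worst-case response time is minimized. Distances and penalties are made up.
random.seed(42)
N_DEMANDS, N_DEPOTS = 12, 3
travel = [[random.uniform(0.5, 5.0) for _ in range(N_DEPOTS)] for _ in range(N_DEMANDS)]

def fitness(assign):
    """Lower is better: longest response time plus a penalty for imbalance."""
    worst = max(travel[i][d] for i, d in enumerate(assign))
    load = [assign.count(d) for d in range(N_DEPOTS)]
    return worst + 0.1 * (max(load) - min(load))

def mutate(assign, rate=0.2):
    return [random.randrange(N_DEPOTS) if random.random() < rate else d for d in assign]

def crossover(a, b):
    cut = random.randrange(1, N_DEMANDS)
    return a[:cut] + b[cut:]

pop = [[random.randrange(N_DEPOTS) for _ in range(N_DEMANDS)] for _ in range(40)]
for _ in range(200):
    pop.sort(key=fitness)
    parents = pop[:10]                       # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(30)]
    pop = parents + children
print("best assignment:", pop[0], "objective:", round(fitness(pop[0]), 3))
```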

  1. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas. PMID:27354192

  2. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas.

  3. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  4. Modeling and extraction of interconnect parameters in very-large-scale integrated circuits

    NASA Astrophysics Data System (ADS)

    Yuan, C. P.

    1983-08-01

    The increased complexity of very large scale integrated (VLSI) circuits has greatly impacted the field of computer-aided design (CAD). One of the problems brought about is the interconnection problem. In this research, the goal is two-fold. First, a more accurate numerical method to evaluate the interconnect capacitance, including the coupling capacitance between interconnects and the fringing field capacitance, was investigated, and the integral method was employed. Two FORTRAN programs, 'CAP2D' and 'CAP3D', based on this method were developed. Second, a PASCAL extraction program emphasizing the extraction of interconnect parameters was developed. It employs the cylindrical approximation formula for the self-capacitance of a single interconnect and other simple formulas for the coupling capacitances derived by a least-squares method. The extractor assumes only Manhattan geometry and NMOS technology. Four-dimensional binary search trees are used as the basic data structure.
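
    For illustration, the snippet below evaluates the textbook wire-over-ground-plane formula C' = 2*pi*eps / arccosh(h/r) for the per-unit-length self-capacitance of a cylindrical conductor, which is the kind of cylindrical approximation the extractor relies on. The dimensions and dielectric constant are hypothetical, and the exact formula used in the original programs may differ.

```python
import numpy as np

EPS0 = 8.854e-12  # F/m

def wire_over_ground_capacitance(radius, height, eps_r=3.9):
    """Per-unit-length self-capacitance of a cylindrical conductor above a
    ground plane, C' = 2*pi*eps / arccosh(h/r). A textbook stand-in for the
    cylindrical approximation; the thesis's exact expression may differ."""
    return 2.0 * np.pi * EPS0 * eps_r / np.arccosh(height / radius)

# Hypothetical interconnect: 1 um equivalent radius, 2 um above the substrate
c_per_m = wire_over_ground_capacitance(radius=1e-6, height=2e-6)
print(f"{c_per_m * 1e12:.1f} pF per metre of interconnect")
```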

  5. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
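
    The decomposition idea can be sketched as follows: rather than solving one large grid-based location problem, the grid is split into blocks, each block is solved exactly by enumeration, and the block solutions are merged. The demand weights, block size, and rectilinear-distance objective below are invented for illustration and are much simpler than the ILP formulation in the record above.

```python
from itertools import combinations

def solve_block(demand, k):
    """Pick k facility cells in one block minimizing total rectilinear distance."""
    cells = list(demand.keys())
    best, best_cost = None, float("inf")
    for sites in combinations(cells, k):
        cost = sum(w * min(abs(c[0] - s[0]) + abs(c[1] - s[1]) for s in sites)
                   for c, w in demand.items())
        if cost < best_cost:
            best, best_cost = sites, cost
    return best, best_cost

def decompose(grid_demand, block=4, k_per_block=1):
    """Split the grid into vertical blocks and solve each block exactly."""
    solution, total = [], 0.0
    xs = sorted({c[0] for c in grid_demand})
    for x0 in range(min(xs), max(xs) + 1, block):
        sub = {c: w for c, w in grid_demand.items() if x0 <= c[0] < x0 + block}
        if sub:
            sites, cost = solve_block(sub, k_per_block)
            solution += list(sites)
            total += cost
    return solution, total

# Hypothetical demand weights on an 8 x 4 grid
demand = {(x, y): 1 + (x * y) % 3 for x in range(8) for y in range(4)}
print(decompose(demand, block=4, k_per_block=1))
```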

  6. A realistic renormalizable supersymmetric E₆ model

    SciTech Connect

    Bajc, Borut; Susič, Vasja

    2014-01-01

    A complete realistic model based on the supersymmetric version of E₆ is presented. It consists of three copies of matter 27, and a Higgs sector made of 2×(27+27̄)+351′+351̄′ representations. An analytic solution to the equations of motion is found which spontaneously breaks the gauge group into the Standard Model. The light fermion mass matrices are written down explicitly as non-linear functions of three Yukawa matrices. This contribution is based on Ref. [1].

  7. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem.
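
    A minimal example of how a limited-memory quasi-Newton minimization is set up in practice is shown below, using SciPy's L-BFGS-B on a rugged toy function standing in for a molecular potential energy surface. The test function, dimensionality, and solver options are assumptions; neither the AMBER force field nor the hybrid L-BFGS/Hessian-free interlacing is reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Not the AMBER force field: a rugged toy "energy" (Rosenbrock plus a cosine
# perturbation) standing in for a molecular potential energy surface.
def energy(x):
    rosen = np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)
    return rosen + 0.1 * np.sum(np.cos(5.0 * x))

x0 = np.full(50, 3.0)                     # 50 "coordinates", deliberately far off
result = minimize(energy, x0, method="L-BFGS-B",
                  options={"maxiter": 5000, "maxcor": 20})
print("converged:", result.success, "final energy:", round(result.fun, 4),
      "function evaluations:", result.nfev)
```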

  8. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    PubMed

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers. PMID:24416069

  9. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    PubMed

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  10. Similarity-based modeling in large-scale prediction of drug-drug interactions

    PubMed Central

    Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P

    2015-01-01

    Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients’ quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5–7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented. PMID:25122524

  11. Similarity-based modeling in large-scale prediction of drug-drug interactions.

    PubMed

    Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P

    2014-09-01

    Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients' quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5-7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented. PMID:25122524
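
    A bare-bones version of similarity-based DDI scoring is sketched below: each candidate pair is scored by its best fingerprint-similarity match to an established interaction, using the Tanimoto coefficient. The random fingerprints, the max-min scoring rule, and the drug names are assumptions; the protocol above integrates several similarity sources and a curated reference standard.

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprint vectors."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def ddi_candidate_score(candidate_pair, known_ddis, fingerprints):
    """Score a candidate drug pair as its best similarity-weighted match to an
    established interaction. Fingerprints stand in for the structural, target
    and side-effect profiles used in the published protocol."""
    d1, d2 = candidate_pair
    return max(min(tanimoto(fingerprints[d1], fingerprints[k1]),
                   tanimoto(fingerprints[d2], fingerprints[k2]))
               for k1, k2 in known_ddis)

# Hypothetical 32-bit fingerprints for five drugs and two known interactions
rng = np.random.default_rng(7)
fps = {name: rng.integers(0, 2, 32) for name in "ABCDE"}
known = [("A", "B"), ("C", "D")]
print("score for pair (E, B):", round(ddi_candidate_score(("E", "B"), known, fps), 3))
```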

  12. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGES

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; Visser, Sid; Stevens, Rick L.; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  13. Large-scale 3D modeling of projectile impact damage in brittle plates

    NASA Astrophysics Data System (ADS)

    Seagraves, A.; Radovitzky, R.

    2015-10-01

    The damage and failure of brittle plates subjected to projectile impact is investigated through large-scale three-dimensional simulation using the DG/CZM approach introduced by Radovitzky et al. [Comput. Methods Appl. Mech. Eng. 2011; 200(1-4), 326-344]. Two standard experimental setups are considered: first, we simulate edge-on impact experiments on Al2O3 tiles by Strassburger and Senf [Technical Report ARL-CR-214, Army Research Laboratory, 1995]. Qualitative and quantitative validation of the simulation results is pursued by direct comparison of simulations with experiments at different loading rates and good agreement is obtained. In the second example considered, we investigate the fracture patterns in normal impact of spheres on thin, unconfined ceramic plates over a wide range of loading rates. For both the edge-on and normal impact configurations, the full field description provided by the simulations is used to interpret the mechanisms underlying the crack propagation patterns and their strong dependence on loading rate.

  14. Modeling and Analysis of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.

    2015-01-01

    An accidental fire inside a spacecraft is an unlikely, but very real emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport are unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard spacecraft. This one-dimensional model simplifies the heat and mass transfer as well as toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).

  15. The CAM/IMPACT/CoCiP Coupled Climate Model: Radiative forcing by aircraft in large-scale clouds

    NASA Astrophysics Data System (ADS)

    Penner, J. E.; Schumann, U.; Chen, Y.; Zhou, C.; Graf, K.

    2013-12-01

    Radiative forcing by aircraft soot in large-scale clouds has been estimated to be both positive and negative, while forcing by contrails and contrail cirrus (i.e. spreading contrails) is positive. Here we use an improved model to estimate the forcing in large-scale clouds and evaluate the effects of coupling the hydrological cycle within CAM with the CoCiP contrail model. The large-scale cloud effects assume that the fraction of soot particles that have been processed through contrails are good heterogeneous ice nuclei (IN), in agreement with laboratory data. We explore the effect of sulfate deposition on soot in decreasing the ability of contrail-processed soot to act as IN. The calculated total all-sky radiative climate forcing, and its range, with and without coupling of CoCiP to the hydrological cycle within CAM is reported. We compare results with observations and discuss what is needed to narrow the range of forcing.

  16. A versatile platform for multilevel modeling of physiological systems: template/instance framework for large-scale modeling and simulation.

    PubMed

    Asai, Yoshiyuki; Abe, Takeshi; Oka, Hideki; Okita, Masao; Okuyama, Tomohiro; Hagihara, Ken-Ichi; Ghosh, Samik; Matsuoka, Yukiko; Kurachi, Yoshihisa; Kitano, Hiroaki

    2013-01-01

    Building multilevel models of physiological systems is a significant and effective method for integrating a huge amount of bio-physiological data and knowledge obtained by earlier experiments and simulations. Since such models tend to be large in size and complicated in structure, appropriate software frameworks for supporting modeling activities are required. A software platform, PhysioDesigner, has been developed, which supports the process of creating multilevel models. Models developed on PhysioDesigner are established in an XML format called PHML. Every physiological entity in a model is represented as a module, and hence a model constitutes an aggregation of modules. When the number of entities of which the model is comprised is large, it is difficult to manage the entities manually, and some semiautomatic assistive functions are necessary. In this article, which focuses particularly on recently developed features of the platform for building large-scale models utilizing a template/instance framework and morphological information, the PhysioDesigner platform is introduced.

  17. Lattice models for large-scale simulations of coherent wave scattering.

    PubMed

    Wang, Shumin; Teixeira, Fernando L

    2004-01-01

    Lattice approximations for partial differential equations describing physical phenomena are commonly used for the numerical simulation of many problems otherwise intractable by pure analytical approaches. The discretization inevitably leads to many of the original symmetries to be broken or modified. In the case of Maxwell's equations for example, invariance and isotropy of the speed of light in vacuum is invariably lost because of the so-called grid dispersion. Since it is a cumulative effect, grid dispersion is particularly harmful for the accuracy of results of large-scale simulations of scattering problems. Grid dispersion is usually combated by either increasing the lattice resolution or by employing higher-order schemes with larger stencils for the space and time derivatives. Both alternatives lead to increased computational cost to simulate a problem of a given physical size. Here, we introduce a general approach to develop lattice approximations with reduced grid dispersion error for a given stencil (and hence at no additional computational cost). The present approach is based on first obtaining stencil coefficients in the Fourier domain that minimize the maximum grid dispersion error for wave propagation at all directions (minimax sense). The resulting coefficients are then expanded into a Taylor series in terms of the frequency variable and incorporated into time-domain (update) equations after an inverse Fourier transformation. Maximally flat (Butterworth) or Chebyshev filters are subsequently used to minimize the wave speed variations for a given frequency range of interest. The use of such filters also allows for the adjustment of the grid dispersion characteristics so as to minimize not only the local dispersion error but also the accumulated phase error in a frequency range of interest.
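
    The grid-dispersion effect being minimized can be quantified directly from the standard second-order stencil's dispersion relation, as in the sketch below, which reports the relative phase-velocity error of a 1D Yee/FDTD scheme as a function of grid resolution. The Courant number and resolutions are arbitrary choices; the filter-design procedure of the paper is not implemented here.

```python
import numpy as np

# Numerical dispersion of the standard second-order 1D Yee/FDTD stencil:
# sin(w*dt/2)/(c*dt) = sin(k*dx/2)/dx. The relative phase-velocity error grows
# as the grid gets coarser, which is the accumulated error that optimized
# stencils are meant to suppress.
c = 299_792_458.0

def phase_velocity_error(points_per_wavelength, courant=0.5):
    k_dx = 2.0 * np.pi / points_per_wavelength          # k * dx
    dt_term = courant * np.sin(k_dx / 2.0)              # = sin(w*dt/2)
    w_dt = 2.0 * np.arcsin(dt_term)                     # w * dt
    v_numeric = c * (w_dt / courant) / k_dx             # numerical w/k
    return v_numeric / c - 1.0

for n in (10, 20, 40, 80):
    print(f"{n:3d} cells/wavelength: relative error {phase_velocity_error(n):+.2e}")
```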

  18. Two Realistic Beagle Models for Dose Assessment.

    PubMed

    Stabin, Michael G; Kost, Susan D; Segars, William P; Guilmette, Raymond A

    2015-09-01

    Previously, the authors developed a series of eight realistic digital mouse and rat whole body phantoms based on NURBS technology to facilitate internal and external dose calculations in various species of rodents. In this paper, two body phantoms of adult beagles are described based on voxel images converted to NURBS models. Specific absorbed fractions for activity in 24 organs are presented in these models. CT images were acquired of an adult male and female beagle. The images were segmented, and the organs and structures were modeled using NURBS surfaces and polygon meshes. Each model was voxelized at a resolution of 0.75 × 0.75 × 2 mm. The voxel versions were implemented in GEANT4 radiation transport codes to calculate specific absorbed fractions (SAFs) using internal photon and electron sources. Photon and electron SAFs were then calculated for relevant organs in both models. The SAFs for photons and electrons were compatible with results observed by others. Absorbed fractions for electrons for organ self-irradiation were significantly less than 1.0 at energies above 0.5 MeV, as expected for many of these small-sized organs, and measurable cross irradiation was observed for many organ pairs for high-energy electrons (as would be emitted by nuclides like 32P, 90Y, or 188Re). The SAFs were used with standardized decay data to develop dose factors (DFs) for radiation dose calculations using the RADAR Method. These two new realistic models of male and female beagle dogs will be useful in radiation dosimetry calculations for external or internal simulated sources. PMID:26222214

  19. Can key vegetation parameters be retrieved at the large-scale using LAI satellite products and a generic modelling approach ?

    NASA Astrophysics Data System (ADS)

    Dewaele, Helene; Calvet, Jean-Christophe; Carrer, Dominique; Laanaia, Nabil

    2016-04-01

    In the context of climate change, the need to assess and predict the impact of droughts on vegetation and water resources increases. The generic approaches permitting the modelling of continental surfaces at the large scale have progressed in recent decades towards land surface models able to couple the cycles of water, energy and carbon. A major source of uncertainty in these generic models is the maximum available water content of the soil (MaxAWC) usable by plants, which is constrained by the rooting depth parameter and is unobservable at the large scale. In this study, vegetation products derived from the SPOT/VEGETATION satellite data available since 1999 are used to optimize the model rooting depth over rainfed croplands and permanent grasslands at 1 km x 1 km resolution. The inter-annual variability of the Leaf Area Index (LAI) is simulated over France using the Interactions between Soil, Biosphere and Atmosphere, CO2-reactive (ISBA-A-gs) generic land surface model and a two-layer force-restore (FR-2L) soil profile scheme. The leaf nitrogen concentration directly impacts the modelled value of the maximum annual LAI. In a first step, this parameter is estimated for the last 15 years by using an iterative procedure that matches the maximum values of LAI modelled by ISBA-A-gs to the highest satellite-derived LAI values. The Root Mean Square Error (RMSE) is used as a cost function to be minimized. In a second step, the model rooting depth is optimized in order to reproduce the inter-annual variability resulting from the drought impact on the vegetation. The evaluation of the retrieved soil rooting depth is achieved using the French agricultural statistics of Agreste. Retrieved leaf nitrogen concentrations are compared with values from previous studies. The preliminary results show a good potential of this approach to estimate these two vegetation parameters (leaf nitrogen concentration, MaxAWC) at the large scale over grassland areas. Besides, a marked impact of the
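
    The calibration step can be illustrated with a toy LAI model and a brute-force search minimizing RMSE against satellite-like observations, as below. The model form, parameter ranges, and the joint grid search are assumptions standing in for ISBA-A-gs and the paper's two-step (leaf nitrogen, then rooting depth) procedure.

```python
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

def toy_lai_model(max_awc, leaf_n, dryness):
    """Stand-in for a land surface model: annual maximum LAI limited by leaf
    nitrogen and reduced in dry years when the root-zone store (MaxAWC) is small."""
    potential = 6.0 * leaf_n / (leaf_n + 1.5)
    stress = np.clip(1.0 - dryness / (max_awc + 1e-6), 0.2, 1.0)
    return potential * stress

# Hypothetical satellite-derived annual maximum LAI and a dryness index per year
dryness = np.array([20, 60, 35, 80, 50, 25, 70], dtype=float)
observed = toy_lai_model(110.0, 2.0, dryness) + np.random.default_rng(3).normal(0, 0.1, 7)

# Joint grid search standing in for the paper's two-step calibration
best = min(((rmse(observed, toy_lai_model(awc, n, dryness)), awc, n)
            for awc in np.arange(50, 201, 10)
            for n in np.arange(1.0, 3.01, 0.1)))
print(f"best RMSE {best[0]:.3f} at MaxAWC={best[1]:.0f} mm, leaf N={best[2]:.1f}%")
```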

  20. The impact of the assimilation of SWOT satellite data into a large scale hydrological model parametrization over the Niger basin.

    NASA Astrophysics Data System (ADS)

    Pedinotti, Vanessa; Boone, Aaron; Mognard, Nelly; Ricci, Sophie; Biancamaria, Sylvain; Lion, Christine

    2013-04-01

    Satellite measurements are used for hydrological investigations, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas above 250 x 250 m over continental surfaces between 78°S and 78°N. The purpose of the study presented here is to use SWOT virtual data for the optimization of the parameters of a large scale river routing model, typically employed for global scale applications. The method consists in applying a data assimilation approach, the Best Linear Unbiased Estimator (BLUE) algorithm, to correct uncertain input parameters of the ISBA-TRIP Continental Hydrologic System. In Land Surface Models (LSMs), parameters used to describe hydrological basin characteristics are generally derived from geomorphologic relationships, which might not always be realistic. The study focuses on the Niger basin, a trans-boundary river, which is the main source of fresh water for all the riparian countries and where geopolitical issues restrict the exchange of hydrological data. As a preparation for this study, the model was first evaluated against in-situ and satellite derived datasets within the framework of the AMMA project. Since the SWOT observations are not available yet and also to assess the skills of the assimilation method, the study is carried out in the framework of an Observing System Simulation Experiment (OSSE). Here, we assume that modeling errors are only due to uncertainties in Manning coefficient field. The true Manning coefficient is then supposed to be known and is used to generate synthetic SWOT observations over the period 2002-2003. The satellite measurement errors are estimated using a simple instrument simulator. The impact of the assimilation system on the Niger basin hydrological cycle is then quantified
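
    The BLUE analysis step itself is compact enough to write out, as in the sketch below: the analysis equals the background plus a gain times the innovation, with the gain built from the background and observation error covariances. The Manning-coefficient control vector, the linearized observation operator, and all covariance values are illustrative assumptions, not the ISBA-TRIP configuration.

```python
import numpy as np

def blue_update(x_b, B, y, H, R):
    """Best Linear Unbiased Estimator analysis step:
    x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^-1."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return x_b + K @ (y - H @ x_b), K

# Hypothetical control vector: Manning coefficients of three river reaches
x_b = np.array([0.030, 0.045, 0.060])          # background (first guess)
B = np.diag([0.01**2, 0.01**2, 0.01**2])       # background error covariance

# Assumed linearized observation operator mapping Manning n to two SWOT-like
# water surface elevation anomalies (values are illustrative only)
H = np.array([[40.0, 10.0, 0.0],
              [0.0, 15.0, 35.0]])
R = np.diag([0.10**2, 0.10**2])                # 10 cm observation error
y = np.array([1.55, 2.10])                     # synthetic observations

x_a, K = blue_update(x_b, B, y, H, R)
print("analysed Manning coefficients:", np.round(x_a, 4))
```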

  1. Association of parameter, software, and hardware variation with large-scale behavior across 57,000 climate models.

    PubMed

    Knight, Christopher G; Knight, Sylvia H E; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J; Kettleborough, Jamie A; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A; Allen, Myles R

    2007-07-24

    In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally.

  2. Improved Large-Scale Inundation Modelling by 1D-2D Coupling and Consideration of Hydrologic and Hydrodynamic Processes - a Case Study in the Amazon

    NASA Astrophysics Data System (ADS)

    Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.

    2015-12-01

    Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the World's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet, only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation due to the lack of either detailed channel and floodplain geometry or hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to find and understand possible sources of errors and improvements, and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model, that has its own simpler 1D routing scheme (DynRout) which has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) using a FM 2D flexible mesh forced with PCR output and (2) as in (1) but discriminating between 1D channels and 2D floodplains, and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory. The present research constitutes a first step into a globally applicable approach to fully couple

  3. Multisource remote sensing supported large scale fully distributed hydrological modeling of the Tarim River Basin in Central Asia

    NASA Astrophysics Data System (ADS)

    Feng, Xianwei; Chen, Xi; Willems, Patrick; Liu, Tie; Li, Lanhai; Bao, Anming; Huang, Yue

    2009-06-01

    The potential application of remote sensing in hydrology is one of the hot topics in distributed hydrological model research. Remote sensing technology can be applied to obtain the spatial distribution and dynamics of hydrological phenomena that are not generally obtainable from traditional data. In this paper, a fully distributed large-scale hydrological modeling application is considered in the semi-arid area of the Tarim River basin in central Asia (an area of more than 1.20×10^5 km2). The model has been built with the hydrological modeling software MIKE-SHE, making combined use of ground station data and multi-source, multi-temporal remote sensing data. Spatially and temporally detailed model input and output variables have been obtained by remote sensing data processing and geographical spatial analysis for many useful hydrologic variables, including digital elevations, land uses, soil types, precipitation intensities, evapotranspiration depths, snow cover heights and areas, and leaf area index information. Through the case study application, insights have been obtained into the advantages of using remote sensing technology and products to support the hydrological process modeling of large-scale river basins in developing countries where traditional station-based data are very limited. The technology developed and the experience gained in this study can be transferred to applications in other analogous regions.

  4. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    NASA Astrophysics Data System (ADS)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by if and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are
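
    One of the simpler diagnostics mentioned above, composite analysis, can be written in a few lines: average the circulation-field anomalies over the days on which temperature exceeds a high quantile. The synthetic temperature and "geopotential" fields below are fabricated solely to make the composite visible; they are not observational data.

```python
import numpy as np

def extreme_day_composite(temps, field, quantile=0.95):
    """Composite a circulation field (e.g. 500 hPa height anomalies) over the
    days when local temperature exceeds its chosen quantile. `temps` has shape
    (n_days,), `field` has shape (n_days, ny, nx)."""
    threshold = np.quantile(temps, quantile)
    hot = temps >= threshold
    anomalies = field - field.mean(axis=0)
    return anomalies[hot].mean(axis=0), int(hot.sum())

# Synthetic example: 1000 days of temperature and a 20 x 30 "geopotential" field
rng = np.random.default_rng(5)
ridge = np.exp(-((np.arange(30) - 15) ** 2) / 50.0)[None, :] * np.ones((20, 1))
temps = rng.normal(20, 5, 1000)
field = rng.normal(0, 30, (1000, 20, 30)) + 2.0 * temps[:, None, None] * ridge

composite, n_days = extreme_day_composite(temps, field)
print(f"composite over {n_days} hot days; ridge-centre anomaly "
      f"{composite[10, 15]:.1f} vs domain edge {composite[10, 0]:.1f}")
```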

  5. The HyperHydro (H^2) experiment for comparing different large-scale models at various resolutions

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, E.; Bosmans, J.; Chaney, N.; Clark, M. P.; Condon, L. E.; David, C. H.; De Roo, A. P. J.; Doll, P. M.; Drost, N.; Eisner, S.; Famiglietti, J. S.; Floerke, M.; Gilbert, J. M.; Gochis, D. J.; Hut, R.; Keune, J.; Kollet, S. J.; Maxwell, R. M.; Pan, M.; Rakovec, O.; Reager, J. T., II; Samaniego, L. E.; Mueller Schmied, H.; Trautmann, T.; Van Beek, L. P.; Van De Giesen, N.; Wood, E. F.; Bierkens, M. F.; Kumar, R.

    2015-12-01

    HyperHydro (http://www.hyperhydro.org/) is an open network of scientists with the objective of simulating large-scale terrestrial hydrology and water resources at hyper-resolution (Bierkens et al., 2014, DOI: 10.1002/hyp.10391). Within the HyperHydro network, a modeling workshop was held at Utrecht University, the Netherlands, on 9-12 June 2015. The goal of the workshop was to start the HyperHydro (H^2) experiment for comparing different large-scale hydrological models, at different spatial resolutions, from 50 km to 1 km. Model simulation results (e.g. discharge, soil moisture, evaporation, snow, groundwater depth, etc.) are evaluated against available observation data and compared across various models and resolutions. In AGU 2015, we would like to present the results of this inter-comparison experiment. During the workshop in Utrecht, the models compared were CLM, LISFLOOD, mHM, ParFlow-CLM, PCR-GLOBWB, TerrSysMP, VIC and WaterGAP. We invite participation from the hydrology community on this experiment. As test-beds, we focus on two river basins: San Joaquin (~82000 km2) and Rhine (~185000 km2). In the near future, we will escalate this experiment to the CONUS and CORDEX-EU domains.

  6. The HyperHydro (H2) experiment for comparing different large-scale models at various resolutions

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin

    2016-04-01

    HyperHydro (http://www.hyperhydro.org/) is an open network of scientists with the objective of simulating large-scale terrestrial hydrology and water resources at hyper-resolution (Wood et al., 2011, DOI: 10.1029/2010WR010090; Bierkens et al., 2014, DOI: 10.1002/hyp.10391). Within the HyperHydro network, a modeling workshop was held at Utrecht University, the Netherlands, on 9-12 June 2015. The goal of the workshop was to start the HyperHydro (H^2) experiment for comparing different large-scale hydrological models, at different spatial resolutions, from 50 km to 1 km. Model simulation results (e.g. discharge, soil moisture, evaporation, snow, groundwater depth, etc.) are evaluated against available observation data and compared across various models and resolutions. At EGU 2016, we would like to present the latest results of this inter-comparison experiment. We also invite participation from the hydrology community on this experiment. Up to now, the models compared are CLM, LISFLOOD, mHM, ParFlow-CLM, PCR-GLOBWB, TerrSysMP, VIC, WaterGAP, and wflow. As initial test-beds, we mainly focus on two river basins: San Joaquin/California (82000 km^2) and Rhine (185000 km^2). Moreover, comparison over a larger region, such as the CONUS (Contiguous-US) domain, is also explored and presented.

  7. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    SciTech Connect

    Mendon, Vrushali V.; Taylor, Zachary T.

    2014-09-10

    Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is to allow for flexibility to address variability in house features such as geometry, configuration, and HVAC systems. Researchers solved this problem in a novel way by creating a simulation structure capable of generating fully functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types, and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.

  8. A study of conceptual model uncertainty in large-scale CO2 storage simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, S.; Zhang, X.

    2011-12-01

    Multiscale permeability upscaling is combined with a sensitivity study of model boundary condition to identify an optimal heterogeneity resolution in developing a reservoir model to represent a deep saline aquifer in CO2 storage simulation. A three-dimensional, fully heterogeneous reservoir model is built for a deep saline aquifer in western Wyoming, where each grid cell is identified by multiple material tags. On the basis of these tags, permeability upscaling is conducted to create three increasingly simpler site models, a facies model, a layered model, and a formation model. Accuracy of upscaling is evaluated first, before CO2 simulation is conducted in all models. Since at the injection site, large uncertainty exists in the nature of the reservoir compartment, end-member boundary conditions are evaluated, whereby brine production is introduced to control formation fluid pressure. The effect of conceptual model uncertainty on model prediction is then assessed for each boundary condition. Results suggest that for the spatial and temporal scales considered, without brine production, optimal complexity of the upscaled model depends on the prediction metric of interest; the facies model is the most accurate for capturing plume shape, fluid pressure, and CO2 mass profiles, while the formation model is adequate for pressure prediction. The layered model is not accurate for predicting most of the performance metrics. Moreover, boundary condition impacts fluid pressure and the amount of CO2 that can be injected. For the boundary conditions tested, brine production can modulate fluid pressure, affect the direction of mobile gas flow, and influence the accuracy of the upscaled models. In particular, the importance of detailed geologic resolution is weakened when viscous force is strengthened in relation to gravity force. When brine production is active, variability of the predictions by the upscaled models becomes smaller and the predictions are more accurate, suggesting

  9. A study of conceptual model uncertainty in large-scale CO2 storage simulation

    NASA Astrophysics Data System (ADS)

    Li, Shuiquan; Zhang, Ye; Zhang, Xu

    2011-05-01

    In this study, multiscale permeability upscaling is combined with a sensitivity study of model boundary condition to identify an optimal heterogeneity resolution in developing a reservoir model to represent a deep saline aquifer in CO2 storage simulation. A three-dimensional, fully heterogeneous reservoir model is built for a deep saline aquifer in western Wyoming, where each grid cell is identified by multiple material tags. On the basis of these tags, permeability upscaling is conducted to create three increasingly simpler site models, a facies model, a layered model, and a formation model. Accuracy of upscaling is evaluated first, before CO2 simulation is conducted in all models. Since at the injection site, uncertainty exists in the nature of the reservoir compartment, end-member boundary conditions are evaluated, whereby brine production is introduced to control formation fluid pressure. The effect of conceptual model uncertainty on model prediction is then assessed for each boundary condition. Results suggest that for the spatial and temporal scales considered, without brine production, optimal complexity of the upscaled model depends on the prediction metric of interest; the facies model is the most accurate for capturing plume shape, fluid pressure, and CO2 mass profiles, while the formation model is adequate for pressure prediction. The layered model is not accurate for predicting most of the performance metrics. Moreover, boundary condition impacts fluid pressure and the amount of CO2 that can be injected. For the boundary conditions tested, brine production can modulate fluid pressure, affect the direction of mobile gas flow, and influence the accuracy of the upscaled models. In particular, the importance of detailed geologic resolution is weakened when viscous force is strengthened in relation to gravity force. When brine production is active, variability of the predictions by the upscaled models becomes smaller and the predictions are more accurate
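
    The permeability upscaling referred to above rests on classical averaging rules, sketched below: the arithmetic mean bounds flow parallel to layering, the harmonic mean bounds flow across it, and the geometric mean is a common intermediate heuristic. The fine-scale permeabilities are made-up values; the study's multiscale upscaling and flow-based accuracy checks go well beyond these averages.

```python
import numpy as np

def upscale_permeability(k_cells):
    """Classical bounds/estimates for the effective permeability of a block of
    fine-grid cells: arithmetic mean (flow parallel to layers), harmonic mean
    (flow across layers) and geometric mean (a common in-between heuristic)."""
    k = np.asarray(k_cells, dtype=float)
    return {"arithmetic": k.mean(),
            "harmonic": len(k) / np.sum(1.0 / k),
            "geometric": float(np.exp(np.mean(np.log(k))))}

# Hypothetical fine-scale permeabilities (mD) within one coarse grid block
k_fine = [250.0, 40.0, 5.0, 120.0, 0.8, 60.0]
for name, value in upscale_permeability(k_fine).items():
    print(f"{name:10s} mean: {value:8.2f} mD")
```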

  10. Social and Economic Effects of Large-Scale Energy Development in Rural Areas: An Assessment Model.

    ERIC Educational Resources Information Center

    Murdock, Steve H.; Leistritz, F. Larry

    General development, structure, and uses of a computerized impact projection model, the North Dakota Regional Environmental Assessment Program (REAP) Economic-Demographic Assessment Model, were studied not only to describe a model developed to meet informational needs of local decision makers (especially in a rural area undergoing development),…

  11. Investigation of the Longitudinal Characteristics of a Large-Scale Jet Transport Model Equipped with Controllable Thrust Reversers

    NASA Technical Reports Server (NTRS)

    Hickey, David H.; Tolhurst, William H., Jr.; Aoyagi, Kiyoshi

    1961-01-01

    An investigation was conducted to determine the effect of thrust control by means of controllable thrust reversers on the longitudinal characteristics of a large-scale airplane model with a 35° sweptback wing of aspect ratio 7 and four pylon-mounted jet engines equipped with target-type thrust reversers designed to provide thrust control ranging from full forward thrust to full reverse thrust. The thrust control in landing-approach configurations formed the major portion of the study. Results were obtained with both leading- and trailing-edge high-lift devices.

  12. Combining local- and large-scale models to predict the distributions of invasive plant species.

    PubMed

    Jones, Chad C; Acker, Steven A; Halpern, Charles B

    2010-03-01

    Habitat distribution models are increasingly used to predict the potential distributions of invasive species and to inform monitoring. However, these models assume that species are in equilibrium with the environment, which is clearly not true for most invasive species. Although this assumption is frequently acknowledged, solutions have not been adequately addressed. There are several potential methods for improving habitat distribution models. Models that require only presence data may be more effective for invasive species, but this assumption has rarely been tested. In addition, combining modeling types to form "ensemble" models may improve the accuracy of predictions. However, even with these improvements, models developed for recently invaded areas are greatly influenced by the current distributions of species and thus reflect near- rather than long-term potential for invasion. Larger scale models from species' native and invaded ranges may better reflect long-term invasion potential, but they lack finer scale resolution. We compared logistic regression (which uses presence/absence data) and two presence-only methods for modeling the potential distributions of three invasive plant species on the Olympic Peninsula in Washington, USA. We then combined the three methods to create ensemble models. We also developed climate envelope models for the same species based on larger scale distributions and combined models from multiple scales to create an index of near- and long-term invasion risk to inform monitoring in Olympic National Park (ONP). Neither presence-only nor ensemble models were more accurate than logistic regression for any of the species. Larger scale models predicted much greater areas at risk of invasion. Our index of near- and long-term invasion risk indicates that < 4% of ONP is at high near-term risk of invasion while 67-99% of the Park is at moderate or high long-term risk of invasion. We demonstrate how modeling results can be used to guide the
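
    The modelling comparison described in this record can be illustrated with a minimal, hypothetical sketch: a presence/absence logistic regression and a crude presence-only suitability score are combined into an unweighted ensemble. The data, predictors, and combination rule below are invented stand-ins, not the methods or results of the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-ins for plot-level environmental predictors and occurrences
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 3))            # e.g. elevation, canopy cover, distance to road
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    # Presence/absence model (logistic regression)
    pa_model = LogisticRegression().fit(X, y)
    p_logistic = pa_model.predict_proba(X)[:, 1]

    # Crude presence-only style score: similarity to the mean presence environment
    mu = X[y == 1].mean(axis=0)
    d = np.linalg.norm(X - mu, axis=1)
    p_presence_only = 1.0 - (d - d.min()) / (d.max() - d.min())

    # Unweighted ensemble: average the rescaled suitability surfaces
    p_ensemble = (p_logistic + p_presence_only) / 2.0
    print("mean predicted suitability:", round(p_ensemble.mean(), 3))
    ```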

  13. Modeling Cultural/ecological Impacts of Large-scale Mining and Industrial Development in the Yukon-Kuskokwim Basin

    NASA Astrophysics Data System (ADS)

    Bunn, J. T.; Sparck, A.

    2004-12-01

    We are developing a methodology for predicting the cultural impact of large-scale mineral resource development in the Yukon-Kuskokwim (Y-K) basin. The Yup'ik/Cup'ik/Dene people of the Y-K basin currently practice a mixed-market subsistence economy, in which native subsistence traditions and social structures are largely intact. Large-scale mining and industrial-infrastructure developments are being planned that will constitute a significant expansion of the market economy, and will also significantly affect the physical environment that is central to the subsistence way of life. To explore the impact that these changes are likely to have on native culture, we use a systems modeling approach, considering "culture" to be a system that encompasses the physical, biological and verbal realms. We draw upon Alaska Department of Fish and Game technical reports, anthropological studies, Yup'ik cultural visioning exercises, and personal experience to identify the components of our cultural model. We use structural equation modeling to determine causal relationships between system components. The resulting model is used to predict changes that are likely to occur as a result of planned developments.

  14. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
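
    The "universal dependence on threshold density" referred to above has a well-known analytic form for Gaussian (random phase) fields. The sketch below evaluates that curve with the amplitude left as a free normalization, since the true amplitude depends on the power spectrum and smoothing length.

    ```python
    import numpy as np

    def gaussian_genus_curve(nu, amplitude=1.0):
        """Genus per unit volume of iso-density contours for a Gaussian random
        field, as a function of threshold nu = (rho - rho_mean) / sigma.
        The amplitude depends on the power spectrum and smoothing length and
        is left here as a free normalization."""
        return amplitude * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

    nu = np.linspace(-3, 3, 13)
    for v, g in zip(nu, gaussian_genus_curve(nu)):
        # Positive genus (sponge-like topology) near the median density,
        # negative genus (isolated clusters or voids) at extreme thresholds.
        print(f"nu = {v:+.1f}  genus ~ {g:+.3f}")
    ```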

  15. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses are composed of a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model capable of capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating complex nonlinear dynamics that were represented in the original mechanistic model and provide a method to replicate complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
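
    For readers unfamiliar with the Volterra functional power series, the sketch below evaluates a discrete-time series truncated at second order. The kernels are toy values chosen for illustration, not the kernels identified in the study (which extends the expansion to third order).

    ```python
    import numpy as np

    def volterra_2nd_order(x, k0, k1, k2):
        """Evaluate a discrete-time Volterra series truncated at second order.

        y[t] = k0 + sum_i k1[i] x[t-i] + sum_{i,j} k2[i, j] x[t-i] x[t-j]

        x  : input waveform or spike train (1-D array)
        k1 : first-order kernel, shape (M,)
        k2 : second-order kernel, shape (M, M)
        """
        M = len(k1)
        y = np.full(len(x), k0, dtype=float)
        for t in range(len(x)):
            past = x[max(0, t - M + 1): t + 1][::-1]      # x[t], x[t-1], ...
            past = np.pad(past, (0, M - len(past)))       # zero-pad early samples
            y[t] += k1 @ past + past @ k2 @ past
        return y

    # Toy kernels: exponential first-order response plus a weak second-order term
    M = 20
    k1 = 0.8 * np.exp(-np.arange(M) / 5.0)
    k2 = 0.02 * np.outer(k1, k1)
    x = np.zeros(100)
    x[[10, 15, 18, 60]] = 1.0                             # sparse input "spikes"
    print(volterra_2nd_order(x, k0=0.0, k1=k1, k2=k2)[:25].round(3))
    ```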

  16. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; Kamae, Y.; Lohmann, G.; Lunt, D. J.; Abe-Ouchi, A.; Pickering, S. J.; Ramstein, G.; Rosenbloom, N. A.; Salzmann, U.; Sohl, L.; Stepanek, C.; Ueda, H.; Yan, Q.; Zhang, Z.

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-mode data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data-model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  17. Advances in Simulating Large-scale Water Cycle Processes in the Community Land Model Version 5.0

    NASA Astrophysics Data System (ADS)

    Lawrence, D. M.; Swenson, S. C.; Clark, M. P.; Li, H. Y.; Brunke, M.; Perket, J.

    2015-12-01

    The Community Land Model is the land component of the Community Earth System Model (CESM). In this presentation, we will describe a comprehensive suite of recent improvements to the representation of water cycle processes in CLM that have been developed in collaboration with the research community that utilizes CLM. Results from a set of offline simulations comparing several versions of CLM will be presented and compared against observed data for runoff, river discharge, soil moisture, and total water storage to assess the performance of the new model. In particular, we will demonstrate how comparisons to GRACE and FLUXNET-MTE evapotranspiration data contributed to the identification and correction of problems in the model. The new model, CLM5, will be incorporated in CESM2 and provides the basis for improved large-scale modeling and study of energy, water, and biogeochemical (carbon and nitrogen) cycles. Opportunities for further improvement and the CUAHSI - CLM partnership will also be discussed.

  18. Inflation in Realistic D-Brane Models

    NASA Astrophysics Data System (ADS)

    Burgess, C. P.; Cline, J. M.; Stoica, H.; Quevedo, F.

    2004-09-01

    We find successful models of D-brane/anti-brane inflation within a string context. We work within the GKP-KKLT class of type IIB string vacua for which many moduli are stabilized through fluxes, as recently modified to include 'realistic' orbifold sectors containing standard-model type particles. We allow all moduli to roll when searching for inflationary solutions and find that inflation is not generic inasmuch as special choices must be made for the parameters describing the vacuum. But given these choices inflation can occur for a reasonably wide range of initial conditions for the brane and antibrane. We find that D-terms associated with the orbifold blowing-up modes play an important role in the inflationary dynamics. Since the models contain a standard-model-like sector after inflation, they open up the possibility of addressing reheating issues. We calculate predictions for the CMB temperature fluctuations and find that these can be consistent with observations, but are generically not deep within the scale-invariant regime and so can allow appreciable values for dn_s/d ln k as well as predicting a potentially observable gravity-wave signal. It is also possible to generate some admixture of isocurvature fluctuations.

  19. Comparing Realistic Subthalamic Nucleus Neuron Models

    NASA Astrophysics Data System (ADS)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of different firing patterns, silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above the threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that Mahalanobis distance, Victor-Purpura metric, and Interspike Interval distribution are sensitive to different firing regimes, whereas Mutual Information seems undiscriminative for these functional changes.
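
    Of the four (dis)similarity measures named in this record, the Victor-Purpura metric has a compact dynamic-programming implementation. The sketch below is a generic version with an assumed cost parameter q, not the authors' code.

    ```python
    import numpy as np

    def victor_purpura(s1, s2, q=1.0):
        """Victor-Purpura spike-train distance.

        Edit distance where adding or deleting a spike costs 1 and moving a
        spike by dt costs q*|dt|; q sets the timescale at which the metric is
        sensitive to spike timing rather than spike count alone.
        """
        n, m = len(s1), len(s2)
        D = np.zeros((n + 1, m + 1))
        D[:, 0] = np.arange(n + 1)          # delete all spikes of s1
        D[0, :] = np.arange(m + 1)          # insert all spikes of s2
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                shift = q * abs(s1[i - 1] - s2[j - 1])
                D[i, j] = min(D[i - 1, j] + 1,            # delete a spike
                              D[i, j - 1] + 1,            # insert a spike
                              D[i - 1, j - 1] + shift)    # shift a spike in time
        return D[n, m]

    # Two spike trains (times in seconds): similar counts, slightly different timing
    a = [0.10, 0.45, 0.80, 1.20]
    b = [0.12, 0.50, 1.25]
    print(victor_purpura(a, b, q=5.0))
    ```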

  20. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex, Hydrogeologic Systems

    NASA Astrophysics Data System (ADS)

    Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  1. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial setting of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  2. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, and in identifying relative parameter influence in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first order calculations of system behavior is presented.
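
    A minimal sketch of the kind of parameter sensitivity coefficient discussed above: a normalized central-difference estimate applied to a toy logistic population model. The model and parameter values are illustrative assumptions, not the physiological models of the report.

    ```python
    import numpy as np

    def sensitivity(model, params, name, delta=0.01):
        """Normalized first-order sensitivity of a model output to one parameter,
        estimated by a central finite difference:
            S = (p / y) * dy/dp
        so S is roughly the percent change in output per percent change in parameter.
        """
        p0 = params[name]
        up, down = dict(params), dict(params)
        up[name] = p0 * (1 + delta)
        down[name] = p0 * (1 - delta)
        y0 = model(params)
        dy_dp = (model(up) - model(down)) / (2 * delta * p0)
        return (p0 / y0) * dy_dp

    # Toy "physiological" model: logistic population size after a fixed time T
    def logistic_pop(p, T=10.0, N0=10.0):
        r, K = p["r"], p["K"]
        return K / (1 + (K / N0 - 1) * np.exp(-r * T))

    params = {"r": 0.3, "K": 1000.0}
    for name in params:
        print(name, round(sensitivity(logistic_pop, params, name), 3))
    ```

    Ranking parameters by such coefficients is one simple way to decide where additional data collection would most reduce uncertainty in model behavior, which is the resource-allocation point the abstract makes.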

  3. Open source large-scale high-resolution environmental modelling with GEMS

    NASA Astrophysics Data System (ADS)

    Baarsma, Rein; Alberti, Koko; Marra, Wouter; Karssenberg, Derek

    2016-04-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating the opportunity to run environmental models for every location on Earth. Collection of the data necessary to do this and the consequent conversion into a useful format is very demanding, however, not to mention the computational demand of a model itself. We developed GEMS (Global Environmental Modelling System), an online application to run environmental models on various scales directly in your browser and share the results with other researchers. GEMS is open-source and uses open-source platforms including Flask, Leaflet, GDAL, MapServer and the PCRaster-Python modelling framework to process spatio-temporal models in real time. With GEMS, users can write, run, and visualize the results of dynamic PCRaster-Python models in a browser. GEMS uses freely available global data to feed the models, and automatically converts the data to the relevant model extent and data format. Currently available data includes the SRTM elevation model, a selection of monthly vegetation data from MODIS, land use classifications from GlobCover, historical climate data from WorldClim, HWSD soil information from WorldGrids, population density from SEDAC and near real-time weather forecasts, most with a ±100 m resolution. Furthermore, users can add other or their own datasets using a web coverage service or a custom data provider script. With easy access to a wide range of base datasets and without the data preparation that is usually necessary to run environmental models, building and running a model becomes a matter of hours. Furthermore, it is easy to share the resulting maps, timeseries data or model scenarios with other researchers through a web mapping service (WMS). GEMS can be used to provide open access to model results. Additionally, environmental models in GEMS can be employed by users with no extensive experience with writing code, which is for example valuable for using models

  4. Use of Item Models in a Large-Scale Admissions Test: A Case Study

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.

    2008-01-01

    "Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996, 2002). They have the potential to produce large numbers of high-quality items at reduced cost. This article introduces data from an application of…

  5. Large-scale ligand-based predictive modelling using support vector machines.

    PubMed

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse. PMID:27516811
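
    The linear-versus-kernel comparison described in this record can be sketched with scikit-learn, whose LinearSVR and SVR estimators wrap LIBLINEAR and libsvm respectively. The data below are synthetic stand-ins, not the signatures descriptors or the 1.2-million-compound datasets used in the study.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVR, SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    # Synthetic stand-in for a molecular-descriptor regression problem
    rng = np.random.default_rng(0)
    X = rng.random((5000, 200))
    y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=5000)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Linear SVM (LIBLINEAR backend): scales to very large training sets
    lin = LinearSVR(C=1.0, max_iter=5000).fit(X_tr, y_tr)

    # Kernel SVM (libsvm backend, RBF kernel): often infeasible at large scale
    rbf = SVR(kernel="rbf", C=1.0).fit(X_tr, y_tr)

    print("LinearSVR R2:", round(r2_score(y_te, lin.predict(X_te)), 3))
    print("RBF SVR   R2:", round(r2_score(y_te, rbf.predict(X_te)), 3))
    ```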

  6. A large-scale model for simulating the fate & transport of organic contaminants in river basins.

    PubMed

    Lindim, C; van Gils, J; Cousins, I T

    2016-02-01

    We present STREAM-EU (Spatially and Temporally Resolved Exposure Assessment Model for EUropean basins), a novel dynamic mass balance model for predicting the environmental fate of organic contaminants in river basins. STREAM-EU goes beyond the current state-of-the-science in that it can simulate spatially and temporally resolved contaminant concentrations in all relevant environmental media (surface water, groundwater, snow, soil and sediments) at the river basin scale. The model can currently be applied to multiple organic contaminants in any river basin in Europe, but the model framework is adaptable to any river basin in any continent. We simulate the environmental fate of perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) in the Danube River basin and compare model predictions to recent monitoring data. The model predicts PFOS and PFOA concentrations that agree well with measured concentrations for large stretches of the river. Disagreements between the model predictions and measurements in some river sections are shown to be useful indicators of unknown contamination sources to the river basin.

  7. Artificial neural network modelling of a large-scale wastewater treatment plant operation.

    PubMed

    Güçlü, Dünyamin; Dursun, Sükrü

    2010-11-01

    Artificial Neural Networks (ANNs), a method of artificial intelligence, provide effective predictive models for complex processes. Three independent ANN models trained with the back-propagation algorithm were developed to predict effluent chemical oxygen demand (COD), suspended solids (SS) and aeration tank mixed liquor suspended solids (MLSS) concentrations of the Ankara central wastewater treatment plant. The appropriate architecture of ANN models was determined through several steps of training and testing of the models. ANN models yielded satisfactory predictions. Results of the root mean square error, mean absolute error and mean absolute percentage error were 3.23, 2.41 mg/L and 5.03% for COD; 1.59, 1.21 mg/L and 17.10% for SS; 52.51, 44.91 mg/L and 3.77% for MLSS, respectively, indicating that the developed model could be efficiently used. The results overall also confirm that the ANN modelling approach may have a great implementation potential for simulation, precise performance prediction and process control of wastewater treatment plants.
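
    The three reported error statistics are straightforward to compute; the sketch below shows their definitions applied to hypothetical effluent COD values (the numbers are invented, not the plant data).

    ```python
    import numpy as np

    def error_metrics(observed, predicted):
        """Root mean square error, mean absolute error, and mean absolute
        percentage error -- the three statistics the study reports for each
        effluent quality variable."""
        obs, pred = np.asarray(observed, float), np.asarray(predicted, float)
        err = pred - obs
        rmse = np.sqrt(np.mean(err**2))
        mae = np.mean(np.abs(err))
        mape = 100.0 * np.mean(np.abs(err / obs))
        return rmse, mae, mape

    # Hypothetical effluent COD values (mg/L): measured vs. ANN-predicted
    cod_obs = [48.0, 52.0, 61.0, 45.0, 57.0]
    cod_pred = [50.1, 49.8, 63.5, 46.2, 55.0]
    print("RMSE=%.2f  MAE=%.2f  MAPE=%.1f%%" % error_metrics(cod_obs, cod_pred))
    ```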

  8. Of mice, flies--and men? Comparing fungal infection models for large-scale screening efforts.

    PubMed

    Brunke, Sascha; Quintin, Jessica; Kasper, Lydia; Jacobsen, Ilse D; Richter, Martin E; Hiller, Ekkehard; Schwarzmüller, Tobias; d'Enfert, Christophe; Kuchler, Karl; Rupp, Steffen; Hube, Bernhard; Ferrandon, Dominique

    2015-05-01

    Studying infectious diseases requires suitable hosts for experimental in vivo infections. Recent years have seen the advent of many alternatives to murine infection models. However, the use of non-mammalian models is still controversial because it is often unclear how well findings from these systems predict virulence potential in humans or other mammals. Here, we compare the commonly used models, fruit fly and mouse (representing invertebrate and mammalian hosts), for their similarities and degree of correlation upon infection with a library of mutants of an important fungal pathogen, the yeast Candida glabrata. Using two indices, for fly survival time and for mouse fungal burden in specific organs, we show a good agreement between the models. We provide a suitable predictive model for estimating the virulence potential of C. glabrata mutants in the mouse from fly survival data. As examples, we found cell wall integrity mutants attenuated in flies, and mutants of a MAP kinase pathway had defective virulence in flies and reduced relative pathogen fitness in mice. In addition, mutants with strongly reduced in vitro growth generally, but not always, had reduced virulence in flies. Overall, we demonstrate that surveying Drosophila survival after infection is a suitable model to predict the outcome of murine infections, especially for severely attenuated C. glabrata mutants. Pre-screening of mutants in an invertebrate Drosophila model can, thus, provide a good estimate of the probability of finding a strain with reduced microbial burden in the mouse host. PMID:25786415

  9. Of mice, flies--and men? Comparing fungal infection models for large-scale screening efforts.

    PubMed

    Brunke, Sascha; Quintin, Jessica; Kasper, Lydia; Jacobsen, Ilse D; Richter, Martin E; Hiller, Ekkehard; Schwarzmüller, Tobias; d'Enfert, Christophe; Kuchler, Karl; Rupp, Steffen; Hube, Bernhard; Ferrandon, Dominique

    2015-05-01

    Studying infectious diseases requires suitable hosts for experimental in vivo infections. Recent years have seen the advent of many alternatives to murine infection models. However, the use of non-mammalian models is still controversial because it is often unclear how well findings from these systems predict virulence potential in humans or other mammals. Here, we compare the commonly used models, fruit fly and mouse (representing invertebrate and mammalian hosts), for their similarities and degree of correlation upon infection with a library of mutants of an important fungal pathogen, the yeast Candida glabrata. Using two indices, for fly survival time and for mouse fungal burden in specific organs, we show a good agreement between the models. We provide a suitable predictive model for estimating the virulence potential of C. glabrata mutants in the mouse from fly survival data. As examples, we found cell wall integrity mutants attenuated in flies, and mutants of a MAP kinase pathway had defective virulence in flies and reduced relative pathogen fitness in mice. In addition, mutants with strongly reduced in vitro growth generally, but not always, had reduced virulence in flies. Overall, we demonstrate that surveying Drosophila survival after infection is a suitable model to predict the outcome of murine infections, especially for severely attenuated C. glabrata mutants. Pre-screening of mutants in an invertebrate Drosophila model can, thus, provide a good estimate of the probability of finding a strain with reduced microbial burden in the mouse host.

  10. Exploring large-scale phenomena in composite membranes through an efficient implicit-solvent model

    NASA Astrophysics Data System (ADS)

    Laradji, Mohamed; Kumar, P. B. Sunil; Spangler, Eric J.

    2016-07-01

    Several microscopic and mesoscale models have been introduced in the past to investigate various phenomena in lipid membranes. Most of these models account for the solvent explicitly. Since, in a typical molecular dynamics simulation, the majority of particles belong to the solvent, much of the computational effort in these simulations is devoted to calculating forces between solvent particles. To overcome this problem, several implicit-solvent mesoscale models for lipid membranes have been proposed during the last few years. In the present article, we review an efficient coarse-grained implicit-solvent model we introduced earlier for studies of lipid membranes. In this model, lipid molecules are coarse-grained into short semi-flexible chains of beads with soft interactions. Through molecular dynamics simulations, the model is used to investigate the thermal, structural and elastic properties of lipid membranes. We will also review here a few studies, based on this model, of the phase behavior of nanoscale liposomes, cytoskeleton-induced blebbing in lipid membranes, as well as nanoparticle wrapping and endocytosis by tensionless lipid membranes.

  11. Of mice, flies – and men? Comparing fungal infection models for large-scale screening efforts

    PubMed Central

    Brunke, Sascha; Quintin, Jessica; Kasper, Lydia; Jacobsen, Ilse D.; Richter, Martin E.; Hiller, Ekkehard; Schwarzmüller, Tobias; d'Enfert, Christophe; Kuchler, Karl; Rupp, Steffen; Hube, Bernhard; Ferrandon, Dominique

    2015-01-01

    ABSTRACT Studying infectious diseases requires suitable hosts for experimental in vivo infections. Recent years have seen the advent of many alternatives to murine infection models. However, the use of non-mammalian models is still controversial because it is often unclear how well findings from these systems predict virulence potential in humans or other mammals. Here, we compare the commonly used models, fruit fly and mouse (representing invertebrate and mammalian hosts), for their similarities and degree of correlation upon infection with a library of mutants of an important fungal pathogen, the yeast Candida glabrata. Using two indices, for fly survival time and for mouse fungal burden in specific organs, we show a good agreement between the models. We provide a suitable predictive model for estimating the virulence potential of C. glabrata mutants in the mouse from fly survival data. As examples, we found cell wall integrity mutants attenuated in flies, and mutants of a MAP kinase pathway had defective virulence in flies and reduced relative pathogen fitness in mice. In addition, mutants with strongly reduced in vitro growth generally, but not always, had reduced virulence in flies. Overall, we demonstrate that surveying Drosophila survival after infection is a suitable model to predict the outcome of murine infections, especially for severely attenuated C. glabrata mutants. Pre-screening of mutants in an invertebrate Drosophila model can, thus, provide a good estimate of the probability of finding a strain with reduced microbial burden in the mouse host. PMID:25786415

  12. Survival Models on Unobserved Heterogeneity and their Applications in Analyzing Large-scale Survey Data

    PubMed Central

    Liu, Xian

    2014-01-01

    In survival analysis, researchers often encounter multivariate survival time data, in which failure times are correlated even in the presence of model covariates. It is argued that because observations are clustered by unobserved heterogeneity, the application of standard survival models can result in biased parameter estimates and erroneous model-based predictions. In this article, the author describes and compares four methods handling unobserved heterogeneity in survival analysis: the Andersen-Gill approach, the robust sandwich variance estimator, the hazard model with individual frailty, and the retransformation method. An empirical analysis provides strong evidence that in the presence of strong unobserved heterogeneity, the application of a standard survival model can yield equally robust parameter estimates and the likelihood ratio statistic as does a corresponding model adding an additional parameter for random effects. When predicting the survival function, however, a standard model on multivariate survival time data can result in serious prediction bias. The retransformation method is effective to derive an adjustment factor for correctly predicting the survival function. PMID:25525559

  13. Implementation of a large-scale flow routing scheme in the Canadian Regional Climate Model (CRCM)

    NASA Astrophysics Data System (ADS)

    Lucas-Picher, P.; Arora, V.; Caya, D.; Laprise, R.

    2002-12-01

    Freshwater flux from rivers acts as an important forcing on the ocean. With lower density than ocean saltwater, freshwater from rivers affects thermohaline circulation and sea-ice formation at high latitudes. Freshwater flux can be computed in a climate model by using runoff as an input into a flow routing model, which transfers runoff from the land surface to the continental edges. In addition to modeling freshwater flux for oceans, the streamflow obtained by the routing model can be used to assess the performance of atmospheric models on a climatological basis by comparisons with observed streamflow. The variable velocity flow routing algorithm of Arora and Boer (1999, JGR-Atmos., 104, 30965-30979) is used to compute river flow in the Canadian Regional Climate Model (CRCM) (Caya and Laprise, 1999, Mon. Wea. Rev., 127, 341-362). The flow routing scheme consists of surface and groundwater reservoirs, which obtain daily estimates of surface runoff and drainage inputs, respectively, simulated by the land surface scheme. The flow routing algorithm uses Manning's equation to estimate flow velocities. A rectangular river cross section is assumed with a fixed width, and the variable depth is estimated using the amount of water in the river, slope, and river width. Discretization of major river basins and flow directions for the North America domain are obtained at the polar stereographic resolution of the CRCM using 5 minute global river flow directions (Graham et al., 1999, WRR, 35, 583-587) as a template. Model runoff estimates from a global simulation of the Variable Infiltration Capacity (VIC) hydrological model are used to validate the routing scheme. Routing model results show that compared to the unrouted runoff, the inclusion of flow routing improves comparison with observation-based streamflow estimates.
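
    A minimal sketch of the variable-velocity routing idea described above: depth is diagnosed from the water stored in a rectangular reach, and Manning's equation then gives velocity and discharge. The roughness coefficient and reach geometry are assumed values for illustration, not parameters of the CRCM implementation.

    ```python
    def manning_flow(storage_m3, reach_length_m, width_m, slope, n=0.035):
        """Flow velocity and discharge for a rectangular channel reach.

        v = (1/n) * R^(2/3) * S^(1/2),  with hydraulic radius R = A / P,
        where depth is diagnosed from the water currently stored in the reach.
        """
        depth = storage_m3 / (width_m * reach_length_m)   # variable depth
        area = width_m * depth                            # flow cross-section
        perimeter = width_m + 2.0 * depth                 # wetted perimeter
        radius = area / perimeter
        velocity = (1.0 / n) * radius ** (2.0 / 3.0) * slope ** 0.5
        discharge = velocity * area
        return velocity, discharge

    # Example: 1e6 m3 stored in a 10 km reach, 200 m wide, slope 0.0005
    v, q = manning_flow(1.0e6, 10_000.0, 200.0, 5.0e-4)
    print(f"velocity = {v:.2f} m/s, discharge = {q:.1f} m3/s")
    ```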

  14. Incremental learning of Bayesian sensorimotor models: from low-level behaviours to large-scale structure of the environment

    NASA Astrophysics Data System (ADS)

    Diard, Julien; Gilet, Estelle; Simonin, Éva; Bessière, Pierre

    2010-12-01

    This paper concerns the incremental learning of hierarchies of representations of space in artificial or natural cognitive systems. We propose a mathematical formalism for defining space representations (Bayesian Maps) and modelling their interaction in hierarchies of representations (sensorimotor interaction operator). We illustrate our formalism with a robotic experiment. Starting from a model based on the proximity to obstacles, we learn a new one related to the direction of the light source. It provides new behaviours, like phototaxis and photophobia. We then combine these two maps so as to identify parts of the environment where the way the two modalities interact is recognisable. This classification is a basis for learning a higher level of abstraction map that describes the large-scale structure of the environment. In the final model, the perception-action cycle is modelled by a hierarchy of sensorimotor models of increasing time and space scales, which provide navigation strategies of increasing complexities.

  15. Middle atmosphere project. A semi-spectral numerical model for the large-scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Holton, J. R.; Wehrbein, W.

    1979-01-01

    The complete model is a semispectral model in which the longitudinal dependence is represented by expansion in zonal harmonics while the latitude and height dependencies are represented by a finite difference grid. The model is based on the primitive equations in the log pressure coordinate system. The lower boundary of the model domain is set at the 100 mb level (i.e., near the tropopause) and the effects of tropospheric forcing are included in the lower boundary condition. The upper boundary is at approximately 96 km, and the latitudinal extent is either global or hemispheric. The basic differential equations and boundary conditions are outlined. The finite difference equations are described. The initial conditions are discussed and a sample calculation is presented. The FORTRAN code is given in the appendix.

  16. Topology of large-scale structure in seeded hot dark matter models

    NASA Technical Reports Server (NTRS)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

    The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.

  17. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Knox, Craig K.

    Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continues to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ˜2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed to study the first-ever hydrated Nafion systems that are large enough (~2 million atoms in a ˜30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems consisted of six of the most significant and relevant morphological models of Nafion to-date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume

  18. Study of an engine flow diverter system for a large scale ejector powered aircraft model

    NASA Technical Reports Server (NTRS)

    Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.

    1981-01-01

    Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4 scale fighter model equipped with YJ-79 engines. Model constraints were identified and cost-effective limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.

  19. Uncovering Implicit Assumptions: a Large-Scale Study on Students' Mental Models of Diffusion

    NASA Astrophysics Data System (ADS)

    Stains, Marilyne; Sevian, Hannah

    2015-12-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in the gaseous medium. A large sample of data ( N = 423) from students across grade 8 (age 13) through upper-level undergraduate was subjected to a cluster analysis to determine the main mental models present. The cluster analysis resulted in a reduced data set ( N = 308), and then, mental models were ascertained from robust clusters. The mental models that emerged from analysis were triangulated through interview data and characterised according to underlying implicit assumptions that guide and constrain thinking about diffusion of a solute in a gaseous medium. Impacts of students' level of preparation in science and relationships of mental models to science disciplines studied by students were examined. Implications are discussed for the value of this approach to identify typical mental models and the sets of implicit assumptions that constrain them.

  20. A large-scale simulation model to assess karstic groundwater recharge over Europe and the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hartmann, A.; Gleeson, T.; Rosolem, R.; Pianosi, F.; Wada, Y.; Wagener, T.

    2015-06-01

    Karst develops through the dissolution of carbonate rock and is a major source of groundwater contributing up to half of the total drinking water supply in some European countries. Previous approaches to model future water availability in Europe are either too-small scale or do not incorporate karst processes, i.e. preferential flow paths. This study presents the first simulations of groundwater recharge in all karst regions in Europe with a parsimonious karst hydrology model. A novel parameter confinement strategy combines a priori information with recharge-related observations (actual evapotranspiration and soil moisture) at locations across Europe while explicitly identifying uncertainty in the model parameters. Europe's karst regions are divided into four typical karst landscapes (humid, mountain, Mediterranean and desert) by cluster analysis and recharge is simulated from 2002 to 2012 for each karst landscape. Mean annual recharge ranges from negligible in deserts to > 1 m a-1 in humid regions. The majority of recharge rates range from 20 to 50% of precipitation and are sensitive to subannual climate variability. Simulation results are consistent with independent observations of mean annual recharge and significantly better than other global hydrology models that do not consider karst processes (PCR-GLOBWB, WaterGAP). Global hydrology models systematically under-estimate karst recharge implying that they over-estimate actual evapotranspiration and surface runoff. Karst water budgets and thus information to support management decisions regarding drinking water supply and flood risk are significantly improved by our model.

  1. Discrete element modelling of large scale particle systems—I: exact scaling laws

    NASA Astrophysics Data System (ADS)

    Feng, Y. T.; Owen, D. R. J.

    2014-06-01

    The discrete element method has emerged as a powerful predictive tool for the numerical modelling of many scientific and engineering problems involving discrete and discontinuous phenomena. There are nevertheless computational challenges to resolve before industrial scale applications can be effectively simulated. This multi-part paper aims to address some of the theoretical and computational issues central to achieving this goal. In the first part of this paper, a simple but generic theoretical framework is established for the development of a comprehensive set of scaling conditions, under which a scaled discrete element model can exactly reproduce the mechanical behaviour of a physical model. In particular, three basic physical quantities and their scale factors can be freely chosen. A special selection leads to a unique set of scale factors governing an exact scaling, which also gives rise to the requirement that all the interaction laws employed in a scaled model be scale-invariant. The subsequent examination reveals that most commonly used interaction laws, if all material (mechanical and physical) properties are treated as constant, do not possess such a feature and therefore cannot be directly employed in a scaled model. The problem can be solved by treating the scaled particles as pseudo-particles and by properly scaling the interaction laws. The resulting scaled interaction laws become scale-invariant and thus can be used in a scaled model.
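
    A small sketch of the dimensional reasoning behind such scaling laws: choosing scale factors for three base quantities fixes the rest. The particular base quantities used here (length, density, elastic modulus) are an assumption for illustration, not necessarily the selection made in the paper.

    ```python
    import math

    def derived_scale_factors(h_length, h_density, h_modulus):
        """Derive remaining scale factors from three freely chosen ones
        (length, density, elastic modulus) by dimensional analysis, in the
        spirit of the exact-scaling framework described in the abstract."""
        h_velocity = math.sqrt(h_modulus / h_density)   # [E / rho] = velocity^2
        h_time = h_length / h_velocity
        h_mass = h_density * h_length ** 3
        h_force = h_modulus * h_length ** 2             # stress x area
        return {"velocity": h_velocity, "time": h_time,
                "mass": h_mass, "force": h_force}

    # Example: scale particle sizes up 10x, keep material density and stiffness
    print(derived_scale_factors(h_length=10.0, h_density=1.0, h_modulus=1.0))
    # The time, mass and force scales that fall out illustrate why interaction
    # laws must themselves be scale-invariant (or re-scaled) before being used
    # with the coarser pseudo-particles.
    ```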

  2. A finite element beam-model for efficient simulation of large-scale porous structures.

    PubMed

    Stauber, Martin; Huber, Martin; Van Lenthe, G Harry; Boyd, Steven K; Müller, Ralph

    2004-02-01

    This paper presents a new method for the generation of a beam finite element (FE) model from a three-dimensional (3D) data set acquired by micro-computed tomography (micro-CT). This method differs from classical modeling of trabecular bone because it models a specific sample only and differs from conventional solid hexahedron element-based FE approaches in its computational efficiency. The stress-strain curve, characterizing global mechanical properties of a porous structure, could be well predicted (R² = 0.92). Furthermore, validation of the method was achieved by comparing local displacements of element nodes with the displacements directly measured by time-lapsed imaging methods of failure, and these measures were in good agreement. The presented model is a first step in modeling specific samples for efficient strength analysis by FE modeling. We believe that with upcoming high-resolution in-vivo imaging methods, this approach could lead to a novel and accurate tool in the risk assessment for osteoporotic fractures.

  3. Hydrological improvements for nutrient and pollutant emission modeling in large scale catchments

    NASA Astrophysics Data System (ADS)

    Höllering, S.; Ihringer, J.

    2012-04-01

    An estimation of emissions and loads of nutrients and pollutants into European water bodies with as much accuracy as possible depends largely on the knowledge about the spatially and temporally distributed hydrological runoff patterns. An improved hydrological water balance model for the pollutant emission model MoRE (Modeling of Regionalized Emissions) (IWG, 2011) has been introduced that can form an adequate basis to simulate discharge in a hydrologically differentiated, land-use based way to subsequently provide the required distributed discharge components. First of all, the hydrological model had to comply with requirements of both space and time in order to calculate the water balance sufficiently precisely on the catchment scale, spatially distributed in sub-catchments and with a higher temporal resolution. Aiming to reproduce seasonal dynamics and the characteristic hydrological regimes of river catchments, a daily (instead of a yearly) time increment was applied, allowing for a more process oriented simulation of discharge dynamics, volume and therefore water balance. The enhancement of the hydrological model also became necessary to potentially account for the hydrological functioning of catchments in regard to scenarios of e.g. a changing climate or alterations of land use. As a deterministic, partly physically based, conceptual hydrological watershed and water balance model, the Precipitation Runoff Modeling System (PRMS) (USGS, 2009) was selected to improve the hydrological input for MoRE. In PRMS the spatial discretization is implemented with sub-catchments and so-called hydrologic response units (HRUs), which are the hydrotropic, distributed, finite modeling entities each having a homogeneous runoff reaction due to hydro-meteorological events. Spatial structures and heterogeneities in sub-catchments, e.g. urbanity, land use and soil types, were identified to derive hydrological similarities and to classify them into different urban and rural HRUs. In this way the

  4. Implementation of large-scale landscape evolution modelling to real high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Schroeder, S.; Babeyko, A. Y.

    2012-12-01

    We have developed a surface evolution model to be naturally integrated with 3D thermomechanical codes like SLIM-3D to study coupled tectonic-climate interaction. The resolution of the surface evolution model is independent of that of the underlying continuum box. The surface model follows the concept of the cellular automaton implemented on a regular Eulerian mesh. It incorporates an effective filling algorithm that guarantees flow direction in each cell, D8 search for flow directions, computation of discharges and bedrock incision. Additionally, the model implements hillslope erosion in the form of non-linear, slope-dependent diffusion. The model was designed to be employed not only to synthetic topographies but also to real Digital Elevation Models (DEM). In the present work we report our experience with model application to the 30-meter resolution ASTER GDEM of the Pamir orogen, in particular, to the segment of the Panj river. We start with calibration of the model parameters (fluvial incision and hillslope diffusion coefficients) using direct measurements of Panj incision rates and volumes of suspended sediment transport. Since the incision algorithm is independent of hillslope processes, we first adjust the incision parameters. Power-law exponents of the incision equation were evaluated from the profile curvature of the main Pamir rivers. After that, the incision coefficient was adjusted to fit the observed incision rate of 5 mm/y. Once the model results are consistent with the measured data, the calibration of hillslope processes follows. For a given critical slope, the diffusivity could be fitted to match the observed sediment discharge. Applying the surface evolution model to a real DEM reveals specific problems which do not appear when working with synthetic landscapes. One of them is the noise of the satellite-measured topography. In particular, due to the non-vertical observation perspective, the satellite may not be able to detect the bottom of the river channel, especially
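
    The incision component described above follows the familiar stream-power form; a minimal sketch with placeholder coefficient and exponents is given below. In the study these values are not placeholders but are calibrated against profile curvature and the observed ~5 mm/yr Panj incision rate.

    ```python
    def stream_power_incision(drainage_area_m2, slope, K=1.0e-6, m=0.5, n=1.0):
        """Detachment-limited stream-power incision rate E = K * A^m * S^n.

        K, m and n are placeholder values here; the abstract describes inferring
        the exponents from river profile curvature and tuning K to match the
        observed incision rate.
        """
        return K * drainage_area_m2 ** m * slope ** n

    # Example: a cell draining 1000 km^2 with a local channel slope of 2%
    E = stream_power_incision(1.0e9, 0.02)          # metres per year
    print(f"incision rate ~ {E * 1000:.2f} mm/yr")
    ```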

  5. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse)

    PubMed Central

    Erguler, Kamil; Smith-Unna, Stephanie E.; Waldock, Joanna; Proestos, Yiannis; Christophides, George K.; Lelieveld, Jos; Parham, Paul E.

    2016-01-01

    The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations. PMID:26871447

  6. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse).

    PubMed

    Erguler, Kamil; Smith-Unna, Stephanie E; Waldock, Joanna; Proestos, Yiannis; Christophides, George K; Lelieveld, Jos; Parham, Paul E

    2016-01-01

    The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations. PMID:26871447

  8. Large-scale Operational Evapotranspiration Mapping Using Remote Sensing and Weather Datasets: Modeling and Validation

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Velpuri, N.; Singh, R. K.; Bohms, S.; Verdin, J. P.

    2013-12-01

    We present a simple but robust method that uses remotely sensed thermal data and model-assimilated weather fields to produce actual evapotranspiration (ET) for the contiguous United States (CONUS) at monthly and seasonal time scales. The method is based on the Simplified Surface Energy Balance (SSEB) model which is now parameterized for operational applications, and renamed as SSEBop. The innovative aspect of the SSEBop is that it uses pre-defined boundary conditions that are unique to each pixel for the 'hot' and 'cold' reference end members. We used SSEBop to compute 13 years (2000-2012) of monthly ET using MODIS and data streams provided by Global Data Assimilation System (GDAS). Validation of SSEBop performance (model to observed as well as model to model) was performed over the CONUS at both point and basin scales. Point scale model to observed validation was performed using eddy covariance FLUXNET ET (FLET) data (2001-2007) aggregated by year, land cover, elevation and climate zone. Basin scale model to observed validation was performed using annual gridded FLUXNET ET (GFET) and annual basin water balance ET (WBET) data aggregated by various Hydrologic Unit Code (HUC) levels. Model-to-model comparison was also performed by comparing SSEBop ET with MOD16 ET. Point scale validation using monthly data aggregated by years revealed that the MOD16 ET and SSEBop ET products compared well with observations at comparable accuracies annually. Both ET products showed comparable results by most land cover types and by climate zones. However, SSEBop performed better for Grassland and Forest classes whereas MOD16 performed better for the woody savanna class. Validation results at different HUC levels over 2000-2011 using GFET as a reference indicated higher accuracies for MOD16 ET data. MOD16, SSEBop and GFET data were validated against WBET (2000-2009), and results indicate that both MOD16 and SSEBop ET matched the accuracies of the global GFET dataset at HUC levels. Our
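
    The core of the SSEBop idea, an evaporative fraction anchored between predefined per-pixel 'hot' and 'cold' reference temperatures, can be written in a few lines. The snippet below is a schematic rendering with invented numbers; the operational definitions of the cold reference temperature, the hot-cold difference, and the scaling coefficient belong to the SSEBop literature and are not reproduced exactly here.

```python
import numpy as np

def ssebop_et_fraction(land_surface_temp, t_cold, dt):
    """Schematic SSEBop-style evaporative fraction.

    t_cold : per-pixel 'cold' reference temperature (K), e.g. from well-watered
             vegetated conditions; dt : predefined hot-cold temperature difference (K).
    Values and variable meanings here are illustrative, not the operational product.
    """
    t_hot = t_cold + dt
    etf = (t_hot - land_surface_temp) / dt
    return np.clip(etf, 0.0, 1.05)  # small overshoot sometimes tolerated

# Toy example: three pixels with MODIS-like land surface temperatures (K)
lst = np.array([295.0, 305.0, 314.0])
etf = ssebop_et_fraction(lst, t_cold=293.0, dt=20.0)
eto = 5.0   # hypothetical grass reference ET from weather data (mm/day)
k = 1.0     # hypothetical scaling coefficient
print("actual ET (mm/day):", np.round(etf * k * eto, 2))
```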

  9. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys

    PubMed Central

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov–Malyshev–Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683

  10. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys.

    PubMed

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov-Malyshev-Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683
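
    The FMP estimator at the core of this approach converts track-crossing counts into a density using transect length and daily movement distance. A minimal numerical illustration, using the commonly cited pi/2 form of the estimator and invented survey numbers:

```python
import math

def fmp_density(track_crossings, transect_length_km, daily_movement_km):
    """Formozov-Malyshev-Pereleshin estimator (commonly cited pi/2 form):
    density = (pi / 2) * crossings / (transect_length * daily_movement).
    Units: animals per km^2 if both lengths are in km."""
    return (math.pi / 2.0) * track_crossings / (transect_length_km * daily_movement_km)

# Invented survey numbers: 24 crossings on 12 km of transect,
# animals moving about 3 km per day.
print(round(fmp_density(24, 12.0, 3.0), 3), "animals per km^2")
```

    The Bayesian extension described above replaces this single ratio with a generalized linear model carrying spatio-temporal error structure, but the same conversion sits at its core.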

  11. High-resolution global topographic index values for use in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Marthews, Toby; Dadson, Simon; Lehner, Bernhard; Abele, Simon; Gedney, Nicola

    2015-04-01

    Modelling land surface water flow is of critical importance for simulating land-surface fluxes, predicting runoff and water table dynamics and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL, and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels calculated from hydrologically-conditioned HydroSHEDS data using the GA2 algorithm ('GRIDATB 2'). At 15 arc-sec resolution, these layers are four times finer than the resolution of the previously best-available topographic index layers, the Compound Topographic Index of HYDRO1k (CTI). For the largest river catchments occurring on each continent we found that, in comparison with CTI our revised values were up to 20% lower in, e.g., the Amazon. We found the highest catchment means were for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as found from the CTI. For the majority of large catchments, however, the spread of our new GA2 index values is very similar to those of CTI, yet with more spatial variability apparent at fine scale. We believe these new index layers represent greatly-improved global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.

  12. Norway's 2011 Terror Attacks: Alleviating National Trauma With a Large-Scale Proactive Intervention Model.

    PubMed

    Kärki, Freja Ulvestad

    2015-09-01

    After the terror attacks of July 22, 2011, Norwegian health authorities piloted a new model for municipality-based psychosocial follow-up with victims. This column describes the development of a comprehensive follow-up intervention by health authorities and others that has been implemented at the municipality level across Norway. The model's principles emphasize proactivity by service providers; individually tailored help, with each victim being assigned a contact person in the residential municipality; continuity and long-term focus; effective intersectorial collaboration; and standardized screening of symptoms during the first year. Weekend reunions were also organized for the bereaved, and one-day reunions were organized for the survivors and their families at intervals over the first 18 months. Preliminary findings indicate a high level of success in model implementation. However, the overall effect of the interventions will be a subject for future evaluations. PMID:26030322

  13. The flow structure of pyroclastic density currents: evidence from particle models and large-scale experiments

    NASA Astrophysics Data System (ADS)

    Dellino, Pierfrancesco; Büttner, Ralf; Dioguardi, Fabio; Doronzo, Domenico Maria; La Volpe, Luigi; Mele, Daniela; Sonder, Ingo; Sulpizio, Roberto; Zimanowski, Bernd

    2010-05-01

    Pyroclastic flows are ground hugging, hot, gas-particle flows. They represent the most hazardous events of explosive volcanism, one striking example being the famous historical eruption of Pompeii (AD 79) at Vesuvius. Much of our knowledge on the mechanics of pyroclastic flows comes from theoretical models and numerical simulations. Valuable data are also stored in the geological record of past eruptions, i.e. the particles contained in pyroclastic deposits, but they are rarely used for quantifying the destructive potential of pyroclastic flows. In this paper, by means of experiments, we validate a model that is based on data from pyroclastic deposits. It allows the reconstruction of the current's fluid-dynamic behaviour. We show that our model results in likely values of dynamic pressure and particle volumetric concentration, and allows quantifying the hazard potential of pyroclastic flows.
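
    The dynamic pressure reconstructed by such models is the product of the bulk density of the gas-particle mixture and the square of the flow speed. A minimal worked example with textbook relations and invented values:

```python
def mixture_density(particle_conc, particle_density=2500.0, gas_density=0.6):
    """Bulk density of a dilute particle-gas mixture (kg/m^3).
    particle_conc: particle volume fraction (dimensionless, e.g. 0.001 = 0.1%)."""
    return particle_conc * particle_density + (1.0 - particle_conc) * gas_density

def dynamic_pressure(particle_conc, speed):
    """Dynamic pressure P = 0.5 * rho_mixture * u^2 (Pa)."""
    return 0.5 * mixture_density(particle_conc) * speed ** 2

# Invented example: 0.1% particles by volume moving at 30 m/s
print(round(dynamic_pressure(0.001, 30.0) / 1000.0, 2), "kPa")
```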

  15. High-resolution global topographic index values for use in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Marthews, T. R.; Dadson, S. J.; Lehner, B.; Abele, S.; Gedney, N.

    2015-01-01

    Modelling land surface water flow is of critical importance for simulating land surface fluxes, predicting runoff and water table dynamics and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL (TOPography-based hydrological MODEL), and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels calculated from hydrologically conditioned HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales) data using the GA2 algorithm (GRIDATB 2). At 15 arcsec resolution, these layers are 4 times finer than the resolution of the previously best-available topographic index layers, the compound topographic index of HYDRO1k (CTI). For the largest river catchments occurring on each continent we found that, in comparison with CTI our revised values were up to 20% lower in, e.g. the Amazon. We found the highest catchment means were for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as found from the CTI. For the majority of large catchments, however, the spread of our new GA2 index values is very similar to those of CTI, yet with more spatial variability apparent at fine scale. We believe these new index layers represent greatly improved global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.
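
    The quantity being mapped is the TOPMODEL topographic index ln(a / tan β), with a the specific upslope contributing area and β the local slope. A toy per-pixel computation is sketched below; in the actual layers, contributing areas and slopes come from hydrologically conditioned HydroSHEDS data via the GA2 (GRIDATB 2) algorithm, and the grid values here are invented.

```python
import numpy as np

def topographic_index(upslope_area_m2, flow_width_m, slope_rad):
    """TOPMODEL topographic index ln(a / tan(beta)).

    upslope_area_m2 : upslope contributing area draining through the cell
    flow_width_m    : effective contour width (often the cell size)
    slope_rad       : local surface slope in radians
    """
    specific_area = upslope_area_m2 / flow_width_m
    tan_beta = np.maximum(np.tan(slope_rad), 1e-4)  # avoid division by zero on flats
    return np.log(specific_area / tan_beta)

# Toy 2x2 grids (values invented); 450 m is roughly a 15 arcsec cell at the equator
area = np.array([[5.0e4, 2.0e5], [1.0e6, 8.0e6]])   # m^2
slope = np.deg2rad(np.array([[8.0, 4.0], [2.0, 0.5]]))
print(np.round(topographic_index(area, flow_width_m=450.0, slope_rad=slope), 2))
```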

  16. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    SciTech Connect

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-09-15

    The present study was performed to develop a robust gene-based prediction model for early assessment of potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. Support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained in several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics in the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. The optimized model consisting of 9 probes had 99% sensitivity and 97% specificity. This model
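
    The modeling strategy, a support vector machine wrapped around a gene-selection search, can be sketched with standard scikit-learn components. The data below are synthetic stand-ins; the TG-GATEs expression profiles and the specific wrapper-type selection algorithm of the study are not reproduced, and recursive feature elimination is used here only as a simple illustrative wrapper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in for expression data: 60 treated groups x 500 probes,
# with a handful of probes carrying the "carcinogenicity" signal.
X = rng.normal(size=(60, 500))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=60) > 0).astype(int)

# Wrapper-style selection: recursively eliminate probes using a linear SVM,
# then score the reduced model by cross-validation.
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=9, step=50)
selector.fit(X, y)
X_sel = X[:, selector.support_]

scores = cross_val_score(SVC(kernel="linear", C=1.0), X_sel, y, cv=5)
print("selected probes:", np.where(selector.support_)[0])
print("cross-validated accuracy: %.2f" % scores.mean())
```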

  17. Integrated model for transport and large scale instabilities in tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Halpern, Federico David

    Improved models for neoclassical tearing modes and anomalous transport are developed and validated within integrated modeling codes to predict toroidal rotation, temperature and current density profiles in tokamak plasmas. Neoclassical tearing modes produce helical filaments of plasma, called magnetic islands, which have the effect of degrading tokamak plasma confinement or terminating the discharge. An improved code is developed in order to compute the widths of multiple simultaneous magnetic islands whose shapes are distorted by the radial variation in the magnetic perturbation [F. D. Halpern, et al., J. Plasma Physics 72 (2006) 1153]. It is found in simulations of DIII-D and JET tokamak discharges that multiple simultaneous magnetic islands produce a 10% to 20% reduction in plasma thermal confinement. If magnetic islands are allowed to grow to their full width in ITER fusion reactor simulations, fusion power production is reduced by a factor of four [F. D. Halpern, et al., Phys. Plasmas 13 (2006) 062510]. In addition to improving the prediction of neoclassical tearing modes, a new Multi-Mode transport model, MMM08, was developed to predict temperature and toroidal angular frequency profiles in simulations of tokamak discharges. The capability for predicting toroidal rotation is motivated by ITER simulation results that indicate that the effects of toroidal rotation can increase ITER fusion power production [F. D. Halpern et al., Phys. Plasmas 15 (2008), 062505]. The MMM08 model consists of an improved model for transport driven by ion drift modes [F. D. Halpern et al., Phys. Plasmas 15 (2008) 012304] together with a model for transport driven by short wavelength electron drift modes combined with models for transport driven by classical processes. The new MMM08 transport model was validated by comparing predictive simulation results with experimental data for 32 discharges in the DIII-D and JET tokamaks. It was found that the prediction of intrinsic plasma

  18. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    PubMed

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, they underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process. PMID:26989410

  20. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm

    PubMed Central

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K.

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. Recurrent Neural Network is one of the most popular but simple approaches to model the network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, they underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with Recurrent Neural Network. Cuckoo Search is used to search the best combination of regulators. Moreover, Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, the proposed method sacrifices computational time complexity in both cases due to the hybrid optimization process. PMID:26989410
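
    The recurrent-neural-network formalism referred to above updates each gene's expression through a saturating function of the weighted expression of its regulators. The sketch below simulates a small network of this form and fits the weights by naive random search, standing in for the Cuckoo Search-Flower Pollination hybrid; the network, parameters, and data are synthetic.

```python
import numpy as np

def rnn_grn_step(x, W, beta, tau, dt=0.1):
    """One Euler step of an RNN-style gene-network model:
    dx_i/dt = (1/tau_i) * ( sigmoid(sum_j W_ij x_j + beta_i) - x_i )."""
    drive = 1.0 / (1.0 + np.exp(-(W @ x + beta)))
    return x + dt * (drive - x) / tau

def simulate(W, beta, tau, x0, steps=50):
    xs, x = [x0], x0.copy()
    for _ in range(steps):
        x = rnn_grn_step(x, W, beta, tau)
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(1)
n = 4
W_true = rng.normal(scale=2.0, size=(n, n)) * (rng.random((n, n)) < 0.4)
beta_true, tau_true = rng.normal(size=n), np.full(n, 2.0)
data = simulate(W_true, beta_true, tau_true, x0=np.full(n, 0.2))

# Naive random search over weights (placeholder for the CS-FPA hybrid in the paper)
best_W, best_err = None, np.inf
for _ in range(2000):
    W = rng.normal(scale=2.0, size=(n, n))
    err = np.mean((simulate(W, beta_true, tau_true, np.full(n, 0.2)) - data) ** 2)
    if err < best_err:
        best_W, best_err = W, err
print("best mean squared error:", round(best_err, 5))
```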

  1. Splitting failure in side walls of a large-scale underground cavern group: a numerical modelling and a field study.

    PubMed

    Wang, Zhishen; Li, Yong; Zhu, Weishen; Xue, Yiguo; Yu, Song

    2016-01-01

    Vertical splitting cracks often appear in side walls of large-scale underground caverns during excavations owing to the brittle characteristics of surrounding rock mass, especially under the conditions of high in situ stress and great overburden depth. This phenomenon greatly affects the integral safety and stability of the underground caverns. In this paper, a transverse isotropic constitutive model and a splitting failure criterion are simultaneously proposed and implemented in FLAC3D through secondary development to numerically simulate the integral stability of the underground caverns during excavations in Dagangshan hydropower station in Sichuan province, China. Meanwhile, an in situ monitoring study on the displacement of the key points of the underground caverns has also been carried out, and the monitoring results are compared with the numerical results. From the comparative analysis, it can be concluded that the depths of the splitting relaxation area obtained by numerical simulation are almost consistent with the actual in situ monitoring values, as well as the trend of the displacement curves, which shows that the transverse isotropic constitutive model combined with the splitting failure criterion is appropriate for investigating the splitting failure in side walls of large-scale underground caverns and provides helpful guidance for predicting the depths of the splitting relaxation area in surrounding rock mass. PMID:27652101

  3. Toward understanding the large-scale land-atmosphere coupling in the models: Roles of different processes

    NASA Astrophysics Data System (ADS)

    Wei, Jiangfeng; Dirmeyer, Paul A.

    2010-10-01

    Two different Atmospheric General Circulation Models (AGCMs), each coupled to three different land surface schemes (LSSs) (six different model configurations in total), are used to study the roles of different model components and different action processes in land-atmosphere coupling. Experiments show that, for the six model configurations, the choice of AGCMs is the main reason for the substantially different precipitation variability, predictability, and land-atmosphere coupling strength among the configurations. The impact of different LSSs is secondary. Intraseasonal precipitation variability, which is mainly a property of the AGCM, can impact land-atmosphere coupling both directly in the atmosphere and indirectly through soil moisture response to precipitation. These results lead to a common conceptual decomposition of the land-atmosphere coupling strength and increase our understanding of large-scale land-atmosphere coupling.

  4. Robust classification of protein variation using structural modelling and large-scale data integration.

    PubMed

    Baugh, Evan H; Simmons-Edler, Riley; Müller, Christian L; Alford, Rebecca F; Volfovsky, Natalia; Lash, Alex E; Bonneau, Richard

    2016-04-01

    Existing methods for interpreting protein variation focus on annotating mutation pathogenicity rather than detailed interpretation of variant deleteriousness and frequently use only sequence-based or structure-based information. We present VIPUR, a computational framework that seamlessly integrates sequence analysis and structural modelling (using the Rosetta protein modelling suite) to identify and interpret deleterious protein variants. To train VIPUR, we collected 9477 protein variants with known effects on protein function from multiple organisms and curated structural models for each variant from crystal structures and homology models. VIPUR can be applied to mutations in any organism's proteome with improved generalized accuracy (AUROC .83) and interpretability (AUPR .87) compared to other methods. We demonstrate that VIPUR's predictions of deleteriousness match the biological phenotypes in ClinVar and provide a clear ranking of prediction confidence. We use VIPUR to interpret known mutations associated with inflammation and diabetes, demonstrating the structural diversity of disrupted functional sites and improved interpretation of mutations associated with human diseases. Lastly, we demonstrate VIPUR's ability to highlight candidate variants associated with human diseases by applying VIPUR to de novo variants associated with autism spectrum disorders. PMID:26926108

  5. Large Scale Tissue Morphogenesis Simulation on Heterogenous Systems Based on a Flexible Biomechanical Cell Model.

    PubMed

    Jeannin-Girardon, Anne; Ballet, Pascal; Rodin, Vincent

    2015-01-01

    The complexity of biological tissue morphogenesis makes in silico simulations of such systems very interesting in order to gain a better understanding of the underlying mechanisms ruling the development of multicellular tissues. This complexity is mainly due to two elements: firstly, biological tissues comprise a large number of cells; secondly, these cells exhibit complex interactions and behaviors. To address these two issues, we propose two tools: the first one is a virtual cell model that comprises two main elements: firstly, a mechanical structure (membrane, cytoskeleton, and cortex) and secondly, the main behaviors exhibited by biological cells, i.e., mitosis, growth, differentiation, molecule consumption and production, as well as the consideration of the physical constraints imposed by the environment. An artificial chemistry is also included in the model. This virtual cell model is coupled to an agent-based formalism. The second tool is a simulator that relies on the OpenCL framework. It allows efficient parallel simulations on heterogeneous devices such as micro-processors or graphics processors. We present two case studies validating the implementation of our model in our simulator: cellular proliferation controlled by cell signalling and limb growth in a virtual organism. PMID:26451816

  7. Robust classification of protein variation using structural modelling and large-scale data integration

    PubMed Central

    Baugh, Evan H.; Simmons-Edler, Riley; Müller, Christian L.; Alford, Rebecca F.; Volfovsky, Natalia; Lash, Alex E.; Bonneau, Richard

    2016-01-01

    Existing methods for interpreting protein variation focus on annotating mutation pathogenicity rather than detailed interpretation of variant deleteriousness and frequently use only sequence-based or structure-based information. We present VIPUR, a computational framework that seamlessly integrates sequence analysis and structural modelling (using the Rosetta protein modelling suite) to identify and interpret deleterious protein variants. To train VIPUR, we collected 9477 protein variants with known effects on protein function from multiple organisms and curated structural models for each variant from crystal structures and homology models. VIPUR can be applied to mutations in any organism's proteome with improved generalized accuracy (AUROC .83) and interpretability (AUPR .87) compared to other methods. We demonstrate that VIPUR's predictions of deleteriousness match the biological phenotypes in ClinVar and provide a clear ranking of prediction confidence. We use VIPUR to interpret known mutations associated with inflammation and diabetes, demonstrating the structural diversity of disrupted functional sites and improved interpretation of mutations associated with human diseases. Lastly, we demonstrate VIPUR's ability to highlight candidate variants associated with human diseases by applying VIPUR to de novo variants associated with autism spectrum disorders. PMID:26926108

  8. Effects of Large-Scale Flows on Coronal Abundances: Multispecies Models and TRACE Observations

    NASA Astrophysics Data System (ADS)

    Lenz, D. D.

    2003-05-01

    Understanding coronal abundances is crucial for interpreting coronal observations and for understanding coronal physical processes and heating. Bulk flows and gravity, both unmistakably present in the corona, significantly affect abundances. We present multispecies simulations of long-lived coronal structures and compare model results with TRACE observations, focusing on abundance variations and flows.

  9. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  10. Uncovering Implicit Assumptions: A Large-Scale Study on Students' Mental Models of Diffusion

    ERIC Educational Resources Information Center

    Stains, Marilyne; Sevian, Hannah

    2015-01-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in…

  11. Toward an Aspirational Learning Model Gleaned from Large-Scale Assessment

    ERIC Educational Resources Information Center

    Diket, Read M.; Xu, Lihua; Brewer, Thomas M.

    2014-01-01

    The aspirational model resulted from the authors' secondary analysis of the Mother/Child (M/C) test block from the 2008 National Assessment of Educational Progress restricted data that examined the responses of the national sample of 8th-grade students (n = 1648). This test block presented no artmaking task and consisted of the same 13…

  12. A balanced water layer concept for subglacial hydrology in large scale ice sheet models

    NASA Astrophysics Data System (ADS)

    Goeller, S.; Thoma, M.; Grosfeld, K.; Miller, H.

    2012-12-01

    There is currently no doubt about the existence of a widespread hydrological network under the Antarctic ice sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain inspired by the Gamburtsev Mountains, Antarctica. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the water flux-basal sliding coupling and reveal the predominance of the ice loss through the resulting ice streams against the stabilizing influence of less hydrologically active areas. We point out that established balance flux schemes quantify these effects only partially as their ability to store subglacial water is lacking.

  13. Can simple models predict large-scale surface ocean isoprene concentrations?

    NASA Astrophysics Data System (ADS)

    Booge, Dennis; Marandino, Christa A.; Schlundt, Cathleen; Palmer, Paul I.; Schlundt, Michael; Atlas, Elliot L.; Bracher, Astrid; Saltzman, Eric S.; Wallace, Douglas W. R.

    2016-09-01

    We use isoprene and related field measurements from three different ocean data sets together with remotely sensed satellite data to model global marine isoprene emissions. We show that using monthly mean satellite-derived chl a concentrations to parameterize isoprene with a constant chl a normalized isoprene production rate underpredicts the measured oceanic isoprene concentration by a mean factor of 19 ± 12. Improving the model by using phytoplankton functional type dependent production values and by decreasing the bacterial degradation rate of isoprene in the water column results in only a slight underestimation (factor 1.7 ± 1.2). We calculate global isoprene emissions of 0.21 Tg C for 2014 using this improved model, which is twice the value calculated using the original model. Nonetheless, the sea-to-air fluxes have to be at least 1 order of magnitude higher to account for measured atmospheric isoprene mixing ratios. These findings suggest that there is at least one missing oceanic source of isoprene and, possibly, other unknown factors in the ocean or atmosphere influencing the atmospheric values. The discrepancy between calculated fluxes and atmospheric observations must be reconciled in order to fully understand the importance of marine-derived isoprene as a precursor to remote marine boundary layer particle formation.

  14. On the representation of snow in large scale sea-ice models

    NASA Astrophysics Data System (ADS)

    Lecomte, O.; Fichefet, T.; Vancoppenolle, M.; Massonnet, F.

    2011-12-01

    In both hemispheres, the sea-ice snow cover is a key element in the local climate system and particularly in the processes driving the sea-ice thickness evolution. Because of its high reflectance and thermal insulating properties, the snow pack inhibits or delays the sea-ice summer surface melt. In winter however, snow acts as a blanket that curtails the heat loss from the sea ice to the atmosphere and therefore reduces the basal growth rate. Among the processes controlling the snow state on sea ice, snowfall, wind and temperature changes are probably the most important. Despite its high horizontal heterogeneity, due to the transport by wind and the underlying sea-ice thickness distribution, the snow cover is vertically stratified. Each layer carries a signature of past weather events, for relatively recent snow, and metamorphic pathways that older snow may have been through. In a simplified model, this snow stratigraphy can be represented by its vertical density profile, while the other snow properties are assumed to be computable from density. In this study, we analyze the importance of the snow density profile in both one-dimensional and full versions of the thermodynamic-dynamic Louvain-la-Neuve Sea-Ice Model (LIM3), which is part of the ocean modelling platform NEMO (Nucleus for European Modelling of the Ocean, IPSL, Paris). In order to do this, a new snow thermodynamic scheme was developed and implemented into LIM3. This scheme is multilayer with varying snow thermo-physical properties. For memory and computational cost reasons, it includes only 3 layers but the vertical grid is refined in thermodynamic routines. Although snow density is time- and space-dependent in the model, it is not a prognostic variable. The shape of the density profile is prescribed as a function of snow and ice thicknesses, based on snow pit observations. Several typical profiles are tested in the model and results are presented by

  15. A hierarchical stochastic model of large-scale atmospheric circulation patterns and multiple station daily precipitation

    NASA Astrophysics Data System (ADS)

    Wilson, Larry L.; Lettenmaier, Dennis P.; Skyllingstad, Eric

    1992-02-01

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for classification of daily weather states: k-means clustering, fuzzy clustering, principal components, and principal components coupled with k-means clustering. A semi-Markov model with a geometric distribution for within-class lengths of stay is used to describe the evolution of weather classes. A hierarchical modified Pólya urn model is used to simulate precipitation conditioned on the regional weather type. An information measure that considers both the probability of weather class occurrence and conditional precipitation probabilities is developed to quantify the extent to which each of the weather classification schemes discriminates the precipitation states (rain-no rain) at the precipitation stations. Evaluation of the four algorithms using the information measure shows that all methods performed equally well. The principal components method is chosen due to its ability to incorporate information from larger spatial fields. Precipitation amount distributions are assumed to be drawn from spatially correlated mixed exponential distributions, whose parameters varied by season and weather class. The model is implemented using National Meteorological Center historical atmospheric observations for the period 1964-1988 mapped to 5° × 5° grid cells over the eastern North Pacific, and three precipitation stations west of the Cascade mountain range in the state of Washington. Comparison of simulated weather class-station precipitation time series with observational data shows that the model preserved weather class statistics and mean daily precipitation quite well, especially for stations highest in the hierarchy. Precipitation amounts for the lowest precipitation station in the hierarchy, and for precipitation extremes, are not as well preserved.
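
    The generative structure described above, a (semi-)Markov chain of daily weather classes with geometric lengths of stay driving station precipitation occurrence and mixed-exponential amounts, can be sketched as follows. The two-class setup, persistence probabilities, occurrence probabilities, and mixture parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented two-class example: "wet regime" (0) and "dry regime" (1)
stay_prob = np.array([0.7, 0.85])        # geometric within-class persistence
switch_to = np.array([1, 0])             # with two classes, switching is deterministic
p_rain = np.array([[0.8, 0.6], [0.25, 0.1]])   # P(rain) per class (rows) and station (cols)
mix_w, mean1, mean2 = 0.6, 2.0, 12.0     # mixed exponential parameters (mm)

def simulate(days=365):
    state, classes, rain = 0, [], []
    for _ in range(days):
        classes.append(state)
        wet = rng.random(2) < p_rain[state]
        amounts = np.where(rng.random(2) < mix_w,
                           rng.exponential(mean1, 2),
                           rng.exponential(mean2, 2))
        rain.append(np.where(wet, amounts, 0.0))
        if rng.random() > stay_prob[state]:     # geometric length of stay
            state = switch_to[state]
    return np.array(classes), np.array(rain)

classes, rain = simulate()
print("fraction of days in wet regime:", round((classes == 0).mean(), 2))
print("mean daily precip per station (mm):", np.round(rain.mean(axis=0), 2))
```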

  16. HYPERstream: a multi-scale framework for streamflow routing in large-scale hydrological model

    NASA Astrophysics Data System (ADS)

    Piccolroaz, Sebastiano; Di Lazzaro, Michele; Zarlenga, Antonio; Majone, Bruno; Bellin, Alberto; Fiori, Aldo

    2016-05-01

    We present HYPERstream, an innovative streamflow routing scheme based on the width function instantaneous unit hydrograph (WFIUH) theory, which is specifically designed to facilitate coupling with weather forecasting and climate models. The proposed routing scheme preserves geomorphological dispersion of the river network when dealing with horizontal hydrological fluxes, irrespective of the computational grid size inherited from the overlaying climate model providing the meteorological forcing. This is achieved by simulating routing within the river network through suitable transfer functions obtained by applying the WFIUH theory to the desired level of detail. The underlying principle is similar to the block-effective dispersion employed in groundwater hydrology, with the transfer functions used to represent the effect on streamflow of morphological heterogeneity at scales smaller than the computational grid. Transfer functions are constructed for each grid cell with respect to the nodes of the network where streamflow is simulated, by taking advantage of the detailed morphological information contained in the digital elevation model (DEM) of the zone of interest. These characteristics make HYPERstream well suited for multi-scale applications, ranging from catchment up to continental scale, and to investigate extreme events (e.g., floods) that require an accurate description of routing through the river network. The routing scheme enjoys parsimony in the adopted parametrization and computational efficiency, leading to a dramatic reduction of the computational effort with respect to full-gridded models at comparable level of accuracy. HYPERstream is designed with a simple and flexible modular structure that allows for the selection of any rainfall-runoff model to be coupled with the routing scheme and the choice of different hillslope processes to be represented, and it makes the framework particularly suitable to massive parallelization, customization according to
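
    The routing idea, each grid cell contributing to a network node through a transfer function derived from the width function so that streamflow is a sum of convolutions, can be illustrated with a toy two-cell example. The gamma-shaped travel-time distributions below merely stand in for WFIUH transfer functions derived from a DEM; all numbers are invented.

```python
import numpy as np

def travel_time_pdf(n_steps, mean_delay, shape=3.0):
    """Discrete gamma-like transfer function (unit mass) standing in for a
    WFIUH derived from the width function of the contributing area."""
    t = np.arange(1, n_steps + 1, dtype=float)
    pdf = t ** (shape - 1) * np.exp(-shape * t / mean_delay)
    return pdf / pdf.sum()

def route_to_node(runoff_by_cell, transfer_by_cell):
    """Streamflow at a node = sum over cells of runoff convolved with that
    cell's transfer function (truncated to the simulation length)."""
    n = runoff_by_cell.shape[1]
    q = np.zeros(n)
    for runoff, tf in zip(runoff_by_cell, transfer_by_cell):
        q += np.convolve(runoff, tf)[:n]
    return q

# Two cells with invented daily runoff pulses (arbitrary units)
runoff = np.array([[5.0, 0, 0, 8.0, 0, 0, 0, 0, 0, 0],
                   [2.0, 2.0, 0, 0, 6.0, 0, 0, 0, 0, 0]])
transfers = [travel_time_pdf(10, mean_delay=2.0),   # nearby cell
             travel_time_pdf(10, mean_delay=5.0)]   # distant cell
print(np.round(route_to_node(runoff, transfers), 2))
```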

  17. Fractured shale reservoirs: Towards a realistic model

    SciTech Connect

    Hamilton-Smith, T.

    1996-09-01

    Fractured shale reservoirs are fundamentally unconventional, which is to say that their behavior is qualitatively different from reservoirs characterized by intergranular pore space. Attempts to analyze fractured shale reservoirs with conventional models are essentially misleading. Reliance on such models can have only negative results for fractured shale oil and gas exploration and development. A realistic model of fractured shale reservoirs begins with the history of the shale as a hydrocarbon source rock. Minimum levels of both kerogen concentration and thermal maturity are required for effective hydrocarbon generation. Hydrocarbon generation results in overpressuring of the shale. At some critical level of repressuring, the shale fractures in the ambient stress field. This primary natural fracture system is fundamental to the future behavior of the fractured shale gas reservoir. The fractures facilitate primary migration of oil and gas out of the shale and into the basin. In this process, all connate water is expelled, leaving the fractured shale oil-wet and saturated with oil and gas. What fluids are eventually produced from the fractured shale depends on the consequent structural and geochemical history. As long as the shale remains hot, oil production may be obtained (e.g. Bakken Shale, Green River Shale). If the shale is significantly cooled, mainly gas will be produced (e.g. Antrim Shale, Ohio Shale, New Albany Shale). Where secondary natural fracture systems are developed and connect the shale to aquifers or to surface recharge, the fractured shale will also produce water (e.g. Antrim Shale, Indiana New Albany Shale).

  18. North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends

    SciTech Connect

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun -Young; Lim, Young -Kwon; Prabhat, -

    2015-05-22

    This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated so more

  19. North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends

    DOE PAGES

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; et al

    2015-05-22

    This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated so

  20. Large-scale modeling provides insights into Arabidopsis's acclimation to changing light and temperature conditions.

    PubMed

    Töpfer, Nadine; Nikoloski, Zoran

    2013-09-01

    Classical flux balance analysis predicts steady-state flux distributions that maximize a given objective function. A recent study by Schuetz et al. (1) demonstrated that competing objectives constrain the metabolic fluxes in E. coli. For plants, with multiple cell types fulfilling different functions, the objectives remain elusive and, therefore, hinder the prediction of actual fluxes, particularly for changing environments. In our study, we presented a novel approach to predict flux capacities for a large collection of metabolic pathways under eight different temperature and light conditions (2). By integrating time-series transcriptomics data to constrain the flux boundaries of the metabolic model, we captured the time- and condition-specific state of the network. Although based on a single time-series experiment, the comparison of these capacities to a novel null model for transcript distribution allowed us to define a measure for differential behavior that accounts for the underlying network structure and the complex interplay of metabolic pathways.
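
    Flux balance analysis of the kind invoked above is a linear program: choose fluxes v to maximize an objective subject to the steady-state constraint S v = 0 and flux bounds, with transcript levels used in the study to tighten the bounds per light and temperature condition. The toy network below shows the mechanics with SciPy; the stoichiometry, bounds, and the way a hypothetical relative 'transcript level' scales one bound are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (metabolites A, B; reactions: uptake, A->B, A->secretion, B->demand)
S = np.array([[ 1.0, -1.0, -1.0,  0.0],   # metabolite A balance
              [ 0.0,  1.0,  0.0, -1.0]])  # metabolite B balance
objective = np.array([0.0, 0.0, 0.0, -1.0])  # maximize demand flux => minimize its negative

def fba_with_transcripts(transcript_level):
    """Transcript-constrained FBA sketch: the upper bound of the A->B enzyme
    reaction is scaled by a hypothetical relative transcript level in [0, 1]."""
    bounds = [(0.0, 10.0),                      # substrate uptake
              (0.0, 10.0 * transcript_level),   # enzymatic conversion A -> B
              (0.0, 10.0),                      # overflow secretion of A
              (0.0, 50.0)]                      # demand for B (the objective)
    res = linprog(objective, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    return res.x

for level in (1.0, 0.6, 0.1):   # e.g. stand-ins for different expression states
    v = fba_with_transcripts(level)
    print(f"transcript level {level:.1f} -> demand flux {v[3]:.1f}")
```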

  1. The Replication Domain Model: regulating replicon firing in the context of large-scale chromosome architecture

    PubMed Central

    Pope, Benjamin D.; Gilbert, David M.

    2013-01-01

    The “Replicon Theory” of Jacob, Brenner and Cuzin has reliably served as the paradigm for regulating the sites where individual replicons initiate replication. Concurrent with the replicon model was Taylor’s demonstration that plant and animal chromosomes replicate segmentally in a defined temporal sequence, via cytologically defined units too large to be accounted for by a single replicon. Instead, there seemed to be a program to choreograph when chromosome units replicate during S phase, executed by initiation at clusters of individual replicons within each segment. Here, we summarize recent molecular evidence for the existence of such units, now known as “replication domains”, and discuss how the organization of large chromosomes into structural units has added additional layers of regulation to the original replicon model. PMID:23603017

  2. Large-scale numerical modeling of hydro-acoustic waves generated by tsunamigenic earthquakes

    NASA Astrophysics Data System (ADS)

    Cecioni, C.; Abdolali, A.; Bellotti, G.; Sammarco, P.

    2015-03-01

    Tsunamigenic fast movements of the seabed generate pressure waves in weakly compressible seawater, namely hydro-acoustic waves, which travel at the sound celerity in water (about 1500 m s-1). These waves travel much faster than the counterpart long free-surface gravity waves and contain significant information on the source. Measurement of hydro-acoustic waves can therefore anticipate the tsunami arrival and significantly improve the capability of tsunami early warning systems. In this paper a novel numerical model for reproduction of hydro-acoustic waves is applied to analyze the generation and propagation in real bathymetry of these pressure perturbations for two historical catastrophic earthquake scenarios in Mediterranean Sea. The model is based on the solution of a depth-integrated equation, and therefore results are computationally efficient in reconstructing the hydro-acoustic waves propagation scenarios.

  3. Estimating extinction risk with metapopulation models of large-scale fragmentation.

    PubMed

    Schnell, Jessica K; Harris, Grant M; Pimm, Stuart L; Russell, Gareth J

    2013-06-01

    Habitat loss is the principal threat to species. How much habitat remains-and how quickly it is shrinking-are implicitly included in the way the International Union for Conservation of Nature determines a species' risk of extinction. Many endangered species have habitats that are also fragmented to different extents. Thus, ideally, fragmentation should be quantified in a standard way in risk assessments. Although mapping fragmentation from satellite imagery is easy, efficient techniques for relating maps of remaining habitat to extinction risk are few. Purely spatial metrics from landscape ecology are hard to interpret and do not address extinction directly. Spatially explicit metapopulation models link fragmentation to extinction risk, but standard models work only at small scales. Counterintuitively, these models predict that a species in a large, contiguous habitat will fare worse than one in 2 tiny patches. This occurs because although the species in the large, contiguous habitat has a low probability of extinction, recolonization cannot occur if there are no other patches to provide colonists for a rescue effect. For 4 ecologically comparable bird species of the North Central American highland forests, we devised metapopulation models with area-weighted self-colonization terms; this reflected repopulation of a patch from a remnant of individuals that survived an adverse event. Use of this term gives extra weight to a patch in its own rescue effect. Species assigned least risk status were comparable in long-term extinction risk with those ranked as threatened. This finding suggests that fragmentation has had a substantial negative effect on them that is not accounted for in their Red List category.
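
    The effect of an area-weighted self-colonization term can be sketched with a generic stochastic patch-occupancy simulation in which a patch's own area strengthens its rescue effect. The patch areas, rate constants, and functional forms below are illustrative assumptions for this sketch, not the calibrated model of the cited study.

    ```python
    # Illustrative stochastic patch-occupancy model in which a patch's own area
    # enters its rescue term (area-weighted self-colonization), so large patches
    # recover better from partial die-offs.  Areas, rates, and functional forms
    # are assumptions for this sketch, not the cited model.
    import numpy as np

    rng = np.random.default_rng(1)
    areas = np.array([80.0, 20.0])          # two patches (arbitrary units)
    e0, c0, w_self = 0.3, 0.005, 0.02       # extinction, colonization, self-weight

    def step(occ):
        new = occ.copy()
        for i in range(len(occ)):
            external = c0 * sum(areas[j] for j in range(len(occ))
                                if occ[j] and j != i)
            if occ[i]:
                # Rescue effect: other occupied patches plus the patch itself
                rescue = external + w_self * areas[i]
                if rng.random() < (e0 / np.sqrt(areas[i])) * np.exp(-rescue):
                    new[i] = False
            else:
                # Recolonization only from other occupied patches
                if rng.random() < 1.0 - np.exp(-external):
                    new[i] = True
        return new

    extinct, n_rep, n_steps = 0, 2000, 200
    for _ in range(n_rep):                      # Monte Carlo replicates
        occ = np.array([True, True])
        for _ in range(n_steps):
            occ = step(occ)
        extinct += not occ.any()
    print("estimated metapopulation extinction probability:", extinct / n_rep)
    ```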

  4. Study on data model of large-scale urban and rural integrated cadastre

    NASA Astrophysics Data System (ADS)

    Peng, Liangyong; Huang, Quanyi; Gao, Dequan

    2008-10-01

    Urban and Rural Integrated Cadastre (URIC) has been the subject of great interest in modern cadastre management. It is highly desirable to develop a rational data model for establishing an information system of URIC. In this paper, firstly, the old cadastral management mode in China was introduced, its limitations were analyzed, and the concept of URIC and its development course in China were described. Afterwards, based on the requirements of cadastre management in developed regions, the goal of URIC and two key ideas for realizing URIC were proposed. Then, the conceptual management mode was studied and a data model of URIC was designed. At last, based on the raw data of a land use survey at a scale of 1:1000 and a conventional urban cadastral survey at a scale of 1:500 in Jiangyin city, a well-defined information system of URIC was established according to the data model, and a uniform management of land use, use rights, and land ownership in urban and rural areas was successfully realized. Its feasibility and practicability were well proved.

  5. Transforming GIS data into functional road models for large-scale traffic simulation.

    PubMed

    Wilkie, David; Sewall, Jason; Lin, Ming C

    2012-06-01

    There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization, tools which will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodologies. We test the 3D models of road networks generated by our algorithm on real-time traffic simulation using both macroscopic and microscopic techniques. PMID:21690653

  6. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    SciTech Connect

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I.; Winey, J. Michael; Gupta, Yogendra Mohan; Lane, J. Matthew D.; Ditmire, Todd; Quevedo, Hernan J.

    2011-10-01

    Molecular dynamics simulation (MD) is an invaluable tool for studying problems sensitive to atomic-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length- and time-scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp compression experiments using fast ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  7. User Friendly Open GIS Tool for Large Scale Data Assimilation - a Case Study of Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Gupta, P. K.

    2012-08-01

    Open source software (OSS) coding has tremendous advantages over proprietary software. These are primarily fuelled by high-level programming languages (JAVA, C++, Python, etc.) and open source geospatial libraries (GDAL/OGR, GEOS, GeoTools, etc.). Quantum GIS (QGIS) is a popular open source GIS package, which is licensed under the GNU GPL and is written in C++. It allows users to perform specialised tasks by creating plugins in C++ and Python. This research article emphasises exploiting this capability of QGIS to build and implement plugins across multiple platforms using the easy-to-learn Python programming language. In the present study, a tool has been developed to assimilate large spatio-temporal datasets such as national-level gridded rainfall, temperature, topographic (digital elevation model, slope, aspect), landuse/landcover and multi-layer soil data for input into hydrological models. At present this tool has been developed for the Indian sub-continent. An attempt is also made to use popular scientific and numerical libraries to create custom applications for digital inclusion. In hydrological modelling, calibration and validation are important steps that are carried out repeatedly for the same study region. As such, the developed tool is user friendly and can be used efficiently for these repetitive processes by reducing the time required for data management and handling. Moreover, it was found that the developed tool can easily assimilate large datasets in an organised manner.
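
    For readers unfamiliar with how QGIS exposes Python plugins, a bare-bones skeleton of the two entry points QGIS expects is sketched below. It assumes the QGIS 3 Python API; the plugin class name and message text are hypothetical, and the actual assimilation tool described here would add its own dialogs and processing logic.

    ```python
    # Minimal QGIS 3 Python plugin skeleton (hypothetical plugin name).
    # A real plugin also needs a metadata.txt file describing it to QGIS.
    from qgis.PyQt.QtWidgets import QAction


    class MinimalPlugin:
        def __init__(self, iface):
            self.iface = iface          # reference to the QGIS interface
            self.action = None

        def initGui(self):
            # Called by QGIS when the plugin is loaded: add a toolbar button
            self.action = QAction("Run data assimilation", self.iface.mainWindow())
            self.action.triggered.connect(self.run)
            self.iface.addToolBarIcon(self.action)

        def unload(self):
            # Called by QGIS when the plugin is unloaded: clean up the GUI
            self.iface.removeToolBarIcon(self.action)

        def run(self):
            # Placeholder for the actual data preparation / assimilation steps
            self.iface.messageBar().pushMessage("Demo", "Plugin invoked")


    def classFactory(iface):
        # Entry point QGIS calls from the plugin package's __init__.py
        return MinimalPlugin(iface)
    ```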

  8. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale simulations

    NASA Astrophysics Data System (ADS)

    Heng, Yi; Hoffmann, Lars; Griessbach, Sabine; Rößler, Thomas; Stein, Olaf

    2016-05-01

    An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., unit simulations for the reconstruction of volcanic emissions and final forward simulations. Both types of transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric InfraRed Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final forward simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. By using the critical success index (CSI), the simulation results are evaluated with the AIRS observations. Compared to the results with an assumption of a constant flux of SO2 emissions, our inversion approach leads to an improvement
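
    The sequential importance resampling idea behind the inversion can be sketched independently of MPTRAC: candidate emission "particles" are weighted by how well their predicted signal matches an observation and then resampled in proportion to those weights. Everything below (the toy linear forward operator, the synthetic observation, the noise level) is an illustrative assumption, not the cited system.

    ```python
    # Toy sequential importance resampling (SIR) step for emission estimation.
    # The linear "forward model", true emissions, and noise are invented here;
    # the cited system uses MPTRAC transport simulations instead.
    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_alt = 500, 5           # ensemble size, altitude bins

    # Hypothetical forward operator mapping emissions per altitude bin to an
    # observed column value (stand-in for a set of unit transport simulations).
    H = rng.uniform(0.5, 1.5, size=n_alt)
    true_emissions = np.array([0.0, 2.0, 5.0, 1.0, 0.0])
    obs = H @ true_emissions + rng.normal(0.0, 0.2)
    obs_sigma = 0.5

    # Prior ensemble of candidate emission profiles
    particles = rng.uniform(0.0, 6.0, size=(n_particles, n_alt))

    # Importance weights from a Gaussian observation likelihood
    residual = particles @ H - obs
    weights = np.exp(-0.5 * (residual / obs_sigma) ** 2)
    weights /= weights.sum()

    # Resample in proportion to the weights
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    posterior = particles[idx]
    print("posterior mean emissions:", posterior.mean(axis=0).round(2))
    print("true emissions:          ", true_emissions)
    ```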

  9. Large-scale parallel lattice Boltzmann-cellular automaton model of two-dimensional dendritic growth

    NASA Astrophysics Data System (ADS)

    Jelinek, Bohumir; Eshraghi, Mohsen; Felicelli, Sergio; Peters, John F.

    2014-03-01

    An extremely scalable lattice Boltzmann (LB)-cellular automaton (CA) model for simulations of two-dimensional (2D) dendritic solidification under forced convection is presented. The model incorporates effects of phase change, solute diffusion, melt convection, and heat transport. The LB model represents the diffusion, convection, and heat transfer phenomena. The dendrite growth is driven by a difference between actual and equilibrium liquid composition at the solid-liquid interface. The CA technique is deployed to track the new interface cells. The computer program was parallelized using the Message Passing Interface (MPI) technique. Parallel scaling of the algorithm was studied and major scalability bottlenecks were identified. Efficiency loss attributable to the high memory bandwidth requirement of the algorithm was observed when using multiple cores per processor. Parallel writing of the output variables of interest was implemented in the binary Hierarchical Data Format 5 (HDF5) to improve the output performance, and to simplify visualization. Calculations were carried out in single precision arithmetic without significant loss in accuracy, resulting in 50% reduction of memory and computational time requirements. The presented solidification model shows very good scalability up to centimeter-size domains, including more than ten million dendrites. Catalogue identifier: AEQZ_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQZ_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, UK Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 29,767 No. of bytes in distributed program, including test data, etc.: 3,131,367 Distribution format: tar.gz Programming language: Fortran 90. Computer: Linux PC and clusters. Operating system: Linux. Has the code been vectorized or parallelized?: Yes. Program is parallelized using MPI

  10. The heating of diffuse dust at large scale in AGNs: a radiative transfer model study

    NASA Astrophysics Data System (ADS)

    Fritz, Jacopo; De Looze, Ilse; Baes, Maarten; Camps, Peter; Saftly, Waad; Pérez Villegas, Angeles; Rivaz-Sánchez, Mariana; Stalevski, Marko; Hatziminaoglou, Evanthia

    2016-08-01

    The panchromatic, broad-band, spectral energy distribution (SED) of galaxies is usually modelled by combining the theoretical spectra of its emission components: stars in the optical/near-infrared, and thermal emission by dust (heated by the stellar radiation field) in the infrared. SED fitting codes such as MAGPHYS and CIGALE are capable of automatically fitting observed multiwavelength data of galaxies, providing a set of galactic properties as a result. The situation becomes somewhat more complicated when active galaxies (both local, low-luminosity Seyferts and the bright QSOs) are considered. Very often, in fact, their observed near- and mid-infrared (NIR and MIR, respectively) SED is dominated by the emission of hot dust located close to the supermassive, active black hole which powers the bulk of their luminosity. Hence, a third component must be added to the set of theoretical SEDs: that of the molecular torus which surrounds the disk of gas accreting onto the supermassive black hole. The standard way to do this is to simply add such models to the observed SED until the MIR gap is filled. This implicitly assumes that the AGN has no influence whatsoever on the dust properties on scales larger than that of the torus (~few pc). I am investigating whether this assumption is valid, in which cases, and under which circumstances the AGN provides a non-negligible contribution to the interstellar radiation field heating the diffuse dust in galaxies. This is accomplished by means of radiative transfer models which take into account the most relevant characteristics of the problem: the relative dust-stars distribution and the very wide range of spatial scales involved.

  11. Large-Scale Physical Modelling of Complex Tsunami-Generated Currents

    NASA Astrophysics Data System (ADS)

    Lynett, P. J.; Kalligeris, N.; Ayca, A.

    2014-12-01

    For tsunamis passing through sharp bathymetric variability, such as a shoal or a harbor entrance channel, z-axis vortical motions are created. These structures are often characterized by a horizontal length scale that is much greater than the local depth and are herein called shallow turbulent coherent structures (TCS). These shallow TCS can greatly increase the drag force on affected infrastructure and the ability of the flow to transport debris and floating objects. Shallow TCS typically manifest as large "whirlpools" during tsunamis, very commonly in ports and harbors. Such structures have been observed numerous times in tsunamis over the past decade, and are postulated as the cause of large vessels parting their mooring lines due to yaw induced by the rotational eddy. Through the NSF NEES program, a laboratory study to examine a shallow TCS was performed during the summer of 2014. To generate this phenomenon, a 60 second period long wave was created and then interacted with a breakwater in the basin, forcing the generation of a large and stable TCS. The model scale is 1:30, equating to a 5.5 minute period and 0.5 m amplitude at the prototype scale. Surface tracers, dye studies, ADVs, wave gages, and bottom pressure sensors are used to characterize the flow. Complex patterns of surface convergence and divergence are easily seen in the data, indicating three-dimensional flow patterns. Dye studies show areas of relatively high and low spatial mixing. Model vessels are placed in the basin such that ship motion in the presence of these rapidly varying currents might be captured. The data obtained from this laboratory study should permit a better physical understanding of the nearshore currents that tsunamis are known to generate, as well as provide a benchmark for numerical modelers who wish to simulate currents.
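
    The stated 1:30 scale follows Froude similitude, under which lengths scale by the ratio λ while times and velocities scale by √λ. A quick check of the quoted numbers, with the conversion written out (the amplitude split between model and prototype is inferred from the abstract):

    ```python
    # Froude scaling check for the 1:30 physical model quoted above.
    import math

    lam = 30.0                       # prototype-to-model length ratio

    model_period = 60.0              # s, long-wave period in the basin
    proto_period = model_period * math.sqrt(lam)
    print(f"prototype period: {proto_period:.0f} s (~{proto_period/60:.1f} min)")

    proto_amplitude = 0.5            # m, quoted prototype amplitude
    model_amplitude = proto_amplitude / lam
    print(f"model amplitude:  {model_amplitude*100:.1f} cm")
    ```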

  12. Cosmic microwave background and large-scale structure constraints on a simple quintessential inflation model

    SciTech Connect

    Rosenfeld, Rogerio; Frieman, Joshua A.; /Fermilab /Chicago U., Astron. Astrophys. Ctr.

    2006-11-01

    We derive constraints on a simple quintessential inflation model, based on a spontaneously broken Φ^4 theory, imposed by the Wilkinson Microwave Anisotropy Probe three-year data (WMAP3) and by galaxy clustering results from the Sloan Digital Sky Survey (SDSS). We find that the scale of symmetry breaking must be larger than about 3 Planck masses in order for inflation to generate acceptable values of the scalar spectral index and of the tensor-to-scalar ratio. We also show that the resulting quintessence equation-of-state can evolve rapidly at recent times and hence can potentially be distinguished from a simple cosmological constant in this parameter regime.

  13. Examining tissue differentiation stability through large scale, multi-cellular pathway modeling.

    SciTech Connect

    May, Elebeoba Eni; Schiek, Richard Louis

    2005-03-01

    Using a multi-cellular, pathway model approach, we investigate the Drosophila sp. segmental differentiation network's stability as a function of initial conditions. While this network's functionality has been investigated in the absence of noise, this is the first work to specifically investigate how natural systems respond to random errors or noise. Our findings agree with earlier results that the overall network is robust in the absence of noise. However, when one includes random initial perturbations in intracellular protein WG levels, the robustness of the system decreases dramatically. The effect of noise on the system is not linear, and appears to level out at high noise levels.

  14. Characterizing and modeling the efficiency limits in large-scale production of hyperpolarized Xe129

    NASA Astrophysics Data System (ADS)

    Freeman, M. S.; Emami, K.; Driehuys, B.

    2014-08-01

    The ability to produce liter volumes of highly-spin-polarized Xe129 enables a wide range of investigations, most notably in the fields of materials science and biomedical magnetic resonance imaging. However, for nearly all polarizers built to date, both peak Xe129 polarization and the rate at which it is produced fall far below those predicted by the standard model of Rb metal vapor, spin-exchange optical pumping (SEOP). In this work we comprehensively characterized a high-volume flow-through Xe129 polarizer using three different SEOP cells with internal volumes of 100, 200, and 300cm3 and two types of optical sources: a broad-spectrum 111-W laser [full width at half maximum (FWHM) equal to 1.92 nm] and a line-narrowed 71-W laser (FWHM equal to 0.39 nm). By measuring Xe129 polarization as a function of gas flow rate, we extracted the peak polarization and polarization production rate across a wide range of laser absorption levels. Peak polarization for all cells consistently remained a factor of 2-3 times lower than predicted at all absorption levels. Moreover, although production rates increased with laser absorption, they did so much more slowly than predicted by the standard theoretical model and basic spin-exchange efficiency arguments. Underperformance was most notable in the smallest optical cells. We propose that all these systematic deviations from theory can be explained by invoking the presence of paramagnetic Rb clusters within the vapor. Cluster formation within saturated alkali-metal vapors is well established and their interaction with resonant laser light was recently shown to create plasmalike conditions. Such cluster systems cause both Rb and Xe129 depolarization, as well as excess photon scattering. These effects were incorporated into the SEOP model by assuming that clusters are activated in proportion to excited-state Rb number density and by further estimating physically reasonable values for the nanocluster-induced velocity-averaged spin

  15. Stochastic and recursive calibration for operational, large-scale, agricultural land and water use management models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Kimball, J. S.; Jencso, K. G.

    2015-12-01

    Managing the impact of climatic cycles on agricultural production, on land allocation, and on the state of active and projected water sources is challenging. This is because in addition to the uncertainties associated with climate projections, it is difficult to anticipate how farmers will respond to climatic change or to economic and policy incentives. Some sophisticated decision support systems available to water managers consider farmers' adaptive behavior but they are data intensive and difficult to apply operationally over large regions. Satellite-based observational technologies, in conjunction with models and assimilation methods, create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents at seasonal scales. We present an integrated modeling framework that can be driven by satellite remote sensing to enable robust regional assessment and prediction of climatic and policy impacts on agricultural production, water resources, and management decisions. The core of this framework is a widely used model of agricultural production and resource allocation adapted to be used in conjunction with remote sensing inputs to quantify the amount of land and water farmers allocate for each crop they choose to grow on a seasonal basis in response to reduced or enhanced access to water due to climatic or policy restrictions. A recursive Bayesian update method is used to adjust the model parameters by assimilating information on crop acreage, production, and crop evapotranspiration as a proxy for water use that can be estimated from high spatial resolution satellite remote sensing. The data assimilation framework blends new and old information to avoid over-calibration to the specific conditions of a single year and permits the updating of parameters to track gradual changes in the agricultural system. This integrated framework provides an operational means of monitoring and forecasting what crops will be grown
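
    The recursive Bayesian update that blends new and old information can be illustrated with the scalar textbook case: a Gaussian prior on a parameter is combined with each season's noisy observation, and the posterior becomes the next season's prior. The numbers, the identity observation model, and the drift term below are illustrative only, not the cited framework.

    ```python
    # Scalar recursive Bayesian (Kalman-style) update: each new observation is
    # blended with the current estimate instead of re-calibrating from scratch.
    # Values and the identity observation model are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(3)
    true_param = 0.7                 # e.g., a crop water-use coefficient
    obs_var = 0.05 ** 2              # variance of each seasonal observation

    mean, var = 0.5, 0.2 ** 2        # prior belief about the parameter
    process_var = 0.01 ** 2          # allows slow drift, avoids over-calibration

    for season in range(1, 9):
        var += process_var                        # inflate: parameter may drift
        obs = true_param + rng.normal(0.0, np.sqrt(obs_var))
        gain = var / (var + obs_var)              # weight on the new information
        mean = mean + gain * (obs - mean)         # blended posterior mean
        var = (1.0 - gain) * var                  # posterior variance
        print(f"season {season}: estimate = {mean:.3f} +/- {np.sqrt(var):.3f}")
    ```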

  16. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    NASA Astrophysics Data System (ADS)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are mostly sensitive to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, in Cultural Heritage research quite different actors are involved (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed based on a spatial-temporal dependent aggregation of 3D digital models, by incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at the next time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, predictive assessment can be made, that is, surfaces within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances can be localized. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  17. A realistic molecular model of cement hydrates

    PubMed Central

    Pellenq, Roland J.-M.; Kushima, Akihiro; Shahsavari, Rouzbeh; Van Vliet, Krystyn J.; Buehler, Markus J.; Yip, Sidney; Ulm, Franz-Josef

    2009-01-01

    Despite decades of studies of calcium-silicate-hydrate (C-S-H), the structurally complex binder phase of concrete, the interplay between chemical composition and density remains essentially unexplored. Together these characteristics of C-S-H define and modulate the physical and mechanical properties of this “liquid stone” gel phase. With the recent determination of the calcium/silicon (C/S = 1.7) ratio and the density of the C-S-H particle (2.6 g/cm3) by neutron scattering measurements, there is new urgency to the challenge of explaining these essential properties. Here we propose a molecular model of C-S-H based on a bottom-up atomistic simulation approach that considers only the chemical specificity of the system as the overriding constraint. By allowing for short silica chains distributed as monomers, dimers, and pentamers, this C-S-H archetype of a molecular description of interacting CaO, SiO2, and H2O units provides not only realistic values of the C/S ratio and the density computed by grand canonical Monte Carlo simulation of water adsorption at 300 K. The model, with a chemical composition of (CaO)1.65(SiO2)(H2O)1.75, also predicts other essential structural features and fundamental physical properties amenable to experimental validation, which suggest that the C-S-H gel structure includes both glass-like short-range order and crystalline features of the mineral tobermorite. Additionally, we probe the mechanical stiffness, strength, and hydrolytic shear response of our molecular model, as compared to experimentally measured properties of C-S-H. The latter results illustrate the prospect of treating cement on equal footing with metals and ceramics in the current application of mechanism-based models and multiscale simulations to study inelastic deformation and cracking. PMID:19805265

  18. Petascale resources and CP2K: enabling sampling, large scale models or correlation beyond DFT

    NASA Astrophysics Data System (ADS)

    Vandevondele, Joost

    2014-03-01

    Already with modest computer resources, GGA DFT simulations of models containing a few hundred atoms can contribute greatly to chemistry, physics and materials science. With the advent of petascale resources, new length, time and accuracy scales can be explored. Recently, we have made progress in all three directions: (1) A novel Tree Monte Carlo (TMC) algorithm introduces a further level of parallelism and allows for generating long Markov chains. Sampling 100'000s of configurations with DFT, the dielectric constant and order-disorder transition in water ice Ih/XI has been studied. (2) The removal of all non-linear scaling steps from GGA DFT calculations and the development of a massively parallel GPU-accelerated sparse matrix library make structural relaxation and MD possible for systems containing 10'000s of atoms. (3) A well parallelized implementation of a novel algorithm to compute four-center integrals over molecular states (RI-GPW) allows for many-body perturbation theory (MP2, RPA) calculations on a few hundred atoms. Sampling liquid water at the MP2 level yields a very satisfying model of liquid water, without empirical parameters.

  19. Thermal Reactor Model for Large-Scale Algae Cultivation in Vertical Flat Panel Photobioreactors.

    PubMed

    Endres, Christian H; Roth, Arne; Brück, Thomas B

    2016-04-01

    Microalgae can grow significantly faster than terrestrial plants and are a promising feedstock for sustainable value added products encompassing pharmaceuticals, pigments, proteins and most prominently biofuels. As the biomass productivity of microalgae strongly depends on the cultivation temperature, detailed information on the reactor temperature as a function of time and geographical location is essential to evaluate the true potential of microalgae as an industrial feedstock. In the present study, a temperature model for an array of vertical flat plate photobioreactors is presented. It was demonstrated that mutual shading of reactor panels has a decisive effect on the reactor temperature. By optimizing distance and thickness of the panels, the occurrence of extreme temperatures and the amplitude of daily temperature fluctuations in the culture medium can be drastically reduced, while maintaining a high level of irradiation on the panels. The presented model was developed and applied to analyze the suitability of various climate zones for algae production in flat panel photobioreactors. Our results demonstrate that in particular Mediterranean and tropical climates represent favorable locations. Lastly, the thermal energy demand required for the case of active temperature control is determined for several locations. PMID:26950078

  20. Pangolin v1.0, a conservative 2-D advection model towards large-scale parallel calculation

    NASA Astrophysics Data System (ADS)

    Praga, A.; Cariolle, D.; Giraud, L.

    2015-02-01

    To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric advection model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach for advection was chosen to ensure mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude-longitude grid, Pangolin uses a quasi-area-preserving reduced latitude-longitude grid. The features of the regular grid are exploited to reduce the memory footprint and enable effective parallel performances. In addition, a custom domain decomposition algorithm is presented. To assess the validity of the advection scheme, its results are compared with state-of-the-art models on algebraic test cases. Finally, parallel performances are shown in terms of strong scaling and confirm the efficient scalability up to a few hundred cores.
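
    The mass-preservation property of a finite-volume advection scheme is easy to demonstrate in one dimension: fluxes are evaluated at cell faces, so whatever leaves one cell enters its neighbour and the domain total is conserved to round-off. The periodic 1-D upwind sketch below illustrates that property only; it is not the Pangolin discretization or grid.

    ```python
    # 1-D periodic finite-volume advection with a first-order upwind flux.
    # Illustrates mass conservation; not the Pangolin scheme itself.
    import numpy as np

    n, u = 200, 1.0                         # number of cells, wind speed
    dx = 1.0 / n                            # cell width
    dt = 0.4 * dx / u                       # CFL-limited time step
    x = (np.arange(n) + 0.5) * dx
    q = np.exp(-200.0 * (x - 0.3) ** 2)     # initial tracer distribution

    mass0 = q.sum() * dx
    for _ in range(500):
        flux = u * q                        # upwind flux at each right face (u > 0)
        q = q - dt / dx * (flux - np.roll(flux, 1))
    print("relative mass change:", abs(q.sum() * dx - mass0) / mass0)
    ```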

  1. High Resolution WRF Modeling of the Western USA: Comparisons with Observations and large scale Gridded Data

    NASA Astrophysics Data System (ADS)

    Lebassi-Habtezion, B.; Diffenbaugh, N. S.

    2011-12-01

    Meso- and micro-scale atmospheric features are often not captured in GCMs due to the coarse model resolution. These features could be very important in modifying the regional- and local-scale climate. For example, sea breezes, urbanization, irrigation, and mountain/valley circulations can modify the local climate and potentially upscale to larger scales. In this study we evaluate the mesoscale Weather Research and Forecasting (WRF) Model against station observations, gridded observations, and reanalysis data over the western states of the USA. Simulations are compared for summer (JJA) 2010 at resolutions of 4, 25 and 50 km, with each grid covering the entire Western USA. Observations of July surface temperature, relative humidity, and wind speed and direction are compared with model results at the three resolutions. Results showed that 4 km WRF most closely matched point observations of the daytime 10 m wind speeds and direction, while 50 km WRF showed the largest biases. However, 4 km WRF showed larger daytime surface temperature and humidity biases, while agreement with observed nighttime temperature and humidity was generally high for all resolutions. Comparisons of 4 km WRF and 4 km gridded PRISM data showed a warm bias in the Central Valley of California and the southern part of the Western USA domain. These biases were small in June and larger in July and August, and are associated with a deficit of moisture from irrigation in the Central Valley and a deficit of monsoon rainfall in the southern domain. Finally, comparisons between 4 km WRF forced by global (NCEP) and regional (NARR) reanalyses were undertaken. Results showed warm biases in coastal California when 4 km WRF was nested within the global reanalysis, and these coastal biases did not occur when 4 km WRF was nested within the regional reanalysis. These results will be used in evaluations of the need for high resolution non-hydrostatic WRF and its performance against observations. It will also be used for quantifying

  2. A simple model for large-scale simulations of fcc metals with explicit treatment of electrons

    NASA Astrophysics Data System (ADS)

    Mason, D. R.; Foulkes, W. M. C.; Sutton, A. P.

    2010-01-01

    The continuing advance in computational power is beginning to make accurate electronic structure calculations routine. Yet, where physics emerges through the dynamics of tens of thousands of atoms in metals, simplifications must be made to the electronic Hamiltonian. We present the simplest extension to a single s-band model [A.P. Sutton, T.N. Todorov, M.J. Cawkwell and J. Hoekstra, Phil. Mag. A 81 (2001) p.1833.] of metallic bonding, namely, the addition of a second s-band. We show that this addition yields a reasonable description of the density of states at the Fermi level, the cohesive energy, formation energies of point defects and elastic constants of some face-centred cubic (fcc) metals.

  3. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    SciTech Connect

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve on the algorithms so that the ever increasing problem sizes required by the physics application can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties in the previous methods, with the goal of enabling accelerator simulations on what was then the world's largest unclassified supercomputer at NERSC for this class of problems. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
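
    For reference, the Hermitian/skew-Hermitian splitting (HSS) iteration alternates two shifted solves, one with the Hermitian part H = (A + A*)/2 and one with the skew-Hermitian part S = (A - A*)/2. A dense toy version is sketched below with a random diagonally dominant test matrix and an untuned shift; production accelerator codes would use sparse matrices, inner iterative solves, and preconditioning instead.

    ```python
    # Toy dense Hermitian/skew-Hermitian splitting (HSS) iteration for Ax = b
    # with a non-Hermitian A whose Hermitian part is positive definite.
    # Random test problem; not the accelerator-modeling implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = 10.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    H = 0.5 * (A + A.conj().T)          # Hermitian part
    S = 0.5 * (A - A.conj().T)          # skew-Hermitian part
    alpha = 8.0                          # shift parameter (roughly tuned by eye)
    I = np.eye(n)

    x = np.zeros(n)
    for k in range(200):
        # Alternate the two shifted solves of the HSS iteration
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) < 1e-10 * np.linalg.norm(b):
            break
    print("HSS iterations:", k + 1, "relative residual:",
          np.linalg.norm(A @ x - b) / np.linalg.norm(b))
    ```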

  4. Measurement, Modeling, and Analysis of a Large-scale Blog Server Workload

    SciTech Connect

    Jeon, Myeongjae; Hwang, Jeaho; Kim, Youngjae; Jae-Wan, Jang; Lee, Joonwon; Seo, Euiseong

    2010-01-01

    Despite the growing popularity of Online Social Networks (OSNs), the workload characteristics of OSN servers, such as those hosting blog services, are not well understood. Understanding workload characteristics is important for optimizing and improving the performance of current systems and software based on observed trends. Thus, in this paper, we characterize the system workload of the largest blog hosting servers in South Korea, Tistory. In addition to understanding the system workload of the blog hosting server, we have developed synthesized workloads and obtained the following major findings: (i) the transfer size of non-multimedia files and blog articles can be modeled by a truncated Pareto distribution and a log-normal distribution, respectively, and (ii) users' accesses to blog articles do not show temporal locality, but they are strongly biased toward those posted along with images or audio.
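
    The two fitted transfer-size distributions are straightforward to sample when generating a synthetic workload: inverse-transform sampling works for a truncated Pareto, and NumPy samples a log-normal directly. The shape, bound, and log-normal parameters below are placeholders, not the values fitted to the Tistory traces.

    ```python
    # Sampling synthetic transfer sizes from a truncated Pareto (non-multimedia
    # files) and a log-normal (blog articles).  Parameter values are placeholders,
    # not those fitted to the cited workload.
    import numpy as np

    rng = np.random.default_rng(42)

    def truncated_pareto(n, alpha, lo, hi):
        """Inverse-transform sampling of a Pareto(alpha) truncated to [lo, hi]."""
        u = rng.random(n)
        # CDF of the truncated Pareto inverted analytically
        return (lo ** -alpha - u * (lo ** -alpha - hi ** -alpha)) ** (-1.0 / alpha)

    file_sizes = truncated_pareto(100_000, alpha=1.2, lo=1e3, hi=1e7)     # bytes
    article_sizes = rng.lognormal(mean=8.0, sigma=1.0, size=100_000)      # bytes

    print("file sizes   : median %.0f B, max %.0f B" %
          (np.median(file_sizes), file_sizes.max()))
    print("article sizes: median %.0f B, p99 %.0f B" %
          (np.median(article_sizes), np.percentile(article_sizes, 99)))
    ```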

  5. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    SciTech Connect

    Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin; van Bloemen Waanders, Bart Gustaaf; Fike, Jeffrey A.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as a part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier
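
    The basic POD/Galerkin ingredients, snapshots, an SVD-derived basis, and projection of the state onto a handful of modes, can be sketched compactly. The snapshot data below are synthetic, and the sketch deliberately omits the stability-preserving inner products that are the actual subject of the report.

    ```python
    # Proper orthogonal decomposition (POD) of a synthetic snapshot matrix and
    # projection onto the leading modes.  Snapshot data are synthetic; the
    # energy-stable inner products discussed in the report are not included.
    import numpy as np

    # Synthetic snapshots: decaying travelling pulses sampled on 200 grid points
    x = np.linspace(0.0, 1.0, 200)
    times = np.linspace(0.0, 1.0, 60)
    snapshots = np.column_stack(
        [np.exp(-100.0 * (x - 0.2 - 0.5 * t) ** 2) * np.exp(-t) for t in times])

    # POD basis from the thin SVD of the snapshot matrix
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 0.999)) + 1      # modes for 99.9% of energy
    basis = U[:, :r]

    # Reduced coordinates and reconstruction error for the last snapshot
    q = basis.T @ snapshots[:, -1]
    err = (np.linalg.norm(basis @ q - snapshots[:, -1])
           / np.linalg.norm(snapshots[:, -1]))
    print(f"retained modes: {r}, reconstruction error: {err:.2e}")
    ```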

  6. Doming at large scale on Europa: a model of formation of Thera Macula

    NASA Astrophysics Data System (ADS)

    Mével, L.; Tobie, G.; Mercier, E.; Sotin, C.

    2003-04-01

    Thera Macula is an approximately 140 by 80 km elliptical feature of the southern hemisphere of Europa. Our morphological analysis shows two types of terrains. The north-west part is weakly disturbed and only some cuesta-like structures are recognized. Nevertheless, the south-east part looks like a chaotic area similar to Conamara Chaos, with ice overflowing on the southern margin. The chaotic terrains have a lower elevation than the weakly disturbed terrains. Both units are separated by a steep scarp cutting across the middle of Thera Macula. This dichotomy may reflect the processes by which Thera was built. Detailed observation of the chaotic area reveals the presence of small sinuous scarps bounding terraces lying at different elevations. We have calculated the cumulative height along a N-S profile and deduced a mean regional slope ranging from 0.2% to 0.8% along the entire profile. On the basis of these morphological arguments, we propose an original model for the emplacement of Thera Macula. The rise of ductile or liquid material beneath an inclined brittle icy crust may induce vertical uplift, doming, and a median fracture. Then, the soft material may overflow along the regional slope and the dome may collapse as the reservoir empties out. In order to constrain this emplacement model, we are currently performing numerical experiments of thermal convection for a fluid with a strongly temperature-dependent viscosity, including tidal heating and damage rheology. Preliminary results suggest that, although a thick stagnant lid forms at the top of a convective ice layer, damaged icy material in this rigid lid permits the rise of warm ductile ice to shallow depth. This could explain both the doming and the softening of the crustal material.

  7. Large-scale Environmental Variables and Transition to Deep Convection in Cloud Resolving Model Simulations: A Vector Representation

    SciTech Connect

    Hagos, Samson M.; Leung, Lai-Yung R.

    2012-11-01

    Cloud resolving model simulations and vector analysis are used to develop a quantitative method of assessing regional variations in the relationships between various large-scale environmental variables and the transition to deep convection. Results of the CRM simulations from three tropical regions are used to cluster environmental conditions under which transition to deep convection does and does not take place. Projections of the large-scale environmental variables on the difference between these two clusters are used to quantify the roles of these variables in the transition to deep convection. While the transition to deep convection is most sensitive to moisture and vertical velocity perturbations, the details of the profiles of the anomalies vary from region to region. In comparison, the transition to deep convection is found to be much less sensitive to temperature anomalies over all three regions. The vector formulation presented in this study represents a simple general framework for quantifying various aspects of how the transition to deep convection is sensitive to environmental conditions.
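
    One way to read the vector formulation is as projecting an environmental anomaly profile onto the normalized difference between the mean "transitioning" and "non-transitioning" profiles, so that the projection coefficient measures how strongly that variable discriminates the two regimes. The profiles below are synthetic stand-ins, not CRM output, and the interpretation is a simplified reading of the method.

    ```python
    # Projection of environmental profiles onto the difference between the mean
    # "transition" and "no transition" clusters.  Profiles are synthetic
    # stand-ins for the CRM output used in the cited study.
    import numpy as np

    rng = np.random.default_rng(7)
    levels = 40                                   # vertical levels

    # Synthetic moisture-anomaly profiles for the two clusters
    transition = rng.normal(1.0, 0.3, size=(100, levels))      # deep convection
    no_transition = rng.normal(0.2, 0.3, size=(150, levels))   # suppressed

    # Unit vector pointing from the "no transition" mean to the "transition" mean
    diff = transition.mean(axis=0) - no_transition.mean(axis=0)
    unit = diff / np.linalg.norm(diff)

    # Projecting a new environmental profile quantifies its transition tendency
    new_profile = rng.normal(0.8, 0.3, size=levels)
    score = float(new_profile @ unit)
    print(f"projection onto transition direction: {score:.2f}")
    ```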

  8. North American extreme temperature events and related large scale meteorological patterns: a review of statistical methods, dynamics, modeling, and trends

    NASA Astrophysics Data System (ADS)

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Mike; Gershunov, Alexander; Gutowski, William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat

    2016-02-01

    The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme events statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of: the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.

  9. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    NASA Astrophysics Data System (ADS)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined to perform the task of source characterization and forecasting. These components include Uncertainty Quantification, Optimal Information Collection, and Data Assimilation. Precise approximation of prior statistics is crucial to ensure performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized for quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to guarantee accurate source parameter estimation. The performance of source characterization is highly affected by the sensor configuration applied for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors to improve performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that measurement of better data is guaranteed. This is achieved by maximizing the mutual information between model predictions

  10. The periglacial engine of mountain erosion - Part 2: Modelling large-scale landscape evolution

    NASA Astrophysics Data System (ADS)

    Egholm, D. L.; Andersen, J. L.; Knudsen, M. F.; Jansen, J. D.; Nielsen, S. B.

    2015-10-01

    There is growing recognition of strong periglacial control on bedrock erosion in mountain landscapes, including the shaping of low-relief surfaces at high elevations (summit flats). But, as yet, the hypothesis that frost action was crucial to the assumed Late Cenozoic rise in erosion rates remains compelling and untested. Here we present a landscape evolution model incorporating two key periglacial processes - regolith production via frost cracking and sediment transport via frost creep - which together are harnessed to variations in temperature and the evolving thickness of sediment cover. Our computational experiments time-integrate the contribution of frost action to shaping mountain topography over million-year timescales, with the primary and highly reproducible outcome being the development of flattish or gently convex summit flats. A simple scaling of temperature to marine δ18O records spanning the past 14 Myr indicates that the highest summit flats in mid- to high-latitude mountains may have formed via frost action prior to the Quaternary. We suggest that deep cooling in the Quaternary accelerated mechanical weathering globally by significantly expanding the area subject to frost. Further, the inclusion of subglacial erosion alongside periglacial processes in our computational experiments points to alpine glaciers increasing the long-term efficiency of frost-driven erosion by steepening hillslopes.

  11. A model for large-scale plastic yield of the Gorda deformation zone

    SciTech Connect

    Denlinger, R.P.

    1992-10-01

    A solution satisfying both continuity and force balance for an elastoplastic Gorda plate in planar coordinates is presented. Continuity on a plane is used to approximate continuity on a spherical surface due to the small area under consideration. The zone of plastic yield vs the seismicity does not change much with fault strength along the Mendocino. Due to the nature of the deformation, the direction of maximum shear stress near the Mendocino triple junction is between 40 and 50 deg to the Mendocino transform in both cases, but curves sharply in the neighborhood of the transform if the fault is strong. It is concluded that the strength of the Mendocino relative to the lithosphere varied over time. Five million years ago a change in pole position increased convergence of the Blanco fracture zone and Mendocino transform, exponentially increasing brittle shear stresses across the fault. Between 2.47 Ma and 1.8 Ma the convergence stabilized, and the resistance to sliding along the transform decayed back to residual levels. The relative slip along the fault during this time was about 1 km. As a result of this history, previous models either for flexural-slip or for right-lateral shear will fit the deformation at different times. 35 refs.

  12. The role of soil hydrologic heterogeneity for modeling large-scale bioremediation protocols.

    NASA Astrophysics Data System (ADS)

    Romano, N.; Palladino, M.; Speranza, G.; Di Fiore, P.; Sica, B.; Nasta, P.

    2014-12-01

    The major aim of the EU-Life+ project EcoRemed (Implementation of eco-compatible protocols for agricultural soil remediation in Litorale Domizio-Agro Aversano NIPS) is the implementation of operating protocols for agriculture-based bioremediation of contaminated croplands, which also involves plants extracting pollutants being then used as biomass for renewable energy production. The study area is the National Interest Priority Site (NIPS) called Litorale Domitio-Agro Aversano, which is located in the Campania Region (Southern Italy) and has an extent of about 200,000 hectares. In this area, high-level spotted soil contamination is mostly due to legal or illegal disposal of industrial and municipal wastes, with hazardous consequences also for the quality of the groundwater. An accurate determination of the soil hydraulic properties to characterize the landscape heterogeneity of the study area plays a key role within the general framework of this project, especially in view of the use of various modeling tools for water flow and solute transport simulations and to predict the effectiveness of the adopted bioremediation protocols. The present contribution is part of an ongoing study in which we are investigating the following research questions: a) Which spatial aggregation schemes are more suitable for upscaling from point to block support? b) Which effective soil hydrologic characterization schemes better simulate the average behavior of larger-scale phytoremediation processes? c) Allowing also for questions a) and b), how does the spatial variability of soil hydraulic properties affect the variability of plant responses to hydro-meteorological forcing?

  13. Beach Nourishment Dynamics in a Coupled Large-Scale Coastal Change and Economic Optimization Model

    NASA Astrophysics Data System (ADS)

    McNamara, D. E.; Murray, B.; Smith, M.

    2008-12-01

    Global climate change is predicted to have significant consequences for shoreline evolution from both sea level rise and changing wave climates. Because many coastal communities actively defend against erosion, changing environmental conditions will influence rates of nourishment. Over large coastal regions, including many towns, the anticipated future rate of nourishment is assumed to be proportional to the expected evolution of the shoreline in the region. This view neglects the possibility of strong coupling between the spatial patterns of nourishment and the distribution of property values within the region. To explore the impact of this coupling, we present a numerical model that incorporates the physical forces of alongshore sediment transport and erosion due to sea level rise as well as the economic forces that drive beach replenishment including the economic benefits of enhanced or maintained beach width and the costs of replenishing. Results are presented for a Carolina-like coastline and show how natural shoreline change rates are altered as the wave climate changes (because of changing storm behaviors). Results also show that the nourishment rate is conserved for varying property value distributions when the nourishment cost is unrelated to past nourishment and, in contrast, increasing nourishment cost as available sand for nourishment is depleted causes strong coupling between the property value distribution and erosion patterns. This strong coupling significantly alters the rate of nourishment and hence the depletion of available sand for nourishing.

  14. Experiments on vertical transverse mixing in a large-scale heterogeneous model aquifer

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Arifur; Jose, Surabhin C.; Nowak, Wolfgang; Cirpka, Olaf A.

    2005-11-01

    Vertical transverse mixing is known to be a controlling factor in natural attenuation of extended biodegradable plumes originating from continuously emitting sources. We perform conservative and reactive tracer tests in a quasi two-dimensional 14 m long sandbox in order to quantify vertical mixing in heterogeneous media. The filling mimics natural sediments including a distribution of different hydro-facies, made of different sand mixtures, and micro-structures within the sand lenses. We quantify the concentration distribution of the conservative tracer by the analysis of digital images taken at steady state during the tracer-dye experiment. Heterogeneity causes plume meandering, leading to distorted concentration profiles. Without knowledge about the velocity distribution, it is not possible to determine meaningful vertical dispersion coefficients from the concentration profiles. Using the stream-line pattern resulting from an inverse model of previous experiments in the sandbox, we can correct for the plume meandering. The resulting vertical dispersion coefficient is approximately 4 × 10⁻⁹ m²/s. We observe no distinct increase in the vertical dispersion coefficient with increasing travel distance, indicating that heterogeneity has hardly any impact on vertical transverse mixing. In the reactive tracer test, we continuously inject an alkaline solution over a certain height into the domain that is occupied otherwise by an acidic solution. The outline of the alkaline plume is visualized by adding a pH indicator into both solutions. From the height and length of the reactive plume, we estimate a transverse dispersion coefficient of approximately 3 × 10⁻⁹ m²/s. Overall, the vertical transverse dispersion coefficients are less than an order of magnitude larger than pore diffusion coefficients and hardly increase due to heterogeneity. Thus, we conclude for the assessment of natural attenuation that reactive plumes might become very large if they are controlled by

  15. Wind tunnel investigation of a large-scale upper surface blown-flap transport model having two engines

    NASA Technical Reports Server (NTRS)

    Aoyagi, K.; Falarski, M. D.; Koenig, D. G.

    1973-01-01

    An investigation has been conducted to determine the aerodynamic characteristics of a large-scale subsonic jet transport model with an upper surface blowing flap system that would augment lift. The model had a 25 deg swept wing of aspect ratio 7.89 and two turbofan engines with the engine centerline located at 0.256 of the wing semispan. The lift of the flap system was augmented by turbofan exhaust impingement on the Coanda surface. Results were obtained for several flap deflections and engine nozzle configurations at jet momentum coefficients from 0 to 4.0. Three-component longitudinal data are presented with two engines operating. Limited longitudinal and lateral data are presented with an engine out. In addition, limited exhaust and flap pressure data are presented.

  16. A Preliminary Model Study of the Large-Scale Seasonal Cycle in Bottom Pressure Over the Global Ocean

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.

    1998-01-01

    Output from the primitive equation model of Semtner and Chervin is used to examine the seasonal cycle in bottom pressure (Pb) over the global ocean. Effects of the volume-conserving formulation of the model on the calculation of Pb are considered. The estimated seasonal, large-scale Pb signals have amplitudes ranging from less than 1 cm over most of the deep ocean to several centimeters over shallow, boundary regions. Variability generally increases toward the western sides of the basins, and is also larger in some Southern Ocean regions. An oscillation between subtropical and higher latitudes in the North Pacific is clear. Comparison with barotropic simulations indicates that, on basin scales, seasonal Pb variability is related to barotropic dynamics and the seasonal cycle in Ekman pumping, and results from a small, net residual in mass divergence from the balance between Ekman and Sverdrup flows.

  17. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    NASA Astrophysics Data System (ADS)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to a class of local modeling techniques based on a new idea called “Just-In-Time (JIT) modeling”. To apply “JIT modeling” online to a large database, “Large-scale database-based Online Modeling (LOM)” has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both “stepwise selection” and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
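
    As a rough, hedged illustration of the underlying "Just-In-Time" idea (not of LOM's stepwise selection or quantization), the sketch below builds no global model: for each query it retrieves the nearest records from a database and fits a small distance-weighted linear model on demand. All data and names are hypothetical.

        # Minimal sketch of just-in-time (local, on-demand) modeling: for each query,
        # retrieve the k nearest records from the database and fit a distance-weighted
        # linear model. This illustrates the general JIT idea only; it does not
        # reproduce LOM's stepwise selection or quantization.
        import numpy as np

        def jit_predict(X_db, y_db, x_query, k=50):
            d = np.linalg.norm(X_db - x_query, axis=1)          # distances to all records
            idx = np.argsort(d)[:k]                             # k nearest neighbours
            w = 1.0 / (d[idx] + 1e-9)                           # inverse-distance weights
            A = np.hstack([X_db[idx], np.ones((k, 1))])         # local linear model with bias
            W = np.diag(w)
            beta, *_ = np.linalg.lstsq(W @ A, W @ y_db[idx], rcond=None)
            return np.append(x_query, 1.0) @ beta

        # Hypothetical plant database: y depends nonlinearly on two measured variables
        rng = np.random.default_rng(0)
        X_db = rng.uniform(-1, 1, size=(5000, 2))
        y_db = np.sin(3 * X_db[:, 0]) + 0.5 * X_db[:, 1] ** 2 + 0.01 * rng.standard_normal(5000)
        print(jit_predict(X_db, y_db, np.array([0.2, -0.4])))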

  18. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    SciTech Connect

    Bonne, François; Bonnay, Patrick

    2014-01-29

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are submitted to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes provide better disturbance immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
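
    As a generic, hedged illustration of what "model-based advanced control" can mean once a control-oriented state-space model is available (this is not the CEA-Grenoble model), the sketch below regulates a toy two-state linear subsystem with an LQR instead of independent PI loops. All matrices and values are placeholders.

        # Generic illustration (not the CEA model): once a control-oriented linear
        # state-space model x' = A x + B u is available, a model-based regulator
        # such as LQR can be synthesized instead of tuning independent PI loops.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        # Toy 2-state subsystem (illustrative values only)
        A = np.array([[-0.02, 0.01],
                      [0.00, -0.05]])
        B = np.array([[0.0, 0.5],
                      [0.8, 0.0]])      # two valve openings as inputs
        Q = np.diag([10.0, 1.0])        # state weighting
        R = np.diag([0.1, 0.1])         # input weighting

        P = solve_continuous_are(A, B, Q, R)
        K = np.linalg.solve(R, B.T @ P)  # state-feedback gain, u = -K x

        # Closed-loop response to an initial thermal-load-like disturbance on state 1
        x = np.array([1.0, 0.0])
        dt = 0.1
        for _ in range(6000):            # 600 s of simulated time
            u = -K @ x
            x = x + dt * (A @ x + B @ u)
        print("state after 600 s:", x)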

  19. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

    In this paper, a physical method to obtain control-oriented dynamical models of large scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace classical approaches designed from user experience, usually based on many independent PI controllers. This is particularly useful in the case where cryoplants are submitted to large pulsed thermal loads, expected to take place in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes provide better disturbance immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large scale helium refrigeration (especially those present on the 400W @1.8K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400W @1.8K (in the 400W @4.4K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  20. Physics-based animation of large-scale splashing liquids, elastoplastic solids, and model-reduced flow

    NASA Astrophysics Data System (ADS)

    Gerszewski, Daniel James

    Physical simulation has become an essential tool in computer animation. As the use of visual effects increases, the need for simulating real-world materials increases. In this dissertation, we consider three problems in physics-based animation: large-scale splashing liquids, elastoplastic material simulation, and dimensionality reduction techniques for fluid simulation. Fluid simulation has been one of the greatest successes of physics-based animation, generating hundreds of research papers and a great many special effects over the last fifteen years. However, the animation of large-scale, splashing liquids remains challenging. We show that a novel combination of unilateral incompressibility, mass-full FLIP, and blurred boundaries is extremely well-suited to the animation of large-scale, violent, splashing liquids. Materials that incorporate both plastic and elastic deformations, also referred to as elastoplastic materials, are frequently encountered in everyday life. Methods for animating such common real-world materials are useful for effects practitioners and have been successfully employed in films. We describe a point-based method for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. Given the deformation gradient, we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. One of the most significant drawbacks of physics-based animation is that ever-higher fidelity leads to an explosion in the number of degrees of freedom. This problem leads us to the consideration of dimensionality reduction techniques. We present
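
    Since the method applies an arbitrary constitutive model once the per-particle deformation gradient F is known, the hedged sketch below evaluates one common choice, a compressible Neo-Hookean model, to obtain the first Piola-Kirchhoff stress from F; the dissertation's actual constitutive choices and force assembly may differ.

        # Hedged sketch: given a per-particle deformation gradient F, evaluate a
        # compressible Neo-Hookean constitutive model to get the first Piola-Kirchhoff
        # stress. The constitutive choice is illustrative only; the method described
        # above allows arbitrary constitutive models once F is known.
        import numpy as np

        def neo_hookean_piola(F, mu=1.0e4, lam=1.0e5):
            J = np.linalg.det(F)
            Finv_T = np.linalg.inv(F).T
            return mu * (F - Finv_T) + lam * np.log(J) * Finv_T

        # A particle stretched 10% along x and sheared slightly in xy
        F = np.array([[1.10, 0.05, 0.0],
                      [0.00, 1.00, 0.0],
                      [0.00, 0.00, 1.0]])
        P = neo_hookean_piola(F)
        print(P)
        # An elastic force contribution would then follow from P and a kernel/shape
        # gradient; that assembly step is scheme-dependent and not shown here.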

  1. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    SciTech Connect

    Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.; Boggs, Paul T.; Ray, Jaideep; van Bloemen Waanders, Bart Gustaaf

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This
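
    The hedged sketch below shows only the basic projection idea behind such ROMs: snapshots of a full-order linear system are compressed into a POD basis via the SVD, and the dynamics are Galerkin-projected onto it. GNAT's Gauss-Newton formulation, hyper-reduction, and error bounds are not reproduced.

        # Minimal sketch of projection-based model reduction (POD-Galerkin) for a
        # linear full-order model x' = A x. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))     # stable full-order operator

        # Collect snapshots of the full-order trajectory
        dt, steps = 0.01, 300
        x = rng.standard_normal(n)
        snapshots = []
        for _ in range(steps):
            x = x + dt * (A @ x)
            snapshots.append(x.copy())
        S = np.array(snapshots).T                               # n x steps snapshot matrix

        # POD basis from the leading left singular vectors
        U, s, _ = np.linalg.svd(S, full_matrices=False)
        k = 10
        V = U[:, :k]                                            # reduced basis

        # Galerkin-projected reduced operator and reduced simulation
        Ar = V.T @ A @ V
        xr = V.T @ snapshots[0]
        for _ in range(steps - 1):
            xr = xr + dt * (Ar @ xr)
        err = np.linalg.norm(V @ xr - snapshots[-1]) / np.linalg.norm(snapshots[-1])
        print("reduced-model relative error:", err)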

  2. Experimental real-time multi-model ensemble (MME) prediction of rainfall during monsoon 2008: Large-scale medium-range aspects

    NASA Astrophysics Data System (ADS)

    Mitra, A. K.; Iyengar, G. R.; Durai, V. R.; Sanjay, J.; Krishnamurti, T. N.; Mishra, A.; Sikka, D. R.

    2011-02-01

    Realistic simulation/prediction of the Asian summer monsoon rainfall on various space-time scales is a challenging scientific task. Compared to mid-latitudes, a proportional skill improvement in the prediction of monsoon rainfall in the medium range has not happened in recent years. Global models and data assimilation techniques are being improved for the monsoon/tropics. However, multi-model ensemble (MME) forecasting is gaining popularity, as it has the potential to provide more information for practical forecasting in terms of making a consensus forecast and handling model uncertainties. As major centers are exchanging model output in near real-time, MME is a viable, inexpensive way of enhancing forecasting skill and information content. During monsoon 2008, on an experimental basis, MME forecasting of large-scale monsoon precipitation in the medium range was carried out in real time at the National Centre for Medium Range Weather Forecasting (NCMRWF), India. The simple ensemble mean (EMN), which gives equal weight to member models, the bias-corrected ensemble mean (BCEMn), and the MME forecast, in which different weights are given to member models, are the products of the algorithm tested here. In general, the aforementioned products from the multi-model ensemble forecast system have higher skill than individual model forecasts. The skill score for the Indian domain and other sub-regions indicates that the BCEMn produces the best result, compared to EMN and MME. Giving weights to different models to obtain an MME product improves upon individual member models only marginally. It is noted that for higher rainfall values, the skill of the global model rainfall forecast decreases rapidly beyond day-3, and hence for day-4 and day-5, the MME products could not bring much improvement over member models. However, up to day-3, the MME products were always better than individual member models.
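
    A hedged sketch of the three products mentioned (simple ensemble mean, bias-corrected ensemble mean, and a weighted multi-model combination) is given below; the weights here come from a least-squares fit over a training period, whereas the operational NCMRWF weighting scheme may differ in detail, and all numbers are synthetic.

        # Hedged sketch of three multi-model products for one grid point / region:
        # (1) simple ensemble mean, (2) bias-corrected ensemble mean, (3) weighted MME.
        import numpy as np

        rng = np.random.default_rng(2)
        obs_train = rng.gamma(2.0, 5.0, size=120)                     # training-period rainfall "observations"
        models_train = obs_train[:, None] + rng.normal([1.0, -2.0, 0.5], 3.0, size=(120, 3))  # 3 biased members

        bias = models_train.mean(axis=0) - obs_train.mean()           # per-model bias
        W, *_ = np.linalg.lstsq(models_train, obs_train, rcond=None)  # regression weights

        models_fcst = np.array([12.0, 8.0, 10.5])                     # today's member forecasts (hypothetical)
        emn   = models_fcst.mean()                                    # simple ensemble mean
        bcemn = (models_fcst - bias).mean()                           # bias-corrected ensemble mean
        mme   = models_fcst @ W                                       # weighted multi-model product
        print(emn, bcemn, mme)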

  3. An exceptionally heavy snowfall in Northeast china: large-scale circulation anomalies and hindcast of the NCAR WRF model

    NASA Astrophysics Data System (ADS)

    Wang, Huijun; Yu, Entao; Yang, Song

    2011-06-01

    In Northeast China (NEC), snowfalls usually occur during winter and early spring, from mid-October to late March, and strong snowfalls rarely occur in middle spring. During 12-13 April 2010, an exceptionally strong snowfall occurred in NEC, with 26.8 mm of accumulated water-equivalent snow over Harbin, the capital of the easternmost province in NEC. In this study, the major features of the snowfall and associated large-scale circulation and the predictability of the snowfall are analyzed using both observations and models. The Siberian High intensified and shifted southeastward from 10 days before the snowfall, intensifying the low-pressure system over NEC and strengthening the East Asian Trough during 12-13 April. Therefore, large convergence of water vapor and strong rising motion appeared over eastern NEC, resulting in heavy snowfall. Hindcast experiments were carried out using the NCAR Weather Research and Forecasting (WRF) model in a two-way nesting approach, forced by NCEP Global Forecast System data sets. Many observed features, including the large-scale and regional circulation anomalies and the snowfall amount, can be reproduced reasonably well, suggesting the suitability of the WRF model for forecasting extreme weather events over NEC. A quantitative analysis also shows that the nested NEC-domain simulation is even better than the mother-domain simulation in simulating the snowfall amount and spatial distribution, and that both simulations are more skillful than the NCEP Global Forecast System output. The forecast result from the nested forecast system is very promising for operational purposes.

  4. Beyond single syllables: large-scale modeling of reading aloud with the Connectionist Dual Process (CDP++) model.

    PubMed

    Perry, Conrad; Ziegler, Johannes C; Zorzi, Marco

    2010-09-01

    Most words in English have more than one syllable, yet the most influential computational models of reading aloud are restricted to processing monosyllabic words. Here, we present CDP++, a new version of the Connectionist Dual Process model (Perry, Ziegler, & Zorzi, 2007). CDP++ is able to simulate the reading aloud of mono- and disyllabic words and nonwords, and learns to assign stress in exactly the same way as it learns to associate graphemes with phonemes. CDP++ is able to simulate the monosyllabic benchmark effects its predecessor could, and therefore shows full backwards compatibility. CDP++ also accounts for a number of novel effects specific to disyllabic words, including the effects of stress regularity and syllable number. In terms of database performance, CDP++ accounts for over 49% of the reaction time variance on items selected from the English Lexicon Project, a very large database of several thousand words. With its lexicon of over 32,000 words, CDP++ is therefore a notable example of the successful scaling-up of a connectionist model to a size that more realistically approximates the human lexical system.

  5. The integration of large-scale neural network modeling and functional brain imaging in speech motor control

    PubMed Central

    Golfinopoulos, E.; Tourville, J.A.; Guenther, F.H.

    2009-01-01

    Speech production demands a number of integrated processing stages. The system must encode the speech motor programs that command movement trajectories of the articulators and monitor transient spatiotemporal variations in auditory and somatosensory feedback. Early models of this system proposed that independent neural regions perform specialized speech processes. As technology advanced, neuroimaging data revealed that the dynamic sensorimotor processes of speech require a distributed set of interacting neural regions. The DIVA (Directions into Velocities of Articulators) neurocomputational model elaborates on early theories, integrating existing data and contemporary ideologies, to provide a mechanistic account of acoustic, kinematic, and functional magnetic resonance imaging (fMRI) data on speech acquisition and production. This large-scale neural network model is composed of several interconnected components whose cell activities and synaptic weight strengths are governed by differential equations. Cells in the model are associated with neuroanatomical substrates and have been mapped to locations in Montreal Neurological Institute stereotactic space, providing a means to compare simulated and empirical fMRI data. The DIVA model also provides a computational and neurophysiological framework within which to interpret and organize research on speech acquisition and production in fluent and dysfluent child and adult speakers. The purpose of this review article is to demonstrate how the DIVA model is used to motivate and guide functional imaging studies. We describe how model predictions are evaluated using voxel-based, region-of-interest-based parametric analyses and inter-regional effective connectivity modeling of fMRI data. PMID:19837177
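
    As a generic illustration of the kind of dynamics referred to above (cell activities governed by differential equations), and not of DIVA's published equations, the sketch below integrates a small network of leaky-integrator units coupled through a weight matrix with forward Euler. Region labels, weights, and constants are hypothetical.

        # Generic illustration (not DIVA's published equations): cell activities
        # governed by leaky-integrator ODEs, tau * da/dt = -a + W @ f(a) + input,
        # integrated with forward Euler. All labels and values are hypothetical.
        import numpy as np

        def f(a):                         # simple saturating output nonlinearity
            return np.tanh(np.maximum(a, 0.0))

        n = 4                             # e.g. premotor, motor, auditory-error, somato-error groups
        W = np.array([[0.0, 0.0, -0.6, -0.4],
                      [0.9, 0.0,  0.0,  0.0],
                      [0.0, 0.5,  0.0,  0.0],
                      [0.0, 0.3,  0.0,  0.0]])
        tau = 0.05                        # time constant [s]
        ext = np.array([1.0, 0.0, 0.0, 0.0])   # external drive to the first cell group

        a = np.zeros(n)
        dt = 0.001
        for _ in range(1000):             # 1 s of simulated time
            a += dt / tau * (-a + W @ f(a) + ext)
        print(np.round(a, 3))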

  6. The integration of large-scale neural network modeling and functional brain imaging in speech motor control.

    PubMed

    Golfinopoulos, E; Tourville, J A; Guenther, F H

    2010-09-01

    Speech production demands a number of integrated processing stages. The system must encode the speech motor programs that command movement trajectories of the articulators and monitor transient spatiotemporal variations in auditory and somatosensory feedback. Early models of this system proposed that independent neural regions perform specialized speech processes. As technology advanced, neuroimaging data revealed that the dynamic sensorimotor processes of speech require a distributed set of interacting neural regions. The DIVA (Directions into Velocities of Articulators) neurocomputational model elaborates on early theories, integrating existing data and contemporary ideologies, to provide a mechanistic account of acoustic, kinematic, and functional magnetic resonance imaging (fMRI) data on speech acquisition and production. This large-scale neural network model is composed of several interconnected components whose cell activities and synaptic weight strengths are governed by differential equations. Cells in the model are associated with neuroanatomical substrates and have been mapped to locations in Montreal Neurological Institute stereotactic space, providing a means to compare simulated and empirical fMRI data. The DIVA model also provides a computational and neurophysiological framework within which to interpret and organize research on speech acquisition and production in fluent and dysfluent child and adult speakers. The purpose of this review article is to demonstrate how the DIVA model is used to motivate and guide functional imaging studies. We describe how model predictions are evaluated using voxel-based, region-of-interest-based parametric analyses and inter-regional effective connectivity modeling of fMRI data.

  7. Analyzing large-scale conservation interventions with Bayesian hierarchical models: a case study of supplementing threatened Pacific salmon.

    PubMed

    Scheuerell, Mark D; Buhle, Eric R; Semmens, Brice X; Ford, Michael J; Cooney, Tom; Carmichael, Richard W

    2015-05-01

    Myriad human activities increasingly threaten the existence of many species. A variety of conservation interventions such as habitat restoration, protected areas, and captive breeding have been used to prevent extinctions. Evaluating the effectiveness of these interventions requires appropriate statistical methods, given the quantity and quality of available data. Historically, analysis of variance has been used with some form of predetermined before-after control-impact design to estimate the effects of large-scale experiments or conservation interventions. However, ad hoc retrospective study designs or the presence of random effects at multiple scales may preclude the use of these tools. We evaluated the effects of a large-scale supplementation program on the density of adult Chinook salmon Oncorhynchus tshawytscha from the Snake River basin in the northwestern United States currently listed under the U.S. Endangered Species Act. We analyzed 43 years of data from 22 populations, accounting for random effects across time and space using a form of Bayesian hierarchical time-series model common in analyses of financial markets. We found that varying degrees of supplementation over a period of 25 years increased the density of natural-origin adults, on average, by 0-8% relative to nonsupplementation years. Thirty-nine of the 43 year effects were at least two times larger in magnitude than the mean supplementation effect, suggesting common environmental variables play a more important role in driving interannual variability in adult density. Additional residual variation in density varied considerably across the region, but there was no systematic difference between supplemented and reference populations. Our results demonstrate the power of hierarchical Bayesian models to detect the diffuse effects of management interventions and to quantitatively describe the variability of intervention success. Nevertheless, our study could not address whether ecological factors
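
    A much-simplified, hedged sketch of the partial-pooling idea at the heart of such hierarchical models is shown below: noisy per-population effect estimates are shrunk toward the across-population mean according to their precision. The paper's full Bayesian time-series model (shared year effects, autocorrelation, MCMC estimation) is not reproduced, and all numbers are synthetic.

        # Simplified, hedged sketch of partial pooling across populations: per-population
        # effect estimates are shrunk toward the across-population mean, the core idea of
        # a hierarchical model. Not the paper's full Bayesian time-series model.
        import numpy as np

        rng = np.random.default_rng(3)
        n_pops = 22
        true_effects = rng.normal(0.04, 0.02, n_pops)           # hypothetical per-population effects
        se = rng.uniform(0.01, 0.05, n_pops)                    # standard error of each estimate
        est = true_effects + rng.normal(0.0, se)                # noisy per-population estimates

        mu = np.average(est, weights=1.0 / se**2)               # precision-weighted overall mean
        tau2 = max(np.var(est) - np.mean(se**2), 1e-6)          # crude between-population variance
        shrink = tau2 / (tau2 + se**2)                          # shrinkage factor per population
        pooled = mu + shrink * (est - mu)                       # partially pooled effects
        print(np.round(pooled[:5], 3))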

  8. Analyzing large-scale conservation interventions with Bayesian hierarchical models: a case study of supplementing threatened Pacific salmon.

    PubMed

    Scheuerell, Mark D; Buhle, Eric R; Semmens, Brice X; Ford, Michael J; Cooney, Tom; Carmichael, Richard W

    2015-05-01

    Myriad human activities increasingly threaten the existence of many species. A variety of conservation interventions such as habitat restoration, protected areas, and captive breeding have been used to prevent extinctions. Evaluating the effectiveness of these interventions requires appropriate statistical methods, given the quantity and quality of available data. Historically, analysis of variance has been used with some form of predetermined before-after control-impact design to estimate the effects of large-scale experiments or conservation interventions. However, ad hoc retrospective study designs or the presence of random effects at multiple scales may preclude the use of these tools. We evaluated the effects of a large-scale supplementation program on the density of adult Chinook salmon Oncorhynchus tshawytscha from the Snake River basin in the northwestern United States currently listed under the U.S. Endangered Species Act. We analyzed 43 years of data from 22 populations, accounting for random effects across time and space using a form of Bayesian hierarchical time-series model common in analyses of financial markets. We found that varying degrees of supplementation over a period of 25 years increased the density of natural-origin adults, on average, by 0-8% relative to nonsupplementation years. Thirty-nine of the 43 year effects were at least two times larger in magnitude than the mean supplementation effect, suggesting common environmental variables play a more important role in driving interannual variability in adult density. Additional residual variation in density varied considerably across the region, but there was no systematic difference between supplemented and reference populations. Our results demonstrate the power of hierarchical Bayesian models to detect the diffuse effects of management interventions and to quantitatively describe the variability of intervention success. Nevertheless, our study could not address whether ecological factors

  9. Analyzing large-scale conservation interventions with Bayesian hierarchical models: a case study of supplementing threatened Pacific salmon

    PubMed Central

    Scheuerell, Mark D; Buhle, Eric R; Semmens, Brice X; Ford, Michael J; Cooney, Tom; Carmichael, Richard W

    2015-01-01

    Myriad human activities increasingly threaten the existence of many species. A variety of conservation interventions such as habitat restoration, protected areas, and captive breeding have been used to prevent extinctions. Evaluating the effectiveness of these interventions requires appropriate statistical methods, given the quantity and quality of available data. Historically, analysis of variance has been used with some form of predetermined before-after control-impact design to estimate the effects of large-scale experiments or conservation interventions. However, ad hoc retrospective study designs or the presence of random effects at multiple scales may preclude the use of these tools. We evaluated the effects of a large-scale supplementation program on the density of adult Chinook salmon Oncorhynchus tshawytscha from the Snake River basin in the northwestern United States currently listed under the U.S. Endangered Species Act. We analyzed 43 years of data from 22 populations, accounting for random effects across time and space using a form of Bayesian hierarchical time-series model common in analyses of financial markets. We found that varying degrees of supplementation over a period of 25 years increased the density of natural-origin adults, on average, by 0–8% relative to nonsupplementation years. Thirty-nine of the 43 year effects were at least two times larger in magnitude than the mean supplementation effect, suggesting common environmental variables play a more important role in driving interannual variability in adult density. Additional residual variation in density varied considerably across the region, but there was no systematic difference between supplemented and reference populations. Our results demonstrate the power of hierarchical Bayesian models to detect the diffuse effects of management interventions and to quantitatively describe the variability of intervention success. Nevertheless, our study could not address whether ecological

  10. Contrasting non-local effects of shoreline stabilization methods in a model of large-scale coastline morphodynamics

    NASA Astrophysics Data System (ADS)

    Ells, K. D.; Murray, A.

    2011-12-01

    Advances in the understanding of the wave-angle dependence of large-scale sandy coastline evolution have allowed exploratory modeling investigations into the emergence of large-scale coastline features such as sandwaves, capes, and spits; the possible responses of these complex coastline shapes to changing wave climates; and the dynamic coupling of natural coastal processes with economic decisions for shoreline stabilization. Recent numerical-model experiments found that beach nourishment on a complex-shaped coastline can significantly alter rates of shoreline change on spatial scales commensurate with the alongshore distance of adjacent features (up to 100 km). While the effect of beach nourishment is to fix a given shoreline position while maintaining a saturated sediment flux locally, hard structured stabilization methods (e.g. seawalls, revetments, or groynes) tend to reduce local alongshore fluxes of sediment. In long-term numerical experiments (decades to centuries), the effects of local stabilization propagate both progressively alongshore and through a non-local mechanism (wave shadowing). Comparing these two fundamentally different methods of shoreline stabilization on various locations along a cuspate cape coastline, we find that both the local and regional responses to hard structures greatly contrast those of beach nourishment. Sustained nourishment near the tip of a cape tends to extend the cape both seaward and in the direction of alongshore flux, increasing the effect that wave shadowing would have otherwise had on distant shorelines, leading to a negative (landward) perturbation to an adjacent cape. A hard structure at the same location, however, completely fixes the cape's original location, decreasing the shadowing effect and resulting in a positive (seaward) perturbation to the downdrift cape. Recent extensions of this work examine how different stabilization methods affect long-term coastline morphodynamics on other coastline types, starting
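
    The wave-angle dependence referred to above can be sketched with a schematic one-line coastline model (not the authors' model): alongshore sediment flux varies with the relative angle between incoming waves and the local shoreline, the shoreline evolves from the flux divergence, and a hard structure is mimicked by blocking flux at one cell. All coefficients are placeholders.

        # Schematic one-line coastline sketch (not the authors' model): alongshore flux
        # depends on the relative wave-shoreline angle, shoreline change follows from
        # flux divergence, and a groyne-like structure blocks flux at one cell.
        import numpy as np

        nx, dx, dt = 200, 100.0, 3600.0       # cells, cell width [m], time step [s]
        K, D = 0.02, 10.0                     # flux scale [m^3/s], closure depth [m]
        y = 5.0 * np.sin(2 * np.pi * np.arange(nx) / nx)   # initial shoreline position [m]
        wave_angle = np.deg2rad(20.0)         # deep-water wave approach angle

        for _ in range(24 * 365):             # one year of hourly steps
            shore_angle = np.arctan(np.gradient(y, dx))
            Q = K * np.sin(2.0 * (wave_angle - shore_angle))   # CERC-like angle dependence
            Q[100] = 0.0                                       # hard structure blocks flux here
            y -= dt / D * np.gradient(Q, dx)                   # dy/dt = -(1/D) dQ/dx
        print(np.round(y[95:106], 2))                          # accretion/erosion around the structure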

  11. Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    NASA Astrophysics Data System (ADS)

    Baraka, Suleiman

    2016-06-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line, and ≈29 R_E on the dusk side. Those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream region. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to be 213 km s⁻¹ at 15 R_E and 63 km s⁻¹ at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it is able to retain the macrostructure of planetary magnetospheres in a very short time, and it can thus be used for pedagogical test purposes. It is also complementary to MHD for deepening our understanding of the large scale magnetosphere.
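
    At the heart of any PIC code is the particle push; the hedged sketch below shows the standard Boris scheme for advancing a charged particle in given E and B fields, with hypothetical normalized values. The specific numerical choices of the code described above are not reproduced here.

        # Illustration of the particle-push step used in PIC codes: the standard Boris
        # scheme for advancing a charged particle in given E and B fields. Field values
        # and normalizations are hypothetical.
        import numpy as np

        def boris_push(v, E, B, qm, dt):
            """Advance velocity v one step for charge-to-mass ratio qm in fields E, B."""
            v_minus = v + 0.5 * qm * dt * E
            t = 0.5 * qm * dt * B
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)
            return v_plus + 0.5 * qm * dt * E

        # Test particle gyrating in a uniform B (z) with a weak electric field (x)
        qm, dt = 1.0, 0.05                    # normalized units
        E = np.array([0.01, 0.0, 0.0])
        B = np.array([0.0, 0.0, 1.0])
        x, v = np.zeros(3), np.array([1.0, 0.0, 0.0])
        for _ in range(200):
            v = boris_push(v, E, B, qm, dt)
            x = x + v * dt
        print(np.round(x, 3), np.linalg.norm(v))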

  12. Calibration of a large-scale groundwater flow model using GRACE data: a case study in the Qaidam Basin, China

    NASA Astrophysics Data System (ADS)

    Hu, Litang; Jiao, Jiu Jimmy

    2015-11-01

    Traditional numerical models usually use extensive observed hydraulic-head data as calibration targets. However, this calibration process is not applicable in remote areas with limited or no monitoring data. This study presents an approach to calibrate a large-scale groundwater flow model using the monthly Gravity Recovery and Climate Experiment (GRACE) satellite data, which have been available globally on a spatial grid of 1° in the geographic coordinate system since 2002. A groundwater storage anomaly isolated from the terrestrial water storage (TWS) anomaly is converted into hydraulic head at the center of the grid, which is then used as observed data to calibrate a numerical model to estimate aquifer hydraulic conductivity. The aquifer system in the remote and hyperarid Qaidam Basin, China, is used as a case study to demonstrate the applicability of this approach. A groundwater model using FEFLOW is constructed for the Qaidam Basin and the GRACE-derived groundwater storage anomaly over the period 2003-2012 is included to calibrate the model, which is done using an automatic estimation method (PEST). The calibrated model is then run to output hydraulic heads at three sites where long-term hydraulic head data are available. The reasonably good fit between the calculated and observed hydraulic heads, together with the very similar groundwater storage anomalies from the numerical model and GRACE data, demonstrate that this approach is generally applicable in regions of groundwater data scarcity.
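
    The conversion step described above can be sketched as follows: a groundwater storage anomaly is isolated from the GRACE terrestrial water storage anomaly by removing other components, and the equivalent water thickness is divided by a specific yield to obtain a head anomaly used as a calibration target. All component values and the specific yield below are hypothetical placeholders.

        # Hedged sketch: isolate a groundwater storage anomaly from the GRACE TWS
        # anomaly and express it as an equivalent hydraulic-head anomaly via a
        # specific yield. All numbers are hypothetical placeholders.
        import numpy as np

        tws_anom  = np.array([ 1.2, 0.8, -0.5, -1.9, 0.4])   # GRACE TWS anomaly [cm water equivalent]
        soil_anom = np.array([ 0.5, 0.3, -0.2, -0.6, 0.1])   # soil-moisture anomaly from a land model [cm]
        snow_anom = np.array([ 0.1, 0.0,  0.0, -0.1, 0.0])   # snow water equivalent anomaly [cm]
        sy = 0.12                                            # assumed specific yield [-]

        gw_anom_cm = tws_anom - soil_anom - snow_anom        # groundwater storage anomaly [cm]
        head_anom_m = (gw_anom_cm / 100.0) / sy              # equivalent head anomaly [m]
        print(np.round(head_anom_m, 3))                      # would serve as calibration "observations"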

  13. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  14. The Interaction of Trade-Wind Clouds with the Large-Scale Flow in Observations and Models

    NASA Astrophysics Data System (ADS)

    Nuijens, L.; Medeiros, B.; Sandu, I.; Ahlgrimm, M.; Vogel, R.

    2015-12-01

    Most of the (sub)tropical oceans within the Hadley circulation experience either moderate subsidence or weak ascent. In these regions shallow trade-wind clouds prevail, whose vertical and spatial distribution have emerged as key factors determining the sensitivity of our climate in global climate models. A major unknown is how large the effect of these clouds should be. For instance, how sensitive is the radiation budget to variations in the distribution of trade-wind cloudiness in nature? How variable is trade-wind cloudiness in the first place? And do we understand the role of the large-scale flow in that variability? In this talk we present how space-borne remote sensing and reanalysis products combined with ground-based remote sensing and high resolution modeling at a representative location start to answer these questions and help validate climate models. We show that across regimes or seasons with moderate subsidence and weak ascent the cloud radiative effect and low-level cloudiness vary remarkably little. A negative feedback mechanism of convection on cloudiness near the lifting condensation level is used to explain this insensitivity. The main difference across regimes is a moderate change in cloudiness in the upper cloud layer, whereby the presence of a trade-wind inversion and strong winds appear to be a prerequisite for larger cloudiness. However, most variance in cloudiness at that level takes place on shorter time scales, with an important role for the deepening of individual clouds and local patterns in vertical motion induced by convection itself, which can significantly alter the trade-wind layer structure. Trade-wind cloudiness in climate models in turn is overly sensitive to changes in the large-scale flow, because relationships that separate cloudiness across regimes in long-term climatologies, which have inspired parameterizations, also act on shorter time scales. We discuss how these findings relate to recent explanations for the spread in modeled

  15. A Hydrologic Model to Quantify Large Scale Biofuel Production Impact on Upper Mississippi River Basin Water Quality

    NASA Astrophysics Data System (ADS)

    Demissie, Y. K.; Yan, E.; Wu, M.

    2010-12-01

    The projected increase in domestic ethanol production in the U.S. is expected to reduce greenhouse gas emissions, promote rural community development, and strengthen the nation’s energy security. However, its potential effect on water resources at both regional and local scales is still uncertain. In particular, large-scale changes in land use and management to produce high-yield energy crops have raised serious concerns about unintended impacts on water quality and availability. This work presents a watershed modeling effort to establish a baseline condition for the Upper Mississippi River Basin, based on which the impacts of conventional and cellulosic biofuel feedstock production on the region’s water resources will be evaluated. The watershed model was adequately calibrated and validated using eighteen years of observed water quality and stream discharge data. The model’s ability to estimate spatially and temporally varying crop growth and biomass production, which is essential for developing future biofuel production scenarios, was evaluated based on observed corn and soybean yields. The results validate the model’s ability to effectively simulate biomass production from different bioenergy feedstocks. A sensitivity analysis was further conducted to evaluate the calibrated model’s response to changes in soil and crop properties and fertilizer application rates associated with the expected increase in biofuel production. The results demonstrate non-linear, spatially varying relationships among nitrate application rate, crop yield, and nutrient loads, as well as soil and crop properties that are affected by increases in biofuel feedstock.

  16. Wind tunnel investigation of a large-scale upper surface blown-flap model having four engines

    NASA Technical Reports Server (NTRS)

    Aoyagi, K.; Falarski, M. D.; Koenig, D. G.

    1975-01-01

    Investigations were conducted in the Ames 40- by 80-Foot Wind Tunnel to determine the aerodynamic characteristics of a large-scale subsonic jet transport model with an upper surface blown flap system. The model had a 25 deg swept wing of aspect ratio 7.28 and four turbofan engines. The lift of the flap system was augmented by turning the turbofan exhaust over the Coanda surface. Results were obtained for several flap deflections with several wing leading-edge configurations at jet momentum coefficients from 0 to 4.0. Three-component longitudinal data are presented with four engines operating. In addition, longitudinal and lateral data are presented with an engine out. The maximum lift and stall angle of the four-engine model were lower than those obtained with a two-engine model that was previously investigated. The addition of the outboard nacelles had an adverse effect on these values. Efforts to improve these values were successful. A maximum lift coefficient of 8.8 at an angle of attack of 27 deg was obtained with a jet thrust coefficient of 2 for the landing flap configuration.

  17. Estimation of Van Genuchten and preferential flow parameters by inverse modelling for large scale vertical flow constructed wetlands

    NASA Astrophysics Data System (ADS)

    Maier, U.

    2009-04-01

    The background of this study is the attempt to predict the capability of vertical flow constructed wetlands for cleanup of contaminated groundwater. Constructed wetlands have been used for waste water treatment for decades, and they provide a promising cost-efficient tool for large scale contaminated groundwater remediation. Vertical soil filters are one type of such constructed wetlands, in which water flows vertically under alternating unsaturated conditions (intermittent load). The present study focuses on the modeling and calibration of unsaturated water flow in two different vertical soil filters. Flow data used for the calibration correspond to measurements performed in two vertical filters used for sewage water treatment at a research pilot treatment plant. Numerical simulations were performed using the code MIN3P, in which variably saturated flow is based on the Richards equation. Soil hydraulic functions based on van Genuchten coefficients and preferential flow characteristics were obtained by calibrating the model to measured data using evolution strategies with covariance matrix adaptation (CMA-ES). The presented inverse modelling procedure not only provides best-fit parameterizations for separate and joint model objectives, but also utilizes the information from multiple re-starts of the optimization algorithm to determine suitable parameter ranges and reveal potential correlations. The sequential automatic calibration is both straightforward and efficient even if different complex objective functions are considered.
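
    For reference, the van Genuchten retention function whose coefficients are being calibrated is sketched below, fitted here to hypothetical retention data by ordinary least squares; the study itself calibrated a full Richards-equation model (MIN3P) with CMA-ES, which this simple fit does not reproduce.

        # Hedged sketch of the van Genuchten water-retention function and a simple
        # least-squares fit to hypothetical retention data (not the MIN3P/CMA-ES
        # calibration described above).
        import numpy as np
        from scipy.optimize import curve_fit

        def van_genuchten(h, theta_r, theta_s, alpha, n):
            """Water content as a function of suction head h > 0 [m]; m = 1 - 1/n."""
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

        # Hypothetical retention measurements (suction head [m], water content [-])
        h_obs = np.array([0.01, 0.05, 0.1, 0.3, 0.5, 1.0, 2.0, 5.0])
        theta_obs = np.array([0.40, 0.39, 0.36, 0.27, 0.21, 0.14, 0.10, 0.08])

        p0 = [0.05, 0.40, 3.0, 2.0]                       # initial guess: theta_r, theta_s, alpha, n
        params, _ = curve_fit(van_genuchten, h_obs, theta_obs, p0=p0, maxfev=5000)
        print(dict(zip(["theta_r", "theta_s", "alpha", "n"], np.round(params, 3))))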

  18. Assimilation of satellite data to optimize large-scale hydrological model parameters: a case study for the SWOT mission

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-11-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observations of rivers wider than 100 m and water surface areas greater than approximately 250 x 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large-scale river routing models. The method consists of applying a data assimilation approach, the extended Kalman filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA (Interactions between Soil, Biosphere, and Atmosphere)-TRIP (Total Runoff Integrating Pathways) continental hydrologic system. Parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which leads to significant errors at reach and large scales. The current study focuses on the Niger Basin, a transboundary river. Since SWOT observations are not yet available, and also to assess the proposed assimilation method, the study is carried out under the framework of an observing system simulation experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning coefficients are then supposed to be known and are used to generate synthetic SWOT observations over the period 2002-2003. The impact of the assimilation system on the Niger Basin hydrological cycle is then quantified. The optimization of the Manning coefficient using the EKF algorithm over an 18-month period led to a significant improvement of the river water levels. The relative bias of the water level is globally improved (a 30
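
    The parameter-update idea can be sketched in a heavily simplified form: an extended-Kalman-filter correction of a Manning coefficient from a water-surface-elevation innovation, with the observation operator linearized numerically around a toy uniform-flow depth relation. The real system uses ISBA-TRIP and SWOT-like observation geometry; everything below is a hypothetical stand-in.

        # Hedged sketch of the EKF parameter-update idea: correct a Manning roughness
        # coefficient from a water-surface-elevation innovation, linearizing a toy
        # uniform-flow depth model numerically. Not the ISBA-TRIP/SWOT system.
        import numpy as np

        def depth_from_manning(n, Q=1500.0, W=400.0, S=1e-4):
            """Uniform-flow depth of a wide rectangular channel (toy observation operator)."""
            return (n * Q / (W * np.sqrt(S))) ** 0.6

        n_est, P = 0.050, 0.01 ** 2          # prior Manning coefficient and its variance
        R = 0.10 ** 2                        # WSE observation-error variance [m^2]
        h_obs = depth_from_manning(0.035) + 0.05   # synthetic observation from a "true" n = 0.035

        for _ in range(5):                   # a few sequential updates with the same observation
            h_pred = depth_from_manning(n_est)
            dn = 1e-4
            H = (depth_from_manning(n_est + dn) - h_pred) / dn      # numerical Jacobian dh/dn
            K = P * H / (H * P * H + R)                             # Kalman gain (scalar case)
            n_est = n_est + K * (h_obs - h_pred)
            P = (1.0 - K * H) * P
        print(round(n_est, 4))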

  19. Systems Perturbation Analysis of a Large-Scale Signal Transduction Model Reveals Potentially Influential Candidates for Cancer Therapeutics.

    PubMed

    Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš

    2016-01-01

    Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model's components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found as most influential under inactivating perturbations, whereas the kinase and small cell lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified in influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis

  20. The application of ICOM, a non-hydrostatic, fully unstructured mesh model in large scale ocean domains

    NASA Astrophysics Data System (ADS)

    Kramer, Stephan C.; Piggott, Matthew D.; Cotter, Colin J.; Pain, Chris C.; Nelson, Rhodri B.

    2010-05-01

    An overview is given of some of the difficulties that were encountered in the application of ICOM in large scale, high aspect ratio ocean domains and how they have been overcome. A large scale application in the form of a baroclinic, wind-driven double gyre will be presented and the results are compared to two other models, the MIT general circulation model (MITgcm, [3]) and NEMO (Nucleus for European Modelling of the Ocean, [4]). Also a comparison of the performance and parallel scaling of the models on a supercomputing platform will be made. References [1] M.D. Piggott, G.J. Gorman, C.C. Pain, P.A. Allison, A.S. Candy, B.T. Martin and W.R. Wells, "A new computational framework for multi-scale ocean modelling based on adapting unstructured meshes", International Journal for Numerical Methods in Fluids 56, pp 1003 - 1015, 2008 [2] S.C. Kramer, C.J. Cotter and C.C. Pain, "Solving the Poisson equation on small aspect ratio domains using unstructured meshes", submitted to Ocean Modelling [3] J. Marshall, C. Hill, L. Perelman, and A. Adcroft, "Hydrostatic, quasi-hydrostatic, and nonhydrostatic ocean modeling", J. Geophysical Res., 102(C3), pp 5733-5752, 1997 [4] G. Madec, "NEMO ocean engine", Note du Pole de modélisation, Institut Pierre-Simon Laplace (IPSL), France, No 27 ISSN No 1288-1619

  1. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

    A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. These resulting impulse responses or 'Green's functions' then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific, and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to use of spatially correlated errors, and to other data types.
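
    The inverse step can be sketched as follows, under the stated linearity assumption: the model responses to isolated unit impulses form the columns of a kernel matrix G, and the altimeter-minus-model misfit d is attributed to a combination of impulses by damped least squares. The kernels and noise below are synthetic placeholders, not Bryan/Cox model output.

        # Hedged sketch of the Green's function inverse step: impulse responses form
        # the columns of a kernel matrix G, and the observed misfit d is explained as
        # G @ x via damped least squares. All data here are synthetic placeholders.
        import numpy as np

        rng = np.random.default_rng(4)
        n_obs, n_impulses = 500, 40
        G = rng.standard_normal((n_obs, n_impulses))       # columns: response to each unit impulse
        x_true = rng.standard_normal(n_impulses)
        d = G @ x_true + 0.5 * rng.standard_normal(n_obs)  # altimeter-minus-model misfit with noise

        lam = 1.0                                          # damping (crude stand-in for prior error covariances)
        lhs = G.T @ G + lam * np.eye(n_impulses)
        x_hat = np.linalg.solve(lhs, G.T @ d)              # damped least-squares estimate
        print("impulse-amplitude recovery correlation:", np.corrcoef(x_true, x_hat)[0, 1].round(3))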

  2. A statistical model for Windstorm Variability over the British Isles based on Large-scale Atmospheric and Oceanic Mechanisms

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Befort, Daniel J.; Wild, Simon B.; Ulbrich, Uwe; Leckebusch, Gregor C.

    2016-04-01

    Time-clustered winter storms are responsible for a majority of the wind-induced losses in Europe. Over recent years, different atmospheric and oceanic large-scale mechanisms such as the North Atlantic Oscillation (NAO) or the Meridional Overturning Circulation (MOC) have been proven to drive a significant portion of the windstorm variability over Europe. In this work we systematically investigate the influence of different large-scale natural variability modes: more than 20 indices related to those mechanisms with proven or potential influence on windstorm frequency variability over Europe - mostly SST- or pressure-based - are derived by means of the ECMWF ERA-20C reanalysis during the last century (1902-2009), and compared to the windstorm variability for the European winter (DJF). Windstorms are defined and tracked as in Leckebusch et al. (2008). The derived indices are then employed to develop a statistical procedure including a stepwise Multiple Linear Regression (MLR) and an Artificial Neural Network (ANN), aiming to hindcast the inter-annual (DJF) regional windstorm frequency variability in a case study for the British Isles. This case study reveals 13 indices with a statistically significant coupling with seasonal windstorm counts. The Scandinavian Pattern (SCA) showed the strongest correlation (0.61), followed by the NAO (0.48) and the Polar/Eurasia Pattern (0.46). The obtained indices (standard-normalised) are selected as predictors for a windstorm variability hindcast model applied to the British Isles. First, a stepwise linear regression is performed to identify which mechanisms can explain windstorm variability best. Finally, the indices retained by the stepwise regression are used to develop a multilayer perceptron-based ANN that hindcasts seasonal windstorm frequency and clustering. Eight indices (SCA, NAO, EA, PDO, W.NAtl.SST, AMO (unsmoothed), EA/WR and Trop.N.Atl SST) are retained by the stepwise regression. Among them, SCA showed the highest linear
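
    A hedged sketch of the two-stage statistical procedure is given below: candidate standardized indices are screened by their correlation with seasonal storm counts, and a multiple linear regression hindcast is fitted on the retained ones (the study additionally trains a multilayer-perceptron ANN on the same predictors). The data and the significance threshold are synthetic stand-ins for the actual stepwise criteria.

        # Hedged sketch: screen standardized large-scale indices against seasonal
        # windstorm counts, keep the "significant" ones, and fit a linear hindcast.
        # All data are synthetic; the actual stepwise criteria may differ.
        import numpy as np

        rng = np.random.default_rng(5)
        n_years, n_idx = 108, 20
        X = rng.standard_normal((n_years, n_idx))                 # standardized candidate indices
        storms = 10 + 3 * X[:, 0] + 2 * X[:, 3] + 1.5 * X[:, 7] + rng.normal(0, 2, n_years)

        # Screening step: keep indices with a "significant" correlation with storm counts
        r = np.array([np.corrcoef(X[:, j], storms)[0, 1] for j in range(n_idx)])
        keep = np.where(np.abs(r) > 2.0 / np.sqrt(n_years))[0]    # crude significance threshold
        print("retained indices:", keep)

        # Multiple linear regression hindcast on retained indices
        A = np.hstack([X[:, keep], np.ones((n_years, 1))])
        beta, *_ = np.linalg.lstsq(A, storms, rcond=None)
        hindcast = A @ beta
        print("hindcast correlation:", np.corrcoef(hindcast, storms)[0, 1].round(2))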

  3. A three-dimensional diffusion/convection model of the large scale magnetic field in the Venus ionosphere

    SciTech Connect

    Luhmann, J.G. )

    1988-06-01

    An appreciation of how large-scale magnetic fields can be maintained in the subsolar Venus ionosphere by the solar wind interaction was previously obtained with one-dimensional diffusion/convection numerical models. Here, the solution of the diffusion/convection or dynamo equation for the ionospheric field is generalized to three dimensions under the assumption that the field and flow at the upper boundary (in the magnetic barrier) is known from a previous gas dynamic model, and that the ionospheric plasma velocity is known. The latter is given by the combination of the antisunward convection inferred from measurements, and the downward drift calculated from the observed vertical thermal pressure gradient. The results suggest that the low-altitude magnetosheath field draping may be distorted by the interaction with the ionosphere in such a manner that there is an apparent focusing of the field toward the subsolar point. Although the model resolution is too coarse to resolve the magnetic belt, an ionospheric field is produced that is strongest and parallel to the overlying field in the subsolar region, as is observed.

  4. Vibration, performance, flutter and forced response characteristics of a large-scale propfan and its aeroelastic model

    NASA Technical Reports Server (NTRS)

    August, Richard; Kaza, Krishna Rao V.

    1988-01-01

    An investigation of the vibration, performance, flutter, and forced response of the large-scale propfan, SR7L, and its aeroelastic model, SR7A, has been performed by applying available structural and aeroelastic analytical codes and then correlating measured and calculated results. Finite element models of the blades were used to obtain modal frequencies, displacements, stresses and strains. These values were then used in conjunction with a 3-D, unsteady, lifting surface aerodynamic theory for the subsequent aeroelastic analyses of the blades. The agreement between measured and calculated frequencies and mode shapes for both models is very good. Calculated power coefficients correlate well with those measured for low advance ratios. Flutter results show that both propfans are stable at their respective design points. There is also good agreement between calculated and measured blade vibratory strains due to excitation resulting from yawed flow for the SR7A propfan. The similarity of structural and aeroelastic results shows that the SR7A propfan simulates the SR7L characteristics.

  5. Systems Perturbation Analysis of a Large-Scale Signal Transduction Model Reveals Potentially Influential Candidates for Cancer Therapeutics

    PubMed Central

    Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš

    2016-01-01

    Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them for biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found as most influential under inactivating perturbations, whereas the kinase and small cell lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified in influential components based on the affected components in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis

  6. Study of materials and machines for 3D printed large-scale, flexible electronic structures using fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Hwang, Seyeon

    Three-dimensional printing (3DP), also called additive manufacturing (AM) or rapid prototyping (RP), has emerged to revolutionize manufacturing and completely transform how products are designed and fabricated. A great deal of research activity has been carried out to apply this new technology to a variety of fields. In spite of many endeavors, much more research is still required to perfect the processes of 3D printing techniques, especially in the areas of large-scale additive manufacturing and flexible printed electronics. The principles of various 3D printing processes are briefly outlined in the Introduction Section. New types of thermoplastic polymer composites aimed at specific functional applications are also introduced in this section. Chapter 2 presents studies of metal/polymer composite filaments for the fused deposition modeling (FDM) process. Various metal particles, copper and iron, are added into thermoplastic polymer matrices as the reinforcement filler. The thermo-mechanical properties of the composites, such as thermal conductivity, hardness, tensile strength, and fracture mechanism, are tested to determine the effects of metal fillers on 3D printed composite structures for the large-scale printing process. In Chapter 3, carbon/polymer composite filaments are developed by a simple mechanical blending process with the aim of fabricating flexible 3D printed electronics as a single structure. Various types of carbon particles, consisting of multi-wall carbon nanotube (MWCNT), conductive carbon black (CCB), and graphite, are used as the conductive fillers to provide the thermoplastic polyurethane (TPU) with improved electrical conductivity. The mechanical behavior and conduction mechanisms of the developed composite materials are examined as a function of carbon filler loading in this section. Finally, the prototype flexible electronics are modeled and manufactured by the FDM process using Carbon/TPU composite filaments and

  7. Experimental results and numerical modeling of a high-performance large-scale cryopump. I. Test particle Monte Carlo simulation

    SciTech Connect

    Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos

    2011-07-15

    For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the pumping speeds measured. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields which holds for higher throughputs is calculated using this generic approach. This means that the test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also for the supply of the input parameters necessary for the future direct simulation Monte Carlo in the full flow regime.

  8. Context-Dependent Encoding of Fear and Extinction Memories in a Large-Scale Network Model of the Basal Amygdala

    PubMed Central

    Vlachos, Ioannis; Herry, Cyril; Lüthi, Andreas; Aertsen, Ad; Kumar, Arvind

    2011-01-01

    The basal nucleus of the amygdala (BA) is involved in the formation of context-dependent conditioned fear and extinction memories. To understand the underlying neural mechanisms we developed a large-scale neuron network model of the BA, composed of excitatory and inhibitory leaky-integrate-and-fire neurons. Excitatory BA neurons received conditioned stimulus (CS)-related input from the adjacent lateral nucleus (LA) and contextual input from the hippocampus or medial prefrontal cortex (mPFC). We implemented a plasticity mechanism according to which CS and contextual synapses were potentiated if CS and contextual inputs temporally coincided on the afferents of the excitatory neurons. Our simulations revealed a differential recruitment of two distinct subpopulations of BA neurons during conditioning and extinction, mimicking the activation of experimentally observed cell populations. We propose that these two subgroups encode contextual specificity of fear and extinction memories, respectively. Mutual competition between them, mediated by feedback inhibition and driven by contextual inputs, regulates the activity in the central amygdala (CEA) thereby controlling amygdala output and fear behavior. The model makes multiple testable predictions that may advance our understanding of fear and extinction memories. PMID:21437238
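
    As a toy illustration of the ingredients named in this abstract (leaky integrate-and-fire dynamics plus coincidence-driven potentiation of CS and contextual synapses), the sketch below wires a single model neuron to two Poisson inputs. All constants are assumptions made only to give a runnable example; they are not the parameters of the published network.

      import numpy as np

      rng = np.random.default_rng(1)
      dt, T = 1e-4, 2.0                 # time step [s], simulated duration [s]
      tau_m, v_rest, v_th, v_reset = 20e-3, -70e-3, -50e-3, -60e-3
      w_cs, w_ctx = 4.0e-3, 4.0e-3      # synaptic weights [V per spike] (assumed)
      eta, window = 0.05e-3, 5e-3       # potentiation increment and coincidence window (assumed)
      cs_rate, ctx_rate = 100.0, 100.0  # Poisson input rates [Hz] (assumed)

      v = v_rest
      last_cs = last_ctx = -np.inf
      spikes = []
      for k in range(int(T / dt)):
          t = k * dt
          cs = rng.uniform() < cs_rate * dt      # spike on the CS-related afferent
          ctx = rng.uniform() < ctx_rate * dt    # spike on the contextual afferent
          if cs:
              last_cs = t
          if ctx:
              last_ctx = t
          # coincidence-based potentiation of both synapses
          if (cs or ctx) and abs(last_cs - last_ctx) <= window:
              w_cs += eta
              w_ctx += eta
          # leaky integrate-and-fire membrane update (Euler step)
          v += dt * (v_rest - v) / tau_m
          v += w_cs * cs + w_ctx * ctx
          if v >= v_th:
              spikes.append(t)
              v = v_reset

      print(f"output spikes: {len(spikes)}, final weights: w_cs={w_cs*1e3:.2f} mV, w_ctx={w_ctx*1e3:.2f} mV")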

  9. Feasibility analysis of using inverse modeling for estimating natural groundwater recharge from a large-scale soil moisture monitoring network

    NASA Astrophysics Data System (ADS)

    Wang, Tiejun; Franz, Trenton E.; Yue, Weifeng; Szilagyi, Jozsef; Zlotnik, Vitaly A.; You, Jinsheng; Chen, Xunhong; Shulski, Martha D.; Young, Aaron

    2016-02-01

    Despite the importance of groundwater recharge (GR), its accurate estimation still remains one of the most challenging tasks in the field of hydrology. In this study, with the help of inverse modeling, long-term (6 years) soil moisture data at 34 sites from the Automated Weather Data Network (AWDN) were used to estimate the spatial distribution of GR across Nebraska, USA, where significant spatial variability exists in soil properties and precipitation (P). To ensure the generality of this study and its potential broad applications, data from public domains and literature were used to parameterize the standard Hydrus-1D model. Although observed soil moisture differed significantly across the AWDN sites mainly due to the variations in P and soil properties, the simulations were able to capture the dynamics of observed soil moisture under different climatic and soil conditions. The inferred mean annual GR from the calibrated models varied over three orders of magnitude across the study area. To assess the uncertainties of the approach, estimates of GR and actual evapotranspiration (ETa) from the calibrated models were compared to the GR and ETa obtained from other techniques in the study area (e.g., remote sensing, tracers, and regional water balance). Comparison clearly demonstrated the feasibility of inverse modeling and large-scale (>10⁴ km²) soil moisture monitoring networks for estimating GR. In addition, the model results were used to further examine the impacts of climate and soil on GR. The data showed that both P and soil properties had significant impacts on GR in the study area with coarser soils generating higher GR; however, different relationships between GR and P emerged at the AWDN sites, defined by local climatic and soil conditions. In general, positive correlations existed between annual GR and P for the sites with coarser-textured soils or under wetter climatic conditions. With the rapidly expanding soil moisture monitoring networks around the

  10. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Karssenberg, D.; Wanders, N.; de Roo, A.; de Jong, S.; Bierkens, M. F.

    2013-12-01

    Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated, or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide the closest thing to a direct measurement of the state of the unsaturated zone, and thus are potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: 1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? 2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to approaches that calibrate only with discharge, such that this leads to improved forecasts of soil moisture content and discharge as well? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper Danube area. Calibration is done with discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS and ASCAT. Four scenarios are studied: no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data (three satellites) and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated for a period of 2 years and validated for the calibrated model parameters on a validation period of 10 years. Results show that calibration with discharge data improves the estimation of groundwater parameters (e.g., groundwater reservoir constant) and

  11. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    NASA Astrophysics Data System (ADS)

    Defourny, P.

    2013-12-01

    such the Green Area Index (GAI), fAPAR and fcover usually retrieved from MODIS, MERIS, SPOT-Vegetation described the quality of the green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN projects (Russia) improved the standard products and were demonstrated at large scale. The GAI retrieved from MODIS time series using a purity index criterion successfully depicted the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimate over years. These results showed that the GAI assimilation works best at the district or provincial level. In the context of the GEO Ag., the Joint Experiment of Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  12. The Climate Potentials and Side-Effects of Large-Scale terrestrial CO2 Removal - Insights from Quantitative Model Assessments

    NASA Astrophysics Data System (ADS)

    Boysen, L.; Heck, V.; Lucht, W.; Gerten, D.

    2015-12-01

    Terrestrial carbon dioxide removal (tCDR) through dedicated biomass plantations is considered as one climate engineering (CE) option if implemented at large-scale. While the risks and costs are supposed to be small, the effectiveness depends strongly on spatial and temporal scales of implementation. Based on simulations with a dynamic global vegetation model (LPJmL) we comprehensively assess the effectiveness, biogeochemical side-effects and tradeoffs from an earth system-analytic perspective. We analyzed systematic land-use scenarios in which all, 25%, or 10% of natural and/or agricultural areas are converted to tCDR plantations including the assumption that biomass plantations are established once the 2°C target is crossed in a business-as-usual climate change trajectory. The resulting tCDR potentials in year 2100 include the net accumulated annual biomass harvests and changes in all land carbon pools. We find that only the most spatially excessive, and thus undesirable, scenario would be capable of restoring the 2°C target by 2100 under continuing high emissions (with a cooling of 3.02°C). Large-scale biomass plantations covering areas between 1.1 and 4.2 Gha would produce a climate reduction potential of 0.8-1.4°C. tCDR plantations at smaller scales do not build up enough biomass over the considered period and the potentials to achieve global warming reductions are substantially lowered to no more than 0.5-0.6°C. Finally, we demonstrate that the (non-economic) costs for the Earth system include negative impacts on the water cycle and on ecosystems, which are already under pressure due to both land use change and climate change. Overall, tCDR may lead to a further transgression of land- and water-related planetary boundaries while not being able to set back the crossing of the planetary boundary for climate change. tCDR could still be considered in the near-future mitigation portfolio if implemented on small scales on wisely chosen areas.

  13. Troposphere-stratosphere response to large-scale North Atlantic Ocean variability in an atmosphere/ocean coupled model

    NASA Astrophysics Data System (ADS)

    Omrani, N.-E.; Bader, Jürgen; Keenlyside, N. S.; Manzini, Elisa

    2016-03-01

    The instrumental records indicate that the basin-wide wintertime North Atlantic warm conditions are accompanied by a pattern resembling the negative North Atlantic oscillation (NAO), and cold conditions by a pattern resembling the positive NAO. This relation is well reproduced in a control simulation by the stratosphere resolving atmosphere-ocean coupled Max-Planck-Institute Earth System Model (MPI-ESM). Further analyses of the MPI-ESM model simulation show that the large-scale warm North Atlantic conditions are associated with a stratospheric precursory signal that propagates down into the troposphere, preceding the wintertime negative NAO. Additional experiments using only the atmospheric component of MPI-ESM (ECHAM6) indicate that these stratospheric and tropospheric changes are forced by the warm North Atlantic conditions. The basin-wide warming excites a wave-induced stratospheric vortex weakening, stratosphere/troposphere coupling and a high-latitude tropospheric warming. The induced high-latitude tropospheric warming is associated with reduction of the growth rate of low-level baroclinic waves over the North Atlantic region, contributing to the negative NAO pattern. For the cold North Atlantic conditions, the strengthening of the westerlies in the coupled model is confined to the troposphere and lower stratosphere. Comparing the coupled and uncoupled model shows that in the cold phase the tropospheric changes seen in the coupled model are not well reproduced by the standalone atmospheric configuration. Our experiments provide further evidence that North Atlantic Ocean variability (NAV) impacts the coupled stratosphere/troposphere system. As NAV has been shown to be predictable on seasonal-to-decadal timescales, these results have important implications for the predictability of the extra-tropical atmospheric circulation on these time-scales.

  14. Large-Scale Integrated Hydrologic Modeling: Response of the Susquehanna River Basin to 99-Year Climate Forcing

    NASA Astrophysics Data System (ADS)

    Sedmera, K. A.; Duffy, C. J.; Reed, P. M.

    2004-05-01

    This research focuses on large scale (10,000-100,000 sq. km) simulation of regional water budgets using digital data sets and a fully-coupled integrated (surface/subsurface) hydrologic model for the Susquehanna River basin (SRB). The main objectives in this effort are to develop an appropriate and consistent data model for the SRB, delineate groundwater basins, assess the dominant modes and spatial scales affecting the SRB, and estimate the dominant hydrologic response of relatively un-gaged sub-basins. The data model primarily consists of 1) a 99-year climate and vegetation history from PRISM and VEMAP, 2) land surface parameters from various EPA, NRCS, and USGS reports and data sets, and 3) hydrogeology from various state geologic surveys and reports. MODHMS (MODFLOW Hydrologic Modeling System) is a fully-coupled integrated hydrologic model that simulates 3-D variably saturated subsurface flow (Richards' equation), 1-D channel flow and 2-D surface runoff (diffusion wave approximation), canopy interception and evapotranspiration, and offers robust solutions to the governing equations for coupled surface/subsurface flow. The first step in this approach uses a steady-state simulation to estimate regional recharge, to delineate groundwater basins within each river basin, and to assess the validity of the hydrologic landscape concept. The long term climate history is then used to drive a transient simulation that will be used to study the effect of seasonal, inter-annual, and decadal climate patterns and land use on the persistence of wet and dry cycles in soil moisture, on recharge, and on the regional water budget as a whole.
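
    For reference, the variably saturated subsurface flow mentioned above is governed by Richards' equation, written here in its standard mixed form (textbook notation, not specific to MODHMS):

      \frac{\partial \theta(h)}{\partial t} \;=\; \nabla \cdot \big[\, K(h)\, \nabla (h + z) \,\big] \;+\; q

    where θ is the volumetric water content, h the pressure head, K(h) the unsaturated hydraulic conductivity, z the elevation head, and q a source/sink term.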

  15. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Wanders, Niko; Bierkens, Marc F. P.; de Jong, Steven M.; de Roo, Ad; Karssenberg, Derek

    2013-04-01

    Nowadays large-scale hydrological models are mostly calibrated using observed discharge. Although this may lead to accurate hydrograph estimation, calibration on discharge is restricted to parameters that directly affect discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated, or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide a direct measurement of the state of the unsaturated zone, and thus are potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: 1) Does calibration on remotely sensed soil moisture lead to an improved identification of hydrological models compared to approaches that calibrate on discharge alone? 2) If this is the case, what is the improvement in the forecasted hydrograph? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper-Danube area. Calibration is done with discharge and remotely sensed soil moisture from AMSR-E, SMOS and ASCAT. Estimates and spatial correlation are derived from a previously published study on the quantification of the errors and spatial error structure of microwave remote sensing techniques. Four scenarios are studied, namely, no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated for a period of 2 years and validated using a validation period of 10 years with the calibrated
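
    The joint state-parameter update at the heart of such a dual ensemble Kalman filter can be sketched in a few lines. The toy bucket model, the parameter being estimated and all numbers below are assumptions made purely for illustration; this is not LISFLOOD or the filter configuration of the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      N = 100                                  # ensemble size
      state = rng.normal(0.30, 0.05, N)        # soil moisture [m3/m3], prior ensemble
      param = rng.normal(0.10, 0.03, N)        # hypothetical drainage coefficient [1/day], prior ensemble
      obs, obs_err = 0.25, 0.02                # observed soil moisture and its error std (assumed)

      def forecast(sm, k, precip=0.01):
          """Toy bucket model: one daily step of wetting by precipitation and linear drainage."""
          return np.clip(sm + precip - k * sm, 0.0, 0.5)

      # forecast step: propagate every member with its own parameter value
      state = forecast(state, param)

      # analysis step: update the augmented vector [state, parameter] with the EnKF equations
      aug = np.vstack([state, param])                       # 2 x N augmented ensemble
      A = aug - aug.mean(axis=1, keepdims=True)             # ensemble anomalies
      H = np.array([[1.0, 0.0]])                            # we observe the state only
      HA = H @ A
      P_hh = (HA @ HA.T) / (N - 1) + obs_err**2             # innovation covariance (scalar)
      K = (A @ HA.T) / (N - 1) / P_hh                       # Kalman gain, shape (2, 1)
      perturbed_obs = obs + rng.normal(0.0, obs_err, N)     # stochastic EnKF with perturbed observations
      aug = aug + K @ (perturbed_obs - H @ aug)
      state, param = aug[0], aug[1]

      print(f"posterior soil moisture mean {state.mean():.3f}, drainage coefficient mean {param.mean():.3f}")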

  16. Transcranial direct current stimulation changes resting state functional connectivity: A large-scale brain network modeling study.

    PubMed

    Kunze, Tim; Hunold, Alexander; Haueisen, Jens; Jirsa, Viktor; Spiegler, Andreas

    2016-10-15

    Transcranial direct current stimulation (tDCS) is a noninvasive technique for affecting brain dynamics with promising application in the clinical therapy of neurological and psychiatric disorders such as Parkinson's disease, Alzheimer's disease, depression, and schizophrenia. Resting state dynamics increasingly play a role in the assessment of connectivity-based pathologies such as Alzheimer's and schizophrenia. We systematically applied tDCS in a large-scale network model of 74 cerebral areas, investigating the spatiotemporal changes in dynamic states as a function of structural connectivity changes. Structural connectivity was defined by the human connectome. The main findings of this study are fourfold: Firstly, we found a tDCS-induced increase in functional connectivity among cerebral areas and among EEG sensors, where the latter reproduced empirical findings of other researchers. Secondly, the analysis of the network dynamics suggested synchronization to be the main mechanism of the observed effects. Thirdly, we found that tDCS sharpens and shifts the frequency distribution of scalp EEG sensors slightly towards higher frequencies. Fourthly, new dynamic states emerged through interacting areas in the network compared to the dynamics of an isolated area. The findings propose synchronization as a key mechanism underlying the changes in the spatiotemporal pattern formation due to tDCS. Our work supports the notion that noninvasive brain stimulation is able to bias brain dynamics by affecting the competitive interplay of functional subnetworks.

  17. Afterslip and viscoelastic relaxation model inferred from the large-scale post-seismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-06-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades), significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5 yr time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (2010 February 27) over the whole South American continent. With the first 2 yr of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a low-viscosity channel along the deepest part of the plate interface and no additional low-viscosity wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10¹⁸ Pa s; and (ii) a low-viscosity channel along the plate interface extending from depths of 55-135 km with viscosities below 10¹⁸ Pa s.

  18. Sensitivity and foreground modelling for large-scale cosmic microwave background B-mode polarization satellite missions

    NASA Astrophysics Data System (ADS)

    Remazeilles, M.; Dickinson, C.; Eriksen, H. K. K.; Wehus, I. K.

    2016-05-01

    The measurement of the large-scale B-mode polarization in the cosmic microwave background (CMB) is a fundamental goal of future CMB experiments. However, because of their unprecedented sensitivity, future CMB experiments will be much more sensitive to any imperfect modelling of the Galactic foreground polarization in the reconstruction of the primordial B-mode signal. We compare the sensitivity to B-modes of different concepts of CMB satellite missions (LiteBIRD, COrE, COrE+, PRISM, EPIC, PIXIE) in the presence of Galactic foregrounds. In particular, we quantify the impact on the tensor-to-scalar parameter of incorrect foreground modelling in the component separation process. Using Bayesian fitting and Gibbs sampling, we perform the separation of the CMB and Galactic foreground B-modes. The recovered CMB B-mode power spectrum is used to compute the likelihood distribution of the tensor-to-scalar ratio. We focus the analysis on the very large angular scales that can be probed only by CMB space missions, i.e. the reionization bump, where primordial B-modes dominate over spurious B-modes induced by gravitational lensing. We find that fitting a single modified blackbody component for thermal dust where the `real' sky consists of two dust components strongly biases the estimation of the tensor-to-scalar ratio by more than 5σ for the most sensitive experiments. Neglecting in the parametric model the curvature of the synchrotron spectral index may bias the estimated tensor-to-scalar ratio by more than 1σ. For sensitive CMB experiments, omitting in the foreground modelling a 1 per cent polarized spinning dust component may induce a non-negligible bias in the estimated tensor-to-scalar ratio.

  19. The eHabitat R library: Large scale modelling of habitat uniqueness for the management and assessment of protected areas

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Martínez-López, Javier; Dubois, Gregoire

    2014-05-01

    There are over 100,000 protected areas in the world that need to be assessed systematically according to their ecological values in order to support decision making and fund allocation processes. Ecological modelling has become an important tool for conservation and biodiversity studies. Moreover, linking remote sensing with ecological modelling can help overcome some typical limitations of ecological studies related to conservation, such as sampling effort bias of biodiversity inventories. Habitats offer refuge for species and can be mapped at ecoregion scale by means of remote sensing. Large-scale ecological models are thus needed to make progress on important conservation challenges and the adoption of an open source community approach is crucial for its implementation. R is a Free and Open Source Software (FOSS) which allows the analysis of large amounts of remote sensing data through multivariate statistics and GIS capabilities, offers interoperability with other models and tools, and can be further implemented and used within a web processing service, as well as under a local desktop environment. The eHabitat R library, one of the Web Processing Services (WPS) supporting DOPA, the Digital Observatory for Protected Areas (http://dopa.jrc.ec.europa.eu/), computes habitat similarities and proposes a habitat replaceability index (HRI) which can be used for characterizing each protected area worldwide. More exactly, eHabitat computes for each protected area a map of probabilities to find areas presenting ecological characteristics that are similar to those found in the selected protected area. The library is available online for use and extension by the research and end-user communities. This paper presents the eHabitat library, as an example of a successful development and application of FOSS tools for geoscientific tasks, in particular for delivering critical services in relation to the conservation of protected areas. Some methodological aspects, such
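
    As an illustration of the kind of similarity mapping described above, the sketch below scores pixels of a hypothetical ecoregion by their Mahalanobis distance to the ecological conditions sampled inside a protected area and converts the distances to a similarity probability. This is an assumed formulation for illustration only; the exact statistics used by eHabitat may differ, and all indicator values are synthetic.

      import numpy as np
      from scipy.stats import chi2

      rng = np.random.default_rng(4)

      # rows = pixels, columns = remotely sensed ecological indicators (e.g. NDVI, elevation, rainfall)
      inside_pa = rng.normal([0.6, 800.0, 1200.0], [0.05, 60.0, 100.0], size=(500, 3))
      ecoregion = rng.normal([0.5, 900.0, 1000.0], [0.15, 200.0, 300.0], size=(5000, 3))

      mu = inside_pa.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(inside_pa, rowvar=False))
      diff = ecoregion - mu
      d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)      # squared Mahalanobis distance per pixel

      # convert distances to a similarity "probability" via the chi-square survival function
      similarity = chi2.sf(d2, df=ecoregion.shape[1])
      print(f"share of ecoregion pixels with similarity > 0.5: {(similarity > 0.5).mean():.1%}")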

  20. Large-scale modelling of forest hydrological processes and their long-term effect on water yield

    NASA Astrophysics Data System (ADS)

    Watson, Fred G. R.; Vertessy, Robert A.; Grayson, Rodger B.

    1999-04-01

    A water balance model was used to simulate the long-term increases in water yield with forest age which are observed in the mountain ash (Eucalyptus regnans) forests of Victoria, Australia. Specifically, the hypothesis was tested that water yield changes could be explained by changes in evapotranspiration resulting from changes in leaf area index (LAI). A curve predicting changes in the total LAI of mountain ash forest was constructed from ground-based observations and their correlation with Landsat Thematic Mapper measurements of the transformed normalized difference vegetation index (TNDVI). A further curve for mountain ash canopy LAI was constructed from destructive LAI measurements and stem diameter measurements. The curves were incorporated within Macaque, a large-scale, physically based water balance model which was applied to three forested catchments (total area 145 km²). The model was used to evaluate the effect of changes in LAI on predicted stream flow over an 82-year period spanning the 1939 wildfires which burnt most of the area. The use of the LAI curves induced improvement in the predicted hydrographs relative to the case for constant LAI, but the change was not large enough to account for all of the difference in water yield between old-growth and regrowth forests. Of a number of possibilities, concomitant changes in leaf conductance with age were suggested as an additional control on stream flow. These were estimated using data on stand sapwood area per unit leaf area and coded into Macaque. The hydrograph predicted using both the LAI curves and a new leaf conductance versus age curve accurately predicted the observed long-term changes in water yield. We conclude that LAI is a partial control on long-term yield changes, but that another control, water use efficiency per unit LAI, is also operative.

  1. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    SciTech Connect

    Bonne, François; Bonnay, Patrick; Bradu, Benjamin

    2014-01-29

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key-aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to have precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads (such as a pulsed heat load expected to take place in future fusion reactors such as those expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details how to set the WCS model up to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller has been implemented on a Schneider PLC and fully tested first on the CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of start and stop of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.
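
    The central computation named in this abstract, synthesis of a Linear Quadratic Optimal feedback gain for a linearised plant, can be sketched as follows. The state-space matrices below are arbitrary placeholders, not the warm compression station model of the paper.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # hypothetical 3-state, 2-input linearised plant  dx/dt = A x + B u
      A = np.array([[-0.5,  0.2,  0.0],
                    [ 0.0, -1.0,  0.3],
                    [ 0.1,  0.0, -0.8]])
      B = np.array([[1.0, 0.0],
                    [0.0, 0.5],
                    [0.2, 0.0]])
      Q = np.diag([10.0, 10.0, 1.0])   # state weighting (e.g. penalise pressure deviations)
      R = np.diag([1.0, 1.0])          # input weighting (penalise actuator effort)

      P = solve_continuous_are(A, B, Q, R)      # solve the continuous algebraic Riccati equation
      K = np.linalg.solve(R, B.T @ P)           # optimal state feedback gain, u = -K x

      # closed-loop check: all eigenvalues of (A - B K) should have negative real parts
      print("feedback gain K =\n", K)
      print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))

    The resulting feedback u = -Kx minimises the usual quadratic cost of state deviations and actuator effort; the eigenvalue check simply confirms that the resulting closed loop is stable.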

  2. Model based multivariable controller for large scale compression stations. Design and experimental validation on the LHC 18KW cryorefrigerator

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick; Bradu, Benjamin

    2014-01-01

    In this paper, a multivariable model-based non-linear controller for Warm Compression Stations (WCS) is proposed. The strategy is to replace all the PID loops controlling the WCS with an optimally designed model-based multivariable loop. This new strategy leads to high stability and fast disturbance rejection such as those induced by a turbine or a compressor stop, a key-aspect in the case of large scale cryogenic refrigeration. The proposed control scheme can be used to have precise control of every pressure in normal operation or to stabilize and control the cryoplant under high variation of thermal loads (such as a pulsed heat load expected to take place in future fusion reactors such as those expected in the cryogenic cooling systems of the International Thermonuclear Experimental Reactor ITER or the Japan Torus-60 Super Advanced fusion experiment JT-60SA). The paper details how to set the WCS model up to synthesize the Linear Quadratic Optimal feedback gain and how to use it. After preliminary tuning at CEA-Grenoble on the 400W@1.8K helium test facility, the controller has been implemented on a Schneider PLC and fully tested first on the CERN's real-time simulator. Then, it was experimentally validated on a real CERN cryoplant. The efficiency of the solution is experimentally assessed using a reasonable operating scenario of start and stop of compressors and cryogenic turbines. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  3. Conceptual Numerical Modeling of Large-Scale Footwall Behavior at the Kiirunavaara Mine, and Implications for Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Svartsjaern, M.; Saiang, D.; Nordlund, E.; Eitzenberger, A.

    2016-03-01

    Over the last 30 years, the Kiirunavaara mine has experienced a slow but progressive fracturing and movement in the footwall rock mass, which is directly related to the sublevel caving (SLC) method utilized by Luossavaara-Kiirunavaara Aktiebolag (LKAB). As part of an ongoing work, this paper focuses on describing and explaining a likely evolution path of large-scale fracturing in the Kiirunavaara footwall. The trace of this fracturing was based on a series of damage mapping campaigns carried out over the last 2 years, accompanied by numerical modeling. Data collected from the damage mapping between mine levels 320 and 907 m was used to create a 3D surface representing a conceptual boundary for the extent of the damaged volume. The extent boundary surface was used as the basis for calibrating conceptual numerical models created in UDEC. The mapping data, in combination with the numerical models, indicated a plausible evolution path of the footwall fracturing that was subsequently described. Between levels 320 and 740 m, the extent of fracturing into the footwall appears to be controlled by natural pre-existing discontinuities, while below 740 m, there are indications of a curved shear or step-path failure. The step-path is hypothesized to be activated by rock mass heave into the SLC zone above the current extraction level. Above the 320 m level, the fracturing seems to intersect a subvertical structure that daylights in the old open pit slope. Identification of these probable damage mechanisms was an important step in order to determine the requirements for a monitoring system for tracking footwall damage. This paper describes the background work for the design of the system currently being installed.

  4. Parameterization of large scale snow redistribution models using high-resolution information: tests in an alpine catchment (Invited)

    NASA Astrophysics Data System (ADS)

    MacDonald, M. K.; Pomeroy, J. W.; Pietroniro, A.

    2009-12-01

    Snowcover development in alpine environments is highly variable due to the heterogeneous arrangements of terrain and vegetation cover. The interactions between wind flow and surface aerodynamic characteristics produce complex blowing snow redistribution regimes. The snowcover distribution is also influenced by ablation, which varies with surface energetics over complex terrain. For medium to large scale hydrological and atmospheric calculations it is necessary to estimate blowing snow fluxes over incremental land units no smaller than hydrological response units (HRU) or landscape tiles. Blowing snow process algorithms exist and can be deployed, though a robust method to obtain HRU-scale wind speed forcing does not. In this study, snow redistribution by wind was simulated over HRUs in a mountain tundra catchment in western Canada. The HRUs and their aerodynamic properties were delineated using wind speeds derived from a high-resolution empirical terrain-based wind flow model. The wind flow model, based on Ryan (1977), uses a digital elevation model (DEM), reference wind direction and reference wind speed to calculate wind ratios (the ratio of simulated grid cell wind speed to reference wind speed) at 10 m cell resolution, based on terrain aspect, curvature and slope. A high resolution LiDAR DEM of the catchment was available for this. Three parameters are required by the model: the curvature length scale and the weights that control the influence of curvature and slope on calculated wind ratios. These three parameters were estimated via calibration on approximately 1,000 wind speed measurements from each of three meteorological stations located within the Marmot Creek Research Basin. Snow depths estimated from subtraction of summer from winter LiDAR-derived DEMs were used to analyze the relationships between snow depth, calculated wind ratios and terrain variables such as aspect, curvature, elevation, slope, and vegetation height. Snow depth was most strongly
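
    A minimal sketch of a terrain-based wind-ratio calculation of the kind described above is given below. The weighting form, the normalisation and all numbers are assumptions chosen only for illustration; the exact Ryan (1977) formulation and the calibrated weights of the study may differ.

      import numpy as np

      def wind_ratio(slope, aspect, curvature, wind_dir_deg,
                     w_slope=0.5, w_curv=0.5, curv_length=500.0):
          """Ratio of local to reference wind speed on a DEM grid (illustrative form).

          slope        : terrain slope [rise/run] per grid cell
          aspect       : downslope direction [deg from north] per grid cell
          curvature    : terrain curvature per grid cell [1/m]
          wind_dir_deg : reference wind direction [deg from north]
          w_slope, w_curv : calibrated weights for slope and curvature influence
          curv_length  : curvature length scale [m] used to non-dimensionalise curvature
          """
          # slope component aligned with the reference wind direction
          slope_in_wind = slope * np.cos(np.deg2rad(aspect - wind_dir_deg))
          curv_scaled = curvature * curv_length
          # roughly normalise both terms before weighting
          s_norm = slope_in_wind / (2.0 * np.max(np.abs(slope_in_wind)) + 1e-12)
          c_norm = curv_scaled / (2.0 * np.max(np.abs(curv_scaled)) + 1e-12)
          return 1.0 + w_slope * s_norm + w_curv * c_norm

      # toy 1x3 grid: flat cell, sloping cell, convex (ridge-like) cell
      slope = np.array([[0.0, 0.2, 0.1]])
      aspect = np.array([[0.0, 180.0, 90.0]])
      curvature = np.array([[0.0, 0.0, 0.004]])
      print(wind_ratio(slope, aspect, curvature, wind_dir_deg=180.0))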

  5. Modeling the MJO in a cloud-resolving model with parameterized large-scale dynamics: Vertical structure, radiation, and horizontal advection of dry air

    NASA Astrophysics Data System (ADS)

    Wang, Shuguang; Sobel, Adam H.; Nie, Ji

    2016-03-01

    Two Madden-Julian Oscillation (MJO) events, observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign, are simulated in a limited-area cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics—the conventional weak temperature gradient (WTG) approximation, vertical mode-based spectral WTG (SWTG), and damped gravity wave coupling (DGW)—are employed. A number of changes to the implementation of the large-scale parameterizations, as well as the model itself, are made and lead to improvements in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy profile, while DGW's is less so, and SWTG produces a profile between the two, and in better agreement with observations. Numerical experiments without horizontal advection of moisture suggest that that process significantly reduces the precipitation and suppresses the top-heaviness of large-scale vertical motion during the MJO active phases. Experiments in which a temporally constant radiative heating profile is used indicate that radiative feedbacks significantly amplify the MJO. Experiments in which interactive radiation is used produce agreement with observations that is much better than that achieved in previous work, though not as good as that with imposed time-varying radiative heating. Our results highlight the importance of both horizontal advection of moisture and radiative feedbacks to the dynamics of the MJO.

  6. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    PubMed

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a PACS. This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect the patient clinical data. However, these techniques are not systematically applied to the picture archiving and communication system (PACS) in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve the ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that accomplishes the controls of ISO/IEC 27002:2013 standard and criteria of security and privacy from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities.

  7. Business Model for the Security of a Large-Scale PACS, Compliance with ISO/27002:2013 Standard.

    PubMed

    Gutiérrez-Martínez, Josefina; Núñez-Gaona, Marco Antonio; Aguirre-Meneses, Heriberto

    2015-08-01

    Data security is a critical issue in an organization; proper information security management (ISM) is an ongoing process that seeks to build and maintain programs, policies, and controls for protecting information. A hospital is one of the most complex organizations, where patient information has not only legal and economic implications but, more importantly, an impact on the patient's health. Imaging studies include medical images, patient identification data, and proprietary information of the study; these data are contained in the storage device of a PACS. This system must preserve the confidentiality, integrity, and availability of patient information. There are techniques such as firewalls, encryption, and data encapsulation that contribute to the protection of information. In addition, the Digital Imaging and Communications in Medicine (DICOM) standard and the requirements of the Health Insurance Portability and Accountability Act (HIPAA) regulations are also used to protect the patient clinical data. However, these techniques are not systematically applied to the picture archiving and communication system (PACS) in most cases and are not sufficient to ensure the integrity of the images and associated data during transmission. The ISO/IEC 27001:2013 standard has been developed to improve the ISM. Currently, health institutions lack effective ISM processes that enable reliable interorganizational activities. In this paper, we present a business model that accomplishes the controls of ISO/IEC 27002:2013 standard and criteria of security and privacy from DICOM and HIPAA to improve the ISM of a large-scale PACS. The methodology associated with the model can monitor the flow of data in a PACS, facilitating the detection of unauthorized access to images and other abnormal activities. PMID:25634674

  8. Techno-economic Modeling of the Integration of 20% Wind and Large-scale Energy Storage in ERCOT by 2030

    SciTech Connect

    Baldick, Ross; Webber, Michael; King, Carey; Garrison, Jared; Cohen, Stuart; Lee, Duehee

    2012-12-21

    This study's objective is to examine interrelated technical and economic avenues for the Electric Reliability Council of Texas (ERCOT) grid to incorporate up to and over 20% wind generation by 2030. Our specific interests are to look at the factors that will affect the implementation of both high level of wind power penetration (> 20% generation) and installation of large scale storage.

  9. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    NASA Technical Reports Server (NTRS)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as Ẽ = ½⟨u′ᵢu′ᵢ⟩ (summation over the three Cartesian components i implied), where u′ᵢ represents a component of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for Ẽ, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of Ẽ. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes

  10. Experimental validation of computational models for large-scale nonlinear ultrasound simulations in heterogeneous, absorbing fluid media

    NASA Astrophysics Data System (ADS)

    Martin, Elly; Treeby, Bradley E.

    2015-10-01

    To increase the effectiveness of high intensity focused ultrasound (HIFU) treatments, prediction of ultrasound propagation in biological tissues is essential, particularly where bones are present in the field. This requires complex full-wave computational models which account for nonlinearity, absorption, and heterogeneity. These models must be properly validated but there is a lack of analytical solutions which apply in these conditions. Experimental validation of the models is therefore essential. However, accurate measurement of HIFU fields is not trivial. Our aim is to establish rigorous methods for obtaining reference data sets with which to validate tissue realistic simulations of ultrasound propagation. Here, we present preliminary measurements which form an initial validation of simulations performed using the k-Wave MATLAB toolbox. Acoustic pressure was measured on a plane in the field of a focused ultrasound transducer in free field conditions to be used as a Dirichlet boundary condition for simulations. Rectangular and wedge shaped olive oil scatterers were placed in the field and further pressure measurements were made in the far field for comparison with simulations. Good qualitative agreement was observed between the measured and simulated nonlinear pressure fields.

  11. Validation of a simple model to predict the performance of methane oxidation systems, using field data from a large scale biocover test field.

    PubMed

    Geck, Christoph; Scharff, Heijo; Pfeiffer, Eva-Maria; Gebert, Julia

    2016-10-01

    On a large scale test field (1060 m²) methane emissions were monitored over a period of 30 months. During this period, the test field was loaded at rates between 14 and 46 g CH₄ m⁻² d⁻¹. The total area was subdivided into 60 monitoring grid fields at 17.7 m² each, which were individually surveyed for methane emissions and methane oxidation efficiency. The latter was calculated both from the direct methane mass balance and from the shift of the carbon dioxide - methane ratio between the base of the methane oxidation layer and the emitted gas. The base flux to each grid field was back-calculated from the data on methane oxidation efficiency and emission. Resolution to grid field scale allowed the analysis of the spatial heterogeneity of all considered fluxes. Higher emissions were measured in the upslope area of the test field. This was attributed to the capillary barrier integrated into the test field resulting in a higher diffusivity and gas permeability in the upslope area. Predictions of the methane oxidation potential were estimated with the simple model Methane Oxidation Tool (MOT) using soil temperature, air filled porosity and water tension as input parameters. It was found that the test field could oxidize 84% of the injected methane. The MOT predictions seemed to be realistic, although the higher range of the predicted oxidation potentials could not be challenged because the load to the field was too low. Spatial and temporal emission patterns were found indicating heterogeneity of fluxes and efficiencies in the test field. No constant share of direct emissions was found as proposed by the MOT, although the mean share of emissions throughout the monitoring period was in the range of the expected emissions. PMID:27426022
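
    The ratio-shift calculation mentioned above can be made explicit under the simplifying assumptions that every oxidised CH4 molecule appears as CO2 and that no other CO2 sources or sinks act between the layer base and the surface (an idealisation, not necessarily the exact mass balance used in the study):

      def oxidized_fraction(r_in: float, r_out: float) -> float:
          """Fraction of CH4 oxidised, from CO2:CH4 ratios at the base (r_in) and at emission (r_out).

          Per unit of CH4 entering the layer: CH4_out = 1 - f and CO2_out = r_in + f,
          so r_out = (r_in + f) / (1 - f), which rearranges to f = (r_out - r_in) / (1 + r_out).
          """
          return (r_out - r_in) / (1.0 + r_out)

      # example: landfill gas with CO2:CH4 = 1.0 at the layer base and 2.0 in the emitted gas
      print(f"oxidised fraction ~ {oxidized_fraction(1.0, 2.0):.2f}")   # -> 0.33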

  12. Validation of a simple model to predict the performance of methane oxidation systems, using field data from a large scale biocover test field.

    PubMed

    Geck, Christoph; Scharff, Heijo; Pfeiffer, Eva-Maria; Gebert, Julia

    2016-10-01

    On a large scale test field (1060 m²) methane emissions were monitored over a period of 30 months. During this period, the test field was loaded at rates between 14 and 46 g CH₄ m⁻² d⁻¹. The total area was subdivided into 60 monitoring grid fields at 17.7 m² each, which were individually surveyed for methane emissions and methane oxidation efficiency. The latter was calculated both from the direct methane mass balance and from the shift of the carbon dioxide - methane ratio between the base of the methane oxidation layer and the emitted gas. The base flux to each grid field was back-calculated from the data on methane oxidation efficiency and emission. Resolution to grid field scale allowed the analysis of the spatial heterogeneity of all considered fluxes. Higher emissions were measured in the upslope area of the test field. This was attributed to the capillary barrier integrated into the test field resulting in a higher diffusivity and gas permeability in the upslope area. Predictions of the methane oxidation potential were estimated with the simple model Methane Oxidation Tool (MOT) using soil temperature, air filled porosity and water tension as input parameters. It was found that the test field could oxidize 84% of the injected methane. The MOT predictions seemed to be realistic, although the higher range of the predicted oxidation potentials could not be challenged because the load to the field was too low. Spatial and temporal emission patterns were found indicating heterogeneity of fluxes and efficiencies in the test field. No constant share of direct emissions was found as proposed by the MOT, although the mean share of emissions throughout the monitoring period was in the range of the expected emissions.

  13. Water consumption and allocation strategies along the river oases of Tarim River based on large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Yu, Yang; Disse, Markus; Yu, Ruide

    2016-04-01

    With a main stream of 1,321 km, located in an arid area in northwest China, the Tarim River is China's longest inland river. The Tarim basin on the northern edge of the Taklamakan desert is an extremely arid region. In this region, agricultural water consumption and allocation management are crucial to address the conflicts among irrigation water users from upstream to downstream. Since 2011, the German Ministry of Science and Education BMBF established the Sino-German SuMaRiO project, for the sustainable management of river oases along the Tarim River. The project aims to contribute to a sustainable land management which explicitly takes into account ecosystem functions and ecosystem services. SuMaRiO will identify realizable management strategies, considering social, economic and ecological criteria. This will have positive effects for nearly 10 million inhabitants of different ethnic groups. The modelling of water consumption and allocation strategies is a core block in the SuMaRiO cluster. A large-scale hydrological model (MIKE HYDRO Basin) was established for the purpose of sustainable agricultural water management in the main stem Tarim River. MIKE HYDRO Basin is an integrated, multipurpose, map-based decision support tool for river basin analysis, planning and management. It provides detailed simulation results concerning water resources and land use in the catchment areas of the river. Calibration data and future predictions based on a large amount of data were acquired. The results of model calibration indicated a close correlation between simulated and observed values. Scenarios with changes in irrigation strategies and land use distributions were investigated. Irrigation scenarios revealed that the available irrigation water has significant and varying effects on the yields of different crops. Irrigation water saving could reach up to 40% in the water-saving irrigation scenario. Land use scenarios illustrated that an increase of farmland area in the

  14. Modal analysis of measurements from a large-scale VIV model test of a riser in linearly sheared flow

    NASA Astrophysics Data System (ADS)

    Lie, H.; Kaasen, K. E.

    2006-05-01

    Large-scale model testing of a tensioned steel riser in well-defined sheared current was performed at Hanøytangen outside Bergen, Norway in 1997. The length of the model was 90 m and the diameter was 3 cm. The aim of the present work is to look into this information and try to improve the understanding of vortex-induced vibrations (VIV) for cases with very high order of responding modes, and in particular to study if and under which circumstances the riser motions would be single-mode or multi-mode. The measurement system consisted of 29 biaxial gauges for bending moment. The signals are processed to yield curvature and displacement and further to identify modes of vibration. A modal approach is used successfully employing a combination of signal filtering and least-squares fitting of precalculated mode-shapes. As a part of the modal analysis, it is demonstrated that the equally spaced instrumentation limited the maximum mode number to be extracted to be equal to the number of instrumentation locations. This imposed a constraint on the analysis of in-line (IL) vibration, which occurs at higher frequencies and involves higher modes than cross-flow (CF). The analysis has shown that in general the riser response was irregular (i.e. broad-banded) and that the degree of irregularity increases with the flow speed. In some tests distinct spectral peaks could be seen, corresponding to a dominating mode. No occurrences of single-mode (lock-in) were seen. The IL response is more broad-banded than the CF response and contains higher frequencies. The average value of the displacement r.m.s over the length of the riser is computed to indicate the magnitude of VIV motion during one test. In the CF direction the average displacement is typically 1/4 of the diameter, almost independent of the flow speed. For the IL direction the values are in the range 0.05-0.08 of the diameter. The peak frequency taken from the spectra of the CF displacement at riser midpoint show approximately
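
    The least-squares modal decomposition described above can be sketched with idealised sinusoidal mode shapes of a taut string standing in for the precalculated riser modes (an assumption; the actual analysis also involved band-pass filtering and worked from curvature signals):

      import numpy as np

      rng = np.random.default_rng(3)
      L_riser = 90.0                       # riser length [m]
      n_sensors = 29                       # number of equally spaced measurement points
      z = np.linspace(L_riser / (n_sensors + 1), L_riser * n_sensors / (n_sensors + 1), n_sensors)

      def mode_shape(n, z):
          """Idealised n-th mode shape of a taut string pinned at both ends."""
          return np.sin(n * np.pi * z / L_riser)

      # synthetic "measurement": a mix of modes 8 and 9 plus sensor noise
      true = 0.8 * mode_shape(8, z) + 0.4 * mode_shape(9, z)
      meas = true + rng.normal(0.0, 0.02, n_sensors)

      # least-squares fit of the first n_modes precalculated mode shapes
      n_modes = 15                         # must not exceed the number of sensors
      Phi = np.column_stack([mode_shape(n, z) for n in range(1, n_modes + 1)])
      weights, *_ = np.linalg.lstsq(Phi, meas, rcond=None)
      dominant = np.argmax(np.abs(weights)) + 1
      print("modal weights:", np.round(weights, 3))
      print("dominant mode:", dominant)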

  15. Biophysically realistic minimal model of dopamine neuron

    NASA Astrophysics Data System (ADS)

    Oprisan, Sorinel

    2008-03-01

    We proposed and studied a new biophysically relevant computational model of dopaminergic neurons. Midbrain dopamine neurons are involved in motivation and the control of movement, and have been implicated in various pathologies such as Parkinson's disease, schizophrenia, and drug abuse. The model we developed is a single-compartment Hodgkin-Huxley (HH)-type parallel conductance membrane model. The model captures the essential mechanisms underlying the slow oscillatory potentials and plateau potential oscillations. The main currents involved are: 1) a voltage-dependent fast calcium current, 2) a small conductance potassium current that is modulated by the cytosolic concentration of calcium, and 3) a slow voltage-activated potassium current. We developed multidimensional bifurcation diagrams and extracted the effective domains of sustained oscillations. The model includes a calcium balance due to the fundamental importance of calcium influx as proved by simultaneous electrophysiological and calcium imaging procedures. Although there is significant evidence to suggest a partially electrogenic calcium pump, all previous models considered only electrogenic pumps. We investigated the effect of the electrogenic calcium pump on the bifurcation diagram of the model and compared our findings against the experimental results.
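
    A minimal single-compartment parallel-conductance sketch with the three currents listed above plus a calcium balance is shown below. Every parameter value is a placeholder chosen only to make the example self-contained and runnable; none of them are the published model's values, and the electrogenic pump term is omitted for brevity.

      import numpy as np

      C_m = 1.0                    # membrane capacitance [uF/cm^2]
      g_ca, e_ca = 2.0, 100.0      # fast Ca conductance [mS/cm^2] and reversal [mV]
      g_sk, e_k = 4.0, -90.0       # SK (Ca-dependent K) conductance and K reversal
      g_k = 3.0                    # slow K conductance
      g_l, e_l = 0.1, -60.0        # leak conductance and reversal
      k_ca, tau_ca = 0.02, 200.0   # Ca influx scaling and removal time constant [ms]
      k_d = 0.3                    # SK half-activation Ca concentration [uM]
      tau_n = 30.0                 # slow K activation time constant [ms]

      def m_inf(v):   # instantaneous Ca activation
          return 1.0 / (1.0 + np.exp(-(v + 30.0) / 7.0))

      def n_inf(v):   # steady-state slow K activation
          return 1.0 / (1.0 + np.exp(-(v + 25.0) / 10.0))

      dt, T = 0.02, 4000.0                     # [ms]
      v, ca, n = -55.0, 0.1, 0.0
      trace = []
      for _ in range(int(T / dt)):
          i_ca = g_ca * m_inf(v) * (v - e_ca)
          i_sk = g_sk * (ca**4 / (ca**4 + k_d**4)) * (v - e_k)
          i_k = g_k * n * (v - e_k)
          i_l = g_l * (v - e_l)
          v += dt * (-(i_ca + i_sk + i_k + i_l)) / C_m
          n += dt * (n_inf(v) - n) / tau_n
          ca += dt * (-k_ca * i_ca - ca / tau_ca)   # Ca influx (i_ca is inward/negative) and removal
          trace.append(v)

      print(f"membrane potential range over the run: {min(trace):.1f} to {max(trace):.1f} mV")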

  16. Realistic power output modeling of CPV modules

    NASA Astrophysics Data System (ADS)

    Steiner, Marc; Siefer, Gerald; Bösch, Armin; Hornung, Thorsten; Bett, Andreas W.

    2012-10-01

    In this work, we introduce a new model called YieldOpt, which calculates the power output of CPV modules. It uses SMARTS2 to model the spectral irradiance, a ray tracing program to model the optics and SPICE network simulation to model the electrical characteristic of triple-junction (3J) cells. The calculated power output is compared to data measured on five CPV modules operating in Freiburg, Germany, during a period from October 2011 to March 2012. Four of the modules use lattice-matched 3J cells; one of these modules also has reflective secondary optics. In one of the five modules, novel metamorphic 3J cells are used. The agreement of the predicted power output calculated by YieldOpt with the measured data is quantified using the normalized root mean square error. A good agreement between simulation and measurement is achieved. Moreover, the predicted energy yield derived from the new model is compared with the measured energy yield. A good agreement between the measured data and simulated data is achieved. In addition, a high accuracy in predicting the energy yield of different CPV modules is demonstrated. Finally, the new model is compared with three empirical models.
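
    The comparison metric named above, the normalized root mean square error, is sketched below for clarity; normalising by the mean of the measured values is one common convention, and the paper may normalise differently. The power values are hypothetical.

      import numpy as np

      def nrmse(predicted, measured):
          """Root mean square error normalised by the mean of the measured values."""
          predicted, measured = np.asarray(predicted, float), np.asarray(measured, float)
          rmse = np.sqrt(np.mean((predicted - measured) ** 2))
          return rmse / np.mean(measured)

      # hypothetical hourly DC power values [W] for one CPV module
      measured = [510.0, 498.0, 530.0, 470.0, 505.0]
      predicted = [500.0, 505.0, 520.0, 480.0, 495.0]
      print(f"NRMSE = {nrmse(predicted, measured):.3%}")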

  17. Assimilation of satellite data to optimize large scale hydrological model parameters: a case study for the SWOT mission

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-04-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observation of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization for large scale river routing models which are typically employed in Land Surface Models (LSM) for global scale applications. The method consists in applying a data assimilation approach, the Extended Kalman Filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA-TRIP Continental Hydrologic System. Indeed, parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which might have locally significant errors. The current study focuses on the Niger basin, a trans-boundary river, which is the main source of fresh water for all the riparian countries. In addition, geopolitical issues in this region can restrict the exchange of hydrological data, so that SWOT should help improve this situation by making hydrological data freely available. In a previous study, the model was first evaluated against in-situ and satellite derived data sets within the framework of the international African Monsoon Multi-disciplinary Analysis (AMMA) project. Since the SWOT observations are not available yet and also to assess the proposed assimilation method, the study is carried out under the framework of an Observing System Simulation Experiment (OSSE). It is assumed that modeling errors are only due to uncertainties in the Manning coefficient. The true Manning

  18. Metastable cosmic strings in realistic models

    SciTech Connect

    Holman, R.; Hsu, S.; Vachaspati, T.; Watkins, R.

    1992-11-01

    The stability of the electroweak Z-string is investigated at high temperatures. The results show that, while finite temperature corrections can improve the stability of the Z-string, their effect is not strong enough to stabilize the Z-string in the standard electroweak model. Consequently, the Z-string will be unstable even under the conditions present during the electroweak phase transition. Phenomenologically viable models based on the gauge group SU(2)_L × SU(2)_R × U(1)_{B-L} are then considered, and it is shown that metastable strings exist and are stable to small perturbations for a large region of the parameter space for these models. It is also shown that these strings are superconducting with bosonic charge carriers. The string superconductivity may be able to stabilize segments and loops against dynamical contraction. Possible implications of these strings for cosmology are discussed.

  19. Metastable cosmic strings in realistic models

    SciTech Connect

    Holman, R. (Dept. of Physics); Hsu, S. (Lyman Lab. of Physics); Vachaspati, T. (Dept. of Physics and Astronomy); Watkins, R. (Fermi National Accelerator Lab., Batavia, IL)

    1992-01-01

    The stability of the electroweak Z-string is investigated at high temperatures. The results show that, while finite temperature corrections can improve the stability of the Z-string, their effect is not strong enough to stabilize the Z-string in the standard electroweak model. Consequently, the Z-string will be unstable even under the conditions present during the electroweak phase transition. Phenomenologically viable models based on the gauge group SU(2)_L × SU(2)_R × U(1)_{B-L} are then considered, and it is shown that metastable strings exist and are stable to small perturbations for a large region of the parameter space for these models. It is also shown that these strings are superconducting with bosonic charge carriers. The string superconductivity may be able to stabilize segments and loops against dynamical contraction. Possible implications of these strings for cosmology are discussed.

  20. Realistic model for SU(5) grand unification

    SciTech Connect

    Oshimo, Noriyuki

    2009-10-01

    A grand unified model based on SU(5) and supersymmetry is presented. Pairs of superfields belonging to 15 and 15-bar representations are newly introduced, two pairs with even and one pair with odd matter parity. Improper mass relations in the minimal model between charged leptons and d-type quarks are corrected. Neutrinos have nonvanishing masses, with large angles for generation mixings of the leptons being compatible with the small angles of the quarks. A new source for lepton-number generation in the early universe is provided.

  1. A New FE Modeling Method for Isothermal Local Loading Process of Large-scale Complex Titanium Alloy Components Based on DEFORM-3D

    SciTech Connect

    Zhang Dawei; Yang He; Sun Zhichao; Fan Xiaoguang

    2010-06-15

    Isothermal local loading process provides a new way to form large-scale complex titanium alloy components. The forming process is characterized by extreme size (large scale globally but comparatively small size in each local loading region), multi-parameter effects, and a complicated loading path. Establishing a reasonable finite element model is one of the key problems that must be solved urgently in the research and development of isothermal local loading forming process of large-scale complex titanium alloy components. In this paper, a new finite element model of the isothermal local loading process is developed under the DEFORM-3D environment based on the solution of some key techniques. The modeling method has the following features: (1) different meshing techniques are used in different loading areas and the number of meshed elements is determined according to the deformation characteristic in different local loading steps in order to improve computational efficiency; (2) the accurate magnitude of the friction factor under titanium alloy hot forming (isothermal forming) condition is adopted instead of the typical value for lubricated hot forming processes; (3) different FEM solvers are chosen at different stages according to the loading characteristic and the contact state; (4) in contrast to the local component model, a full 3D component is modeled to simulate the process. The 3D-FE model is validated against experimental data from large-scale bulkhead forming under isothermal local loading. The model can describe the quantitative relationships between the forming conditions and the forming results. The results of the present study may provide a basis for studying the local deformation mechanism, selecting reasonable parameters, optimizing the die design and the process control in isothermal local loading process of large-scale complex titanium alloy components.

  2. A New FE Modeling Method for Isothermal Local Loading Process of Large-scale Complex Titanium Alloy Components Based on DEFORM-3D

    NASA Astrophysics Data System (ADS)

    Zhang, Dawei; Yang, He; Sun, Zhichao; Fan, Xiaoguang

    2010-06-01

    Isothermal local loading process provides a new way to form large-scale complex titanium alloy components. The forming process is characterized by extreme size (large scale globally but comparatively small size in each local loading region), multi-parameter effects, and a complicated loading path. Establishing a reasonable finite element model is one of the key problems that must be solved urgently in the research and development of isothermal local loading forming process of large-scale complex titanium alloy components. In this paper, a new finite element model of the isothermal local loading process is developed under the DEFORM-3D environment based on the solution of some key techniques. The modeling method has the following features: (1) different meshing techniques are used in different loading areas and the number of meshed elements is determined according to the deformation characteristic in different local loading steps in order to improve computational efficiency; (2) the accurate magnitude of the friction factor under titanium alloy hot forming (isothermal forming) condition is adopted instead of the typical value for lubricated hot forming processes; (3) different FEM solvers are chosen at different stages according to the loading characteristic and the contact state; (4) in contrast to the local component model, a full 3D component is modeled to simulate the process. The 3D-FE model is validated against experimental data from large-scale bulkhead forming under isothermal local loading. The model can describe the quantitative relationships between the forming conditions and the forming results. The results of the present study may provide a basis for studying the local deformation mechanism, selecting reasonable parameters, optimizing the die design and the process control in isothermal local loading process of large-scale complex titanium alloy components.

  3. Towards Realistic Modeling of Massive Star Clusters

    NASA Astrophysics Data System (ADS)

    Gnedin, O.; Li, H.

    2016-06-01

    Cosmological simulations of galaxy formation are rapidly advancing towards smaller scales. Current models can now resolve giant molecular clouds in galaxies and predict basic properties of star clusters forming within them. I will describe new theoretical simulations of the formation of the Milky Way throughout cosmic time, with the adaptive mesh refinement code ART. However, many challenges - physical and numerical - still remain. I will discuss how observations of massive star clusters and star forming regions can help us overcome some of them. Video of the talk is available at https://goo.gl/ZoZOfX

  4. Recent developments for realistic solar models

    NASA Astrophysics Data System (ADS)

    Serenelli, Aldo M.

    2014-05-01

    The "solar abundance problem" has triggered a renewed interest in revising the concept of SSM from different perspectives: 1) constituent microphysics: equation of state, nuclear rates, radiative opacities; 2) constituent macrophysics: the physical processes impact the evolution of the Sun and its present-day structure, e.g. dynamical processes induced by rotation, presence of magnetic fields; 3) challenge the hypothesis that the young Sun was chemically homogeneous: the possible interaction of the young Sun with its protoplanetary disk. Here, I briefly review and then present a (personal) view on recent advances and developments on solar modeling, part of them carried out as attempts to solve the solar abundance problem.

  5. Recent developments for realistic solar models

    SciTech Connect

    Serenelli, Aldo M.

    2014-05-02

    The 'solar abundance problem' has triggered a renewed interest in revising the concept of the standard solar model (SSM) from different perspectives: 1) constituent microphysics: equation of state, nuclear rates, radiative opacities; 2) constituent macrophysics: the physical processes that impact the evolution of the Sun and its present-day structure, e.g. dynamical processes induced by rotation and the presence of magnetic fields; 3) challenging the hypothesis that the young Sun was chemically homogeneous: the possible interaction of the young Sun with its protoplanetary disk. Here, I briefly review and then present a (personal) view of recent advances and developments in solar modeling, part of them carried out as attempts to solve the solar abundance problem.

  6. [Realistic surgical training. The Aachen model].

    PubMed

    Krones, C J; Binnebösel, M; Stumpf, M; Schumpelick, V

    2010-01-01

    The Aachen model is a practical model for teaching and advanced surgical training that is closely geared to academic learning and training. During medical school, optional student courses with structured curricula offer practical points of contact with the surgical department at all times. Besides improving manual skills, the aims are to foster interest and to identify talent. This guided structure is intensified as trainees progress into advanced training. Alongside the formal guidelines of the curriculum, the training logbook and progress interviews, quality, transparency and reliability are particularly emphasized. An evaluation of both the reforms and the surgical trainers has yet to be carried out. In addition, conveying a positive image of the profession is essential.

  7. Constructing a large-scale 3D Geologic Model for Analysis of the Non-Proliferation Experiment

    SciTech Connect

    Wagoner, J; Myers, S

    2008-04-09

    -wave studies. For regional seismic simulations we convert this realistic geologic model into elastic parameters. Upper crustal units are treated as seismically homogeneous while the lower crust and upper mantle are parameterized by a smoothly varying velocity profile. In order to mitigate spurious reflections, the lower crust and upper mantle are treated as velocity gradients as a function of depth.

  8. Development and application of large-scale hydrologic and aquatic carbon models to understand riverine CO2 evasion in Amazonia

    NASA Astrophysics Data System (ADS)

    Howard, E. A.; Coe, M. T.; Foley, J. A.; Costa, M. H.

    2004-12-01

    Many researchers are investigating the topic of CO2 efflux to the atmosphere from waters of the Amazon basin at several scales. We are developing a physically based modeling system to simulate this flux throughout the whole basin as a function of time-transient climate, vegetation, and hydrology. This modeling system includes an ecosystem land surface model (IBIS; Foley et al. 1996, Kucharik et al. 2000), a hydrological routing model (HYDRA; Coe 2000, Coe et al. 2002), and a new aquatic carbon processing module that we are incorporating into HYDRA (Howard et al. in prep). HYDRA has been recently modified to better represent river discharge and flood extent and height throughout the Amazon Basin. These modifications include: 1) using empirically derived equations representing stream width and height at flood initiation (Costa et al. 2002) to provide more accurate estimates of the initiation and cessation of flood conditions; and 2) using spatially explicit river sinuosity data (Costa et al. 2002) and a stream velocity function based on the Manning equation to provide more realistic representation of stream flow timing and magnitude. HYDRA has been calibrated and validated with observations of river discharge, water height, and flooded area at numerous locations in the mainstem and headwaters of the basin. Results of this validation show better agreement with observations than the previous version of HYDRA but also indicate the need for improved land surface topography and precipitation datasets. The aquatic carbon processing module prototype is currently implemented as an aspatial STELLA® model, decoupled from HYDRA, that simulates individual grid cells (at ~9 km resolution). We drive the model with IBIS-derived hydrological inputs from the land, and with empirically derived estimates of C inputs (from CAMREX and LBA sources). To allow for seasonal fluctuations in the aquatic-terrestrial transition zone, for each timestep we simulate the volume of
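
    The Manning-based velocity function mentioned above follows the standard open-channel form; a minimal sketch (SI units, with illustrative parameter values that are not taken from the study) is:

        def manning_velocity(n, hydraulic_radius, slope):
            # v = (1/n) * R_h^(2/3) * S^(1/2)  (Manning equation, SI units)
            return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

        # Example: n = 0.035 (hypothetical natural channel), R_h = 3 m, slope = 0.0002
        print(manning_velocity(0.035, 3.0, 2e-4))   # ~0.84 m/s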

  9. Large Scale Shape Optimization for Accelerator Cavities

    SciTech Connect

    Akcelik, Volkan; Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Xiao, Li-Ling; Ko, Kwok; /SLAC

    2011-12-06

    We present a shape optimization method for designing accelerator cavities with large scale computations. The objective is to find the best accelerator cavity shape with the desired spectral response, such as with the specified frequencies of resonant modes, field profiles, and external Q values. The forward problem is the large scale Maxwell equation in the frequency domain. The design parameters are the CAD parameters defining the cavity shape. We develop scalable algorithms with a discrete adjoint approach and use the quasi-Newton method to solve the nonlinear optimization problem. Two realistic accelerator cavity design examples are presented.
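
    The optimization loop described above (an adjoint-computed gradient driving quasi-Newton updates) can be illustrated schematically; the two shape parameters and the linear frequency response below are invented stand-ins, not the actual cavity eigensolver.

        import numpy as np
        from scipy.optimize import minimize

        TARGET_GHZ = 1.3                                   # desired resonant-mode frequency
        freq = lambda p: 1.0 + 0.2 * p[0] - 0.05 * p[1]    # toy stand-in for a Maxwell eigensolve

        def objective_and_gradient(p):
            misfit = freq(p) - TARGET_GHZ
            J = 0.5 * misfit ** 2
            # With a discrete adjoint, dJ/dp costs one extra solve regardless of the number
            # of CAD parameters; here the gradient is simply written out analytically.
            grad = misfit * np.array([0.2, -0.05])
            return J, grad

        result = minimize(objective_and_gradient, x0=np.zeros(2), jac=True, method="L-BFGS-B")
        print(result.x, freq(result.x))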

  10. Large Scale Modeling of Floodplain Inundation; Calibration and Forecast Based on Lisflood-FP Model and Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Najafi, M.; Durand, M. T.; Neal, J. C.; Moritz, M.

    2013-12-01

    The Logone floodplain located in the Chad basin in north Cameroon, Africa experiences seasonal flooding as the result of Logone River overbank flow. The seasonal and inter-annual variability of flood depths and extents have significant impacts on the socio-economics as well as the eco-hydrology of the basin. Recent human interventions on the hydraulic characteristics of the basin have caused serious concerns for the future behavior of the system. Construction of the Maga dam and hundreds of fish canals, along with the impact of climate change, are potential factors which alter the floodplain characteristics. To understand the hydraulics of the basin and predict future changes in flood inundation we calibrate the LISFLOOD-FP numerical model using the historical records of river discharge as well as satellite observations of flood depths and extents. LISFLOOD is a distributed 2D model which efficiently simulates large basins. Because of data limitations the Shuttle Radar Topography Mission (SRTM) is considered to extract the DEM data. The LISFLOOD subgrid 2D model is applied, which allows for defining river channel widths smaller than the DEM resolution. River widths are extracted from a Landsat 4 image obtained in February 1999. Model parameters including roughness coefficient and river bathymetry are then calibrated. The results demonstrate the potential application of the proposed model to simulate future changes in the floodplain. The sub-grid model has been shown to improve hydraulic connectivity within the inundated area. DEM errors are major sources of uncertainty in model prediction.

  11. Large scale and cloud scale dynamics and microphysics in the formation and evolution of a TTL cirrus : a case modelling study

    NASA Astrophysics Data System (ADS)

    Podglajen, Aurélien; Plougonven, Riwal; Hertzog, Albert; Legras, Bernard

    2015-04-01

    Cirrus clouds in the tropical tropopause layer (TTL) control dehydration of air masses entering the stratosphere and strongly contribute to the local radiative heating. In this study, we aim at understanding, through a real case simulation, the dynamics controlling the formation and life cycle of a cirrus cloud event in the TTL. We also aim at quantifying the chemical and radiative impacts of the clouds. To do this, we use the Weather Research and Forecast (WRF) model to simulate a large scale TTL cirrus event occurring on 27-29 January 2009 over the eastern Pacific, which has been extensively described through satellite observations (Taylor et al., 2011). Comparison of simulated and observed high clouds shows a fair agreement, and validates the reference simulation regarding cloud extent, location and lifetime. The simulation and Lagrangian trajectories within the simulation are then used to characterize the evolution of the cloud: displacement, Lagrangian lifetime and links with dynamics. The efficiency of dehydration by such clouds is also examined. Sensitivity tests were performed to evaluate the importance of different microphysics schemes and initial and boundary conditions to accurately simulate the cirrus. As expected, both were found to have strong impacts. In particular, there were substantial differences between simulations using different initial and boundary conditions from atmospheric analyses (NCEP CFSR and ECMWF). This illustrates the crucial role of accurate water vapour and dynamics for realistic cirrus modelling, on top of the need for appropriate microphysics. Last, we examined the effects of cloud radiative heating. Long wave radiative heating in cirrus clouds has been invoked to induce a cloud scale circulation that would lengthen the cloud lifetime, and increase the size of its dehydration area (Dinh et al. 2010). To try to diagnose this, we have carried out simulations using different radiative schemes, including or suppressing the

  12. Evolution of Large-scale Solar Magnetic Fields in a Flux-Transport Model Including a Multi-cell Meridional Flow

    NASA Astrophysics Data System (ADS)

    McDonald, E.; Dikpati, M.

    2003-12-01

    Advances in helioseismology over the past decade have enabled us to detect subsurface meridional flows in the Sun. Some recent helioseismological analysis (Giles 1999, Haber et al. 2002) has indicated a submerged, reverse flow cell occurring at high latitudes of the Sun's northern hemisphere between 1998 and 2001. Meridional circulation plays an important role in the operation of a class of large-scale solar dynamo, the so-called "flux-transport" dynamo. In such dynamo models, the poleward drift of the large-scale solar magnetic fields and the polar reversal process are explained by the advective-diffusive transport of magnetic flux by a meridional circulation with a poleward surface flow component. Any temporal and spatial variations in the meridional flow pattern are expected to greatly influence the evolution of large-scale magnetic fields in a flux-transport dynamo. The aim of this paper is to explore the implications of a steady, multi-cell flow on the advection of weak, large-scale, magnetic flux. We present a simple, two-cell flux transport model operating in an r-theta cross-section of the northern hemisphere. Azimuthal symmetry is assumed. Performing numerical flux-transport simulations with a reverse flow cell at various latitudes, we demonstrate the effect of this cell on the evolutionary pattern of the large-scale diffuse fields. We also show how a flux concentration may occur at the latitude where the radial flows of the two cells are sinking downward. This work is supported by NASA grants W-19752, W-10107, and W-10175. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
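
    For reference, one common form of the axisymmetric flux-transport (advection-diffusion) equation for the large-scale poloidal field, written for the vector potential A(r, θ, t) advected by the meridional flow v and diffused with turbulent diffusivity η, is the following; the exact formulation used in this simple two-cell model may differ.

        \frac{\partial A}{\partial t}
          + \frac{1}{s}\,(\mathbf{v}\cdot\nabla)(s A)
          = \eta \left( \nabla^{2} - \frac{1}{s^{2}} \right) A ,
        \qquad s = r \sin\theta .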

  13. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  14. Research project on CO2 geological storage and groundwater resources: Large-scale hydrological evaluation and modeling of impact on groundwater systems

    SciTech Connect

    Birkholzer, Jens; Zhou, Quanlin; Rutqvist, Jonny; Jordan, Preston; Zhang, K.; Tsang, Chin-Fu

    2007-10-24

    If carbon dioxide capture and storage (CCS) technologies are implemented on a large scale, the amounts of CO2 injected and sequestered underground could be extremely large. The stored CO2 then replaces large volumes of native brine, which can cause considerable pressure perturbation and brine migration in the deep saline formations. If hydraulically communicating, either directly via updipping formations or through interlayer pathways such as faults or imperfect seals, these perturbations may impact shallow groundwater or even surface water resources used for domestic or commercial water supply. Possible environmental concerns include changes in pressure and water table, changes in discharge and recharge zones, as well as changes in water quality. In compartmentalized formations, issues related to large-scale pressure buildup and brine displacement may also cause storage capacity problems, because significant pressure buildup can be produced. To address these issues, a three-year research project was initiated in October 2006, the first part of which is summarized in this annual report.

  15. Realistic model of compact VLSI FitzHugh-Nagumo oscillators

    NASA Astrophysics Data System (ADS)

    Cosp, Jordi; Binczak, Stéphane; Madrenas, Jordi; Fernández, Daniel

    2014-02-01

    In this article, we present a compact analogue VLSI implementation of the FitzHugh-Nagumo neuron model, intended to model large-scale, biologically plausible oscillator networks. Because the model requires a series resistor and a parallel capacitor with the inductor, which is the most complex part of the design, the active-inductor implementation can be greatly simplified, compared with its typical use in filters, by allowing appreciable but well-modelled nonidealities. The nonideal inductor is modelled as an inductance in series with a parasitic resistor, followed by a second-order low-pass filter with a large cut-off frequency, and its parameters are extracted. Post-layout simulations for a CMOS 0.35 μm double-poly technology using the MOSFET Spice BSIM3v3 model confirm the proper behaviour of the design.
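
    For context, the dimensionless FitzHugh-Nagumo equations that such a circuit realizes are commonly written as follows, with the slow recovery variable w implemented by the inductor branch; the parameter names follow the standard textbook form, not necessarily the paper's notation.

        \dot{v} = v - \frac{v^{3}}{3} - w + I_{\mathrm{ext}},
        \qquad
        \dot{w} = \varepsilon \,( v + a - b\, w ).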

  16. A SPATIALLY REALISTIC MODEL FOR INFORMING FOREST MANAGEMENT DECISIONS

    EPA Science Inventory

    Spatially realistic population models (SRPMs) address a fundamental problem commonly confronted by wildlife managers - predicting the effects of landscape-scale habitat management on an animal population. SRPMs typically consist of three submodels: (1) a habitat submodel...

  17. Modelling disease outbreaks in realistic urban social networks

    NASA Astrophysics Data System (ADS)

    Eubank, Stephen; Guclu, Hasan; Anil Kumar, V. S.; Marathe, Madhav V.; Srinivasan, Aravind; Toroczkai, Zoltán; Wang, Nan

    2004-05-01

    Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population.
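
    A minimal sketch of the hub-based sensor placement idea on a people-locations bipartite graph is shown below; the toy graph and node names are invented, whereas the study builds such graphs from large-scale individual-based urban traffic simulations.

        import networkx as nx

        # Toy people-locations contact graph: an edge means "person visited location".
        G = nx.Graph()
        people = ["p1", "p2", "p3", "p4"]
        places = ["home_a", "school", "mall", "clinic"]
        G.add_nodes_from(people, bipartite=0)
        G.add_nodes_from(places, bipartite=1)
        G.add_edges_from([("p1", "school"), ("p2", "school"), ("p3", "school"),
                          ("p1", "mall"), ("p4", "mall"), ("p2", "clinic"), ("p3", "home_a")])

        # Rank locations by degree (distinct visitors); a heavy-tailed locations graph is what
        # makes placing sensors at the few highest-degree hubs efficient for outbreak detection.
        hubs = sorted(places, key=G.degree, reverse=True)
        print(hubs[:2])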

  18. Large-Scale Computations Leading to a First-Principles Approach to Nuclear Structure

    SciTech Connect

    Ormand, W E; Navratil, P

    2003-08-18

    We report on large-scale applications of the ab initio, no-core shell model with the primary goal of achieving an accurate description of nuclear structure from the fundamental inter-nucleon interactions. In particular, we show that realistic two-nucleon interactions are inadequate to describe the low-lying structure of {sup 10}B, and that realistic three-nucleon interactions are essential.

  19. Large-scale mass spectrometry imaging investigation of consequences of cortical spreading depression in a transgenic mouse model of migraine.

    PubMed

    Carreira, Ricardo J; Shyti, Reinald; Balluff, Benjamin; Abdelmoula, Walid M; van Heiningen, Sandra H; van Zeijl, Rene J; Dijkstra, Jouke; Ferrari, Michel D; Tolner, Else A; McDonnell, Liam A; van den Maagdenberg, Arn M J M

    2015-06-01

    Cortical spreading depression (CSD) is the electrophysiological correlate of migraine aura. Transgenic mice carrying the R192Q missense mutation in the Cacna1a gene, which in patients causes familial hemiplegic migraine type 1 (FHM1), exhibit increased propensity to CSD. Herein, mass spectrometry imaging (MSI) was applied for the first time to an animal cohort of transgenic and wild type mice to study the biomolecular changes following CSD in the brain. Ninety-six coronal brain sections from 32 mice were analyzed by MALDI-MSI. All MSI datasets were registered to the Allen Brain Atlas reference atlas of the mouse brain so that the molecular signatures of distinct brain regions could be compared. A number of metabolites and peptides showed substantial changes in the brain associated with CSD. Among those, different mass spectral features showed significant (t-test, P < 0.05) changes in the cortex, 146 and 377 Da, and in the thalamus, 1820 and 1834 Da, of the CSD-affected hemisphere of FHM1 R192Q mice. Our findings reveal CSD- and genotype-specific molecular changes in the brain of FHM1 transgenic mice that may further our understanding about the role of CSD in migraine pathophysiology. The results also demonstrate the utility of aligning MSI datasets to a common reference atlas for large-scale MSI investigations.

  20. Data Analysis, Pre-Ignition Assessment, and Post-Ignition Modeling of the Large-Scale Annular Cookoff Tests

    SciTech Connect

    G. Terrones; F. J. Souto; R. F. Shea; M. W. Burkett; E. S. Idar

    2005-09-30

    In order to understand the implications that cookoff of plastic-bonded explosive-9501 could have on safety assessments, we analyzed the available data from the large-scale annular cookoff (LSAC) assembly series of experiments. In addition, we examined recent data regarding hypotheses about pre-ignition that may be relevant to post-ignition behavior. Based on the post-ignition data from Shot 6, which had the most complete set of data, we developed an approximate equation of state (EOS) for the gaseous products of deflagration. Implementation of this EOS into the multimaterial hydrodynamics computer program PAGOSA yielded good agreement with the inner-liner collapse sequence for Shot 6 and with other data, such as velocity interferometer system for any reflector and resistance wires. A metric to establish the degree of symmetry based on the concept of time of arrival to pin locations was used to compare numerical simulations with experimental data. Several simulations were performed to elucidate the mode of ignition in the LSAC and to determine the possible compression levels that the metal assembly could have been subjected to during post-ignition.

  1. Large-Scale Mass Spectrometry Imaging Investigation of Consequences of Cortical Spreading Depression in a Transgenic Mouse Model of Migraine

    NASA Astrophysics Data System (ADS)

    Carreira, Ricardo J.; Shyti, Reinald; Balluff, Benjamin; Abdelmoula, Walid M.; van Heiningen, Sandra H.; van Zeijl, Rene J.; Dijkstra, Jouke; Ferrari, Michel D.; Tolner, Else A.; McDonnell, Liam A.; van den Maagdenberg, Arn M. J. M.

    2015-06-01

    Cortical spreading depression (CSD) is the electrophysiological correlate of migraine aura. Transgenic mice carrying the R192Q missense mutation in the Cacna1a gene, which in patients causes familial hemiplegic migraine type 1 (FHM1), exhibit increased propensity to CSD. Herein, mass spectrometry imaging (MSI) was applied for the first time to an animal cohort of transgenic and wild type mice to study the biomolecular changes following CSD in the brain. Ninety-six coronal brain sections from 32 mice were analyzed by MALDI-MSI. All MSI datasets were registered to the Allen Brain Atlas reference atlas of the mouse brain so that the molecular signatures of distinct brain regions could be compared. A number of metabolites and peptides showed substantial changes in the brain associated with CSD. Among those, different mass spectral features showed significant ( t-test, P < 0.05) changes in the cortex, 146 and 377 Da, and in the thalamus, 1820 and 1834 Da, of the CSD-affected hemisphere of FHM1 R192Q mice. Our findings reveal CSD- and genotype-specific molecular changes in the brain of FHM1 transgenic mice that may further our understanding about the role of CSD in migraine pathophysiology. The results also demonstrate the utility of aligning MSI datasets to a common reference atlas for large-scale MSI investigations.

  2. A continental-scale hydrology and water quality model for Europe: Calibration and uncertainty of a high-resolution large-scale SWAT model

    NASA Astrophysics Data System (ADS)

    Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.

    2015-05-01

    A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, has come under severe degradation, and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economy of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability, calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.

  3. FERMI RULES OUT THE INVERSE COMPTON/CMB MODEL FOR THE LARGE-SCALE JET X-RAY EMISSION OF 3C 273

    SciTech Connect

    Meyer, Eileen T.; Georganopoulos, Markos

    2014-01-10

    The X-ray emission mechanism in large-scale jets of powerful radio quasars has been a source of debate in recent years, with two competing interpretations: either the X-rays are of synchrotron origin, arising from a different electron energy distribution than that producing the radio to optical synchrotron component, or they are due to inverse Compton scattering of cosmic microwave background photons (IC/CMB) by relativistic electrons in a powerful relativistic jet with bulk Lorentz factor Γ ∼ 10-20. These two models imply radically different conditions in the large-scale jet in terms of jet speed, kinetic power, and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the large-scale environment. A large part of the X-ray origin debate has centered on the well-studied source 3C 273. Here we present new observations from Fermi which put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that violates at a confidence greater than 99.9% the flux expected from the IC/CMB X-ray model found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source when combined with previous work. Further, this upper limit from Fermi puts a limit on the Doppler beaming factor of at least δ < 9, assuming equipartition fields, and possibly as low as δ < 5, assuming no major deceleration of the jet from knots A through D1.

  4. Comparison of two global datasets of TRMM and WFD with rain gauge data in driving Large-scale hydrological modelling in Beas River, North India

    NASA Astrophysics Data System (ADS)

    Li, L.; Xu, C.-Y.; Jain, S.

    2012-04-01

    In recent years, large-scale hydrological models have increasingly been used as a main assessment tool for global/regional water resources. During the past two decades, numerous datasets have been developed for global/regional hydrological assessment and modeling, but these datasets often show differences in their spatial and temporal distributions of precipitation, which is one of the most critical input variables in global/regional hydrological modeling. This paper aimed at comparing the consistency and differences of two widely used global precipitation datasets in North India, i.e., the Tropical Rainfall Measuring Mission (TRMM) 3B42 dataset and the Water and Global Change (WATCH) Forcing Data (WFD), and evaluating the performance of the large scale hydrological model (WASMOD-M) in simulating the water balance of the Beas River basin in North India with these two global datasets as inputs. The study was carried out in the following steps. Firstly, the spatial-temporal distribution of TRMM and WFD precipitation in North India was compared by using four statistical analysis methods, which include the Mann-Kendall test of whether the two datasets show the same temporal variability as the gauge data, the Kolmogorov-Smirnov test of whether they follow the same distribution, and the t and F tests of whether their means and variances match those of the gauge data. Secondly, the spatial-temporal distribution from rain gauging data in Beas river basin was taken as a benchmark, to compare and bias-correct the TRMM and WFD datasets. Thirdly, these two adjusted datasets were used to drive the large scale hydrological model (WASMOD-M) in water balance simulations of Beas River basin for the period of 1997-2001 when the two datasets overlap. The modeling results were compared and assessed based on the indices of Nash-Sutcliffe coefficient (NS), absolute value of the volume error (%) (VE), the performance measure of flow-duration curve
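
    The four consistency checks listed in the first step can be sketched as follows; the inputs are assumed to be aligned series (e.g., monthly basin-average precipitation), and the Mann-Kendall S statistic is written out directly since it is not part of scipy.

        import numpy as np
        from scipy import stats

        def compare_precipitation(gauge, other):
            gauge, other = np.asarray(gauge, float), np.asarray(other, float)
            # Kolmogorov-Smirnov: do the two samples follow the same distribution?
            _, ks_p = stats.ks_2samp(gauge, other)
            # t test on the means
            _, t_p = stats.ttest_ind(gauge, other)
            # F test on the variances, built from the F distribution
            f_stat = np.var(gauge, ddof=1) / np.var(other, ddof=1)
            dfg, dfo = len(gauge) - 1, len(other) - 1
            f_p = 2 * min(stats.f.cdf(f_stat, dfg, dfo), stats.f.sf(f_stat, dfg, dfo))
            # Mann-Kendall S statistic (sign of all pairwise differences) for monotonic trend
            s = sum(np.sign(other[j] - other[i])
                    for i in range(len(other)) for j in range(i + 1, len(other)))
            return {"KS p": ks_p, "t p": t_p, "F p": f_p, "MK S": s}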

  5. Is the universe homogeneous on large scale?

    NASA Astrophysics Data System (ADS)

    Zhu, Xingfen; Chu, Yaoquan

    Whether the distribution of matter in the universe is homogeneous or fractal on large scales has been vigorously debated in observational cosmology in recent years. Pietronero and his co-workers have strongly advocated that the fractal behaviour in the galaxy distribution extends to the largest scales observed (≈1000 h⁻¹ Mpc) with fractal dimension D ≈ 2. Most cosmologists who hold to the standard model, however, insist that the universe is homogeneous on large scales. The answer to whether the universe is homogeneous on large scales must await results from the next generation of galaxy redshift surveys.
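
    The competing claims can be stated with the usual counts-in-spheres relation: if N(<r) is the average number of galaxies within distance r of a typical galaxy, then

        N(<r) \propto r^{D},
        \qquad D \simeq 2 \ \text{(fractal claim)},
        \qquad D = 3 \ \text{(homogeneity on large scales)} .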

  6. Noise reduction tests of large-scale-model externally blown flap using trailing-edge blowing and partial flap slot covering. [jet aircraft noise reduction

    NASA Technical Reports Server (NTRS)

    Mckinzie, D. J., Jr.; Burns, R. J.; Wagner, J. M.

    1976-01-01

    Noise data were obtained with a large-scale cold-flow model of a two-flap, under-the-wing, externally blown flap proposed for use on future STOL aircraft. The noise suppression effectiveness of locating a slot conical nozzle at the trailing edge of the second flap and of applying partial covers to the slots between the wing and flaps was evaluated. Overall-sound-pressure-level reductions of 5 dB occurred below the wing in the flyover plane. Existing models of several noise sources were applied to the test results. The resulting analytical relation compares favorably with the test data. The noise source mechanisms were analyzed and are discussed.

  7. Evaluating cloud processes in large-scale models: Of idealized case studies, parameterization testbeds and single-column modelling on climate time-scales

    NASA Astrophysics Data System (ADS)

    Neggers, Roel

    2016-04-01

    Boundary-layer schemes have always formed an integral part of General Circulation Models (GCMs) used for numerical weather and climate prediction. The spatial and temporal scales associated with boundary-layer processes and clouds are typically much smaller than those at which GCMs are discretized, which makes their representation through parameterization a necessity. The need for generally applicable boundary-layer parameterizations has motivated many scientific studies, which in effect has created its own active research field in the atmospheric sciences. Of particular interest has been the evaluation of boundary-layer schemes at "process-level". This means that parameterized physics are studied in isolated mode from the larger-scale circulation, using prescribed forcings and excluding any upscale interaction. Although feedbacks are thus prevented, the benefit is an enhanced model transparency, which might aid an investigator in identifying model errors and understanding model behavior. The popularity and success of the process-level approach is demonstrated by the many past and ongoing model inter-comparison studies that have been organized by initiatives such as GCSS/GASS. A red line in the results of these studies is that although most schemes somehow manage to capture first-order aspects of boundary layer cloud fields, there certainly remains room for improvement in many areas. Only too often are boundary layer parameterizations still found to be at the heart of problems in large-scale models, negatively affecting forecast skills of NWP models or causing uncertainty in numerical predictions of future climate. How to break this parameterization "deadlock" remains an open problem. This presentation attempts to give an overview of the various existing methods for the process-level evaluation of boundary-layer physics in large-scale models. This includes i) idealized case studies, ii) longer-term evaluation at permanent meteorological sites (the testbed approach

  8. A Model of Biological Attacks on a Realistic Population

    NASA Astrophysics Data System (ADS)

    Carley, Kathleen M.; Fridsma, Douglas; Casman, Elizabeth; Altman, Neal; Chen, Li-Chiou; Kaminsky, Boris; Nave, Demian; Yahja, Alex

    The capability to assess the impacts of large-scale biological attacks and the efficacy of containment policies is critical and requires knowledge-intensive reasoning about social response and disease transmission within a complex social system. There is a close linkage among social networks, transportation networks, disease spread, and early detection. Spatial dimensions related to public gathering places such as hospitals, nursing homes, and restaurants can play a major role in epidemics [Klovdahl et al. 2001]. Like natural epidemics, bioterrorist attacks unfold within spatially defined, complex social systems, and the societal and networked response can have profound effects on their outcome. This paper focuses on bioterrorist attacks, but the model has been applied to emergent and familiar diseases as well.

  9. Acquisition of detailed laryngeal flow measurements in geometrically realistic models

    PubMed Central

    Farley, Jayrin; Thomson, Scott L.

    2011-01-01

    Characterization of laryngeal flow velocity fields is important to understanding vocal fold vibration and voice production. One common method for acquiring flow field data is particle image velocimetry (PIV). However, because using PIV with models that have curved surfaces is problematic due to optical distortion, experimental investigations of laryngeal airflow are typically performed using models with idealized geometries. In this paper a method for acquiring PIV data using models with realistic geometries is presented. Sample subglottal, intraglottal, and supraglottal PIV data are shown. Capabilities and limitations are discussed, and suggestions for future implementation are provided. PMID:21877775

  10. Temperature Extremes and Associated Large-Scale Meteorological Patterns in NARCCAP Regional Climate Models: Towards a framework for generalized model evaluation

    NASA Astrophysics Data System (ADS)

    Loikith, P.; Waliser, D. E.; Lee, H.; Kim, J.; Neelin, J. D.; McGinnis, S. A.; Lintner, B. R.; Mearns, L. O.

    2014-12-01

    Large-scale meteorological patterns associated with extreme temperatures are evaluated across a suite of regional climate model (RCM) simulations produced as a part of the North American Regional Climate Change Assessment Program (NARCCAP). Evaluation is performed on six hindcast simulations and eleven simulations driven by four global climate models (GCMs). In places removed from the influence of complex topography in the winter, such as the Midwest of the United States, extremes and associated patterns are generally simulated with high fidelity. In other cases, such as for much of the Gulf of Mexico Coast in summer, the RCMs have notable difficulty in reproducing temperature extremes and associated meteorological patterns. In some cases, the temperature extremes appear to be well reproduced, but for the wrong reasons, making this analysis particularly valuable for diagnosing and interpreting RCM skill in making future projections of temperature extremes. An RCM skill score is developed, based on pattern agreement at all grid cells, to identify the RCM-GCM combinations that may be best suited for making future projections of temperature extremes. Cases identified as having low RCM skill will be the subject of further investigations with a focus on understanding key processes that are contributing to model error and helping to guide future model development. It is anticipated that this work will be implemented as part of a framework for evaluating temperature extremes in RCMs, providing generalized performance metrics based on mechanistic and process-oriented diagnostics.
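
    The abstract does not give the skill-score formula; one plausible grid-cell pattern-agreement metric, a centered spatial correlation between an RCM composite and a reference composite of the large-scale pattern, is sketched below with synthetic fields standing in for the real composites.

        import numpy as np

        def pattern_skill(rcm_field, ref_field):
            # Centered spatial (pattern) correlation over all grid cells
            a = rcm_field - rcm_field.mean()
            b = ref_field - ref_field.mean()
            return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

        rng = np.random.default_rng(0)
        ref = rng.normal(size=(40, 60))               # hypothetical reanalysis composite
        rcm = ref + 0.5 * rng.normal(size=(40, 60))   # hypothetical RCM composite with error
        print(round(pattern_skill(rcm, ref), 2))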

  11. a Novel Ship Detection Method for Large-Scale Optical Satellite Images Based on Visual Lbp Feature and Visual Attention Model

    NASA Astrophysics Data System (ADS)

    Haigang, Sui; Zhina, Song

    2016-06-01

    Reliable ship detection in optical satellite images has wide applications in both military and civil fields. However, this problem is very difficult in complex backgrounds, such as waves, clouds, and small islands. Aiming at these issues, this paper explores an automatic and robust model for ship detection in large-scale optical satellite images, which relies on detecting statistical signatures of ship targets in terms of biologically-inspired visual features. This model first selects salient candidate regions across large-scale images by using a mechanism based on biologically-inspired visual features, combining a visual attention model with local binary patterns (CVLBP). Different from traditional studies, the proposed algorithm is fast and focuses on suspected ship areas, avoiding a separate land-sea segmentation step. Large-area images are cut into small image chips and analyzed in two complementary ways: sparse saliency using a visual attention model and detailed signatures using LBP features, consistent with the sparse distribution of ships in the images. These features are then employed to classify each chip as containing ship targets or not, using a support vector machine (SVM). After the suspicious areas are obtained, there are still some false alarms such as small waves and thin ribbon clouds, so simple shape and texture analyses are adopted to distinguish between ships and non-ships in the suspicious areas. Experimental results show the proposed method is insensitive to waves, clouds, illumination and ship size.
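
    The chip-level LBP-plus-SVM classification stage can be sketched as follows; the chips and labels are random placeholders, and the feature design (a uniform-LBP histogram) is an assumption since the paper's exact CVLBP descriptor is not reproduced here.

        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_histogram(chip, P=8, R=1.0):
            # Uniform-LBP code histogram of one grayscale image chip
            codes = local_binary_pattern(chip, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        # Hypothetical training data: small chips labelled 1 (ship) or 0 (background)
        rng = np.random.default_rng(1)
        chips = [rng.random((32, 32)) for _ in range(20)]
        labels = np.array([0, 1] * 10)

        X = np.array([lbp_histogram(c) for c in chips])
        clf = SVC(kernel="rbf").fit(X, labels)   # the SVM classification step named in the abstract
        print(clf.predict(X[:3]))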

  12. Design for an efficient dynamic climate model with realistic geography

    NASA Technical Reports Server (NTRS)

    Suarez, M. J.; Abeles, J.

    1984-01-01

    Long-term climate sensitivity studies that include realistic atmospheric dynamics are severely restricted by the expense of integrating atmospheric general circulation models, such as those used at GSFC. An alternative is a dynamic model of much lower horizontal or vertical resolution. The model of Held and Suarez uses only two levels in the vertical and, although it has conventional grid resolution in the meridional direction, its horizontal resolution is reduced by keeping only a few degrees of freedom in the zonal wavenumber spectrum. Without zonally asymmetric forcing this model simulates a day in roughly 1/2 second on a CRAY. The model under discussion is a fully finite-differenced, zonally asymmetric version of the Held-Suarez model. It is anticipated that speeds of a few seconds per simulated day can be obtained, roughly 50 times faster than moderate-resolution, multilayer GCMs.

  13. The KM phase in semi-realistic heterotic orbifold models

    SciTech Connect

    Giedt, Joel

    2000-07-05

    In string-inspired semi-realistic heterotic orbifold models with an anomalous U(1)_X, a nonzero Kobayashi-Maskawa (KM) phase is shown to arise generically from the expectation values of complex scalar fields, which appear in nonrenormalizable quark mass couplings. Modular covariant nonrenormalizable superpotential couplings are constructed. A toy Z_3 orbifold model is analyzed in some detail. Modular symmetries and orbifold selection rules are taken into account and do not lead to a cancellation of the KM phase. We also discuss attempts to obtain the KM phase solely from renormalizable interactions.