Science.gov

Sample records for realistic large-scale model

  1. Towards a large-scale biologically realistic model of the hippocampus.

    PubMed

    Hendrickson, Phillip J; Yu, Gene J; Robinson, Brian S; Song, Dong; Berger, Theodore W

    2012-01-01

    Real neurobiological systems in the mammalian brain have a complicated and detailed structure, being composed of 1) large numbers of neurons with intricate, branching morphologies--complex morphology brings with it complex passive membrane properties; 2) active membrane properties--nonlinear sodium, potassium, calcium, etc. conductances; 3) non-uniform distributions throughout the dendritic and somal membrane surface of these non-linear conductances; 4) non-uniform and topographic connectivity between pre- and post-synaptic neurons; and 5) activity-dependent changes in synaptic function. One of the essential, and as yet unanswered, questions in neuroscience is the role of these fundamental structural and functional features in determining "neural processing" properties of a given brain system. To help answer that question, we're creating a large-scale biologically realistic model of the intrinsic pathway of the hippocampus, which consists of the projections from layer II entorhinal cortex (EC) to dentate gyrus (DG), EC to CA3, DG to CA3, and CA3 to CA1. We describe the computational hardware and software tools the model runs on, and demonstrate its viability as a modeling platform with an EC-to-DG model. PMID:23366951

  2. A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling

    PubMed Central

    Hoang, Roger V.; Tanna, Devyani; Jayet Bray, Laurence C.; Dascalu, Sergiu M.; Harris, Frederick C.

    2013-01-01

    Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high-performance computing devices in each of them. It has built-in leaky integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to add their own neuron types as desired through a plug-in interface. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across eight machines, each with two video cards. PMID:24106475
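
    Both built-in neuron models reduce to a few arithmetic updates per time step. As a rough illustration of what the LIF and IZH models compute (a Python sketch with forward-Euler integration; parameter values and function names are illustrative, not NCS6's API):

      import numpy as np

      def lif_step(v, i_syn, dt=0.1, tau_m=20.0, v_rest=-65.0,
                   v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
          # Leaky integrate-and-fire: decay toward rest plus synaptic drive
          v = v + dt * (-(v - v_rest) + r_m * i_syn) / tau_m
          spiked = v >= v_thresh
          return np.where(spiked, v_reset, v), spiked

      def izh_step(v, u, i_syn, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
          # Izhikevich (2003) two-variable model; spike-and-reset at 30 mV
          v = v + dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + i_syn)
          u = u + dt * a * (b * v - u)
          spiked = v >= 30.0
          return np.where(spiked, c, v), np.where(spiked, u + d, u), spiked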

  3. Modeling of the cross-beam energy transfer with realistic inertial-confinement-fusion beams in a large-scale hydrocode.

    PubMed

    Colaïtis, A; Duchateau, G; Ribeyre, X; Tikhonchuk, V

    2015-01-01

    A method for modeling realistic laser beams smoothed by kinoform phase plates is presented. The ray-based paraxial complex geometrical optics (PCGO) model with Gaussian thick rays allows one to create intensity variations, or pseudospeckles, that reproduce the beam envelope, contrast, and high-intensity statistics predicted by paraxial laser propagation codes. A steady-state cross-beam energy-transfer (CBET) model is implemented in a large-scale radiative hydrocode based on the PCGO model. It is used in conjunction with the realistic beam modeling technique to study the effects of CBET between coplanar laser beams on the target implosion. The pseudospeckle pattern imposed by PCGO produces modulations in the irradiation field and the shell implosion pressure. Cross-beam energy transfer between beams at 20° and 40° significantly degrades the irradiation symmetry by amplifying low-frequency modes and reducing the laser-capsule coupling efficiency, ultimately leading to large modulations of the shell areal density and lower convergence ratios. These results highlight the role of laser-plasma interaction and its influence on the implosion dynamics. PMID:25679718

  4. Photorealistic large-scale urban city model reconstruction.

    PubMed

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications, since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which, unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite). PMID:19423889

  5. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetic and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian FEM model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires a high resolution, we spent considerable effort implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz-type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which gives a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom, convergence became too slow to solve the system within an acceptable amount of wall time (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
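
    The solver behavior described above is easy to reproduce at small scale. A minimal SciPy sketch of ILU-preconditioned GMRES (not the authors' FEM code; the Poisson test matrix and all tolerances are illustrative), where fill_factor plays the role of the amount of fill-in allowed:

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Illustrative 2-D Poisson system; the paper's FEM Stokes systems are
      # far larger and worse conditioned, which is where ILU+GMRES degrades.
      n = 200                                    # n*n unknowns
      T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
      A = (sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n))).tocsc()
      b = np.ones(A.shape[0])

      # fill_factor controls the "amount of fill-in allowed" for the ILU
      ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10)
      M = spla.LinearOperator(A.shape, ilu.solve)

      x, info = spla.gmres(A, b, M=M, restart=50, maxiter=1000)
      print("converged" if info == 0 else f"stopped early, info={info}")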

  6. Large-Scale Simulations of Realistic Fluidized Bed Reactors using Novel Numerical Methods

    NASA Astrophysics Data System (ADS)

    Capecelatro, Jesse; Desjardins, Olivier; Pepiot, Perrine; National Renewable Energy Lab Collaboration

    2011-11-01

    Turbulent particle-laden flows in the form of fluidized bed reactors display good mixing properties, low pressure drops, and a fairly uniform temperature distribution. Understanding and predicting the flow dynamics within the reactor is necessary for improving efficiency and providing technologies for large-scale industrialization. A numerical strategy based on an Eulerian representation of the gas phase and Lagrangian tracking of the particles is developed in the framework of NGA, a high-order, fully conservative parallel code tailored for turbulent flows. The particles are accounted for using a point-particle assumption. Once the gas-phase quantities are mapped to the particle locations, a conservative, implicit diffusion operation smooths the field. Normal and tangential collisions are handled via a soft-sphere model, modified to allow the bed to reach close packing at rest. The pressure drop across the bed is compared with theory to accurately predict the minimum fluidization velocity. 3D simulations of the National Renewable Energy Lab's 4-inch reactor are then conducted. Tens of millions of particles are tracked. The reactor's geometry is modeled using an immersed boundary scheme. Statistics for volume fraction, velocities, bed expansion, and bubble characteristics are analyzed and compared with experimental data.
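
    The soft-sphere collision treatment mentioned above is the standard spring-dashpot scheme of the discrete element method. A minimal Python sketch of the normal-force computation for one particle pair (the constants are illustrative, not the paper's values):

      import numpy as np

      def soft_sphere_normal_force(x1, x2, v1, v2, r1, r2, k_n=1.0e4, eta_n=5.0):
          """Spring-dashpot normal force on particle 1 from a contact with
          particle 2; zero when the spheres do not overlap."""
          d = x2 - x1
          dist = np.linalg.norm(d)
          overlap = (r1 + r2) - dist
          if overlap <= 0.0:
              return np.zeros(3)
          n = d / dist                       # unit normal from particle 1 to 2
          v_n = np.dot(v1 - v2, n)           # relative approach speed along n
          # repulsive spring plus dissipative dashpot, both acting along -n
          return -(k_n * overlap + eta_n * v_n) * n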

  7. Adaptive Texture Synthesis for Large Scale City Modeling

    NASA Astrophysics Data System (ADS)

    Despine, G.; Colleu, T.

    2015-02-01

    Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but that requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing the knowledge about elementary patterns in a texture catalogue that allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selecting the most appropriate patterns from the texture catalogue. We experimented with this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent to the projection of aerial images onto the facades.

  8. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
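
    The contrast between the two operators is compact when written out. In standard notation consistent with the abstract (u is population density, mu(x) the habitat-dependent motility), Fickian and ecological diffusion differ only in where mu(x) sits:

      % Fickian diffusion: movement organized along gradients
      \frac{\partial u}{\partial t} = \nabla \cdot \big(\mu(x)\,\nabla u\big)

      % Ecological diffusion: motility inside the Laplacian, so residence
      % time responds to local habitat rather than to gradients
      \frac{\partial u}{\partial t} = \nabla^{2}\big(\mu(x)\,u\big)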

  9. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  10. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called the univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called the univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called the multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
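
    As a sketch of what the first two modelers store per partition (illustrative Python, not the AQSim implementation; the block size and field layout are assumptions):

      import numpy as np
      from scipy import stats

      def partition_models(field, block=8):
          """Store, for each systematic cube partition of a 3-D field, the
          unbiased mean (mean modeler) and an Anderson-Darling statistic
          against normality (goodness-of-fit modeler)."""
          models = {}
          nx, ny, nz = (s // block for s in field.shape)
          for i in range(nx):
              for j in range(ny):
                  for k in range(nz):
                      part = field[i*block:(i+1)*block,
                                   j*block:(j+1)*block,
                                   k*block:(k+1)*block].ravel()
                      ad = stats.anderson(part, dist='norm')
                      models[(i, j, k)] = (part.mean(), ad.statistic)
          return models

    A range query is then answered from these per-partition models rather than from the raw field, which is the storage and response-time trade-off the abstract describes.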

  11. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. PMID:26595397

  12. Challenges of Modeling Flood Risk at Large Scales

    NASA Astrophysics Data System (ADS)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    algorithm propagates the flows for each simulated event. The model incorporates a digital terrain model (DTM) at 10 m horizontal resolution, which is used to extract flood plain cross-sections such that a one-dimensional hydraulic model can be used to estimate the extent and elevation of flooding. In doing so, the effect of flood defenses in mitigating floods is accounted for. Finally, a suite of vulnerability relationships has been developed to estimate flood losses for a portfolio of properties that are exposed to flood hazard. Historical experience indicates that for recent floods in Great Britain more than 50% of insurance claims occur outside the flood plain, primarily as a result of excess surface flow, hillside flooding, and flooding due to inadequate drainage. A sub-component of the model addresses this issue by considering several parameters that best explain the variability of claims off the flood plain. The challenges of modeling such a complex phenomenon at a large scale largely dictate the choice of modeling approaches that need to be adopted for each of these model components. While detailed numerically-based physical models exist and have been used for conducting flood hazard studies, they are generally restricted to small geographic regions. In a probabilistic risk estimation framework like our current model, a blend of deterministic and statistical techniques has to be employed such that each model component is independent, physically sound, and able to maintain the statistical properties of observed historical data. This is particularly important because of the highly non-linear behavior of the flooding process. With respect to vulnerability modeling, both on and off the flood plain, the challenges include the appropriate scaling of a damage relationship when applied to a portfolio of properties. This arises from the fact that the estimated hazard parameter used for damage assessment, namely maximum flood depth, has considerable uncertainty. The

  13. Large-scale electromagnetic modeling for multiple inhomogeneous domains

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Endo, M.; Cuma, M.

    2008-12-01

    We develop a new formulation of the integral equation (IE) method for three-dimensional (3D) electromagnetic (EM) field computation in large-scale models with multiple inhomogeneous domains. This problem arises in many practical applications, including modeling the EM fields within the complex geoelectrical structures in geophysical exploration. In geophysical applications, it is difficult to describe an earth structure using a horizontally layered background conductivity model, which is required for the efficient implementation of the conventional IE approach. As a result, a large domain of interest with anomalous conductivity distribution needs to be discretized, which complicates the computations. The new method allows us to consider multiple inhomogeneous domains, where the conductivity distribution is different from that of the background, and to use independent discretizations for different domains. This dramatically reduces the computational resources required for large-scale modeling. In addition, by using this method, we can analyze the response of each domain separately without an inappropriate use of the superposition principle for the EM field calculations. The method was carefully tested for modeling the marine controlled-source electromagnetic (MCSEM) fields for complex geoelectrical structures with multiple inhomogeneous domains, such as a seafloor with rough bathymetry, salt domes, and reservoirs. We have also used this technique to investigate the return induction effects from regional geoelectrical structures, e.g., seafloor bathymetry and salt domes, which can distort the EM response from the geophysical exploration target.

  14. Disinformative data in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Kauffeldt, Anna; Halldin, Sven; Rodhe, Allan; Xu, Chong-Yu; Westerberg, Ida

    2013-04-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aims at identifying two types of data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. Firstly, four hydrographic datasets were examined in terms of how well basin areas were represented in the flow networks. It was found that most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between hydrographic datasets and archived basin areas. Secondly, the consistency between climate data (precipitation and potential evaporation) and discharge data was examined for the possibility of water-balance closure. It was found that basins exhibiting too high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and that the occurrence of basins exhibiting losses exceeding the energy limit were strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. These results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling increases our chances to draw robust conclusions from subsequent model simulations.
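
    The two screening checks lend themselves to a few lines of code. A Python sketch of the idea (variable names and the long-term annual water-balance framing are assumptions, not the study's implementation):

      import numpy as np

      def screen_basins(p_mm, q_mm, pet_mm):
          """Flag basins whose forcing and discharge data cannot close the
          water balance. Inputs are long-term mean annual values per basin
          in mm/yr: precipitation, discharge, potential evaporation."""
          p, q, pet = (np.asarray(a, dtype=float) for a in (p_mm, q_mm, pet_mm))
          runoff_coefficient = q / p
          # coefficient > 1: more runoff than rain, often precipitation
          # undercatch (e.g. snow) rather than a real hydrological signal
          too_much_runoff = runoff_coefficient > 1.0
          # losses (P - Q) beyond potential evaporation break the energy limit
          beyond_energy_limit = (p - q) > pet
          return too_much_runoff, beyond_energy_limit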

  15. Performance modeling and analysis of consumer classes in large scale systems

    NASA Astrophysics Data System (ADS)

    Al-Shukri, Sh.; Lenin, R. B.; Ramaswamy, S.; Anand, A.; Narasimhan, V. L.; Abraham, J.; Varadan, Vijay

    2009-03-01

    Peer-to-Peer (P2P) networks have been used efficiently as building blocks for overlay networks in large-scale distributed network applications with Internet Protocol (IP) based bottom-layer networks. With large-scale Wireless Sensor Networks (WSNs) becoming increasingly realistic, it is important to build overlay networks with WSNs in the bottom layer. A suitable mathematical (stochastic) model for such overlay networks over WSNs is a queuing network with multi-class customers. In this paper, we discuss how these mathematical network models can be simulated using the object-oriented simulation package OMNeT++. We discuss the Graphical User Interface (GUI) developed to accept the input parameter files and execute the simulation. We compare the simulation results with analytical formulas available in the literature for these mathematical models.

  16. MODELING THE LARGE-SCALE BIAS OF NEUTRAL HYDROGEN

    SciTech Connect

    Marín, Felipe A.; Gnedin, Nickolay Y.; Seo, Hee-Jong; Vallinotto, Alberto

    2010-08-01

    We present new analytical estimates of the large-scale bias of neutral hydrogen (H I). We use a simple, non-parametric model which monotonically relates the total mass of a halo M_tot with its H I mass M_HI at zero redshift; for earlier times we assume limiting models for the Ω_HI evolution consistent with the data presently available, as well as two main scenarios for the evolution of our M_HI-M_tot relation. We find that both the linear and the first nonlinear bias terms exhibit a strong evolution with redshift, regardless of the specific limiting model assumed for the H I density over time. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not sensibly affect the measurement of the H I power spectrum.

  17. Modeling and Dynamic Simulation of a Large Scale Helium Refrigerator

    NASA Astrophysics Data System (ADS)

    Lv, C.; Qiu, T. N.; Wu, J. H.; Xie, X. J.; Li, Q.

    In order to simulate the transient behaviors of a newly developed 2 kW helium refrigerator, a numerical model of the critical equipment, including a screw compressor with variable-frequency drive, plate-fin heat exchangers, a turbine expander, and pneumatic valves, was developed. In the simulation, the calculation of the helium thermodynamic properties is based on the 32-parameter modified Benedict-Webb-Rubin (MBWR) equation of state. The start-up process of the warm compressor station with the gas management subsystem, and the cool-down process of the cold box in actual operation, were dynamically simulated. The developed model was verified by comparing the simulated results with the experimental data. In addition, system responses to increasing heat load were simulated. This model can also be used to design and optimize other large-scale helium refrigerators.

  18. Fourier method for large scale surface modeling and registration.

    PubMed

    Shen, Li; Kim, Sungeun; Saykin, Andrew J

    2009-06-01

    Spherical harmonic (SPHARM) description is a powerful Fourier shape modeling method for processing arbitrarily shaped but simply connected 3D objects. As a highly promising method, SPHARM has been widely used in several domains including medical imaging. However, its primary use has been focused on modeling small or moderately sized surfaces that are relatively smooth, due to challenges related to its applicability, robustness and scalability. This paper presents an enhanced SPHARM framework that addresses these issues and shows that the use of SPHARM can expand into broader areas. In particular, we present a simple and efficient Fourier expansion method on the sphere that enables large-scale modeling, and propose a new SPHARM registration method that aims to preserve the important homological properties between 3D models. Although SPHARM is a global descriptor, our experimental results show that the proposed SPHARM framework can accurately describe complicated graphics models and highly convoluted 3D surfaces, and the proposed registration method allows for effective alignment and registration of these 3D models for further processing or analysis. These methods greatly expand the potential of applying SPHARM to broader areas such as computer graphics, medical imaging, CAD/CAM, bioinformatics, and other related geometric modeling and processing fields. PMID:20161536
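
    For reference, the SPHARM description expands each coordinate function of a spherically parameterized surface in spherical harmonics; the degree cutoff L is the resolution knob that the scalability work above pushes upward (standard formulation, not specific to this paper):

      \mathbf{v}(\theta,\varphi)
        = \sum_{l=0}^{L}\sum_{m=-l}^{l} \mathbf{c}_{l}^{m}\, Y_{l}^{m}(\theta,\varphi),
      \qquad
      \mathbf{c}_{l}^{m} = \big(c_{x,l}^{m},\, c_{y,l}^{m},\, c_{z,l}^{m}\big)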

  19. Can global hydrological models reproduce large scale river flood regimes?

    NASA Astrophysics Data System (ADS)

    Eisner, Stephanie; Flörke, Martina

    2013-04-01

    River flooding remains one of the most severe natural hazards. On the one hand, major flood events pose a serious threat to human well-being, causing deaths and considerable economic damage. On the other hand, the periodic occurrence of flood pulses is crucial to maintain the functioning of riverine floodplains and wetlands, and to preserve the ecosystem services the latter provide. In many regions, river floods reveal a distinct seasonality, i.e. they occur at a particular time during the year. This seasonality is related to regionally dominant flood generating processes which can be expressed in river flood types. While in data-rich regions (esp. Europe and North America) the analysis of flood regimes can be based on observed river discharge time series, this data is sparse or lacking in many other regions of the world. This knowledge gap can be filled by global modeling approaches. However, to date most global modeling studies have focused on mean annual or monthly water availability and its change over time, while simulating discharge extremes, both floods and droughts, still remains a challenge for large-scale hydrological models. This study will explore the ability of the global hydrological model WaterGAP3 to simulate the large-scale patterns of river flood regimes, represented by seasonal patterns and the dominant flood type. WaterGAP3 simulates the global terrestrial water balance on a 5 arc minute spatial grid (excluding Greenland and Antarctica) at a daily time step. The model accounts for human interference on river flow, i.e. water abstraction for various purposes, e.g. irrigation, and flow regulation by large dams and reservoirs. Our analysis will provide insight into the general ability of global hydrological models to reproduce river flood regimes and thus will promote the creation of a global map of river flood regimes to provide a spatially inclusive and comprehensive picture. Understanding present-day flood regimes can support both flood risk

  20. A first large-scale flood inundation forecasting model

    NASA Astrophysics Data System (ADS)

    Schumann, G. J.-P.; Neal, J. C.; Voisin, N.; Andreadis, K. M.; Pappenberger, F.; Phanthuwongpakdee, N.; Hall, A. C.; Bates, P. D.

    2013-10-01

    At present, continental- to global-scale flood forecasting predicts discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet, inundation variables are of interest, and all flood impacts are inherently local in nature. This paper proposes a large-scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data-scarce areas. The model was built for the Lower Zambezi River to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. ECMWF ensemble forecast (ENS) data were used to force the VIC (Variable Infiltration Capacity) hydrologic model, which simulated and routed daily flows to the input boundary locations of a 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of channels that play a key role in flood wave propagation. We therefore employed a novel subgrid channel scheme to describe the river network in detail while representing the floodplain at an appropriate scale. The modeling system was calibrated using channel water levels from satellite laser altimetry and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of between one and two model resolutions of an observed flood edge, and inundation area agreement was on average 86%. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.

  1. Large-scale Modeling of Inundation in the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Luo, X.; Li, H. Y.; Getirana, A.; Leung, L. R.; Tesfa, T. K.

    2015-12-01

    Flood events have impacts on the exchange of energy, water and trace gases between land and atmosphere, hence potentially affecting the climate. The Amazon River basin is the world's largest river basin, and seasonal floods occur there each year. Because the basin is characterized by flat gradients, backwater effects are evident in the river dynamics. This factor, together with large uncertainties in river hydraulic geometry, surface topography and other datasets, contributes to difficulties in simulating flooding processes over this basin. We have developed a large-scale inundation scheme in the framework of the Model for Scale Adaptive River Transport (MOSART) river routing model. Both the kinematic wave and the diffusion wave routing methods are implemented in the model. A new process-based algorithm is designed to represent river channel-floodplain interactions. Uncertainties in the input datasets are partly addressed through model calibration. We will present comparisons of simulated results against satellite and in situ observations, and analysis to understand the factors that influence inundation processes in the Amazon Basin.
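
    The two routing options named above are the usual reductions of the Saint-Venant equations. Schematically, for discharge Q, wave celerity c and diffusivity D (a sketch of the standard forms, not MOSART's discretization), only the diffusion wave admits the backwater effects that matter in this flat basin:

      % kinematic wave: pure downstream advection, no backwater
      \frac{\partial Q}{\partial t} + c\,\frac{\partial Q}{\partial x} = 0

      % diffusion wave: the second-order term allows backwater influence
      \frac{\partial Q}{\partial t} + c\,\frac{\partial Q}{\partial x}
        = D\,\frac{\partial^{2} Q}{\partial x^{2}}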

  2. Importance-truncated large-scale shell model

    NASA Astrophysics Data System (ADS)

    Stumpf, Christina; Braun, Jonas; Roth, Robert

    2016-02-01

    We propose an importance-truncation scheme for the large-scale nuclear shell model that extends its range of applicability to larger valence spaces and midshell nuclei. It is based on a perturbative measure for the importance of individual basis states that acts as an additional truncation for the many-body model space in which the eigenvalue problem of the Hamiltonian is solved numerically. Through a posteriori extrapolations of all observables to vanishing importance threshold, the full shell-model results can be recovered. In addition to simple threshold extrapolations, we explore extrapolations based on the energy variance. We apply the importance-truncated shell model for the study of 56Ni in the pf valence space and of 60Zn and 64Ge in the pf g9/2 space. We demonstrate the efficiency and accuracy of the approach, which pave the way for future applications of valence-space interactions derived in ab initio approaches in larger valence spaces.
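
    Earlier importance-truncation work defines the importance measure as a first-order perturbative amplitude of each basis state with respect to a reference state; a sketch of that standard form (the paper may differ in details):

      \kappa_{\nu} = -\,\frac{\langle \Phi_{\nu} \lvert H \rvert \Psi_{\mathrm{ref}} \rangle}{\Delta E_{\nu}},
      \qquad
      \text{retain } \lvert \Phi_{\nu} \rangle \ \text{iff}\ \lvert \kappa_{\nu} \rvert \geq \kappa_{\min}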

  3. Detailed investigation of flowfields within large scale hypersonic inlet models

    NASA Technical Reports Server (NTRS)

    Seebaugh, W. R.; Doran, R. W.; Decarlo, J. P.

    1971-01-01

    Analytical and experimental investigations were conducted to determine the characteristics of the internal flows in model passages representative of hypersonic inlets and also sufficiently large for meaningful data to be obtained. Three large-scale inlet models, each having a different compression ratio, were designed to provide high performance and approximately uniform static-pressure distributions at the throat stations. A wedge forebody was used to simulate the flowfield conditions at the entrance of the internal passages, thus removing the actual vehicle forebody from consideration in the design of the wind-tunnel models. Tests were conducted in a 3.5-foot hypersonic wind tunnel at a nominal test Mach number of 7.4 and a freestream unit Reynolds number of 2,700,000 per foot. From flowfield survey data at the inlet entrance, the entering inviscid and viscous flow conditions were determined prior to the analysis of the data obtained in the internal passages. Detailed flowfield survey data were obtained near the centerlines of the internal passages to define the boundary-layer development on the internal surfaces and the internal shock-wave configuration. Finally, flowfield data were measured across the throats of the inlet models to evaluate the performance of the internal passages. These data and additional results from surface instrumentation and flow visualization studies were utilized to determine the internal flowfield patterns and the inlet performance.

  4. Design of a Tree-Queue Model for a Large-Scale System

    NASA Astrophysics Data System (ADS)

    Park, Byungsung; Yoo, Jaeyeong; Kim, Hagbae

    In a large queuing system, the ratio of filled data in the queue and the waiting time from the head of the queue to the service gate are important factors for process efficiency, because they are too large to ignore. However, much existing research has assumed that these factors are negligible according to queuing theory. Thus, the existing queuing models are not applicable to the design of large-scale systems. Such a system could be used as a product classification center for a home delivery service. In this paper, we propose a tree-queue model for large-scale systems that is more adaptive to efficient processes than existing models. We analyze and design a mean waiting-time equation related to the ratio of filled data in the queue. Based on simulations, the proposed model demonstrated improved process efficiency and is more suitable for realistic modeling of large-scale systems than the other models compared.

  5. Numerically modelling the large scale coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magnetofrictional model, which is relatively simple and computationally more economical. We have developed a magnetofrictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.
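
    In the magnetofrictional approach the momentum equation is replaced by a frictional balance, so the magnetic field evolves through the induction equation with a velocity proportional to the Lorentz force. One common formulation (normalizations of the friction coefficient ν vary between implementations):

      \mathbf{v} = \frac{1}{\nu}\,\frac{(\nabla \times \mathbf{B}) \times \mathbf{B}}{B^{2}},
      \qquad
      \frac{\partial \mathbf{B}}{\partial t} = \nabla \times (\mathbf{v} \times \mathbf{B})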

  6. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSim, a system that we are developing to support exploration and analyses of tera-scale size data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.

  7. Towards a self-consistent halo model for the nonlinear large-scale structure

    NASA Astrophysics Data System (ADS)

    Schmidt, Fabian

    2016-03-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model (EHM) that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed on large scales, and results of the perturbation theory and the effective field theory can, in principle, be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here, this approach still does not describe the transition regime between perturbation theory and halo scales realistically, which is left as an open problem. We also show explicitly that, when implemented consistently, halo model predictions do not depend on any properties of low-mass halos that are smaller than the scales of interest.
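
    For context, the textbook halo-model power spectrum that this formulation amends splits into one- and two-halo terms, with n(M) the halo mass function, b(M) the halo bias and u(k|M) the normalized halo profile (standard expressions, not the paper's EHM):

      P(k) = P_{1h}(k) + P_{2h}(k)

      P_{1h}(k) = \int dM\, n(M) \left(\frac{M}{\bar{\rho}}\right)^{2} \lvert u(k|M) \rvert^{2}

      P_{2h}(k) \simeq \left[\int dM\, n(M)\, \frac{M}{\bar{\rho}}\, b(M)\, u(k|M)\right]^{2} P_{\mathrm{lin}}(k)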

  8. A first large-scale flood inundation forecasting model

    SciTech Connect

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet, inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of an observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  9. Large scale modelling of bankfull flow: An example for Europe

    NASA Astrophysics Data System (ADS)

    Schneider, Christof; Flörke, Martina; Eisner, Stephanie; Voss, Frank

    2011-10-01

    Bankfull flow is a relevant parameter in the field of large-scale modelling, especially for the analysis of environmental flows and flood-related hydrological processes. In our case, bankfull flow data were required within the SCENES project in order to analyse ecologically important inundation events at selected grid cells of a European raster. In practice, the determination of bankfull flow is a complex task even at the local scale. Subsequent to a literature survey of bankfull flow studies, this paper describes a method which can be applied to estimate bankfull flow on a global or continental grid cell raster. The method is based on the partial duration series approach, taking into account a 40-year time series of daily discharge data modelled by the global water model WaterGAP. An increasing threshold censoring procedure, a declustering scheme and the generalised Pareto distribution are applied. Modelled bankfull flow values are then validated by different efficiency criteria against bankfull flows observed at gauging stations in Europe. Thereby, the impact of (i) the applied distribution function, (ii) the threshold setting in the partial duration series, (iii) the climate input data and (iv) applying the annual maxima series is evaluated and compared to the proposed approach. The results show that bankfull flow can be reasonably estimated, with a high model efficiency (E1 = 0.71) and weighted correlation (ωr2 = 0.90) as well as a systematic overestimation of 22.8%. Finally, it turned out that in our study, which focuses on hydrological extremes, the use of daily climate input data is a basic requirement. While the choice of the distribution function had no significant impact on the final results, the threshold setting in the partial duration series was crucial.
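
    The estimation chain in the abstract (threshold exceedances, declustering, generalised Pareto fit, read off a return level) fits in a short script. A sketch in Python with SciPy (the threshold, gap and return-period choices are illustrative, not the paper's calibration):

      import numpy as np
      from scipy import stats

      def bankfull_from_pds(q_daily, threshold, years, min_gap=7,
                            return_period_yr=1.5):
          """Bankfull flow from a partial duration series: decluster threshold
          exceedances, fit a generalised Pareto distribution to the excesses,
          and read off the flow for a chosen return period."""
          q = np.asarray(q_daily, dtype=float)
          idx = np.flatnonzero(q > threshold)
          if idx.size == 0:
              raise ValueError("no exceedances above threshold")
          peaks, cluster = [], [idx[0]]
          for i in idx[1:]:                       # decluster: one peak per cluster
              if i - cluster[-1] <= min_gap:
                  cluster.append(i)
              else:
                  peaks.append(max(cluster, key=lambda j: q[j]))
                  cluster = [i]
          peaks.append(max(cluster, key=lambda j: q[j]))
          shape, _, scale = stats.genpareto.fit(q[peaks] - threshold, floc=0)
          rate = len(peaks) / years               # mean number of peaks per year
          prob = 1.0 - 1.0 / (return_period_yr * rate)
          return threshold + stats.genpareto.ppf(prob, shape, loc=0, scale=scale)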

  10. Symmetry-guided large-scale shell-model theory

    NASA Astrophysics Data System (ADS)

    Launey, Kristina D.; Dytrych, Tomas; Draayer, Jerry P.

    2016-07-01

    In this review, we present a symmetry-guided strategy that utilizes exact as well as partial symmetries for enabling a deeper understanding of and advancing ab initio studies for determining the microscopic structure of atomic nuclei. These symmetries expose physically relevant degrees of freedom that, for large-scale calculations with QCD-inspired interactions, allow the model space size to be reduced through a very structured selection of the basis states to physically relevant subspaces. This can guide explorations of simple patterns in nuclei and how they emerge from first principles, as well as extensions of the theory beyond current limitations toward heavier nuclei and larger model spaces. This is illustrated for the ab initio symmetry-adapted no-core shell model (SA-NCSM) and two significant underlying symmetries, the symplectic Sp(3,R) group and its deformation-related SU(3) subgroup. We review the broad scope of nuclei where these symmetries have been found to play a key role: from the light p-shell systems, such as 6Li, 8B, 8Be, 12C, and 16O, and sd-shell nuclei exemplified by 20Ne, based on first-principle explorations; through the Hoyle state in 12C and enhanced collectivity in intermediate-mass nuclei, within a no-core shell-model perspective; up to strongly deformed species of the rare-earth and actinide regions, as investigated in earlier studies. A complementary picture, driven by symmetries dual to Sp(3,R), is also discussed. We briefly review symmetry-guided techniques that prove useful in various nuclear-theory models, such as the Elliott model, ab initio SA-NCSM, symplectic model, pseudo-SU(3) and pseudo-symplectic models, ab initio hyperspherical harmonics method, ab initio lattice effective field theory, exact pairing-plus-shell model approaches, and cluster models, including the resonating-group method. Important implications of these approaches that have deepened our understanding of emergent phenomena in nuclei, such as enhanced

  11. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers, a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end-user requirements of the discovery process. Our work contrasts with existing research which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.

  12. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using

  13. Graph theoretic modeling of large-scale semantic networks.

    PubMed

    Bales, Michael E; Johnson, Stephen B

    2006-08-01

    During the past several years, social network analysis methods have been used to model many complex real-world phenomena, including social networks, transportation networks, and the Internet. Graph theoretic methods, based on an elegant representation of entities and relationships, have been used in computational biology to study biological networks; however they have not yet been adopted widely by the greater informatics community. The graphs produced are generally large, sparse, and complex, and share common global topological properties. In this review of research (1998-2005) on large-scale semantic networks, we used a tailored search strategy to identify articles involving both a graph theoretic perspective and semantic information. Thirty-one relevant articles were retrieved. The majority (28, 90.3%) involved an investigation of a real-world network. These included corpora, thesauri, dictionaries, large computer programs, biological neuronal networks, word association networks, and files on the Internet. Twenty-two of the 28 (78.6%) involved a graph comprised of words or phrases. Fifteen of the 28 (53.6%) mentioned evidence of small-world characteristics in the network investigated. Eleven (39.3%) reported a scale-free topology, which tends to have a similar appearance when examined at varying scales. The results of this review indicate that networks generated from natural language have topological properties common to other natural phenomena. It has not yet been determined whether artificial human-curated terminology systems in biomedicine share these properties. Large network analysis methods have potential application in a variety of areas of informatics, such as in development of controlled vocabularies and for characterizing a given domain. PMID:16442849
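
    The two topological properties tallied in the review are straightforward to test on any undirected graph. A crude sketch with NetworkX (the thresholds are illustrative heuristics, not criteria from the paper):

      import networkx as nx
      import numpy as np

      def topology_summary(g):
          """Crude checks for small-world character and a heavy-tailed
          (scale-free-like) degree distribution on an undirected graph."""
          giant = g.subgraph(max(nx.connected_components(g), key=len))
          n, m = giant.number_of_nodes(), giant.number_of_edges()
          c = nx.average_clustering(giant)
          c_random = 2.0 * m / (n * (n - 1))     # expected clustering, G(n, p)
          # O(n*m); fine for the modest graphs surveyed here
          l = nx.average_shortest_path_length(giant)
          deg = np.array([d for _, d in giant.degree()])
          return {
              "small_world_like": c > 10 * c_random and l < 2 * np.log(n),
              "heavy_tailed_degrees": deg.var() > 10 * deg.mean(),
          }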

  14. Double-step truncation procedure for large-scale shell-model calculations

    NASA Astrophysics Data System (ADS)

    Coraggio, L.; Gargano, A.; Itaco, N.

    2016-06-01

    We present a procedure that is helpful to reduce the computational complexity of large-scale shell-model calculations, by preserving as much as possible the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, in order to locate the relevant degrees of freedom to describe a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that retains effectively the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven proton and five neutron single-particle orbitals outside 88Sr. We study the dependence of shell-model results upon different truncations of the original model space for the Zr, Mo, Ru, Pd, Cd, and Sn isotopic chains, showing the reliability of this truncation procedure.

  15. Modeling emergent large-scale structures of barchan dune fields

    NASA Astrophysics Data System (ADS)

    Worman, S. L.; Murray, A. B.; Littlewood, R.; Andreotti, B.; Claudin, P.

    2013-10-01

    In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work and from field observations: (1) Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; (2) when dunes become sufficiently large, small dunes are born on their downwind sides ('calving'); and (3) when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first-order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
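    A minimal discrete-entity sketch of the three rules, with invented constants and rate laws, might look as follows; it is meant only to show the structure of such a model, not to reproduce the paper's results.

        import random

        # Each dune is a dict {x: downwind position, v: volume}; the flux,
        # migration, calving, and merging laws are invented for illustration.
        random.seed(0)
        Q_IN, DT, STEPS = 12.0, 1.0, 500
        CALVE_AT, MERGE_TOL = 80.0, 1.0

        dunes = [{"x": 10.0 * i, "v": 30.0 + 5 * random.random()} for i in range(20)]

        for _ in range(STEPS):
            dunes.sort(key=lambda d: d["x"])
            q = Q_IN                                    # rule 1: flux exchange
            for d in dunes:
                d["v"] += DT * (q - 0.1 * d["v"])       # capture influx, leak with size
                q = 0.1 * d["v"]                        # leaked flux feeds the next dune
                d["x"] += DT * 20.0 / max(d["v"], 1.0)  # small dunes migrate faster
            born = []                                   # rule 2: calving
            for d in dunes:
                if d["v"] > CALVE_AT:
                    d["v"] -= 10.0
                    born.append({"x": d["x"] + 2.0, "v": 10.0})
            dunes.extend(born)
            dunes.sort(key=lambda d: d["x"])            # rule 3: merging
            merged = [dunes[0]]
            for d in dunes[1:]:
                if d["x"] - merged[-1]["x"] < MERGE_TOL:
                    merged[-1]["v"] += d["v"]           # collision: coalesce
                else:
                    merged.append(d)
            dunes = merged

        sizes = [d["v"] for d in dunes]
        print(len(dunes), "dunes, mean volume", round(sum(sizes) / len(sizes), 1))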

  17. Modeling parametric scattering instabilities in large-scale expanding plasmas

    NASA Astrophysics Data System (ADS)

    Masson-Laborde, P. E.; Hüller, S.; Pesme, D.; Casanova, M.; Loiseau, P.; Labaune, Ch.

    2006-06-01

    We present results from two-dimensional simulations of long scale-length laser-plasma interaction experiments performed at LULI. With the goal of predictive modeling of such experiments with our code Harmony2D, we take into account realistic plasma density and velocity profiles, the propagation of the laser light beam and the scattered light, as well as the coupling with the ion acoustic waves in order to describe Stimulated Brillouin Scattering (SBS). Laser pulse shaping is taken into account to follow the evolution of the SBS reflectivity as closely as possible to the experiment. The light reflectivity is analyzed by distinguishing the backscattered light confined in the solid angle defined by the aperture of the incident light beam from the scattered light outside this cone. As in the experiment, it is observed that the aperture of the scattered light tends to increase with the mean intensity of the RPP-smoothed laser beam. A further common feature between simulations and experiments is the observed localization of the SBS-driven ion acoustic waves (IAW) in the front part of the target (with respect to the incoming laser beam).
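    The scale of such simulations can be contrasted with the textbook steady-state limit: even for two coupled intensities the SBS reflectivity requires a two-point boundary-value solve, as in this deliberately simplified sketch (gain, length, and seed level are invented, and all kinetic and 2-D effects are ignored).

        import numpy as np

        # Steady-state two-wave SBS intensity equations in the strong-damping
        # limit: dIp/dz = -g*Ip*Is and dIs/dz = -g*Ip*Is, with the pump Ip
        # entering at z=0 and the backscattered seed Is prescribed at z=L.
        g, L, N = 2.0, 1.0, 4000
        Ip0, seed = 1.0, 1e-6
        dz = L / (N - 1)

        def backscatter_exit(Is0):
            """March from z=0 to z=L; return Is(L) for a guessed Is(0)."""
            Ip, Is = Ip0, Is0
            for _ in range(N - 1):
                d = -g * Ip * Is * dz
                Ip, Is = Ip + d, Is + d      # Ip - Is is conserved along z
            return Is

        # Shooting by bisection: pick Is(0) so that Is(L) matches the seed.
        lo, hi = seed, Ip0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if backscatter_exit(mid) > seed:
                hi = mid                     # too much backscatter: lower guess
            else:
                lo = mid
        print(f"SBS reflectivity ~ {0.5 * (lo + hi) / Ip0:.2e}")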

  18. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    NASA Astrophysics Data System (ADS)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment
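    The diffusive smoothing and the canyon sink described above can be caricatured in one dimension, as in the sketch below; all coefficients are invented, and the full model is of course two-dimensional with explicit wave-driven transport and lithology.

        import numpy as np

        # y[i] is the cross-shore shoreline position along the coast (dx = 1).
        # Wave-driven alongshore transport acts diffusively; one cell hosts a
        # submarine canyon that steadily removes sediment. All values invented.
        rng = np.random.default_rng(2)
        n, D, dt, sink, steps = 200, 1.0, 0.2, 0.02, 5000
        y = np.cumsum(rng.standard_normal(n))          # rough initial coastline
        y0, canyon = y.copy(), 120

        for _ in range(steps):
            y[1:-1] += dt * D * (y[:-2] - 2.0 * y[1:-1] + y[2:])  # smoothing
            y[canyon] -= dt * sink                     # canyon sediment sink

        print(f"mean local relief: {np.abs(np.diff(y0)).mean():.2f} "
              f"-> {np.abs(np.diff(y)).mean():.2f}")
        print(f"extra retreat near the canyon cell: {y0[canyon] - y[canyon]:.1f}")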

  19. An empirical model relating U.S. monthly hail occurrence to large-scale meteorological environment

    NASA Astrophysics Data System (ADS)

    Allen, John T.; Tippett, Michael K.; Sobel, Adam H.

    2015-03-01

    An empirical model relating monthly hail occurrence to the large-scale environment has been developed and tested for the United States (U.S.). Monthly hail occurrence for each 1°×1° grid box is defined as the number of hail events that occur there during a month; a hail event consists of a 3 h period with at least one report of hail larger than 1 in. The model is derived using climatological annual cycle data only. Environmental variables are taken from the North American Regional Reanalysis (NARR; 1979-2012). The model includes four environmental variables: convective precipitation, convective available potential energy, storm relative helicity, and mean surface to 90 hPa specific humidity. The model differs in its choice of variables and their relative weighting from existing severe weather indices. The model realistically matches the annual cycle of hail occurrence both regionally and for the contiguous U.S. (CONUS). The modeled spatial distribution is also consistent with the observed hail climatology. However, the westward shift of maximum hail frequency during the summer months is delayed in the model relative to observations, and the model has a lower frequency of hail just east of the Rocky Mountains compared to observations. Year-to-year variability provides an independent test of the model. On monthly and annual time scales, the model reproduces observed hail frequencies. Overall model trends are small compared to observed changes, suggesting that further analysis is necessary to differentiate between physical and nonphysical trends. The empirical hail model provides a new tool for exploration of connections between large-scale climate and severe weather.
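    A minimal stand-in for this kind of empirical model is a regression of monthly counts on the four environment variables; the sketch below fits a Poisson regression on synthetic data by Newton iterations. The functional form, the data, and every coefficient are illustrative assumptions, not the published model.

        import numpy as np

        # Synthetic, standardized stand-ins for the four monthly predictors
        # (convective precipitation, CAPE, SRH, mean specific humidity).
        rng = np.random.default_rng(3)
        n = 500
        X = rng.standard_normal((n, 4))
        y = rng.poisson(np.exp(X @ np.array([0.8, 0.5, 0.3, 0.4]) - 1.0))

        A = np.hstack([np.ones((n, 1)), X])     # design matrix with intercept
        b = np.zeros(5)
        for _ in range(25):                     # Newton/IRLS for the Poisson GLM
            mu = np.exp(A @ b)
            b += np.linalg.solve(A.T @ (A * mu[:, None]), A.T @ (y - mu))

        print("fitted [intercept, cPrcp, CAPE, SRH, q]:", b.round(2))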

  20. Advancing Software Architecture Modeling for Large Scale Heterogeneous Systems

    SciTech Connect

    Gorton, Ian; Liu, Yan

    2010-11-07

    In this paper we describe how incorporating technology-specific modeling at the architecture level can help reduce risks and produce better designs for large, heterogeneous software applications. We draw an analogy with established modeling approaches in scientific domains, using groundwater modeling as an example, to help illustrate gaps in current software architecture modeling approaches. We then describe the advances in modeling, analysis and tooling that are required to bring sophisticated modeling and development methods within reach of software architects.

  1. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  2. A robust and quick method to validate large scale flood inundation modelling with SAR remote sensing

    NASA Astrophysics Data System (ADS)

    Schumann, G. J.; Neal, J. C.; Bates, P. D.

    2011-12-01

    With flood frequency likely to increase as a result of altered precipitation patterns triggered by climate change, there is a growing demand for more data and, at the same time, improved flood inundation modeling. The aim is to develop more reliable flood forecasting systems over large scales that account for errors and inconsistencies in observations, modeling, and output. Over the last few decades, there have been major advances in the fields of remote sensing, particularly microwave remote sensing, and flood inundation modeling. At the same time both research communities are attempting to roll out their products on a continental to global scale. In a first attempt to harmonize both research efforts on a very large scale, a two-dimensional flood model has been built for the Niger Inland Delta basin in northwest Africa on a 700 km reach of the Niger River, an area similar in size to the UK. This scale demands a different approach to traditional 2D model structuring, and we have implemented a simplified version of the shallow water equations as developed in [1] and complemented this formulation with a sub-grid structure for simulating flows in a channel much smaller than the actual grid resolution of the model. This joint scheme allows flood flows to be modeled in two dimensions at efficient computational speeds without losing channel resolution when moving to coarse model grids. Using gaged daily flows, the model was applied to simulate the wetting and drying of the Inland Delta floodplain for 7 years from 2002 to 2008, taking less than 30 minutes to simulate 365 days at 1 km resolution. In these rather data-poor regions of the world and at this type of scale, verification of flood modeling is realistically only feasible with wide swath or global mode remotely sensed imagery. Validation of the Niger model was carried out using sequential global mode SAR images over the period 2006/7. This scale not only requires different types of models and
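    Simplified formulations of this kind are, in essence, inertial approximations of the shallow water equations with semi-implicit friction; a bare-bones 1-D sketch of such an update (with invented grid, roughness, and boundary values, and without the sub-grid channels) is given below.

        import numpy as np

        # 1-D inertial shallow-water update with Manning friction.
        n, dx, dt, g, nman = 100, 1000.0, 50.0, 9.81, 0.03
        z = np.linspace(10.0, 0.0, n)           # bed elevation
        h = np.full(n, 0.1)                     # water depth
        q = np.zeros(n - 1)                     # unit discharge at cell faces

        for _ in range(2000):
            h[0] = 2.0                          # upstream boundary: fixed stage
            eta = z + h                         # free-surface elevation
            hf = np.maximum(np.maximum(eta[:-1], eta[1:])
                            - np.maximum(z[:-1], z[1:]), 1e-6)  # face flow depth
            grad = (eta[1:] - eta[:-1]) / dx
            q = (q - g * hf * dt * grad) / (1.0 + g * dt * nman ** 2
                                            * np.abs(q) / hf ** (7.0 / 3.0))
            h[1:-1] += dt * (q[:-1] - q[1:]) / dx          # mass conservation

        print("downstream depths:", h[-5:].round(2))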

  3. Investigation of models for large scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1982-01-01

    Long-range numerical prediction and climate simulation experiments with various global atmospheric general circulation models are reported. A chronological listing of the titles of all publications and technical reports already distributed is presented together with an account of the most recent research. Several reports on a series of perpetual January climate simulations with the GISS coarse mesh climate model are listed. A set of perpetual July climate simulations with the same model is presented and the results are described.

  4. Propagating waves in visual cortex: a large-scale model of turtle visual cortex.

    PubMed

    Nenadic, Zoran; Ghosh, Bijoy K; Ulinski, Philip

    2003-01-01

    This article describes a large-scale model of turtle visual cortex that simulates the propagating waves of activity seen in real turtle cortex. The cortex model contains 744 multicompartment models of pyramidal cells, stellate cells, and horizontal cells. Input is provided by an array of 201 geniculate neurons modeled as single compartments with spike-generating mechanisms and axons modeled as delay lines. Diffuse retinal flashes or presentation of spots of light to the retina are simulated by activating groups of geniculate neurons. The model is limited in that it does not have a retina to provide realistic input to the geniculate and the cortex, and it does not incorporate all of the biophysical details of real cortical neurons. However, the model does reproduce the fundamental features of planar propagating waves. Activation of geniculate neurons produces a wave of activity that originates at the rostrolateral pole of the cortex at the point where a high density of geniculate afferents enter the cortex. Waves propagate across the cortex with velocities of 4 μm/ms to 70 μm/ms and occasionally reflect from the caudolateral border of the cortex. PMID:12567015
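    The core mechanism, excitable units relaying activity to their neighbors through axonal delays, can be sketched with a chain of leaky integrate-and-fire cells; all constants below are invented, the geometry is one-dimensional rather than a 2-D cortical sheet, and each cell is allowed one spike (a crude hard refractory period) so the front passes only once.

        import numpy as np

        n, steps, dt = 100, 700, 0.5            # cells, time steps, ms per step
        tau, v_th, w, delay = 20.0, 1.0, 1.5, 6
        v = np.zeros(n)
        fired_at = np.full(n, -1)
        spikes = np.zeros((steps, n), dtype=bool)

        for t in range(steps):
            drive = np.zeros(n)
            if t < 5:
                drive[:3] = 2.0                 # brief "geniculate" input at one pole
            if t >= delay:
                s = spikes[t - delay]
                drive[1:] += w * s[:-1]         # delayed excitation of neighbors
                drive[:-1] += w * s[1:]
            v += dt * (-v / tau) + drive        # leaky integration
            can = (v >= v_th) & (fired_at < 0)  # one spike per cell
            spikes[t], fired_at[can], v[can] = can, t, 0.0

        speed = (n - 1) / (dt * (fired_at[-1] - fired_at[0]))
        print(f"front speed ~ {speed:.2f} cells/ms")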

  5. Coordinated reset stimulation in a large-scale model of the STN-GPe circuit

    PubMed Central

    Ebert, Martin; Hauptmann, Christian; Tass, Peter A.

    2014-01-01

    Synchronization of populations of neurons is a hallmark of several brain diseases. Coordinated reset (CR) stimulation is a model-based stimulation technique which specifically counteracts abnormal synchrony by desynchronization. Electrical CR stimulation, e.g., for the treatment of Parkinson's disease (PD), is administered via depth electrodes. In order to get a deeper understanding of this technique, we extended the top-down approach of previous studies and constructed a large-scale computational model of the respective brain areas. Furthermore, we took into account the spatial anatomical properties of the simulated brain structures and incorporated a detailed numerical representation of 2 · 10⁴ simulated neurons. We simulated the subthalamic nucleus (STN) and the globus pallidus externus (GPe). Connections within the STN were governed by spike-timing dependent plasticity (STDP). In this way, we modeled the physiological and pathological activity of the considered brain structures. In particular, we investigated how plasticity could be exploited and how the model could be shifted from strongly synchronized (pathological) activity to strongly desynchronized (healthy) activity of the neuronal populations via CR stimulation of the STN neurons. Furthermore, we investigated the impact of specific stimulation parameters, especially the electrode position, on the stimulation outcome. Our model provides a step forward toward a biophysically realistic model of the brain areas relevant to the emergence of pathological neuronal activity in PD. Furthermore, our model constitutes a test bench for the optimization of both stimulation parameters and novel electrode geometries for efficient CR stimulation. PMID:25505882
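    The desynchronizing logic of CR stimulation can be illustrated on a much smaller system of coupled phase oscillators, as below; the network, parameters, and reset rule are toy assumptions, and in such runs the synchrony order parameter R typically drops once the staggered resets begin.

        import numpy as np

        # Mean-field Kuramoto population split across four stimulation
        # contacts; CR periodically resets one contact's sub-population.
        rng = np.random.default_rng(4)
        n, dt, K = 400, 0.01, 1.5
        omega = rng.normal(10.0, 0.5, n)
        theta = rng.uniform(0.0, 2 * np.pi, n)
        contact = np.arange(n) % 4

        def R(th):
            return np.abs(np.exp(1j * th).mean())

        for step in range(6000):
            z = np.exp(1j * theta).mean()
            theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
            if step >= 3000 and step % 25 == 0:     # CR on in the second half
                k = (step // 25) % 4                # cycle through the contacts
                theta[contact == k] = 0.0           # reset that sub-population
            if step in (2999, 5999):
                print(f"step {step + 1}: R = {R(theta):.2f}")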

  6. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    PubMed Central

    2010-01-01

    Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows
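    The flavor of such a comparison can be reproduced in miniature by running a stochastic individual-level SIR against its deterministic compartmental counterpart with identical parameters (all values invented) and comparing peak timing:

        import numpy as np

        rng = np.random.default_rng(7)
        N, beta, gamma, days = 10000, 0.3, 0.1, 300

        # Individual-based: each day every susceptible is infected with
        # probability 1 - exp(-beta*I/N); infecteds recover with 1 - exp(-gamma).
        S, I = N - 10, 10
        curve_abm = []
        for _ in range(days):
            new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N))
            new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
            S, I = S - new_inf, I + new_inf - new_rec
            curve_abm.append(I)

        # Compartmental counterpart: deterministic Euler steps of the same SIR.
        s, i = float(N - 10), 10.0
        curve_comp = []
        for _ in range(days):
            inf = beta * s * i / N
            s, i = s - inf, i + inf - gamma * i
            curve_comp.append(i)

        print("peak day, individual-based :", int(np.argmax(curve_abm)))
        print("peak day, compartmental    :", int(np.argmax(curve_comp)))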

  7. Large-scale measurement and modeling of backbone Internet traffic

    NASA Astrophysics Data System (ADS)

    Roughan, Matthew; Gottlieb, Joel

    2002-07-01

    There is a brewing controversy in the traffic modeling community concerning how to model backbone traffic. The fundamental work on self-similarity in data traffic appears to be contradicted by recent findings that suggest that backbone traffic is smooth. The traffic analysis work to date has focused on high-quality but limited-scope packet trace measurements; this limits its applicability to high-speed backbone traffic. This paper uses more than one year's worth of SNMP traffic data covering an entire Tier 1 ISP backbone to address the question of how backbone network traffic should be modeled. Although the limitations of SNMP measurements do not permit us to comment on the fine timescale behavior of the traffic, careful analysis of the data suggests that irrespective of the variation at fine timescales, we can construct a simple traffic model that captures key features of the observed traffic. Furthermore, the model's parameters are measurable using existing network infrastructure, making this model practical in a present-day operational network. In addition to its practicality, the model verifies basic statistical multiplexing results and thus offers insight into how smooth backbone traffic really is.
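    The statistical multiplexing argument can be demonstrated with a toy aggregation experiment: summing many independent ON/OFF sources drives the relative variability of the total down roughly as one over the square root of their number. The sources below are simple geometric toggles rather than heavy-tailed ones, and all rates are invented.

        import numpy as np

        rng = np.random.default_rng(8)
        T, p_flip = 5000, 0.05

        def aggregate(n_src):
            flips = rng.random((n_src, T)) < p_flip
            states = np.cumsum(flips, axis=1) % 2   # each source toggles ON/OFF
            return states.sum(axis=0)               # total traffic per time bin

        for n_src in (1, 10, 100, 1000):
            agg = aggregate(n_src)
            cv = agg.std() / max(agg.mean(), 1e-9)
            print(f"{n_src:5d} sources: coefficient of variation {cv:.3f}")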

  8. Statistical Modeling of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T; Abdulla, G

    2002-02-22

    With the advent of fast computer systems, scientists are now able to generate terabytes of simulation data. Unfortunately, the sheer size of these data sets has made efficient exploration of them impossible. To aid scientists in gathering knowledge from their simulation data, we have developed an ad-hoc query infrastructure. Our system, called AQSim (short for Ad-hoc Queries for Simulation), reduces the data storage requirements and access times in two stages. First, it creates and stores mathematical and statistical models of the data. Second, it evaluates queries on the models of the data instead of on the entire data set. In this paper, we present two simple but highly effective statistical modeling techniques for simulation data. Our first modeling technique computes the true mean of systematic partitions of the data. It makes no assumptions about the distribution of the data and uses a variant of the root mean square error to evaluate a model. In our second statistical modeling technique, we use the Anderson-Darling goodness-of-fit method on systematic partitions of the data. This second method evaluates a model by how well it passes the normality test on the data. Both of our statistical models summarize the data so as to answer range queries in the most effective way. We calculate precision on an answer to a query by scaling the one-sided Chebyshev inequality with the original mesh's topology. Our experimental evaluations on two scientific simulation data sets illustrate the value of using these statistical modeling techniques on large simulation data sets.
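    The first technique and the Chebyshev-based precision bound can be sketched as follows on synthetic data; the partition sizes, the query, and the Gaussian test field are invented, and the real system additionally scales the bound with the mesh topology.

        import numpy as np

        # Stage 1: store only a mean and variance per systematic partition.
        rng = np.random.default_rng(9)
        data = rng.normal(300.0, 40.0, size=2**20)     # synthetic field values
        parts = data.reshape(1024, -1)
        mu, var = parts.mean(axis=1), parts.var(axis=1)

        # Stage 2: answer "fraction of values above t" from the tiny model,
        # bounding each partition with the one-sided Chebyshev inequality
        # P(X - mu >= t) <= var / (var + t^2).
        def bound_frac_above(t):
            d = t - mu
            return np.where(d <= 0.0, 1.0, var / (var + d * d)).mean()

        print("Chebyshev bound:", round(bound_frac_above(380.0), 4))
        print("exact fraction :", round(float((data > 380.0).mean()), 4))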

  9. Multilevel method for modeling large-scale networks.

    SciTech Connect

    Safro, I. M.

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as the power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability on real (artificial) data of algorithms that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
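    For reference, the Kronecker-product style of generator that the project critiques can be written in a few lines; the initiator matrix and sizes below are illustrative choices, not values from any particular study.

        import numpy as np

        # Kronecker-product graph generation from a 2x2 initiator: the edge
        # probability matrix is expanded 9 times to 1024 nodes, then each
        # edge is sampled independently.
        P1 = np.array([[0.9, 0.5],
                       [0.5, 0.1]])
        P = P1
        for _ in range(9):
            P = np.kron(P, P1)

        rng = np.random.default_rng(10)
        A = rng.random(P.shape) < P                 # sampled adjacency matrix
        deg = A.sum(axis=1)
        print("nodes:", P.shape[0], "edges:", int(A.sum()),
              "max degree:", int(deg.max()))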

  10. Large scale structures and the cubic galileon model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Dialektopoulos, Konstantinos F.; Tomaras, Theodore N.

    2016-05-01

    The maximum size of a bound cosmic structure is computed perturbatively as a function of its mass in the framework of the cubic galileon, proposed recently to model the dark energy of our Universe. Comparison of our results with observations constrains the matter-galileon coupling of the model to 0.033 ≲ α ≲ 0.17, thus improving previous bounds based solely on solar system physics.

  11. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1981-01-01

    An attempt is made to compute the contributions of various surface boundary conditions to the monthly mean states generated by the 7-layer, 8° x 10° GISS climate model (Hansen et al., 1980), and also to examine the influence of initial conditions on the model climate simulations. Obvious climatic controls such as the shape and rotation of the Earth, the solar radiation, and the dry composition of the atmosphere are fixed, and only the surface boundary conditions are altered in the various climate simulations.

  12. Geometric algorithms for electromagnetic modeling of large scale structures

    NASA Astrophysics Data System (ADS)

    Pingenot, James

    With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multi-pole Methods (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. Circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  13. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monine, Michael I; Colvin, Joshua; Faeder, James

    2008-01-01

    Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language (BNGL), which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
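    The null-event idea, namely drawing candidate event times at a fixed upper-bound rate and rejecting the surplus, can be shown on a single reversible binding rule; the rates and copy numbers below are invented, and DYNSTOC itself of course operates on full BNGL rule sets.

        import random

        # One reversible rule A + B <-> AB simulated with a null-event
        # rejection scheme: candidate times are drawn at a static bound rmax
        # on the total rate, and candidates beyond the actual rates become
        # null events.
        random.seed(11)
        kon, koff = 0.01, 1.0
        A, B, AB = 100, 100, 0
        t, t_end = 0.0, 20.0
        rmax = kon * 100 * 100 + koff * 100      # static upper bound on total rate

        while t < t_end:
            t += random.expovariate(rmax)        # next candidate event time
            r_bind, r_unbind = kon * A * B, koff * AB
            u = random.random() * rmax
            if u < r_bind:
                A, B, AB = A - 1, B - 1, AB + 1  # binding event accepted
            elif u < r_bind + r_unbind:
                A, B, AB = A + 1, B + 1, AB - 1  # unbinding event accepted
            # otherwise: null event, state unchanged
        print(f"t = {t:.1f}: A = {A}, B = {B}, AB = {AB}")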

  14. Modeling and simulation of large scale stirred tank

    NASA Astrophysics Data System (ADS)

    Neuville, John R.

    The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process through the construction of numerical models that resemble the geometry of this process. Seven numerical models were constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software, and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants, which had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and the Savannah River Site in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat-bottom cylindrical mixing vessel with a centrally located helical coil and an agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full-scale vessel contains about 9500 gallons of slurry, with flow behavior characterized as a Bingham plastic. Particles in the mixture have an abrasive character that causes excessive erosion of internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the

  15. Large-Scale Modeling of Wordform Learning and Representation

    ERIC Educational Resources Information Center

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn nearly…

  16. Modelling large scale human activity in San Francisco

    NASA Astrophysics Data System (ADS)

    Gonzalez, Marta

    2010-03-01

    Diverse groups of people with a wide variety of schedules, activities, and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behavior in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, have intrinsic limitations in the number of individuals they can cover, and raise some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally, we use a complementary data set given by smart subway fare cards, offering us information about the exact time at which each passenger enters or exits a subway station together with the station's coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns in each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travel that incorporates the different aspects observed empirically.
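    Two of the stated ingredients, placement proportional to density and heavy-tailed return visits, can be mocked up in a few lines; the districts, exponent, and trip counts below are invented stand-ins for the phone and fare-card data.

        import numpy as np

        rng = np.random.default_rng(12)
        density = np.array([0.4, 0.3, 0.2, 0.1])        # four toy districts
        home = rng.choice(4, size=1000, p=density)      # placement ~ density

        ranks = np.arange(1, 6).astype(float)           # 5 locations per agent
        weights = ranks ** -1.2                         # Zipf-like return law
        p_visit = weights / weights.sum()
        trips = rng.choice(5, size=(1000, 30), p=p_visit)

        print("agents per district:", np.bincount(home))
        print("share of trips to the top-ranked place:",
              round(float((trips == 0).mean()), 2))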

  17. Parameterization of Fire Injection Height in Large Scale Transport Model

    NASA Astrophysics Data System (ADS)

    Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.

    2012-12-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified; in particular, we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. The FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed, based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e., avoiding large fire clusters where individual plumes might interact), and (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e., to minimize the effect of the time period needed for the plume to

  18. Parameterization of Fire Injection Height in Large Scale Transport Model

    NASA Astrophysics Data System (ADS)

    Paugam, r.; Wooster, m.; Freitas, s.; Gonzi, s.; Palmer, p.

    2012-04-01

    The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP), which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modelled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified. Major modifications are implemented in its initialisation module: (i) the CHF and the Active Fire area are directly forced by FRP data derived from a modified version of the Dozier algorithm applied to the MOD12 product, and (ii) a new module for the buoyancy flux calculation is implemented instead of the original module based on the Morton, Taylor, and Turner equation. Furthermore, the dynamical core of the plume model is also modified with a new entrainment scheme inspired by the latest results from shallow convection parameterization. Optimization and validation of this new version of the Freitas PRM is based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set is (i) built up to keep only fires where plume height and FRP can be easily linked (i.e., avoiding large fire clusters where individual plumes might interact) and (ii) split per fire land cover type to tune the constant of the buoyancy flux module and the entrainment scheme to different fire regimes. Results show that the new PRM is
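    The backbone of both PRM variants is a set of entraining-plume equations integrated upward until the updraft stalls; a minimal sketch in the Morton-Taylor-Turner spirit is shown below, with invented initial fluxes standing in for the FRP-based initialization.

        import numpy as np

        # V, M, B are the plume's volume, momentum, and buoyancy fluxes.
        alpha, N2, dz = 0.1, 1.0e-4, 10.0   # entrainment, stratification, step (m)
        R, w, gp = 50.0, 5.0, 0.5           # radius (m), updraft (m/s), buoyancy (m/s^2)
        V, M, B = R * R * w, R * R * w * w, R * R * w * gp
        z = 0.0

        while w > 0.1 and z < 2.0e4:
            V += dz * 2.0 * alpha * R * w   # entrainment widens the plume
            M += dz * R * R * gp            # buoyancy drives the momentum flux
            B -= dz * R * R * w * N2        # stable stratification erodes buoyancy
            if M <= 0.0:
                break                       # updraft has stalled
            w, gp = M / V, B / V
            R = np.sqrt(V / w)
            z += dz

        print(f"estimated injection height: {z / 1000.0:.1f} km")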

  19. GIS for large-scale watershed observational data model

    NASA Astrophysics Data System (ADS)

    Patino-Gomez, Carlos

    Because integrated management of a river basin requires the development of models that are used for many purposes, e.g., to assess risks and possible mitigation of droughts and floods, manage water rights, assess water quality, and simply to understand the hydrology of the basin, the development of a relational database from which models can access the various data needed to describe the systems being modeled is fundamental. In order for this concept to be useful and widely applicable, however, it must have a standard design. The recently developed ArcHydro data model facilitates the organization of data according to the "basin" principle and allows access to hydrologic information by models. The development of a basin-scale relational database for the Rio Grande/Bravo basin implemented in a Geographic Information System is one of the contributions of this research. This geodatabase represents the first major attempt to establish a more complete understanding of the basin as a whole, including spatial and temporal information obtained from the United States of America and Mexico. Difficulties in processing raster datasets over large regions are studied in this research. One of the most important contributions is the application of a Raster-Network Regionalization technique, which utilizes raster-based analysis at the subregional scale in an efficient manner and combines the resulting subregional vector datasets into a regional database. Another important contribution of this research is focused on implementing a robust structure for handling huge temporal data sets related to monitoring points such as hydrometric and climatic stations, reservoir inlets and outlets, water rights, etc. For the Rio Grande study area, the ArcHydro format is applied to the historical information collected in order to include and relate these time series to the monitoring points in the geodatabase. Its standard time series format is changed to include a relationship to the agency from

  20. Testing model independent modified gravity with future large scale surveys

    SciTech Connect

    Thomas, Daniel B.; Contaldi, Carlo R. E-mail: c.contaldi@ic.ac.uk

    2011-12-01

    Model-independent parametrisations of modified gravity have attracted a lot of attention over the past few years, and numerous combinations of experiments and observables have been suggested to constrain the parameters used in these models. Galaxy clusters have been mentioned, but not looked at as extensively in the literature as some other probes. Here we look at adding galaxy clusters into the mix of observables and examine how they could improve the constraints on the modified gravity parameters. In particular, we forecast the constraints from combining Planck satellite Cosmic Microwave Background (CMB) measurements and the Sunyaev-Zeldovich (SZ) cluster catalogue with a DES-like Weak Lensing (WL) survey. We find that cluster counts significantly improve the constraints over those derived using CMB and WL alone. We then look at surveys further into the future to see how much further it may be feasible to tighten the constraints.
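    Forecasts of this kind typically rest on Fisher-matrix algebra, in which independent probes add in information space; the toy matrices below are invented stand-ins used only to show the mechanics of combining CMB, WL, and cluster constraints.

        import numpy as np

        # 2x2 Fisher matrices for two modified-gravity parameters.
        F_cmb = np.array([[40.0, 5.0], [5.0, 2.0]])
        F_wl = np.array([[10.0, 8.0], [8.0, 20.0]])
        F_sz = np.array([[30.0, 2.0], [2.0, 15.0]])    # cluster counts

        for name, F in (("CMB+WL", F_cmb + F_wl),
                        ("CMB+WL+clusters", F_cmb + F_wl + F_sz)):
            sigma = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalized 1-sigma
            print(f"{name:16s} sigma = {sigma.round(3)}")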

  1. Multistability in Large Scale Models of Brain Activity

    PubMed Central

    Golos, Mathieu; Jirsa, Viktor; Daucé, Emmanuel

    2015-01-01

    Noise driven exploration of a brain network’s dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network’s capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain’s dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system’s attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a bigger number of attractors than ever reported so far. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. Those different modes share similarity with the fMRI independent components observed in the “resting state” condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors. PMID:26709852
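    The attractor census at the heart of this approach can be sketched with a small random symmetric coupling matrix in place of a human connectome; the sizes, inverse temperature, and fingerprinting rule below are illustrative assumptions.

        import numpy as np

        # Graded-response Hopfield dynamics; fixed points reached from random
        # initial conditions are fingerprinted by their sign pattern.
        rng = np.random.default_rng(5)
        n = 64
        W = rng.standard_normal((n, n))
        W = (W + W.T) / 2.0
        np.fill_diagonal(W, 0.0)

        def settle(x, beta=4.0, steps=1500, dt=0.1):
            for _ in range(steps):
                x += dt * (-x + np.tanh(beta * (W @ x)))
            return x

        attractors = set()
        for _ in range(200):                # uniform sampling of initial states
            x = settle(rng.uniform(-1.0, 1.0, n))
            attractors.add(tuple(np.sign(x).astype(int)))
        print(len(attractors), "distinct fixed-point patterns found")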

  3. Renormalizing a viscous fluid model for large scale structure formation

    NASA Astrophysics Data System (ADS)

    Führer, Florian; Rigopoulos, Gerasimos

    2016-02-01

    Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry as the full system of continuity and Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean invariance, higher order vertices related to the viscosity and the noise must then be added, and we explicitly show at one loop that these terms act as counter terms for vertex diagrams. The Ward identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring fewer free parameters for its renormalization.

  4. Large scale molecular dynamics modeling of materials fabrication processes

    SciTech Connect

    Belak, J.; Glosli, J.N.; Boercker, D.B.; Stowers, I.F.

    1994-02-01

    An atomistic molecular dynamics model of materials fabrication processes is presented. Several material removal processes are shown to be within the domain of this simulation method. Results are presented for orthogonal cutting of copper and silicon and for crack propagation in silica glass. Both copper and silicon show ductile behavior, but the atomistic mechanisms that allow this behavior are significantly different in the two cases. The copper chip remains crystalline while the silicon chip transforms into an amorphous state. The critical stress for crack propagation in silica glass was found to be in reasonable agreement with experiment and a novel stick-slip phenomenon was observed.
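    The basic machinery behind such simulations is classical molecular dynamics; a generic Lennard-Jones velocity-Verlet loop in reduced units is sketched below, whereas the actual study relied on far more elaborate interatomic potentials for copper, silicon, and silica. All values here are generic.

        import numpy as np

        n_side, L, dt = 4, 8.0, 0.005
        g = (np.arange(n_side) + 0.5) * (L / n_side)
        pos = np.array(np.meshgrid(g, g, g)).reshape(3, -1).T  # 64-atom lattice
        rng = np.random.default_rng(6)
        vel = rng.normal(0.0, 0.5, pos.shape)
        vel -= vel.mean(axis=0)                 # remove net momentum

        def forces(pos):
            d = pos[:, None, :] - pos[None, :, :]
            d -= L * np.round(d / L)            # minimum-image convention
            r2 = (d ** 2).sum(-1)
            np.fill_diagonal(r2, np.inf)
            inv6 = r2 ** -3                     # (1/r)^6 in reduced units
            return ((24.0 * inv6 * (2.0 * inv6 - 1.0) / r2)[:, :, None] * d).sum(1)

        f = forces(pos)
        for _ in range(200):                    # velocity-Verlet integration
            vel += 0.5 * dt * f
            pos = (pos + dt * vel) % L          # periodic boundaries
            f = forces(pos)
            vel += 0.5 * dt * f

        print("kinetic energy per atom:", round(0.5 * (vel ** 2).sum() / 64, 3))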

  5. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-07-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that were part of the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation, i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  6. Soil hydrologic characterization for modeling large scale soil remediation protocols

    NASA Astrophysics Data System (ADS)

    Romano, Nunzio; Palladino, Mario; Di Fiore, Paola; Sica, Benedetto; Speranza, Giuseppe

    2014-05-01

    In Campania Region (Italy), the Ministry of Environment identified a National Interest Priority Site (NIPS) with a surface area of about 200,000 ha, characterized by different levels and sources of pollution. This area, called Litorale Domitio-Agro Aversano, includes some polluted agricultural land belonging to more than 61 municipalities in the Naples and Caserta provinces. In this area, high levels of spotted soil contamination are moreover due to legal and illegal dumping of industrial and municipal wastes, with hazardous consequences also for the quality of the water table. The EU-Life+ project ECOREMED (Implementation of eco-compatible protocols for agricultural soil remediation in Litorale Domizio-Agro Aversano NIPS) has the major aim of defining an operating protocol for agriculture-based bioremediation of contaminated agricultural soils, also including the use of pollutant-extracting crops as biomasses for renewable energy production. In the framework of this project, soil hydrologic characterization plays a key role, and modeling water flow and solute transport poses two main challenges on which we focus. The first question is related to the fate of contaminants infiltrated from stormwater runoff and the potential for groundwater contamination. The second question is the quantification of the fluxes and spatial extent of root water uptake by the plant species employed to extract pollutants from the uppermost soil horizons. Given the high spatial variability of the pollutant distribution, we use soil characterization at different scales, from the field scale when facing the root water uptake process to the regional scale when simulating the interaction between soil hydrology and groundwater fluxes.

  7. Numerical models for ac loss calculation in large-scale applications of HTS coated conductors

    NASA Astrophysics Data System (ADS)

    Quéval, Loïc; Zermeño, Víctor M. R.; Grilli, Francesco

    2016-02-01

    Numerical models are powerful tools to predict the electromagnetic behavior of superconductors. In recent years, a variety of models have been successfully developed to simulate high-temperature-superconducting (HTS) coated conductor tapes. While the models work well for the simulation of individual tapes or relatively small assemblies, their direct applicability to devices involving hundreds or thousands of tapes, e.g., coils used in electrical machines, is questionable. Indeed, the simulation time and memory requirement can quickly become prohibitive. In this paper, we develop and compare two different models for simulating realistic HTS devices composed of a large number of tapes: (1) the homogenized model simulates the coil using an equivalent anisotropic homogeneous bulk with specifically developed current constraints to account for the fact that each turn carries the same current; (2) the multi-scale model parallelizes and reduces the computational problem by simulating only several individual tapes at significant positions of the coil's cross-section, using appropriate boundary conditions to account for the field generated by the neighboring turns. Both methods are used to simulate a coil made of 2000 tapes and are compared against the widely used H-formulation finite-element model that includes all the tapes. Both approaches allow faster simulations of large numbers of HTS tapes by 1-3 orders of magnitude, while maintaining good accuracy of the results. Both models can therefore be used to design and optimize large-scale HTS devices. This study provides key advancements with respect to previous versions of both models. The homogenized model is extended from simple stacks to large arrays of tapes. For the multi-scale model, the importance of the choice of the current distribution used to generate the background field is underlined; the error in ac loss estimation resulting from the most obvious choice of starting from a uniform current distribution is revealed.

  8. Systematic large-scale secondary circulations in a regional climate model

    NASA Astrophysics Data System (ADS)

    Becker, Nico; Ulbrich, Uwe; Klein, Rupert

    2015-05-01

    Regional climate models (RCMs) are used to add the effects of nonresolved scales to coarser resolved model simulations by using a finer grid within a limited domain. We identify large-scale secondary circulations (SCs) relative to the driving global climate model (GCM) in an RCM simulation over Europe. By applying a clustering technique, we find that the SC depends on the large-scale flow prescribed by the driving GCM data. Evidence is presented that the SC is caused by the different representations of orographic effects in the RCM and the GCM. Flow modifications in the RCM caused by the Alps lead to large-scale vortices in the SC fields. These vortices are limited by the RCM boundaries, causing artificial boundary-parallel flows. The SC is associated with geopotential height and temperature anomalies between RCM and GCM and has the potential to produce systematic large-scale biases in RCMs.

  9. Forecasting and understanding cirrus clouds with the large scale Lagrangian microphysical model CLaMS-Ice

    NASA Astrophysics Data System (ADS)

    Rolf, Christian; Grooß, Jens-Uwe; Spichtinger, Peter; Costa, Anja; Krämer, Martina

    2015-04-01

    Cirrus clouds play an important role by influencing the Earth's radiation budget and the global climate (Heintzenberg and Charlson, 2009). This is shown in the recent IPCC reports, where the large error bars relating to the cloud radiative forcing underline the poor scientific knowledge of the underlying processes. The formation and further evolution of cirrus clouds is determined by the interplay of temperature, ice nuclei (IN) properties, relative humidity, cooling rates, and ice crystal sedimentation. For that reason, a Lagrangian approach using meteorological wind fields is the most realistic way to simulate cirrus clouds. In addition, to represent complete cirrus systems such as frontal cirrus, three-dimensional cloud modeling on a large scale is desirable. To this end, we coupled the two-moment microphysical ice model of Spichtinger and Gierens (2009) with the 3D Lagrangian model CLaMS (McKenna et al., 2002). The new CLaMS-Ice module simulates cirrus formation by including heterogeneous and homogeneous freezing as well as ice crystal sedimentation. The box model is operated along CLaMS trajectories and individually initialized with the ECMWF meteorological fields. In addition, temperature fluctuations are superimposed directly onto the trajectory temperature and pressure via the parametrization of Gary et al. (2006). For a typical cirrus scenario with latitude/longitude coverage of 49° x 42° on three pressure levels, 6100 trajectories are simulated over 24 hours. To achieve the model results in an acceptable time, the box model was accelerated by about a factor of 10 before coupling to CLaMS; CLaMS-Ice now needs only about 30-40 minutes for such a simulation. During the first HALO cloud field campaign (ML-Cirrus), CLaMS-Ice was successfully deployed as a forecast tool. Here, we give an overview of the capabilities of CLaMS-Ice for forecasting, modeling and understanding of cirrus clouds in general. In addition, examples from the recent ML

  10. Using large-scale neural models to interpret connectivity measures of cortico-cortical dynamics at millisecond temporal resolution

    PubMed Central

    Banerjee, Arpan; Pillai, Ajay S.; Horwitz, Barry

    2012-01-01

    Over the last two decades, numerous functional imaging studies have shown that higher-order cognitive functions are crucially dependent on the formation of distributed, large-scale neuronal assemblies (neurocognitive networks), often for very short durations. This has fueled the development of a vast number of functional connectivity measures that attempt to capture the spatiotemporal evolution of neurocognitive networks. Unfortunately, interpreting the neural basis of goal-directed behavior using connectivity measures on neuroimaging data is highly dependent on the assumptions underlying the development of the measure, the nature of the task, and the modality of the neuroimaging technique that was used. This paper has two main purposes. The first is to provide an overview of some of the different measures of functional/effective connectivity that deal with high-temporal-resolution neuroimaging data. We include some results from a recent approach that we have developed to identify the formation and extinction of task-specific, large-scale neuronal assemblies from electrophysiological recordings at a ms-by-ms temporal resolution. The second purpose of this paper is to indicate how to partially validate the interpretations drawn from this (or any other) connectivity technique by using simulated data from large-scale, neurobiologically realistic models. Specifically, we applied our recently developed method to realistic simulations of MEG data during a delayed match-to-sample (DMS) task condition and a passive viewing of stimuli condition, using a large-scale neural model of the ventral visual processing pathway. Simulated MEG data using simple head models were generated from sources placed in V1, V4, IT, and prefrontal cortex (PFC) for the passive viewing condition. The results show how closely the conclusions obtained from the functional connectivity method match what actually occurred at the neuronal network level. PMID:22291621
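
    The simplest member of the family of connectivity measures discussed above is a zero-lag correlation computed in a short sliding window, which already yields a ms-by-ms connectivity estimate; a toy sketch on two synthetic sensor time series (the signals and window length are placeholders):

```python
import numpy as np

def sliding_corr(a, b, win=100):
    """Pearson correlation of a and b inside a sliding window of `win` samples."""
    out = np.full(a.size, np.nan)
    for i in range(a.size - win):
        out[i + win // 2] = np.corrcoef(a[i:i + win], b[i:i + win])[0, 1]
    return out

rng = np.random.default_rng(1)
fs = 1000                                   # 1 kHz sampling -> ms resolution
t = np.arange(2000) / fs                    # 2 s of simulated data
common = np.sin(2 * np.pi * 10 * t)         # shared 10 Hz component
x = common + 0.5 * rng.normal(size=t.size)                # "sensor 1"
y = np.roll(common, 15) + 0.5 * rng.normal(size=t.size)   # "sensor 2", 15 ms lag

conn = sliding_corr(x, y)
print("peak windowed correlation:", round(float(np.nanmax(conn)), 2))
```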

  12. Odor Experience Facilitates Sparse Representations of New Odors in a Large-Scale Olfactory Bulb Model

    PubMed Central

    Zhou, Shanglin; Migliore, Michele; Yu, Yuguo

    2016-01-01

    Prior odor experience has a profound effect on the coding of new odor inputs by animals. The olfactory bulb, the first relay of the olfactory pathway, can substantially shape the representations of odor inputs. How prior odor experience affects the representation of new odor inputs in the olfactory bulb, and the underlying network mechanism, are still unclear. Here we carried out a series of simulations based on a large-scale realistic mitral-granule network model and found that prior odor experience not only accelerated the formation of the network but also significantly strengthened sparse responses in the mitral cell network while decreasing sparse responses in the granule cell network. This modulation of sparse representations may be due to the increase of inhibitory synaptic weights. Correlations among mitral cells within the network and correlations between mitral network responses to different odors decreased gradually as the number of prior training odors was increased, resulting in a greater decorrelation of the bulb representations of input odors. Based on these findings, we conclude that prior odor experience facilitates sparse representations of new odors in the mitral cell network through an experience-enhanced inhibition mechanism. PMID:26903819
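
    The sparseness of the mitral cell code referred to above can be quantified, for example, with the Treves-Rolls population sparseness; a minimal sketch on hypothetical firing-rate vectors (the measure approaches 1/N for a single active cell and 1 for uniform activity):

```python
import numpy as np

def population_sparseness(rates):
    """Treves-Rolls sparseness: (mean r)^2 / mean(r^2).
    ~1/N when a single cell fires (sparse), ~1 when all fire equally (dense)."""
    rates = np.asarray(rates, dtype=float)
    return rates.mean() ** 2 / (rates ** 2).mean()

rng = np.random.default_rng(2)
n_mitral = 500

dense = rng.gamma(5.0, 2.0, size=n_mitral)            # most cells active
sparse = np.zeros(n_mitral)                           # only 5% of cells active
sparse[rng.choice(n_mitral, 25, replace=False)] = rng.gamma(5.0, 2.0, size=25)

print("dense code :", round(float(population_sparseness(dense)), 3))
print("sparse code:", round(float(population_sparseness(sparse)), 3))
```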

  14. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and
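
    Drought characteristics such as number, duration, and severity are commonly extracted with the threshold-level method; a minimal sketch, assuming a fixed 20th-percentile threshold and a simple pooling rule (the study itself uses more elaborate, e.g. variable, thresholds):

```python
import numpy as np

def droughts(flow, q=20, min_gap=5):
    """Identify drought events as runs below the q-th percentile of flow.
    Events separated by fewer than `min_gap` days are pooled into one.
    Returns a list of (start, end, deficit) tuples."""
    thr = np.percentile(flow, q)
    below = flow < thr
    events, start = [], None
    for i, b in enumerate(below):
        if b and start is None:
            start = i
        elif not b and start is not None:
            events.append([start, i - 1])
            start = None
    if start is not None:
        events.append([start, len(flow) - 1])
    # pool events separated by short wet spells
    pooled = []
    for ev in events:
        if pooled and ev[0] - pooled[-1][1] < min_gap:
            pooled[-1][1] = ev[1]
        else:
            pooled.append(ev)
    return [(s, e, float((thr - flow[s:e + 1]).clip(0).sum())) for s, e in pooled]

rng = np.random.default_rng(3)
runoff = rng.gamma(2.0, 1.5, size=3650)          # hypothetical 10-year series
evs = droughts(runoff)
print(len(evs), "droughts; longest:", max(e - s + 1 for s, e, _ in evs), "days")
```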

  15. Multi-Physics Feedback Simulations with Realistic Initial Conditions of the Formation of Star Clusters: From Large Scale Magnetized Clouds to Turbulent Clumps to Cores to Stars

    NASA Astrophysics Data System (ADS)

    Klein, R. I.; Li, P.; McKee, C. F.

    2015-10-01

    Multi-physics zoom-in adaptive mesh refinement simulations with feedback and realistic initial conditions, starting from large-scale turbulent molecular clouds through the formation of clumps and cores to the formation of stellar clusters, are presented. I give a summary of results at the different scales undergoing gravitational collapse from cloud to core to cluster formation. Detailed comparisons with observations are made at each stage of the simulations. In particular, properties of the magnetized clumps are compared with recent observations of Crutcher et al. 2010 and Crutcher 2012, and with the magnetic field orientation in cloud clumps relative to the global mean field of the inter-cloud medium (Li et al. 2009). The Initial Mass Function (IMF) obtained is compared with the Chabrier IMF, and the protostellar mass function of the cluster is compared with different theories.

  16. Building a Large-Scale Computational Model of a Cortical Neuronal Network

    NASA Astrophysics Data System (ADS)

    Zemanová, Lucia; Zhou, Changsong; Kurths, Jürgen

    We introduce the general framework of the large-scale neuronal model used in the 5th Helmholtz Summer School — Complex Brain Networks. The main aim is to build a universal large-scale model of a cortical neuronal network, structured as a network of networks, which is flexible enough to implement different kinds of topology and neuronal models and which exhibits behavior in various dynamical regimes. First, we describe important biological aspects of brain topology and use them in the construction of a large-scale cortical network. Second, the general dynamical model is presented together with explanations of the major dynamical properties of neurons. Finally, we discuss the implementation of the model into parallel code and its possible modifications and improvements.

  17. Large scale groundwater modeling using globally available datasets: A test for the Rhine-Meuse basin

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin H.; de Jong, Steven; van Geer, Frans C.; Bierkens, Marc F. P.

    2010-05-01

    Groundwater resources are vulnerable to global climate change and population growth. Therefore, monitoring and predicting groundwater change over large areas is imperative. However, large-scale groundwater models, especially those involving aquifers and basins spanning multiple countries, are still rare due to a lack of hydro-geological data. Such data may be widely available in developed countries but are seldom available in other parts of the world. In this study, we propose a novel approach to constructing large-scale groundwater models by using globally available datasets. As the test-bed, we choose the combined Rhine-Meuse basin (total area: ~220,000 km2), which contains ample data (e.g. groundwater head data) that can be used to verify the model output. However, while constructing the model, we use only globally available datasets such as the global GLCC land cover map [http://edc2.usgs.gov/glcc/glcc.php], the global FAO soil map [1995], the global lithological map of Dürr et al [2005], the HydroSHEDS digital elevation map [Lehner et al, 2008], and global climatological datasets (e.g. the global CRU datasets [Mitchell and Jones, 2005 and New et al, 2002], ERA40 re-analysis data [Uppala et al, 2005], and ECMWF operational archive data [http://www.ecmwf.int/products/data/operational_system]). We started by building a distributed land surface model (1×1 km) to estimate groundwater recharge and river discharge. Then, a MODFLOW transient groundwater model was built and forced by the recharge and surface water levels calculated by the land surface model. We ran the models for the period 1970-2008. The current results are promising. The simulated river discharges compare well to discharge observations, as indicated by the Nash-Sutcliffe model efficiency coefficients (68% for the Rhine and 50% for the Meuse). Moreover, the MODFLOW model can converge with realistic aquifer properties (i.e. transmissivities and storage coefficients) and can produce reasonable groundwater head
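
    The Nash-Sutcliffe efficiency quoted above follows directly from paired simulated and observed discharge series; a minimal implementation with placeholder data:

```python
import numpy as np

def nash_sutcliffe(sim, obs):
    """NSE = 1 - sum((sim - obs)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means no better than the observed mean."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# hypothetical daily discharge (m3/s) for one year
rng = np.random.default_rng(4)
observed = 2000 + 500 * np.sin(np.linspace(0, 2 * np.pi, 365))
simulated = observed + rng.normal(0, 300, 365)
print(f"NSE = {nash_sutcliffe(simulated, observed):.2f}")
```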

  18. Identification of large-scale genomic variation in cancer genomes using in silico reference models.

    PubMed

    Killcoyne, Sarah; Del Sol, Antonio

    2016-01-01

    Identifying large-scale structural variation in cancer genomes continues to be a challenge to researchers. Current methods rely on genome alignments based on a reference that can be a poor fit to highly variant and complex tumor genomes. To address this challenge we developed a method that uses available breakpoint information to generate models of structural variations. We use these models as references to align previously unmapped and discordant reads from a genome. By using these models to align unmapped reads, we show that our method can help to identify large-scale variations that have been previously missed. PMID:26264669

  20. Measuring Growth in a Longitudinal Large-Scale Assessment with a General Latent Variable Model

    ERIC Educational Resources Information Center

    von Davier, Matthias; Xu, Xueli; Carstensen, Claus H.

    2011-01-01

    The aim of the research presented here is the use of extensions of longitudinal item response theory (IRT) models in the analysis and comparison of group-specific growth in large-scale assessments of educational outcomes. A general discrete latent variable model was used to specify and compare two types of multidimensional item-response-theory…

  1. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    ERIC Educational Resources Information Center

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…

  2. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  3. An Alternative Way to Model Population Ability Distributions in Large-Scale Educational Surveys

    ERIC Educational Resources Information Center

    Wetzel, Eunike; Xu, Xueli; von Davier, Matthias

    2015-01-01

    In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…

  4. On the Estimation of Hierarchical Latent Regression Models for Large-Scale Assessments

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas; Jiang, Yanlin

    2009-01-01

    To find population proficiency distributions, a two-level hierarchical linear model may be applied to large-scale survey assessments such as the National Assessment of Educational Progress (NAEP). The model and parameter estimation are developed and a simulation was carried out to evaluate parameter recovery. Subsequently, both a hierarchical and…

  5. Reconciling subduction dynamics during Tethys closure with large-scale Asian tectonics: Insights from numerical modeling

    NASA Astrophysics Data System (ADS)

    Capitanio, F. A.; Replumaz, A.; Riel, N.

    2015-03-01

    We use three-dimensional numerical models to investigate the relation between subduction dynamics and the large-scale tectonics of continent interiors. The models show how the balance between forces at the plate margins, such as subduction, ridge push, and far-field forces, controls the coupled evolution of plate margins and interiors. Removal of part of the slab by lithospheric break-off during subduction destabilizes the convergent margin, forcing migration of the subduction zone, whereas in the upper plate large-scale lateral extrusion, rotations, and back-arc stretching ensue. When external forces are modeled, such as ridge push and far-field forces, indentation increases, with large collisional margin advance and thickening in the upper plate. The balance between margin and external forces leads to similar convergent margin evolutions, whereas major differences occur in the upper plate interiors. Here, three strain regimes are found: large-scale extrusion; extrusion and thickening along the collisional margin; and thickening only, when negligible far-field forces, ridge push, and larger far-field forces, respectively, add to the subduction dynamics. The extrusion tectonics develops a strong asymmetry toward the oceanic margin driven by large-scale subduction, with no need for preexisting heterogeneities in the upper plate. Because the slab break-off perturbation is transient, the ensuing plate tectonics is time-dependent. The modeled deformation and its evolution are remarkably similar to Cenozoic Asian tectonics, explaining large-scale lithospheric faulting and thickening, and the coupling of indentation, extrusion, and extension along the Asian convergent margin as a result of the large-scale subduction process.

  6. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p ~ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
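
    The three-point amplitudes discussed above are ratios of moments of the smoothed density contrast field; for instance, the skewness amplitude S_3 = <delta^3>/<delta^2>^2 can be estimated as in this sketch, where a Gaussian random field (S_3 ≈ 0) stands in for a real galaxy density field:

```python
import numpy as np

def s3(delta):
    """Skewness amplitude S_3 = <delta^3> / <delta^2>^2 of a density
    contrast field delta = rho / rho_mean - 1."""
    delta = delta - delta.mean()          # enforce <delta> = 0
    return (delta ** 3).mean() / (delta ** 2).mean() ** 2

rng = np.random.default_rng(5)
gaussian = rng.normal(0.0, 0.2, size=(64, 64, 64))    # S_3 ~ 0 by symmetry
# weakly non-linear field: adding delta^2 develops positive skewness
nonlinear = gaussian + gaussian ** 2 - (gaussian ** 2).mean()
print(f"Gaussian field : S_3 = {s3(gaussian):+.2f}")
print(f"Nonlinear field: S_3 = {s3(nonlinear):+.2f}")
```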

  7. The three-point function as a probe of models for large-scale structure

    SciTech Connect

    Frieman, J.A.; Gaztanaga, E.

    1993-06-19

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ~ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  9. Modeling haboob dust storms in large-scale weather and climate models

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Marsham, John H.; Panitz, Hans-Jürgen; Bischoff-Gauss, Ingeborg

    2016-03-01

    Recent field campaigns have shown that haboob dust storms, formed by convective cold pool outflows, contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in situ observations are sparse and haboobs are frequently concealed by clouds in satellite imagery. Furthermore, most large-scale weather and climate models lack haboobs, because they do not explicitly represent convection. Here, a 1-year-long model run with explicit representation of convection delivers the first full seasonal cycle of haboobs over northern Africa. Using conservative estimates, the model suggests that haboobs contribute one fifth of the annual dust-generating winds over northern Africa, one fourth between May and October, and one third over the western Sahel during this season. A simple parameterization of haboobs has recently been developed for models with parameterized convection, based on the downdraft mass flux of convection schemes. It is applied here to two model runs with different horizontal resolutions and assessed against the explicit run. The parameterization succeeds in capturing the geographical distribution of haboobs and their seasonal cycle over the Sahara and Sahel. It can be tuned to the different horizontal resolutions, and different formulations are discussed with respect to the frequency of extreme events. The results show that the parameterization is reliable and may solve a major and long-standing issue in simulating dust storms in large-scale weather and climate models.
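
    A schematic of what a downdraft-mass-flux-based haboob parameterization can look like: the convection scheme's downdraft mass flux is converted to a near-surface gust estimate, and dust-generating winds are flagged where the gust exceeds an uplift threshold. The linear scaling and both threshold values are assumed for illustration, not taken from the paper.

```python
import numpy as np

RHO_AIR = 1.2            # near-surface air density (kg/m3)
UPLIFT_THRESHOLD = 9.0   # wind speed for dust uplift (m/s) -- assumed value

def haboob_gust(downdraft_mass_flux, alpha=2.5):
    """Estimate a cold-pool gust (m/s) from the convection scheme's
    downdraft mass flux (kg/m2/s). The linear scaling with a tunable
    alpha is illustrative, not the published formulation."""
    return alpha * downdraft_mass_flux / RHO_AIR

rng = np.random.default_rng(6)
mass_flux = rng.exponential(2.0, size=(90, 120))   # hypothetical model grid
gust = haboob_gust(mass_flux)
frac = (gust > UPLIFT_THRESHOLD).mean()
print(f"grid fraction with haboob dust-generating winds: {frac:.1%}")
```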

  10. Development of a coupled soil erosion and large-scale hydrology modeling system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil erosion models are usually limited in their application to the field-scale; however, the management of land resources requires information at the regional scale. Large-scale physically-based land surface schemes (LSS) provide estimates of regional scale hydrologic processes that contribute to e...

  11. Influence of a compost layer on the attenuation of 28 selected organic micropollutants under realistic soil aquifer treatment conditions: insights from a large scale column experiment.

    PubMed

    Schaffer, Mario; Kröger, Kerrin Franziska; Nödler, Karsten; Ayora, Carlos; Carrera, Jesús; Hernández, Marta; Licha, Tobias

    2015-05-01

    Soil aquifer treatment is widely applied to improve the quality of treated wastewater for its reuse as an alternative source of water. To gain a deeper understanding of the fate of the organic micropollutants introduced in this way, the attenuation of 28 compounds was investigated in column experiments using two large-scale column systems in duplicate. The influence of increasing proportions of solid organic matter (0.04% vs. 0.17%) and decreasing redox potentials (denitrification vs. iron reduction) was studied by introducing a layer of compost. Secondary effluent from a wastewater treatment plant was used as the water matrix for simulating soil aquifer treatment. For neutral and anionic compounds, sorption generally increases with the compound hydrophobicity and the solid organic matter in the column system. Organic cations showed the highest attenuation. Among them, breakthroughs were only registered for the cationic beta-blockers atenolol and metoprolol. An enhanced degradation in the columns with an organic infiltration layer was observed for the majority of the compounds, suggesting improved degradation at higher levels of biodegradable dissolved organic carbon. Only the degradation of sulfamethoxazole could be clearly attributed to redox effects (when reaching iron-reducing conditions). The study provides valuable insights into the attenuation potential for a wide spectrum of organic micropollutants under realistic soil aquifer treatment conditions. Furthermore, the introduction of the compost layer generally showed positive effects on the removal of compounds preferentially degraded under reducing conditions and also increased the residence times in the soil aquifer treatment system via sorption. PMID:25723339
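
    The link between hydrophobicity, solid organic matter, and sorption-driven residence times can be made concrete with the standard linear-sorption retardation factor R = 1 + (rho_b/theta)·Koc·foc; a minimal sketch that reuses the study's two organic matter levels as f_oc values, with all other soil parameters and the compound's log Koc being placeholders:

```python
def retardation_factor(log_koc, f_oc, bulk_density=1.6, porosity=0.35):
    """R = 1 + (rho_b / theta) * Koc * foc  (linear equilibrium sorption).

    log_koc      : log10 of the organic-carbon partition coefficient (L/kg)
    f_oc         : fraction of solid organic carbon (-)
    bulk_density : soil bulk density (kg/L)  -- placeholder value
    porosity     : water-filled porosity (-) -- placeholder value
    """
    koc = 10.0 ** log_koc
    return 1.0 + bulk_density / porosity * koc * f_oc

# the two column systems of the study: low vs. high solid organic matter
for f_oc in (0.0004, 0.0017):
    # hypothetical neutral compound with log Koc = 2.2
    r = retardation_factor(log_koc=2.2, f_oc=f_oc)
    print(f"f_oc = {f_oc:.2%}: residence time stretched by R = {r:.1f}")
```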

  12. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  13. PLATO: data-oriented approach to collaborative large-scale brain system modeling.

    PubMed

    Kannon, Takayuki; Inagaki, Keiichiro; Kamiji, Nilton L; Makimura, Kouji; Usui, Shiro

    2011-11-01

    The brain is a complex information processing system, which can be divided into sub-systems, such as the sensory organs, functional areas in the cortex, and motor control systems. In this sense, most of the mathematical models developed in the field of neuroscience have mainly targeted a specific sub-system. In order to understand the details of the brain as a whole, such sub-system models need to be integrated toward the development of a neurophysiologically plausible large-scale system model. In the present work, we propose a model integration library where models can be connected by means of a common data format. Here, the common data format should be portable so that models written in any programming language, computer architecture, and operating system can be connected. Moreover, the library should be simple so that models can be adapted to use the common data format without requiring any detailed knowledge of its use. Using this library, we have successfully connected existing models reproducing certain features of the visual system, toward the development of a large-scale visual system model. This library will enable users to reuse and integrate existing and newly developed models toward the development and simulation of a large-scale brain system model. The resulting model can also be executed on high-performance computers using the Message Passing Interface (MPI). PMID:21767932

  14. Large-Scale Numerical Modeling of Melt and Solution Crystal Growth

    NASA Astrophysics Data System (ADS)

    Derby, Jeffrey J.; Chelikowsky, James R.; Sinno, Talid; Dai, Bing; Kwon, Yong-Il; Lun, Lisa; Pandy, Arun; Yeckel, Andrew

    2007-06-01

    We present an overview of mathematical models and their large-scale numerical solution for simulating different phenomena and scales in melt and solution crystal growth. Samples of both classical analyses and state-of-the-art computations are presented. It is argued that the fundamental multi-scale nature of crystal growth precludes any single modeling approach; rather, successful crystal growth modeling relies on an artful blend of rigor and practicality.

  15. An Efficient Simulation Environment for Modeling Large-Scale Cortical Processing

    PubMed Central

    Richert, Micah; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L.

    2011-01-01

    We have developed a spiking neural network simulator, which is both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models. The simulator implements current- or conductance-based Izhikevich neuron networks with spike-timing-dependent plasticity and short-term plasticity. It uses a standard network construction interface. The simulator allows for execution on either GPUs or CPUs. The simulator, which is written in C/C++, allows for both fine-grain and coarse-grain specificity of a host of parameters. We demonstrate the ease of use and computational efficiency of this model by implementing a large-scale model of cortical areas V1, V4, and area MT. The complete model, which has 138,240 neurons and approximately 30 million synapses, runs in real time on an off-the-shelf GPU. The simulator source code, as well as the source code for the cortical model examples, is publicly available. PMID:22007166
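
    The Izhikevich model named above reduces a neuron to two coupled equations, v' = 0.04v^2 + 5v + 140 - u + I and u' = a(bv - u), with a reset when v crosses 30 mV, which is what makes networks of this size cheap to simulate; a minimal single-neuron sketch with regular-spiking parameters:

```python
import numpy as np

def izhikevich(i_inj, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """Regular-spiking Izhikevich neuron driven by current i_inj (one
    value per ms). Returns the membrane potential trace and spike times."""
    v, u = -65.0, b * -65.0
    trace, spikes = [], []
    for t, i in enumerate(i_inj):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i)
        u += dt * a * (b * v - u)
        if v >= 30.0:            # spike: record time, then reset
            spikes.append(t)
            v, u = c, u + d
        trace.append(v)
    return np.array(trace), spikes

current = np.zeros(1000)
current[200:] = 10.0             # step current switched on after 200 ms
_, spike_times = izhikevich(current)
print("first spikes at (ms):", spike_times[:5])
```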

  16. Validating the runoff from the PRECIS model using a large-scale routing model

    NASA Astrophysics Data System (ADS)

    Cao, Lijuan; Dong, Wenjie; Xu, Yinlong; Zhang, Yong; Sparrow, Michael

    2007-09-01

    The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model, driven by 15 years (1979-1993) of ECMWF reanalysis data as the initial and lateral boundary conditions, and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers at a 1° × 1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast-to-northwest gradient of the precipitation distribution over the Yellow River basin. The PRECIS-LRM model combination is capable of simulating the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow is generally coincident with the naturalized streamflow both in timing and in magnitude.

  17. Aspects of investigating STOL noise using large scale wind tunnel models

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.; Soderman, P. T.

    1972-01-01

    The applicability of the NASA Ames 40- by 80-ft wind tunnel for acoustic research on STOL concepts has been investigated. The acoustic characteristics of the wind tunnel test section have been studied with calibrated acoustic sources. The acoustic characteristics of several large-scale STOL models have been studied in both the free-field and wind tunnel acoustic environments. The results indicate that the acoustic characteristics of large-scale STOL models can be measured in the wind tunnel if the test section acoustic environment and model acoustic similitude are taken into consideration. The reverberant field of the test section must be determined with an acoustically similar noise source. Directional microphones and extrapolation of near-field data to the far field are among the techniques being explored as possible solutions to the directivity loss in a reverberant field. The model sound pressure levels must be of sufficient magnitude to be discernible from the wind tunnel background noise.

  18. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    NASA Astrophysics Data System (ADS)

    Otterå, O. H.; Bentsen, M.; Bethke, I.; Kvamstø, N. G.

    2009-11-01

    The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  19. Large-scale hydrological modelling: Parameterisation of runoff generation with high-resolution topographical data

    NASA Astrophysics Data System (ADS)

    Gong, Lebing; Halldin, Sven; Xu, C.-Y.

    2010-05-01

    Runoff generation is one of the most important components in the hydrological cycle and in hydrological models at all spatial scales. The spatial distribution of the effective storage capacity accounts largely for the non-linearity of runoff generation dynamics. Many hydrological models account for this spatial variability of storage in terms of statistical distributions; such models have generally been proven to perform well. For example, both VIC and PDM account for the storage variability at the sub-grid level. Accounting for the storage distribution is all the more important for large river basins, where varying land surface properties can mean a large variation in both the average storage capacity and the shape of the storage capacity distribution when going from one part of the basin to another. However, limited by the statistical approaches, the same runoff generation parameters often have to be used everywhere in the basin. This is because it is harder to account for the spatial auto-correlation between those parameters than for just their range. The Topmodel concept allows a linkage between the effective maximum storage capacity, or the maximum deficit, and the topography. It has the advantage of both a physically sound interpretation of the runoff generation mechanism and the generally good availability of topography data. However, the strict Topmodel assumptions may limit its application in parts of the world with deep groundwater systems or flat terrain. In this paper, we present a new runoff generation model designed for large-scale hydrology. The model relaxes the Topmodel assumptions and only uses the topographic index as a tool to distribute average storage to each topographic index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by the recession parameter in Topmodel. The sub-cell distribution of storage capacity is obtained through topographic analysis. We then feed this topography
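
    A sketch of the storage-distribution idea described above, taking the abstract at its word that the maximum storage capacity is proportional to the range of the topographic index scaled by the Topmodel recession parameter m; the index map, class count, and value of m are placeholders:

```python
import numpy as np

def storage_per_ti_class(topo_index, m, n_classes=10):
    """Distribute maximum storage capacity over topographic index classes.

    Cells with a high topographic index (wet valley bottoms) get a small
    storage deficit; low-index cells (dry hilltops) get a large one.
    Max capacity = m * (range of the topographic index), per the abstract.
    """
    edges = np.linspace(topo_index.min(), topo_index.max(), n_classes + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    deficit = m * (topo_index.max() - mids)     # larger deficit for low TI
    frac = np.histogram(topo_index, bins=edges)[0] / topo_index.size
    return mids, deficit, frac

rng = np.random.default_rng(7)
ti = rng.gamma(4.0, 1.5, size=100_000)           # hypothetical TI map
mids, deficit, frac = storage_per_ti_class(ti, m=0.05)
print("basin-average storage capacity (m):", float((deficit * frac).sum()))
```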

  20. Modelling Convective Dust Storms in Large-Scale Weather and Climate Models

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Marsham, John H.; Panitz, Hans-Jürgen; Bischoff-Gauss, Ingeborg

    2016-04-01

    Recent field campaigns have shown that convective dust storms - also known as haboobs or cold pool outflows - contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in-situ observations are sparse and convective dust storms are frequently concealed by clouds in satellite imagery. Therefore numerical models are often the only available source of information over the area. Here a regional climate model with explicit representation of convection delivers the first full seasonal cycle of convective dust storms over North Africa. The model suggests that they contribute one fifth of the annual dust uplift over North Africa, one fourth between May and October, and one third over the western Sahel during this season. In contrast, most large-scale weather and climate models do not explicitly represent convection and thus lack such storms. A simple parameterization of convective dust storms has recently been developed, based on the downdraft mass flux of convection schemes. The parameterization is applied here to a set of regional climate runs with different horizontal resolutions and convection schemes, and assessed against the explicit run and against sparse station observations. The parameterization succeeds in capturing the geographical distribution and seasonal cycle of convective dust storms. It can be tuned to different horizontal resolutions and convection schemes, although the details of the geographical distribution and seasonal cycle depend on the representation of the monsoon in the parent model. Different versions of the parameterization are further discussed with respect to differences in the frequency of extreme events. The results show that the parameterization is reliable and can therefore solve a long-standing problem in simulating dust storms in large-scale weather and climate models.

  3. A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems

    NASA Astrophysics Data System (ADS)

    Rasekh, Ehsan

    2011-11-01

    Model reduction is an approach for the fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reducing linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper, a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.
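
    A compact example of projection-based reduction in the spirit discussed above: snapshots of a large linear ODE system are compressed with an SVD (proper orthogonal decomposition), and the system matrix is projected onto the leading modes. The regression step proposed in the paper is not reproduced here; this only shows the baseline projection machinery, with all dimensions being placeholders.

```python
import numpy as np

rng = np.random.default_rng(8)

# large stable linear system  x' = A x,  dimension n
n, r = 400, 10
A = -np.eye(n) + 0.1 * rng.normal(size=(n, n)) / np.sqrt(n)
x0 = rng.normal(size=n)

# collect snapshots of the full system with explicit Euler
dt, steps = 0.01, 200
snaps = [x0]
for _ in range(steps):
    snaps.append(snaps[-1] + dt * A @ snaps[-1])
X = np.array(snaps).T                        # n x (steps + 1) snapshot matrix

# proper orthogonal decomposition: leading r left singular vectors
V = np.linalg.svd(X, full_matrices=False)[0][:, :r]

# Galerkin projection: reduced r x r system  z' = (V^T A V) z
Ar, z = V.T @ A @ V, V.T @ x0
for _ in range(steps):
    z = z + dt * Ar @ z
err = np.linalg.norm(V @ z - X[:, -1]) / np.linalg.norm(X[:, -1])
print(f"reduced model ({r} of {n} states), relative error: {err:.2e}")
```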

  4. Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations

    NASA Astrophysics Data System (ADS)

    Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila

    2016-07-01

    A numerical investigation of a self-consistent small parametric model (SPM) of regional large-scale cyclogenesis (RLSC) is performed using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios of the temporal dynamics of a powerful atmospheric vortex during its full life cycle. The numerical calculations show that a suitable choice of the SPM's input parameters makes it possible to describe the seasonal behavior of regional large-scale cyclogenesis dynamics for a given number of TCs during the active season. It is shown that the SPM can also describe wind speed variations inside the TC. Thus, using this nonlinear small parametric model, it is possible to study the features of the temporal dynamics of RLSC during the active season in a given region and to analyze the relationships between regional cyclogenesis parameters and external factors such as space weather, including solar activity levels and cosmic ray variations.

  5. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect

    RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. The authors present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  6. Computational fluid dynamics simulations of particle deposition in large-scale, multigenerational lung models.

    PubMed

    Walters, D Keith; Luke, William H

    2011-01-01

    Computational fluid dynamics (CFD) has emerged as a useful tool for the prediction of airflow and particle transport within the human lung airway. Several published studies have demonstrated the use of Eulerian finite-volume CFD simulations coupled with Lagrangian particle tracking methods to determine local and regional particle deposition rates in small subsections of the bronchopulmonary tree. However, the simulation of particle transport and deposition in large-scale models encompassing more than a few generations is less common, due in part to the sheer size and complexity of the human lung airway. Highly resolved, fully coupled flowfield solution and particle tracking in the entire lung, for example, is currently an intractable problem and will remain so for the foreseeable future. This paper adopts a previously reported methodology for simulating large-scale regions of the lung airway (Walters, D. K., and Luke, W. H., 2010, "A Method for Three-Dimensional Navier-Stokes Simulations of Large-Scale Regions of the Human Lung Airway," ASME J. Fluids Eng., 132(5), p. 051101), which was shown to produce results similar to fully resolved geometries using approximate, reduced geometry models. The methodology is extended here to particle transport and deposition simulations. Lagrangian particle tracking simulations are performed in combination with Eulerian simulations of the airflow in an idealized representation of the human lung airway tree. Results using the reduced models are compared with those using the fully resolved models for an eight-generation region of the conducting zone. The agreement between fully resolved and reduced geometry simulations indicates that the new method can provide an accurate alternative for large-scale CFD simulations while potentially reducing the computational cost of these simulations by several orders of magnitude. PMID:21186893
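
    The Lagrangian particle tracking coupled to an Eulerian flow solution can be sketched in a few lines, assuming Stokes drag on small particles and a prescribed analytic velocity field standing in for a CFD solution of the airway; all geometry and particle parameters are placeholders:

```python
import numpy as np

MU = 1.8e-5        # dynamic viscosity of air (Pa s)
RHO_P = 1000.0     # particle density (kg/m3)

def track(x0, v0, d_p, fluid_u, dt=1e-5, steps=2000):
    """Integrate one particle with Stokes drag:
    m dv/dt = 3*pi*mu*d_p * (u_fluid - v), explicit Euler."""
    m = RHO_P * np.pi * d_p ** 3 / 6.0
    k = 3.0 * np.pi * MU * d_p
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(steps):
        v += dt * k * (fluid_u(x) - v) / m
        x += dt * v
    return x

# placeholder airway flow: axial velocity with a parabolic profile
def pipe_flow(x, radius=1e-3, u_max=1.0):
    r2 = (x[1] ** 2 + x[2] ** 2) / radius ** 2
    return np.array([u_max * max(0.0, 1.0 - r2), 0.0, 0.0])

final = track(x0=[0, 2e-4, 0], v0=[0, 0, 0], d_p=5e-6, fluid_u=pipe_flow)
print("particle position after 20 ms:", final)
```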

  7. Impacts of Large-Scale Circulation on Convection: A 2-D Cloud Resolving Model Study

    NASA Technical Reports Server (NTRS)

    Li, X; Sui, C.-H.; Lau, K.-M.

    1999-01-01

    Studies of the impacts of large-scale circulation on convection, and of the roles of convection in heat and water balances over tropical regions, are fundamentally important for understanding global climate change. Heat and water budgets over a warm pool (SST = 29.5 C) and a cold pool (SST = 26 C) were analyzed based on simulations with the two-dimensional cloud resolving model. Here, the sensitivity of heat and water budgets to different sizes of warm and cold pools is examined.

  8. Large-scale brain networks and psychopathology: a unifying triple network model.

    PubMed

    Menon, Vinod

    2011-10-01

    The science of large-scale brain networks offers a powerful paradigm for investigating cognitive and affective dysfunction in psychiatric and neurological disorders. This review examines recent conceptual and methodological developments which are contributing to a paradigm shift in the study of psychopathology. I summarize methods for characterizing aberrant brain networks and demonstrate how network analysis provides novel insights into dysfunctional brain architecture. Deficits in access, engagement and disengagement of large-scale neurocognitive networks are shown to play a prominent role in several disorders including schizophrenia, depression, anxiety, dementia and autism. Synthesizing recent research, I propose a triple network model of aberrant saliency mapping and cognitive dysfunction in psychopathology, emphasizing the surprising parallels that are beginning to emerge across psychiatric and neurological disorders. PMID:21908230

  9. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper rely on computational algorithms and procedures implemented in Matlab to simulate agent-based models, executed on clusters that provide the high-performance computing needed to run the programs in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  10. Large-scale multi-configuration electromagnetic induction: a promising tool to improve hydrological models

    NASA Astrophysics Data System (ADS)

    von Hebel, Christian; Rudolph, Sebastian; Mester, Achim; Huisman, Johan A.; Montzka, Carsten; Weihermüller, Lutz; Vereecken, Harry; van der Kruk, Jan

    2015-04-01

    Large-scale multi-configuration electromagnetic induction (EMI) measurements use different coil configurations, i.e., coil offsets and coil orientations, to sense coil-specific depth volumes. The obtained apparent electrical conductivity (ECa) maps can be related to soil properties such as clay content, soil water content, and pore water conductivity, which are important characteristics influencing hydrological processes. Here, we use large-scale EMI measurements to investigate changes in soil texture that drive the available water supply, causing the crop development patterns observed in leaf area index (LAI) maps obtained from RapidEye satellite images taken after a drought period. The 20 ha test site is situated within the Ellebach catchment (Germany) and consists of a sand-and-gravel dominated upper terrace (UT) and a loamy lower terrace (LT). The large-scale multi-configuration EMI measurements were calibrated using electrical resistivity tomography (ERT) measurements at selected transects, and soil samples were taken at representative locations where changes in the electrical conductivity were observed and changing soil properties were therefore expected. By analyzing all the data, the observed LAI patterns could be attributed to buried paleo-river channel systems that contained a higher silt and clay content and provided a higher water holding capacity than the surrounding coarser material. Moreover, the measured EMI data showed the highest correlation with LAI for the deepest-sensing coil offset (up to 1.9 m), which indicates that the deeper subsoil is responsible for root water uptake, especially under drought conditions. To obtain a layered subsurface electrical conductivity model that shows the subsurface structures more clearly, a novel EMI inversion scheme was applied to the field data. The obtained electrical conductivity distributions were validated with soil probes and ERT transects that confirmed the inverted lateral and vertical large-scale electrical

  11. Cooling biogeophysical effect of large-scale tropical deforestation in three Earth System models

    NASA Astrophysics Data System (ADS)

    Brovkin, Victor; Pugh, Thomas; Robertson, Eddy; Bathiany, Sebastian; Arneth, Almut; Jones, Chris

    2015-04-01

    Vegetation cover in the tropics is limited by moisture availability. Since transpiration from forests is much greater than from grasslands, the sensitivity of precipitation in the Amazon to large-scale deforestation has long been seen as a critical parameter of climate-vegetation interactions. Most Amazon deforestation experiments to date have been performed with interactive land-atmosphere models but prescribed sea surface temperatures (SSTs). They reveal a strong reduction in evapotranspiration and precipitation, and an increase in global surface air temperature due to reduced latent heat flux. We performed large-scale tropical deforestation experiments with three Earth system models (ESMs) including interactive ocean models, which participated in the FP7 project EMBRACE. In response to tropical deforestation, all models simulate a significant reduction in tropical precipitation, similar to the experiments with prescribed SSTs. However, all three models suggest that the global temperature response to deforestation is a cooling or no change, differing from the global warming found in prescribed-SST runs. Presumably, changes in the hydrological cycle and in the water vapor feedback due to deforestation operate in the direction of a global cooling. In addition, one of the models simulates a local cooling over the deforested tropical region, opposite to the local warming in the other models. This suggests that the balance between warming due to decreased latent heat flux and cooling due to increased albedo is rather subtle and model-dependent. Last but not least, we suggest using large-scale deforestation as a standard biogeophysical experiment for model intercomparison, for example within the CMIP6 framework.

  12. Image fusion for remote sensing using fast, large-scale neuroscience models

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.

    2011-05-01

    We present results with large-scale neuroscience-inspired models for feature detection using multi-spectral visible/infrared satellite imagery. We describe a model using an artificial neural network architecture and learning rules to build sparse scene representations over an adaptive dictionary, fusing spectral and spatial textural characteristics of the objects of interest. Our results with fast codes implemented on clusters of graphics processing units (GPUs) suggest that visual cortex models are a promising approach to practical pattern recognition problems in remote sensing, even for datasets using spectral bands not found in natural visual systems.

  13. Modeling dynamic functional information flows on large-scale brain networks.

    PubMed

    Lv, Peili; Guo, Lei; Hu, Xintao; Li, Xiang; Jin, Changfeng; Han, Junwei; Li, Lingjiang; Liu, Tianming

    2013-01-01

    Growing evidence from the functional neuroimaging field suggests that human brain functions are realized via dynamic functional interactions on large-scale structural networks. Even in the resting state, functional brain networks exhibit remarkable temporal dynamics. However, computational modeling of such dynamic functional information flows on large-scale brain networks has rarely been explored. In this paper, we present a novel computational framework to explore this problem using multimodal resting-state fMRI (R-fMRI) and diffusion tensor imaging (DTI) data. Recent literature reports, including our own studies, have demonstrated that resting-state brain networks dynamically undergo a set of distinct brain states. Within each quasi-stable state, functional information flows from one set of structural brain nodes to other sets of nodes, analogous to the routing of a message packet on the Internet from source node to destination. Therefore, based on the large-scale structural brain networks constructed from DTI data, we employ a dynamic programming strategy to infer functional information transition routes on the structural networks, based on which hub routers that most frequently participate in these routes are identified. Interestingly, a majority of those hub routers are located within the default mode network (DMN), revealing a possible mechanism behind the critical functional hub roles played by the DMN in the resting state. Application of this framework to a post-traumatic stress disorder (PTSD) dataset also demonstrated interesting differences in hub router distributions between PTSD patients and healthy controls. PMID:24579202
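
    The hub-router idea can be prototyped with standard graph tooling. Below is a minimal sketch, not the authors' dynamic-programming implementation: it treats shortest paths on a toy structural network as stand-ins for information transition routes and counts how often each node relays them (the network and all parameters are illustrative).

        import networkx as nx
        from collections import Counter

        # Toy structural network; in practice nodes would be DTI-derived regions.
        G = nx.connected_watts_strogatz_graph(n=60, k=6, p=0.1, seed=1)

        relay_counts = Counter()
        for src in G.nodes:
            for path in nx.single_source_shortest_path(G, src).values():
                for node in path[1:-1]:   # interior nodes act as "routers"
                    relay_counts[node] += 1

        # Nodes that relay the most routes are candidate hub routers.
        print("candidate hub routers:", [n for n, _ in relay_counts.most_common(5)])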

  14. UDEC-AUTODYN Hybrid Modeling of a Large-Scale Underground Explosion Test

    NASA Astrophysics Data System (ADS)

    Deng, X. F.; Chen, S. G.; Zhu, J. B.; Zhou, Y. X.; Zhao, Z. Y.; Zhao, J.

    2015-03-01

    In this study, numerical modeling of a large-scale decoupled underground explosion test with 10 tons of TNT in Älvdalen, Sweden, is performed by combining DEM and FEM with the codes UDEC and AUTODYN. AUTODYN is adopted to model the explosion process, blast wave generation, and its action on the explosion chamber surfaces, while the UDEC modeling is focused on shock wave propagation in the jointed rock masses surrounding the explosion chamber. The numerical modeling results from the hybrid AUTODYN-UDEC method are compared with empirical estimations, AUTODYN-only modeling results, and the field test data. It is found that, in terms of peak particle velocity, empirical estimations are much smaller than the measured data, while AUTODYN-only modeling results are larger than the test data. The UDEC-AUTODYN numerical modeling results agree well with the test data. Therefore, the UDEC-AUTODYN method is appropriate for modeling a large-scale explosive detonation in a closed space and the subsequent wave propagation in jointed rock masses. It should be noted that the joint mechanical and spatial properties adopted in the UDEC-AUTODYN modeling are determined from empirical equations and available geological data, and they may not be sufficiently accurate.

  15. Oscillations in large-scale cortical networks: map-based model.

    PubMed

    Rulkov, N F; Timofeev, I; Bazhenov, M

    2004-01-01

    We develop a new computationally efficient approach for the analysis of complex large-scale neurobiological networks. Its key element is the use of a new phenomenological model of a neuron capable of replicating important spike-pattern characteristics and designed in the form of a system of difference equations (a map). We developed a set of map-based models that replicate the spiking activity of cortical fast-spiking, regular-spiking, and intrinsically bursting neurons. Interconnected with synaptic currents, these model neurons demonstrated responses very similar to those found with Hodgkin-Huxley models and in experiments. We illustrate the efficacy of this approach in simulations of one- and two-dimensional cortical network models consisting of regular-spiking neurons and fast-spiking interneurons to model sleep and activated states of the thalamocortical system. Our study suggests that map-based models can be widely used for large-scale simulations and that such models are especially useful for tasks where the modeling of specific firing patterns of different cell classes is important. PMID:15306740
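
    The flavor of map-based neurons is easy to convey. The sketch below iterates the classic two-dimensional Rulkov map (the simpler chaotic variant, not the exact piecewise map used in the paper): a fast variable x produces spikes and bursts, gated by a slow variable y, with no differential-equation solver required.

        import numpy as np

        def rulkov(alpha=4.5, mu=0.001, sigma=-1.2, n_steps=5000):
            """Iterate the 2-D Rulkov map: x is the fast (spiking) variable,
            y the slow variable; alpha > 4 yields spiking-bursting dynamics."""
            x = np.empty(n_steps)
            y = np.empty(n_steps)
            x[0], y[0] = -1.0, -3.5
            for n in range(n_steps - 1):
                x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]
                y[n + 1] = y[n] - mu * (x[n] - sigma)
            return x, y

        x, y = rulkov()
        print("time steps above spike threshold:", int((x > 0.0).sum()))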

  16. Downscaling large-scale climate variability using a regional climate model: the case of ENSO over Southern Africa

    NASA Astrophysics Data System (ADS)

    Boulard, Damien; Pohl, Benjamin; Crétat, Julien; Vigaud, Nicolas; Pham-Xuan, Thanh

    2013-03-01

    This study documents methodological issues arising when downscaling modes of large-scale atmospheric variability with a regional climate model over a remote region that is nonetheless under their influence. The retained case study is the El Niño Southern Oscillation and its impacts on Southern Africa and the South West Indian Ocean. Regional simulations are performed with the WRF model, driven laterally by ERA40 reanalyses over the 1971-1998 period. We document the sensitivity of simulated climate variability to the model physics, the constraint of relaxing the model solutions towards reanalyses, the size of the relaxation buffer zone towards the lateral forcings, and the forcing fields themselves through ERA-Interim-driven simulations. The model's internal variability is quantified using 15-member ensemble simulations for seasons of interest, single 30-year integrations appearing inappropriate for investigating the simulated interannual variability properly. The influence of SST prescription is also assessed through additional integrations using a simple ocean mixed-layer model. Results show a limited skill of the model in reproducing the seasonal droughts associated with El Niño conditions. The model deficiencies are found to result from biased atmospheric forcings and/or a biased response to these forcings, whatever the physical package retained. In contrast, regional SST forcing over adjacent oceans favors realistic rainfall anomalies over the continent, although their amplitude remains too weak. These results confirm the significant contribution of nearby ocean SSTs to the regional effects of ENSO, but also illustrate that regionalizing large-scale climate variability can be a demanding exercise.

  17. Nengo: a Python tool for building large-scale functional brain models

    PubMed Central

    Bekolay, Trevor; Bergstra, James; Hunsberger, Eric; DeWolf, Travis; Stewart, Terrence C.; Rasmussen, Daniel; Choo, Xuan; Voelker, Aaron Russell; Eliasmith, Chris

    2014-01-01

    Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results. PMID:24431999
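
    To give a sense of the scripting interface described above, a minimal Nengo 2.x model might look like the following sketch (the ensemble sizes, the constant input, and the squared-function connection are arbitrary illustrative choices):

        import nengo

        model = nengo.Network(label="minimal example")
        with model:
            stim = nengo.Node(0.5)                             # constant input signal
            a = nengo.Ensemble(n_neurons=100, dimensions=1)
            b = nengo.Ensemble(n_neurons=100, dimensions=1)
            nengo.Connection(stim, a)
            nengo.Connection(a, b, function=lambda x: x ** 2)  # compute x^2 neurally
            probe = nengo.Probe(b, synapse=0.01)               # filtered output

        with nengo.Simulator(model) as sim:                    # build and run
            sim.run(1.0)
        print(sim.data[probe][-5:])                            # decoded estimate of 0.25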

  18. Formation and disruption of tonotopy in a large-scale model of the auditory cortex.

    PubMed

    Tomková, Markéta; Tomek, Jakub; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, Cyril

    2015-10-01

    There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100 000 Izhikevich spiking neurons of 17 different types and almost 21 million synapses, which evolve according to Spike-Timing-Dependent Plasticity (STDP), with an architecture akin to existing observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena by analysing the activity of neuronal subtypes and testing different causal interventions in the simulation. Our model is able to produce experimental predictions on a cell-type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulation during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available. PMID:26344164
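
    The STDP rule at the heart of such models fits in a few lines. A minimal sketch of the standard pair-based form (time constants and amplitudes are generic textbook values, not the paper's exact parameters):

        import math

        def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            """Weight change for a pre/post spike pair, dt = t_post - t_pre (ms).
            Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
            if dt > 0:
                return a_plus * math.exp(-dt / tau_plus)
            return -a_minus * math.exp(dt / tau_minus)

        print(stdp_dw(5.0))    # potentiation, ~ +0.0078
        print(stdp_dw(-5.0))   # depression,   ~ -0.0093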

  19. Realistic modeling of neurons and networks: towards brain simulation

    PubMed Central

    D’Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field. PMID:24139652

  20. Impact of structural heterogeneity on upscaled models for large-scale CO2 migration and trapping in saline aquifers

    NASA Astrophysics Data System (ADS)

    Gasda, Sarah E.; Nilsen, Halvor M.; Dahle, Helge K.

    2013-12-01

    Structural heterogeneity of the caprock surface influences both migration patterns and trapping efficiency for CO2 injected in open saline aquifers. Understanding these mechanisms relies on appropriate modeling tools to simulate CO2 flow over hundreds of square kilometers and several hundred years during the postinjection period. Vertical equilibrium (VE) models are well suited for this purpose. However, topographical heterogeneity below the scale of model resolution requires upscaling, for example by using traditional flow-based homogenization techniques. This can significantly simplify the geologic model and reduce computational effort while still capturing the relevant physical processes. In this paper, we identify key structural parameters, such as dominant amplitude and wavelength of the traps, that determine the form of the upscaled constitutive functions. We also compare the strength of these geologic controls on CO2 migration and trapping to other mechanisms such as capillarity. This allows for a better understanding of the dominant physical processes and their impact on storage security. It also provides intuition on which upscaling approach is best suited for the system of interest. We apply these concepts to realistic structurally heterogeneous surfaces that have been developed using different geologic depositional models. We show that while amplitude is important for determining the amount of CO2 trapped, the spacing between the traps, the distribution of spillpoint locations, and the large-scale formation dip angle affect the shape of the functions and thus the dynamics of plume migration. We also show for these cases that the topography characterized by shorter wavelength features is better suited for upscaling, while the longer wavelength surface can be sufficiently resolved. These results can inform the type of geological characterization that is required to build the most reliable upscaled models for large-scale CO2 migration.

  1. Hyper-Resolution Large Scale Flood Inundation Modeling: Development of AutoRAPID Model

    NASA Astrophysics Data System (ADS)

    Tavakoly, A. A.; Follum, M. L.; Wahl, M.; Snow, A.

    2015-12-01

    Streamflow and the resultant flood inundation are defining elements in large-scale flood analyses. High-fidelity prediction of flood inundation risk requires hydrologic and hydrodynamic modeling at hyper-resolution (<100 m) scales. Using spatiotemporal data from climate models as the driver, we couple a continental-scale river routing model known as Routing Application for Parallel ComputatIon of Discharge (RAPID) with a regional-scale flood delineation model called AutoRoute to estimate flood extents. We demonstrate how the coupled tool, referred to as AutoRAPID, can quickly and efficiently simulate flood extents using a high-resolution dataset (~10 m) at the regional scale (>100,000 km2). The AutoRAPID framework is implemented over 230,000 km2 in the Midwestern United States (between latitudes 38°N and 44°N and longitudes 86°W and 91°W, approximately 8% of the Mississippi River Basin) using a 10 m DEM. We generate the flood inundation map over the entire area for a June 2008 flood event. The model is compared with observed data at five select locations: Spencer, IN; Newberry, IN; Gays Mills, WI; Ft. Atkinson, WI; and Janesville, WI. We show that the model results are generally consistent with observed flow and flood inundation data, and we suggest that the AutoRAPID model can be considered for several potential applications, such as forecasting flow and flood inundation information; generating flood recurrence maps using high-resolution vector river data; and emergency management applications to protect/evacuate large areas when time is limited and data are sparse.

  2. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    PubMed

    Nogaret, Alain; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  3. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157
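
    The assimilation idea (tune model parameters until simulated responses match recordings) can be illustrated at toy scale. The sketch below fits a passive-membrane model to synthetic data with SciPy's least-squares optimizer; it is a stand-in for, not a reproduction of, the constrained interior-point method used in the paper, and every value in it is illustrative.

        import numpy as np
        from scipy.optimize import least_squares

        dt = 0.1
        t = np.arange(0.0, 100.0, dt)
        i_inj = np.where((t > 20) & (t < 80), 1.0, 0.0)   # step-current protocol

        def simulate(params):
            """Integrate a toy passive membrane: C dV/dt = -g (V - E) + I."""
            g, e_rest, c = params
            v = np.empty_like(t)
            v[0] = e_rest
            for n in range(t.size - 1):
                v[n + 1] = v[n] + dt * (-g * (v[n] - e_rest) + i_inj[n]) / c
            return v

        true_params = np.array([0.1, -65.0, 1.0])
        data = simulate(true_params) + 0.2 * np.random.default_rng(0).normal(size=t.size)

        fit = least_squares(lambda p: simulate(p) - data, x0=[0.05, -60.0, 0.5])
        print("estimated (g, E_rest, C):", fit.x)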

  4. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2015-01-01

    The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.
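
    This style of model selection is available in the author's graph-tool library. A sketch along the following lines (hedged: exact argument names vary across graph-tool versions) fits degree-corrected and non-degree-corrected blockmodels and compares their description lengths:

        import graph_tool.all as gt

        g = gt.collection.data["polbooks"]      # small bundled empirical network

        # Fit stochastic block models with and without degree correction.
        state_dc = gt.minimize_blockmodel_dl(g, state_args=dict(deg_corr=True))
        state_nd = gt.minimize_blockmodel_dl(g, state_args=dict(deg_corr=False))

        # entropy() returns the description length (in nats); the smaller value
        # is the model favored by the data, with posterior odds ~ exp(-delta).
        delta = state_dc.entropy() - state_nd.entropy()
        print("DL(degree-corrected) - DL(plain):", delta)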

  5. Seemingly unrelated intervention time series models for effectiveness evaluation of large scale environmental remediation.

    PubMed

    Ip, Ryan H L; Li, W K; Leung, Kenneth M Y

    2013-09-15

    Large-scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are therefore necessary and essential for policy review and future planning. This study investigates the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects: Model (1) assumes no correlation within or across variables; Model (2) assumes no correlation across variables but allows correlations within each variable across different sites; and Model (3) allows all possible correlations among variables (i.e., an unrestricted model). The results suggest that the unrestricted SUR model is the most reliable one, consistently having the smallest variations of the estimated model parameters. We discuss our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration. PMID:23932418

  6. Aerodynamic characteristics of a large scale model with a swept wing and augmented jet flap

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.

    1971-01-01

    Data from tests of a large-scale swept augmentor-wing model in the 40- by 80-foot wind tunnel are presented. The data include longitudinal characteristics with and without a horizontal tail, as well as results of a preliminary investigation of lateral-directional characteristics. The augmentor flap deflection was varied from 0 deg to 70.6 deg at isentropic jet thrust coefficients of 0 to 1.47. The tests were made at Reynolds numbers from 2.43 million to 4.1 million.

  7. Reconstruction of large-scale gene regulatory networks using Bayesian model averaging.

    PubMed

    Kim, Haseong; Gelenbe, Erol

    2012-09-01

    Gene regulatory networks provide a systematic view of molecular interactions in a complex living system. However, constructing large-scale gene regulatory networks is one of the most challenging problems in systems biology, and the large, heterogeneous sets of biological data involved require a proper integration technique for reliable network construction. Here we present a new reverse-engineering approach based on Bayesian model averaging, which attempts to combine all the appropriate models describing interactions among genes. This Bayesian approach, with a prior based on the Gibbs distribution, provides an efficient means to integrate multiple sources of biological data. In a simulation study with a maximum of 2000 genes, our method shows better sensitivity than previous elastic-net and Gaussian graphical models, at a fixed specificity of 0.99. The study also shows that the proposed method outperforms the other standard methods on a DREAM dataset generated by nonlinear stochastic models. In a brain tumor data analysis, three large-scale networks consisting of 4422 genes were built using the gene expression of non-tumor, low-grade, and high-grade tumor mRNA expression samples, along with DNA-protein binding affinity information. We found that genes having a large variation of degree distribution among the three tumor networks are the ones most involved in regulatory and developmental processes, which possibly gives novel insight beyond conventional differentially-expressed-gene analysis. PMID:22987132
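
    The core of model averaging is easy to demonstrate on a toy problem. The sketch below weights every candidate regulator subset for one target gene by its BIC-approximated evidence; this is a generic illustration, not the paper's Gibbs-prior formulation, and all data are synthetic.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)
        n, n_reg = 100, 4
        X = rng.normal(size=(n, n_reg))                 # candidate regulators
        y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

        subsets, bics = [], []
        for k in range(1, n_reg + 1):
            for subset in combinations(range(n_reg), k):
                Xs = X[:, subset]
                beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
                rss = float(np.sum((y - Xs @ beta) ** 2))
                subsets.append(subset)
                bics.append(n * np.log(rss / n) + k * np.log(n))

        # BIC approximates -2 log(evidence); convert to posterior model weights.
        w = np.exp(-0.5 * (np.array(bics) - min(bics)))
        w /= w.sum()
        for subset, weight in sorted(zip(subsets, w), key=lambda s: -s[1])[:3]:
            print(subset, round(float(weight), 3))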

  8. Aspects of investigating STOL noise using large-scale wind-tunnel models.

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Soderman, P. T.; Koenig, D. G.

    1973-01-01

    The applicability of the NASA Ames 40- by 80-foot wind tunnel for acoustic research on STOL concepts has been investigated. The acoustic characteristics of the wind-tunnel test section have been studied with calibrated acoustic sources. Acoustic characteristics of several large-scale STOL models have been studied in both the free-field and wind-tunnel acoustic environments. The results of these studies indicate that the acoustic characteristics of large-scale STOL models can be measured in the wind tunnel if the test section acoustic environment and model acoustic similitude are taken into consideration. The reverberant field of the test section must be determined with an acoustically similar noise source. A directional microphone, a phased array of microphones, and extrapolation of near-field data to far-field are some of the techniques being explored as possible solutions to the directivity loss in a reverberant field. The model sound pressure levels must be of sufficient magnitude to be distinguishable from the wind-tunnel background noise.

  9. Sensitivity analysis of key components in large-scale hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods on key components of hydro-economic models, such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization model for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful for a wide range of assumptions in eliciting promising water management alternatives.

  10. Meteorological and photochemical modeling of large-scale albedo changes in the South Coast Air Basin

    SciTech Connect

    Tran, K.T.; Mirabella, V.A.

    1998-12-31

    The effectiveness of large-scale surface albedo changes as an ozone control strategy is investigated. These albedo changes are part of the Cool Communities strategy that calls for the use of lighter-colored roofing and paving materials as well as an increase in tree planting. The advanced mesoscale model MM5 was used to analyze the associated effects on ambient temperature, mixing depth, and winds. The MM5 model was modified to accept surface properties derived from a satellite-based land use database. Preprocessors were also developed to allow a research-oriented model such as MM5 to be user-friendly and amenable to practical, routine air quality modeling applications. Changes in ozone air quality are analyzed with the Urban Airshed Model (UAM). Results of the MM5/UAM simulations of the SCAQS August 26-28, 1987 ozone episode are presented and compared to those obtained with the CSUMM/UAM models.

  11. Development of a coupled soil erosion and large-scale hydrology modeling system

    NASA Astrophysics Data System (ADS)

    Mao, Dazhi; Cherkauer, Keith A.; Flanagan, Dennis C.

    2010-08-01

    Soil erosion models are usually limited in their application to the field scale; however, the management of land resources requires information at the regional scale. Large-scale physically based land surface schemes (LSS) provide estimates of regional scale hydrologic processes that contribute to erosion. If scaling issues are adequately addressed, coupling an LSS to a physically based erosion model can provide a tool to study the regional impact of soil erosion. A coupling scheme was developed using the Variable Infiltration Capacity (VIC) model to produce hydrologic inputs for the stand-alone Water Erosion Prediction Project-Hillslope Erosion (WEPP-HE) program, accounting for both temporal and spatial scaling issues. Precipitation events were disaggregated from daily to hourly and used with the VIC model to generate hydrologic fluxes. Slope profiles were downscaled from 30 arc second to 30 m hillslopes. Additionally, soil texture and erodibility were adjusted with simplified assumptions based on the full WEPP model. Soil erosion at the large scale was represented on a VIC model grid cell basis by applying WEPP-HE to subsamples of 30 m hillslopes. On an average annual basis, results showed that the coupled model was comparable with full WEPP model predictions. On an event basis, the coupled model system captured more small erosion events, with erodibility adjustments of the same magnitude as from the full WEPP model simulations. Differences in results can be attributed to discrepancies in hydrologic data calculations and simplified assumptions in vegetation and soil erodibility. Overall, the coupled model demonstrated the feasibility of erosion prediction for large river basins.

  12. Assimilative Modeling of Large-Scale Equatorial Plasma Trenches Observed by C/NOFS

    NASA Astrophysics Data System (ADS)

    Su, Y.; Retterer, J. M.; de La Beaujardiere, O.; Burke, W. J.; Roddy, P. A.; Pfaff, R. F.; Hunton, D. E.

    2009-12-01

    Low-latitude plasma irregularities commonly observed during post-sunset local times have been studied extensively by ground-based measurements such as coherent and incoherent scatter radars and ionosondes, as well as by satellite observations. The pre-reversal enhancement in the upward plasma drift due to eastward electric fields has been identified as the primary cause of these irregularities. Reports of plasma depletions at post-midnight and early-morning local times are scarce and are typically limited to storm-time conditions. Such dawn plasma depletions were frequently observed by C/NOFS in June 2008 [de La Beaujardière et al., Geophys. Res. Lett., 36, L00C06, doi:10.1029/2009GL038884, 2009]. We are able to qualitatively reproduce the large-scale density depletion observed by the Planar Langmuir Probe (PLP) on June 17, 2008 [Su et al., Geophys. Res. Lett., 36, L00C02, doi:10.1029/2009GL038946, 2009], based on the assimilative physics-based ionospheric model (PBMOD), using available electric field data obtained from the Vector Electric Field Instrument (VEFI) as the model input. In comparison, no plasma depletion or irregularity is obtained from the climatology version of our model, in which the large upward drift velocities caused by the observed eastward electric fields are absent. In this presentation, we extend our study to the entire month of June 2008 to exercise the capability of PBMOD to forecast large-scale density trenches with available VEFI data.

  13. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaptation measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and the local detail that influences temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry, and in the representation of hydrologic processes influencing the surface water balance, such as open-water evaporation from inundated areas and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation, including its spatio-temporal variations at the large scale yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D-FM capable of representing the model domain as a flexible mesh, model resolution is refined only at relevant locations (the river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is thereby schematized by 2D floodplains derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with one-way coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to two-way coupling to fully represent evaporation

  14. A New Statistically based Autoconversion rate Parameterization for use in Large-Scale Models

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Zhang, Junhua; Lohmann, Ulrike

    2002-01-01

    The autoconversion rate is a key process for the formation of precipitation in warm clouds. In climate models, physical processes such as the autoconversion rate that are calculated from grid-mean values are biased, because they do not take subgrid variability into account. Recently, statistical cloud schemes have been introduced in large-scale models to account for partially cloud-covered grid boxes. However, these schemes do not include the in-cloud variability in their parameterizations. In this paper, a new statistically based autoconversion rate considering the in-cloud variability is introduced and tested in three cases using the Canadian Single Column Model (SCM) of the global climate model. The results show that the new autoconversion rate improves the model simulation, especially in terms of liquid water path, in all three case studies.
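
    The grid-mean bias follows from the nonlinearity of the autoconversion rate: for a convex rate A(q) proportional to q**beta with beta > 1, Jensen's inequality gives E[A(q)] > A(E[q]). The sketch below quantifies the enhancement for an assumed gamma distribution of in-cloud liquid water and a Khairoutdinov-Kogan-style exponent (both are illustrative choices, not the paper's scheme).

        import numpy as np
        from scipy import stats

        beta = 2.47                    # autoconversion exponent, A ~ q**beta
        mean_q, rel_var = 0.3, 0.5     # grid-mean liquid water (g/kg), relative variance

        # Gamma distribution of subgrid in-cloud liquid water with the given mean.
        shape = 1.0 / rel_var
        dist = stats.gamma(a=shape, scale=mean_q / shape)

        q = dist.rvs(size=200_000, random_state=0)
        rate_resolved = np.mean(q ** beta)    # E[A(q)]: accounts for variability
        rate_gridmean = mean_q ** beta        # A(E[q]): biased grid-mean estimate

        print("enhancement factor:", rate_resolved / rate_gridmean)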

  15. LipidWrapper: An Algorithm for Generating Large-Scale Membrane Models of Arbitrary Geometry

    PubMed Central

    Durrant, Jacob D.; Amaro, Rommie E.

    2014-01-01

    As ever larger and more complex biological systems are modeled in silico, approximating physiological lipid bilayers with simple planar models becomes increasingly unrealistic. In order to build accurate large-scale models of subcellular environments, models of lipid membranes with carefully considered, biologically relevant curvature will be essential. In the current work, we present a multi-scale utility called LipidWrapper capable of creating curved membrane models with geometries derived from various sources, both experimental and theoretical. To demonstrate its utility, we use LipidWrapper to examine an important mechanism of influenza virulence. A copy of the program can be downloaded free of charge under the terms of the open-source FreeBSD License from http://nbcr.ucsd.edu/lipidwrapper. LipidWrapper has been tested on all major computer operating systems. PMID:25032790

  16. LipidWrapper: an algorithm for generating large-scale membrane models of arbitrary geometry.

    PubMed

    Durrant, Jacob D; Amaro, Rommie E

    2014-07-01

    As ever larger and more complex biological systems are modeled in silico, approximating physiological lipid bilayers with simple planar models becomes increasingly unrealistic. In order to build accurate large-scale models of subcellular environments, models of lipid membranes with carefully considered, biologically relevant curvature will be essential. In the current work, we present a multi-scale utility called LipidWrapper capable of creating curved membrane models with geometries derived from various sources, both experimental and theoretical. To demonstrate its utility, we use LipidWrapper to examine an important mechanism of influenza virulence. A copy of the program can be downloaded free of charge under the terms of the open-source FreeBSD License from http://nbcr.ucsd.edu/lipidwrapper. LipidWrapper has been tested on all major computer operating systems. PMID:25032790

  17. Large-scale shell-model calculations on the spectroscopy of N <126 Pb isotopes

    NASA Astrophysics Data System (ADS)

    Qi, Chong; Jia, L. Y.; Fu, G. J.

    2016-07-01

    Large-scale shell-model calculations are carried out in the model space including the neutron-hole orbitals 2p1/2, 1f5/2, 2p3/2, 0i13/2, 1f7/2, and 0h9/2 to study the structure and electromagnetic properties of neutron-deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in the isotopes 194-206Pb. The lighter isotopes are calculated with an importance-truncation approach constructed from the monopole Hamiltonian. The full shell-model results also agree well with our generalized-seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying 0+ and 2+ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configurations in this region.

  18. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-03-01

    The Prediction in Ungauged Basins (PUB) scientific initiative (2003-2012, by IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines for making predictions in catchments without observed runoff data. At present, there is growing interest in applying catchment models to large domains and large data samples in a multi-basin manner. However, such modelling involves several sources of uncertainty, which may be caused by imperfect input data, particularly regional and global databases. This may lead to inaccurate model parameterisation and incomplete process understanding. In order to bridge the gap between the best practices for single catchments and large-scale hydrology, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). Using examples from a recent HYPE hydrological model set-up on the Indian subcontinent, named India-HYPE v1.0, we explore the recommendations, indicate challenges, and recommend quality checks to avoid erroneous assumptions. We identify the obstacles and ways to overcome them, and describe the work process related to: (a) errors and inconsistencies in global databases, unknown human impacts, and poor data quality; (b) robust approaches to identify parameters using a stepwise calibration approach, remote sensing data, expert knowledge, and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong hydro-climatic gradient over the subcontinent, a single model can adequately describe the spatial variability in dominant hydrological processes at the catchment scale. Eventually, during calibration of India-HYPE, the median Kling-Gupta Efficiency for

  19. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    PubMed

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272

  20. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity

    PubMed Central

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272
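
    Stripped of simulator details, the homeostatic wiring rule can be conveyed with a toy loop: synaptic elements are added while a neuron's activity sits below its target and removed when it overshoots. All constants below are illustrative, and the sketch is not the NEST implementation.

        import random

        target, eps = 0.05, 0.002            # target activity level and tolerance
        activity, elements, w = 0.0, 0, 0.01

        random.seed(1)
        for step in range(5000):
            # Activity relaxes toward the drive provided by the synaptic elements.
            activity += 0.1 * (elements * w - activity) + random.gauss(0.0, 1e-4)

            # Homeostatic rule: grow elements below target, delete them above it.
            if step % 10 == 0:               # structural changes are slower than activity
                if activity < target - eps:
                    elements += 1
                elif activity > target + eps and elements > 0:
                    elements -= 1

        print("elements:", elements, "activity:", round(activity, 3))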

  1. Towards large scale modelling of wetland water dynamics in northern basins.

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Sapriza, G.; Stone, L.; Davison, B.; Pietroniro, A.; Quinton, W. L.; Spence, C.; Wheater, H. S.

    2015-12-01

    Understanding the hydrological behaviour of low-topography, wetland-dominated sub-arctic areas is a major challenge for the improvement of large-scale hydrological models. These wet organic soils cover a large extent of northern North America and have a considerable impact on the rainfall-runoff response of a catchment. Moreover, their strong interactions with the lower atmosphere and the carbon cycle make these areas a noteworthy component of the regional climate system. In the framework of the Changing Cold Regions Network (CCRN), this study aims to provide a model for wetland water dynamics that can be used for large-scale applications in cold regions. The modelling system has two main components: (a) the simulation of surface runoff using the Modélisation Environmentale Communautaire - Surface and Hydrology (MESH) land surface model driven with several gridded atmospheric datasets, and (b) the routing of surface runoff using the WATROUTE channel scheme. As a preliminary study, we focus on two small representative study basins in Northern Canada: Scotty Creek in the lower Liard River valley of the Northwest Territories, and Baker Creek, located a few kilometers north of Yellowknife. Both areas present characteristic landscapes dominated by a series of peat plateaus, channel fens, small lakes, and bogs. Moreover, they constitute important fieldwork sites with detailed data to support our modelling study. The challenge for our new wetland model is to represent the hydrological functioning of the various landscape units encountered in those watersheds, and their interactions, using simple numerical formulations that can later be extended to larger basins such as the Mackenzie River basin. Using observed datasets, the performance of the model in simulating the temporal evolution of hydrological variables such as water table depth, frost table depth, and discharge is assessed.

  2. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between governments, insurers, and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed that includes agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model, allowing a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
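
    The core agent loop of such a model is compact. The following sketch, with entirely hypothetical parameter values and a deliberately naive decision rule, shows households updating their risk perception after flood events and investing in protection when the perceived expected benefit exceeds the net cost:

        import random

        random.seed(42)

        class Household:
            def __init__(self):
                self.protected = False
                self.risk_perception = random.uniform(0.0, 1.0)

            def decide(self, flood_prob, damage, cost, subsidy):
                # Invest when perceived expected avoided damage exceeds net cost.
                if self.risk_perception * flood_prob * damage > cost - subsidy:
                    self.protected = True

        agents = [Household() for _ in range(10_000)]
        for year in range(50):
            flooded = random.random() < 0.02        # a 1-in-50-year event
            for a in agents:
                if flooded:
                    a.risk_perception = min(1.0, a.risk_perception + 0.3)
                else:
                    a.risk_perception *= 0.98       # risk awareness fades
                a.decide(flood_prob=0.02, damage=500_000, cost=5_000, subsidy=1_000)

        print("protection uptake:", sum(a.protected for a in agents) / len(agents))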

  3. The topology of large-scale structure. II - Nonlinear evolution of Gaussian models

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Weinberg, David H.; Gott, J. Richard, III

    1988-01-01

    The evolution of non-Gaussian behavior in the large-scale universe from Gaussian initial conditions is studied. Topology measures developed in previous papers are applied to the smoothed initial, final, and biased matter distributions of cold dark matter, white noise, and massive neutrino simulations. When the smoothing length is approximately twice the mass correlation length or larger, the evolved models look like the initial conditions, suggesting that random phase hypotheses in cosmology can be tested with adequate data sets. When a smaller smoothing length is used, nonlinear effects are recovered, so nonlinear effects on topology can be detected in redshift surveys after smoothing at the mean intergalaxy separation. Hot dark matter models develop manifestly non-Gaussian behavior attributable to phase correlations, with a topology reminiscent of bubble or sheet distributions. Cold dark matter models remain Gaussian, and biasing does not disguise this.

  4. Pangolin v1.0, a conservative 2-D transport model for large scale parallel calculation

    NASA Astrophysics Data System (ADS)

    Praga, A.; Cariolle, D.; Giraud, L.

    2014-07-01

    To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric transport model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach was chosen both for mass preservation and for ease of parallelization. To overcome the pole restriction on time steps for a regular latitude-longitude grid, Pangolin uses a quasi-area-preserving reduced latitude-longitude grid. The features of the regular grid are exploited to improve parallel performance, and a custom domain decomposition algorithm is presented. To assess the validity of the transport scheme, its results are compared with state-of-the-art models on analytical test cases. Finally, parallel performance is shown in terms of strong scaling, confirming efficient scalability up to a few hundred cores.
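
    The mass-preservation property of a finite-volume scheme is easy to demonstrate in one dimension. The sketch below (a first-order upwind flux on a periodic 1-D grid, far simpler than Pangolin's scheme) conserves total tracer mass to machine precision:

        import numpy as np

        nx, u = 200, 1.0
        dx = 1.0 / nx
        dt = 0.5 * dx / u                       # CFL-stable time step

        x = (np.arange(nx) + 0.5) * dx
        q = np.exp(-200.0 * (x - 0.3) ** 2)     # initial tracer blob

        mass0 = q.sum() * dx
        for _ in range(400):
            flux = u * q                        # first-order upwind flux (u > 0)
            q = q - dt / dx * (flux - np.roll(flux, 1))

        print("mass change:", q.sum() * dx - mass0)   # ~0: finite volume conserves mass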

  5. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel

    PubMed Central

    Yuan, Liming; Smith, Alex C.

    2015-01-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect. PMID:26190905

  6. Inclusive constraints on unified dark matter models from future large-scale surveys

    SciTech Connect

    Camera, Stefano; Carbone, Carmelita; Moscardini, Lauro E-mail: carmelita.carbone@unibo.it

    2012-03-01

    In recent years, cosmological models where the properties of the dark components of the Universe — dark matter and dark energy — are accounted for by a single ''dark fluid'' have drawn increasing attention and interest. Amongst many proposals, Unified Dark Matter (UDM) cosmologies are promising candidates as effective theories. In these models, a scalar field with a non-canonical kinetic term in its Lagrangian mimics both the accelerated expansion of the Universe at late times and the clustering properties of the large-scale structure of the cosmos. However, UDM models also present peculiar behaviours, the most interesting being that the perturbations in the dark-matter component of the scalar field have a non-negligible speed of sound. This gives rise to an effective Jeans scale for the Newtonian potential, below which the dark fluid no longer clusters. This implies a growth of structures fairly different from that of the concordance ΛCDM model. In this paper, we demonstrate that forthcoming large-scale surveys will be able to discriminate between viable UDM models and ΛCDM to a good degree of accuracy. For this purpose, the planned Euclid satellite will be a powerful tool, since it will provide very accurate data on galaxy clustering and the weak lensing effect of cosmic shear. Finally, we also exploit the constraining power of the ongoing CMB Planck experiment. Although our approach is the most conservative, including only well-understood linear dynamics, we also show what could be done if some amount of non-linear information were included.

  7. Surprising Long Range Effects of Local Shoreline Stabilization in a Large-Scale Coastline Model

    NASA Astrophysics Data System (ADS)

    Slott, J.; Murray, B.; Valvo, L.; Ashton, A.

    2004-12-01

    As coastlines continue to retreat and threaten communities, roads, and other infrastructure, humans increasingly employ shoreline stabilization techniques to maintain the shoreline in its current position. Examples of shoreline stabilization techniques include beach nourishment and seawall construction. During beach nourishment, sand is typically dredged from locations offshore and placed on the beach. Seawalls or revetments, on the other hand, are hardened concrete structures which prevent the shoreline from retreating further yet do not add sand to the nearshore system. Coastal engineers and scientists have only addressed the local and relatively short-term effects of shoreline stabilization. Can beach nourishment or seawalls affect coastline behavior tens or hundreds of kilometers away in the longer term? We adapted a recently developed model of large-scale, long-term shoreline change to address such questions. On predominantly sandy shorelines, waves breaking at oblique angles to the shoreline orientation drive the alongshore transport of sediment. Though alongshore-driven sediment transport has traditionally been believed to smooth out shoreline features, Ashton et al. (2001) have shown that it can cause more complex shoreline evolution. Their model showed the spontaneous formation of large-scale features such as capes and cuspate forelands (e.g., the shape of the coastline of the Carolinas) using simple sediment transport relationships. This model accounts for non-local shoreline interactions, such as wave "shadowing." In this work, we have further developed the large-scale shoreline model to include the effects that shoreline stabilization techniques have on shoreline position and sediment supply. In one set of experiments, we chose an initial shoreline with cape-like features separated by approximately 100 kilometers, roughly similar to the coast of the Carolinas. In each individual experiment, we nourished a different 10-kilometer section of coastline.
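
    A cartoon of the underlying instability mechanism (our illustration, not the adapted model itself): with a CERC-type alongshore flux Q proportional to sin(alpha)cos(alpha), where alpha is the deep-water wave angle relative to the local shoreline, the flux peaks near 45 degrees, so for higher-angle waves the shoreline response becomes anti-diffusive and perturbations grow (refraction and breaking, ignored here, shift the real threshold to roughly 42 degrees):

      import numpy as np

      # Illustrative CERC-type alongshore flux (arbitrary units); refraction and
      # breaking are ignored, so this is only a cartoon of the mechanism.
      def flux(alpha_deg):
          a = np.radians(alpha_deg)
          return np.sin(a) * np.cos(a)  # peaks at 45 degrees

      alphas = np.linspace(0.0, 90.0, 91)
      dQ = np.gradient(flux(alphas), alphas)
      # Negative dQ/dalpha means an anti-diffusive shoreline response:
      print(alphas[dQ < 0][0])  # first wave angle at which perturbations grow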

  8. Phanerozoic marine diversity: rock record modelling provides an independent test of large-scale trends

    PubMed Central

    Smith, Andrew B.; Lloyd, Graeme T.; McGowan, Alistair J.

    2012-01-01

    Sampling bias created by a heterogeneous rock record can seriously distort estimates of marine diversity and makes a direct reading of the fossil record unreliable. Here we compare two independent estimates of Phanerozoic marine diversity that explicitly take account of variation in sampling—a subsampling approach that standardizes for differences in fossil collection intensity, and a rock area modelling approach that takes account of differences in rock availability. Using the fossil records of North America and Western Europe, we demonstrate that a modelling approach applied to the combined data produces results that are significantly correlated with those derived from subsampling. This concordance between independent approaches argues strongly for the reality of the large-scale trends in diversity we identify from both approaches. PMID:22951734
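
    As a pointer to what standardizing for collection intensity involves, classical rarefaction repeatedly subsamples occurrences to a common quota and averages the taxon counts (a generic sketch, not the authors' exact protocol):

      import random

      def rarefied_richness(occurrences, quota, trials=1000, seed=0):
          """Mean taxon richness at a fixed sampling quota, estimated by
          repeatedly drawing `quota` occurrences without replacement
          (classical rarefaction; illustration only)."""
          rng = random.Random(seed)
          total = 0
          for _ in range(trials):
              total += len(set(rng.sample(occurrences, quota)))
          return total / trials

      # Hypothetical occurrence list: each item is the taxon of one fossil find.
      occs = ["A"] * 50 + ["B"] * 30 + ["C"] * 15 + ["D"] * 5
      print(rarefied_richness(occs, quota=20))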

  9. Large-scale shell-model calculations of nuclei around mass 210

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Higashiyama, K.; Yoshinaga, N.

    2016-06-01

    Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For a phenomenological effective two-body interaction, one set of monopole-pairing and quadrupole-quadrupole interactions, including the multipole-pairing interactions, is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.
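
    The pairing-plus-quadrupole family of interactions referred to here has a standard schematic form (shown for orientation; the paper's exact parametrization may differ):

      H = \sum_i \varepsilon_i\, c_i^\dagger c_i \;-\; G_0\, P^{(0)\dagger} P^{(0)} \;-\; G_2\, P^{(2)\dagger} \cdot \tilde{P}^{(2)} \;-\; \frac{\chi}{2}\, Q \cdot Q,

    where P^(0)† creates a monopole (J = 0) pair, P^(2)† a quadrupole pair (the multipole-pairing term), and Q is the one-body quadrupole operator.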

  10. GPU-Based Parallelized Solver for Large Scale Vascular Blood Flow Modeling and Simulations.

    PubMed

    Santhanam, Anand P; Neylon, John; Eldredge, Jeff; Teran, Joseph; Dutson, Erik; Benharash, Peyman

    2016-01-01

    Cardiovascular blood flow simulations are essential to understanding blood flow behavior during normal and disease conditions. To date, such blood flow simulations have only been performed at a macro-scale level due to computational limitations. In this paper, we present a GPU-based large-scale solver that enables modeling the flow even in the smallest arteries. A mechanical equivalent of the circuit-based flow modeling system is first developed to employ the GPU computing framework. Numerical studies were performed using a set of 10 million connected vascular elements. Run-time flow analyses were performed to simulate vascular blockages as well as arterial cut-off. Our results showed that we can achieve ~100 FPS using a GTX 680m and ~40 FPS using a Tegra K1 computing platform. PMID:27046603
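
    The circuit analogy can be sketched in a few lines: each vessel segment is a Poiseuille resistor, and Kirchhoff's current law yields a linear system for the nodal pressures (a toy dense example with hypothetical geometry and nominal values; a solver at the paper's scale would use sparse, GPU-resident structures):

      import numpy as np

      mu = 3.5e-3  # nominal blood viscosity, Pa*s
      # (node_a, node_b, length_m, radius_m): a hypothetical 4-node branching tree
      vessels = [(0, 1, 0.02, 2e-3), (1, 2, 0.02, 1.5e-3), (1, 3, 0.02, 1.5e-3)]
      n = 4
      A, b = np.zeros((n, n)), np.zeros(n)
      for i, j, L, r in vessels:
          g = np.pi * r**4 / (8 * mu * L)  # Poiseuille conductance, 1/R
          A[i, i] += g; A[j, j] += g; A[i, j] -= g; A[j, i] -= g
      for node, pv in {0: 13000.0, 2: 2000.0, 3: 2000.0}.items():  # fixed pressures, Pa
          A[node, :] = 0.0; A[node, node] = 1.0; b[node] = pv
      p = np.linalg.solve(A, b)  # nodal pressures from Kirchhoff's current law
      for i, j, L, r in vessels:
          g = np.pi * r**4 / (8 * mu * L)
          print(f"flow {i}->{j}: {g * (p[i] - p[j]):.2e} m^3/s")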

  11. Large scale nonlinear numerical optimal control for finite element models of flexible structures

    NASA Technical Reports Server (NTRS)

    Shoemaker, Christine A.; Liao, Li-Zhi

    1990-01-01

    This paper discusses the development of large-scale numerical optimal control algorithms for nonlinear systems and their application to finite element models of structures. This work is based on our expansion of the differential dynamic programming (DDP) optimal control algorithm in the following steps: improvement of convergence for initial policies in non-convex regions, development of a numerically accurate penalty-function approach for constrained DDP problems, and parallel processing on supercomputers. The expanded constrained DDP algorithm was applied to the control of a four-bay, two-dimensional truss with 12 soft members, which generate geometric nonlinearities. Using an explicit finite element model to describe the structural system requires 32 state variables and 10,000 time steps. Our numerical results indicate that for constrained or unconstrained structural problems with nonlinear dynamics, the results obtained by our expanded constrained DDP algorithm are significantly better than those obtained using linear-quadratic feedback control.
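
    For background (our illustration, not the paper's expanded constrained algorithm): for linear dynamics and quadratic cost, the backward sweep at the heart of DDP reduces to the discrete-time Riccati recursion computing time-varying feedback gains:

      import numpy as np

      def riccati_gains(A, B, Q, R, Qf, N):
          """Backward sweep for u_k = -K_k x_k with x_{k+1} = A x_k + B u_k and
          stage cost x'Qx + u'Ru (the LQR special case of the DDP backward
          pass; illustration only)."""
          P, gains = Qf.copy(), []
          for _ in range(N):
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
              P = Q + A.T @ P @ (A - B @ K)
              gains.append(K)
          return gains[::-1]  # ordered k = 0 .. N-1

      dt = 0.1  # hypothetical double-integrator "structure" with one actuator
      A = np.array([[1.0, dt], [0.0, 1.0]])
      B = np.array([[0.0], [dt]])
      print(riccati_gains(A, B, np.eye(2), 0.1 * np.eye(1), 10 * np.eye(2), 100)[0])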

  12. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    SciTech Connect

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
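
    The combination of reverse computation with a little incremental state saving can be illustrated in a few lines (a generic sketch, not the paper's reaction-diffusion model): purely arithmetic updates are simply inverted on rollback, while destructive updates save only the information they destroy:

      class Cell:
          """Toy epidemic cell with a reversible event handler."""
          def __init__(self, infected=0):
              self.infected = infected
              self._saved = []  # incremental state: only what cannot be recomputed

          def handle_arrival(self, k):
              self.infected += k                      # reversible increment
              overflow = max(0, self.infected - 100)  # clamping destroys info,
              self._saved.append(overflow)            # so save the overflow
              self.infected -= overflow

          def reverse_arrival(self, k):
              self.infected += self._saved.pop()      # undo in exact reverse order
              self.infected -= k

      c = Cell(95)
      c.handle_arrival(10)   # clamped at 100; overflow of 5 saved
      c.reverse_arrival(10)  # rollback restores the original 95
      print(c.infected)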

  13. Microbranching in mode-I fracture using large-scale simulations of amorphous and perturbed-lattice models

    NASA Astrophysics Data System (ADS)

    Heizler, Shay I.; Kessler, David A.

    2015-07-01

    We study the high-velocity regime of the mode-I fracture instability wherein small microbranches start to appear near the main crack, using large-scale simulations. Some of the features of those microbranches have been reproduced qualitatively in smaller-scale studies [using O(10^4) atoms] on both a model of an amorphous material (via the continuous random network model) and perturbed-lattice models. In this study, larger-scale simulations [O(10^6) atoms] were performed using multithreaded computing on a GPU device, in order to achieve more physically realistic results. First, we find that the microbranching pattern appears to converge with the lattice width. Second, the simulations reproduce the growth of the size of a microbranch as a function of the crack velocity, as well as the increase of the amplitude of the derivative of the electrical-resistance root mean square with respect to time as a function of the crack velocity. In addition, the simulations yield the correct branching angle of the microbranches, and the power-law exponent governing the shape of the microbranches seems to be lower than unity, so that the side cracks turn over in the direction of propagation of the main crack, as seen in experiment.

  14. A large-scale neurocomputational model of task-oriented behavior selection and working memory in prefrontal cortex.

    PubMed

    Chadderdon, George L; Sporns, Olaf

    2006-02-01

    The prefrontal cortex (PFC) is crucially involved in the executive component of working memory, representation of task state, and behavior selection. This article presents a large-scale computational model of the PFC and associated brain regions designed to investigate the mechanisms by which working memory and task state interact to select adaptive behaviors from a behavioral repertoire. The model consists of multiple brain regions containing neuronal populations with realistic physiological and anatomical properties, including extrastriate visual cortical regions, the inferotemporal cortex, the PFC, the striatum, and midbrain dopamine (DA) neurons. The onset of a delayed match-to-sample or delayed nonmatch-to-sample task triggers tonic DA release in the PFC causing a switch into a persistent, stimulus-insensitive dynamic state that promotes the maintenance of stimulus representations within prefrontal networks. Other modeled prefrontal and striatal units select cognitive acceptance or rejection behaviors according to which task is active and whether prefrontal working memory representations match the current stimulus. Working memory task performance and memory fields of prefrontal delay units are degraded by extreme elevation or depletion of tonic DA levels. Analyses of cellular and synaptic activity suggest that hyponormal DA levels result in increased prefrontal activation, whereas hypernormal DA levels lead to decreased activation. Our simulation results suggest a range of predictions for behavioral, single-cell, and neuroimaging response data under the proposed task set and under manipulations of DA concentration. PMID:16494684

  16. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides an opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For the MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agents' pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agents' beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agents' attitude towards the fluctuation of crop profits. These human behavioral parameters, as inputs to the MAS model, are highly uncertain and not directly measurable. Thus, we estimate their influence on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days required to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA to large-scale socio-hydrological models.
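
    The paper derives the Sobol indices from PCE coefficients; as a generic illustration of what a first-order index measures, the standard Saltelli-style Monte Carlo estimator is (our sketch, not the PCE route used in the paper):

      import numpy as np

      def first_order_sobol(f, d, n=100_000, seed=0):
          """Estimate first-order Sobol indices S_i for a model f acting
          row-wise on inputs in [0,1]^d (Saltelli 2010 estimator)."""
          rng = np.random.default_rng(seed)
          A, B = rng.random((n, d)), rng.random((n, d))
          fA, fB = f(A), f(B)
          var = np.concatenate([fA, fB]).var()
          S = np.empty(d)
          for i in range(d):
              ABi = A.copy(); ABi[:, i] = B[:, i]  # A with column i from B
              S[i] = (fB * (f(ABi) - fA)).mean() / var
          return S

      # Hypothetical test model: x0 dominates, x2 is inert.
      print(first_order_sobol(lambda X: 4 * X[:, 0] + X[:, 1], d=3))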

  17. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
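
    For reference, the skeleton that the enhanced method builds on is plain point-to-point ICP: closest-point correspondences followed by a least-squares (SVD/Kabsch) rigid alignment. The paper layers octree-accelerated search, local-minimum detection, and a heuristic escape scheme on top of this. A minimal sketch (not the authors' implementation):

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid(P, Q):
          """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
          cp, cq = P.mean(0), Q.mean(0)
          U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
          R = Vt.T @ D @ U.T
          return R, cq - R @ cp

      def icp(source, target, iters=30):
          tree = cKDTree(target)
          src = source.copy()
          for _ in range(iters):
              _, idx = tree.query(src)             # closest-point correspondences
              R, t = best_rigid(src, target[idx])  # incremental rigid transform
              src = src @ R.T + t
          return src

      rng = np.random.default_rng(1)
      target = rng.random((500, 3))
      c, s = np.cos(0.2), np.sin(0.2)
      Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
      source = target @ Rz.T + np.array([0.1, -0.05, 0.02])
      print(np.abs(icp(source, target) - target).max())  # near zero if converged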

  18. Large-scale sequencing and the natural history of model human RNA viruses

    PubMed Central

    Dugan, Vivien G; Saira, Kazima; Ghedin, Elodie

    2012-01-01

    RNA virus exploration within the field of medical virology has greatly benefited from technological developments in genomics, deepening our understanding of viral dynamics and emergence. Large-scale first-generation technology sequencing projects have expedited molecular epidemiology studies at an unprecedented scale for two pathogenic RNA viruses chosen as models: influenza A virus and dengue. Next-generation sequencing approaches are now leading to a more in-depth analysis of virus genetic diversity, which is greater for RNA than DNA viruses because of high replication rates and the absence of proofreading activity of the RNA-dependent RNA polymerase. In the field of virus discovery, technological advancements and metagenomic approaches are expanding the catalogs of novel viruses by facilitating our probing into the RNA virus world. PMID:23682295

  19. Structure of exotic nuclei by large-scale shell model calculations

    SciTech Connect

    Utsuno, Yutaka; Otsuka, Takaharu; Mizusaki, Takahiro; Honma, Michio

    2006-11-02

    An extensive large-scale shell-model study is conducted for unstable nuclei around N = 20 and N = 28, aiming to investigate how the shell structure evolves from stable to unstable nuclei and affects the nuclear structure. The structure around N = 20 including the disappearance of the magic number is reproduced systematically, exemplified in the systematics of the electromagnetic moments in the Na isotope chain. As a key ingredient dominating the structure/shell evolution in the exotic nuclei from a general viewpoint, we pay attention to the tensor force. Including a proper strength of the tensor force in the effective interaction, we successfully reproduce the proton shell evolution ranging from N = 20 to 28 without any arbitrary modifications in the interaction and predict the ground state of 42Si to contain a large deformed component.

  1. Investigation of airframe noise for a large-scale wing model with high-lift devices

    NASA Astrophysics Data System (ADS)

    Kopiev, V. F.; Zaytsev, M. Yu.; Belyaev, I. V.

    2016-01-01

    The acoustic characteristics of a large-scale model of a wing with high-lift devices in the landing configuration have been studied in the DNW-NWB wind tunnel with an anechoic test section. For the first time in domestic practice, data on airframe noise at high Reynolds numbers (1.1-1.8 × 10^6) have been obtained, which can be used for assessment of wing noise levels in aircraft certification tests. The scaling factor for recalculating the measurement results to natural conditions has been determined from the condition of collapsing the dimensionless noise spectra obtained at various flow velocities. The beamforming technique has been used to localize noise sources and rank them with respect to intensity. For flap side-edge noise, which is an important noise component, a noise reduction method has been proposed. The efficiency of this method has been confirmed in the DNW-NWB experiments.

  2. Excavating the Genome: Large Scale Mutagenesis Screening for the Discovery of New Mouse Models

    PubMed Central

    Sundberg, John P.; Dadras, Soheil S.; Silva, Kathleen A.; Kennedy, Victoria E.; Murray, Stephen A.; Denegre, James; Schofield, Paul N.; King, Lloyd E.; Wiles, Michael; Pratt, C. Herbert

    2016-01-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis. While not automated to the level of the physiological phenotyping, histopathology provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being developed. PMID:26551941

  3. Computational framework for modeling the dynamic evolution of large-scale multi-agent organizations

    NASA Astrophysics Data System (ADS)

    Lazar, Alina; Reynolds, Robert G.

    2002-07-01

    A multi-agent system model of the origins of an archaic state is developed. Agent interaction is mediated by a collection of rules. The rules are mined from a related large-scale database using two different techniques: one uses decision trees, while the other uses rough sets. The latter was used because the data-collection techniques were associated with a certain degree of uncertainty. The generation of the rough-set rules was guided by genetic algorithms. Since the rules mediate agent interaction, a rule set with fewer rules and conditionals to check makes scaling up the simulation easier. The results suggest that explicitly dealing with uncertainty in rule formation can produce simpler rules than ignoring that uncertainty in situations where uncertainty is a factor in the measurement process.

  4. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, and to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to slightly delay the time course of the outbreak without affecting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).
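
    A toy chain-binomial SIR model illustrates the kind of experiment involved (our sketch, not the individual-based model of the paper; note that simply scaling the contact rate, as below, changes the final attack rate, whereas the paper finds that pure travel restriction mainly delays the outbreak):

      import numpy as np

      def chain_binomial_sir(n=100_000, r0=1.8, t_inf=4.0, scale=1.0, dt=0.5, seed=0):
          """Returns (final attack rate, time of peak prevalence); `scale` < 1
          crudely mimics an intervention lowering the contact rate."""
          rng = np.random.default_rng(seed)
          beta = scale * r0 / t_inf
          S, I, R, t, peak_i, peak_t = n - 10, 10, 0, 0.0, 0, 0.0
          while I > 0:
              new_i = rng.binomial(S, 1.0 - np.exp(-beta * I / n * dt))
              new_r = rng.binomial(I, 1.0 - np.exp(-dt / t_inf))
              S, I, R, t = S - new_i, I + new_i - new_r, R + new_r, t + dt
              if I > peak_i:
                  peak_i, peak_t = I, t
          return R / n, peak_t

      for s in (1.0, 0.8, 0.6):
          ar, tp = chain_binomial_sir(scale=s)
          print(f"contact scale {s}: attack rate {ar:.2f}, peak near day {tp:.0f}")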

  5. Polarization predictions for cosmological models with large-scale power modulation

    NASA Astrophysics Data System (ADS)

    Bunn, Emory F.; Xue, Qingyang

    2016-01-01

    Several "anomalies" have been noted on large angular scales in maps of the cosmic microwave background (CMB) radiation, although the statistical significance of these anomalies is hotly debated. Of particular interest is the evidence for large-scale power modulation: the variance in one half of the sky is larger than the other half. Either this variation is a mere fluke, or it requires a major revision of the standard cosmological paradigm. The way to determine which is the case is to make predictions for future data sets, based on the hypothesis that the anomaly is meaningful and on the hypothesis that it is a fluke. We make predictions for the CMB polarization anisotropy based on a cosmological model in which statistical isotropy is broken via coupling with a dipolar modulation field. Our predictions are constrained to match the observed Planck temperature variations. We identify the modes in CMB polarization data that most strongly distinguish between the modulation and no-modulation hypotheses.
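
    A common way to write the dipolar modulation hypothesis (a standard parametrization in this literature, not necessarily the paper's exact model) is

      \Delta T(\hat{n}) = \left[ 1 + A\, \hat{p} \cdot \hat{n} \right] \Delta T_{\rm iso}(\hat{n}),

    where ΔT_iso is a statistically isotropic field, A the modulation amplitude and p̂ the preferred direction; polarization predictions follow from asking how the same modulation field couples to the E-mode sky.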

  6. Large-scale shell model study of the newly found isomer in 136La

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Nishibata, H.; Odahara, A.; Shimoda, T.

    2016-07-01

    The doubly odd nucleus 136La is studied theoretically in terms of a large-scale shell model. The energy spectrum and transition rates are calculated and compared with the latest experimental data. The isomerism is investigated for the first 14+ state, which was found to be an isomer in a previous study [Phys. Rev. C 91, 054305 (2015), 10.1103/PhysRevC.91.054305]. It is found that the 14+ state becomes an isomer due to a band crossing of two bands with completely different configurations. The yrast band with the (νh11/2^-1 ⊗ πh11/2) configuration is investigated, revealing a staggering pattern in the M1 transition rates.

  7. Large-scale Modeling of the Entry and Acceleration of Ions at the Magnetospheric Boundary

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Richard, R. L.; Escoubet, C. P.; Pitout, F.

    2011-12-01

    We present the results of large-scale simulations of the entry and acceleration of ions at the magnetospheric boundary. The study is based on multipoint observations made during consecutive crossings of the cusps by the Cluster spacecraft. First, we use three-dimensional magnetohydrodynamic (MHD) simulations to follow the evolution of the global topology of the dayside magnetospheric boundary during the events. Subsequently, the time-dependent electric and magnetic fields predicted by the MHD simulations are utilized to compute the trajectories of large samples of solar wind ions launched upstream of the bow shock. We assess the results of the model by comparing Cluster ion measurements with ion dispersions calculated from the simulations along the spacecraft trajectories and discuss the temporal evolution and spatial distribution of precipitating particles in the context of the reconnection process at the dayside magnetopause.

  8. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    NASA Astrophysics Data System (ADS)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium-size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30,000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate the spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task: it requires overcoming the drawbacks of existing modelling schemes (devised for smaller spatial scales) while retaining their desirable properties. In this study, we critically summarize the most widely used approaches to rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings into these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale and Shape (GAMLSS).

  9. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation, these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, facilitating concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigation and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors.

  10. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning, and the emergency evacuation of large commercial shopping areas, as typical service systems, is an active research topic. A systematic methodology based on cellular automata with a dynamic floor field and an event-driven model has been proposed, and the methodology has been examined in a case study involving evacuation from a commercial shopping mall. Pedestrian movement is based on cellular automata combined with an event-driven model; the simulation process is divided into a normal situation and an emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer, and a trajectory layer. To simulate the movement routes of pedestrians, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on this combination of cellular automata with a dynamic floor field and event-driven scheduling, the model can reflect the behavioral characteristics of customers and clerks in both normal situations and emergency evacuations. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the model can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
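
    A minimal static floor-field CA step, in which each pedestrian hops to an empty neighbouring cell with probability weight exp(k_S S), illustrates the approach (our sketch; the paper's model adds a dynamic floor field, an event-driven layer, and distinct customer/clerk roles):

      import numpy as np

      rng = np.random.default_rng(0)

      def step(occ, S, k_s=2.0):
          """One sequential update of a minimal static floor-field CA."""
          H, W = occ.shape
          for y, x in np.argwhere(occ):
              moves = [(y + dy, x + dx)
                       for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                       if 0 <= y + dy < H and 0 <= x + dx < W
                       and not occ[y + dy, x + dx]]
              if not moves:
                  continue
              w = np.array([np.exp(k_s * S[m]) for m in moves])
              ny, nx = moves[rng.choice(len(moves), p=w / w.sum())]
              occ[y, x], occ[ny, nx] = False, True
          return occ

      H, W = 20, 20
      ys, xs = np.mgrid[0:H, 0:W]
      S = -np.hypot(ys - 0.0, xs - 10.0)    # static field: distance to exit at (0, 10)
      occ = rng.random((H, W)) < 0.1        # 10% initial occupancy
      for _ in range(50):
          occ = step(occ, S)                # pedestrians drift toward the exit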

  11. Modeling the effects of large scale turbulence in the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Kaplan, Elliot; Clark, Mike; Rahbarnia, Kian; Nornberg, Mark; Taylor, Zane; Rasmus, Alex; Forest, Cary; Spence, Erik

    2011-10-01

    Early experiments in the Madison Dynamo Experiment (MDE) demonstrated the existence of electric currents which correspond to the α and β effects of mean-field MHD, i.e., currents driven parallel to B and turbulent resistivity, respectively. A magnetic dipole moment was measured parallel to the symmetry axis of the flow (α), and the induced toroidal field was less than half what would be expected from the mean flow (β). Traditionally, mean-field theory requires a large separation in scale between the mean magnetic field and turbulent eddies in the conductive medium. However, the recent campaign on the MDE eliminated these effects when a baffle was added to suppress the largest-scale turbulent eddies. A model is presented that builds α- and β-like effects from these large-scale eddies without any assumption of scale separation.

  12. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-11-01

    The scientific initiative Prediction in Ungauged Basins (PUB) (2003-2012, by the IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines for making predictions in catchments without observed runoff data. At present, there is increased interest in applying catchment models to large domains and large data samples in a multi-basin manner, to explore emerging spatial patterns or learn from comparative hydrology. However, such modelling involves additional sources of uncertainty caused by inconsistencies between input data sets, particularly regional and global databases. This may lead to inaccurate model parameterisation and erroneous process understanding. In order to bridge the gap between the best practices for flow predictions in single catchments and in large-scale multi-basin modelling, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). Using examples from a recent HYPE (Hydrological Predictions for the Environment) hydrological model set-up across 6000 subbasins for the Indian subcontinent, named India-HYPE v1.0, we explore the PUB recommendations, identify challenges and recommend ways to overcome them. We describe the work process related to (a) errors and inconsistencies in global databases, unknown human impacts, and poor data quality; (b) robust approaches to identify model parameters using a stepwise calibration approach, remote sensing data, expert knowledge, and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that, despite the strong physiographic gradient over the subcontinent, a single model can describe the spatial variability in dominant hydrological processes.

  13. Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement

    ERIC Educational Resources Information Center

    Zheng, Xiaohui

    2009-01-01

    The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…

  14. Multi-variate spatial explicit constraining of a large scale hydrological model

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    Increased availability and quality of near-real-time data should lead to a better understanding of the predictive skill of distributed hydrological models. Nevertheless, prediction of regional-scale water fluxes and states remains a great challenge for the scientific community. Large-scale hydrological models are used for prediction of soil moisture, evapotranspiration and other related water states and fluxes. They are usually constrained against river discharge, which is an integral variable. Rakovec et al. (2016) recently demonstrated that constraining model parameters against river discharge is a necessary but not sufficient condition. Therefore, we further aim at scrutinizing the appropriate incorporation of readily available information into a hydrological model that may help to improve the realism of hydrological processes. It is important to analyze how complementary data sets, besides observed streamflow and related signature measures, can improve the model skill of internal model variables during parameter estimation. Among the products suitable for further scrutiny are, for example, the GRACE satellite observations. Recent developments in using this data set in a multivariate fashion to complement traditionally used streamflow data within the distributed model mHM (www.ufz.de/mhm) are presented. The study domain consists of 80 European basins, which cover a wide range of distinct physiographic and hydrologic regimes. A first-order data quality check ensures that heavily human-influenced basins are eliminated. For river discharge simulations we show that model performance remains unchanged when complemented by information from the GRACE product (at both daily and monthly time steps). Moreover, the GRACE complementary data lead to consistent and statistically significant improvements in evapotranspiration estimates, which are evaluated using an independent gridded FLUXNET product.
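
    As a generic illustration of multivariate constraining (a sketch of the idea, not mHM's actual objective function), a discharge metric such as the Kling-Gupta efficiency can be weighted against a term rewarding agreement with GRACE-like total water storage anomalies:

      import numpy as np

      def kge(sim, obs):
          """Kling-Gupta efficiency: 1 is perfect; combines correlation,
          variability ratio and bias ratio."""
          r = np.corrcoef(sim, obs)[0, 1]
          alpha = sim.std() / obs.std()
          beta = sim.mean() / obs.mean()
          return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

      def multi_objective(q_sim, q_obs, tws_sim, tws_obs, w=0.5):
          """Hypothetical combined objective: discharge KGE weighted against the
          correlation of simulated and observed storage anomalies."""
          return w * kge(q_sim, q_obs) + (1 - w) * np.corrcoef(tws_sim, tws_obs)[0, 1]

    The weight w and the use of plain correlation for the storage term are arbitrary choices here; the point is only that the parameter search then has to honour both variables at once.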

  15. Vertical Distributions of Sulfur Species Simulated by Large Scale Atmospheric Models in COSAM: Comparison with Observations

    SciTech Connect

    Lohmann, U.; Leaitch, W. R.; Barrie, Leonard A.; Law, K.; Yi, Y.; Bergmann, D.; Bridgeman, C.; Chin, M.; Christensen, J.; Easter, Richard C.; Feichter, J.; Jeuken, A.; Kjellstrom, E.; Koch, D.; Land, C.; Rasch, P.; Roelofs, G.-J.

    2001-11-01

    A comparison of large-scale models simulating atmospheric sulfate aerosols (COSAM) was conducted to increase our understanding of global distributions of sulfate aerosols and precursors. Earlier model comparisons focused on wet deposition measurements and sulfate aerosol concentrations in source regions at the surface. They found that different models simulated the observed sulfate surface concentrations mostly within a factor of two, but that the simulated column burdens and vertical profiles were very different amongst different models. In the COSAM exercise, one aspect is the comparison of sulfate aerosol and precursor gases above the surface. Vertical profiles of SO2, SO4^2-, oxidants and cloud properties were measured by aircraft during the North Atlantic Regional Experiment (NARE) in August/September 1993 off the coast of Nova Scotia and during the Second Eulerian Model Evaluation Field Study (EMEFS II) in central Ontario in March/April 1990. While no single model stands out as being best or worst, the general tendency is that those models simulating the full oxidant chemistry tend to agree best with observations, although differences in transport and treatment of clouds are important as well.

  16. Static proppant-settling characteristics of non-Newtonian fracturing fluids in a large-scale test model

    SciTech Connect

    McMechan, D.E.; Shah, S.N. )

    1991-08-01

    Large-scale testing of the settling behavior of proppants in fracturing fluids was conducted with a slot configuration to realistically model the conditions observed in a hydraulic fracture. The test apparatus consists of a 1/2 × 8-in. (1.3 × 20.3-cm) rectangular slot 14 1/2 ft (4.4 m) high, faced with Plexiglas and equipped with pressure taps at 1-ft (0.3-m) intervals. This configuration allows both qualitative visual observations and quantitative density measurements for calculation of proppant concentrations and settling velocities. In this paper, the authors examine uncrosslinked hydroxypropyl guar (HPG) and hydroxyethylcellulose (HEC) fluids, as well as crosslinked guar, HPG, and carboxymethyl HPG (CMHPG) systems. Sand loadings of 2 to 15 lbm/gal (240 to 1,797 kg/m^3) (3 to 40 vol% solids) were tested. Experimental results were compared with the predictions of existing particle-settling models for a 40-lbm/1,000-gal (4.8-kg/m^3) HPG fluid system.
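
    The Newtonian reference point against which such slot data are usually compared is Stokes settling with a Richardson-Zaki hindered-settling correction (our illustration with nominal values; the crosslinked, non-Newtonian fluids tested here are precisely where this baseline breaks down):

      def settling_velocity(d_p, rho_p, rho_f, mu, c, n=4.65):
          """Stokes settling velocity reduced by the Richardson-Zaki factor
          (1 - c)^n; n ~ 4.65 applies in creeping flow. Illustration only."""
          g = 9.81
          v_stokes = g * (rho_p - rho_f) * d_p ** 2 / (18.0 * mu)
          return v_stokes * (1.0 - c) ** n

      # 20/40-mesh sand (~0.6 mm) in a 100-cP fluid at 20 vol% solids (nominal)
      print(settling_velocity(6e-4, 2650.0, 1000.0, 0.1, 0.20))  # m/s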

  17. Representations of the Nordic Seas overflows and their large scale climate impact in coupled models

    NASA Astrophysics Data System (ADS)

    Wang, He; Legg, Sonya A.; Hallberg, Robert W.

    2015-02-01

    The sensitivity of large scale ocean circulation and climate to overflow representation is studied using coupled climate models, motivated by the differences between two models differing only in their ocean components: CM2G (which uses an isopycnal-coordinate ocean model) and CM2M (which uses a z-coordinate ocean model). Analysis of the control simulations of the two models shows that the Atlantic Meridional Overturning Circulation (AMOC) and the North Atlantic climate have some differences, which may be related to the representation of overflow processes. Firstly, in CM2G, as in the real world, overflows have two branches flowing out of the Nordic Seas, to the east and west of Iceland, respectively, while only the western branch is present in CM2M. This difference in overflow location results in different horizontal circulation in the North Atlantic. Secondly, the diapycnal mixing in the overflow downstream region is much larger in CM2M than in CM2G, which affects the entrainment and product water properties. Two sensitivity experiments are conducted in CM2G to isolate the effect of these two model differences: in the first experiment, the outlet of the eastern branch of the overflow is blocked, and the North Atlantic horizontal circulation is modified due to the absence of the eastern branch of the overflow, although the AMOC has little change; in the second experiment, the diapycnal mixing downstream of the overflow is enhanced, resulting in changes in the structure and magnitude of the AMOC.

  18. Analyzing the prediction error of large scale Vis-NIR spectroscopic models

    NASA Astrophysics Data System (ADS)

    Stevens, Antoine; Nocita, Marco; Montanarella, Luca; van Wesemael, Bas

    2013-04-01

    Based on the LUCAS soil spectral library (~20,000 samples distributed over 23 EU countries), we developed multivariate calibration models (model trees) for estimating the SOC content from visible and near-infrared reflectance (Vis-NIR) spectra. The root mean square error of validation of these models ranged from 4 to 15 g C kg^-1. Prediction accuracy is usually negatively related to sample heterogeneity in a given library, so that large-scale databases typically demonstrate low prediction accuracy compared with local-scale studies. This is inherent to the empirical nature of the approach, which cannot accommodate well the changing and scale-dependent relationship between Vis-NIR spectra and soil properties. In our study, we analyzed the effect of key soil properties and environmental covariates (land cover) on the SOC prediction accuracy of the spectroscopic models. It is shown that mineralogy as well as soil texture have large impacts on prediction accuracy, and that pedogenetic factors, easily obtainable if the samples are geo-referenced, can be used as inputs to the spectroscopic models to improve model accuracy.

  19. Non-intrusive Ensemble Kalman filtering for large scale geophysical models

    NASA Astrophysics Data System (ADS)

    Amour, Idrissa; Kauranne, Tuomo

    2016-04-01

    Advanced data assimilation techniques, such as variational assimilation methods, often present challenging implementation issues for large-scale models, both because of computational complexity and because of complexity of implementation. We present a non-intrusive wrapper library that addresses this problem by isolating the direct model and the linear algebra employed in data assimilation from each other completely. In this approach we have adopted a hybrid Variational Ensemble Kalman filter that combines ensemble propagation with a 3DVAR analysis stage. The inverse problem of state and covariance propagation from prior to posterior estimates is thereby turned into a time-independent problem. This feature allows the linear algebra and minimization steps required in the variational stage to be conducted outside the direct model, and no tangent linear or adjoint codes are required. Communication between the model and the assimilation module is conducted exclusively via the model's standard input and output files. This non-intrusive approach is tested with the comprehensive 3D lake and shallow-sea model COHERENS, which is used to forecast and assimilate turbidity in Lake Säkylän Pyhäjärvi in Finland, using both sparse satellite images and continuous real-time point measurements as observations.
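
    For orientation, the textbook stochastic EnKF analysis update that such a wrapper must orchestrate is compact (a generic sketch, not the library's API; the paper's hybrid scheme adds a 3DVAR analysis stage on top of the ensemble propagation):

      import numpy as np

      def enkf_analysis(X, y, H, r_var, rng):
          """Stochastic EnKF update. X: (n_state, n_ens) forecast ensemble;
          y: observation vector; H: observation operator; r_var: obs variance."""
          n_obs, n_ens = len(y), X.shape[1]
          Xp = X - X.mean(axis=1, keepdims=True)            # ensemble perturbations
          S = H @ Xp                                        # in observation space
          C = S @ S.T / (n_ens - 1) + r_var * np.eye(n_obs)
          K = (Xp @ S.T / (n_ens - 1)) @ np.linalg.inv(C)   # Kalman gain
          Y = y[:, None] + rng.normal(0.0, np.sqrt(r_var), (n_obs, n_ens))
          return X + K @ (Y - H @ X)                        # analysis ensemble

      rng = np.random.default_rng(0)
      X = rng.normal(10.0, 2.0, (3, 50))      # toy 3-variable state, 50 members
      H = np.array([[1.0, 0.0, 0.0]])         # observe the first variable only
      Xa = enkf_analysis(X, np.array([12.0]), H, r_var=0.5, rng=rng)
      print(X.mean(axis=1), Xa.mean(axis=1))  # mean pulled toward the observation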

  20. A modelling case study of a large-scale cirrus in the tropical tropopause layer

    NASA Astrophysics Data System (ADS)

    Podglajen, Aurélien; Plougonven, Riwal; Hertzog, Albert; Legras, Bernard

    2016-03-01

    We use the Weather Research and Forecast (WRF) model to simulate a large-scale tropical tropopause layer (TTL) cirrus in order to understand the formation and life cycle of the cloud. This cirrus event has been previously described through satellite observations by Taylor et al. (2011). Comparisons of the simulated and observed cirrus show a fair agreement and validate the reference simulation regarding cloud extension, location and life time. The validated simulation is used to understand the causes of cloud formation. It is shown that several cirrus clouds successively form in the region due to adiabatic cooling and large-scale uplift rather than from convective anvils. The structure of the uplift is tied to the equatorial response (equatorial wave excitation) to a potential vorticity intrusion from the midlatitudes. Sensitivity tests are then performed to assess the relative importance of the choice of the microphysics parameterization and of the initial and boundary conditions. The initial dynamical conditions (wind and temperature) essentially control the horizontal location and area of the cloud. However, the choice of the microphysics scheme influences the ice water content and the cloud vertical position. Last, the fair agreement with the observations allows us to estimate the cloud impact in the TTL in the simulations. The cirrus clouds have a small but not negligible impact on the radiative budget of the local TTL. However, for this particular case, the cloud radiative heating does not significantly influence the simulated dynamics. This result is due to (1) the lifetime of air parcels in the cloud system, which is too short to significantly influence the dynamics, and (2) the fact that induced vertical motions would be comparable to or smaller than the typical mesoscale motions present. Finally, the simulation also provides an estimate of the vertical redistribution of water by the cloud, and the results emphasize the importance in our case of both re- and dehydration in the vicinity of the cirrus.

  1. A modelling case study of a large-scale cirrus in the tropical tropopause layer

    NASA Astrophysics Data System (ADS)

    Podglajen, A.; Plougonven, R.; Hertzog, A.; Legras, B.

    2015-11-01

    We use the Weather Research and Forecast (WRF) model to simulate a large-scale tropical tropopause layer (TTL) cirrus, in order to understand the formation and life cycle of the cloud. This cirrus event has been previously described through satellite observations by Taylor et al. (2011). Comparisons of the simulated and observed cirrus show a fair agreement and validate the reference simulation regarding cloud extension, location and life time. The validated simulation is used to understand the causes of cloud formation. It is shown that several cirrus clouds successively form in the region due to adiabatic cooling and large-scale uplift rather than from ice lofting from convective anvils. The equatorial response (equatorial wave excitation) to a midlatitude potential vorticity (PV) intrusion structures the uplift. Sensitivity tests are then performed to assess the relative importance of the choice of the microphysics parametrisation and of the initial and boundary conditions. The initial dynamical conditions (wind and temperature) essentially control the horizontal location and area of the cloud. On the other hand, the choice of the microphysics scheme influences the ice water content and the cloud vertical position. Last, the fair agreement with the observations allows us to estimate the cloud impact in the TTL in the simulations. The cirrus clouds have a small but not negligible impact on the radiative budget of the local TTL. However, the cloud radiative heating does not significantly influence the simulated dynamics. The simulation also provides an estimate of the vertical redistribution of water by the cloud, and the results emphasize the importance in our case of both re- and dehydration in the vicinity of the cirrus.

  2. Development of a realistic human airway model.

    PubMed

    Lizal, Frantisek; Elcner, Jakub; Hopke, Philip K; Jedelsky, Jan; Jicha, Miroslav

    2012-03-01

    Numerous models of human lungs with various levels of idealization have been reported in the literature; consequently, results acquired using these models are difficult to compare to in vivo measurements. We have developed a set of model components based on realistic geometries, which permits the analysis of the effects of subsequent model simplification. A realistic digital upper airway geometry, lacking only an oral cavity, was created, which proved suitable both for computational fluid dynamics (CFD) simulations and for the fabrication of physical models. Subsequently, an oral cavity was added to the tracheobronchial geometry. The airway geometry including the oral cavity was adjusted to enable fabrication of a semi-realistic model. Five physical models were created based on these three digital geometries. Two optically transparent models, one with and one without the oral cavity, were constructed for flow velocity measurements; two realistic segmented models, one with and one without the oral cavity, were constructed for particle deposition measurements; and a semi-realistic model with cylindrical glass airways was developed for optical measurements of flow velocity and in situ particle size measurements. One-dimensional phase Doppler anemometry measurements were made and compared to the CFD calculations for this model, and good agreement was obtained. PMID:22558834

  3. Estimating the impact of satellite observations on the predictability of large-scale hydraulic models

    NASA Astrophysics Data System (ADS)

    Andreadis, Konstantinos M.; Schumann, Guy J.-P.

    2014-11-01

    Large-scale hydraulic models are able to predict flood characteristics and are being used in forecasting applications. In this work, the potential value of satellite observations for initializing hydraulic forecasts is explored using the Ensemble Sensitivity method. The impact estimation is based on the Local Ensemble Transform Kalman Filter, allowing the forecast error reductions to be computed without additional model runs. The experimental design consisted of two configurations of the LISFLOOD-FP model over the Ohio River basin: a baseline simulation representing a 'best effort' model using observations for parameters and boundary conditions, and a second simulation with erroneous parameters and boundary conditions. Results showed that the forecast skill was improved for water heights up to lead times of 11 days (error reductions ranged from 0.2 to 0.6 m/km), while even partial observations of the river contained information for the entire river's water surface profile and allowed forecasting 5 to 7 days ahead. Moreover, water height observations had a negative impact on discharge forecasts for longer lead times, although they did improve forecast skill for 1 and 3 days (up to 60 m^3/s/km). Lastly, the inundated area forecast errors were reduced overall for all examined lead times. However, when examining a specific flood event, the limitations of predictability were revealed, suggesting that model errors or inflows were more important than initial conditions.

  4. A Comparison of Large-Scale Atmospheric Sulphate Aerosol Models (COSAM): Overview and Highlights

    SciTech Connect

    Barrie, Leonard A.; Yi, Y.; Leaitch, W. R.; Lohmann, U.; Kasibhatla, P.; Roelofs, G.-J.; Wilson, J.; Mcgovern, F.; Benkovitz, C.; Melieres, M. A.; Law, K.; Prospero, J.; Kritz, M.; Bergmann, D.; Bridgeman, C.; Chin, M.; Christiansen, J.; Easter, Richard C.; Feichter, J.; Land, C.; Jeuken, A.; Kjellstrom, E.; Koch, D.; Rasch, P.

    2001-11-01

    The comparison of large-scale sulphate aerosol models study (COSAM) compared the performance of atmospheric models with each other and observations. It involved: (i) design of a standard model experiment for the world wide web, (ii) 10 model simulations of the cycles of sulphur and 222Rn/210Pb conforming to the experimental design, (iii) assemblage of the best available observations of atmospheric SO4=, SO2 and MSA and (iv) a workshop in Halifax, Canada to analyze model performance and future model development needs. The analysis presented in this paper and two companion papers by Roelofs, and Lohmann and co-workers examines the variance between models and observations, discusses the sources of that variance and suggests ways to improve models. Variations between models in the export of SOx from Europe or North America are not sufficient to explain an order of magnitude variation in spatial distributions of SOx downwind in the northern hemisphere. On average, models predicted surface level seasonal mean SO4= aerosol mixing ratios better (most within 20%) than SO2 mixing ratios (over-prediction by factors of 2 or more). Results suggest that vertical mixing from the planetary boundary layer into the free troposphere in source regions is a major source of uncertainty in predicting the global distribution of SO4= aerosols in climate models today. For improvement, it is essential that globally coordinated research efforts continue to address emissions of all atmospheric species that affect the distribution and optical properties of ambient aerosols in models and that a global network of observations be established that will ultimately produce a world aerosol chemistry climatology.

  5. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist

    PubMed Central

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-01-01

    …help guide managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists. PMID:24340177
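
    The record above names the technique but not its details; as a minimal sketch of what a weighted-ensemble prediction looks like, the snippet below averages suitability maps from several hypothetical component models, weighting each by an assumed skill score (AUC rescaled so that no-skill models get zero weight). The maps and scores are invented.

        import numpy as np

        def ensemble_suitability(predictions, auc_scores):
            """Weighted-average ensemble of habitat-suitability maps, with each
            component model weighted by its AUC above the 0.5 no-skill level."""
            w = (np.asarray(auc_scores, dtype=float) - 0.5).clip(min=0)
            w = w / w.sum()
            return np.tensordot(w, np.asarray(predictions), axes=1)

        maps = [np.array([[0.2, 0.8], [0.5, 0.9]]),    # three hypothetical model outputs
                np.array([[0.3, 0.7], [0.4, 0.8]]),
                np.array([[0.1, 0.9], [0.6, 0.7]])]
        print(ensemble_suitability(maps, auc_scores=[0.91, 0.86, 0.78]))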

  6. Modelling high angle wave instability and the generation of large scale shoreline sand waves

    NASA Astrophysics Data System (ADS)

    van den Berg, Niels; Falqués, Albert; Ribas, Francesca

    2010-05-01

    Sandy coasts are dynamic systems, shaped by the continuous interaction between hydrodynamics and morphology. On large time and spatial scales it is commonly assumed that the diffusive action of alongshore wave-driven sediment transport dominates and maintains a stable and straight shoreline. Ashton et al. (2001), however, showed with a cellular model that for high-angle offshore wave incidence a coastline can be unstable and that shoreline sand waves can develop due to the feedback of shoreline changes into the wave field. These shoreline undulations can migrate and merge to form large-scale capes and spits. Falqués and Calvete (2005) confirmed the mechanism of shoreline instability and shoreline sand wave formation with a linear stability analysis. They found a typical wavelength in the range of 4-15 km and a characteristic growth time of a few years. Both studies, however, have their limitations. Ashton et al. (2001) assume rectilinear depth contours and an infinite cross-shore extent of shoreline changes in the bathymetry. The linear stability analysis by Falqués and Calvete (2005) can only be applied to small-amplitude shoreline changes. Both studies neglect cross-shore dynamics, as bathymetric changes associated with shoreline changes are assumed to be instantaneous. In the current study, a nonlinear morphodynamic model is used. In this model the bathymetric lines are curvilinear, and the cross-shore extent of shoreline changes in the bathymetry is dynamic due to the introduction of cross-shore dynamics. The cross-shore dynamics are parameterized by assuming a relaxation to an equilibrium cross-shore profile, controlled by a diffusivity that is proportional to wave energy dissipation. The new model is equivalent to N-lines models but applies sediment conservation like 2DH models instead of just moving contour lines. The main objective of this study is to extend the work of Falqués and Calvete (2005) and to study in more detail the mechanism of
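
    The instability mechanism can be made concrete with a small calculation. Linearizing a one-line shoreline model with alongshore transport Q = q0*sin(2*(phi - theta)) about a straight coast gives a diffusion equation for the shoreline position whose effective diffusivity changes sign at a 45-degree wave angle; the numbers below (transport amplitude, closure depth) are illustrative, not values from either study.

        import numpy as np

        def effective_diffusivity(q0, closure_depth, wave_angle_deg):
            """Linearized one-line model: for a nearly straight shoreline
            (theta ~ dy/dx small), dy/dt = D * d2y/dx2 with D as below.
            D < 0 (anti-diffusion, i.e., growing shoreline sand waves)
            once the offshore wave angle exceeds 45 degrees."""
            phi = np.radians(wave_angle_deg)
            return 2.0 * q0 * np.cos(2.0 * phi) / closure_depth

        for angle in (20, 40, 50, 70):
            d = effective_diffusivity(q0=1.0e5, closure_depth=10.0, wave_angle_deg=angle)
            print(f"phi = {angle:2d} deg: D = {d:+10.1f} m^2/yr ->",
                  "stable" if d > 0 else "unstable")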

  7. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    NASA Astrophysics Data System (ADS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the generation of the relic background of gravitational waves produced during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates when gravity becomes repulsive in nature.

  8. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    PubMed Central

    Llor, Jesús; Malumbres, Manuel Perez

    2013-01-01

    In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate a large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.). PMID:23396190
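
    A minimal version of the fitting step might look as follows: given an ensemble of transmission losses (here synthetic placeholders for the ray-traced values), fit the log-distance mean by least squares and take the residual spread as the constant variance. The reference distance, exponent, and noise level are invented for the sketch.

        import numpy as np

        # Model: TL(d) = TL0 + 10*n*log10(d/d0) + X, with X ~ N(0, sigma^2).
        rng = np.random.default_rng(1)
        d0 = 100.0
        dist = rng.uniform(200, 5000, size=500)          # inter-node distances (m)
        tl = 60 + 17 * np.log10(dist / d0) + rng.normal(0, 3.0, size=500)  # synthetic ensemble

        A = np.column_stack([np.ones_like(dist), 10 * np.log10(dist / d0)])
        (tl0_hat, n_hat), *_ = np.linalg.lstsq(A, tl, rcond=None)
        sigma_hat = np.std(tl - A @ np.array([tl0_hat, n_hat]))
        print(f"TL0 = {tl0_hat:.1f} dB, exponent n = {n_hat:.2f}, sigma = {sigma_hat:.2f} dB")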

  9. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    SciTech Connect

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case.
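
    The DUA idea reduces, at first order, to propagating parameter variances through model derivatives. The sketch below uses central finite differences in place of the compiler-generated derivative code and a trivial stand-in for the borehole flow model; the function and numbers are illustrative only.

        import numpy as np

        def dua_variance(f, p_mean, p_var, eps=1e-6):
            """First-order uncertainty propagation:
            Var(f) ~ sum_i (df/dp_i)^2 * Var(p_i)."""
            p_mean = np.asarray(p_mean, dtype=float)
            grad = np.empty_like(p_mean)
            for i in range(p_mean.size):
                dp = np.zeros_like(p_mean)
                dp[i] = eps * max(abs(p_mean[i]), 1.0)
                grad[i] = (f(p_mean + dp) - f(p_mean - dp)) / (2 * dp[i])
            return float(np.sum(grad**2 * np.asarray(p_var)))

        flow = lambda p: p[0] * p[1] * p[2]     # toy stand-in for the borehole model
        print("Var(Q) ~", dua_variance(flow, [2.0, 80.0, 10.0], [0.01, 25.0, 1.0]))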

  10. Statistical modeling of large-scale signal path loss in underwater acoustic networks.

    PubMed

    Llor, Jesús; Malumbres, Manuel Perez

    2013-01-01

    In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate a large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean obeys a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.). PMID:23396190

  11. Morphotectonic evolution of passive margins undergoing active surface processes: large-scale experiments using numerical models.

    NASA Astrophysics Data System (ADS)

    Beucher, Romain; Huismans, Ritske S.

    2016-04-01

    Extension of the continental lithosphere can lead to the formation of a wide range of rifted margin styles with contrasting tectonic and geomorphological characteristics. It is now understood that many of these characteristics depend on the manner in which extension is distributed, depending on (among other factors) rheology, structural inheritance, thermal structure and surface processes. The relative importance and the possible interactions of these controlling factors are still largely unknown. Here we investigate the feedbacks between tectonics and the transfer of material at the surface resulting from erosion, transport, and sedimentation. We use large-scale (1200 x 600 km) and high-resolution (~1 km) numerical experiments coupling a 2D upper-mantle-scale thermo-mechanical model with a plan-form 2D surface processes model (SPM). We test the sensitivity of the coupled models to varying crust-lithosphere rheology and to erosional efficiency ranging from no erosion to very efficient erosion. We discuss how fast, when, and how the topography of the continents evolves and how it compares to actual passive-margin escarpment morphologies. We show that although tectonics is the main factor controlling the rift geometry, transfers of mass at the surface affect the timing of faulting and the initiation of sea-floor spreading. We discuss how such models may help us understand the evolution of high-elevation passive margins around the world.

  12. Modelling potential changes in marine biogeochemistry due to large-scale offshore wind farms

    NASA Astrophysics Data System (ADS)

    van der Molen, Johan; Rees, Jon; Limpenny, Sian

    2013-04-01

    Large-scale renewable energy generation by offshore wind farms may lead to changes in marine ecosystem processes through the following mechanism: 1) wind-energy extraction leads to a reduction in local surface wind speeds; 2) this leads to a reduction in local wind-wave height; 3) as a consequence there is a reduction in SPM resuspension and concentrations; 4) this results in an improvement in the under-water light regime, which 5) may lead to increased primary production, which subsequently 6) cascades through the ecosystem. A three-dimensional coupled hydrodynamics-biogeochemistry model (GETM_ERSEM) was used to investigate this process for a hypothetical wind farm in the central North Sea, by running a reference scenario and a scenario with a 10% reduction (as was found in a case study of a small farm in Danish waters) in surface wind velocities in the area of the wind farm. The ERSEM model included both pelagic and benthic processes. The results showed that, within the farm area, the physical mechanisms were as expected, but the magnitude of the response varied between ecosystem variables and exchange rates (3-28%, depending on the variable or rate). Benthic variables tended to be more sensitive to the changes than pelagic variables. Reduced but noticeable changes also occurred for some variables in a region of up to two farm diameters surrounding the wind farm. An additional model run, in which the 10% reduction in surface wind speed was applied only for wind speeds below the generally used operational shut-down threshold of 25 m/s, showed only minor differences from the run in which all wind speeds were reduced. These first results indicate that there is potential for measurable effects of large-scale offshore wind farms on the marine ecosystem, mainly within the farm but for some variables up to two farm diameters away. However, the wave and SPM parameterisations currently used in the model are crude and need to be

  13. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model.

    PubMed

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s in real-world time, with a temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aiming to study the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain. PMID:26973472

  14. A method to search for large-scale concavities in asteroid shape models

    NASA Astrophysics Data System (ADS)

    Devogèle, M.; Rivet, J. P.; Tanga, P.; Bendjoya, Ph.; Surdej, J.; Bartczak, P.; Hanus, J.

    2015-11-01

    Photometric light-curve inversion of minor planets has proven to produce a unique model solution only under the hypothesis that the asteroid is convex. However, it has been suggested that the resulting shape model, in the case of a non-convex asteroid, is the convex hull of the true non-convex shape. While a convex shape is already useful for providing the overall aspect of the target, much information about the real shape is missed, as we know that asteroids are very irregular. It is commonly accepted that large flat areas sometimes appearing on shapes derived from light curves correspond to concave areas, but this information has not been further explored and exploited so far. We present in this paper a method that allows one to predict the presence of concavities from such flat regions. The method analyses the distribution of the local normals to the facets composing a shape model in order to detect abnormally large flat surfaces. To test our approach, we consider its application to a large family of synthetic asteroid shapes, and to real asteroids with large-scale concavities whose detailed shapes are known from other kinds of observations (radar and spacecraft encounters). The proposed method has proven to be reliable and capable of providing a qualitative indication of the relevance of concavities on well-constrained asteroid shapes derived from purely photometric data sets.
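
    A bare-bones version of the normal-distribution analysis might group facets by near-parallel outward normals and flag direction clusters whose summed area is abnormally large. The thresholds and the cube used in the demonstration are arbitrary choices, not the paper's calibrated values.

        import numpy as np

        def flag_flat_areas(vertices, faces, cos_tol=0.999, area_frac=0.05):
            """Flag clusters of facets with nearly parallel normals whose total
            area exceeds a set fraction of the model's surface area."""
            v, f = np.asarray(vertices), np.asarray(faces)
            cross = np.cross(v[f[:, 1]] - v[f[:, 0]], v[f[:, 2]] - v[f[:, 0]])
            area = 0.5 * np.linalg.norm(cross, axis=1)
            normal = cross / (2 * area)[:, None]            # unit outward normals
            total, used, flags = area.sum(), np.zeros(len(f), bool), []
            for i in np.argsort(-area):                     # seed with largest facets
                if used[i]:
                    continue
                close = (normal @ normal[i] > cos_tol) & ~used
                used |= close
                if area[close].sum() > area_frac * total:
                    flags.append((normal[i], area[close].sum() / total))
            return flags

        V = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                      [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
        F = [[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7], [0, 1, 5], [0, 5, 4],
             [1, 2, 6], [1, 6, 5], [2, 3, 7], [2, 7, 6], [3, 0, 4], [3, 4, 7]]
        print(len(flag_flat_areas(V, F, area_frac=0.1)), "flat directions flagged")  # a cube: 6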

  15. A multigrid integral equation method for large-scale models with inhomogeneous backgrounds

    NASA Astrophysics Data System (ADS)

    Endo, Masashi; Čuma, Martin; Zhdanov, Michael S.

    2008-12-01

    We present a multigrid integral equation (IE) method for three-dimensional (3D) electromagnetic (EM) field computations in large-scale models with inhomogeneous background conductivity (IBC). This method combines the advantages of the iterative IBC IE method and the multigrid quasi-linear (MGQL) approximation. The new EM modelling method solves the corresponding systems of linear equations within the domains of anomalous conductivity, Da, and inhomogeneous background conductivity, Db, separately on coarse grids. The observed EM fields in the receivers are computed using grids with fine discretization. The developed MGQL IBC IE method can also be applied iteratively by taking into account the return effect of the anomalous field inside the domain of the background inhomogeneity Db, and vice versa. The iterative process described above is continued until we reach the required accuracy of the EM field calculations in both domains, Da and Db. The method was tested for modelling the marine controlled-source electromagnetic field for complex geoelectrical structures with hydrocarbon petroleum reservoirs and a rough sea-bottom bathymetry.

  16. Large scale cratering of the lunar highlands - Some Monte Carlo model considerations

    NASA Technical Reports Server (NTRS)

    Hoerz, F.; Gibbons, R. V.; Hill, R. E.; Gault, D. E.

    1976-01-01

    In an attempt to understand the scale and intensity of the moon's early, large-scale meteoritic bombardment, a Monte Carlo computer model simulated the effects of all lunar craters greater than 800 m in diameter, for example, the number of times specific fractions of the entire lunar surface were cratered and to what depths. The model used observed crater size frequencies and crater geometries compatible with the suggestions of Pike (1974) and Dence (1973); it simulated bombardment histories up to a factor of 10 more intense than those reflected by the present-day crater number density of the lunar highlands. For the present-day cratering record the model yields the following: approximately 25% of the entire lunar surface has not been cratered deeper than 100 m; 50% may have been cratered to 2-3 km depth; less than 5% of the surface has been cratered deeper than about 15 km. A typical highland site has suffered 1-2 impacts. Corresponding values for more intense bombardment histories are also presented, though it must remain uncertain what the absolute intensity of the moon's early meteorite bombardment was.
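
    The core of such a simulation fits in a few lines: draw crater diameters from a power-law size-frequency distribution, place them at random on a gridded surface, and track the deepest excavation each cell has experienced. The grid size, power-law slope, crater count, and depth-to-diameter ratio of 0.2 below are loose illustrative choices, not the paper's calibrated inputs.

        import numpy as np

        rng = np.random.default_rng(2)
        GRID = 128                                   # surface cells, 1 cell ~ 1 km assumed
        depth = np.zeros((GRID, GRID))               # deepest excavation per cell (km)

        b, d_min, n_craters = 2.0, 0.8, 5000
        diam = d_min * (rng.pareto(b, size=n_craters) + 1.0)   # power-law diameters (km)
        cx, cy = rng.integers(0, GRID, (2, n_craters))

        yy, xx = np.ogrid[:GRID, :GRID]
        for x0, y0, d in zip(cx, cy, diam):
            mask = (xx - x0)**2 + (yy - y0)**2 <= max(d / 2, 1.0)**2
            depth[mask] = np.maximum(depth[mask], 0.2 * d)     # depth/diameter ~ 0.2

        for thresh in (0.1, 2.0, 15.0):
            print(f"fraction cratered deeper than {thresh:4.1f} km:",
                  round(float((depth > thresh).mean()), 3))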

  17. Large-scale functional models of visual cortex for remote sensing

    SciTech Connect

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E; Swaminarayan, Sriram; Bettencourt, Luis; Landecker, Will

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massive opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  18. Influenza epidemic spread simulation for Poland — a large scale, individual based model study

    NASA Astrophysics Data System (ADS)

    Rakowski, Franciszek; Gruziel, Magdalena; Bieniasz-Krzywiec, Łukasz; Radomski, Jan P.

    2010-08-01

    In this work the construction of an agent-based model for studying the effects of an influenza epidemic in large-scale (38 million individuals) stochastic simulations, together with the resulting scenarios of disease spread in Poland, is reported. Simple transportation rules were employed to mimic individuals' travels in dynamic route-changing schemes, allowing for infection spread during a journey. Parameter space was checked for stable behaviour, especially with respect to variability in the effective infection transmission rate. Although the model reported here is based on quite simple assumptions, it allowed us to observe two different types of epidemic scenario, characteristic of urban and of rural areas. This differentiates it from the results obtained in analogous studies for the UK or US, where settlement and daily commuting patterns are both substantially different and more diverse. The epidemic scenarios from these ABM simulations were compared with simple SIR models based on differential equations, with both types of results displaying strong similarities. The pDYN software platform developed here is currently used in the next stage of the project to study various epidemic mitigation strategies.
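
    For context, the differential-equation baseline that such agent-based scenarios are compared against can be written in a few lines; the transmission and recovery rates below are illustrative, not fitted values from the study.

        import numpy as np

        def sir(beta, gamma, n_pop, i0, days, dt=0.1):
            """Forward-Euler integration of the classic SIR equations,
            returning the daily count of infectious individuals."""
            s, i, r, out = n_pop - i0, float(i0), 0.0, []
            for step in range(int(days / dt)):
                new_inf = beta * s * i / n_pop * dt
                new_rec = gamma * i * dt
                s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
                if step % int(1 / dt) == 0:
                    out.append(i)
            return np.array(out)

        curve = sir(beta=0.36, gamma=0.2, n_pop=38e6, i0=100, days=300)  # R0 = 1.8
        print("epidemic peak: day", int(curve.argmax()), "with", int(curve.max()), "infectious")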

  19. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    SciTech Connect

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J.

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study was used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.

  20. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model

    PubMed Central

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s in real-world time, with a temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aiming to study the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain. PMID:26973472

  1. Aerodynamic characteristics of a large-scale hybrid upper surface blown flap model having four engines

    NASA Technical Reports Server (NTRS)

    Carros, R. J.; Boissevain, A. G.; Aoyagi, K.

    1975-01-01

    Data are presented from an investigation of the aerodynamic characteristics of a large-scale wind tunnel aircraft model that utilized a hybrid upper surface blown flap to augment lift. The hybrid concept of this investigation used a portion of the turbofan exhaust air for blowing over the trailing edge flap to provide boundary layer control. The model, tested in the Ames 40- by 80-foot Wind Tunnel, had a 27.5 deg swept wing of aspect ratio 8 and 4 turbofan engines mounted on the upper surface of the wing. The lift of the model was augmented by turbofan exhaust impingement on the wing upper surface and flap system. Results were obtained for three flap deflections, for some variation of engine nozzle configuration, and for jet thrust coefficients from 0 to 3.0. Six-component longitudinal and lateral data are presented with four-engine operation and with the critical engine out. In addition, a limited number of cross-plots of the data are presented. All of the tests were made with a downwash rake installed instead of a horizontal tail. Some of these downwash data are also presented.

  2. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    NASA Astrophysics Data System (ADS)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
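
    The flow-path generation step can be illustrated with the classic D8 scheme: each cell drains to its steepest-descent neighbour, and accumulation counts the cells upstream. This stand-in ignores pits, sewers, and partial routing, which the framework above is precisely meant to handle.

        import numpy as np

        def d8_flow_accumulation(dem):
            """Minimal D8 routing over a DEM grid: route each cell to its
            steepest-descent neighbour, processing cells from high to low
            so upstream contributions are complete before being passed on."""
            rows, cols = dem.shape
            acc = np.ones_like(dem, dtype=float)          # each cell contributes itself
            for idx in np.argsort(dem, axis=None)[::-1]:  # high to low
                r, c = divmod(idx, cols)
                best, target = 0.0, None
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                            drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                            if drop > best:
                                best, target = drop, (rr, cc)
                if target:
                    acc[target] += acc[r, c]
            return acc

        dem = np.array([[5., 4., 3.], [4., 3., 2.], [3., 2., 1.]])
        print(d8_flow_accumulation(dem))    # accumulation grows toward the low corner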

  3. Large-scale Folding: Implications For Effective Lithospheric Rheology And Thin Sheet Models.

    NASA Astrophysics Data System (ADS)

    Schmalholz, S. M.; Podladchikov, Yu. Yu.; Burg, J.-P.

    We show that folding of a non-Newtonian layer resting on a homogeneous Newtonian matrix with finite thickness under the influence of gravity can occur by three modes: (i) matrix-controlled folding, dependent on the effective viscosity contrast between layer and matrix; (ii) gravity-controlled folding, dependent on the Argand number (the ratio of the stress caused by gravity to the stress caused by shortening); and (iii) detachment folding, dependent on the ratio of matrix thickness to layer thickness. We construct a phase diagram that defines the transitions between each of the three folding modes. Our priority is transparency of the analytical derivations (e.g. thin-plate versus thick-plate approximations), which permits complete classification of the folding modes involving a minimum number of dimensionless parameters. Accuracy and sensitivity of the analytical results to model assumptions are investigated. In particular, depth-dependence of matrix rheology is only important for folding over a narrow range of material parameters. In contrast, strong depth-dependence of the folding layer viscosity limits the applicability of ductile rheology and leads to a viscoelastic transition for layers on the crustal and lithospheric scales. This transition allows estimating the critical elastic thickness of the oceanic lithosphere, which determines whether the oceanic lithosphere deforms in an effectively ductile or elastic manner. Considering applicability conditions of thin viscous sheet models for large-scale lithospheric deformation, derived in terms of the Argand number, our results show that the uplift rates caused by folding (which are neglected by the thin sheet models) are of the same order as the uplift rates caused by layer thickening. This result further indicates that large-scale folding, and not crustal thickening, was the dominant deformation mode during the evolution of the Himalayan syntaxes. Our theory is applied to estimate the effective thickness of the folded Central Asian

  4. Modeling long-term, large-scale sediment storage using a simple sediment budget approach

    NASA Astrophysics Data System (ADS)

    Naipal, Victoria; Reick, Christian; Van Oost, Kristof; Hoffmann, Thomas; Pongratz, Julia

    2016-05-01

    Currently, the anthropogenic perturbation of the biogeochemical cycles remains unquantified due to the poor representation of lateral fluxes of carbon and nutrients in Earth system models (ESMs). This lateral transport of carbon and nutrients between terrestrial ecosystems is strongly affected by accelerated soil erosion rates. However, the quantification of global soil erosion by rainfall and runoff, and the resulting redistribution is missing. This study aims at developing new tools and methods to estimate global soil erosion and redistribution by presenting and evaluating a new large-scale coarse-resolution sediment budget model that is compatible with ESMs. This model can simulate spatial patterns and long-term trends of soil redistribution in floodplains and on hillslopes, resulting from external forces such as climate and land use change. We applied the model to the Rhine catchment using climate and land cover data from the Max Planck Institute Earth System Model (MPI-ESM) for the last millennium (here AD 850-2005). Validation is done using observed Holocene sediment storage data and observed scaling between sediment storage and catchment area. We find that the model reproduces the spatial distribution of floodplain sediment storage and the scaling behavior for floodplains and hillslopes as found in observations. After analyzing the dependence of the scaling behavior on the main parameters of the model, we argue that the scaling is an emergent feature of the model and mainly dependent on the underlying topography. Furthermore, we find that land use change is the main contributor to the change in sediment storage in the Rhine catchment during the last millennium. Land use change also explains most of the temporal variability in sediment storage in floodplains and on hillslopes.

  5. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we address the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases for QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two databases, one commercial and one freely available: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level. PMID:26046311
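
    In practice the recipe amounts to restricting records to a single assay methodology and discarding inconsistent duplicates before training. The sketch below uses invented column names and values, not the actual ChEMBL or Integrity schema.

        import pandas as pd

        records = pd.DataFrame({        # hypothetical pooled activity records
            "smiles": ["c1ccccc1N", "c1ccccc1N", "c1ccccc1N", "CCO"],
            "target": ["HIV-1 RT"] * 4,
            "assay":  ["enzymatic", "enzymatic", "cell-based", "enzymatic"],
            "pIC50":  [7.1, 7.3, 5.0, 4.2],
        })

        # 1) keep a single assay methodology
        one_assay = records[records["assay"] == "enzymatic"]

        # 2) drop compound/target pairs whose replicate activities diverge
        spread = one_assay.groupby(["smiles", "target"])["pIC50"].agg(["mean", "std", "size"])
        consistent = spread[(spread["std"].fillna(0.0) < 0.5) | (spread["size"] == 1)]
        print(consistent)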

  6. Evaluation of large-scale meteorological patterns associated with temperature extremes in the NARCCAP regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Waliser, Duane E.; Lee, Huikyo; Neelin, J. David; Lintner, Benjamin R.; McGinnis, Seth; Mearns, Linda O.; Kim, Jinwon

    2015-12-01

    Large-scale meteorological patterns (LSMPs) associated with temperature extremes are evaluated in a suite of regional climate model (RCM) simulations contributing to the North American Regional Climate Change Assessment Program. LSMPs are characterized through composites of surface air temperature, sea level pressure, and 500 hPa geopotential height anomalies concurrent with extreme temperature days. Six of the seventeen RCM simulations are driven by boundary conditions from reanalysis while the other eleven are driven by one of four global climate models (GCMs). Four illustrative case studies are analyzed in detail. Model fidelity in LSMP spatial representation is high for cold winter extremes near Chicago. Winter warm extremes are captured by most RCMs in northern California, with some notable exceptions. Model fidelity is lower for cool summer days near Houston and extreme summer heat events in the Ohio Valley. Physical interpretation of these patterns and identification of well-simulated cases, such as for Chicago, boosts confidence in the ability of these models to simulate days in the tails of the temperature distribution. Results appear consistent with the expectation that the ability of an RCM to reproduce a realistically shaped frequency distribution for temperature, especially at the tails, is related to its fidelity in simulating LSMPs. Each ensemble member is ranked for its ability to reproduce LSMPs associated with observed warm and cold extremes, identifying systematically high performing RCMs and the GCMs that provide superior boundary forcing. The methodology developed here provides a framework for identifying regions where further process-based evaluation would improve the understanding of simulation error and help guide future model improvement and downscaling efforts.
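
    The compositing step itself is simple enough to sketch: pick the days on which a local temperature series exceeds its extreme threshold and average the concurrent anomaly fields. The synthetic series, grid shape, and 95th-percentile threshold below are placeholders.

        import numpy as np

        def lsmp_composite(field_anom, t_series, quantile=0.95):
            """Average a gridded anomaly field (time, lat, lon) over the days
            when a local temperature series exceeds its extreme threshold."""
            extreme_days = t_series >= np.quantile(t_series, quantile)
            return field_anom[extreme_days].mean(axis=0)

        rng = np.random.default_rng(3)
        t2m = rng.normal(size=1000)                   # station temperature series
        z500_anom = rng.normal(size=(1000, 20, 30))   # synthetic 500 hPa height anomalies
        print(lsmp_composite(z500_anom, t2m).shape)   # -> (20, 30) composite map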

  7. Predictions of a non-Gaussian model for large scale structure

    SciTech Connect

    Fan, Z.H.; Bardeen, J.M.

    1992-06-26

    A modified CDM model for the origin of structure in the universe, based on an inflation model with two interacting scalar fields, is analyzed to make predictions for the statistical properties of the density and velocity fields and the microwave background anisotropy. The initial gauge-invariant potential ζ, defined as ζ = δρ/(ρ + p) + 3φ, where φ is the curvature perturbation amplitude and p is the pressure, is the sum of a Gaussian field φ_1 and the square of a Gaussian field φ_2. A Harrison-Zel'dovich scale-invariant power spectrum is assumed for φ_1, and a log-normal 'peak' power spectrum for φ_2. The location and the width of the peak are described by parameters k_c and a, respectively. The model is motivated to some extent by inflation models with two interacting scalar fields, but is mainly interesting as an example of a model whose statistical properties change with scale. On small scales, it is almost identical to a standard scale-invariant Gaussian CDM model. On scales near the location of the peak of the non-Gaussian field, the distributions have long tails at high positive values of the density and velocity fields. Thus, it is easier to get large-scale streaming velocities than in the standard CDM model. The quadrupole amplitude of fluctuations of the cosmic microwave background radiation and the rms variation of the temperature field smoothed with a 10° FWHM Gaussian are calculated; reasonable agreement is found with the new COBE results.

  9. Query Large Scale Microarray Compendium Datasets Using a Model-Based Bayesian Approach with Variable Selection

    PubMed Central

    Hu, Ming; Qin, Zhaohui S.

    2009-01-01

    In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While these approaches work well for small datasets, the heterogeneity introduced by increased sample size inevitably reduces their sensitivity and specificity. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232
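
    As a much-simplified sketch of the query idea, one can compare, per candidate gene, a linear-transformation model against a no-association null via BIC; the published method additionally selects the subset of conditions over which the relation holds and handles outliers, which this toy scoring omits. All data are synthetic.

        import numpy as np

        def query_score(query, candidate):
            """BIC evidence that query = linear transform of candidate + noise,
            relative to an intercept-only null; positive favours co-expression."""
            y, x = np.asarray(query, float), np.asarray(candidate, float)
            n = len(y)
            a, b = np.polyfit(x, y, 1)
            rss1 = np.sum((y - (a * x + b)) ** 2)
            rss0 = np.sum((y - y.mean()) ** 2)
            bic1 = n * np.log(rss1 / n) + 2 * np.log(n)
            bic0 = n * np.log(rss0 / n) + 1 * np.log(n)
            return bic0 - bic1

        rng = np.random.default_rng(6)
        profile = rng.normal(size=60)
        coexpressed = 3.0 * profile + 1.0 + rng.normal(0, 0.3, size=60)
        unrelated = rng.normal(size=60)
        print(query_score(profile, coexpressed), query_score(profile, unrelated))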

  10. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details of the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project, the related data products for the SPP and SO missions, and the supporting global heliospheric simulations will be discussed.
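
    The B-line tracing step is essentially streamline integration of dx/ds = B/|B|. The sketch below traces one line through an idealized dipole field with classical RK4, stopping near an assumed inner boundary; the field function, seed point, and step size are placeholders for the extrapolation model's field evaluation.

        import numpy as np

        def trace_field_line(b_func, seed, step=0.01, n_steps=2000, r_min=0.3):
            """Trace a field line by RK4 integration of dx/ds = B/|B|."""
            unit_b = lambda x: b_func(x) / np.linalg.norm(b_func(x))
            x = np.array(seed, dtype=float)
            line = [x.copy()]
            for _ in range(n_steps):
                k1 = unit_b(x)
                k2 = unit_b(x + 0.5 * step * k1)
                k3 = unit_b(x + 0.5 * step * k2)
                k4 = unit_b(x + step * k3)
                x = x + step / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
                line.append(x.copy())
                if np.linalg.norm(x) < r_min:       # stop at the inner boundary
                    break
            return np.array(line)

        dipole = lambda x: (3.0 * x * x[2] / np.linalg.norm(x)**5
                            - np.array([0.0, 0.0, 1.0]) / np.linalg.norm(x)**3)
        print(trace_field_line(dipole, seed=[1.5, 0.0, 0.5])[-1])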

  11. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    PubMed

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J R; Ballam, Joan M

    2015-04-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  12. Towards a large-scale scalable adaptive heart model using shallow tree meshes

    NASA Astrophysics Data System (ADS)

    Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf

    2015-10-01

    Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space, and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.

  13. Prospective Large-Scale Field Study Generates Predictive Model Identifying Major Contributors to Colony Losses

    PubMed Central

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J. R.; Ballam, Joan M.

    2015-01-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  14. Development of models for the planning of large-scale water-energy systems. Final report

    SciTech Connect

    Matsumoto, J.; Mays, L.W.; Rohlich, G.A.

    1982-01-01

    A mathematical optimization model has been developed to help investigate various alternatives for future water-energy systems. The capacity expansion problem of water-energy systems can be stated as follows: Given the future demands for water, electricity, gas, and coal and the availability of water and coal, determine the location, timing, and size of facilities to satisfy the demands at minimum cost, which is the sum of operating and capacity costs. Specifically, the system consists of four subsystems: water, coal, electricity, and gas systems. Their interactions are expressed explicitly in mathematical terms and equations, whereas most models describe individual constraints but their interactions are not stated explicitly. Because of the large scale, decomposition techniques are extensively applied. To do this an in-depth study was made of the mathematical structure of the water-energy system problem. The Benders decomposition is applied to the capacity expansion problem, decomposing it into a three-level problem: the capacity problem, the production problem, and the distribution problem. These problems are solved by special algorithms: the generally upper bounded (GUB) algorithm, the simply upper bounded (SUB) algorithm, and the generalized network flow algorithm, respectively.
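
    The Benders step can be shown on a toy two-stage problem: choose capacity now, produce later, meet a fixed demand. The cost coefficients and demand below are invented, the dual prices come from SciPy's HiGHS marginals (SciPy >= 1.7), and the single scalar subproblem stands in for the report's GUB/SUB/network-flow subproblem solvers.

        import numpy as np
        from scipy.optimize import linprog

        c_x, q_y, h = 1.0, 2.0, 8.0    # capacity cost, production cost, demand
        cuts = []                      # Benders cuts: theta >= pi*(h - x)
        ub = np.inf

        for it in range(20):
            # master: min c_x*x + theta subject to the cuts collected so far
            A = [[-pi, -1.0] for pi, _ in cuts] or None
            b = [-rhs for _, rhs in cuts] or None
            m = linprog([c_x, 1.0], A_ub=A, b_ub=b,
                        bounds=[(0, 10), (0, None)], method="highs")
            x, lb = m.x[0], m.fun
            # subproblem: min q_y*y subject to y >= h - x; read the dual price pi
            s = linprog([q_y], A_ub=[[-1.0]], b_ub=[-(h - x)],
                        bounds=[(0, None)], method="highs")
            pi = -s.ineqlin.marginals[0]
            ub = min(ub, c_x * x + s.fun)
            if ub - lb < 1e-8:
                break
            cuts.append((pi, pi * h))  # encodes pi*h - pi*x - theta <= 0

        print(f"converged in {it + 1} iterations: x = {x:.2f}, total cost = {ub:.2f}")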

  15. A mass-flux cumulus parameterization scheme for large-scale models: description and test with observations

    NASA Astrophysics Data System (ADS)

    Wu, Tongwen

    2012-02-01

    A simple mass-flux cumulus parameterization scheme suitable for large-scale atmospheric models is presented. The scheme is based on a bulk-cloud approach and has the following properties: (1) Deep convection is launched at the level of maximum moist static energy above the top of the boundary layer. It is triggered if there is positive convective available potential energy (CAPE) and the relative humidity of the air at the lifting level of the convective cloud is greater than 75%; (2) Convective updrafts for mass, dry static energy, moisture, cloud liquid water and momentum are parameterized by a one-dimensional entrainment/detrainment bulk-cloud model. The lateral entrainment of environmental air into the unstable ascending parcel before it rises to the lifting condensation level is considered. The entrainment/detrainment amount for the updraft cloud parcel is determined separately according to the increase/decrease of updraft parcel mass with altitude, and the mass change for the adiabatically ascending cloud parcel with altitude is derived from a total energy conservation equation of the whole adiabatic system, which involves the updraft cloud parcel and the environment; (3) The convective downdraft is assumed saturated and originates from the level of minimum environmental saturated equivalent potential temperature within the updraft cloud; (4) The mass flux at the base of the convective cloud is determined by a closure scheme suggested by Zhang (J Geophys Res 107(D14), doi:10.1029/2001JD001005, 2002), in which the increase/decrease of CAPE due to changes of the thermodynamic states in the free troposphere resulting from convection approximately balances the decrease/increase resulting from large-scale processes. Evaluation of the proposed convection scheme is performed using a single column model (SCM) forced by the Atmospheric Radiation Measurement Program
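
    Property (1) is simple enough to sketch directly: compute moist static energy, pick its maximum above the boundary-layer top as the launch level, and fire only when CAPE is positive and the relative humidity there exceeds 75%. The synthetic sounding and the externally supplied CAPE value below are illustrative assumptions.

        import numpy as np

        CP, G, LV = 1004.0, 9.81, 2.5e6      # J/kg/K, m/s^2, J/kg

        def launch_and_trigger(T, z, q, rh, cape, bl_top_idx, rh_min=0.75):
            """Launch level = maximum moist static energy above the boundary
            layer; trigger requires CAPE > 0 and RH at that level > rh_min."""
            mse = CP * T + G * z + LV * q
            k = bl_top_idx + int(np.argmax(mse[bl_top_idx:]))
            return k, bool(cape > 0.0 and rh[k] > rh_min)

        z = np.linspace(0.0, 10e3, 40)       # synthetic column, surface to 10 km
        T = 300.0 - 6.5e-3 * z
        q = 0.016 * np.exp(-z / 2500.0)
        rh = np.clip(0.85 - z / 40e3, 0.0, 1.0)
        k, fire = launch_and_trigger(T, z, q, rh, cape=350.0, bl_top_idx=4)
        print(f"launch level {k} (z = {z[k]:.0f} m), triggered: {fire}")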

  16. Improving urban streamflow forecasting using a high-resolution large scale modeling framework

    NASA Astrophysics Data System (ADS)

    Read, Laura; Hogue, Terri; Gochis, David; Salas, Fernando

    2016-04-01

    Urban flood forecasting is a critical component in effective water management, emergency response, regional planning, and disaster mitigation. As populations across the world continue to move to cities (~1.8% growth per year), and studies indicate that significant flood damages are occurring outside the floodplain in urban areas, the ability to model and forecast flow over the urban landscape becomes critical to maintaining infrastructure and society. In this work, we use the Weather Research and Forecasting- Hydrological (WRF-Hydro) modeling framework as a platform for testing improvements to representation of urban land cover, impervious surfaces, and urban infrastructure. The three improvements we evaluate include: updating the land cover to the latest 30-meter National Land Cover Dataset, routing flow over a high-resolution 30-meter grid, and testing a methodology for integrating an urban drainage network into the routing regime. We evaluate performance of these improvements in the WRF-Hydro model for specific flood events in the Denver-Metro Colorado domain, comparing to historic gaged streamflow for retrospective forecasts. Denver-Metro provides an interesting case study as it is a rapidly growing urban/peri-urban region with an active history of flooding events that have caused significant loss of life and property. Considering that the WRF-Hydro model will soon be implemented nationally in the U.S. to provide flow forecasts on the National Hydrography Dataset Plus river reaches - increasing capability from 3,600 forecast points to 2.7 million, we anticipate that this work will support validation of this service in urban areas for operational forecasting. Broadly, this research aims to provide guidance for integrating complex urban infrastructure with a large-scale, high resolution coupled land-surface and distributed hydrologic model.

  17. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale ensemble simulations

    NASA Astrophysics Data System (ADS)

    Heng, Y.; Hoffmann, L.; Griessbach, S.; Rößler, T.; Stein, O.

    2015-10-01

    An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., large-scale ensemble simulations for the reconstruction of volcanic emissions and final transport simulations. The transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric Infrared Sounder (AIRS) satellite observations. A case study of the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final transport simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. The SO2 column densities from the simulations are in good qualitative agreement with the AIRS observations. Our new inverse modeling and simulation system is expected to become a useful tool to also study other volcanic
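
    One cycle of the sequential-importance-resampling idea can be sketched as follows: weight candidate emission profiles by how well their simulated columns match an observation, then resample systematically. The trivial forward model and all numbers are placeholders for MPTRAC ensemble runs and AIRS retrievals.

        import numpy as np

        def sir_step(particles, weights, simulate, observed, sigma_obs, rng):
            """Reweight particles by a Gaussian observation likelihood,
            then resample systematically to equal weights."""
            sim = np.array([simulate(p) for p in particles])
            w = weights * np.exp(-0.5 * ((sim - observed) / sigma_obs) ** 2)
            w /= w.sum()
            positions = (rng.random() + np.arange(len(w))) / len(w)
            idx = np.searchsorted(np.cumsum(w), positions)
            return particles[idx], np.full(len(w), 1.0 / len(w))

        rng = np.random.default_rng(4)
        particles = rng.uniform(0, 10, size=(500, 1))   # candidate emission rates
        forward = lambda p: 2.0 * p[0]                  # stand-in transport model
        particles, weights = sir_step(particles, np.full(500, 1 / 500),
                                      forward, observed=7.0, sigma_obs=0.5, rng=rng)
        print("posterior mean emission:", float(particles.mean()))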

  18. Large-scale protein-protein interactions detection by integrating big biosensing data with computational model.

    PubMed

    You, Zhu-Hong; Li, Shuai; Gao, Xin; Luo, Xin; Ji, Zhen

    2014-01-01

    Protein-protein interactions (PPIs) are the basis of biological functions, and studying these interactions at a molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, high-throughput experimental methods for identifying PPIs are both time-consuming and expensive, and high-throughput PPI data are often associated with high false-positive and high false-negative rates. To address these problems, we propose a method for PPI detection that integrates biosensor-based PPI data with a novel computational model. The method was developed based on the extreme learning machine algorithm combined with a novel representation of protein sequence descriptors. When performed on a large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy, with 84.08% sensitivity at a specificity of 85.53%. We conducted more extensive experiments to compare the proposed method with a state-of-the-art technique, the support vector machine. The results demonstrate that our approach is very promising for detecting new PPIs, and it can be a helpful supplement to biosensor-based PPI data detection. PMID:25215285
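
    The extreme learning machine at the core of the method is compact enough to sketch: a fixed random hidden layer followed by output weights solved in closed form. The random descriptor vectors and labels below are placeholders for the paper's protein-pair features.

        import numpy as np

        class ELM:
            """Random sigmoid hidden layer; output weights by ridge least squares."""
            def __init__(self, n_in, n_hidden, seed=0, ridge=1e-3):
                rng = np.random.default_rng(seed)
                self.W = rng.normal(size=(n_in, n_hidden))
                self.b = rng.normal(size=n_hidden)
                self.ridge = ridge

            def _hidden(self, X):
                return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))

            def fit(self, X, y):
                H = self._hidden(X)
                A = H.T @ H + self.ridge * np.eye(H.shape[1])
                self.beta = np.linalg.solve(A, H.T @ y)
                return self

            def predict(self, X):
                return self._hidden(X) @ self.beta

        rng = np.random.default_rng(5)
        X = rng.normal(size=(200, 40))                  # stand-in pair descriptors
        y = (X[:, 0] * X[:, 1] > 0).astype(float)       # stand-in interaction labels
        model = ELM(40, 100).fit(X[:150], y[:150])
        print("holdout accuracy:", ((model.predict(X[150:]) > 0.5) == y[150:]).mean())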

  19. Large-scale infiltration experiments into unsaturated stratified loess sediments: Monitoring and modeling

    NASA Astrophysics Data System (ADS)

    Gvirtzman, Haim; Shalev, Eyal; Dahan, Ofer; Hatzor, Yossef H.

    2008-01-01

    Two large-scale field experiments were conducted to track water flow through unsaturated stratified loess deposits. In the experiments, a trench was flooded with water, and water infiltration was allowed until full saturation of the sediment column, to a depth of 20 m, was achieved. The water penetrated through a sequence of alternating silty-sand and sandy-clay loess deposits. The changes in water content over time were monitored at 28 points beneath the trench, using time domain reflectometry (TDR) probes placed in four boreholes. Detailed records were obtained from a 21-day-period of wetting, followed by a 3-month-period of drying, and finally followed by a second 14-day-period of re-wetting. These processes were simulated using a two-dimensional numerical code that solves the flow equation. The model was calibrated using PEST. The simulations demonstrate that the propagation of the wetting front is hampered due to alternating silty-sand and sandy-clay loess layers. Moreover, wetting front propagation is further hampered by the extremely low values of the initial, unsaturated, hydraulic conductivity; thereby increasing the water content within the onion-shaped wetted zone up to full saturation. Numerical simulations indicate that above-hydrostatic pressure is developed within intermediate saturated layers, enhancing wetting front propagation.

  20. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    SciTech Connect

    Vishnu, Abhinav; Agarwal, Khushbu

    2015-09-08

    In this paper, we propose a work-stealing runtime, the Library for Work Stealing (LibWS), using the MPI one-sided model for designing scalable FP-Growth, the de facto frequent pattern mining algorithm, on large-scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.
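
    The one-sided stealing idea can be sketched with mpi4py. This is a toy under stated assumptions, far simpler than LibWS: each rank exposes a bare task counter in an RMA window, and a thief decrements it atomically with a fetch-and-op; the script name in the comment is hypothetical.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank exposes a counter of unprocessed tasks through an RMA window.
tasks = np.array([100], dtype='i')
win = MPI.Win.Create(tasks, comm=comm)

def try_steal(victim):
    """Atomically take one task from `victim` via one-sided fetch-and-add."""
    dec = np.array([-1], dtype='i')
    old = np.zeros(1, dtype='i')
    win.Lock(victim)
    win.Fetch_and_op(dec, old, victim, 0, MPI.SUM)
    if old[0] <= 0:                    # victim had no work left: undo the take
        win.Fetch_and_op(np.array([1], dtype='i'), old, victim, 0, MPI.SUM)
        win.Unlock(victim)
        return False
    win.Unlock(victim)
    return True

# Idle ranks probe rank 0 (run with e.g. `mpiexec -n 4 python steal_demo.py`).
if rank != 0:
    ok = try_steal(victim=0)
    print(f"rank {rank}: steal {'succeeded' if ok else 'failed'}")
comm.Barrier()
win.Free()
```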

  1. Combining flux and energy balance analysis to model large-scale biochemical networks.

    PubMed

    Heuett, William J; Qian, Hong

    2006-12-01

    Stoichiometric Network Theory is a constraints-based optimization approach for quantitative analysis of the phenotypes of large-scale biochemical networks that avoids the use of detailed kinetics. This approach uses the reaction stoichiometric matrix in conjunction with constraints provided by flux balance and energy balance to guarantee mass-conserved and thermodynamically allowable predictions. However, the flux and energy balance constraints have not been effectively applied simultaneously on the genome scale because optimization under the combined constraints is non-linear. In this paper, a sequential quadratic programming algorithm that solves the non-linear optimization problem is introduced. A simple example and the system of fermentation in Saccharomyces cerevisiae are used to illustrate the new method. The algorithm allows the use of non-linear objective functions. As a result, we suggest a novel optimization with respect to the heat dissipation rate of a system. We also emphasize the importance of incorporating interactions between a model network and its surroundings. PMID:17245812
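
    The combined problem is easy to reproduce in miniature with an off-the-shelf SQP solver. The sketch below, using SciPy's SLSQP, imposes flux balance as an equality and the sign-compatibility form of energy balance as an inequality on a toy three-reaction chain; the stoichiometry and bounds are invented for illustration and have nothing to do with the paper's yeast system.

```python
import numpy as np
from scipy.optimize import minimize

# Toy linear pathway A -> B -> C: two internal metabolites, three reactions.
S = np.array([[1.0, -1.0, 0.0],     # balance on metabolite A (toy matrix)
              [0.0, 1.0, -1.0]])    # balance on metabolite B

def heat_dissipation(x):
    v, dmu = x[:3], x[3:]
    return -np.dot(v, dmu)          # hdr = -sum(v_i * dmu_i) >= 0

constraints = [
    {"type": "eq",   "fun": lambda x: S @ x[:3]},        # flux balance: S v = 0
    {"type": "ineq", "fun": lambda x: -x[:3] * x[3:]},   # energy: v_i*dmu_i <= 0
]
x0 = np.r_[np.ones(3), -np.ones(3)]                      # feasible start
res = minimize(heat_dissipation, x0, method="SLSQP", constraints=constraints,
               bounds=[(0.1, 10.0)] * 3 + [(-10.0, -0.1)] * 3)
print(res.x[:3], heat_dissipation(res.x))                # fluxes and minimal hdr
```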

  2. Excavating the Genome: Large-Scale Mutagenesis Screening for the Discovery of New Mouse Models.

    PubMed

    Sundberg, John P; Dadras, Soheil S; Silva, Kathleen A; Kennedy, Victoria E; Murray, Stephen A; Denegre, James M; Schofield, Paul N; King, Lloyd E; Wiles, Michael V; Pratt, C Herbert

    2015-11-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been fully studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create further novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis, and while not automated to the level of the physiological phenotyping, histopathology still provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being characterized and developed. PMID:26551941

  3. Acoustic characteristics of a large scale wind-tunnel model of a jet flap aircraft

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Aiken, T. N.; Aoyagi, K.

    1975-01-01

    The expanding-duct jet flap (EJF) concept is studied to determine STOL performance in turbofan-powered aircraft. The EJF is used to solve the problem of ducting the required volume of air into the wing by providing an expanding cavity between the upper and lower surfaces of the flap. The results are presented of an investigation of the acoustic characteristics of the EJF concept on a large-scale aircraft model powered by JT15D engines. The noise of the EJF is generated by acoustic dipoles, as shown by the sixth-power dependence of the noise on jet velocity. These sources result from the interaction of the flow turbulence with the internal and external surfaces of the flap and the trailing edges. Increasing the trailing-edge jet from 70 percent span to 100 percent span increased the noise 2 dB for the equivalent nozzle area. Blowing at the knee of the flap rather than the trailing edge reduced the noise 5 to 10 dB by displacing the jet from the trailing edge and providing shielding from high-frequency noise. Deflecting the flap and varying the angle of attack modified the directivity of the underwing noise but did not affect the peak noise. A forward speed of 33.5 m/sec (110 ft/sec) reduced the dipole noise less than 1 dB.
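
    The sixth-power dependence quoted as the dipole signature translates directly into a decibel form; a short sketch of that standard scaling relation (a textbook identity, not a result specific to this test):

```latex
% Dipole (surface-interaction) sources: radiated acoustic power scales as V^6,
% so the sound-pressure-level change between two jet velocities follows as
\[
  W \propto \rho\, V^{6}
  \quad\Longrightarrow\quad
  \Delta\mathrm{SPL} = 10 \log_{10}\frac{W_2}{W_1}
                     = 60 \log_{10}\frac{V_2}{V_1}\ \mathrm{dB}.
\]
```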

  4. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g., irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e., re-purposing). This presentation will give an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new Open Data at the web site www.hypeweb.smhi.se to be used for (i) climate change impact assessments on water resources and dynamics; (ii) the European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) design variables for infrastructure constructions; (iv) spatial water-resource mapping; (v) operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) input to oceanographic models for operational forecasts and marine status assessments; (vii) research. The following regional domains have been modelled so far with different resolutions (number of subbasins within brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle-East North-Africa (31 000), and the Indian subcontinent (6 000). The HYPE web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover, the user can explore and download historical time series of discharge for each basin and explore the performance of the model.

  5. A large-scale methane model by incorporating the surface water transport

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoliang; Zhuang, Qianlai; Liu, Yaling; Zhou, Yuyu; Aghakouchak, Amir

    2016-06-01

    The effect of surface water movement on methane emissions is not explicitly considered in most of the current methane models. In this study, a surface-water routing scheme was coupled into our previously developed large-scale methane model. The revised methane model was then used to simulate global methane emissions during 2006-2010. From our simulations, the global mean annual maximum inundation extent is 10.6 ± 1.9 million km² and the methane emission is 297 ± 11 Tg C/yr in the study period. In comparison to the currently used TOPMODEL-based approach, we found that the incorporation of surface water routing leads to a 24.7% increase in the annual maximum inundation extent and a 30.8% increase in the methane emissions at the global scale for the study period. The effect of surface water transport on methane emissions varies in different regions: (1) the largest difference occurs in flat and moist regions, such as Eastern China; (2) high-latitude regions, hot spots in methane emissions, show a small increase in both inundation extent and methane emissions with the consideration of surface water movement; and (3) in arid regions, the new model yields significantly larger maximum flooded areas and a relatively small increase in the methane emissions. Although surface water is a small component in the terrestrial water balance, it plays an important role in determining inundation extent and methane emissions, especially in flat regions. This study indicates that future quantification of methane emissions should consider the effects of surface water transport.

  6. Large-scale collection and annotation of gene models for date palm (Phoenix dactylifera, L.).

    PubMed

    Zhang, Guangyu; Pan, Linlin; Yin, Yuxin; Liu, Wanfei; Huang, Dawei; Zhang, Tongwu; Wang, Lei; Xin, Chengqi; Lin, Qiang; Sun, Gaoyuan; Ba Abdullah, Mohammed M; Zhang, Xiaowei; Hu, Songnian; Al-Mssallem, Ibrahim S; Yu, Jun

    2012-08-01

    The date palm (Phoenix dactylifera L.), famed for its sugar-rich fruits (dates) and cultivated by humans since 4,000 B.C., is an economically important crop in the Middle East, Northern Africa, and increasingly other places where climates are suitable. Despite a long history of human cultivation, the understanding of P. dactylifera genetics and molecular biology is rather limited, hindered by a lack of high-quality basic data from genomics and transcriptomics. Here we report a large-scale effort in generating gene models (assembled expressed sequence tags, or ESTs, mapped to a genome assembly) for P. dactylifera, using the long-read pyrosequencing platform (Roche/454 GS FLX Titanium) at high coverage. We built fourteen cDNA libraries from different P. dactylifera tissues (cultivar Khalas) and acquired 15,778,993 raw sequencing reads (about one million sequencing reads per library), and the pooled sequences were assembled into 67,651 non-redundant contigs and 301,978 singletons. We annotated 52,725 contigs based on the plant databases and 45 contigs based on functional domains with reference to the Pfam database. From the annotated contigs, we assigned GO (Gene Ontology) terms to 36,086 contigs and KEGG pathways to 7,032 contigs. Our comparative analysis showed that 70.6% (47,930), 69.4% (47,089), 68.4% (46,441), and 69.3% (47,048) of the P. dactylifera gene models are shared with rice, sorghum, Arabidopsis, and grapevine, respectively. We also classified our gene models into house-keeping and tissue-specific genes based on their tissue specificity. PMID:22736259

  7. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    NASA Astrophysics Data System (ADS)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues that poses risks to food security, the economy, and public health, and it is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context, not only connection to a sewage system is of high importance, but also treatment, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g., bathing and washing. It is thus clear that water use and sewerage are inseparably connected. In this study, large-scale water quality modelling is used to point out hotspots of water pollution and to gain insight into potential environmental impacts, in particular in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus of this study is on developing countries, i.e., Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and the main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the initial loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to its dilution potential, especially in terms of seasonal variability. Highly varying shares of BOD sources across
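
    The dilution argument in the last sentences can be made concrete with a minimal load-over-discharge calculation. This is an illustrative sketch only, with invented numbers and unit conversions, not WorldQual's actual formulation:

```python
def instream_bod_concentration(point_load, diffuse_load, discharge):
    """In-stream BOD concentration (mg/L): summed loadings (tonnes/day)
    diluted by river discharge (m^3/s)."""
    load_mg_per_day = (point_load + diffuse_load) * 1e9   # tonnes/day -> mg/day
    q_l_per_day = discharge * 1000.0 * 86400.0            # m^3/s -> L/day
    return load_mg_per_day / q_l_per_day

# Toy urban reach: same loadings in the dry season vs. the wet season.
for q in (50.0, 400.0):   # discharge, m^3/s
    c = instream_bod_concentration(point_load=12.0, diffuse_load=3.0, discharge=q)
    print(f"Q = {q:5.0f} m^3/s  ->  BOD = {c:.1f} mg/L")
```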

  8. Large Scale Terrestrial Modeling: A Discussion of Technical and Conceptual Challenges and Solution Approaches

    NASA Astrophysics Data System (ADS)

    Rahman, M.; Aljazzar, T.; Kollet, S.; Maxwell, R.

    2012-04-01

    A number of simulation platforms have been developed to study the spatiotemporal variability of hydrologic responses to global change. Sophisticated terrestrial models demand large data sets and considerable computing resources as they attempt to include detailed physics for all relevant processes involving the feedbacks between subsurface, land surface and atmospheric processes. Data scarcity, error and uncertainty; allocation of computing resources; and post-processing/analysis are some of the well-known challenges, and they have been discussed in previous studies dealing with catchments ranging from plot-scale research (10² m²) to small experimental catchments (0.1-10 km²) and, occasionally, medium-sized catchments (10²-10³ km²). However, there is still a lack of knowledge about large-scale simulations of the coupled terrestrial mass and energy balance over long time scales (years to decades). In this study, the interactions between the subsurface, land surface, and atmosphere are simulated in two large-scale (>10⁴ km²) river catchments: the Luanhe catchment in the North China Plain and the Rur catchment in Germany. As a simulation platform, a fully coupled model (ParFlow.CLM) that links a three-dimensional variably-saturated groundwater flow model (ParFlow) with a land surface model (CLM) is used. The Luanhe and the Rur catchments have areas of 54,000 and 28,224 km² respectively and are being simulated using spatial resolutions on the order of 10² to 10³ m in the horizontal and 10⁻² to 10⁻¹ m in the vertical direction. ParFlow.CLM was configured over computational domains well beyond the actual watershed boundaries to account for cross-watershed flow. The resulting catchment models consist of up to 10⁸ cells, which were implemented over more than 1000 processors, each with 512 MB of memory, on JUGENE hosted by the Juelich Supercomputing Centre, Germany. Consequently, large numbers of input and output files were produced for each parameter, such as soil

  9. Uncertainty analysis of channel capacity assumptions in large scale hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Walsh, Alexander; Stroud, Rebecca; Willis, Thomas

    2015-04-01

    Flood modelling on national or even global scales is of great interest to re/insurers, governments and other agencies. Channel bathymetry data are not available over large areas, which is a major limitation at this scale of modelling: acquiring them requires expensive channel surveying, and the majority of remotely sensed data cannot see through water. Furthermore, representing channels as 1D models, or as explicit features in the model domain, is computationally demanding, and so it is often necessary to find ways to reduce computational costs. A more efficient methodology is to make assumptions concerning the capacity of the channel, and then to remove this volume from inflow hydrographs. Previous research has shown that natural channels generally conform to carry flow for a 1-in-2-year return period (QMED). This assumption is widely used in large-scale modelling studies across the world. However, channels flowing through high-risk areas, such as urban environments, are often modified to increase their capacity and thus reduce flood risk. Simulated flood outlines are potentially very sensitive to assumptions made regarding these capacities. For example, under the 1-in-2-year assumption, the flooding associated with smaller events might be overestimated, with too much flow being modelled as out of bank. There are requirements to (i) quantify the impact of uncertainty in assumed channel capacity on simulated flooded areas, and (ii) develop more optimal capacity assumptions, depending on specific reach characteristics, so that the effects of channel modification can be better represented in future studies. This work will demonstrate findings from a preliminary uncertainty analysis that seeks to address the former requirement. A set of benchmark tests, using 2D hydraulic models, was undertaken in which different estimated return-period flows in contrasting catchments are modelled with varying channel capacity parameters. The depth and extent for each benchmark model output were
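
    The capacity assumption itself reduces to a one-line operation on the hydrograph. A minimal sketch, with an invented hydrograph and a QMED-style capacity value:

```python
import numpy as np

def out_of_bank_flow(inflow, channel_capacity):
    """Remove in-channel conveyance from an inflow hydrograph: only flow
    exceeding the assumed channel capacity (e.g., QMED) spills to the 2D domain."""
    return np.clip(np.asarray(inflow, dtype=float) - channel_capacity, 0.0, None)

# Toy hydrograph (m^3/s) with an assumed channel capacity of 40 m^3/s.
hydrograph = [5, 20, 45, 80, 60, 30, 10]
print(out_of_bank_flow(hydrograph, 40.0))   # [ 0.  0.  5. 40. 20.  0.  0.]
```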

  10. Metabolic Flux Elucidation for Large-Scale Models Using 13C Labeled Isotopes

    PubMed Central

    Suthers, Patrick F.; Burgard, Anthony P.; Dasika, Madhukar S.; Nowroozi, Farnaz; Van Dien, Stephen; Keasling, Jay D.; Maranas, Costas D.

    2007-01-01

    A key consideration in metabolic engineering is the determination of fluxes of the metabolites within the cell. This determination provides an unambiguous description of metabolism before and/or after engineering interventions. Here, we present a computational framework that combines a constraint-based modeling framework with isotopic label tracing on a large scale. When cells are fed a growth substrate with certain carbon positions labeled with 13C, the distribution of this label in the intracellular metabolites can be calculated based on the known biochemistry of the participating pathways. Most labeling studies focus on skeletal representations of central metabolism and ignore many flux routes that could contribute to the observed isotopic labeling patterns. In contrast, our approach investigates the importance of carrying out isotopic labeling studies using a more comprehensive reaction network consisting of 350 fluxes and 184 metabolites in Escherichia coli, including global metabolite balances on cofactors such as ATP, NADH, and NADPH. The proposed procedure is demonstrated on an E. coli strain engineered to produce amorphadiene, a precursor to the anti-malarial drug artemisinin. The cells were grown in continuous culture on glucose containing 20% [U-13C]glucose; the measurements are made using GC-MS performed on 13 amino acids extracted from the cells. We identify flux distributions for which the calculated labeling patterns agree well with the measurements, alluding to the accuracy of the network reconstruction. Furthermore, we explore the robustness of the flux calculations to variability in the experimental MS measurements, as well as highlight the key experimental measurements necessary for flux determination. Finally, we discuss the effect of reducing the model, as well as shed light on the customization of the developed computational framework to other systems. PMID:17632026

  11. Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.

    1997-01-01

    The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from

  12. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    PubMed Central

    Jarosch, Robert

    2008-01-01

    This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force-deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large-scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i.e., latency relaxation, the Fenn effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc., are explained and interpreted with the help of the model experiments. PMID:19330099

  13. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and as browsers of palatable trees have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high efforts and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on 74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated to browsing intensity and to harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and

  14. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set then are quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more than the chosen reference data. In aggregate, the simulations of land-surface latent and sensible
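
    The aggregation metric described here is plain RMSE over space and time, with the spread between alternative reference datasets reused as an observational-uncertainty proxy. A minimal sketch on synthetic fields (all numbers invented):

```python
import numpy as np

def rmse(sim, ref, axis=None):
    """Root-mean-square error between a simulated field and a reference field."""
    return np.sqrt(np.nanmean((np.asarray(sim) - np.asarray(ref)) ** 2, axis=axis))

# Toy monthly land-surface air temperature fields: (12 months, lat, lon).
rng = np.random.default_rng(0)
ref = 280.0 + 10.0 * rng.random((12, 45, 90))
sim = ref + rng.normal(0.0, 1.5, ref.shape)       # a "model" with ~1.5 K errors
alt = ref + rng.normal(0.0, 0.8, ref.shape)       # an alternative reference set
print(f"aggregate model RMSE:   {rmse(sim, ref):.2f} K")
print(f"observational spread:   {rmse(alt, ref):.2f} K")
```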

  15. Large Scale Numerical Modelling to Study the Dispersion of Persistent Toxic Substances Over Europe

    NASA Astrophysics Data System (ADS)

    Aulinger, A.; Petersen, G.

    2003-12-01

    For the past two decades, environmental research at the GKSS Research Centre has been concerned with airborne pollutants with adverse effects on human health. The research was mainly focused on investigating the dispersion and deposition of heavy metals like lead and mercury over Europe by means of numerical modelling frameworks. Lead, in particular, served as a model substance to study the relationship between emissions and human exposure. The major source of airborne lead in Germany was fuel combustion until the 1980s, when its use as a gasoline additive declined due to political decisions. Since then, the concentration of lead in ambient air and the deposition rates have decreased in the same way as the consumption of leaded fuel. These observations could further be related to the decrease of lead concentrations in human blood measured during medical studies in several German cities. Based on the experience with models for heavy metal transport and deposition, we have now started to turn our research focus to organic substances, e.g., PAHs. PAHs have been recognized as significant airborne carcinogens for several decades. However, it is not yet possible to precisely quantify the risk of human exposure to those compounds. Physical and chemical data, known from the literature, describing the partitioning of the compounds between particle and gas phase and their degradation in the gas phase are implemented in a tropospheric chemistry module. In this way, the fate of PAHs in the atmosphere due to different particle types and sizes and different meteorological conditions is tested before carrying out large-scale and long-term studies. First model runs have been carried out for benzo(a)pyrene as one of the principal carcinogenic PAHs. Up to now, nearly nothing is known about degradation reactions of particle-bound BaP. Thus, they could not be taken into account in the model so far. On the other hand, the proportion of BaP in the gas phase has to be considered at higher ambient

  16. Using stochastically-generated subcolumns to represent cloud structure in a large-scale model

    SciTech Connect

    Pincus, R; Hemler, R; Klein, S A

    2005-12-08

    A new method for representing subgrid-scale cloud structure, in which each model column is decomposed into a set of subcolumns, has been introduced into the Geophysical Fluid Dynamics Laboratory's global climate model AM2. Each subcolumn in the decomposition is homogeneous but the ensemble reproduces the initial profiles of cloud properties including cloud fraction, internal variability (if any) in cloud condensate, and arbitrary overlap assumptions that describe vertical correlations. These subcolumns are used in radiation and diagnostic calculations, and have allowed the introduction of more realistic overlap assumptions. This paper describes the impact of these new methods for representing cloud structure in instantaneous calculations and long-term integrations. Shortwave radiation computed using subcolumns and the random overlap assumption differs in the global annual average by more than 4 W/m² from the operational radiation scheme in instantaneous calculations; much of this difference is counteracted by a change in the overlap assumption to one in which overlap varies continuously with the separation distance between layers. Internal variability in cloud condensate, diagnosed from the mean condensate amount and cloud fraction, has about the same effect on radiative fluxes as does the ad hoc tuning accounting for this effect in the operational radiation scheme. Long simulations with the new model configuration show little difference from the operational model configuration, while statistical tests indicate that the model does not respond systematically to the sampling noise introduced by the approximate radiative transfer techniques introduced to work with the subcolumns.
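
    The generator at the heart of such a scheme is short. Below is a hedged sketch of the simplest case, random overlap, in which each layer is sampled independently; it is not the GFDL implementation.

```python
import numpy as np

def subcolumns_random_overlap(cloud_frac, n_sub, rng):
    """Binary cloud subcolumns whose ensemble mean reproduces the input
    cloud-fraction profile; independent draws per layer give random overlap."""
    u = rng.random((n_sub, len(cloud_frac)))
    return (u < np.asarray(cloud_frac)).astype(int)

profile = [0.1, 0.4, 0.7, 0.2]                 # cloud fraction per level (toy)
cols = subcolumns_random_overlap(profile, 10000, np.random.default_rng(0))
print(cols.mean(axis=0))                       # ~= [0.1, 0.4, 0.7, 0.2]
```

    Correlating the uniform draws between nearby layers (for example with a decorrelation length) yields the separation-dependent overlap described above; the random-overlap case is the uncorrelated limit.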

  17. Comparing wave shoaling methods used in large-scale coastal evolution modeling

    NASA Astrophysics Data System (ADS)

    Limber, P. W.; Adams, P. N.; Murray, A.

    2013-12-01

    output where wave height is approximately one-half of the water depth (a standard wave breaking threshold). The goal of this modeling exercise is to understand under what conditions a simple wave model is sufficient for simulating coastline evolution, and when using a more complex shoaling routine can optimize a coastline model. The Coastline Evolution Model (CEM; Ashton and Murray, 2006) is used to show how different shoaling routines affect modeled coastline behavior. The CEM currently includes the most basic wave shoaling approach to simulate cape and spit formation. We will instead couple it to SWAN, using the insight from the comprehensive wave model (above) to guide its application. This will allow waves transformed over complex bathymetry, such as cape-associated shoals and ridges, to be input for the CEM so that large-scale coastline behavior can be addressed in less idealized environments. Ashton, A., and Murray, A.B., 2006, High-angle wave instability and emergent shoreline shapes: 1. Modeling of sand waves, flying spits, and capes: Journal of Geophysical Research, v. 111, p. F04011, doi:10.1029/2005JF000422.
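
    The "simple" end of the spectrum being compared here is linear shoaling with a depth-proportional breaking cutoff, which fits in a few lines. A sketch under linear wave theory, with illustrative wave parameters and the H ≥ 0.5h threshold quoted above:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(T, h):
    """Solve the linear dispersion relation w^2 = g*k*tanh(k*h) by Newton iteration."""
    w = 2.0 * np.pi / T
    k = w * w / (G * np.sqrt(np.tanh(w * w * h / G)))  # standard initial guess
    for _ in range(20):
        f = G * k * np.tanh(k * h) - w * w
        df = G * np.tanh(k * h) + G * k * h / np.cosh(k * h) ** 2
        k -= f / df
    return k

def group_speed(T, h):
    k = wavenumber(T, h)
    n = 0.5 * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))
    return n * (2.0 * np.pi / T) / k                   # Cg = n * C

def shoaled_height(H0, T, h0, h):
    """Conservation of energy flux: H = H0 * sqrt(Cg(h0)/Cg(h)); no refraction."""
    return H0 * np.sqrt(group_speed(T, h0) / group_speed(T, h))

# March a 1 m, 10 s wave shoreward from 50 m depth; flag breaking at H >= 0.5*h.
for h in (50.0, 20.0, 10.0, 5.0, 3.0, 2.0):
    H = shoaled_height(1.0, 10.0, 50.0, h)
    print(f"h = {h:4.1f} m   H = {H:.2f} m" + ("  <- breaking" if H >= 0.5 * h else ""))
```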

  18. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for some particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC Experiments' testing are discussed.

  19. Mathematical model of influenza A virus production in large-scale microcarrier culture.

    PubMed

    Möhler, Lars; Flockerzi, Dietrich; Sann, Heiner; Reichl, Udo

    2005-04-01

    A mathematical model that describes the replication of influenza A virus in animal cells in large-scale microcarrier culture is presented. The virus is produced in a two-step process, which begins with the growth of adherent Madin-Darby canine kidney (MDCK) cells. After several washing steps, serum-free virus maintenance medium is added, and the cells are infected with equine influenza virus (A/Equi 2 (H3N8), Newmarket 1/93). A time-delayed model is considered that has three state variables: the number of uninfected cells, infected cells, and free virus particles. It is assumed that uninfected cells adsorb the virus added at the time of infection. The infection rate is proportional to the number of uninfected cells and free virions. Depending on the multiplicity of infection (MOI), not necessarily all cells are infected by this first step leading to the production of free virions. Newly produced viruses can infect the remaining uninfected cells in a chain reaction. To follow the time course of virus replication, infected cells were stained with fluorescent antibodies. Quantitation of influenza viruses by a hemagglutination assay (HA) enabled the estimation of the total number of new virions produced, which is relevant for the production of inactivated influenza vaccines. It takes about 4-6 h before visibly infected cells can be identified on the microcarriers, followed by a strong increase in HA titers after 15-16 h in the medium. The maximum virus yield Vmax was about 1×10¹⁰ virions/mL (2.4 log HA units/100 µL), which corresponds to a burst size ratio of about 18,755 virus particles produced per cell. The model tracks the time course of uninfected and infected cells as well as virus production. It suggests that small variations (<10%) in initial values and specific rates do not have a significant influence on Vmax. The main parameters relevant for the optimization of virus antigen yields are the specific virus replication rate and the specific cell death rate due to infection.
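
    The three-state structure of the model is easy to reproduce schematically. The sketch below omits the paper's time delay and uses invented rate constants, so it shows only the qualitative chain reaction, not the fitted system:

```python
from scipy.integrate import solve_ivp

# Illustrative rate constants (per hour); not the paper's fitted values.
k_inf, k_rep, k_death, k_deg = 5e-10, 200.0, 0.05, 0.02

def rhs(t, y):
    U, I, V = y                       # uninfected cells, infected cells, virions
    infection = k_inf * U * V
    return [-infection,               # uninfected cells become infected
            infection - k_death * I,  # infected cells die at rate k_death
            k_rep * I - infection - k_deg * V]  # production, adsorption, decay

y0 = [2e9, 0.0, 5e7]                  # cells and virions at infection (MOI 0.025)
sol = solve_ivp(rhs, (0.0, 60.0), y0, method="LSODA", rtol=1e-6)
print(f"free virions after 60 h: {sol.y[2, -1]:.2e}")
```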

  20. Using geophysical observations to constrain dynamic models of large-scale continental deformation in Asia

    NASA Astrophysics Data System (ADS)

    Flesch, L. M.; Holt, W. E.; Haines, A. J.

    2003-04-01

    The deformation of continental lithosphere is controlled by a variety of factors, including (1) body forces, (2) basal tractions, (3) boundary forces, and (4) rheology. Obtaining unique solutions that describe the dynamics of continental lithosphere is extremely challenging. Limitations are associated with inadequate observations that can uniquely constrain the dynamics as well as inadequate numerical methods. However, the compilation of space geodetic, seismic, and geologic data over the past 10-15 years has made it possible to make significant strides toward understanding the dynamics of large-scale continental deformation. The first step in making inferences about continental dynamics involves a quantification of the kinematics of active deformation (measurement of the velocity gradient tensor field). We interpolate both GPS velocity vectors and Quaternary strain rates with continuous spline functions (bi-cubic Bessel interpolation) to define a model velocity gradient tensor field solution (strain rates, rotation rates, and relative motions). In our methodology, grid areas can be defined to be small enough that fault zones are narrow and regions between faults (crustal blocks) possess rigid behavior. Our dynamic models are solutions to equations for a thin sheet, accounting for body forces associated with horizontal density variations and edge forces associated with accommodation of relative plate motion. The formalism can also include basal tractions associated with coupling between the lithosphere and deeper mantle circulation. These dynamic models allow for lateral variations of viscosity, and they allow for different power-law rheologies with power-law exponents ranging from n = 1-9. Thus our dynamic models account for possible block-like behavior (high effective viscosity) as well as concentrated strain within shear zones. Kinematic results to date for central Asia show block-like behavior for large regions such as South China, Tarim Basin, Amurian block

  1. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    SciTech Connect

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
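
    As a schematic of the WTG idea (not of the authors' code): the parameterized large-scale vertical velocity is whatever is needed to relax the simulated temperature profile back toward a target profile over a fixed time scale. A minimal sketch with invented profiles and relaxation time:

```python
import numpy as np

def wtg_vertical_velocity(theta, theta_ref, z, tau=10800.0):
    """Weak-temperature-gradient closure: diagnose the large-scale w that
    relaxes theta toward theta_ref, from w * dtheta/dz = (theta - theta_ref)/tau."""
    dtheta_dz = np.maximum(np.gradient(theta_ref, z), 1e-5)  # static stability
    return (theta - theta_ref) / (tau * dtheta_dz)

z = np.linspace(500.0, 15000.0, 30)                    # free troposphere (m)
theta_ref = 300.0 + 4e-3 * z                           # reference profile (K)
theta = theta_ref + 0.5 * np.sin(np.pi * z / 15000.0)  # warm column anomaly (K)
w = wtg_vertical_velocity(theta, theta_ref, z)
print(w.max())   # ~0.01 m/s upward where the column is anomalously warm
```

    WPG differs by coupling through a pressure-gradient (momentum) equation rather than this direct temperature relaxation, which is what removes the gravity-wave resonance discussed above.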

  2. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…
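
    The three statistics named here are one-liners with networkx; a Watts-Strogatz graph serves below as a stand-in for the word-association data:

```python
import networkx as nx

# Toy stand-in for a word-association network: a connected small-world graph.
G = nx.connected_watts_strogatz_graph(n=2000, k=10, p=0.1, seed=0)

print(f"density    = {nx.density(G):.4f}")                       # sparse connectivity
print(f"avg path L = {nx.average_shortest_path_length(G):.2f}")  # short paths
print(f"clustering = {nx.average_clustering(G):.3f}")            # local clustering
```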

  3. Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…

  4. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    DOE PAGES

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.

  5. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    NASA Astrophysics Data System (ADS)

    Güntner, Andreas

    2002-07-01

    the framework of an integrated model which contains modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture, in particular, the effect of structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, such as re-infiltration of surface runoff, which are of particular importance in semi-arid environments, can thus be represented within the large-scale model in a simplified form. Depending on the resolution of available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but by the distribution of sub-scale units and by statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa that respect specific features of semi-arid hydrology are: (1) A two-layer model for evapotranspiration comprises energy transfer at the soil surface (including soil evaporation), which is of importance in view of the mainly sparse vegetation cover; additionally, vegetation parameters are differentiated in space and time depending on the occurrence of the rainy season. (2) The infiltration module represents, in particular, infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach respecting different reservoir size classes and their interaction via the river network is applied. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa. (5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied.

  6. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with the Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  7. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  8. Modeling the MJO rain rates using parameterized large scale dynamics: vertical structure, radiation, and horizontal advection of dry air

    NASA Astrophysics Data System (ADS)

    Wang, S.; Sobel, A. H.; Nie, J.

    2015-12-01

    Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics are employed: the conventional weak-temperature-gradient (WTG) approximation, a vertical-mode-based spectral WTG (SWTG), and damped gravity-wave coupling (DGW). The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother with a peak in midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of the large-scale vertical motion during the MJO active phases, while experiments in which the effects of clouds on radiation are disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments in which interactive radiation is used produce poorer agreement with observations than those with imposed time-varying radiative heating. Our results highlight the

  9. Parameterizing mesoscale and large-scale ice clouds in general circulation models

    NASA Technical Reports Server (NTRS)

    Donner, Leo J.

    1990-01-01

    The paper discusses GCM parameterizations for two types of ice clouds: (1) ice clouds formed by large-scale lifting, often of limited vertical extent but usually of large-scale horizontal extent; and (2) ice clouds formed as anvils in convective systems, often of moderate vertical extent but of mesoscale size horizontally. It is shown that the former type of clouds can be parameterized with reference to an equilibrium between ice generation by deposition from vapor, and ice removal by crystal settling. The same mechanisms operate in the mesoscale clouds, but the ice content in these cases is considered to be more closely linked to the moisture supplied to the anvil by cumulus towers. It is shown that a GCM can simulate widespread ice clouds of both types.

  10. Large-Scale Modeling of the Entry of Solar Wind Ions into the Magnetosphere

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Richard, R. L.; Escoubet, C. P.; Pitout, F.

    2012-12-01

    Ion observations made by multiple spacecraft in the mid-altitude cusps have revealed the complexity of the entry of the solar wind plasma at the magnetospheric boundary. In particular, ion energy-latitude dispersions measured by the Cluster spacecraft often indicate the formation of large-scale structures in ion precipitation. We have carried out large-scale simulations of the entry of ions at the dayside magnetopause. Our study is based on using the time-dependent electric and magnetic fields predicted by three-dimensional global MHD simulations to compute the trajectories of large samples of solar wind ions launched upstream of the bow shock for different solar wind conditions. Particle information collected in the simulations is then analyzed to determine the relation between the structures observed in the cusp and ion injection processes at the magnetospheric boundary. We discuss the results of the study in the context of entry and acceleration processes at the dayside magnetopause.
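
    The test-particle machinery behind such a study can be sketched with the standard Boris pusher; here uniform E and B values stand in for the time-dependent MHD fields, and all numbers are illustrative:

```python
import numpy as np

QM = 9.58e7  # proton charge-to-mass ratio, C/kg

def boris_push(x, v, E, B, dt):
    """One step of the standard Boris pusher in given E and B fields."""
    v_minus = v + 0.5 * dt * QM * E            # first half electric kick
    t = 0.5 * dt * QM * B                      # magnetic rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)    # full magnetic rotation
    v_new = v_plus + 0.5 * dt * QM * E         # second half electric kick
    return x + dt * v_new, v_new

# A solar-wind proton in uniform stand-in fields: 400 km/s flow, 5 nT field.
x, v = np.zeros(3), np.array([4.0e5, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 5.0e-9])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, dt=0.05)
print(np.linalg.norm(v))   # speed conserved under pure gyration, ~4.0e5 m/s
```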

  11. Development of Large-Scale Forcing Data for GoAmazon2014/5 Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Tang, S.; Xie, S.; Zhang, Y.; Schumacher, C.; Upton, H. M.; Ahlgrimm, M.; Feng, Z.

    2015-12-01

    The Observations and Modeling of the Green Ocean Amazon 2014-2015 (GoAmazon2014/5) field campaign is an international collaborative experiment conducted near Manaus, Brazil, from January 2014 through December 2015. This experiment is designed to enable the study of aerosols, tropical clouds, convection, and their interactions. To support modeling studies of these processes with data collected from the GoAmazon2014/5 campaign, we have developed large-scale forcing data (e.g., vertical velocities and advective tendencies) for the second intensive operational period (IOP) of GoAmazon2014/5, from 1 Sep to 10 Oct 2014. The method used in this study is the constrained variational analysis method, in which the large-scale state fields are constrained by surface and top-of-atmosphere observations (e.g., surface precipitation and outgoing longwave radiation) to conserve column-integrated mass, moisture and dry static energy. To address potential uncertainties in the derived forcing data due to uncertainties in surface precipitation, two sets of large-scale forcing data are developed based on the ECMWF analysis constrained by two precipitation products, from the SIPAM radar and from TRMM 3B42, respectively. Our initial analysis shows large differences between these two precipitation products, which cause considerable differences in the derived large-scale forcing data. The sensitivity of the large-scale forcing data to other surface constraints, such as surface latent and sensible heat fluxes, will be explored. The characteristics of the large-scale forcing structures for selected cases will be discussed.
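
    In its simplest form, a constrained variational adjustment has a closed-form solution: minimally perturb a background profile so that one linear column budget holds exactly. The sketch below is a drastic simplification of the actual multi-constraint analysis, with invented numbers:

```python
import numpy as np

def variational_adjust(x_b, a, c):
    """Least-squares-minimal adjustment of background x_b so that a @ x = c."""
    return x_b + a * (c - a @ x_b) / (a @ a)

# Toy column: a moisture-flux-divergence profile nudged so its mass-weighted
# integral matches an observed surface budget (e.g., a precipitation constraint).
dp = np.full(20, 50.0)                                  # layer thickness (hPa)
x_b = np.random.default_rng(0).normal(0.0, 1.0, 20)    # first-guess profile
x = variational_adjust(x_b, dp, c=-2.0)
print(f"constraint after adjustment: {dp @ x:.6f}")     # -> -2.000000
```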

  12. LARGE-SCALE CYCLOGENESIS, FRONTAL WAVES AND DUST ON MARS: MODELING AND DIAGNOSTIC CONSIDERATIONS

    NASA Astrophysics Data System (ADS)

    Hollingsworth, J.; Kahre, M.

    2009-12-01

    During late autumn through early spring, Mars’ northern middle and high latitudes exhibit very strong equator-to-pole mean temperature contrasts (i.e., baroclinicity). From data collected during the Viking era and recent observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) missions, this strong baroclinicity supports vigorous large-scale eastward-traveling weather systems (i.e., transient synoptic-period waves). These systems also have accompanying sub-synoptic-scale ramifications for the atmospheric environment through cyclonic/anticyclonic winds, intense deformations and contractions/dilations in temperatures, and sharp perturbations amongst atmospheric tracers (e.g., dust and volatiles/condensates). Mars’ northern-hemisphere frontal waves can exhibit extended meridional structure, and appear to be active agents in the planet’s dust cycle. Their parent cyclones tend to develop, travel eastward, and decay preferentially within certain geographic regions (i.e., storm zones). We adapt a version of the NASA Ames Mars general circulation model (GCM) at high horizontal resolution that includes the lifting, transport and sedimentation of radiatively-active dust to investigate the nature of cyclogenesis and frontal-wave circulations (both horizontally and vertically), and regional dust transport and concentration within the atmosphere. Near late winter and early spring (Ls ≈ 320-350°), high-resolution simulations indicate that the predominant dust lifting occurs through wind-stress lifting, in particular over the Tharsis highlands of the western hemisphere and, to a lesser extent, over the Arabia highlands of the eastern hemisphere. The former region also indicates considerable interaction between upslope/downslope (i.e., nocturnal) flows and the synoptic/subsynoptic-scale circulations associated with cyclogenesis, whereby dust can be readily “focused” within a frontal-wave disturbance and carried downstream both

  13. Realistic molecular model of kerogen's nanostructure

    NASA Astrophysics Data System (ADS)

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E.; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J.-M.; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms.

  14. Realistic molecular model of kerogen's nanostructure.

    PubMed

    Bousige, Colin; Ghimbeu, Camélia Matei; Vix-Guterl, Cathie; Pomerantz, Andrew E; Suleimenova, Assiya; Vaughan, Gavin; Garbarino, Gaston; Feygenson, Mikhail; Wildgruber, Christoph; Ulm, Franz-Josef; Pellenq, Roland J-M; Coasne, Benoit

    2016-05-01

    Despite kerogen's importance as the organic backbone for hydrocarbon production from source rocks such as gas shale, the interplay between kerogen's chemistry, morphology and mechanics remains unexplored. As the environmental impact of shale gas rises, identifying functional relations between its geochemical, transport, elastic and fracture properties from realistic molecular models of kerogens becomes all the more important. Here, by using a hybrid experimental-simulation method, we propose a panel of realistic molecular models of mature and immature kerogens that provide a detailed picture of kerogen's nanostructure without considering the presence of clays and other minerals in shales. We probe the models' strengths and limitations, and show that they predict essential features amenable to experimental validation, including pore distribution, vibrational density of states and stiffness. We also show that kerogen's maturation, which manifests itself as an increase in the sp2/sp3 hybridization ratio, entails a crossover from plastic-to-brittle rupture mechanisms. PMID:26828313

  15. Applying Multidimensional Item Response Theory Models in Validating Test Dimensionality: An Example of K-12 Large-Scale Science Assessment

    ERIC Educational Resources Information Center

    Li, Ying; Jiao, Hong; Lissitz, Robert W.

    2012-01-01

    This study investigated the application of multidimensional item response theory (IRT) models to validate test structure and dimensionality. Multiple content areas or domains within a single subject often exist in large-scale achievement tests. Such areas or domains may cause multidimensionality or local item dependence, which both violate the…

  16. Path2Models: large-scale generation of computational models from biochemical pathway maps

    PubMed Central

    2013-01-01

    Background Systems biology projects and omics technologies have led to a growing number of biochemical pathway models and reconstructions. However, the majority of these models are still created de novo, based on literature mining and the manual processing of pathway data. Results To increase the efficiency of model creation, the Path2Models project has automatically generated mathematical models from pathway representations using a suite of freely available software. Data sources include KEGG, BioCarta, MetaCyc and SABIO-RK. Depending on the source data, three types of models are provided: kinetic, logical and constraint-based. Models from over 2 600 organisms are encoded consistently in SBML, and are made freely available through BioModels Database at http://www.ebi.ac.uk/biomodels-main/path2models. Each model contains the list of participants, their interactions, the relevant mathematical constructs, and initial parameter values. Most models are also available as easy-to-understand graphical SBGN maps. Conclusions To date, the project has resulted in more than 140 000 freely available models. Such a resource can tremendously accelerate the development of mathematical models by providing initial starting models for simulation and analysis, which can be subsequently curated and further parameterized. PMID:24180668
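
    As a usage illustration, one of the generated SBML files can be inspected programmatically. This is a minimal sketch assuming the python-libsbml bindings and a locally downloaded model; the file name is a hypothetical placeholder, not a real Path2Models identifier.

    ```python
    # Minimal sketch: inspect a downloaded Path2Models SBML file with python-libsbml.
    import libsbml

    doc = libsbml.readSBML("path2models_example.xml")   # hypothetical file name
    if doc.getNumErrors() > 0:
        raise RuntimeError(doc.getError(0).getMessage())

    model = doc.getModel()
    print(model.getId(), "-", model.getNumSpecies(), "species,",
          model.getNumReactions(), "reactions")
    for species in model.getListOfSpecies():
        # initial values may be unset (NaN) depending on the source pathway
        print(" ", species.getId(), species.getInitialConcentration())
    ```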

  17. A large scale microwave emission model for forests. Contribution to the SMOS algorithm

    NASA Astrophysics Data System (ADS)

    Rahmoune, R.; Della Vecchia, A.; Ferrazzoli, P.; Guerriero, L.; Martin-Porqueras, F.

    2009-04-01

    It is well known that surface soil moisture plays an important role in the water cycle and the global climate. SMOS is an L-band multi-angle dual-polarization microwave radiometer for global monitoring of this variable. In areas covered by forests the opacity is relatively high, and retrieving soil moisture remains problematic. A significant percentage of SMOS pixels at the global scale is affected by fractional forest cover. Whereas the effect of the vegetation can be corrected for using a simple radiative model, in the case of dense forests the wave penetration is limited and the sensitivity to variations of soil moisture is poor. However, most pixels are mixed, and a reliable estimate of forest emissivity is important to retrieve the soil moisture of the areas less affected by forest cover. Moreover, there are many sparse woodlands where the sensitivity to variations of soil moisture is still acceptable. At the scale of spaceborne radiometers, it is difficult to have detailed knowledge of the variables that affect the overall emissivity. In order to manage these problems effectively, the electromagnetic model developed at Tor Vergata University was combined with information available from the forestry literature. Using allometric equations and other information, the geometrical and dielectric inputs required by the model were related to global variables available at large scale, such as the Leaf Area Index. This procedure is necessarily approximate. In a first version of the model, forest variables were assumed to be constant in time, and were simply related to the maximum yearly value of Leaf Area Index. Moreover, a single distribution of trunk diameters was assumed. Finally, the temperature distribution within the crown canopy was assumed to be uniform. The model is being refined in order to consider seasonal variations of foliage cover, subdivided into arboreous foliage and understory contributions. Different distributions of trunk diameter

  18. Comparing large-scale computational approaches to epidemic modeling: agent based versus structured metapopulation models

    NASA Astrophysics Data System (ADS)

    Gonçalves, Bruno; Ajelli, Marco; Balcan, Duygu; Colizza, Vittoria; Hu, Hao; Ramasco, José; Merler, Stefano; Vespignani, Alessandro

    2010-03-01

    We provide, for the first time, a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the evolution of a baseline pandemic event in Italy. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, integrating airline travel flow data with short-range human mobility patterns at the global scale. Both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing of the order of a few days. The age-breakdown analysis shows that similar attack rates are obtained for the younger age classes.

  19. Testing LTB void models without the cosmic microwave background or large scale structure: new constraints from galaxy ages

    SciTech Connect

    Putter, Roland de; Verde, Licia; Jimenez, Raul E-mail: liciaverde@icc.ub.edu

    2013-02-01

    We present new observational constraints on inhomogeneous models based on observables independent of the CMB and large-scale structure. Using Bayesian evidence we find very strong evidence for the homogeneous LCDM model, thus disfavouring inhomogeneous models. Our new constraints are based on quantities independent of the growth of perturbations and rely on cosmic clocks based on atomic physics and on the local density of matter.

  20. The Nature of Global Large-scale Sea Level Variability in Relation to Atmospheric Forcing: A Modeling Study

    NASA Technical Reports Server (NTRS)

    Fukumori, I.; Raghunath, R.; Fu, L. L.

    1996-01-01

    The relation between large-scale sea level variability and ocean circulation is studied using a numerical model. A global primitive equation model of the ocean is forced by daily winds and climatological heat fluxes corresponding to the period from January 1992 to February 1996. The physical nature of the temporal variability, from periods of days to a year, is examined based on spectral analyses of model results and comparisons with satellite altimetry and tide gauge measurements.

  1. Model parameter estimation with data assimilation and MCMC in small and large scale models

    NASA Astrophysics Data System (ADS)

    Susiluoto, Jouni; Hakkarainen, Janne

    2014-05-01

    Climate models in general have non-linear responses to changing environmental forcing. Many of the participating processes contain, partly for computational reasons, simplifications, that is, parametrizations of physical phenomena. Because complete information is lacking and the model world therefore does not match the real world, the parametrizations are not directly measurable; rather, they are approximations of the properties of abstract, simplified processes. Hence they cannot be tuned directly with observations. We investigate how MCMC, using an objective function constructed from the extended Kalman filter, helps us understand what the studied parameter posterior PDFs look like. This is done at different levels: using the Lorenz96 model as a testbed, and then exporting the methods to a full-blown climate model, ECHAM5. Additionally, the limitations of the method are discussed.
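
    A minimal sketch of the sampling step described here, assuming a toy scalar-parameter model with a Gaussian misfit standing in for the EKF-based objective function; it illustrates random-walk Metropolis in general, not the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_model(theta, t):
        return theta * np.sin(t)          # placeholder for Lorenz96/ECHAM5 output

    t = np.linspace(0.0, 10.0, 50)
    obs = toy_model(1.3, t) + rng.normal(0.0, 0.1, t.size)   # synthetic "truth"

    def neg_log_post(theta):
        resid = obs - toy_model(theta, t)
        return 0.5 * np.sum((resid / 0.1) ** 2)   # Gaussian misfit, flat prior

    theta, cost = 1.0, neg_log_post(1.0)
    chain = []
    for _ in range(5000):
        prop = theta + rng.normal(0.0, 0.05)               # random-walk proposal
        prop_cost = neg_log_post(prop)
        if np.log(rng.uniform()) < cost - prop_cost:       # Metropolis acceptance
            theta, cost = prop, prop_cost
        chain.append(theta)

    print("posterior mean after burn-in:", np.mean(chain[1000:]))
    ```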

  2. A Computational Framework for Realistic Retina Modeling.

    PubMed

    Martínez-Cañada, Pablo; Morillas, Christian; Pino, Begoña; Ros, Eduardo; Pelayo, Francisco

    2016-11-01

    Computational simulations of the retina have led to valuable insights about the biophysics of its neuronal activity and processing principles. A great number of retina models have been proposed to reproduce the behavioral diversity of the different visual processing pathways. While many of these models share common computational stages, previous efforts have been more focused on fitting specific retina functions rather than generalizing them beyond a particular model. Here, we define a set of computational retinal microcircuits that can be used as basic building blocks for the modeling of different retina mechanisms. To validate the hypothesis that similar processing structures may be repeatedly found in different retina functions, we implemented a series of retina models simply by combining these computational retinal microcircuits. Accuracy of the retina models for capturing neural behavior was assessed by fitting published electrophysiological recordings that characterize some of the best-known phenomena observed in the retina: adaptation to the mean light intensity and temporal contrast, and differential motion sensitivity. The retinal microcircuits are part of a new software platform for efficient computational retina modeling from single-cell to large-scale levels. It includes an interface with spiking neural networks that allows simulation of the spiking response of ganglion cells and integration with models of higher visual areas. PMID:27354192

  3. COST MINIMIZATION MODEL OF OCEANGOING CARRIERS ON A LARGE-SCALE INTERNATIONAL MARITIME CONTAINER SHIPPING NETWORK CONSIDERING CHARACTERISTICS OF PORTS

    NASA Astrophysics Data System (ADS)

    Shibasaki, Ryuichi; Watanabe, Tomihiro; Ieda, Hitoshi

    This paper deals with a cost minimization problem for oceangoing carriers on a large-scale network of the international maritime container shipping industry, in order to measure the impact of port policies in each country, including Japan. Concretely, the authors develop a model that decides, for each oceangoing carrier group, which ports to call at and the size of containership on each route, taking into consideration the construction of deeper berths to accommodate the enlargement of containerships, the decrease of various port charges per cargo achieved by attracting cargo to a single port, and the congestion caused by excessive aggregation. The developed model is applied to the actual large-scale international maritime container shipping network in Eastern Asia. The performance of the developed model is validated, and the sensitivity of the model output is confirmed from the viewpoints of the economies and diseconomies of scale included in the model.

  4. Hydro-economic Modeling: Reducing the Gap between Large Scale Simulation and Optimization Models

    NASA Astrophysics Data System (ADS)

    Forni, L.; Medellin-Azuara, J.; Purkey, D.; Joyce, B. A.; Sieber, J.; Howitt, R.

    2012-12-01

    The integration of hydrological and socio-economic components into hydro-economic models has become essential for water resources policy and planning analysis. In this study we integrate the economic value of water in irrigated agricultural production using SWAP (a StateWide Agricultural Production model for California) and WEAP (Water Evaluation and Planning system), a climate-driven hydrological model. The integration of the models is performed using a step-function approximation of the water demand curves from SWAP, and by relating the demand tranches to the priority scheme in WEAP. To do so, a modified version of SWAP called SWEAP was developed, which has the Planning Area delimitations of WEAP, a maximum entropy model to estimate evenly sized steps (tranches) of the derived water demand functions, and a translation of water tranches into cropland. In addition, a modified version of WEAP called ECONWEAP was created, with minor structural changes for the incorporation of land decisions from SWEAP and a series of iterations run via an external VBA script. This paper shows the validity of this integration by comparing revenues from WEAP versus ECONWEAP, as well as an assessment of the tranche approximation. Results show a significant increase in the resulting agricultural revenues for our case study in California's Central Valley using ECONWEAP, while maintaining the same hydrology and regional water flows. These results highlight the gains from allocating water based on its economic value rather than on priority-based water allocation systems. Furthermore, this work shows the potential of integrating optimization and simulation-based hydrologic models like ECONWEAP.
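
    The step-function approximation of demand curves can be illustrated in a few lines. The inverse demand function, tranche count and mapping to priorities below are all hypothetical, shown only to make the SWAP-to-WEAP coupling idea concrete.

    ```python
    import numpy as np

    def willingness_to_pay(q):
        """Hypothetical inverse demand curve: $/unit at cumulative delivery q."""
        return 100.0 * np.exp(-0.02 * q)

    q_max, n_tranches = 200.0, 5
    edges = np.linspace(0.0, q_max, n_tranches + 1)

    tranches = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # value each evenly sized tranche at the midpoint willingness to pay
        q_mid = 0.5 * (lo + hi)
        tranches.append({"volume": hi - lo, "value": willingness_to_pay(q_mid)})

    # higher-value tranches map to higher allocation priorities in the WEAP scheme
    for rank, tr in enumerate(sorted(tranches, key=lambda t: -t["value"]), 1):
        print(f"priority {rank}: {tr['volume']:.0f} units at ${tr['value']:.1f}/unit")
    ```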

  5. The Challenge of Realistic TPV System Modeling

    NASA Astrophysics Data System (ADS)

    Aschaber, J.; Hebling, C.; Luther, J.

    2003-01-01

    Realistic modeling of a TPV system is a very demanding task. For a rough estimation of system limits, many assumptions are made that simplify the complexity of a thermophotovoltaic converter. It is obvious that real systems cannot be described this way. An alternative approach that can deal with all these complexities, such as arbitrary geometries, participating media and temperature distributions, is the Monte Carlo method (MCM). This statistical method simulates radiative energy transfer by tracking the histories of a number of photons, beginning with emission by a radiating surface and ending with absorption on a surface or in a medium. All interactions along the way are considered. The disadvantage of large computation times compared to other methods is no longer a weakness given the speed of today's computers. This article points out different ways of achieving realistic TPV system simulation, focusing on statistical methods.
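
    A minimal sketch of the photon-history bookkeeping the MCM relies on, reduced to a 1D absorbing gap between emitter and cell; the geometry, absorption coefficient and cell absorptivity are invented for illustration and stand in for a full ray-traced enclosure.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    gap = 1e-3             # emitter-to-cell spacing (m), hypothetical
    absorption = 200.0     # absorption coefficient of the medium (1/m), hypothetical
    cell_absorptivity = 0.7

    absorbed_cell = absorbed_gas = reflected = 0
    n_photons = 100_000
    for _ in range(n_photons):
        # free path sampled from Beer-Lambert attenuation statistics
        path = -np.log(rng.uniform()) / absorption
        if path < gap:
            absorbed_gas += 1                  # absorbed in the participating medium
        elif rng.uniform() < cell_absorptivity:
            absorbed_cell += 1                 # absorbed at the PV cell
        else:
            reflected += 1                     # reflected back toward the emitter

    print("fraction absorbed by cell:", absorbed_cell / n_photons)
    ```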

  6. Computational Models of Consumer Confidence from Large-Scale Online Attention Data: Crowd-Sourcing Econometrics

    PubMed Central

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting. PMID:25826692

  7. Computational models of consumer confidence from large-scale online attention data: crowd-sourcing econometrics.

    PubMed

    Dong, Xianlei; Bollen, Johan

    2015-01-01

    Economies are instances of complex socio-technical systems that are shaped by the interactions of large numbers of individuals. The individual behavior and decision-making of consumer agents is determined by complex psychological dynamics that include their own assessment of present and future economic conditions as well as those of others, potentially leading to feedback loops that affect the macroscopic state of the economic system. We propose that the large-scale interactions of a nation's citizens with its online resources can reveal the complex dynamics of their collective psychology, including their assessment of future system states. Here we introduce a behavioral index of Chinese Consumer Confidence (C3I) that computationally relates large-scale online search behavior recorded by Google Trends data to the macroscopic variable of consumer confidence. Our results indicate that such computational indices may reveal the components and complex dynamics of consumer psychology as a collective socio-economic phenomenon, potentially leading to improved and more refined economic forecasting. PMID:25826692

  8. Dynamics of the Polar Cusps for Active Solar Wind Conditions: Large-scale Modeling

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Richard, R. L.; Escoubet, C. P.; Taylor, M. G.; Laakso, H. E.; Masson, A.; Dandouras, I. S.; Reme, H.; Pitout, F.; Lucek, E. A.

    2010-12-01

    The energy-latitude dispersion of precipitating particles observed by spacecraft near the high-latitude dayside magnetosphere offers a unique opportunity to investigate the large-scale topology and dynamics of the polar cusps. In particular, consecutive crossings of the cusps made by the Cluster spacecraft in a string-of-pearls configuration are particularly well suited for investigating the temporal and spatial evolution of precipitating particles as solar wind discontinuities interact with the dayside magnetopause. We present the results of large-scale simulation studies based on Cluster observations of ion dispersions following rapid changes in the direction of the interplanetary magnetic field (IMF). First, we use three-dimensional magnetohydrodynamic (MHD) simulations to follow the evolution of the global topology of the magnetic field during the events. Subsequently, the time-dependent electric and magnetic fields predicted by the MHD simulations are utilized to compute the trajectories of large samples of solar wind ions launched upstream of the bow shock. We assess the results of the studies by comparing Cluster ion measurements with ion dispersions calculated from the simulations along the spacecraft trajectories and discuss the temporal evolution and spatial extent of precipitating particles in the context of the reconnection process at the dayside magnetopause.
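
    The test-particle step underlying such trajectory calculations can be sketched as follows: a standard Boris push advancing an ion in electric and magnetic fields. The frozen uniform field values and the time step are placeholders for the time-dependent MHD output; this is an illustration of the generic technique, not the authors' code.

    ```python
    import numpy as np

    Q_M = 9.58e7                      # proton charge-to-mass ratio (C/kg)

    def boris_step(x, v, E, B, dt):
        """Advance position and velocity by one time step with the Boris scheme."""
        v_minus = v + 0.5 * dt * Q_M * E          # first half electric kick
        t = 0.5 * dt * Q_M * B                    # magnetic rotation vector
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)
        v_plus = v_minus + np.cross(v_prime, s)   # full magnetic rotation
        v_new = v_plus + 0.5 * dt * Q_M * E       # second half electric kick
        return x + dt * v_new, v_new

    x = np.zeros(3)
    v = np.array([4.0e5, 0.0, 0.0])               # 400 km/s solar wind ion
    E = np.array([0.0, 1.0e-3, 0.0])              # V/m, placeholder field
    B = np.array([0.0, 0.0, 5.0e-9])              # 5 nT, placeholder field
    for _ in range(1000):
        x, v = boris_step(x, v, E, B, dt=1.0e-2)
    print("final position (m):", x)
    ```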

  9. Seismic Modelling of the Earth's Large-Scale Three-Dimensional Structure

    NASA Astrophysics Data System (ADS)

    Woodhouse, J. H.; Dziewonski, A. M.

    1989-07-01

    Several different kinds of seismological data, spanning more than three orders of magnitude in frequency, have been employed in the study of the Earth's large-scale three-dimensional structure. These yield different but overlapping information, which is leading to a coherent picture of the Earth's internal heterogeneity. In this article we describe several methods of seismic inversion and intercompare the resulting models. Models of upper-mantle shear velocity based upon mantle waveforms (Woodhouse & Dziewonski (J. geophys. Res. 89, 5953-5986 (1984))) (f ≲ 7 mHz) and long-period body waveforms (f ≲ 20 mHz; Woodhouse & Dziewonski (Eos, Wash. 67, 307 (1986))) show the mid-oceanic ridges to be the major low-velocity anomalies in the uppermost mantle, together with regions in the western Pacific, characterized by back-arc volcanism. High velocities are associated with the continents, and in particular with the continental shields, extending to depths in excess of 300 km. By assuming a given ratio between density and wave velocity variations, and a given mantle viscosity structure, such models have been successful in explaining some aspects of observed plate motion in terms of thermal convection in the mantle (Forte & Peltier (J. geophys. Res. 92, 3645-3679 (1987))). An important qualitative conclusion from such analysis is that the magnitude of the observed seismic anomalies is of the order expected in a convecting system having the viscosity, temperature derivatives and flow rates which characterize the mantle. Models of the lower mantle based upon P-wave arrival times (f ≈ 1 Hz; Dziewonski (J. geophys. Res. 89, 5929-5952 (1984)); Morelli & Dziewonski (Eos, Wash. 67, 311 (1986))) SH waveforms (f ≈ 20 mHz; Woodhouse & Dziewonski (1986)) and free oscillations (Giardini et al. (Nature, Lond. 325, 405-411 (1987); J. geophys. Res. 93, 13716-13742 (1988))) (f ≈ 0.5-5 mHz) show a very long wavelength pattern, largely contained in spherical harmonics of

  10. Proposed damage evolution model for large-scale finite element modeling of the dual coolant US-ITER TBM

    NASA Astrophysics Data System (ADS)

    Sharafat, S.; El-Awady, J.; Liu, S.; Diegele, E.; Ghoniem, N. M.

    2007-08-01

    Large-scale finite element modeling (FEM) of the US Dual Coolant Lead Lithium ITER Test Blanket Module, including damage evolution, is under development. A comprehensive rate-theory-based radiation damage creep deformation code was integrated with the ABAQUS FEM code. The advantage of this approach is that time-dependent in-reactor deformations and radiation damage can now be directly coupled with the 'material properties' of FEM analyses. The coupled FEM-creep damage model successfully simulated the simultaneous microstructure and stress evolution in small tensile test-bar structures. Applying the integrated creep/FEM code to large structures is still computationally prohibitive. Instead, for thermo-structural analysis of the DCLL TBM structure, the integrated FEM-creep damage model was used to develop the true stress-strain behavior of F82H ferritic steel. Based on this integrated damage evolution-FEM approach, it is proposed to use large-scale FEM analysis to identify and isolate critical stress areas for follow-up analysis using the detailed, fully integrated creep-FEM approach.

  11. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in the large-scale affected areas of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes; and emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367
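
    A minimal sketch of the genetic-algorithm idea, reduced to the location part of the problem with a demand-weighted distance objective standing in for the queuing response time; all data and GA settings are synthetic, and the encoding (a set of chosen depot sites) is one simple choice among many.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    demand_xy = rng.uniform(0, 100, (40, 2))       # rescue-demand locations
    demand_w = rng.uniform(1, 5, 40)               # demand weights
    sites = rng.uniform(0, 100, (15, 2))           # candidate depot sites
    k, pop_size, gens = 4, 60, 120

    def cost(choice):
        # each demand location is served by its nearest chosen depot
        d = np.linalg.norm(demand_xy[:, None, :] - sites[list(choice)][None, :, :], axis=2)
        return float(np.sum(demand_w * d.min(axis=1)))

    pop = [frozenset(rng.choice(15, size=k, replace=False).tolist()) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]               # selection: keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            i, j = rng.choice(len(elite), size=2, replace=False)
            genes = list(elite[i] | elite[j])      # crossover: pool both parents' sites
            child = set(rng.choice(genes, size=k, replace=False).tolist())
            if rng.uniform() < 0.2:                # mutation: swap one site at random
                child.pop()
                while len(child) < k:
                    child.add(int(rng.integers(15)))
            children.append(frozenset(child))
        pop = elite + children

    best = min(pop, key=cost)
    print("best depot sites:", sorted(best), "cost:", round(cost(best), 1))
    ```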

  12. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainty in the large-scale affected areas of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered across several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes; and emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367

  13. Geodynamic models of a Yellowstone plume and its interaction with subduction and large-scale mantle circulation

    NASA Astrophysics Data System (ADS)

    Steinberger, B. M.

    2012-12-01

    Yellowstone is a site of intra-plate volcanism with many traits of a classical "hotspot" (a chain of age-progressive volcanics with active volcanism at one end, associated with a flood basalt), yet it is atypical, as it is located near an area of Cenozoic subduction zones. Tomographic images show a tilted plume conduit in the upper mantle beneath Yellowstone; a similar tilt is predicted by simple geodynamic models: in these models, a conduit that is initially vertical (at the time when the corresponding Large Igneous Province erupted, ~15 Myr ago) gets tilted while it is advected in, and buoyantly rises through, large-scale flow. Generally eastward flow in the upper mantle in these models yields a predicted eastward tilt (i.e., the conduit comes up from the west). In these models, mantle flow is derived from density anomalies, which are inferred either from seismic tomography or from subduction history. One drawback of these models is that the initial plume location is chosen ad hoc such that the present-day position of Yellowstone is matched. Therefore, in another set of models, we study how subducted slabs (inferred from 300 Myr of subduction history) shape a basal, chemically distinct layer into thermo-chemical piles and create plumes along its margins. Our results show the formation of a Pacific pile. As subduction approaches this pile, the models frequently show part of the pile being separated off, with a plume rising above that part. This could be an analog to the formation and dynamics of the Yellowstone plume, yet there is a mismatch in location of about 30 degrees. It is therefore a goal to devise a model that combines the advantages of both models, i.e., a fully dynamic plume model that matches the present-day position of Yellowstone. This will probably require "seeding" a plume through a thermal anomaly at the core-mantle boundary, and possibly other modifications. Also, for a realistic model, the present-day density anomaly derived from subduction should

  14. Aerodynamic Characteristics of a Large-Scale Model with a High Disk-Loading Lifting Fan Mounted in the Fuselage

    NASA Technical Reports Server (NTRS)

    Aoyagi, Kiyoshi; Hickey, David H.; deSavigny, Richard A.

    1961-01-01

    An investigation was conducted to determine the longitudinal characteristics during low-speed flight of a large-scale VTOL airplane model with a direct lifting fan enclosed in the fuselage. The model had a shoulder-mounted unswept wing of aspect ratio 5. The effects on longitudinal characteristics of fan operation, propulsion by means of deflecting the fan efflux, trailing-edge flap deflection, and horizontal-tail height were studied.

  15. A realistic renormalizable supersymmetric E₆ model

    SciTech Connect

    Bajc, Borut; Susič, Vasja

    2014-01-01

    A complete realistic model based on the supersymmetric version of E₆ is presented. It consists of three copies of matter 27, and a Higgs sector made of $2\times(27+\overline{27})+351'+\overline{351'}$ representations. An analytic solution to the equations of motion is found which spontaneously breaks the gauge group down to the Standard Model. The light fermion mass matrices are written down explicitly as non-linear functions of three Yukawa matrices. This contribution is based on Ref. [1].

  16. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint-source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region comprises two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units, or 'subwatersheds', as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present the strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  17. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed-cost criteria. Preliminary results show that the ILP model is efficient in solving small- to moderate-sized problems. However, this ILP model becomes intractable when solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates a significant reduction in solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
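
    A fixed-cost location ILP of this general class can be written with an off-the-shelf modeling library. The sketch below uses PuLP on a tiny synthetic 3x3 grid; it illustrates the model family only and is not the paper's formulation, whose constraints and costs are not reproduced here.

    ```python
    # Minimal fixed-cost grid-based location ILP, solved with PuLP's default CBC solver.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

    cells = range(9)                        # 3x3 grid of demand cells
    locs = range(9)                         # a facility may open in any cell
    fixed = {j: 50.0 for j in locs}         # hypothetical fixed opening cost
    dist = {(i, j): abs(i // 3 - j // 3) + abs(i % 3 - j % 3)
            for i in cells for j in locs}   # Manhattan distance on the grid

    prob = LpProblem("gblp_sketch", LpMinimize)
    open_ = LpVariable.dicts("open", locs, cat=LpBinary)
    assign = LpVariable.dicts("assign", [(i, j) for i in cells for j in locs], cat=LpBinary)

    # objective: fixed opening costs plus assignment (service) costs
    prob += lpSum(fixed[j] * open_[j] for j in locs) + \
            lpSum(dist[i, j] * assign[i, j] for i in cells for j in locs)
    for i in cells:
        prob += lpSum(assign[i, j] for j in locs) == 1     # serve every cell once
        for j in locs:
            prob += assign[i, j] <= open_[j]               # only open facilities serve

    prob.solve()
    print("objective:", value(prob.objective),
          "open:", [j for j in locs if open_[j].value() > 0.5])
    ```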

  18. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  19. Modeling and extraction of interconnect parameters in very-large-scale integrated circuits

    NASA Astrophysics Data System (ADS)

    Yuan, C. P.

    1983-08-01

    The increased complexity of very-large-scale integrated (VLSI) circuits has greatly impacted the field of computer-aided design (CAD). One of the problems brought about is the interconnection problem. In this research, the goal is twofold. First, a more accurate numerical method to evaluate the interconnect capacitance, including the coupling capacitance between interconnects and the fringing-field capacitance, was investigated, and the integral method was employed. Two FORTRAN programs, "CAP2D" and "CAP3D", based on this method were developed. Second, a PASCAL extraction program emphasizing the extraction of interconnect parameters was developed. It employs the cylindrical approximation formula for the self-capacitance of a single interconnect, and other simple formulas for the coupling capacitances derived by a least-squares method. The extractor assumes only Manhattan geometry and NMOS technology. Four-dimensional binary search trees are used as the basic data structure.

  20. The Large-Scale Debris Avalanche From The Tancitaro Volcano (Mexico): Characterization And Modeling

    NASA Astrophysics Data System (ADS)

    Morelli, S.; Gigli, G.; Falorni, G.; Garduno Monroy, V. H.; Arreygue, E.

    2008-12-01

    until they disappear entirely in the most distal reaches. The granulometric analysis, and the comparison between the debris avalanche of the Tancitaro and other collapses with similar morphometric features (vertical relief during runout, travel distance, volume and area of the deposit), indicate that the collapse was most likely not primed by any type of eruption, but rather triggered by a strong seismic shock that could have induced the failure of a portion of the edifice, already deeply altered by intense hydrothermal fluid circulation. It is also possible to hypothesize that mechanical fluidization may have been the mechanism controlling the long runout of the avalanche, as has been determined for other well-known events. The behavior of the Tancitaro debris avalanche was numerically modeled using the DAN-W code. By suitably modifying the rheological parameters of the different models selectable within DAN, it was determined that the two-parameter 'Voellmy model' provides the best approximation of the avalanche movement. The Voellmy model produces the most realistic results in terms of runout distance, velocity and spatial distribution of the failed mass. Since the Tancitaro event was not witnessed directly, it is possible to infer approximate velocities only from comparisons with similar, documented events, namely the Mt. St. Helens debris avalanche of May 18, 1980.

  1. Modeling and Analysis of Realistic Fire Scenarios in Spacecraft

    NASA Technical Reports Server (NTRS)

    Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.

    2015-01-01

    An accidental fire inside a spacecraft is an unlikely, but very real, emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large-eddy-simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport is unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict the maximum survivable fire aboard spacecraft. This one-dimensional model simplifies the treatment of heat and mass transfer as well as toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (it has already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).

  2. Simulations of a magnetic fluctuation driven large-scale dynamo and comparison with a two-scale model

    NASA Astrophysics Data System (ADS)

    Park, Kiwan; Blackman, E. G.

    2012-07-01

    Models of large-scale (magnetohydrodynamic) dynamos (LSDs) which couple large-scale field growth to total magnetic helicity evolution best predict the saturation of LSDs seen in simulations. For the simplest, so-called 'α2' LSDs in periodic boxes, the electromotive force driving LSD growth depends on the difference between the time-integrated kinetic and current helicity associated with fluctuations. When the system is helically kinetically forced (KF), the growth of the large-scale helical field is accompanied by growth of small-scale magnetic (and current) helicity, which ultimately quenches the LSD. Here, using both simulations and theory, we study the complementary magnetically forced (MF) case, in which the system is forced with an electric field that supplies magnetic helicity. For this MF case, the kinetic helicity and turbulent diffusion terms comprise the backreaction that saturates the LSD. Simulations of both MF and KF cases can be approximately modelled with the same equations of magnetic helicity evolution, but with complementary initial conditions. A key difference between the KF and MF cases is that the helical large-scale field in the MF case grows with the same sign as the injected magnetic helicity, whereas the large- and small-scale magnetic helicities grow with opposite signs in the KF case. The MF case can arise even when the thermal pressure is smaller than the magnetic pressure, and requires only that helical small-scale magnetic fluctuations dominate helical velocity fluctuations in LSD driving. We suggest that LSDs in accretion discs and Babcock models of the solar dynamo are actually MF LSDs.

  3. Large-scale 3D modeling of projectile impact damage in brittle plates

    NASA Astrophysics Data System (ADS)

    Seagraves, A.; Radovitzky, R.

    2015-10-01

    The damage and failure of brittle plates subjected to projectile impact is investigated through large-scale three-dimensional simulation using the DG/CZM approach introduced by Radovitzky et al. [Comput. Methods Appl. Mech. Eng. 2011; 200(1-4), 326-344]. Two standard experimental setups are considered: first, we simulate edge-on impact experiments on Al2O3 tiles by Strassburger and Senf [Technical Report ARL-CR-214, Army Research Laboratory, 1995]. Qualitative and quantitative validation of the simulation results is pursued by direct comparison of simulations with experiments at different loading rates and good agreement is obtained. In the second example considered, we investigate the fracture patterns in normal impact of spheres on thin, unconfined ceramic plates over a wide range of loading rates. For both the edge-on and normal impact configurations, the full field description provided by the simulations is used to interpret the mechanisms underlying the crack propagation patterns and their strong dependence on loading rate.

  4. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    PubMed

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers. PMID:24416069

  5. Similarity-based modeling in large-scale prediction of drug-drug interactions.

    PubMed

    Vilar, Santiago; Uriarte, Eugenio; Santana, Lourdes; Lorberbaum, Tal; Hripcsak, George; Friedman, Carol; Tatonetti, Nicholas P

    2014-09-01

    Drug-drug interactions (DDIs) are a major cause of adverse drug effects and a public health concern, as they increase hospital care expenses and reduce patients' quality of life. DDI detection is, therefore, an important objective in patient safety, one whose pursuit affects drug development and pharmacovigilance. In this article, we describe a protocol applicable on a large scale to predict novel DDIs based on similarity of drug interaction candidates to drugs involved in established DDIs. The method integrates a reference standard database of known DDIs with drug similarity information extracted from different sources, such as 2D and 3D molecular structure, interaction profile, target and side-effect similarities. The method is interpretable in that it generates drug interaction candidates that are traceable to pharmacological or clinical effects. We describe a protocol with applications in patient safety and preclinical toxicity screening. The time frame to implement this protocol is 5-7 h, with additional time potentially necessary, depending on the complexity of the reference standard DDI database and the similarity measures implemented. PMID:25122524
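
    One simple instantiation of the similarity step, assuming binary structural fingerprints and scoring a candidate pair by its best resemblance to a known interacting pair; the fingerprints and the known-DDI set here are random toys, standing in for the reference standard database and the 2D/3D, target and side-effect similarities used in the protocol.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    fp = rng.integers(0, 2, (100, 256), dtype=np.int8)   # 100 drugs, 256-bit fingerprints
    known_pairs = {(0, 1), (2, 3), (4, 5)}               # toy set of established DDIs

    def tanimoto(a, b):
        """Tanimoto (Jaccard) similarity of two binary fingerprints."""
        union = np.sum(a | b)
        return np.sum(a & b) / union if union else 0.0

    def ddi_score(i, j):
        """High if (i, j) resembles a known interacting pair (a, b), in either orientation."""
        best = 0.0
        for a, b in known_pairs:
            best = max(best,
                       min(tanimoto(fp[i], fp[a]), tanimoto(fp[j], fp[b])),
                       min(tanimoto(fp[i], fp[b]), tanimoto(fp[j], fp[a])))
        return best

    print("score for candidate pair (10, 11):", round(ddi_score(10, 11), 3))
    ```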

  6. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    DOE PAGESBeta

    Pesce, Lorenzo L.; Lee, Hyong C.; Hereld, Mark; Visser, Sid; Stevens, Rick L.; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  7. Performance of hybrid methods for large-scale unconstrained optimization as applied to models of proteins.

    PubMed

    Das, B; Meirovitch, H; Navon, I M

    2003-07-30

    Energy minimization plays an important role in structure determination and analysis of proteins, peptides, and other organic molecules; therefore, development of efficient minimization algorithms is important. Recently, Morales and Nocedal developed hybrid methods for large-scale unconstrained optimization that interlace iterations of the limited-memory BFGS method (L-BFGS) and the Hessian-free Newton method (Computat Opt Appl 2002, 21, 143-154). We test the performance of this approach as compared to those of the L-BFGS algorithm of Liu and Nocedal and the truncated Newton (TN) with automatic preconditioner of Nash, as applied to the protein bovine pancreatic trypsin inhibitor (BPTI) and a loop of the protein ribonuclease A. These systems are described by the all-atom AMBER force field with a dielectric constant epsilon = 1 and a distance-dependent dielectric function epsilon = 2r, where r is the distance between two atoms. It is shown that for the optimal parameters the hybrid approach is typically two times more efficient in terms of CPU time and function/gradient calculations than the two other methods. The advantage of the hybrid approach increases as the electrostatic interactions become stronger, that is, in going from epsilon = 2r to epsilon = 1, which leads to a more rugged and probably more nonlinear potential energy surface. However, no general rule that defines the optimal parameters has been found and their determination requires a relatively large number of trial-and-error calculations for each problem. PMID:12820130
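
    For context, plain L-BFGS minimization of the kind benchmarked here is available off the shelf. The sketch below applies SciPy's L-BFGS-B to an extended Rosenbrock toy potential rather than an AMBER protein energy, with the stored-correction count as the analogue of the method's memory parameter; it illustrates the baseline algorithm, not the hybrid method itself.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def energy(x):
        """Extended Rosenbrock function: a rugged stand-in for a molecular energy surface."""
        return np.sum((x[1:] - x[:-1] ** 2) ** 2) + np.sum((1.0 - x[:-1]) ** 2)

    def gradient(x):
        g = np.zeros_like(x)
        d = x[1:] - x[:-1] ** 2
        g[:-1] = -4.0 * x[:-1] * d - 2.0 * (1.0 - x[:-1])
        g[1:] += 2.0 * d
        return g

    x0 = np.full(1000, 2.0)                        # 1000 "coordinates"
    res = minimize(energy, x0, jac=gradient, method="L-BFGS-B",
                   options={"maxcor": 10})         # 10 stored correction pairs
    print("final energy:", res.fun, "iterations:", res.nit)
    ```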

  8. Plasma transport at the dayside magnetopause: observations and large-scale modeling

    NASA Astrophysics Data System (ADS)

    Berchem, Jean; Richard, Robert; Escoubet, C. Philippe; Pitout, Frederic; Taylor, Matthew G.; Laasko, Harri; Masson, Arnaud; Dandouras, Iannis; Reme, Henri

    2013-04-01

    Multipoint observations made by the Cluster spacecraft as they cross the polar cusps can provide significant insight into the plasma transport that occurs at the magnetospheric boundary. In particular, the formation of discrete structures in the energy-latitude dispersion of ions observed in the cusp reflects fundamental properties of the entry and acceleration of solar wind ions at the dayside magnetopause. We present the results of a study that uses large-scale numerical simulations to determine the relationship between the structures observed in ion dispersions in the cusp and the injection process at the magnetopause. This study uses the time-dependent electric and magnetic fields predicted by three-dimensional global MHD simulations to compute the trajectories of large samples of ions launched upstream of the bow shock for different solar wind conditions. Particle information collected in the simulations is then used to reconstruct ion dispersions that are compared with Cluster observations in the cusp. Individual particle trajectories are subsequently analyzed to determine the relationship between the structures observed in the cusp and the entry and acceleration process at the dayside magnetopause.

  9. The CAM/IMPACT/CoCiP Coupled Climate Model: Radiative forcing by aircraft in large-scale clouds

    NASA Astrophysics Data System (ADS)

    Penner, J. E.; Schumann, U.; Chen, Y.; Zhou, C.; Graf, K.

    2013-12-01

    Radiative forcing by aircraft soot in large-scale clouds has been estimated to be both positive and negative, while the forcing by contrails and contrail cirrus (i.e., spreading contrails) is positive. Here we use an improved model to estimate the forcing in large-scale clouds and evaluate the effects of coupling the hydrological cycle within CAM with the CoCiP contrail model. The large-scale cloud effects assume that the fraction of soot particles that have been processed through contrails are good heterogeneous ice nuclei (IN), in agreement with laboratory data. We explore the effect of sulfate deposition on soot in decreasing the ability of contrail-processed soot to act as IN. The calculated total all-sky radiative climate forcing, with and without coupling of CoCiP to the hydrological cycle within CAM, and its range are reported. We compare results with observations and discuss what is needed to narrow the range of forcing.

  10. Two Realistic Beagle Models for Dose Assessment.

    PubMed

    Stabin, Michael G; Kost, Susan D; Segars, William P; Guilmette, Raymond A

    2015-09-01

    Previously, the authors developed a series of eight realistic digital mouse and rat whole body phantoms based on NURBS technology to facilitate internal and external dose calculations in various species of rodents. In this paper, two body phantoms of adult beagles are described based on voxel images converted to NURBS models. Specific absorbed fractions for activity in 24 organs are presented in these models. CT images were acquired of an adult male and female beagle. The images were segmented, and the organs and structures were modeled using NURBS surfaces and polygon meshes. Each model was voxelized at a resolution of 0.75 × 0.75 × 2 mm. The voxel versions were implemented in GEANT4 radiation transport codes to calculate specific absorbed fractions (SAFs) using internal photon and electron sources. Photon and electron SAFs were then calculated for relevant organs in both models. The SAFs for photons and electrons were compatible with results observed by others. Absorbed fractions for electrons for organ self-irradiation were significantly less than 1.0 at energies above 0.5 MeV, as expected for many of these small-sized organs, and measurable cross irradiation was observed for many organ pairs for high-energy electrons (as would be emitted by nuclides like 32P, 90Y, or 188Re). The SAFs were used with standardized decay data to develop dose factors (DFs) for radiation dose calculations using the RADAR Method. These two new realistic models of male and female beagle dogs will be useful in radiation dosimetry calculations for external or internal simulated sources. PMID:26222214
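
    The final step, turning SAFs into dose factors, follows a MIRD-style sum over emissions. The sketch below is only illustrative: the emission yields, energies and SAF value are placeholders, not the standardized decay data or the beagle-model SAFs used with the RADAR method.

    ```python
    # Dose factor DF(target <- source) = sum over emissions of yield * energy * SAF.
    MEV_TO_J = 1.602e-13

    emissions = [            # (yield per decay, mean energy in MeV); hypothetical values
        (1.0, 0.935),
    ]
    saf = {"liver<-liver": 0.52}   # specific absorbed fraction (1/kg); hypothetical value

    def dose_factor(pair):
        """Absorbed dose per decay (Gy per Bq-s) for a source-target organ pair."""
        return sum(y * e * MEV_TO_J * saf[pair] for y, e in emissions)

    print(dose_factor("liver<-liver"), "Gy per decay")
    ```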

  11. Lattice models for large-scale simulations of coherent wave scattering.

    PubMed

    Wang, Shumin; Teixeira, Fernando L

    2004-01-01

    Lattice approximations for partial differential equations describing physical phenomena are commonly used for the numerical simulation of many problems otherwise intractable by pure analytical approaches. The discretization inevitably leads to many of the original symmetries to be broken or modified. In the case of Maxwell's equations for example, invariance and isotropy of the speed of light in vacuum is invariably lost because of the so-called grid dispersion. Since it is a cumulative effect, grid dispersion is particularly harmful for the accuracy of results of large-scale simulations of scattering problems. Grid dispersion is usually combated by either increasing the lattice resolution or by employing higher-order schemes with larger stencils for the space and time derivatives. Both alternatives lead to increased computational cost to simulate a problem of a given physical size. Here, we introduce a general approach to develop lattice approximations with reduced grid dispersion error for a given stencil (and hence at no additional computational cost). The present approach is based on first obtaining stencil coefficients in the Fourier domain that minimize the maximum grid dispersion error for wave propagation at all directions (minimax sense). The resulting coefficients are then expanded into a Taylor series in terms of the frequency variable and incorporated into time-domain (update) equations after an inverse Fourier transformation. Maximally flat (Butterworth) or Chebyshev filters are subsequently used to minimize the wave speed variations for a given frequency range of interest. The use of such filters also allows for the adjustment of the grid dispersion characteristics so as to minimize not only the local dispersion error but also the accumulated phase error in a frequency range of interest. PMID:14995749
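
    The quantity these optimized stencils are designed to reduce is easy to compute for the baseline scheme. The sketch below evaluates the angle-dependent numerical phase-velocity error of the standard 2D Yee (FDTD) stencil from its well-known dispersion relation; the resolution and Courant number are arbitrary illustrative choices.

    ```python
    import numpy as np

    # 2D Yee dispersion relation:
    #   sin^2(w*dt/2)/(c*dt)^2 = sin^2(kx*dx/2)/dx^2 + sin^2(ky*dy/2)/dy^2
    c, ppw = 1.0, 10.0                 # wave speed, points per wavelength
    dx = dy = 1.0 / ppw
    dt = 0.5 * dx / c                  # Courant factor 0.5

    for angle in np.radians([0, 15, 30, 45]):
        k = 2.0 * np.pi                # exact wavenumber for a unit wavelength
        kx, ky = k * np.cos(angle), k * np.sin(angle)
        rhs = (np.sin(kx * dx / 2) / dx) ** 2 + (np.sin(ky * dy / 2) / dy) ** 2
        w = 2.0 / dt * np.arcsin(c * dt * np.sqrt(rhs))   # numerical frequency
        v_num = w / k                                      # numerical phase velocity
        print(f"angle {np.degrees(angle):4.0f} deg: v/c = {v_num / c:.5f}")
    ```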

  12. Lattice models for large-scale simulations of coherent wave scattering

    NASA Astrophysics Data System (ADS)

    Wang, Shumin; Teixeira, Fernando L.

    2004-01-01

    Lattice approximations for partial differential equations describing physical phenomena are commonly used for the numerical simulation of many problems otherwise intractable by pure analytical approaches. The discretization inevitably leads to many of the original symmetries to be broken or modified. In the case of Maxwell’s equations for example, invariance and isotropy of the speed of light in vacuum is invariably lost because of the so-called grid dispersion. Since it is a cumulative effect, grid dispersion is particularly harmful for the accuracy of results of large-scale simulations of scattering problems. Grid dispersion is usually combated by either increasing the lattice resolution or by employing higher-order schemes with larger stencils for the space and time derivatives. Both alternatives lead to increased computational cost to simulate a problem of a given physical size. Here, we introduce a general approach to develop lattice approximations with reduced grid dispersion error for a given stencil (and hence at no additional computational cost). The present approach is based on first obtaining stencil coefficients in the Fourier domain that minimize the maximum grid dispersion error for wave propagation at all directions (minimax sense). The resulting coefficients are then expanded into a Taylor series in terms of the frequency variable and incorporated into time-domain (update) equations after an inverse Fourier transformation. Maximally flat (Butterworth) or Chebyshev filters are subsequently used to minimize the wave speed variations for a given frequency range of interest. The use of such filters also allows for the adjustment of the grid dispersion characteristics so as to minimize not only the local dispersion error but also the accumulated phase error in a frequency range of interest.

  13. Can key vegetation parameters be retrieved at the large-scale using LAI satellite products and a generic modelling approach ?

    NASA Astrophysics Data System (ADS)

    Dewaele, Helene; Calvet, Jean-Christophe; Carrer, Dominique; Laanaia, Nabil

    2016-04-01

    In the context of climate change, the need to assess and predict the impact of droughts on vegetation and water resources is increasing. Generic approaches for modelling continental surfaces at the large scale have progressed in recent decades towards land surface models able to couple the cycles of water, energy and carbon. A major source of uncertainty in these generic models is the maximum available water content of the soil (MaxAWC) usable by plants, which is constrained by the rooting depth parameter and is unobservable at the large scale. In this study, vegetation products derived from SPOT/VEGETATION satellite data available since 1999 are used to optimize the model rooting depth over rainfed croplands and permanent grasslands at 1 km x 1 km resolution. The inter-annual variability of the Leaf Area Index (LAI) is simulated over France using the Interactions between Soil, Biosphere and Atmosphere, CO2-reactive (ISBA-A-gs) generic land surface model and a two-layer force-restore (FR-2L) soil profile scheme. The leaf nitrogen concentration directly impacts the modelled value of the maximum annual LAI. In a first step, this parameter is estimated for the last 15 years by using an iterative procedure that matches the maximum values of LAI modelled by ISBA-A-gs to the highest satellite-derived LAI values, with the Root Mean Square Error (RMSE) used as the cost function to be minimized. In a second step, the model rooting depth is optimized in order to reproduce the inter-annual variability resulting from the drought impact on the vegetation. The retrieved soil rooting depth is evaluated using the French agricultural statistics of Agreste, and the retrieved leaf nitrogen concentrations are compared with values from previous studies. The preliminary results show a good potential of this approach to estimate these two vegetation parameters (leaf nitrogen concentration, MaxAWC) at the large scale over grassland areas. Besides, a marked impact of the
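
    Both optimization steps reduce to a one-parameter search minimizing an RMSE cost function. A minimal sketch, assuming a hypothetical run_model(p) that returns the simulated annual-maximum LAI series for parameter value p (leaf nitrogen concentration in step one, rooting depth in step two):

        import numpy as np

        def rmse(sim, obs):
            return np.sqrt(np.mean((np.asarray(sim) - np.asarray(obs)) ** 2))

        def calibrate(param_grid, run_model, lai_obs):
            # Pick the parameter whose simulated annual-maximum LAI series best
            # matches the satellite-derived series (RMSE as the cost function).
            costs = [rmse(run_model(p), lai_obs) for p in param_grid]
            return param_grid[int(np.argmin(costs))]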

  14. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large-scale microscopic (i.e., vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
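
    The abstract does not spell out an update rule, but the canonical vehicle-based cellular automaton from this literature, the Nagel-Schreckenberg model, shows what a single "particle" update involves; the vectorized sketch below is illustrative only, not project code.

        import numpy as np

        rng = np.random.default_rng(0)

        def nasch_step(pos, vel, L, vmax=5, p_slow=0.25):
            """One parallel Nagel-Schreckenberg update on a ring road of L cells."""
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gaps = (np.roll(pos, -1) - pos - 1) % L          # free cells ahead of each car
            vel = np.minimum(vel + 1, vmax)                  # 1. accelerate
            vel = np.minimum(vel, gaps)                      # 2. brake to avoid collision
            slow = rng.random(vel.size) < p_slow             # 3. random slowdown
            vel = np.where(slow, np.maximum(vel - 1, 0), vel)
            return (pos + vel) % L, vel                      # 4. move

        pos = rng.choice(1000, size=100, replace=False)      # 100 cars on 1000 cells
        vel = np.zeros(100, dtype=int)
        for _ in range(10):
            pos, vel = nasch_step(pos, vel, 1000)

    Because each update is a handful of vectorized array operations, rates on the order of millions of vehicle updates per second are plausible even on modest hardware, which is why such single-bit/CA formulations appear among the highly parallel implementations the paper reviews.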

  15. The use of remotely sensed soil moisture data in large-scale models of the hydrological cycle

    NASA Technical Reports Server (NTRS)

    Salomonson, V. V.; Gurney, R. J.; Schmugge, T. J.

    1985-01-01

    Manabe (1982) has reviewed numerical simulations of the atmosphere which provided a framework within which an examination of the dynamics of the hydrological cycle could be conducted. It was found that the climate is sensitive to soil moisture variability in space and time. The challenge now arises to improve the observations of soil moisture so as to provide updated boundary-condition inputs to large-scale models including the hydrological cycle. Attention is given to the significance of understanding soil moisture variations, soil moisture estimation using remote sensing, and energy and moisture balance modeling.

  16. Growth Mixture Modeling: Application to Reading Achievement Data from a Large-Scale Assessment

    ERIC Educational Resources Information Center

    Bilir, Mustafa Kuzey; Binici, Salih; Kamata, Akihito

    2008-01-01

    The popularity of growth modeling has increased in psychological and cognitive development research as a means to investigate patterns of changes and differences between observation units over time. Random coefficient modeling, such as multilevel modeling and latent growth curve modeling as a special application of structural equation modeling are…

  17. Improved Large-Scale Inundation Modelling by 1D-2D Coupling and Consideration of Hydrologic and Hydrodynamic Processes - a Case Study in the Amazon

    NASA Astrophysics Data System (ADS)

    Hoch, J. M.; Bierkens, M. F.; Van Beek, R.; Winsemius, H.; Haag, A.

    2015-12-01

    Understanding the dynamics of fluvial floods is paramount to accurate flood hazard and risk modeling. Currently, economic losses due to flooding constitute about one third of all damage resulting from natural hazards. Given future projections of climate change, the anticipated increase in the world's population and the associated implications, sound knowledge of flood hazard and related risk is crucial. Fluvial floods are cross-border phenomena that need to be addressed accordingly. Yet only a few studies model floods at the large scale, which is preferable to tiling the output of small-scale models. Most models cannot realistically simulate flood wave propagation, owing either to a lack of detailed channel and floodplain geometry or to the absence of hydrologic processes. This study aims to develop a large-scale modeling tool that accounts for both hydrologic and hydrodynamic processes, to identify and understand possible sources of error and improvement, and to assess how the added hydrodynamics affect flood wave propagation. Flood wave propagation is simulated by DELFT3D-FM (FM), a hydrodynamic model using a flexible mesh to schematize the study area. It is coupled to PCR-GLOBWB (PCR), a macro-scale hydrological model, which has its own simpler 1D routing scheme (DynRout) that has already been used for global inundation modeling and flood risk assessments (GLOFRIS; Winsemius et al., 2013). A number of model set-ups are compared and benchmarked for the simulation period 1986-1996: (0) PCR with DynRout; (1) using a FM 2D flexible mesh forced with PCR output and (2) as in (1) but discriminating between 1D channels and 2D floodplains, and, for comparison, (3) and (4) the same set-ups as (1) and (2) but forced with observed GRDC discharge values. Outputs are subsequently validated against observed GRDC data at Óbidos and flood extent maps from the Dartmouth Flood Observatory. The present research constitutes a first step towards a globally applicable approach to fully couple
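
    In outline, set-ups (1) and (2) amount to a one-way coupling in which the hydrological model's runoff forces the hydrodynamic model each time step. The sketch below is schematic only; pcr and fm are hypothetical stand-ins, not the actual PCR-GLOBWB or DELFT3D-FM interfaces.

        def couple(pcr, fm, n_steps):
            for t in range(n_steps):
                runoff = pcr.update(t)           # macro-scale hydrology (PCR-GLOBWB)
                fm.set_lateral_inflow(runoff)    # map runoff onto the flexible mesh
                fm.update(t)                     # hydrodynamic flood-wave propagation (FM)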

  18. Convective aggregation in idealised models and realistic equatorial cases

    NASA Astrophysics Data System (ADS)

    Holloway, Chris

    2015-04-01

    Idealised explicit convection simulations of the Met Office Unified Model are shown to exhibit spontaneous self-aggregation in radiative-convective equilibrium, as seen previously in other models in several recent studies. This self-aggregation is linked to feedbacks between radiation, surface fluxes, and convection, and the organisation is intimately related to the evolution of the column water vapour (CWV) field. To investigate the relevance of this behaviour to the real world, these idealised simulations are compared with five 15-day cases of real organised convection in the tropics, including multiple simulations of each case testing sensitivities of the convective organisation and mean states to interactive radiation, interactive surface fluxes, and evaporation of rain. Despite similar large-scale forcing via lateral boundary conditions, systematic differences in mean CWV, CWV distribution shape, and the length scale of CWV features are found between the different sensitivity runs, showing that there are at least some similarities in sensitivities to these feedbacks in both idealised and realistic simulations.

  19. Large-Scale Atmospheric Circulation Patterns Associated with Temperature Extremes as a Basis for Model Evaluation: Methodological Overview and Results

    NASA Astrophysics Data System (ADS)

    Loikith, P. C.; Broccoli, A. J.; Waliser, D. E.; Lintner, B. R.; Neelin, J. D.

    2015-12-01

    Anomalous large-scale circulation patterns often play a key role in the occurrence of temperature extremes. For example, large-scale circulation can drive horizontal temperature advection or influence local processes that lead to extreme temperatures, such as by inhibiting moderating sea breezes, promoting downslope adiabatic warming, and affecting the development of cloud cover. Additionally, large-scale circulation can influence the shape of temperature distribution tails, with important implications for the magnitude of future changes in extremes. As a result of the prominent role these patterns play in the occurrence and character of extremes, the way in which temperature extremes change in the future will be highly influenced by whether and how these patterns change. It is therefore critical to identify and understand the key patterns associated with extremes at local to regional scales in the current climate and to use this foundation as a target for climate model validation. This presentation provides an overview of recent and ongoing work aimed at developing and applying novel approaches to identifying and describing the large-scale circulation patterns associated with temperature extremes in observations and using this foundation to evaluate state-of-the-art global and regional climate models. Emphasis is given to anomalies in sea level pressure and 500 hPa geopotential height over North America using several methods to identify circulation patterns, including self-organizing maps and composite analysis. Overall, evaluation results suggest that models are able to reproduce observed patterns associated with temperature extremes with reasonable fidelity in many cases. Model skill is often highest when and where synoptic-scale processes are the dominant mechanisms for extremes, and lower where sub-grid scale processes (such as those related to topography) are important. Where model skill in reproducing these patterns is high, it can be inferred that extremes are
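
    Of the identification methods mentioned, composite analysis is the simplest to sketch: the circulation pattern associated with extremes is the mean anomaly field over the days on which a local temperature threshold is exceeded. A minimal version, with hypothetical input arrays:

        import numpy as np

        def extreme_composite(z500_anom, t_local, q=95):
            """z500_anom: (time, lat, lon) height anomalies; t_local: (time,) local series."""
            hot_days = t_local >= np.percentile(t_local, q)
            return z500_anom[hot_days].mean(axis=0)   # mean anomaly pattern on extreme days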

  20. Comparing Realistic Subthalamic Nucleus Neuron Models

    NASA Astrophysics Data System (ADS)

    Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.

    2011-06-01

    The mechanism of action of clinically effective electrical high-frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze the temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman and exhibits a wide variety of different firing patterns: silent, low spiking, moderate spiking and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above the threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with those from the modified model at different conductance levels, we apply four different (dis)similarity measures. We observe that the Mahalanobis distance, the Victor-Purpura metric, and the interspike-interval distribution are sensitive to the different firing regimes, whereas mutual information appears unable to discriminate these functional changes.
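
    For reference, one of the four measures, the Victor-Purpura metric, is the minimum cost of transforming one spike train into the other, with spike insertion or deletion costing 1 and shifting a spike by dt costing q|dt|; it is computed with a standard edit-distance dynamic program. A minimal sketch:

        import numpy as np

        def victor_purpura(s1, s2, q):
            """Victor-Purpura distance between spike-time lists s1 and s2 (cost parameter q)."""
            n, m = len(s1), len(s2)
            G = np.zeros((n + 1, m + 1))
            G[:, 0] = np.arange(n + 1)                 # delete all spikes of s1
            G[0, :] = np.arange(m + 1)                 # insert all spikes of s2
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    G[i, j] = min(G[i - 1, j] + 1,     # delete a spike
                                  G[i, j - 1] + 1,     # insert a spike
                                  G[i - 1, j - 1] + q * abs(s1[i - 1] - s2[j - 1]))  # shift
            return G[n, m]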

  1. Development of Residential Prototype Building Models and Analysis System for Large-Scale Energy Efficiency Studies Using EnergyPlus

    SciTech Connect

    Mendon, Vrushali V.; Taylor, Zachary T.

    2014-09-10

    Recent advances in residential building energy efficiency and codes have resulted in increased interest in detailed residential building energy models using the latest energy simulation software. One of the challenges of developing residential building models to characterize new residential building stock is allowing for flexibility to address variability in house features such as geometry, configuration and HVAC systems. Researchers solved this problem in a novel way by creating a simulation structure capable of generating fully functional EnergyPlus batch runs using a completely scalable residential EnergyPlus template system. This system was used to create a set of thirty-two residential prototype building models covering single- and multifamily buildings, four common foundation types and four common heating system types found in the United States (US). A weighting scheme with detailed state-wise and national weighting factors was designed to supplement the residential prototype models. The complete set is designed to represent a majority of new residential construction stock. The entire structure consists of a system of utility programs developed around the core EnergyPlus simulation engine to automate the creation and management of large-scale simulation studies with minimal human effort. The simulation structure and the residential prototype building models have been used for numerous large-scale studies, one of which is briefly discussed in this paper.

  2. The HyperHydro (H2) experiment for comparing different large-scale models at various resolutions

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin

    2016-04-01

    HyperHydro (http://www.hyperhydro.org/) is an open network of scientists with the objective of simulating large-scale terrestrial hydrology and water resources at hyper-resolution (Wood et al., 2011, DOI: 10.1029/2010WR010090; Bierkens et al., 2014, DOI: 10.1002/hyp.10391). Within the HyperHydro network, a modeling workshop was held at Utrecht University, the Netherlands, on 9-12 June 2015. The goal of the workshop was to start the HyperHydro (H^2) experiment for comparing different large-scale hydrological models, at different spatial resolutions, from 50 km to 1 km. Model simulation results (e.g. discharge, soil moisture, evaporation, snow, groundwater depth, etc.) are evaluated against available observation data and compared across various models and resolutions. At EGU 2016, we would like to present the latest results of this inter-comparison experiment. We also invite participation from the hydrology community in this experiment. Up to now, the models compared are CLM, LISFLOOD, mHM, ParFlow-CLM, PCR-GLOBWB, TerrSysMP, VIC, WaterGAP, and wflow. As initial test-beds, we mainly focus on two river basins: San Joaquin/California (82000 km^2) and Rhine (185000 km^2). Moreover, comparison over a larger region, such as the CONUS (contiguous US) domain, is also explored and presented.

  3. The HyperHydro (H^2) experiment for comparing different large-scale models at various resolutions

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, E.; Bosmans, J.; Chaney, N.; Clark, M. P.; Condon, L. E.; David, C. H.; De Roo, A. P. J.; Doll, P. M.; Drost, N.; Eisner, S.; Famiglietti, J. S.; Floerke, M.; Gilbert, J. M.; Gochis, D. J.; Hut, R.; Keune, J.; Kollet, S. J.; Maxwell, R. M.; Pan, M.; Rakovec, O.; Reager, J. T., II; Samaniego, L. E.; Mueller Schmied, H.; Trautmann, T.; Van Beek, L. P.; Van De Giesen, N.; Wood, E. F.; Bierkens, M. F.; Kumar, R.

    2015-12-01

    HyperHydro (http://www.hyperhydro.org/) is an open network of scientists with the objective of simulating large-scale terrestrial hydrology and water resources at hyper-resolution (Bierkens et al., 2014, DOI: 10.1002/hyp.10391). Within the HyperHydro network, a modeling workshop was held at Utrecht University, the Netherlands, on 9-12 June 2015. The goal of the workshop was to start the HyperHydro (H^2) experiment for comparing different large-scale hydrological models, at different spatial resolutions, from 50 km to 1 km. Model simulation results (e.g. discharge, soil moisture, evaporation, snow, groundwater depth, etc.) are evaluated against available observation data and compared across various models and resolutions. At AGU 2015, we would like to present the results of this inter-comparison experiment. During the workshop in Utrecht, the models compared were CLM, LISFLOOD, mHM, ParFlow-CLM, PCR-GLOBWB, TerrSysMP, VIC and WaterGAP. We invite participation from the hydrology community in this experiment. As test-beds, we focus on two river basins: San Joaquin (~82000 km2) and Rhine (~185000 km2). In the near future, we will extend this experiment to the CONUS and CORDEX-EU domains.

  4. Social and Economic Effects of Large-Scale Energy Development in Rural Areas: An Assessment Model.

    ERIC Educational Resources Information Center

    Murdock, Steve H.; Leistritz, F. Larry

    General development, structure, and uses of a computerized impact projection model, the North Dakota Regional Environmental Assessment Program (REAP) Economic-Demographic Assessment Model, were studied not only to describe a model developed to meet informational needs of local decision makers (especially in a rural area undergoing development),…

  5. Alveolar mechanics using realistic acinar models

    NASA Astrophysics Data System (ADS)

    Kumar, Haribalan; Lin, Ching-Long; Tawhai, Merryn H.; Hoffman, Eric A.

    2009-11-01

    Accurate modeling of the mechanics in the terminal airspaces of the lung is desirable for the study of particle transport and pathology. Flow in the acinar region is traditionally studied by employing prescribed boundary conditions to represent rhythmic breathing and volumetric expansion. Conventional models utilize simplified spherical or polygonal units to represent the alveolar duct and sac. Accurate prediction of flow and transport characteristics may require geometries reconstructed from CT-based images, which also serve to clarify the importance of a physiologically realistic representation of the acinus. In this effort, we present a stabilized finite element framework, supplemented with appropriate boundary conditions at the alveolar mouth and septal borders, for simulation of alveolar mechanics and the resulting airflow. Results of material advection based on Lagrangian tracking are presented to complete the study of transport, and the results are compared with those of simplified acinar models. The current formulation provides improved understanding and a dynamic framework for parenchymal mechanics incorporating alveolar pressure and traction stresses.

  6. Modeling Cultural/ecological Impacts of Large-scale Mining and Industrial Development in the Yukon-Kuskokwim Basin

    NASA Astrophysics Data System (ADS)

    Bunn, J. T.; Sparck, A.

    2004-12-01

    We are developing a methodology for predicting the cultural impact of large-scale mineral resource development in the Yukon-Kuskokwim (Y-K) basin. The Yup'ik/Cup'ik/Dene people of the Y-K basin currently practice a mixed-market subsistence economy, in which native subsistence traditions and social structures are largely intact. Large-scale mining and industrial-infrastructure developments are being planned that will constitute a significant expansion of the market economy, and will also significantly affect the physical environment that is central to the subsistence way of life. To explore the impact that these changes are likely to have on native culture we use a systems modeling approach, considering "culture" to be a system that encompasses the physical, biological and verbal realms. We draw upon Alaska Department of Fish and Game technical reports, anthropological studies, Yup'ik cultural visioning exercises, and personal experience to identify the components of our cultural model. We use structural equation modeling to determine causal relationships between system components. The resulting model is used to predict changes that are likely to occur as a result of planned developments.

  7. The topology of large-scale structure. I - Topology and the random phase hypothesis. [galactic formation models

    NASA Technical Reports Server (NTRS)

    Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.

    1987-01-01

    Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
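
    For a Gaussian (random phase) field, the universal dependence referred to here has a closed analytic form. With \nu the threshold density in units of the standard deviation of the smoothed field, the genus of the contour surfaces varies as

        g(\nu) \propto (1 - \nu^{2}) \, e^{-\nu^{2}/2},

    so contours at the median density (\nu = 0) have a sponge-like topology (positive genus), while high and low thresholds (|\nu| > 1) yield isolated clusters and voids (negative genus); departures from this curve signal non-random-phase initial conditions.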

  8. Assessment of climate change impacts on rainfall using large scale climate variables and downscaling models - A case study

    NASA Astrophysics Data System (ADS)

    Ahmadi, Azadeh; Moridi, Ali; Lafdani, Elham Kakaei; Kianpisheh, Ghasem

    2014-10-01

    Many of the techniques applied in water resources management can be directly or indirectly influenced by hydro-climatology predictions. In recent decades, utilizing large-scale climate variables as predictors of hydrological phenomena and downscaling numerical weather ensemble forecasts have revolutionized long-lead prediction. In this study, two types of rainfall prediction model are developed to predict the rainfall of the Zayandehrood dam basin located in the central part of Iran. The first, seasonal, model is based on large-scale climate signals from around the world. To determine the inputs of the seasonal rainfall prediction model, correlation coefficient analysis and the new Gamma Test (GT) method are utilized. Comparison of the modelling results shows that the Gamma Test method improves the Nash-Sutcliffe efficiency coefficient of the model by 8% and 10% for dry and wet seasons, respectively. A Support Vector Machine (SVM) model is used for predicting rainfall in the region, and its results are compared with benchmark models such as K-nearest neighbours (KNN) and Artificial Neural Networks (ANN); the results show better performance of the SVM model at the testing stage. In the second model, the statistical downscaling model (SDSM), a popular downscaling tool, is used. In this model, using the outputs from a GCM, the rainfall of the Zayandehrood dam is projected under two climate change scenarios. The most effective variables have been identified among 26 predictor variables. Comparison of the results of the two models shows that the developed SVM model has smaller errors in monthly rainfall estimation. The results show that rainfall in future wet periods exceeds historical values, while rainfall in future dry periods is lower than historical values. The highest monthly uncertainty of future rainfall occurs in March and the lowest in July.
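
    The Nash-Sutcliffe efficiency used as the skill score above has a one-line definition, and the SVM/KNN comparison can be sketched with standard tools (a hedged sketch on hypothetical arrays, not the authors' code):

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor
        from sklearn.svm import SVR

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 is perfect; 0 is no better than mean(obs)."""
            obs, sim = np.asarray(obs), np.asarray(sim)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def compare_models(X_train, y_train, X_test, y_test):
            for name, model in [("SVM", SVR(kernel="rbf")),
                                ("KNN", KNeighborsRegressor(n_neighbors=5))]:
                sim = model.fit(X_train, y_train).predict(X_test)
                print(name, round(nse(y_test, sim), 3))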

  9. Combining local- and large-scale models to predict the distributions of invasive plant species.

    PubMed

    Jones, Chad C; Acker, Steven A; Halpern, Charles B

    2010-03-01

    Habitat distribution models are increasingly used to predict the potential distributions of invasive species and to inform monitoring. However, these models assume that species are in equilibrium with the environment, which is clearly not true for most invasive species. Although this assumption is frequently acknowledged, solutions have not been adequately explored. There are several potential methods for improving habitat distribution models. Models that require only presence data may be more effective for invasive species, but this possibility has rarely been tested. In addition, combining modeling types to form "ensemble" models may improve the accuracy of predictions. However, even with these improvements, models developed for recently invaded areas are greatly influenced by the current distributions of species and thus reflect near- rather than long-term potential for invasion. Larger-scale models from species' native and invaded ranges may better reflect long-term invasion potential, but they lack finer-scale resolution. We compared logistic regression (which uses presence/absence data) and two presence-only methods for modeling the potential distributions of three invasive plant species on the Olympic Peninsula in Washington, USA. We then combined the three methods to create ensemble models. We also developed climate envelope models for the same species based on larger-scale distributions and combined models from multiple scales to create an index of near- and long-term invasion risk to inform monitoring in Olympic National Park (ONP). Neither presence-only nor ensemble models were more accurate than logistic regression for any of the species. Larger-scale models predicted much greater areas at risk of invasion. Our index of near- and long-term invasion risk indicates that < 4% of ONP is at high near-term risk of invasion while 67-99% of the Park is at moderate or high long-term risk of invasion. We demonstrate how modeling results can be used to guide the

  10. Volterra representation enables modeling of complex synaptic nonlinear dynamics in large-scale simulations

    PubMed Central

    Hu, Eric Y.; Bouteiller, Jean-Marie C.; Song, Dong; Baudry, Michel; Berger, Theodore W.

    2015-01-01

    Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, such representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method for replicating complex and diverse synaptic transmission within neuron network simulations. PMID:26441622
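
    The idea of the Volterra functional power series is that the output is a sum of convolutions of the input with kernels of increasing order. A discrete second-order sketch (the paper's model extends to third order, with kernels fitted to the mechanistic synapse model; k0, k1 and k2 here are hypothetical):

        import numpy as np

        def volterra2(x, k0, k1, k2):
            """y[n] = k0 + sum_i k1[i] x[n-i] + sum_{i,j} k2[i,j] x[n-i] x[n-j]."""
            M = len(k1)                                   # kernel memory length
            xp = np.concatenate([np.zeros(M - 1), np.asarray(x, dtype=float)])
            y = np.full(len(x), float(k0))
            for n in range(len(x)):
                h = xp[n:n + M][::-1]                     # x[n], x[n-1], ..., x[n-M+1]
                y[n] += k1 @ h + h @ k2 @ h
            return y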

  11. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    NASA Technical Reports Server (NTRS)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; Kamae, Y.; Lohmann, G.; Lunt, D. J.; Abe-Ouchi, A.; Pickering, S. J.; Ramstein, G.; Rosenbloom, N. A.; Salzmann, U.; Sohl, L.; Stepanek, C.; Ueda, H.; Yan, Q.; Zhang, Z.

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and model-data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, model-data comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  12. Why can't current large-scale models predict mixed-phase clouds correctly?

    NASA Astrophysics Data System (ADS)

    Barrett, Andrew; Hogan, Robin; Forbes, Richard

    2013-04-01

    Stratiform mid-level mixed-phase clouds have a significant radiative impact but are often missing from numerical model simulations, for a number of reasons. This has become particularly true as models move towards treating cloud ice as a prognostic variable. This presentation will demonstrate three important findings that should lead to better simulations of mixed-phase clouds in models in the future. Each is briefly covered in the paragraphs below. 1) The occurrence of mid-level mixed-phase clouds in models is compared with ground-based remote sensors, finding that the models under-predict supercooled liquid water content by a factor of 2 or more. This is accompanied by a low bias in liquid cloud fraction, whilst the ice properties are better simulated. Models with more sophisticated microphysics schemes that include prognostic cloud ice are the worst-performing models. 2) A new single-column model is used to investigate which processes are important for the maintenance of supercooled liquid layers. By running the model over multiple days and exploring the parameter space of numerous physical parameterizations, it was determined that the most sensitive areas of the model are the ice microphysical processes and the vertical resolution. 3) Vertical resolutions finer than 200 metres are required to capture the thin liquid layers in these clouds and therefore their important radiative effect. Leading models are still far coarser than this in the mid-troposphere, limiting hope of simulating these clouds properly. A new parameterization of the vertical structure of these clouds is developed and allows their properties to be correctly simulated, in a resolution-independent way, by numerical models with coarse vertical resolution. This parameterization is explained and demonstrated here and could enable significant improvement in model simulations of stratiform mixed-phase clouds.

  13. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex, Hydrogeologic Systems

    NASA Astrophysics Data System (ADS)

    Wolfsberg, A.; Kang, Q.; Li, C.; Ruskauff, G.; Bhark, E.; Freeman, E.; Prothro, L.; Drellack, S.

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial extent of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  14. Large-Scale Transport Model Uncertainty and Sensitivity Analysis: Distributed Sources in Complex Hydrogeologic Systems

    SciTech Connect

    Sig Drellack, Lance Prothro

    2007-12-01

    The Underground Test Area (UGTA) Project of the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office is in the process of assessing and developing regulatory decision options based on modeling predictions of contaminant transport from underground testing of nuclear weapons at the Nevada Test Site (NTS). The UGTA Project is attempting to develop an effective modeling strategy that addresses and quantifies multiple components of uncertainty including natural variability, parameter uncertainty, conceptual/model uncertainty, and decision uncertainty in translating model results into regulatory requirements. The modeling task presents multiple unique challenges to the hydrological sciences as a result of the complex fractured and faulted hydrostratigraphy, the distributed locations of sources, the suite of reactive and non-reactive radionuclides, and uncertainty in conceptual models. Characterization of the hydrogeologic system is difficult and expensive because of deep groundwater in the arid desert setting and the large spatial extent of the NTS. Therefore, conceptual model uncertainty is partially addressed through the development of multiple alternative conceptual models of the hydrostratigraphic framework and multiple alternative models of recharge and discharge. Uncertainty in boundary conditions is assessed through development of alternative groundwater fluxes through multiple simulations using the regional groundwater flow model. Calibration of alternative models to heads and measured or inferred fluxes has not proven to provide clear measures of model quality. Therefore, model screening by comparison to independently-derived natural geochemical mixing targets through cluster analysis has also been invoked to evaluate differences between alternative conceptual models. Advancing multiple alternative flow models, sensitivity of transport predictions to parameter uncertainty is assessed through Monte Carlo simulations. The

  15. Research on transformation and optimization of large scale 3D modeling for real time rendering

    NASA Astrophysics Data System (ADS)

    Yan, Hu; Yang, Yongchao; Zhao, Gang; He, Bin; Shen, Guosheng

    2011-12-01

    During real-time simulation of three-dimensional scenes, popular modeling software and real-time rendering platforms are often incompatible. The common solution is to create the three-dimensional scene model in the modeling software and then transform it into a format supported by the rendering platform. Taking a digital-campus scene simulation as an example, this paper analyzes and solves the problems of surface loss, texture distortion and loss, and model flicker that arise during the transformation from 3ds Max to MultiGen Creator, and proposes an optimization strategy for the transformed model. The results show that this strategy solves the various problems arising in transformation and speeds up the rendering of the model.

  16. Modelling of a large-scale urban contamination situation and remediation alternatives.

    PubMed

    Thiessen, K M; Arkhipov, A; Batandjieva, B; Charnock, T W; Gaschak, S; Golikov, V; Hwang, W T; Tomás, J; Zlobenko, B

    2009-05-01

    The Urban Remediation Working Group of the International Atomic Energy Agency's EMRAS (Environmental Modelling for Radiation Safety) program was organized to address issues of remediation assessment modelling for urban areas contaminated with dispersed radionuclides. The present paper describes the first of two modelling exercises, which was based on Chernobyl fallout data in the town of Pripyat, Ukraine. Modelling endpoints for the exercise included radionuclide concentrations and external dose rates at specified locations, contributions to the dose rates from individual surfaces and radionuclides, and annual and cumulative external doses to specified reference individuals. Model predictions were performed for a "no action" situation (with no remedial measures) and for selected countermeasures. The exercise provided a valuable opportunity to compare modelling approaches and parameter values, as well as to compare the predicted effectiveness of various countermeasures with respect to short-term and long-term reduction of predicted doses to people. PMID:19324477

  17. Large-scale in silico modeling of metabolic interactions between cell types in the human brain.

    PubMed

    Lewis, Nathan E; Schramm, Gunnar; Bordbar, Aarash; Schellenberger, Jan; Andersen, Michael P; Cheng, Jeffrey K; Patel, Nilam; Yee, Alex; Lewis, Randall A; Eils, Roland; König, Rainer; Palsson, Bernhard Ø

    2010-12-01

    Metabolic interactions between multiple cell types are difficult to model using existing approaches. Here we present a workflow that integrates gene expression data, proteomics data and literature-based manual curation to model human metabolism within and between different types of cells. Transport reactions are used to account for the transfer of metabolites between models of different cell types via the interstitial fluid. We apply the method to create models of brain energy metabolism that recapitulate metabolic interactions between astrocytes and various neuron types relevant to Alzheimer's disease. Analysis of the models identifies genes and pathways that may explain observed experimental phenomena, including the differential effects of the disease on cell types and regions of the brain. Constraint-based modeling can thus contribute to the study and analysis of multicellular metabolic processes in the human tissue microenvironment and provide detailed mechanistic insight into high-throughput data analysis. PMID:21102456
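
    The workhorse computation behind such constraint-based models is flux balance analysis: a linear program that maximizes an objective flux subject to steady-state mass balance and flux bounds. A generic sketch (the stoichiometric matrix S, bounds, and objective are hypothetical placeholders, not the published brain models):

        import numpy as np
        from scipy.optimize import linprog

        def fba(S, lb, ub, c_obj):
            """Maximize c_obj . v subject to S v = 0 (steady state) and lb <= v <= ub."""
            res = linprog(c=-np.asarray(c_obj),          # linprog minimizes, so negate
                          A_eq=np.asarray(S),
                          b_eq=np.zeros(np.asarray(S).shape[0]),
                          bounds=list(zip(lb, ub)),
                          method="highs")
            return res.x                                  # optimal flux distribution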

  18. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying the relative influence of parameters, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is also presented for reducing highly complex, nonlinear models to simple linear algebraic models useful for making rapid, first-order calculations of system behavior.
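
    A common concrete form of such an analysis computes normalized first-order sensitivity coefficients S_i = (p_i / y) dy/dp_i by perturbing each parameter about its nominal value. A minimal sketch, assuming a hypothetical model(p) returning a scalar output:

        import numpy as np

        def sensitivities(model, p0, rel_step=1e-3):
            """Normalized sensitivities via central finite differences about p0."""
            p0 = np.asarray(p0, dtype=float)
            y0 = model(p0)
            S = np.zeros(p0.size)
            for i in range(p0.size):
                dp = rel_step * p0[i]
                hi, lo = p0.copy(), p0.copy()
                hi[i] += dp
                lo[i] -= dp
                S[i] = (model(hi) - model(lo)) / (2.0 * dp) * p0[i] / y0
            return S

    Parameters with the largest |S_i| dominate the output and deserve the most data-collection effort, which is the resource-allocation use noted above.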

  19. Open source large-scale high-resolution environmental modelling with GEMS

    NASA Astrophysics Data System (ADS)

    Baarsma, Rein; Alberti, Koko; Marra, Wouter; Karssenberg, Derek

    2016-04-01

    Many environmental, topographic and climate data sets are freely available at the global scale, creating the opportunity to run environmental models for every location on Earth. Collecting the necessary data and converting it into a useful format is very demanding, however, not to mention the computational demand of the model itself. We developed GEMS (Global Environmental Modelling System), an online application to run environmental models at various scales directly in the browser and to share the results with other researchers. GEMS is open source and uses open-source platforms including Flask, Leaflet, GDAL, MapServer and the PCRaster-Python modelling framework to process spatio-temporal models in real time. With GEMS, users can write, run, and visualize the results of dynamic PCRaster-Python models in a browser. GEMS uses freely available global data to feed the models, and automatically converts the data to the relevant model extent and data format. Currently available data include the SRTM elevation model, a selection of monthly vegetation data from MODIS, land use classifications from GlobCover, historical climate data from WorldClim, HWSD soil information from WorldGrids, population density from SEDAC and near-real-time weather forecasts, most with a ±100 m resolution. Furthermore, users can add other or their own datasets using a web coverage service or a custom data-provider script. With easy access to a wide range of base datasets, and without the data preparation that is usually necessary to run environmental models, building and running a model becomes a matter of hours. Furthermore, it is easy to share the resulting maps, time series data or model scenarios with other researchers through a web mapping service (WMS). GEMS can be used to provide open access to model results. Additionally, environmental models in GEMS can be employed by users with no extensive experience with writing code, which is for example valuable for using models

  20. Use of Item Models in a Large-Scale Admissions Test: A Case Study

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Johnson, Matthew S.

    2008-01-01

    "Item models" (LaDuca, Staples, Templeton, & Holzman, 1986) are classes from which it is possible to generate items that are equivalent/isomorphic to other items from the same model (e.g., Bejar, 1996, 2002). They have the potential to produce large numbers of high-quality items at reduced cost. This article introduces data from an application of…

  1. Large-scale ligand-based predictive modelling using support vector machines.

    PubMed

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse. PMID:27516811
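
    The linear-versus-kernel trade-off reported here can be reproduced in miniature with scikit-learn, whose LinearSVR wraps LIBLINEAR and SVR wraps libsvm (a hedged sketch on synthetic stand-in data, not the published benchmark):

        import time
        import numpy as np
        from sklearn.svm import SVR, LinearSVR

        rng = np.random.default_rng(1)
        X, y = rng.random((10000, 100)), rng.random(10000)    # stand-in for signatures descriptors

        for n in (1000, 5000, 10000):
            for name, mdl in [("LIBLINEAR (linear)", LinearSVR(max_iter=5000)),
                              ("libsvm (RBF)", SVR(kernel="rbf"))]:
                t0 = time.perf_counter()
                mdl.fit(X[:n], y[:n])
                print(f"n={n:6d}  {name:18s}  {time.perf_counter() - t0:6.2f} s")

    Kernel SVR training scales roughly quadratically or worse with the number of examples, while the linear solver scales close to linearly, which is why only the linear model remained practical at 1.2 million structures.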

  2. Development of a coupled soil erosion and large-scale hydrology modeling system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The impact of frozen soil on soil erosion is becoming increasingly important for sustainable management of soil resources, especially in regions where agricultural land use is dominant. A newly developed coupled modeling system that integrates the Variable Infiltration Capacity (VIC) model and the p...

  3. Large-scale parameter extraction in electrocardiology models through Born approximation

    NASA Astrophysics Data System (ADS)

    He, Yuan; Keyes, David E.

    2013-01-01

    One of the main objectives in electrocardiology is to extract physical properties of cardiac tissues from measured information on electrical activity of the heart. Mathematically, this is an inverse problem for reconstructing coefficients in electrocardiology models from partial knowledge of the solutions of the models. In this work, we consider such parameter extraction problems for two well-studied electrocardiology models: the bidomain model and the FitzHugh-Nagumo model. We propose a systematic reconstruction method based on the Born approximation of the original nonlinear inverse problem. We describe a two-step procedure that allows us to reconstruct not only perturbations of the unknowns, but also the backgrounds around which the linearization is performed. We show some numerical simulations under various conditions to demonstrate the performance of our method. We also introduce a parameterization strategy using eigenfunctions of the Laplacian operator to reduce the number of unknowns in the parameter extraction problem.
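
    In outline, the Born approach linearizes the nonlinear forward map F (model parameters to measured data) about a known background m_0,

        F(m_0 + \delta m) \approx F(m_0) + F'(m_0) \, \delta m ,

    so that the data residual d - F(m_0) \approx F'(m_0) \delta m becomes a linear inverse problem for the perturbation \delta m; the two-step procedure described above then alternates between solving this linearized problem and updating the background about which the linearization is performed.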

  4. Incremental learning of Bayesian sensorimotor models: from low-level behaviours to large-scale structure of the environment

    NASA Astrophysics Data System (ADS)

    Diard, Julien; Gilet, Estelle; Simonin, Éva; Bessière, Pierre

    2010-12-01

    This paper concerns the incremental learning of hierarchies of representations of space in artificial or natural cognitive systems. We propose a mathematical formalism for defining space representations (Bayesian Maps) and modelling their interaction in hierarchies of representations (sensorimotor interaction operator). We illustrate our formalism with a robotic experiment. Starting from a model based on the proximity to obstacles, we learn a new one related to the direction of the light source. It provides new behaviours, like phototaxis and photophobia. We then combine these two maps so as to identify parts of the environment where the way the two modalities interact is recognisable. This classification is a basis for learning a higher level of abstraction map that describes the large-scale structure of the environment. In the final model, the perception-action cycle is modelled by a hierarchy of sensorimotor models of increasing time and space scales, which provide navigation strategies of increasing complexities.

  5. Of mice, flies--and men? Comparing fungal infection models for large-scale screening efforts.

    PubMed

    Brunke, Sascha; Quintin, Jessica; Kasper, Lydia; Jacobsen, Ilse D; Richter, Martin E; Hiller, Ekkehard; Schwarzmüller, Tobias; d'Enfert, Christophe; Kuchler, Karl; Rupp, Steffen; Hube, Bernhard; Ferrandon, Dominique

    2015-05-01

    Studying infectious diseases requires suitable hosts for experimental in vivo infections. Recent years have seen the advent of many alternatives to murine infection models. However, the use of non-mammalian models is still controversial because it is often unclear how well findings from these systems predict virulence potential in humans or other mammals. Here, we compare the commonly used models, fruit fly and mouse (representing invertebrate and mammalian hosts), for their similarities and degree of correlation upon infection with a library of mutants of an important fungal pathogen, the yeast Candida glabrata. Using two indices, for fly survival time and for mouse fungal burden in specific organs, we show a good agreement between the models. We provide a suitable predictive model for estimating the virulence potential of C. glabrata mutants in the mouse from fly survival data. As examples, we found cell wall integrity mutants attenuated in flies, and mutants of a MAP kinase pathway had defective virulence in flies and reduced relative pathogen fitness in mice. In addition, mutants with strongly reduced in vitro growth generally, but not always, had reduced virulence in flies. Overall, we demonstrate that surveying Drosophila survival after infection is a suitable model to predict the outcome of murine infections, especially for severely attenuated C. glabrata mutants. Pre-screening of mutants in an invertebrate Drosophila model can, thus, provide a good estimate of the probability of finding a strain with reduced microbial burden in the mouse host. PMID:25786415

  6. Exploring large-scale phenomena in composite membranes through an efficient implicit-solvent model

    NASA Astrophysics Data System (ADS)

    Laradji, Mohamed; Kumar, P. B. Sunil; Spangler, Eric J.

    2016-07-01

    Several microscopic and mesoscale models have been introduced in the past to investigate various phenomena in lipid membranes. Most of these models account for the solvent explicitly. Since in a typical molecular dynamics simulation the majority of particles belong to the solvent, much of the computational effort in these simulations is devoted to calculating forces between solvent particles. To overcome this problem, several implicit-solvent mesoscale models for lipid membranes have been proposed during the last few years. In the present article, we review an efficient coarse-grained implicit-solvent model we introduced earlier for studies of lipid membranes. In this model, lipid molecules are coarse-grained into short semi-flexible chains of beads with soft interactions. Through molecular dynamics simulations, the model is used to investigate the thermal, structural and elastic properties of lipid membranes. We also review a few studies, based on this model, of the phase behavior of nanoscale liposomes, cytoskeleton-induced blebbing in lipid membranes, and nanoparticle wrapping and endocytosis by tensionless lipid membranes.

  7. A large-scale model for simulating the fate & transport of organic contaminants in river basins.

    PubMed

    Lindim, C; van Gils, J; Cousins, I T

    2016-02-01

    We present STREAM-EU (Spatially and Temporally Resolved Exposure Assessment Model for EUropean basins), a novel dynamic mass balance model for predicting the environmental fate of organic contaminants in river basins. STREAM-EU goes beyond the current state-of-the-science in that it can simulate spatially and temporally resolved contaminant concentrations in all relevant environmental media (surface water, groundwater, snow, soil and sediments) at the river basin scale. The model can currently be applied to multiple organic contaminants in any river basin in Europe, but the model framework is adaptable to any river basin on any continent. We simulate the environmental fate of perfluorooctanesulfonic acid (PFOS) and perfluorooctanoic acid (PFOA) in the Danube River basin and compare model predictions to recent monitoring data. The model predicts PFOS and PFOA concentrations that agree well with measured concentrations for large stretches of the river. Disagreements between the model predictions and measurements in some river sections are shown to be useful indicators of unknown contamination sources to the river basin. PMID:26414740
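
    At its core, a dynamic mass-balance model of this kind integrates, for each spatial compartment, an equation of the form dM/dt = sources - losses. A schematic single-reach sketch (illustrative only, not the STREAM-EU code):

        def step_reach_mass(M, emission, inflow, k_out, k_deg, dt):
            """Forward-Euler update of contaminant mass M in one river reach.

            emission, inflow: source terms (mass per unit time);
            k_out, k_deg: first-order loss rates for downstream advection
            and degradation (per unit time)."""
            dM_dt = emission + inflow - (k_out + k_deg) * M
            return M + dt * dM_dt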

  8. The UP modelling system for large scale hydrology: simulation of the Arkansas-Red River basin

    NASA Astrophysics Data System (ADS)

    Kilsby, C. G.; Ewen, J.; Sloan, W. T.; Burton, A.; Fallows, C. S.; O'Connell, P. E.

    The application of the UP (Upscaled Physically-based) hydrological modelling system to the Arkansas-Red River basin (USA) is described. The system is designed for macro-scale simulations of land surface processes; it aims for a physical basis and avoids the use of discharge records in the direct calibration of parameters. This is achieved in a two-stage process: in the first stage, parametrizations are derived from detailed modelling of selected representative small areas; these are then used in a second stage, in which a simple distributed model simulates the dynamic behaviour of the whole basin. The first stage of the process is described in a companion paper (Ewen et al., this issue); the second stage is described here. The model operated at an hourly time-step on 17-km grid squares for a two-year simulation period, and represents all the important hydrological processes, including regional aquifer recharge, groundwater discharge, infiltration- and saturation-excess runoff, evapotranspiration, snowmelt, and overland and channel flow. Outputs from the model are discussed, and include river discharge at gauging stations and space-time fields of evaporation and soil moisture. Whilst the model efficiency, assessed by comparison of simulated and observed discharge records, is not as good as could be achieved with a model calibrated against discharge, there are considerable advantages in retaining a physical basis in applications to ungauged river basins and in assessments of the impacts of land use or climate change.

  9. Implementation of a large-scale flow routing scheme in the Canadian Regional Climate Model (CRCM)

    NASA Astrophysics Data System (ADS)

    Lucas-Picher, P.; Arora, V.; Caya, D.; Laprise, R.

    2002-12-01

    Freshwater flux from rivers acts as an important forcing on the ocean. With lower density than ocean saltwater, freshwater from rivers affects thermohaline circulation and sea-ice formation at high latitudes. Freshwater flux can be computed in a climate model by using runoff as input to a flow routing model, which transfers runoff from the land surface to the continental edges. In addition to providing the freshwater flux to the oceans, the streamflow obtained from the routing model can be used to assess the performance of atmospheric models on a climatological basis by comparison with observed streamflow. The variable-velocity flow routing algorithm of Arora and Boer (1999, JGR-Atmos., 104, 30965-30979) is used to compute river flow in the Canadian Regional Climate Model (CRCM) (Caya and Laprise, 1999, Mon. Wea. Rev., 127, 341-362). The flow routing scheme consists of surface and groundwater reservoirs, which receive daily estimates of surface runoff and drainage, respectively, as simulated by the land surface scheme. The routing algorithm uses Manning's equation to estimate flow velocities. A rectangular river cross-section is assumed, with a fixed width; the variable depth is estimated from the amount of water in the river, the slope, and the river width. Discretization of the major river basins and flow directions for the North American domain are obtained at the polar-stereographic resolution of the CRCM using 5-minute global river flow directions (Graham et al., 1999, WRR, 35, 583-587) as a template. Model runoff estimates from a global simulation of the Variable Infiltration Capacity (VIC) hydrological model are used to validate the routing scheme. Routing results show that, compared to unrouted runoff, the inclusion of flow routing improves the comparison with observation-based streamflow estimates.
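
    The velocity estimate at the heart of the scheme is the Manning relation, v = (1/n) R^(2/3) S^(1/2), with the hydraulic radius R computed from the assumed rectangular cross-section. A worked example with illustrative numbers:

        def manning_velocity(n, slope, width, depth):
            """Flow velocity (m/s) for a rectangular channel; R = area / wetted perimeter."""
            R = (width * depth) / (width + 2.0 * depth)
            return R ** (2.0 / 3.0) * slope ** 0.5 / n

        # e.g. roughness n = 0.035, slope = 1e-4, a 200 m wide river flowing 3 m deep:
        print(f"{manning_velocity(0.035, 1e-4, 200.0, 3.0):.2f} m/s")   # about 0.6 m/s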

  10. Large-scale hydrologic and hydrodynamic modeling of the Amazon River basin

    NASA Astrophysics Data System (ADS)

    de Paiva, Rodrigo Cauduro Dias; Buarque, Diogo Costa; Collischonn, Walter; Bonnet, Marie-Paule; Frappart, Frédéric; Calmant, Stephane; Bulhões Mendes, Carlos André

    2013-03-01

    In this paper, hydrologic/hydrodynamic modeling of the Amazon River basin using the MGB-IPH model is presented, with validation against remotely sensed observations. Moreover, the sources of model error are investigated by means of validation and sensitivity tests, and the physical functioning of the Amazon basin is explored. The MGB-IPH is a physically based model resolving all land hydrological processes, here using a full 1-D river hydrodynamic module with a simple floodplain storage model. River-floodplain geometry parameters were extracted from the SRTM digital elevation model, and the model was forced using satellite-derived rainfall from TRMM 3B42. Model results agree with observed in situ daily river discharges and water levels and with three complementary satellite-based products: (1) water levels derived from ENVISAT altimetry data; (2) a global data set of monthly inundation extent; and (3) monthly terrestrial water storage (TWS) anomalies derived from the Gravity Recovery and Climate Experiment (GRACE) mission. However, the model is sensitive to precipitation forcing and river-floodplain parameters. Most of the errors occur in westerly regions, possibly due to the poor quality of the TRMM 3B42 rainfall data set in these mountainous and/or poorly monitored areas. In addition, uncertainty in river-floodplain geometry causes errors in simulated water levels and inundation extent, suggesting the need for improved parameter estimation methods. Finally, analyses of Amazon hydrological processes demonstrate that surface waters govern most of the Amazon TWS changes (56%), followed by soil water (27%) and groundwater (8%). Moreover, floodplains play a major role in streamflow routing, although backwater effects are also important in delaying and attenuating flood waves.

  11. Modeling oxygen isotopes in the Pliocene: Large-scale features over the land and ocean

    NASA Astrophysics Data System (ADS)

    Tindall, Julia C.; Haywood, Alan M.

    2015-09-01

    The first isotope-enabled general circulation model (GCM) simulations of the Pliocene are used to discuss the interpretation of δ18O measurements for a warm climate. The model suggests that spatial patterns of Pliocene ocean surface δ18O (δ18Osw) were similar to those of the preindustrial period; however, Arctic and coastal regions were relatively depleted, while South Atlantic and Mediterranean regions were relatively enriched. Modeled δ18Osw anomalies are closely related to modeled salinity anomalies, which supports using δ18Osw as a paleosalinity proxy. Modeled Pliocene precipitation δ18O (δ18Op) was enriched relative to the preindustrial values (but with depletion of <2‰ over some tropical regions). While usually modest (<4‰), the enrichment can reach 25‰ over ice sheet regions. In the tropics δ18Op anomalies are related to precipitation amount anomalies, although there is usually a spatial offset between the two. This offset suggests that the location of precipitation change is more uncertain than the amplitude when interpreting δ18Op. At high latitudes δ18Op anomalies relate to temperature anomalies; however, the relationship is neither linear nor spatially coincident: a large δ18Op signal does not always translate to a large temperature signal. These results suggest that isotope modeling can lead to enhanced synergy between climate models and climate proxy data. The model can relate proxy data to climate in a physically based way even when the relationship is complex and nonlocal. The δ18O-climate relationships, identified here from a GCM, could not be determined from transfer functions or simple models.

  12. A HIERARCHICAL STOCHASTIC MODEL OF LARGE SCALE ATMOSPHERIC CIRCULATION PATTERNS AND MULTIPLE STATION DAILY PRECIPITATION

    EPA Science Inventory

    A stochastic model of weather states and concurrent daily precipitation at multiple precipitation stations is described. Four algorithms are investigated for the classification of daily weather states: k-means, fuzzy clustering, principal components, and principal components coupled with ...
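
    A minimal sketch of the weather-state classification step, assuming daily gridded pressure anomalies as input: principal components compress each day's circulation field and k-means groups the days into a few recurring states. The array sizes and the random stand-in data are hypothetical.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        # Stand-in for reanalysis data: one row per day of flattened
        # sea-level-pressure anomalies (n_days x n_gridpoints).
        rng = np.random.default_rng(0)
        slp_anomalies = rng.normal(size=(3650, 500))

        # Principal components reduce dimensionality before clustering, mirroring
        # the "principal components coupled with k-means" variant.
        scores = PCA(n_components=10).fit_transform(slp_anomalies)
        weather_state = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
        print(np.bincount(weather_state))   # number of days in each weather state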

  13. Middle atmosphere project. A semi-spectral numerical model for the large-scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    Holton, J. R.; Wehrbein, W.

    1979-01-01

    The complete model is a semispectral model in which the longitudinal dependence is represented by expansion in zonal harmonics while the latitude and height dependencies are represented by a finite difference grid. The model is based on the primitive equations in the log pressure coordinate system. The lower boundary of the model domain is set at the 100 mb level (i.e., near the tropopause) and the effects of tropospheric forcing are included in the lower boundary condition. The upper boundary is at approximately 96 km, and the latitudinal extent is either global or hemispheric. The basic differential equations and boundary conditions are outlined. The finite difference equations are described. The initial conditions are discussed and a sample calculation is presented. The FORTRAN code is given in the appendix.

  14. Topology of large-scale structure in seeded hot dark matter models

    NASA Technical Reports Server (NTRS)

    Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.

    1992-01-01

    The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.
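
    For reference, a Gaussian random field has an analytic genus curve proportional to (1 - nu^2) exp(-nu^2 / 2), where nu is the density threshold in units of the standard deviation; departures from this shape are what topology analyses of the kind described above look for. A minimal sketch, with an arbitrary assumed amplitude:

        import numpy as np

        def gaussian_genus(nu, amplitude=1.0):
            """Genus per unit volume of a Gaussian random field at threshold nu;
            the amplitude depends on the power spectrum and smoothing length."""
            return amplitude * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

        # Positive genus near nu = 0 indicates a sponge-like topology; negative
        # wings correspond to isolated clusters (nu > 1) and voids (nu < -1).
        nu = np.linspace(-3.0, 3.0, 7)
        print(np.round(gaussian_genus(nu), 3))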

  15. Comparing selected morphological models of hydrated Nafion using large scale molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Knox, Craig K.

    Experimental elucidation of the nanoscale structure of hydrated Nafion, the most popular polymer electrolyte or proton exchange membrane (PEM) to date, and its influence on macroscopic proton conductance is particularly challenging. While it is generally agreed that hydrated Nafion is organized into distinct hydrophilic domains or clusters within a hydrophobic matrix, the geometry and length scale of these domains continue to be debated. For example, at least half a dozen different domain shapes, ranging from spheres to cylinders, have been proposed based on experimental SAXS and SANS studies. Since the characteristic length scale of these domains is believed to be ~2 to 5 nm, very large molecular dynamics (MD) simulations are needed to accurately probe the structure and morphology of these domains, especially their connectivity and percolation phenomena at varying water content. Using classical, all-atom MD with explicit hydronium ions, simulations have been performed to study the first-ever hydrated Nafion systems that are large enough (~2 million atoms in a ~30 nm cell) to directly observe several hydrophilic domains at the molecular level. These systems consisted of six of the most significant and relevant morphological models of Nafion to date: (1) the cluster-channel model of Gierke, (2) the parallel cylinder model of Schmidt-Rohr, (3) the local-order model of Dreyfus, (4) the lamellar model of Litt, (5) the rod network model of Kreuer, and (6) a 'random' model, commonly used in previous simulations, that does not directly assume any particular geometry, distribution, or morphology. These simulations revealed fast intercluster bridge formation and network percolation in all of the models. Sulfonates were found inside these bridges and played a significant role in percolation. Sulfonates also strongly aggregated around and inside clusters. Cluster surfaces were analyzed to study the hydrophilic-hydrophobic interface. Interfacial area and cluster volume

  16. Study of an engine flow diverter system for a large scale ejector powered aircraft model

    NASA Technical Reports Server (NTRS)

    Springer, R. J.; Langley, B.; Plant, T.; Hunter, L.; Brock, O.

    1981-01-01

    Requirements were established for a conceptual design study to analyze and design an engine flow diverter system and to include accommodations for an ejector system in an existing 3/4-scale fighter model equipped with YJ-79 engines. Model constraints were identified, and a cost-effective, limited modification was proposed to accept the ejectors, ducting and flow diverter valves. Complete system performance was calculated and a versatile computer program capable of analyzing any ejector system was developed.

  17. A large-scale simulation model to assess karstic groundwater recharge over Europe and the Mediterranean

    NASA Astrophysics Data System (ADS)

    Hartmann, A.; Gleeson, T.; Rosolem, R.; Pianosi, F.; Wada, Y.; Wagener, T.

    2015-06-01

    Karst develops through the dissolution of carbonate rock and is a major source of groundwater, contributing up to half of the total drinking water supply in some European countries. Previous approaches to modelling future water availability in Europe are either too small in scale or do not incorporate karst processes, i.e. preferential flow paths. This study presents the first simulations of groundwater recharge in all karst regions in Europe with a parsimonious karst hydrology model. A novel parameter confinement strategy combines a priori information with recharge-related observations (actual evapotranspiration and soil moisture) at locations across Europe while explicitly identifying uncertainty in the model parameters. Europe's karst regions are divided into four typical karst landscapes (humid, mountain, Mediterranean and desert) by cluster analysis and recharge is simulated from 2002 to 2012 for each karst landscape. Mean annual recharge ranges from negligible in deserts to >1 m a-1 in humid regions. The majority of recharge rates range from 20 to 50% of precipitation and are sensitive to subannual climate variability. Simulation results are consistent with independent observations of mean annual recharge and significantly better than other global hydrology models that do not consider karst processes (PCR-GLOBWB, WaterGAP). Global hydrology models systematically under-estimate karst recharge, implying that they over-estimate actual evapotranspiration and surface runoff. Karst water budgets, and thus information to support management decisions regarding drinking water supply and flood risk, are significantly improved by our model.

  18. Uncovering Implicit Assumptions: a Large-Scale Study on Students' Mental Models of Diffusion

    NASA Astrophysics Data System (ADS)

    Stains, Marilyne; Sevian, Hannah

    2015-12-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in the gaseous medium. A large sample of data (N = 423) from students across grade 8 (age 13) through upper-level undergraduate was subjected to a cluster analysis to determine the main mental models present. The cluster analysis resulted in a reduced data set (N = 308), and then, mental models were ascertained from robust clusters. The mental models that emerged from analysis were triangulated through interview data and characterised according to underlying implicit assumptions that guide and constrain thinking about diffusion of a solute in a gaseous medium. Impacts of students' level of preparation in science and relationships of mental models to science disciplines studied by students were examined. Implications are discussed for the value of this approach to identify typical mental models and the sets of implicit assumptions that constrain them.

  19. Hydrological improvements for nutrient and pollutant emission modeling in large scale catchments

    NASA Astrophysics Data System (ADS)

    Höllering, S.; Ihringer, J.

    2012-04-01

    Estimating emissions and loads of nutrients and pollutants into European water bodies as accurately as possible depends largely on knowledge of the spatially and temporally distributed hydrological runoff patterns. An improved hydrological water balance model for the pollutant emission model MoRE (Modeling of Regionalized Emissions) (IWG, 2011) is introduced, which forms an adequate basis for simulating discharge in a hydrologically differentiated, land-use-based way and subsequently providing the required distributed discharge components. First of all, the hydrological model had to meet requirements in both space and time, in order to calculate the water balance with sufficient precision, spatially distributed over sub-catchments and with a higher temporal resolution. Aiming to reproduce seasonal dynamics and the characteristic hydrological regimes of river catchments, a daily (instead of a yearly) time increment was applied, allowing for a more process-oriented simulation of discharge dynamics, volume and therefore water balance. The enhancement of the hydrological model also became necessary to account for the hydrological functioning of catchments under scenarios of, e.g., a changing climate or alterations of land use. As a deterministic, partly physically based, conceptual hydrological watershed and water balance model, the Precipitation Runoff Modeling System (PRMS) (USGS, 2009) was selected to improve the hydrological input for MoRE. In PRMS, the spatial discretization is implemented with sub-catchments and so-called hydrologic response units (HRUs), which are distributed, finite modeling entities, each assumed to have a homogeneous runoff reaction to hydro-meteorological events. Spatial structures and heterogeneities in sub-catchments, e.g. urbanity, land use and soil types, were identified to derive hydrological similarities and classify different urban and rural HRUs. In this way the

  20. Implementation of large-scale landscape evolution modelling to real high-resolution DEM

    NASA Astrophysics Data System (ADS)

    Schroeder, S.; Babeyko, A. Y.

    2012-12-01

    We have developed a surface evolution model to be naturally integrated with 3D thermomechanical codes like SLIM-3D to study coupled tectonic-climate interaction. The resolution of the surface evolution model is independent of that of the underlying continuum box. The surface model follows the concept of the cellular automaton implemented on a regular Eulerian mesh. It incorporates an effective filling algorithm that guarantees a flow direction in each cell, D8 search for flow directions, and computation of discharges and bedrock incision. Additionally, the model implements hillslope erosion in the form of non-linear, slope-dependent diffusion. The model was designed to be applied not only to synthetic topographies but also to real Digital Elevation Models (DEMs). In the present work we report our experience with applying the model to the 30-meter resolution ASTER GDEM of the Pamir orogen, in particular to the segment of the Panj river. We start with calibration of the model parameters (fluvial incision and hillslope diffusion coefficients) using direct measurements of Panj incision rates and volumes of suspended sediment transport. Since the incision algorithm is independent of hillslope processes, we first adjust the incision parameters. Power-law exponents of the incision equation were evaluated from the profile curvature of the main Pamir rivers. After that, the incision coefficient was adjusted to fit the observed incision rate of 5 mm/y. Once the model results are consistent with the measured data, the calibration of hillslope processes follows. For a given critical slope, diffusivity can be fitted to match the observed sediment discharge. Applying the surface evolution model to a real DEM reveals specific problems which do not appear when working with synthetic landscapes. One of them is the noise of the satellite-measured topography. In particular, due to the non-vertical observation perspective, the satellite may not be able to detect the bottom of the river channel, especially
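
    To make the D8 step concrete, the sketch below assigns each interior cell of a toy DEM the index of its steepest-descent neighbour, weighting diagonal drops by the longer diagonal distance. The DEM values, cell size and direction encoding are illustrative assumptions, not details of the model described above.

        import numpy as np

        # Offsets of the eight neighbours; the returned direction is an index
        # into this list (-1 marks pits and edge cells).
        OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]

        def d8_directions(dem, cell_size=30.0):
            ny, nx = dem.shape
            direction = -np.ones((ny, nx), dtype=int)
            for i in range(1, ny - 1):
                for j in range(1, nx - 1):
                    drops = []
                    for di, dj in OFFSETS:
                        dist = cell_size * (2 ** 0.5 if di and dj else 1.0)
                        drops.append((dem[i, j] - dem[i + di, j + dj]) / dist)
                    best = int(np.argmax(drops))
                    if drops[best] > 0:           # only true descents drain
                        direction[i, j] = best
            return direction

        dem = np.array([[5, 5, 5, 5],
                        [5, 4, 3, 5],
                        [5, 3, 1, 5],
                        [5, 5, 0, 5]], dtype=float)
        print(d8_directions(dem))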

  1. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse)

    PubMed Central

    Erguler, Kamil; Smith-Unna, Stephanie E.; Waldock, Joanna; Proestos, Yiannis; Christophides, George K.; Lelieveld, Jos; Parham, Paul E.

    2016-01-01

    The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations. PMID:26871447

  2. Large-Scale Modelling of the Environmentally-Driven Population Dynamics of Temperate Aedes albopictus (Skuse).

    PubMed

    Erguler, Kamil; Smith-Unna, Stephanie E; Waldock, Joanna; Proestos, Yiannis; Christophides, George K; Lelieveld, Jos; Parham, Paul E

    2016-01-01

    The Asian tiger mosquito, Aedes albopictus, is a highly invasive vector species. It is a proven vector of dengue and chikungunya viruses, with the potential to host a further 24 arboviruses. It has recently expanded its geographical range, threatening many countries in the Middle East, Mediterranean, Europe and North America. Here, we investigate the theoretical limitations of its range expansion by developing an environmentally-driven mathematical model of its population dynamics. We focus on the temperate strain of Ae. albopictus and compile a comprehensive literature-based database of physiological parameters. As a novel approach, we link its population dynamics to globally-available environmental datasets by performing inference on all parameters. We adopt a Bayesian approach using experimental data as prior knowledge and the surveillance dataset of Emilia-Romagna, Italy, as evidence. The model accounts for temperature, precipitation, human population density and photoperiod as the main environmental drivers, and, in addition, incorporates the mechanism of diapause and a simple breeding site model. The model demonstrates high predictive skill over the reference region and beyond, confirming most of the current reports of vector presence in Europe. One of the main hypotheses derived from the model is the survival of Ae. albopictus populations through harsh winter conditions. The model, constrained by the environmental datasets, requires that either diapausing eggs or adult vectors have increased cold resistance. The model also suggests that temperature and photoperiod control diapause initiation and termination differentially. We demonstrate that it is possible to account for unobserved properties and constraints, such as differences between laboratory and field conditions, to derive reliable inferences on the environmental dependence of Ae. albopictus populations. PMID:26871447

  3. Evapotranspiration modelling at large scale using near-real time MSG SEVIRI derived data

    NASA Astrophysics Data System (ADS)

    Ghilain, N.; Arboleda, A.; Gellens-Meulenberghs, F.

    2010-09-01

    We present an evapotranspiration (ET) model developed in the framework of the EUMETSAT "Satellite Application Facility" (SAF) on Land Surface Analysis (LSA). The model is a simplified Soil-Vegetation-Atmosphere Transfer (SVAT) scheme that uses as input a combination of remotely sensed data and atmospheric model outputs. The inputs based on remote sensing are LSA-SAF products: the Albedo (AL), the Downwelling Surface Shortwave Flux (DSSF) and the Downwelling Surface Longwave Flux (DSLF). They are available at the spatial resolution of the MSG SEVIRI instrument. ET maps covering the whole MSG field of view are produced by the model every 30 min, in near-real time, for all weather conditions. This paper presents the adopted methodology and a set of validation results. The model quality is evaluated in two ways. First, ET results are compared with ground observations (from CarboEurope and national weather services), for different land cover types, over a full vegetation cycle in the Northern Hemisphere in 2007. This validation shows that the model is able to reproduce the observed ET temporal evolution from diurnal to annual time scales for the temperate climate zones: the mean bias is less than 0.02 mm h-1 and the root-mean-square error is between 0.06 and 0.10 mm h-1. Then, ET model outputs are compared with those from the European Centre for Medium-Range Weather Forecasts (ECMWF) and the Global Land Data Assimilation System (GLDAS). From this comparison, a high spatial correlation, between 80 and 90%, is noted around the midday time frame. Nevertheless, some discrepancies are also observed, due to the different input variables and parameterisations used.
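
    The two validation scores quoted above are straightforward to reproduce; the sketch below shows the standard definitions, with illustrative numbers standing in for modelled and observed ET.

        import numpy as np

        def bias_and_rmse(simulated, observed):
            """Mean bias and root-mean-square error (units follow the inputs)."""
            diff = np.asarray(simulated) - np.asarray(observed)
            return diff.mean(), np.sqrt((diff**2).mean())

        sim = [0.21, 0.35, 0.18, 0.40]   # illustrative hourly ET, mm/h
        obs = [0.25, 0.30, 0.20, 0.35]
        b, r = bias_and_rmse(sim, obs)
        print(f"bias = {b:+.3f} mm/h, RMSE = {r:.3f} mm/h")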

  4. A realistic molecular model of cement hydrates

    PubMed Central

    Pellenq, Roland J.-M.; Kushima, Akihiro; Shahsavari, Rouzbeh; Van Vliet, Krystyn J.; Buehler, Markus J.; Yip, Sidney; Ulm, Franz-Josef

    2009-01-01

    Despite decades of studies of calcium-silicate-hydrate (C-S-H), the structurally complex binder phase of concrete, the interplay between chemical composition and density remains essentially unexplored. Together these characteristics of C-S-H define and modulate the physical and mechanical properties of this “liquid stone” gel phase. With the recent determination of the calcium/silicon (C/S = 1.7) ratio and the density of the C-S-H particle (2.6 g/cm3) by neutron scattering measurements, there is new urgency to the challenge of explaining these essential properties. Here we propose a molecular model of C-S-H based on a bottom-up atomistic simulation approach that considers only the chemical specificity of the system as the overriding constraint. By allowing for short silica chains distributed as monomers, dimers, and pentamers, this C-S-H archetype of a molecular description of interacting CaO, SiO2, and H2O units provides not only realistic values of the C/S ratio and the density, computed by grand canonical Monte Carlo simulation of water adsorption at 300 K, but, with a chemical composition of (CaO)1.65(SiO2)(H2O)1.75, also predicts other essential structural features and fundamental physical properties amenable to experimental validation, which suggest that the C-S-H gel structure includes both glass-like short-range order and crystalline features of the mineral tobermorite. Additionally, we probe the mechanical stiffness, strength, and hydrolytic shear response of our molecular model, as compared to experimentally measured properties of C-S-H. The latter results illustrate the prospect of treating cement on equal footing with metals and ceramics in the current application of mechanism-based models and multiscale simulations to study inelastic deformation and cracking. PMID:19805265

  5. A Spatio-Temporally Explicit Random Encounter Model for Large-Scale Population Surveys.

    PubMed

    Jousimo, Jussi; Ovaskainen, Otso

    2016-01-01

    Random encounter models can be used to estimate population abundance from indirect data collected by non-invasive sampling methods, such as track counts or camera-trap data. The classical Formozov-Malyshev-Pereleshin (FMP) estimator converts track counts into an estimate of mean population density, assuming that data on the daily movement distances of the animals are available. We utilize generalized linear models with spatio-temporal error structures to extend the FMP estimator into a flexible Bayesian modelling approach that estimates not only total population size, but also spatio-temporal variation in population density. We also introduce a weighting scheme to estimate density on habitats that are not covered by survey transects, assuming that movement data on a subset of individuals is available. We test the performance of spatio-temporal and temporal approaches by a simulation study mimicking the Finnish winter track count survey. The results illustrate how the spatio-temporal modelling approach is able to borrow information from observations made on neighboring locations and times when estimating population density, and that spatio-temporal and temporal smoothing models can provide improved estimates of total population size compared to the FMP method. PMID:27611683
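
    The classical FMP estimator that the paper extends is commonly written as D = (pi/2) x / (L d), with x the total number of track crossings, L the total transect length and d the mean daily movement distance; the factor pi/2 corrects for the random orientation of animal paths relative to the transect. A minimal sketch with illustrative numbers:

        import math

        def fmp_density(track_crossings, transect_length_km, daily_movement_km):
            """Formozov-Malyshev-Pereleshin estimate of mean density
            (individuals per km^2) from track counts."""
            return (math.pi / 2.0) * track_crossings / (
                transect_length_km * daily_movement_km)

        # 24 crossings on 30 km of transects, mean daily movement 4 km
        print(f"{fmp_density(24, 30.0, 4.0):.2f} animals per km^2")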

  6. High-resolution global topographic index values for use in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Marthews, Toby; Dadson, Simon; Lehner, Bernhard; Abele, Simon; Gedney, Nicola

    2015-04-01

    Modelling land surface water flow is of critical importance for simulating land-surface fluxes, predicting runoff and water table dynamics and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL, and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels calculated from hydrologically-conditioned HydroSHEDS data using the GA2 algorithm ('GRIDATB 2'). At 15 arc-sec resolution, these layers are four times finer than the resolution of the previously best-available topographic index layers, the Compound Topographic Index of HYDRO1k (CTI). For the largest river catchments occurring on each continent we found that, in comparison with CTI, our revised values were up to 20% lower in, e.g., the Amazon. We found the highest catchment means were for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as found from the CTI. For the majority of large catchments, however, the spread of our new GA2 index values is very similar to those of CTI, yet with more spatial variability apparent at fine scale. We believe these new index layers represent greatly-improved global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.
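
    The topographic index itself is ln(a / tan(beta)), where a is the specific catchment area (upslope area per unit contour width) and beta the local slope. A minimal per-cell sketch; the example numbers and the floor on the slope are assumptions, and production layers such as GA2 derive a and beta from flow accumulation over the hydrologically conditioned DEM.

        import numpy as np

        def topographic_index(upslope_area_m2, cell_width_m, slope_radians):
            """TOPMODEL topographic index ln(a / tan(beta)); a small floor on
            the slope avoids division by zero on flat cells."""
            specific_area = upslope_area_m2 / cell_width_m
            tan_beta = np.maximum(np.tan(slope_radians), 1e-6)
            return np.log(specific_area / tan_beta)

        # A cell of ~450 m width draining 2 km^2 on a 2-degree slope
        print(round(float(topographic_index(2e6, 450.0, np.radians(2.0))), 2))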

  7. The effects of large-scale topography on the circulation in low-order models

    NASA Technical Reports Server (NTRS)

    O'Brien, Enda; Branscome, Lee E.

    1990-01-01

    This paper investigates the effect of topography on circulation produced by low-order quasi-geostrophic models that are capable of reproducing many basic features of midlatitude general circulation in the absence of topography. Using a simple two-level spectral model, time-mean stationary waves and low-frequency phenomena were examined for three different topographic configurations, of which two consisted of a sinusoidal mountain-valley structure, and the third was the Fourier representation of an isolated mountain peak. In the experiment with an isolated mountain, it was found that the time-mean wave in the model was highly dependent on the operation of wave-wave interactions, which had a significant impact on stationary waves through modifications in the mean zonal flow.

  8. The flow structure of pyroclastic density currents: evidence from particle models and large-scale experiments

    NASA Astrophysics Data System (ADS)

    Dellino, Pierfrancesco; Büttner, Ralf; Dioguardi, Fabio; Doronzo, Domenico Maria; La Volpe, Luigi; Mele, Daniela; Sonder, Ingo; Sulpizio, Roberto; Zimanowski, Bernd

    2010-05-01

    Pyroclastic flows are ground-hugging, hot, gas-particle flows. They represent the most hazardous events of explosive volcanism, one striking example being the famous historical eruption of Vesuvius (AD 79), which destroyed Pompeii. Much of our knowledge on the mechanics of pyroclastic flows comes from theoretical models and numerical simulations. Valuable data are also stored in the geological record of past eruptions, i.e. the particles contained in pyroclastic deposits, but they are rarely used for quantifying the destructive potential of pyroclastic flows. In this paper, by means of experiments, we validate a model that is based on data from pyroclastic deposits. It allows the reconstruction of the current's fluid-dynamic behaviour. We show that our model results in likely values of dynamic pressure and particle volumetric concentration, and allows quantification of the hazard potential of pyroclastic flows.
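
    Dynamic pressure, the damage metric mentioned above, is 0.5 * rho * u^2 for a current of bulk density rho moving at velocity u; the example values below are purely illustrative.

        def dynamic_pressure(density_kg_m3, velocity_m_s):
            """Dynamic pressure in Pa, used to rank the destructive
            potential of a gas-particle current."""
            return 0.5 * density_kg_m3 * velocity_m_s**2

        # A dilute current with 5 kg/m^3 bulk density moving at 30 m/s
        print(f"{dynamic_pressure(5.0, 30.0) / 1000.0:.2f} kPa")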

  9. Large Scale Dynamics of the Persistent Turning Walker Model of Fish Behavior

    NASA Astrophysics Data System (ADS)

    Degond, Pierre; Motsch, Sébastien

    2008-06-01

    This paper considers a new model of individual displacement, based on fish motion, the so-called Persistent Turning Walker (PTW) model, which involves an Ornstein-Uhlenbeck process on the curvature of the particle trajectory. The goal is to show that its large time and space scale dynamics is of diffusive type, and to provide an analytic expression of the diffusion coefficient. Two methods are investigated. In the first one, we compute the large time asymptotics of the variance of the individual stochastic trajectories. The second method is based on a diffusion approximation of the kinetic formulation of these stochastic trajectories. The kinetic model is a Fokker-Planck type equation posed in an extended phase-space involving the curvature among the kinetic variables. We show that both methods lead to the same value of the diffusion constant. We present some numerical simulations to illustrate the theoretical results.
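
    A minimal sketch of the PTW trajectory model described above: the curvature follows an Ornstein-Uhlenbeck process, the heading integrates the curvature, and a crude diffusion coefficient is read off from the growth of squared displacements over long segments. The relaxation time, noise level and segment-averaging scheme are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, n_steps, speed = 0.05, 200_000, 1.0
        tau, sigma = 1.0, 0.5            # curvature relaxation time and noise level

        kappa, theta = 0.0, 0.0          # curvature (OU process) and heading angle
        pos = np.zeros((n_steps, 2))
        for i in range(1, n_steps):
            kappa += -kappa / tau * dt + sigma * np.sqrt(dt) * rng.normal()
            theta += speed * kappa * dt  # curvature steers the heading
            pos[i] = pos[i - 1] + speed * dt * np.array([np.cos(theta), np.sin(theta)])

        # At large times MSD ~ 4 D t; estimate D from non-overlapping segments.
        seg = pos.reshape(200, -1, 2)
        disp2 = np.sum((seg[:, -1] - seg[:, 0])**2, axis=1)
        t_seg = (n_steps // 200) * dt
        print(f"D ~ {disp2.mean() / (4.0 * t_seg):.3f} (length^2 per unit time)")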

  10. Norway's 2011 Terror Attacks: Alleviating National Trauma With a Large-Scale Proactive Intervention Model.

    PubMed

    Kärki, Freja Ulvestad

    2015-09-01

    After the terror attacks of July 22, 2011, Norwegian health authorities piloted a new model for municipality-based psychosocial follow-up with victims. This column describes the development of a comprehensive follow-up intervention by health authorities and others that has been implemented at the municipality level across Norway. The model's principles emphasize proactivity by service providers; individually tailored help, with each victim being assigned a contact person in the residential municipality; continuity and long-term focus; effective intersectorial collaboration; and standardized screening of symptoms during the first year. Weekend reunions were also organized for the bereaved, and one-day reunions were organized for the survivors and their families at intervals over the first 18 months. Preliminary findings indicate a high level of success in model implementation. However, the overall effect of the interventions will be a subject for future evaluations. PMID:26030322

  11. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm

    PubMed Central

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K.

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with the Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators. Moreover, the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, in both cases the proposed method sacrifices computational time due to the hybrid optimization process. PMID:26989410
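
    A minimal sketch of the recurrent-neural-network formalism for a gene network: each gene's expression is driven by a sigmoid of the weighted sum of all genes' expression (the weight matrix is what the metaheuristics above search for) minus first-order decay. The toy weights and initial state are assumptions for illustration.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def simulate_grn(w, b, x0, dt=0.1, steps=200, decay=1.0):
            """Discrete-time RNN gene-network dynamics; w[i, j] is the
            influence of gene j on gene i, b the basal activations."""
            x = np.array(x0, dtype=float)
            for _ in range(steps):
                x = x + dt * (sigmoid(w @ x + b) - decay * x)
            return x

        # Toy 3-gene network: gene 0 activates gene 1, gene 1 represses gene 2
        w = np.array([[0.0, 0.0, 0.0],
                      [4.0, 0.0, 0.0],
                      [0.0, -4.0, 0.0]])
        print(np.round(simulate_grn(w, b=np.zeros(3), x0=[1.0, 0.1, 0.8]), 3))

    In the paper's setup, Cuckoo Search would propose candidate regulator sets (a sparsity pattern for w) and the Flower Pollination Algorithm would tune the surviving weights against the time-series data.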

  12. Element-by-element model updating of large-scale structures based on component mode synthesis method

    NASA Astrophysics Data System (ADS)

    Yu, Jie-xin; Xia, Yong; Lin, Wei; Zhou, Xiao-qing

    2016-02-01

    The component mode synthesis (CMS) method is developed and applied to the element-by-element model updating of large-scale structures in this study. The several lowest frequencies and mode shapes of the global structure are obtained with the free-interface CMS method by employing the several lowest frequencies and mode shapes of each substructure individually. In this process, the removal of higher modes is compensated for by the residual modes. The eigensensitivity of the global structure with respect to the updating element parameters is then assembled from the eigensensitivities of each substructure. Subsequently, the global model is updated using a sensitivity-based optimization technique. The application of the present method to an 11-floor frame structure and to a large-scale structure demonstrates its accuracy and efficiency. The computational time required by the substructuring method to calculate the eigensensitivity matrices is significantly reduced compared with that consumed by the conventional global-based approach. An approach for selecting the number of master modes is also proposed.

  13. Constructing Model of Relationship among Behaviors and Injuries to Products Based on Large Scale Text Data on Injuries

    NASA Astrophysics Data System (ADS)

    Nomori, Koji; Kitamura, Koji; Motomura, Yoichi; Nishida, Yoshifumi; Yamanaka, Tatsuhiro; Komatsubara, Akinori

    In Japan, childhood injury prevention is an urgent issue. Safety measures based on knowledge created from injury data are essential for preventing childhood injuries, and the injury prevention approach of product modification is especially important. Risk assessment is one of the most fundamental methods for designing safe products. Conventional risk assessment has been carried out subjectively because product makers have little data on injuries. This paper deals with evidence-based risk assessment, in which artificial intelligence technologies are strongly needed. It describes a new method of foreseeing the usage of products, which is the first step of evidence-based risk assessment, and presents a retrieval system for injury data. The system enables a product designer to foresee how children use a product and which types of injuries occur due to the product in the daily environment. The developed system consists of large-scale injury data, text mining technology and probabilistic modeling technology. Large-scale text data on childhood injuries was collected from medical institutions by an injury surveillance system. Types of behaviors toward a product were derived from the injury text data using text mining technology. The relationships among products, types of behaviors, types of injuries and characteristics of children were modeled with a Bayesian network. The fundamental functions of the developed system and examples of new findings obtained with the system are reported in this paper.

  14. Large-Scale Recurrent Neural Network Based Modelling of Gene Regulatory Network Using Cuckoo Search-Flower Pollination Algorithm.

    PubMed

    Mandal, Sudip; Khan, Abhinandan; Saha, Goutam; Pal, Rajat K

    2016-01-01

    The accurate prediction of genetic networks using computational tools is one of the greatest challenges in the postgenomic era. The Recurrent Neural Network is one of the most popular but simple approaches to model network dynamics from time-series microarray data. To date, it has been successfully applied to computationally derive small-scale artificial and real-world genetic networks with high accuracy. However, it has underperformed for large-scale genetic networks. Here, a new methodology has been proposed where a hybrid Cuckoo Search-Flower Pollination Algorithm has been implemented with the Recurrent Neural Network. Cuckoo Search is used to search for the best combination of regulators. Moreover, the Flower Pollination Algorithm is applied to optimize the model parameters of the Recurrent Neural Network formalism. Initially, the proposed method is tested on a benchmark large-scale artificial network for both noiseless and noisy data. The results obtained show that the proposed methodology is capable of increasing the inference of correct regulations and decreasing false regulations to a high degree. Secondly, the proposed methodology has been validated against the real-world dataset of the DNA SOS repair network of Escherichia coli. However, in both cases the proposed method sacrifices computational time due to the hybrid optimization process. PMID:26989410

  15. Prediction model of potential hepatocarcinogenicity of rat hepatocarcinogens using a large-scale toxicogenomics database

    SciTech Connect

    Uehara, Takeki; Minowa, Yohsuke; Morikawa, Yuji; Kondo, Chiaki; Maruyama, Toshiyuki; Kato, Ikuo; Nakatsu, Noriyuki; Igarashi, Yoshinobu; Ono, Atsushi; Hayashi, Hitomi; Mitsumori, Kunitoshi; Yamada, Hiroshi; Ohno, Yasuo; Urushidani, Tetsuro

    2011-09-15

    The present study was performed to develop a robust gene-based prediction model for early assessment of the potential hepatocarcinogenicity of chemicals in rats by using our toxicogenomics database, TG-GATEs (Genomics-Assisted Toxicity Evaluation System developed by the Toxicogenomics Project in Japan). The positive training set consisted of high- or middle-dose groups that received 6 different non-genotoxic hepatocarcinogens during a 28-day period. The negative training set consisted of high- or middle-dose groups of 54 non-carcinogens. A support vector machine combined with wrapper-type gene selection algorithms was used for modeling. Consequently, our best classifier yielded prediction accuracies for hepatocarcinogenicity of 99% sensitivity and 97% specificity in the training data set, and false positive prediction was almost completely eliminated. Pathway analysis of feature genes revealed that the mitogen-activated protein kinase p38- and phosphatidylinositol-3-kinase-centered interactome and the v-myc myelocytomatosis viral oncogene homolog-centered interactome were the 2 most significant networks. The usefulness and robustness of our predictor were further confirmed in an independent validation data set obtained from the public database. Interestingly, similar positive predictions were obtained for several genotoxic hepatocarcinogens as well as non-genotoxic hepatocarcinogens. These results indicate that the expression profiles of our newly selected candidate biomarker genes might be common characteristics of the early stage of carcinogenesis for both genotoxic and non-genotoxic carcinogens in the rat liver. Our toxicogenomic model might be useful for the prospective screening of hepatocarcinogenicity of compounds and prioritization of compounds for carcinogenicity testing. Highlights: We developed a toxicogenomic model to predict hepatocarcinogenicity of chemicals. The optimized model, consisting of 9 probes, had 99% sensitivity and 97% specificity. This model
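
    A minimal sketch of the modelling step under stated assumptions: the wrapper-type gene selection is approximated here by recursive feature elimination driven by a linear support vector machine, and random numbers stand in for the TG-GATEs expression profiles and carcinogenicity labels.

        import numpy as np
        from sklearn.feature_selection import RFE
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Stand-in data: 60 dose groups x 1000 probes, binary carcinogenicity label
        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 1000))
        y = rng.integers(0, 2, size=60)

        # Recursively eliminate probes using the weights of a linear SVM,
        # keeping a small predictive signature (9 probes, as in the paper).
        selector = RFE(SVC(kernel="linear"), n_features_to_select=9, step=0.2)
        X_sel = selector.fit_transform(X, y)
        print("selected probes:", np.flatnonzero(selector.support_))
        print("CV accuracy:", cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5).mean())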

  16. High-resolution global topographic index values for use in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Marthews, T. R.; Dadson, S. J.; Lehner, B.; Abele, S.; Gedney, N.

    2015-01-01

    Modelling land surface water flow is of critical importance for simulating land surface fluxes, predicting runoff and water table dynamics and for many other applications of Land Surface Models. Many approaches are based on the popular hydrology model TOPMODEL (TOPography-based hydrological MODEL), and the most important parameter of this model is the well-known topographic index. Here we present new, high-resolution parameter maps of the topographic index for all ice-free land pixels calculated from hydrologically conditioned HydroSHEDS (Hydrological data and maps based on SHuttle Elevation Derivatives at multiple Scales) data using the GA2 algorithm (GRIDATB 2). At 15 arcsec resolution, these layers are 4 times finer than the resolution of the previously best-available topographic index layers, the compound topographic index of HYDRO1k (CTI). For the largest river catchments occurring on each continent we found that, in comparison with CTI, our revised values were up to 20% lower in, e.g. the Amazon. We found the highest catchment means were for the Murray-Darling and Nelson-Saskatchewan rather than for the Amazon and St. Lawrence as found from the CTI. For the majority of large catchments, however, the spread of our new GA2 index values is very similar to those of CTI, yet with more spatial variability apparent at fine scale. We believe these new index layers represent greatly improved global-scale topographic index values and hope that they will be widely used in land surface modelling applications in the future.

  17. Stability analysis of operator splitting for large-scale ocean modeling

    SciTech Connect

    Higdon, R.L.; Bennett, A.F.

    1996-02-01

    The ocean plays a crucial role in the Earth's climate system, and an improved understanding of that role will be aided greatly by high-resolution simulations of global ocean circulation over periods of many years. For such simulations the computational requirements are extremely demanding and maximum efficiency is essential. However, the governing equations typically used for ocean modeling admit wave velocities having widely varying magnitudes, and this situation can create serious problems with the efficiency of numerical algorithms. One common approach to resolving these problems is to split the fast and slow dynamics into separate sub-problems. The fast motions are nearly independent of depth, and it is natural to try to model these motions with a two-dimensional system of equations. These fast equations could be solved with an implicit time discretization or with an explicit method with short time steps. The slow motions would then be modeled with a three-dimensional system that is solved explicitly with long time steps that are determined by the slow wave speeds. However, if the splitting is inexact, then the equations that model the slow motions might actually contain some fast components, so the stability of explicit algorithms for the slow equations could come into doubt. In this paper we discuss some general features of the operator splitting problem, and we then describe an example of such a splitting and show that instability can arise in that case. 21 refs., 7 figs.
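
    A generic illustration of the split-explicit idea discussed above, not the authors' specific ocean scheme: the slow tendency is frozen over a long step while the fast tendency is subcycled with short steps. The paper's analysis concerns what can go wrong when the split between 'fast' and 'slow' is inexact.

        def split_explicit_step(u, f_slow, f_fast, dt_slow, n_sub):
            """One split-explicit step: evaluate the slow tendency once,
            then subcycle the fast tendency with step dt_slow / n_sub."""
            s = f_slow(u)
            dt_fast = dt_slow / n_sub
            for _ in range(n_sub):
                u = u + dt_fast * (f_fast(u) + s)
            return u

        # Toy scalar system: a stiff fast mode plus a slow relaxation
        u = 1.0
        for _ in range(100):
            u = split_explicit_step(u,
                                    f_slow=lambda x: -0.01 * x,  # slow dynamics
                                    f_fast=lambda x: -2.0 * x,   # fast dynamics
                                    dt_slow=0.5, n_sub=20)
        print(round(u, 6))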

  18. Modeling the Large-Scale Structure and Long-Term Evolution of a Barchan Dune Field

    NASA Astrophysics Data System (ADS)

    Worman, S.; Littlewood, R. C.; Murray, A.; Andreotti, B.; Claudin, P.

    2011-12-01

    Barchans are mobile, crescent-shaped dunes that form atop hard, flat surfaces in regions where sediment supply is limited and fluid flow is approximately unidirectional. At the dune scale, coupled models of sand transport and fluid dynamics have successfully reproduced their characteristic behavior and morphology. However, in nature, dunes rarely exist as isolated individuals but are instead found in highly structured fields: within a dune field with a cross-wind dimension on the order of 10 kilometers, patches of dunes can alternate spatially with sparse or dune-free regions, and the patches may have different characteristic dune size and spacing. The origin of such enigmatic structures does not seem to be explained by differences in external forcing and remains an open research question. We use a partly rule-based numerical model that treats single dunes as discrete entities, based on the results of a dune-scale fluid-dynamics/sediment-transport model. Our model integrates all currently known processes through which dunes interact with one another (i.e. sand flux exchange, collision, and calving). A rich array of patterns similar to those observed in nature emerges from these relatively simple interactions, offering a potential explanation of field-scale phenomena. We also develop simple statistics to characterize these structures and furnish testable predictions for future empirical work.

  19. Robust classification of protein variation using structural modelling and large-scale data integration

    PubMed Central

    Baugh, Evan H.; Simmons-Edler, Riley; Müller, Christian L.; Alford, Rebecca F.; Volfovsky, Natalia; Lash, Alex E.; Bonneau, Richard

    2016-01-01

    Existing methods for interpreting protein variation focus on annotating mutation pathogenicity rather than detailed interpretation of variant deleteriousness and frequently use only sequence-based or structure-based information. We present VIPUR, a computational framework that seamlessly integrates sequence analysis and structural modelling (using the Rosetta protein modelling suite) to identify and interpret deleterious protein variants. To train VIPUR, we collected 9477 protein variants with known effects on protein function from multiple organisms and curated structural models for each variant from crystal structures and homology models. VIPUR can be applied to mutations in any organism's proteome with improved generalized accuracy (AUROC 0.83) and interpretability (AUPR 0.87) compared to other methods. We demonstrate that VIPUR's predictions of deleteriousness match the biological phenotypes in ClinVar and provide a clear ranking of prediction confidence. We use VIPUR to interpret known mutations associated with inflammation and diabetes, demonstrating the structural diversity of disrupted functional sites and improved interpretation of mutations associated with human diseases. Lastly, we demonstrate VIPUR's ability to highlight candidate variants associated with human diseases by applying VIPUR to de novo variants associated with autism spectrum disorders. PMID:26926108

  20. Large Scale Tissue Morphogenesis Simulation on Heterogeneous Systems Based on a Flexible Biomechanical Cell Model.

    PubMed

    Jeannin-Girardon, Anne; Ballet, Pascal; Rodin, Vincent

    2015-01-01

    The complexity of biological tissue morphogenesis makes in silico simulations of such systems very interesting as a way to gain a better understanding of the underlying mechanisms ruling the development of multicellular tissues. This complexity is mainly due to two elements: firstly, biological tissues comprise a large number of cells; secondly, these cells exhibit complex interactions and behaviors. To address these two issues, we propose two tools. The first is a virtual cell model that comprises two main elements: firstly, a mechanical structure (membrane, cytoskeleton, and cortex) and, secondly, the main behaviors exhibited by biological cells, i.e., mitosis, growth, differentiation, and molecule consumption and production, as well as the physical constraints imposed by the environment. An artificial chemistry is also included in the model. This virtual cell model is coupled to an agent-based formalism. The second tool is a simulator that relies on the OpenCL framework. It allows efficient parallel simulations on heterogeneous devices such as micro-processors or graphics processors. We present two case studies validating the implementation of our model in our simulator: cellular proliferation controlled by cell signalling, and limb growth in a virtual organism. PMID:26451816

  1. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  2. Effects of Large-Scale Flows on Coronal Abundances: Multispecies Models and TRACE Observations

    NASA Astrophysics Data System (ADS)

    Lenz, D. D.

    2003-05-01

    Understanding coronal abundances is crucial for interpreting coronal observations and for understanding coronal physical processes and heating. Bulk flows and gravity, both unmistakably present in the corona, significantly affect abundances. We present multispecies simulations of long-lived coronal structures and compare model results with TRACE observations, focusing on abundance variations and flows.

  3. Toward an Aspirational Learning Model Gleaned from Large-Scale Assessment

    ERIC Educational Resources Information Center

    Diket, Read M.; Xu, Lihua; Brewer, Thomas M.

    2014-01-01

    The aspirational model resulted from the authors' secondary analysis of the Mother/Child (M/C) test block from the 2008 National Assessment of Educational Progress restricted data that examined the responses of the national sample of 8th-grade students (n = 1648). This test block presented no artmaking task and consisted of the same 13…

  4. A balanced water layer concept for subglacial hydrology in large scale ice sheet models

    NASA Astrophysics Data System (ADS)

    Goeller, S.; Thoma, M.; Grosfeld, K.; Miller, H.

    2012-12-01

    There is currently no doubt about the existence of a widespread hydrological network under the Antarctic ice sheet, which lubricates the ice base and thus leads to increased ice velocities. Consequently, ice models should incorporate basal hydrology to obtain meaningful results for future ice dynamics and their contribution to global sea level rise. Here, we introduce the balanced water layer concept, covering two prominent subglacial hydrological features for ice sheet modeling on a continental scale: the evolution of subglacial lakes and balance water fluxes. We couple it to the thermomechanical ice-flow model RIMBAY and apply it to a synthetic model domain inspired by the Gamburtsev Mountains, Antarctica. In our experiments we demonstrate the dynamic generation of subglacial lakes and their impact on the velocity field of the overlying ice sheet, resulting in a negative ice mass balance. Furthermore, we introduce an elementary parametrization of the water flux-basal sliding coupling and reveal the predominance of ice loss through the resulting ice streams over the stabilizing influence of less hydrologically active areas. We point out that established balance flux schemes quantify these effects only partially, as they lack the ability to store subglacial water.

  5. Large-scale shell model calculations for structure of Ni and Cu isotopes

    NASA Astrophysics Data System (ADS)

    Tsunoda, Yusuke; Otsuka, Takaharu; Shimizu, Noritaka; Honma, Michio; Utsuno, Yutaka

    2014-09-01

    We study the nuclear structure of Ni and Cu isotopes, especially neutron-rich ones in the N ~ 40 region, by Monte Carlo shell model (MCSM) calculations in the pfg9d5 model space (0f7/2, 1p3/2, 0f5/2, 1p1/2, 0g9/2, 1d5/2). Effects of excitation across the N = 40 and other gaps are important for describing properties such as deformation, and we include these effects by using the pfg9d5 model space. As an advantage of the MCSM, we can calculate in this large model space without any truncation. In the MCSM, a wave function is represented as a linear combination of angular-momentum- and parity-projected deformed Slater determinants. We can study the intrinsic shapes of nuclei by using the quadrupole deformations of the MCSM basis states before projection. In doubly magic 68Ni, the calculation yields oblate and prolate deformed bands as well as a spherical ground state. Such shape coexistence can be explained by introducing the mechanism called Type II shell evolution, driven by changes of configurations within the same nucleus, mainly due to the tensor force.

  6. Robust classification of protein variation using structural modelling and large-scale data integration.

    PubMed

    Baugh, Evan H; Simmons-Edler, Riley; Müller, Christian L; Alford, Rebecca F; Volfovsky, Natalia; Lash, Alex E; Bonneau, Richard

    2016-04-01

    Existing methods for interpreting protein variation focus on annotating mutation pathogenicity rather than detailed interpretation of variant deleteriousness and frequently use only sequence-based or structure-based information. We present VIPUR, a computational framework that seamlessly integrates sequence analysis and structural modelling (using the Rosetta protein modelling suite) to identify and interpret deleterious protein variants. To train VIPUR, we collected 9477 protein variants with known effects on protein function from multiple organisms and curated structural models for each variant from crystal structures and homology models. VIPUR can be applied to mutations in any organism's proteome with improved generalized accuracy (AUROC 0.83) and interpretability (AUPR 0.87) compared to other methods. We demonstrate that VIPUR's predictions of deleteriousness match the biological phenotypes in ClinVar and provide a clear ranking of prediction confidence. We use VIPUR to interpret known mutations associated with inflammation and diabetes, demonstrating the structural diversity of disrupted functional sites and improved interpretation of mutations associated with human diseases. Lastly, we demonstrate VIPUR's ability to highlight candidate variants associated with human diseases by applying VIPUR to de novo variants associated with autism spectrum disorders. PMID:26926108

  7. Methods for Modeling and Decomposing Treatment Effect Variation in Large-Scale Randomized Trials

    ERIC Educational Resources Information Center

    Ding, Peng; Feller, Avi; Miratrix, Luke

    2015-01-01

    Recent literature has underscored the critical role of treatment effect variation in estimating and understanding causal effects. This approach, however, is in contrast to much of the foundational research on causal inference. Linear models, for example, classically rely on constant treatment effect assumptions, or treatment effects defined by…

  8. Uncovering Implicit Assumptions: A Large-Scale Study on Students' Mental Models of Diffusion

    ERIC Educational Resources Information Center

    Stains, Marilyne; Sevian, Hannah

    2015-01-01

    Students' mental models of diffusion in a gas phase solution were studied through the use of the Structure and Motion of Matter (SAMM) survey. This survey permits identification of categories of ways students think about the structure of the gaseous solute and solvent, the origin of motion of gas particles, and trajectories of solute particles in…

  9. HYPERstream: a multi-scale framework for streamflow routing in large-scale hydrological model

    NASA Astrophysics Data System (ADS)

    Piccolroaz, Sebastiano; Di Lazzaro, Michele; Zarlenga, Antonio; Majone, Bruno; Bellin, Alberto; Fiori, Aldo

    2016-05-01

    We present HYPERstream, an innovative streamflow routing scheme based on the width function instantaneous unit hydrograph (WFIUH) theory, which is specifically designed to facilitate coupling with weather forecasting and climate models. The proposed routing scheme preserves the geomorphological dispersion of the river network when dealing with horizontal hydrological fluxes, irrespective of the computational grid size inherited from the overlying climate model providing the meteorological forcing. This is achieved by simulating routing within the river network through suitable transfer functions obtained by applying the WFIUH theory to the desired level of detail. The underlying principle is similar to the block-effective dispersion employed in groundwater hydrology, with the transfer functions used to represent the effect on streamflow of morphological heterogeneity at scales smaller than the computational grid. Transfer functions are constructed for each grid cell with respect to the nodes of the network where streamflow is simulated, by taking advantage of the detailed morphological information contained in the digital elevation model (DEM) of the zone of interest. These characteristics make HYPERstream well suited for multi-scale applications, ranging from catchment up to continental scale, and for investigating extreme events (e.g., floods) that require an accurate description of routing through the river network. The routing scheme enjoys a parsimonious parametrization and computational efficiency, leading to a dramatic reduction of the computational effort with respect to fully gridded models at a comparable level of accuracy. HYPERstream is designed with a simple and flexible modular structure that allows for the selection of any rainfall-runoff model to be coupled with the routing scheme and the choice of different hillslope processes to be represented, making the framework particularly suitable for massive parallelization, customization according to
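
    The transfer-function idea admits a compact sketch: each grid cell contributes runoff to a network node through its own discrete WFIUH-derived transfer function, and streamflow at the node is the sum of the convolutions. The Python fragment below is a minimal illustration under that reading of the abstract, not the HYPERstream code; the cell names and series are invented.

        import numpy as np

        def route(runoff, transfer):
            """runoff: dict cell -> runoff series (m3/s);
            transfer: dict cell -> discrete transfer function (unit mass, from the WFIUH)."""
            n = len(next(iter(runoff.values())))
            q = np.zeros(n)
            for cell, r in runoff.items():
                q += np.convolve(r, transfer[cell])[:n]  # per-cell geomorphological dispersion
            return q

        runoff = {"cell_a": np.array([5.0, 2.0, 0.0, 0.0]), "cell_b": np.array([1.0, 4.0, 1.0, 0.0])}
        transfer = {"cell_a": np.array([0.2, 0.5, 0.3]), "cell_b": np.array([0.6, 0.4])}
        print(route(runoff, transfer))  # streamflow at the downstream node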

  10. North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends

    SciTech Connect

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat

    2015-05-22

    This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated so more

  12. Exploring the Potential of Large Scale Distributed Modeling of Snow Accumulation and Melt on GPUs

    NASA Astrophysics Data System (ADS)

    Bisht, G.; Kumar, M.

    2010-12-01

    Water from snow melt is a critical resource in watersheds of the western US, Canada, and other similar regions of the world. The distribution of snow and melt-water controls the temporal and spatial distributions of soil moisture, evapo-transpiration (ET), recharge, stream-aquifer interaction and other hydrologic processes within the watershed. It also influences the quantity and timing of water availability in downstream areas. In spite of the serious impacts on water resources at multiple scales, the knowledge base for prediction of snow accumulation and melt in mountainous watersheds is notably weak. Physics-based, distributed snow models such as UEB, SNTHERM, SHAW and ISNOBAL have positioned themselves as an appropriate tool for understanding snow-process interactions and predicting melt, and have been applied in numerous watersheds with varying degrees of success. In spite of the significant advances in hardware speed and programming efficiency, the application of the above-mentioned snow models has mostly been limited to small watersheds. Application of these models at finer spatio-temporal resolution, in large domains, and for longer time periods, to address problems such as quantifying the response of snow-dominated watersheds to climate change scenarios, is restricted by the large computational cost involved. Additionally, the computational requirement of current-generation snow models is expected to rise as improved snow-depth characterization and a tighter coupling with hydrologic processes are incorporated. This poses a considerable challenge to their application within feasible time frames. We suggest alleviating this problem by taking advantage of high performance computing (HPC) systems based on Graphics Processing Unit (GPU) processors. High-performance GPUs work like SIMD processors but can take advantage of a larger number of cores, thus providing higher throughput. As of June 2010, the second fastest supercomputer in the world uses NVidia Tesla

  13. Two modeling approaches for quantifying hydrologic and biologic controls on large-scale nitrogen cycling, Upper Rio Grande, NM

    NASA Astrophysics Data System (ADS)

    Oelsner, G. P.; Brooks, P. D.; Hogan, J. F.; Meixner, T.; Tidwell, V.; Roach, J. D.

    2007-12-01

    Variations in nutrient concentrations can be caused by both abiotic changes in hydrology and biotic processes. Most process-level studies of nutrient cycling are conducted in small catchment systems and at points on large river systems. Comparatively little is understood about how biotic and abiotic processes influence large-scale nutrient concentrations and variability in large river systems. To address this issue, we performed biannual synoptic chemical sampling along a 640 km reach of the Upper Rio Grande for five years to determine the large-scale patterns in dissolved carbon and nitrogen concentrations, and then used two simple models to evaluate the abiotic and biotic processes that generate the observed large-scale patterns. First, we used a Cl mixing model, validated with Br, to quantify the effects of evapoconcentration, tributaries, and point sources on dissolved nitrogen and carbon concentrations. Ratios of observed to predicted concentrations close to 1 suggest that abiotic hydrologic processes are the dominant controls on concentrations, while ratios departing from 1 indicate that biological processes are important controls. Our conservative mixing model generally captured patterns in DOC concentrations, suggesting minimal net biological processing. In contrast, both nitrate and TDN concentrations were altered biogeochemically in all reaches. In areas where observed and predicted values differed, the spatial variability of river characteristics was more strongly correlated with relative nutrient retention than was seasonal or inter-annual discharge variability. Second, we used an integrated surface water - groundwater dynamic simulation model to evaluate the agricultural conveyance and riparian systems as potential nitrogen removal locations. Under conservative behavior, modeled nitrate concentrations were higher than observed in the groundwater, river, and conveyance channels. We calibrated the model using denitrification in the
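
    The first diagnostic, observed-to-predicted ratios under conservative mixing, can be sketched in a few lines. The concentrations below are hypothetical, and the actual analysis also accounts for tributary and point-source mass balance.

        def conservative_prediction(c_up, cl_up, cl_down):
            # evapoconcentration scales every conservatively mixed solute like Cl
            return c_up * (cl_down / cl_up)

        cl_up, cl_down = 20.0, 35.0    # mg/L chloride at reach endpoints (hypothetical)
        tdn_up, tdn_obs = 0.40, 0.38   # mg N/L total dissolved nitrogen (hypothetical)
        tdn_pred = conservative_prediction(tdn_up, cl_up, cl_down)
        print(tdn_obs / tdn_pred)      # a ratio well below 1 suggests biological retention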

  14. Large scale behavior of a two-dimensional model of anisotropic branched polymers.

    PubMed

    Knežević, Milan; Knežević, Dragica

    2013-10-28

    We study critical properties of anisotropic branched polymers modeled by semi-directed lattice animals on a triangular lattice. Using the exact transfer-matrix approach on strips of large width, together with phenomenological renormalization-group analysis, we obtained good estimates of various critical exponents in the whole high-temperature region, including the point of the collapse transition. Our numerical results suggest that this collapse transition belongs to the universality class of directed percolation. PMID:24182076

  16. Modeling ramp compression experiments using large-scale molecular dynamics simulation.

    SciTech Connect

    Mattsson, Thomas Kjell Rene; Desjarlais, Michael Paul; Grest, Gary Stephen; Templeton, Jeremy Alan; Thompson, Aidan Patrick; Jones, Reese E.; Zimmerman, Jonathan A.; Baskes, Michael I.; Winey, J. Michael; Gupta, Yogendra Mohan; Lane, J. Matthew D.; Ditmire, Todd; Quevedo, Hernan J.

    2011-10-01

    Molecular dynamics (MD) simulation is an invaluable tool for studying problems sensitive to atom-scale physics such as structural transitions, discontinuous interfaces, non-equilibrium dynamics, and elastic-plastic deformation. In order to apply this method to the modeling of ramp-compression experiments, several challenges must be overcome: accuracy of interatomic potentials, length and time scales, and extraction of continuum quantities. We have completed a 3-year LDRD project with the goal of developing molecular dynamics simulation capabilities for modeling the response of materials to ramp compression. The techniques we have developed fall into three categories: (i) molecular dynamics methods, (ii) interatomic potentials, and (iii) calculation of continuum variables. Highlights include the development of an accurate interatomic potential describing shock melting of beryllium, a scaling technique for modeling slow ramp-compression experiments using fast-ramp MD simulations, and a technique for extracting plastic strain from MD simulations. All of these methods have been implemented in Sandia's LAMMPS MD code, ensuring their widespread availability to dynamic materials research at Sandia and elsewhere.

  17. Transforming GIS data into functional road models for large-scale traffic simulation.

    PubMed

    Wilkie, David; Sewall, Jason; Lin, Ming C

    2012-06-01

    There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization: tools that will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodologies. We test the 3D models of road networks generated by our algorithm on real-time traffic simulation using both macroscopic and microscopic techniques. PMID:21690653

  18. User Friendly Open GIS Tool for Large Scale Data Assimilation - a Case Study of Hydrological Modelling

    NASA Astrophysics Data System (ADS)

    Gupta, P. K.

    2012-08-01

    Open source software (OSS) coding has tremendous advantages over proprietary software. These are primarily fuelled by high-level programming languages (JAVA, C++, Python, etc.) and open source geospatial libraries (GDAL/OGR, GEOS, GeoTools, etc.). Quantum GIS (QGIS) is a popular open source GIS package, which is licensed under the GNU GPL and is written in C++. It allows users to perform specialised tasks by creating plugins in C++ and Python. This research article emphasises exploiting this capability of QGIS to build and implement plugins across multiple platforms using the easy-to-learn Python programming language. In the present study, a tool has been developed to assimilate large spatio-temporal datasets such as national-level gridded rainfall, temperature, topographic (digital elevation model, slope, aspect), landuse/landcover and multi-layer soil data for input into hydrological models. At present this tool has been developed for the Indian sub-continent. An attempt is also made to use popular scientific and numerical libraries to create custom applications for digital inclusion. In hydrological modelling, calibration and validation are important steps that are carried out repeatedly for the same study region. The developed tool is user-friendly and streamlines these repetitive processes by reducing the time required for data management and handling. Moreover, it was found that the developed tool can easily assimilate large datasets in an organised manner.
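
    For readers unfamiliar with the QGIS plugin mechanism the article builds on, a minimal Python plugin skeleton looks roughly like the sketch below. The class and menu names are hypothetical; the abstract does not show the actual tool's code.

        from qgis.PyQt.QtWidgets import QAction

        class AssimilationTool:
            def __init__(self, iface):
                self.iface = iface  # QGIS interface handle supplied by the plugin loader

            def initGui(self):
                self.action = QAction("Assimilate gridded inputs", self.iface.mainWindow())
                self.action.triggered.connect(self.run)
                self.iface.addPluginToMenu("&Hydro Data Tool", self.action)

            def unload(self):
                self.iface.removePluginMenu("&Hydro Data Tool", self.action)

            def run(self):
                pass  # clip/resample rainfall, DEM and soil rasters, write model input files

        def classFactory(iface):  # entry point called by QGIS when the plugin loads
            return AssimilationTool(iface)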

  19. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale simulations

    NASA Astrophysics Data System (ADS)

    Heng, Yi; Hoffmann, Lars; Griessbach, Sabine; Rößler, Thomas; Stein, Olaf

    2016-05-01

    An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., unit simulations for the reconstruction of volcanic emissions and final forward simulations. Both types of transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric InfraRed Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final forward simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. By using the critical success index (CSI), the simulation results are evaluated against the AIRS observations. Compared to the results with an assumption of a constant flux of SO2 emissions, our inversion approach leads to an improvement
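
    Sequential importance resampling, the statistical core named above, reduces to a short weighting-and-resampling loop. The sketch below is generic (synthetic scalar particles and likelihoods), not the MPTRAC-coupled system.

        import numpy as np

        rng = np.random.default_rng(0)

        def sir_step(particles, log_lik):
            """particles: (N, d) candidate emission parameters; log_lik: (N,) data misfits."""
            w = np.exp(log_lik - log_lik.max())
            w /= w.sum()
            idx = rng.choice(len(particles), size=len(particles), p=w)
            return particles[idx]  # resampled ensemble with equal weights

        p = rng.uniform(0.0, 2.0, size=(500, 1))              # toy "emission rate" particles
        p = sir_step(p, -0.5 * ((p[:, 0] - 1.0) / 0.1) ** 2)  # score against a target of 1.0
        print(p.mean())  # the ensemble concentrates near the observation-favoured value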

  20. Large-scale parallel lattice Boltzmann-cellular automaton model of two-dimensional dendritic growth

    NASA Astrophysics Data System (ADS)

    Jelinek, Bohumir; Eshraghi, Mohsen; Felicelli, Sergio; Peters, John F.

    2014-03-01

    An extremely scalable lattice Boltzmann (LB)-cellular automaton (CA) model for simulations of two-dimensional (2D) dendritic solidification under forced convection is presented. The model incorporates effects of phase change, solute diffusion, melt convection, and heat transport. The LB model represents the diffusion, convection, and heat transfer phenomena. The dendrite growth is driven by a difference between the actual and equilibrium liquid composition at the solid-liquid interface. The CA technique is deployed to track the new interface cells. The computer program was parallelized using the Message Passing Interface (MPI) technique. Parallel scaling of the algorithm was studied and major scalability bottlenecks were identified. Efficiency loss attributable to the high memory bandwidth requirement of the algorithm was observed when using multiple cores per processor. Parallel writing of the output variables of interest was implemented in the binary Hierarchical Data Format 5 (HDF5) to improve the output performance and to simplify visualization. Calculations were carried out in single-precision arithmetic without significant loss in accuracy, resulting in a 50% reduction of memory and computational time requirements. The presented solidification model shows very good scalability up to centimeter-size domains, including more than ten million dendrites. Catalogue identifier: AEQZ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEQZ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, UK. Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 29,767. No. of bytes in distributed program, including test data, etc.: 3,131,367. Distribution format: tar.gz. Programming language: Fortran 90. Computer: Linux PC and clusters. Operating system: Linux. Has the code been vectorized or parallelized?: Yes, parallelized using MPI.
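
    As background for the MPI parallelization discussed above, the characteristic communication step in a distributed CA/LB grid is the ghost-row (halo) exchange between neighbouring ranks. The mpi4py sketch below shows that pattern for a 1-D row decomposition; it is purely illustrative and unrelated to the distributed Fortran code.

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        nrows, ncols = 100, 400              # local block size (hypothetical)
        grid = np.zeros((nrows + 2, ncols))  # one ghost row above and below
        up = rank - 1 if rank > 0 else MPI.PROC_NULL
        down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

        # send the first real row up and receive the lower ghost row, and vice versa
        comm.Sendrecv(np.ascontiguousarray(grid[1]), dest=up, recvbuf=grid[-1], source=down)
        comm.Sendrecv(np.ascontiguousarray(grid[-2]), dest=down, recvbuf=grid[0], source=up)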

  1. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    NASA Astrophysics Data System (ADS)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are sensitive mostly to environmental, natural and human-made factors, implying an imminent need for a spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements indicate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed, based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at future time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, to localize surfaces within the objects where a high-accuracy reconstruction process needs to be activated at the forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information. The open source 3DCity

  2. Large-Scale Physical Modelling of Complex Tsunami-Generated Currents

    NASA Astrophysics Data System (ADS)

    Lynett, P. J.; Kalligeris, N.; Ayca, A.

    2014-12-01

    For tsunamis passing through sharp bathymetric variability, such as a shoal or a harbor entrance channel, z-axis vortical motions are created. These structures are often characterized by a horizontal length scale that is much greater than the local depth and are herein called shallow turbulent coherent structures (TCS). Shallow TCS can greatly increase the drag force on affected infrastructure and the ability of the flow to transport debris and floating objects. Shallow TCS typically manifest as large "whirlpools" during tsunamis, very commonly in ports and harbors. Such structures have been observed numerous times in tsunamis over the past decade, and are postulated as the cause of large vessels parting their mooring lines due to yaw induced by the rotational eddy. Through the NSF NEES program, a laboratory study to examine a shallow TCS was performed during the summer of 2014. To generate this phenomenon, a 60-second-period long wave was created and then interacted with a breakwater in the basin, forcing the generation of a large and stable TCS. The model scale is 1:30, equating to a 5.5-minute period and 0.5 m amplitude at the prototype scale. Surface tracers, dye studies, ADVs, wave gages, and bottom pressure sensors are used to characterize the flow. Complex patterns of surface convergence and divergence are easily seen in the data, indicating three-dimensional flow patterns. Dye studies show areas of relatively high and low spatial mixing. Model vessels are placed in the basin so that ship motion in the presence of these rapidly varying currents might be captured. The data obtained from this laboratory study should permit a better physical understanding of the nearshore currents that tsunamis are known to generate, as well as provide a benchmark for numerical modelers who wish to simulate currents.
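
    The quoted scale conversion follows Froude similarity (assumed here, as is standard for free-surface physical models): with length scale λ = 30, times scale by √λ and lengths by λ; in LaTeX,

        T_p = T_m\sqrt{\lambda} = 60\,\mathrm{s}\times\sqrt{30}\approx 329\,\mathrm{s}\approx 5.5\,\mathrm{min},
        \qquad a_m = a_p/\lambda = 0.5\,\mathrm{m}/30 \approx 1.7\,\mathrm{cm},

    consistent with the prototype period and amplitude stated above.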

  3. Modeling change from large-scale high-dimensional spatio-temporal array data

    NASA Astrophysics Data System (ADS)

    Lu, Meng; Pebesma, Edzer

    2014-05-01

    The massive data that come from Earth observation satellites and other sensors provide significant information for modeling global change. At the same time, the high dimensionality of the data has brought challenges in data acquisition, management, effective querying, and processing. In addition, the output of Earth system modeling tends to be data-intensive and needs methodologies for storage, validation, analysis, and visualization, e.g. as maps. An important proportion of Earth system observations and simulated data can be represented as multi-dimensional array data, which has received increasing attention in big data management and spatial-temporal analysis. Case studies will be developed in natural sciences such as climate change, hydrological modeling, and sediment dynamics, for which addressing big data problems is necessary. Multi-dimensional array-based database management and analytics systems such as Rasdaman, SciDB, and R will be applied to these cases. From these studies we hope to learn the strengths and weaknesses of these systems, how they might work together, and how the semantics of array operations differ, through addressing the problems associated with big data. Research questions include (a minimal sketch for the first question follows this list):
    • How can we reduce dimensions spatially and temporally, or thematically?
    • How can we extend existing GIS functions to work on multidimensional arrays?
    • How can we combine data sets of different dimensionality or different resolutions?
    • Can map algebra be extended to an intelligible array algebra?
    • What are effective semantics for array programming of dynamic data-driven applications?
    • In which sense are space and time special, as dimensions, compared to other properties?
    • How can we make the analysis of multi-spectral, multi-temporal and multi-sensor earth observation data easy?
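
    As one concrete (and non-prescriptive) answer to the first question, labelled multi-dimensional array libraries such as xarray already express spatial and temporal dimension reduction as one-liners; the data here are synthetic.

        import numpy as np
        import pandas as pd
        import xarray as xr

        da = xr.DataArray(
            np.random.rand(365, 180, 360),
            dims=("time", "lat", "lon"),
            coords={"time": pd.date_range("2000-01-01", periods=365)},
        )
        monthly = da.resample(time="1MS").mean()  # temporal aggregation to month starts
        coarse = da.coarsen(lat=2, lon=2).mean()  # 2x2 spatial block averaging
        print(monthly.shape, coarse.shape)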

  4. Examining tissue differentiation stability through large scale, multi-cellular pathway modeling.

    SciTech Connect

    May, Elebeoba Eni; Schiek, Richard Louis

    2005-03-01

    Using a multi-cellular, pathway-model approach, we investigate the stability of the Drosophila sp. segmental differentiation network as a function of initial conditions. While this network's functionality has been investigated in the absence of noise, this is the first work to specifically investigate how natural systems respond to random errors or noise. Our findings agree with earlier results that the overall network is robust in the absence of noise. However, when one includes random initial perturbations in intracellular levels of the Wingless (WG) protein, the robustness of the system decreases dramatically. The effect of noise on the system is not linear, and appears to level out at high noise levels.

  5. Cosmic microwave background and large-scale structure constraints on a simple quintessential inflation model

    SciTech Connect

    Rosenfeld, Rogerio; Frieman, Joshua A.

    2006-11-01

    We derive constraints on a simple quintessential inflation model, based on a spontaneously broken Φ4 theory, imposed by the Wilkinson Microwave Anisotropy Probe three-year data (WMAP3) and by galaxy clustering results from the Sloan Digital Sky Survey (SDSS). We find that the scale of symmetry breaking must be larger than about 3 Planck masses in order for inflation to generate acceptable values of the scalar spectral index and of the tensor-to-scalar ratio. We also show that the resulting quintessence equation-of-state can evolve rapidly at recent times and hence can potentially be distinguished from a simple cosmological constant in this parameter regime.

  6. Characterizing and modeling the efficiency limits in large-scale production of hyperpolarized 129Xe

    PubMed Central

    Freeman, M.S.; Emami, K.; Driehuys, B.

    2014-01-01

    The ability to produce liter volumes of highly spin-polarized 129Xe enables a wide range of investigations, most notably in the fields of materials science and biomedical MRI. However, for nearly all polarizers built to date, both peak 129Xe polarization and the rate at which it is produced fall far below those predicted by the standard model of Rb-metal-vapor spin-exchange optical pumping (SEOP). In this work, we comprehensively characterized a high-volume, flow-through 129Xe polarizer using three different SEOP cells with internal volumes of 100, 200 and 300 cc and two types of optical sources: a broad-spectrum 111-W laser (FWHM = 1.92 nm) and a line-narrowed 71-W laser (FWHM = 0.39 nm). By measuring 129Xe polarization as a function of gas flow rate, we extracted peak polarization and polarization production rate across a wide range of laser absorption levels. Peak polarization for all cells consistently remained a factor of 2-3 lower than predicted at all absorption levels. Moreover, although production rates increased with laser absorption, they did so much more slowly than predicted by the standard theoretical model and basic spin-exchange efficiency arguments. Underperformance was most notable in the smallest optical cells. We propose that all these systematic deviations from theory can be explained by invoking the presence of paramagnetic Rb clusters within the vapor. Cluster formation within saturated alkali vapors is well established, and their interaction with resonant laser light was recently shown to create plasma-like conditions. Such cluster systems cause both Rb and 129Xe depolarization, as well as excess photon scattering. These effects were incorporated into the SEOP model by assuming that clusters are activated in proportion to the excited-state Rb number density and by further estimating physically reasonable values for the nanocluster-induced, velocity-averaged spin-destruction cross-section for Rb (⟨σcluster-Rb v⟩ ≈ 4×10-7 cm3 s-1), 129Xe

  7. Stochastic and recursive calibration for operational, large-scale, agricultural land and water use management models

    NASA Astrophysics Data System (ADS)

    Maneta, M. P.; Kimball, J. S.; Jencso, K. G.

    2015-12-01

    Managing the impact of climatic cycles on agricultural production, on land allocation, and on the state of active and projected water sources is challenging. This is because, in addition to the uncertainties associated with climate projections, it is difficult to anticipate how farmers will respond to climatic change or to economic and policy incentives. Some sophisticated decision support systems available to water managers consider farmers' adaptive behavior, but they are data-intensive and difficult to apply operationally over large regions. Satellite-based observational technologies, in conjunction with models and assimilation methods, create an opportunity for new, cost-effective analysis tools to support policy and decision-making over large spatial extents at seasonal scales. We present an integrated modeling framework that can be driven by satellite remote sensing to enable robust regional assessment and prediction of climatic and policy impacts on agricultural production, water resources, and management decisions. The core of this framework is a widely used model of agricultural production and resource allocation, adapted to be used in conjunction with remote sensing inputs to quantify the amount of land and water farmers allocate to each crop they choose to grow on a seasonal basis in response to reduced or enhanced access to water due to climatic or policy restrictions. A recursive Bayesian update method is used to adjust the model parameters by assimilating information on crop acreage, production, and crop evapotranspiration as a proxy for water use that can be estimated from high-spatial-resolution satellite remote sensing. The data assimilation framework blends new and old information to avoid over-calibration to the specific conditions of a single year and permits the updating of parameters to track gradual changes in the agricultural system. This integrated framework provides an operational means of monitoring and forecasting what crops will be grown
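
    In its simplest linear-Gaussian special case, a recursive Bayesian parameter update of the kind described here is a Kalman-style filter with a forgetting factor that keeps the model from over-fitting any single season. The sketch below is generic, with hypothetical variable names, and is not the authors' implementation.

        import numpy as np

        def recursive_update(theta, P, y, H, R, forget=1.05):
            """theta: parameter mean; P: parameter covariance; y: observed acreage,
            production and ET; H: linearized observation operator; R: observation noise."""
            P = P * forget                          # inflate prior: blend old and new information
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)          # gain
            theta = theta + K @ (y - H @ theta)     # assimilate the remote sensing estimates
            P = (np.eye(len(theta)) - K @ H) @ P
            return theta, P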

  8. Pangolin v1.0, a conservative 2-D advection model towards large-scale parallel calculation

    NASA Astrophysics Data System (ADS)

    Praga, A.; Cariolle, D.; Giraud, L.

    2015-02-01

    To exploit the possibilities of parallel computers, we designed a large-scale two-dimensional atmospheric advection model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach for advection was chosen to ensure mass preservation and to ease parallelization. To overcome the pole restriction on time steps for a regular latitude-longitude grid, Pangolin uses a quasi-area-preserving reduced latitude-longitude grid. The features of the regular grid are exploited to reduce the memory footprint and enable effective parallel performance. In addition, a custom domain decomposition algorithm is presented. To assess the validity of the advection scheme, its results are compared with state-of-the-art models on algebraic test cases. Finally, parallel performance is shown in terms of strong scaling and confirms efficient scalability up to a few hundred cores.
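
    The quasi-area-preserving reduced grid can be illustrated by shrinking the number of longitude cells per latitude band roughly in proportion to cos(latitude). This sketch follows that general principle only; it is not Pangolin's actual grid generator.

        import numpy as np

        def reduced_grid(nlat, nlon_equator):
            lat = np.linspace(-90 + 90 / nlat, 90 - 90 / nlat, nlat)  # band centres
            nlon = np.maximum(3, np.rint(nlon_equator * np.cos(np.radians(lat))).astype(int))
            return lat, nlon  # fewer, wider cells near the poles -> quasi-uniform cell areas

        lat, nlon = reduced_grid(90, 360)
        print(nlon[0], nlon[45])  # few cells in the polar band, the full count near the equator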

  9. Assessment of survivability of liquid water and organic materials through modeling of large-scale impacts

    NASA Astrophysics Data System (ADS)

    Blank, Jennifer

    Comets, estimated to contain up to 25 wt.% organic material as both ices and more complex, refractory compounds, have been proposed as a vehicle for the delivery of organic compounds to the early Earth and other rocky planets. Successful delivery requires that some of the organic materials survive the extreme temperatures associated with impact, but the response of organic compounds to impact (shock) processing under these conditions is unknown. Several researchers have explored organic-delivery scenarios computationally and experimentally. Here, I will summarize work that addresses the issue of impact delivery and focus on current efforts to track the phase state of water during a modeled comet-Earth collision over a range of impact angles. On the basis of model results generated using a three-dimensional shock physics code (GEODYN), I will infer the survivability of organic compounds and liquid water in a range of impact scenarios for comet-Earth and asteroid-Earth collisions. These results will be described in the context of the flux of astromaterials, and organic matter in particular, to young planets.

  10. Thermal Reactor Model for Large-Scale Algae Cultivation in Vertical Flat Panel Photobioreactors.

    PubMed

    Endres, Christian H; Roth, Arne; Brück, Thomas B

    2016-04-01

    Microalgae can grow significantly faster than terrestrial plants and are a promising feedstock for sustainable value-added products encompassing pharmaceuticals, pigments, proteins and, most prominently, biofuels. As the biomass productivity of microalgae strongly depends on the cultivation temperature, detailed information on the reactor temperature as a function of time and geographical location is essential to evaluate the true potential of microalgae as an industrial feedstock. In the present study, a temperature model for an array of vertical flat-plate photobioreactors is presented. It was demonstrated that mutual shading of reactor panels has a decisive effect on the reactor temperature. By optimizing the distance between and thickness of the panels, the occurrence of extreme temperatures and the amplitude of daily temperature fluctuations in the culture medium can be drastically reduced, while maintaining a high level of irradiation on the panels. The presented model was developed and applied to analyze the suitability of various climate zones for algae production in flat-panel photobioreactors. Our results demonstrate that in particular Mediterranean and tropical climates represent favorable locations. Lastly, the thermal energy demand required for the case of active temperature control is determined for several locations. PMID:26950078

  11. Large-scale Environmental Variables and Transition to Deep Convection in Cloud Resolving Model Simulations: A Vector Representation

    SciTech Connect

    Hagos, Samson M.; Leung, Lai-Yung R.

    2012-11-01

    Cloud resolving model simulations and vector analysis are used to develop a quantitative method of assessing regional variations in the relationships between various large-scale environmental variables and the transition to deep convection. Results of the CRM simulations from three tropical regions are used to cluster environmental conditions under which transition to deep convection does and does not take place. Projections of the large-scale environmental variables on the difference between these two clusters are used to quantify the roles of these variables in the transition to deep convection. While the transition to deep convection is most sensitive to moisture and vertical velocity perturbations, the details of the profiles of the anomalies vary from region to region. In comparison, the transition to deep convection is found to be much less sensitive to temperature anomalies over all three regions. The vector formulation presented in this study represents a simple general framework for quantifying various aspects of how the transition to deep convection is sensitive to environmental conditions.
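
    The vector representation can be made concrete: project each environmental profile onto the normalized difference between the means of the "transitioning" and "non-transitioning" clusters, and compare the projections. The numpy sketch below uses synthetic profiles, not the CRM output.

        import numpy as np

        rng = np.random.default_rng(0)
        X_trans = rng.normal(1.0, 1.0, size=(200, 40))  # e.g. moisture profiles on 40 levels
        X_no = rng.normal(0.0, 1.0, size=(200, 40))

        d = X_trans.mean(axis=0) - X_no.mean(axis=0)    # cluster-difference vector
        d /= np.linalg.norm(d)
        # separation of the projections quantifies this variable's role in the transition
        print((X_trans @ d).mean() - (X_no @ d).mean())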

  12. North American extreme temperature events and related large scale meteorological patterns: a review of statistical methods, dynamics, modeling, and trends

    NASA Astrophysics Data System (ADS)

    Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Mike; Gershunov, Alexander; Gutowski, William J.; Gyakum, John R.; Katz, Richard W.; Lee, Yun-Young; Lim, Young-Kwon; Prabhat

    2016-02-01

    The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.

  13. Measurement, Modeling, and Analysis of a Large-scale Blog Server Workload

    SciTech Connect

    Jeon, Myeongjae; Hwang, Jeaho; Kim, Youngjae; Jang, Jae-Wan; Lee, Joonwon; Seo, Euiseong

    2010-01-01

    Despite the growing popularity of Online Social Networks (OSNs), the workload characteristics of OSN servers, such as those hosting blog services, are not well understood. Understanding workload characteristics is important for optimizing and improving the performance of current systems and software based on observed trends. Thus, in this paper, we characterize the system workload of the largest blog hosting servers in South Korea, Tistory. In addition to understanding the system workload of the blog hosting server, we have developed synthesized workloads and obtained the following major findings: (i) the transfer sizes of non-multimedia files and blog articles can be modeled by a truncated Pareto distribution and a log-normal distribution, respectively; and (ii) user accesses to blog articles do not show temporal locality, but are strongly biased toward articles posted with images or audio.
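
    Both fitted distributions can be sampled directly when generating synthetic workloads. The parameters below are hypothetical placeholders, since the fitted values are not given in the abstract.

        import numpy as np

        rng = np.random.default_rng(1)

        def truncated_pareto(alpha, lo, hi, size):
            # inverse-CDF sampling of a Pareto law truncated to [lo, hi]
            u = rng.uniform(size=size)
            c = 1.0 - (lo / hi) ** alpha
            return lo / (1.0 - u * c) ** (1.0 / alpha)

        file_bytes = truncated_pareto(1.2, 1e3, 1e8, size=10_000)        # non-multimedia files
        article_bytes = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)  # blog articles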

  14. A Large-scale Dissemination and Implementation Model for Evidence-based Treatment and Continuing Care

    PubMed Central

    Garner, Bryan R.; Smith, Jane Ellen; Meyers, Robert J.; Godley, Mark D.

    2010-01-01

    Multiple evidence-based treatments for adolescents with substance use disorders are available; however, the diffusion of these treatments in practice remains minimal. A dissemination and implementation model incorporating research-based training components for simultaneous implementation across 33 dispersed sites and over 200 clinical staff is described. Key elements for the diffusion of the Adolescent Community Reinforcement Approach and Assertive Continuing Care were: (a) three years of funding to support local implementation; (b) comprehensive training, including a 3.5 day workshop, bi-weekly coaching calls, and ongoing performance feedback facilitated by a web tool; (c) a clinician certification process; (d) a supervisor certification process to promote long-term sustainability; and (e) random fidelity reviews after certification. Process data are summarized for 167 clinicians and 64 supervisors. PMID:21547241

  15. A simple model for large-scale simulations of fcc metals with explicit treatment of electrons

    NASA Astrophysics Data System (ADS)

    Mason, D. R.; Foulkes, W. M. C.; Sutton, A. P.

    2010-01-01

    The continuing advance in computational power is beginning to make accurate electronic structure calculations routine. Yet, where physics emerges through the dynamics of tens of thousands of atoms in metals, simplifications must be made to the electronic Hamiltonian. We present the simplest extension to a single s-band model [A.P. Sutton, T.N. Todorov, M.J. Cawkwell and J. Hoekstra, Phil. Mag. A 81 (2001) p.1833.] of metallic bonding, namely, the addition of a second s-band. We show that this addition yields a reasonable description of the density of states at the Fermi level, the cohesive energy, formation energies of point defects and elastic constants of some face-centred cubic (fcc) metals.

  16. Solving large-scale sparse eigenvalue problems and linear systems of equations for accelerator modeling

    SciTech Connect

    Gene Golub; Kwok Ko

    2009-03-30

    The solutions of sparse eigenvalue problems and linear systems constitute one of the key computational kernels in the discretization of partial differential equations for the modeling of linear accelerators. The computational challenges faced by existing techniques for solving those sparse eigenvalue problems and linear systems call for continuing research to improve the algorithms, so that the ever-increasing problem sizes required by the physics applications can be tackled. Under the support of this award, the filter algorithm for solving large sparse eigenvalue problems was developed at Stanford to address the computational difficulties of the previous methods, with the goal of enabling accelerator simulations for this class of problems on what was then the world's largest unclassified supercomputer, at NERSC. Specifically, a new method, the Hermitian/skew-Hermitian splitting method, was proposed and researched as an improved method for solving linear systems with non-Hermitian positive definite and semidefinite matrices.
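
    For reference, the Hermitian/skew-Hermitian splitting iteration alternates two shifted solves per step. A dense-matrix toy version (with shift alpha > 0) is sketched below; the funded work targets large sparse systems, so this is illustrative only.

        import numpy as np

        def hss_solve(A, b, alpha, iters=200):
            H = (A + A.conj().T) / 2  # Hermitian part
            S = (A - A.conj().T) / 2  # skew-Hermitian part
            I = np.eye(A.shape[0])
            x = np.zeros_like(b)
            for _ in range(iters):
                x = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
                x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x + b)
            return x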

  17. Reduced Order Modeling for Prediction and Control of Large-Scale Systems.

    SciTech Connect

    Kalashnikova, Irina; Arunajatesan, Srinivasan; Barone, Matthew Franklin; van Bloemen Waanders, Bart Gustaaf; Fike, Jeffrey A.

    2014-05-01

    This report describes work performed from June 2012 through May 2014 as part of a Sandia Early Career Laboratory Directed Research and Development (LDRD) project led by the first author. The objective of the project is to investigate methods for building stable and efficient proper orthogonal decomposition (POD)/Galerkin reduced order models (ROMs): models derived from a sequence of high-fidelity simulations but having a much lower computational cost. Since they are, by construction, small and fast, ROMs can enable real-time simulations of complex systems for on-the-spot analysis, control and decision-making in the presence of uncertainty. Of particular interest to Sandia is the use of ROMs for the quantification of the compressible captive-carry environment, simulated for the design and qualification of nuclear weapons systems. It is an unfortunate reality that many ROM techniques are computationally intractable or lack an a priori stability guarantee for compressible flows. For this reason, this LDRD project focuses on the development of techniques for building provably stable projection-based ROMs. Model reduction approaches based on continuous as well as discrete projection are considered. In the first part of this report, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is developed. The key idea is to apply a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. It is shown that, for many PDE systems including the linearized compressible Euler and linearized compressible Navier-Stokes equations, the desired transformation is induced by a special inner product, termed the "symmetry inner product". Attention is then turned to nonlinear conservation laws. A new transformation and corresponding energy-based inner product for the full nonlinear compressible Navier
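
    The POD step at the heart of such ROMs is a singular value decomposition of a snapshot matrix, as in the generic sketch below (this is not the report's energy-stable formulation, which additionally builds the special inner product described above).

        import numpy as np

        def pod_basis(snapshots, r):
            """snapshots: (n_dof, n_snap) matrix of high-fidelity solution states."""
            U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
            return U[:, :r], s[:r]  # first r POD modes and their singular values

        # Galerkin projection of a linear full-order operator A onto the modes Phi:
        #   A_r = Phi.T @ A @ Phi
        # after which the reduced state evolves in r (rather than n_dof) dimensions.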

  18. The Impacts of Armoring Our Deltas: Mapping and Modeling Large-Scale Deltaplain Aggradation

    NASA Astrophysics Data System (ADS)

    Overeem, I.; Higgins, S.; Syvitski, J. P.; Kettner, A. J.; Brakenridge, R.

    2014-12-01

    Humans have hardened land-water boundaries in almost every delta they live on. Engineering includes stabilizing and embanking channels to protect from river floods, building dikes around islands and emerging bars to reclaim land, and putting up sea walls to protect from waves and storm surges. These measures aim to reduce the exchange of water and sediment between the distributary delta channel network and the adjacent deltaplain. To first order, armoring of deltas results in net elevation loss of the floodplain, due to subsidence, compaction and reduced aggradation. Here, we ask what are the mechanisms of aggradation in 'armored' deltas? How do aggradation patterns compare to more natural depositional patterns? We analyze 2-week period aggregates of MODIS satellite data from 2000 onwards to map inundation patterns due to irrigation, river floods and storm surges for selected deltas. Using a MODIS band-ratio, we assess relative concentrations of suspended sediment in stagnant water on the floodplains. In addition, we use a simple approach to route sediment through the delta distributary network based on the relative channel geometries. A depositional process model then calculates cross-channel sediment flux as an exponential decay function, and determines sediment deposition over inundated areas. Stacked inundation maps show vast areas of deltaplains have flooded between 2000-2014, despite armoring channels with dikes, and coastlines with seawalls. Flooding is caused by overtopping of levees and more rarely by breaching and in those latter cases the flooded areas are often locally constrained. In Asian deltas, rice paddy irrigation with floodwater can be mapped even in the more distal floodplain. Our model predicts that inundated areas still receive significant amounts of fresh sediment, but that the pattern is more variable than in natural systems. Sparse in-situ observations of floodplain aggradation rates and storm surge deposits corroborate high, but localized

  19. Model-Data Fusion and Adaptive Sensing for Large Scale Systems: Applications to Atmospheric Release Incidents

    NASA Astrophysics Data System (ADS)

    Madankan, Reza

    All across the world, toxic material clouds emitted from sources such as industrial plants, vehicular traffic, and volcanic eruptions can contain chemical, biological or radiological material. With the growing fear of natural, accidental or deliberate release of toxic agents, there is tremendous interest in precise source characterization and in generating accurate hazard maps of toxic material dispersion for appropriate disaster management. In this dissertation, an end-to-end framework has been developed for probabilistic source characterization and forecasting of atmospheric release incidents. The proposed methodology consists of three major components which are combined to perform the task of source characterization and forecasting: uncertainty quantification, optimal information collection, and data assimilation. Precise approximation of prior statistics is crucial to ensure the performance of the source characterization process. In this work, an efficient quadrature-based method has been utilized for quantification of uncertainty in plume dispersion models that are subject to uncertain source parameters. In addition, a fast and accurate approach is utilized for the approximation of probabilistic hazard maps, based on a combination of polynomial chaos theory and the method of quadrature points. Besides precise quantification of uncertainty, having useful measurement data is also highly important to guarantee accurate source parameter estimation. The performance of source characterization is highly affected by the placement of the sensors used for data observation. Hence, a general framework has been developed for the optimal allocation of data observation sensors to improve the performance of the source characterization process. The key goal of this framework is to optimally locate a set of mobile sensors such that measurement of better data is guaranteed. This is achieved by maximizing the mutual information between model predictions

  20. Applications of the halo model to large scale structure measurements of the Luminous Red Galaxies

    NASA Astrophysics Data System (ADS)

    Reid, Beth Ann

    The power spectrum of density fluctuations in the evolved universe provides constraints on cosmological parameters that are complementary to cosmic microwave background and other astronomical probes. The Sloan Digital Sky Survey (SDSS) Luminous Red Galaxy (LRG) sample probes a volume of ~3 (Gpc)3, and systematic errors in modeling the nonlinearities limit our ability to extract information on the shape of the linear power spectrum. There are three main effects that distort the observed power spectrum from the linear power spectrum: nonlinear gravitational evolution, redshift space distortions, and a nonlinear relation between the galaxy density field and the underlying matter density field. In this thesis we introduce a new method to mitigate the latter two distortions and rely on carefully tuned N-body simulations to model the first. In Chapter 2 we present the technique 'Counts-in-Cylinders' (CiC) and use it to measure the multiplicity function of groups of LRGs in SDSS. We use the Halo Occupation Distribution description of the galaxy-matter mapping and N-body simulations to connect this observation with constraints on the distribution of LRGs in dark matter halos. In Chapter 3 we study the effects of resolution on statistics relating to both the large- and small-scale distributions and motions of matter and dark matter halos. We combine these results to produce a large set of high-quality mock LRG catalogs that reproduce the higher-order statistics in the density field probed by the CiC technique. Using these catalogs we present a detailed analysis of the method used in Tegmark et al. (2006) to estimate the LRG power spectrum, and find that the large nonlinear correction necessary for their analysis is degenerate with changes in the linear spectrum we wish to constrain. We show that the CiC group-finding method in Chapter 2 can be used to reconstruct the underlying halo density field. The power spectrum of this field has only percent-level deviations from

  1. Large-scale erosion processes and parameters derived from a modeling of the Messinian salinity crisis

    NASA Astrophysics Data System (ADS)

    Loget, N.; Davy, P.; van den Driessche, J.

    2003-04-01

    The closing of the Gibraltar strait during the Messinian produced a sea-level drop of about 1500 m in less than half a million years. This certainly constitutes one of the largest perturbations of erosion systems on Earth, whose analysis in terms of form and dynamics should bring invaluable constraints on erosion processes and parameters. In addition to a precise chronology of the bulk crisis, the main data consist of the reconstruction of paleocanyons that were eroded during the sea-level drop and refilled during the sea-level rise. The Rhone's canyon is certainly the best documented, with numerous seismic lines and boreholes. We now have a reasonable estimation of the canyon profile from its outlet to the Bresse graben, more than 500 km upslope. Sparse data are also available in the Languedoc region, in the Pyrenees, for some drainage basins of the Var-Ligure coast, and in the Gulf of Valencia. A particularity of this erosion phase was that it propagated very far inland along the main rivers, but in a very localized way, in the sense that hillslopes and upslope drainage basins were barely affected. All these data were compiled in a database that we used to constrain erosion processes. We assume that the erosion law belongs to the classical power-law framework, where the erosion flux e depends on the local slope s and water flow q, such that e = k q^m s^n - ec, where k and ec are two constants which depend on material strength properties, and m and n are two exponents which are found to play an important role in the time-length scaling. The transfer model must be completed by a transfer or deposition term that we assume to be controlled by a deposition length Ld. If Ld is very small, the model reduces to the transport-limited case, where the height variation is proportional to the gradient of the erosion flux e. In contrast, if Ld is very large, rivers can carry all the eroded sediment out; this process is usually called detachment-limited. We simulate the erosion dynamics induced by the Messinian sea
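
    Written out in code, the incision law above (with the threshold clipped at zero, an assumption for illustration; units and parameter values are arbitrary) is evaluated per river node as:

        import numpy as np

        def erosion_rate(q, s, k, m, n, ec):
            # e = k * q**m * s**n - ec, with no erosion below the threshold
            return np.maximum(k * q**m * s**n - ec, 0.0)

        q = np.array([10.0, 100.0, 1000.0])  # water flux along a profile
        s = np.array([0.01, 0.005, 0.002])   # local slope
        print(erosion_rate(q, s, k=1e-4, m=0.5, n=1.0, ec=1e-6))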

  2. The role of soil hydrologic heterogeneity for modeling large-scale bioremediation protocols.

    NASA Astrophysics Data System (ADS)

    Romano, N.; Palladino, M.; Speranza, G.; Di Fiore, P.; Sica, B.; Nasta, P.

    2014-12-01

    The major aim of the EU-Life+ project EcoRemed (Implementation of eco-compatible protocols for agricultural soil remediation in Litorale Domizio-Agro Aversano NIPS) is the implementation of operating protocols for agriculture-based bioremediation of contaminated croplands, with the pollutant-extracting plants then used as biomass for renewable energy production. The study area is the National Interest Priority Site (NIPS) called Litorale Domitio-Agro Aversano, which is located in the Campania Region (Southern Italy) and has an extent of about 200,000 hectares. In this area, high levels of patchy soil contamination are mostly due to legal and illegal disposal of industrial and municipal wastes, with hazardous consequences also for the quality of the groundwater. An accurate determination of the soil hydraulic properties to characterize the landscape heterogeneity of the study area plays a key role within the general framework of this project, especially in view of the use of various modeling tools for water flow and solute transport simulations and to predict the effectiveness of the adopted bioremediation protocols. The present contribution is part of an ongoing study in which we are investigating the following research questions: a) Which spatial aggregation schemes are more suitable for upscaling from point to block support? b) Which effective soil hydrologic characterization schemes better simulate the average behavior of larger-scale phytoremediation processes? c) Allowing also for questions a) and b), how does the spatial variability of soil hydraulic properties affect the variability of plant responses to hydro-meteorological forcing?

  3. The periglacial engine of mountain erosion - Part 2: Modelling large-scale landscape evolution

    NASA Astrophysics Data System (ADS)

    Egholm, D. L.; Andersen, J. L.; Knudsen, M. F.; Jansen, J. D.; Nielsen, S. B.

    2015-10-01

    There is growing recognition of strong periglacial control on bedrock erosion in mountain landscapes, including the shaping of low-relief surfaces at high elevations (summit flats). But, as yet, the hypothesis that frost action was crucial to the assumed Late Cenozoic rise in erosion rates remains compelling but untested. Here we present a landscape evolution model incorporating two key periglacial processes - regolith production via frost cracking and sediment transport via frost creep - which together are harnessed to variations in temperature and the evolving thickness of sediment cover. Our computational experiments time-integrate the contribution of frost action to shaping mountain topography over million-year timescales, with the primary and highly reproducible outcome being the development of flattish or gently convex summit flats. A simple scaling of temperature to marine δ18O records spanning the past 14 Myr indicates that the highest summit flats in mid- to high-latitude mountains may have formed via frost action prior to the Quaternary. We suggest that deep cooling in the Quaternary accelerated mechanical weathering globally by significantly expanding the area subject to frost. Further, the inclusion of subglacial erosion alongside periglacial processes in our computational experiments points to alpine glaciers increasing the long-term efficiency of frost-driven erosion by steepening hillslopes.
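
    A highly schematic illustration of the two modeled processes, assuming a 1D hillslope, a toy Gaussian frost-cracking window, and invented parameter values (the study's actual process laws and surface/bedrock bookkeeping are more elaborate):

```python
import numpy as np

dx, dt = 10.0, 1.0                        # m, yr
x = np.arange(0.0, 2000.0, dx)
z = 1000.0 - 0.0002 * (x - 1000.0)**2     # initial ridge, crest at 1000 m
H = np.zeros_like(x)                      # regolith thickness [m]

def frost_intensity(T):
    """Toy frost-cracking window peaking near -5 degC (invented shape)."""
    return np.exp(-0.5 * ((T + 5.0) / 3.0)**2)

for _ in range(5000):
    T = 2.0 - 0.0065 * z                     # mean annual temperature vs elevation
    fi = frost_intensity(T)
    P = 1e-4 * fi * np.exp(-H / 2.0)         # regolith production, damped by cover
    kappa = 1e-2 * fi * np.minimum(H, 1.0)   # frost-creep diffusivity
    flux = -kappa * np.gradient(z, dx)       # downslope regolith flux
    H = np.maximum(H + dt * (P - np.gradient(flux, dx)), 0.0)
    z -= dt * P                              # surface lowered as bedrock weathers
```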

  4. A simplified model of ADAF with the jet driven by the large-scale magnetic field

    NASA Astrophysics Data System (ADS)

    Li, Yang; Gan, Zhao-Ming; Wang, Ding-Xiong

    2010-01-01

    We propose a simplified model of the outflow/jet driven by the Blandford-Payne (BP) process from advection-dominated accretion flows (ADAF) and derive the expressions for the BP power and disk luminosity based on the conservation laws of mass, angular momentum and energy. We fit the 2-10 keV luminosity and kinetic power of 15 active galactic nuclei (AGNs) of sub-Eddington luminosity. It is found that there exists an anti-correlation between the accretion rate and the advection parameter, which could be used to explain the correlation between the Eddington-scaled kinetic power and bolometric luminosity of the 15 sources. In addition, the Ledlow-Owen relation for the FR I/II dichotomy is re-expressed in a parameter space consisting of the logarithm of the dimensionless accretion rate versus that of the BH mass. It turns out that the FR I/II dichotomy is determined mainly by the dimensionless accretion rate, being insensitive to the BH mass. The dividing accretion rate is less than the critical accretion rate for ADAFs, suggesting that FR I sources are all in the ADAF state.

  5. Applications of the Halo Model to Large Scale Structure Measurements of the Luminous Red Galaxies

    NASA Astrophysics Data System (ADS)

    Reid, Beth A.; Spergel, D. N.; Bode, P.

    2009-01-01

    The power spectrum of density fluctuations in the evolved universe provides constraints on cosmological parameters that are complementary to the CMB and other astronomical probes. The Sloan Digital Sky Survey (SDSS) Luminous Red Galaxy (LRG) sample probes a volume of 3 Gpc³, and systematic errors in modeling the nonlinearities limit our ability to extract information on the shape of the linear power spectrum. In Chapter 2 of this dissertation we present the technique `Counts-in-Cylinders' (CiC) and use it to measure the multiplicity function of groups of LRGs in SDSS. We use the Halo Occupation Distribution description of the galaxy-matter mapping and N-body simulations to connect this observation with constraints on the distribution of LRGs in dark matter halos. In Chapter 3 we study the effects of resolution on statistics relating to both the large and small scale distributions and motions of matter and dark matter halos. We combine these results to produce a large set of high quality mock LRG catalogs that reproduce the higher order statistics in the density field probed by the CiC technique. Using these catalogs we present a detailed analysis of the method used in Tegmark et al. (2006) to estimate the LRG power spectrum, and find that the large nonlinear correction necessary for their analysis is degenerate with changes in the linear spectrum we wish to constrain. We show that the CiC group-finding method in Chapter 2 can be used to reconstruct the underlying halo density field. The power spectrum of this field has only percent-level deviations from the underlying matter power spectrum, and will therefore provide tighter constraints on cosmological parameters. Techniques presented in this dissertation will be useful for final analysis of the SDSS LRGs and upcoming surveys probing much larger volumes. B.A.R. gratefully acknowledges support from the NSF Graduate Research Fellowship.
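
    For context, the basic statistic involved can be sketched as follows: a toy measurement of the power spectrum of a gridded overdensity field with numpy FFTs (normalization conventions vary; the field and box size here are placeholders, not the SDSS analysis):

```python
import numpy as np

def power_spectrum(delta, boxsize):
    """Spherically averaged power spectrum of an overdensity grid."""
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta) * (boxsize / n)**3
    power = np.abs(delta_k)**2 / boxsize**3
    # Build |k| for every mode of the rfft grid.
    k1 = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    k3 = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k1, k1, k3, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    # Average P(k) in linear k bins.
    bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), 30)
    idx = np.digitize(kmag.ravel(), bins)
    pk = np.array([power.ravel()[idx == i].mean() if (idx == i).any() else np.nan
                   for i in range(1, len(bins))])
    return 0.5 * (bins[1:] + bins[:-1]), pk

# Toy Gaussian random field on a 64^3 grid in a 1000 Mpc box.
rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64, 64))
k, pk = power_spectrum(delta, boxsize=1000.0)
```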

  6. Analysis of large-scale tablet coating: Modeling, simulation and experiments.

    PubMed

    Boehling, P; Toschkoff, G; Knop, K; Kleinebudde, P; Just, S; Funke, A; Rehbaum, H; Khinast, J G

    2016-07-30

    This work concerns a tablet coating process in an industrial-scale drum coater. We set up a full-scale Design of Simulation Experiment (DoSE) using the Discrete Element Method (DEM) to investigate the influence of various process parameters (the spray rate, the number of nozzles, the rotation rate and the drum load) on the coefficient of inter-tablet coating variation (cv,inter). The coater was filled with up to 290 kg of material, which is equivalent to 1,028,369 tablets. To mimic the tablet shape, the glued-sphere approach was followed, and each modeled tablet consisted of eight spheres. We simulated the process via the eXtended Particle System (XPS), proving that it is possible to accurately simulate the tablet coating process on the industrial scale. The process time required to reach a uniform tablet coating was extrapolated from the simulated data and was in good agreement with experimental results. The results are provided at various levels of detail, from a thorough investigation of the influence that the process parameters have on cv,inter and the number of tablets that visit the spray zone during the simulated 90 s, to the velocity in the spray zone and the spray and bed cycle times. It was found that increasing the number of nozzles and decreasing the spray rate had the greatest influence on cv,inter. Although increasing the drum load and the rotation rate increased the tablet velocity, they did not have a relevant influence on cv,inter or the process time. PMID:26709079
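
    The response variable here is simple to compute: cv,inter is the relative standard deviation of per-tablet coating mass. A sketch with hypothetical simulated masses (the distribution and its parameters are invented):

```python
import numpy as np

def cv_inter(coating_mass):
    """Coefficient of inter-tablet coating variation (std/mean)."""
    m = np.asarray(coating_mass, dtype=float)
    return m.std(ddof=1) / m.mean()

# Hypothetical per-tablet coating masses from a DEM spray-zone model [mg].
rng = np.random.default_rng(1)
masses = rng.gamma(shape=20.0, scale=0.05, size=100_000)
print(f"cv_inter = {cv_inter(masses):.3f}")
```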

  7. A consistent formulation of the anisotropic stress tensor for use in models of the large-scale ocean circulation

    SciTech Connect

    Wajsowicz, R.C. )

    1993-04-01

    Subgrid-scale dissipation of momentum in numerical models of the large-scale ocean circulation is commonly parameterized as a viscous diffusion resulting from the divergence of a stress tensor of the form σ = A∇u. The form of the fourth-order coefficient tensor A is derived for anisotropic dissipation with an axis of rotational symmetry. Sufficient conditions for A to be positive definite for incompressible flows, so guaranteeing a net positive dissipation of kinetic energy, are found. The divergence of the stress tensor, in Cartesian and spherical polar coordinates, is given for A with constant and spatially varying elements. A consistent form of A and σ for use in models based on the Arakawa B- and C-grids is also derived. 16 refs.

  8. Wind tunnel investigation of aerodynamic loads on a large-scale externally blown flap model and comparison with theory

    NASA Technical Reports Server (NTRS)

    Perry, B., III; Greene, G. C.

    1975-01-01

    Results from a wind-tunnel investigation of a large-scale externally blown flap model are presented. The model was equipped with four turbofan engines, a triple-slotted flap system, and a T-tail. The wing had a quarter-chord sweep of 25 deg, an aspect ratio of 7.28, and a taper ratio of 0.4. Aerodynamic loads and load distributions were determined from a total of 564 static pressure orifices located on the upper and lower surfaces of the slat, wing, and flaps. Loads are presented for variations of angle of attack, engine thrust setting, and flap deflection angle. In addition, the experimental results are compared with analytical results calculated by using a potential flow analysis.

  9. Low-speed wind-tunnel investigation of a large-scale VTOL lift-fan transport model

    NASA Technical Reports Server (NTRS)

    Aoyagi, K.

    1979-01-01

    An investigation was conducted in the NASA-Ames 40- by 80-Foot Wind Tunnel to determine the aerodynamic characteristics of a large-scale VTOL lift-fan jet transport model. The model had two lift fans at the forward portion of the fuselage, a lift fan at each wing tip, and two lift/cruise fans at the aft portion of the fuselage. All fans were driven by tip turbines using T-58 gas generators. Results were obtained for several lift-fan exit vane deflections and lift/cruise fan thrust deflections at zero sideslip. Three-component longitudinal data are presented at several fan tip speed ratios. A limited amount of six-component data was obtained with asymmetric vane settings. All of the data were obtained without a horizontal tail. Downwash angles at a typical tail location are also presented.

  10. Aerodynamic characteristics of a large-scale model with a swept wing and a jet flap having an expandable duct

    NASA Technical Reports Server (NTRS)

    Aiken, T. N.; Aoyagi, K.; Falarski, M. D.

    1973-01-01

    The data from an investigation of the aerodynamic characteristics of the expandable duct-jet flap concept are presented. The investigation was made using a large-scale model in the Ames 40- by 80-foot Wind Tunnel. The expandable duct-jet flap concept uses a lower-surface split flap and an upper-surface Fowler flap to form an internal, variable-area cavity for the blowing air. Small amounts of blowing are used on the knee of the upper-surface flap and the knee of a short-chord, trailing-edge control flap. The bulk of the blowing is at the trailing edge. The flap could extend over the full span of the model wing or over the inboard part only, with blown ailerons outboard. The primary configurations tested were two flap angles, typical of takeoff and landing; symmetric control-flap deflections, primarily for improved landing performance; and asymmetric aileron and control-flap deflections, for lateral control.

  11. A Preliminary Model Study of the Large-Scale Seasonal Cycle in Bottom Pressure Over the Global Ocean

    NASA Technical Reports Server (NTRS)

    Ponte, Rui M.

    1998-01-01

    Output from the primitive equation model of Semtner and Chervin is used to examine the seasonal cycle in bottom pressure (Pb) over the global ocean. Effects of the volume-conserving formulation of the model on the calculation of Pb are considered. The estimated seasonal, large-scale Pb signals have amplitudes ranging from less than 1 cm over most of the deep ocean to several centimeters over shallow, boundary regions. Variability generally increases toward the western sides of the basins, and is also larger in some Southern Ocean regions. An oscillation between subtropical and higher latitudes in the North Pacific is clear. Comparison with barotropic simulations indicates that, on basin scales, seasonal Pb variability is related to barotropic dynamics and the seasonal cycle in Ekman pumping, and results from a small, net residual in mass divergence from the balance between Ekman and Sverdrup flows.

  12. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    NASA Astrophysics Data System (ADS)

    Bonne, François; Alamir, Mazen; Bonnay, Patrick

    2014-01-01

    In this paper, a physical method for obtaining control-oriented dynamical models of large-scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches designed from user experience, which are usually based on many independent PI controllers. This is particularly useful when cryoplants are subjected to large pulsed thermal loads, as expected in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large-scale helium refrigeration (especially those present on the 400 W @ 1.8 K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of the controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400 W @ 1.8 K (in the 400 W @ 4.4 K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  13. Physical control oriented model of large scale refrigerators to synthesize advanced control schemes. Design, validation, and first control results

    SciTech Connect

    Bonne, François; Bonnay, Patrick

    2014-01-29

    In this paper, a physical method for obtaining control-oriented dynamical models of large-scale cryogenic refrigerators is proposed, in order to synthesize model-based advanced control schemes. These schemes aim to replace the classical approaches designed from user experience, which are usually based on many independent PI controllers. This is particularly useful when cryoplants are subjected to large pulsed thermal loads, as expected in the cryogenic cooling systems of future fusion reactors such as the International Thermonuclear Experimental Reactor (ITER) or the Japan Torus-60 Super Advanced Fusion Experiment (JT-60SA). Advanced control schemes lead to better perturbation immunity and rejection, offering a safer utilization of cryoplants. The paper gives details on how basic components used in the field of large-scale helium refrigeration (especially those present on the 400 W @ 1.8 K helium test facility at CEA-Grenoble) are modeled and assembled to obtain the complete dynamic description of the controllable subsystems of the refrigerator (namely the Joule-Thomson Cycle, the Brayton Cycle, the Liquid Nitrogen Precooling Unit and the Warm Compression Station). The complete 400 W @ 1.8 K (in the 400 W @ 4.4 K configuration) helium test facility model is then validated against experimental data, and the optimal control of both the Joule-Thomson valve and the turbine valve is proposed to stabilize the plant under highly variable thermal loads. This work is partially supported through the European Fusion Development Agreement (EFDA) Goal Oriented Training Program, task agreement WP10-GOT-GIRO.

  14. Physics-based animation of large-scale splashing liquids, elastoplastic solids, and model-reduced flow

    NASA Astrophysics Data System (ADS)

    Gerszewski, Daniel James

    Physical simulation has become an essential tool in computer animation. As the use of visual effects increases, the need for simulating real-world materials increases. In this dissertation, we consider three problems in physics-based animation: large-scale splashing liquids, elastoplastic material simulation, and dimensionality reduction techniques for fluid simulation. Fluid simulation has been one of the greatest successes of physics-based animation, generating hundreds of research papers and a great many special effects over the last fifteen years. However, the animation of large-scale, splashing liquids remains challenging. We show that a novel combination of unilateral incompressibility, mass-full FLIP, and blurred boundaries is extremely well-suited to the animation of large-scale, violent, splashing liquids. Materials that incorporate both plastic and elastic deformations, also referred to as elastoplastic materials, are frequently encountered in everyday life. Methods for animating such common real-world materials are useful for effects practitioners and have been successfully employed in films. We describe a point-based method for animating elastoplastic materials. Our primary contribution is a simple method for computing the deformation gradient for each particle in the simulation. Given the deformation gradient, we can apply arbitrary constitutive models and compute the resulting elastic forces. Our method has two primary advantages: we do not store or compare to an initial rest configuration and we work directly with the deformation gradient. The first advantage avoids poor numerical conditioning and the second naturally leads to a multiplicative model of deformation appropriate for finite deformations. One of the most significant drawbacks of physics-based animation is that ever-higher fidelity leads to an explosion in the number of degrees of freedom. This problem leads us to the consideration of dimensionality reduction techniques. We present
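
    A sketch of the multiplicative bookkeeping such a method implies: each particle's deformation gradient is updated from an estimated local velocity gradient, with no rest configuration stored (the weighted least-squares velocity-gradient estimate here is generic, not necessarily the dissertation's):

```python
import numpy as np

def velocity_gradient(xi, vi, xs, vs, radius=2.0):
    """Weighted least-squares estimate of grad(v) at a particle from neighbors."""
    A = np.zeros((3, 3))   # sum of w * dv outer dx
    B = np.zeros((3, 3))   # sum of w * dx outer dx
    for xj, vj in zip(xs, vs):
        dx, dv = xj - xi, vj - vi
        w = max(0.0, 1.0 - np.dot(dx, dx) / radius**2)  # simple kernel weight
        A += w * np.outer(dv, dx)
        B += w * np.outer(dx, dx)
    return A @ np.linalg.inv(B)

# Multiplicative update: F <- (I + dt * grad v) F, no rest state needed.
rng = np.random.default_rng(2)
x = rng.uniform(-0.5, 0.5, size=(6, 3))       # current neighbor positions
v = x * np.array([0.1, 0.0, 0.0])             # stretching flow along x
F, dt = np.eye(3), 0.01
for _ in range(100):
    L = velocity_gradient(x[0], v[0], x[1:], v[1:])
    F = (np.eye(3) + dt * L) @ F              # per-particle deformation gradient
    x = x + dt * v                            # advect particles
    v = x * np.array([0.1, 0.0, 0.0])         # re-sample the analytic flow
```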

  15. Breaking Computational Barriers: Real-time Analysis and Optimization with Large-scale Nonlinear Models via Model Reduction

    SciTech Connect

    Carlberg, Kevin Thomas; Drohmann, Martin; Tuminaro, Raymond S.; Boggs, Paul T.; Ray, Jaideep; van Bloemen Waanders, Bart Gustaaf

    2014-10-01

    Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model errors. This
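
    This is not the report's GNAT method, but a minimal projection-based ROM in the same spirit (POD basis from snapshots, Galerkin projection of a linear system) may help fix ideas:

```python
import numpy as np

# Full-order linear dynamics dx/dt = A x, reduced via snapshot POD.
rng = np.random.default_rng(3)
n, r = 200, 10                       # full and reduced dimensions
A = -np.eye(n) + 0.01 * rng.normal(size=(n, n))

# Collect snapshots from a training trajectory (forward Euler).
x = rng.normal(size=n)
snapshots = []
for _ in range(500):
    x = x + 1e-3 * (A @ x)
    snapshots.append(x.copy())
S = np.array(snapshots).T            # n x n_snap snapshot matrix

U, _, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :r]                         # POD basis
A_r = V.T @ A @ V                    # Galerkin-projected operator

# Reduced simulation, then lift back to the full space.
q = V.T @ snapshots[0]
for _ in range(500):
    q = q + 1e-3 * (A_r @ q)
x_rom = V @ q
```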

  16. Large-scale Flood Simulation with Rainfall-Runoff-Inundation Model in the Chao Phraya River Basin

    NASA Astrophysics Data System (ADS)

    Sayama, Takahiro; Tatebe, Yuya; Tanaka, Shigenobu

    2013-04-01

    A large amount of rainfall during the 2011 monsoon season caused an unprecedented flood disaster in the Chao Phraya River basin in Thailand. When a large-scale flood occurs, it is very important to take appropriate emergency measures by holistically understanding the characteristics of the flooding based on available information and by predicting its possible development. This paper proposes a quick-response-type flood simulation that can be conducted during a severe flooding event. The hydrologic simulation model used in this study is designed to simulate river discharges and flood inundation simultaneously for an entire river basin with satellite-based rainfall and topographic information. The model is based on two-dimensional diffusive wave equations for rainfall-runoff and inundation calculations. The model takes into account the effects of lateral subsurface flow and vertical infiltration flow, since these two types of flow are also important processes. This paper presents prediction results obtained in mid-October 2011, when the flooding in Thailand was approaching its peak. Our scientific question is how well we can predict the possible development of a large-scale flooding event with limited information, and how much we can improve the prediction with more local information. In comparison with a satellite-based flood inundation map, the study found that the quick-response-type simulation (Lv1) was capable of capturing the peak flood inundation extent reasonably well as compared to the estimate based on satellite remote sensing. Our interpretation of the prediction was that the flooding might continue even until the end of November, which was also positively confirmed to some extent by the actual flooding status in late November. Nevertheless, the Lv1 simulation generally overestimated the peak water level. To address this overestimation, the input data were updated with additional local information (Lv2). Consequently, the simulation accuracy improved in the
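
    A bare-bones sketch of the kind of 2D diffusive-wave update such a model performs, with Manning friction on a uniform grid (all parameter values are placeholders, not the model's):

```python
import numpy as np

def diffusive_wave_step(h, z, rain, dx=500.0, dt=10.0, n_manning=0.05):
    """One explicit 2D diffusive-wave update of water depth h over terrain z."""
    eta = z + h                                   # water surface elevation [m]
    h_new = h + dt * rain                         # rainfall source term
    for axis in (0, 1):
        lo = [slice(None)] * 2; lo[axis] = slice(0, -1)
        hi = [slice(None)] * 2; hi[axis] = slice(1, None)
        slope = np.diff(eta, axis=axis) / dx      # face slope toward +axis
        h_face = np.maximum(h[tuple(lo)], h[tuple(hi)])  # depth at the face
        # Manning-type unit flux, directed down the water-surface gradient.
        q = -np.sign(slope) * h_face**(5.0 / 3.0) * np.sqrt(np.abs(slope)) / n_manning
        pad_in = [(0, 0), (0, 0)]; pad_in[axis] = (1, 0)
        pad_out = [(0, 0), (0, 0)]; pad_out[axis] = (0, 1)
        # Positive q moves water toward +axis: each cell gains inflow, loses outflow.
        h_new = h_new + dt * (np.pad(q, pad_in) - np.pad(q, pad_out)) / dx
    return np.maximum(h_new, 0.0)

# Toy run: a south-sloping plain under steady rain.
z = np.add.outer(np.linspace(50.0, 0.0, 100), np.zeros(100))
h = np.zeros((100, 100))
for _ in range(100):
    h = diffusive_wave_step(h, z, rain=1e-5)
```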

  17. Building the repertoire of dispensable chromosome regions in Bacillus subtilis entails major refinement of cognate large-scale metabolic model

    PubMed Central

    Henry, Christopher S.; Zinner, Jenifer F.; Jolivet, Edmond; Cohoon, Matthew P.; Xia, Fangfang; Bidnenko, Vladimir; Ehrlich, S. Dusko; Stevens, Rick L.; Noirot, Philippe

    2013-01-01

    The nonessential regions in bacterial chromosomes are ill-defined due to incomplete functional information. Here, we establish a comprehensive repertoire of the genome regions that are dispensable for growth of Bacillus subtilis in a variety of media conditions. In complex medium, we attempted deletion of 157 individual regions ranging in size from 2 to 159 kb. A total of 146 deletions were successful in complex medium, whereas the remaining regions were subdivided to identify new essential genes (4) and coessential gene sets (7). Overall, our repertoire covers ∼76% of the genome. We screened for viability of mutant strains in rich defined medium and glucose minimal media. Experimental observations were compared with predictions by the iBsu1103 model, revealing discrepancies that led to numerous model changes, including the large-scale application of model reconciliation techniques. We ultimately produced the iBsu1103V2 model and generated predictions of metabolites that could restore the growth of unviable strains. These predictions were experimentally tested and demonstrated to be correct for 27 strains, validating the refinements made to the model. The iBsu1103V2 model has improved considerably at predicting loss of viability, and many insights gained from the model revisions have been integrated into the Model SEED to improve reconstruction of other microbial models. PMID:23109554
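
    The viability predictions rest on flux balance analysis; a toy FBA instance on a made-up three-reaction network, solved with scipy linear programming (not the iBsu1103 reconstruction):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: A_ext -> A (uptake), A -> B (conversion), B -> biomass (growth).
# Stoichiometric matrix S (metabolites x reactions); steady state requires S v = 0.
S = np.array([
    [1, -1,  0],   # metabolite A
    [0,  1, -1],   # metabolite B
])
bounds = [(0, 10), (0, 10), (0, None)]    # uptake capped at 10 units

# Maximize growth (v3), i.e., minimize -v3, subject to S v = 0.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("growth flux:", res.x[2])           # 10.0, limited by uptake

# "Deleting" reaction 2 forces its flux to zero: predicted loss of viability.
ko = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2),
             bounds=[(0, 10), (0, 0), (0, None)])
print("knockout growth flux:", ko.x[2])   # 0.0
```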

  18. Beyond single syllables: large-scale modeling of reading aloud with the Connectionist Dual Process (CDP++) model.

    PubMed

    Perry, Conrad; Ziegler, Johannes C; Zorzi, Marco

    2010-09-01

    Most words in English have more than one syllable, yet the most influential computational models of reading aloud are restricted to processing monosyllabic words. Here, we present CDP++, a new version of the Connectionist Dual Process model (Perry, Ziegler, & Zorzi, 2007). CDP++ is able to simulate the reading aloud of mono- and disyllabic words and nonwords, and learns to assign stress in exactly the same way as it learns to associate graphemes with phonemes. CDP++ is able to simulate the monosyllabic benchmark effects its predecessor could, and therefore shows full backwards compatibility. CDP++ also accounts for a number of novel effects specific to disyllabic words, including the effects of stress regularity and syllable number. In terms of database performance, CDP++ accounts for over 49% of the reaction time variance on items selected from the English Lexicon Project, a very large database of several thousand words. With its lexicon of over 32,000 words, CDP++ is therefore a notable example of the successful scaling-up of a connectionist model to a size that more realistically approximates the human lexical system. PMID:20510406

  19. The integration of large-scale neural network modeling and functional brain imaging in speech motor control

    PubMed Central

    Golfinopoulos, E.; Tourville, J.A.; Guenther, F.H.

    2009-01-01

    Speech production demands a number of integrated processing stages. The system must encode the speech motor programs that command movement trajectories of the articulators and monitor transient spatiotemporal variations in auditory and somatosensory feedback. Early models of this system proposed that independent neural regions perform specialized speech processes. As technology advanced, neuroimaging data revealed that the dynamic sensorimotor processes of speech require a distributed set of interacting neural regions. The DIVA (Directions into Velocities of Articulators) neurocomputational model elaborates on early theories, integrating existing data and contemporary ideologies, to provide a mechanistic account of acoustic, kinematic, and functional magnetic resonance imaging (fMRI) data on speech acquisition and production. This large-scale neural network model is composed of several interconnected components whose cell activities and synaptic weight strengths are governed by differential equations. Cells in the model are associated with neuroanatomical substrates and have been mapped to locations in Montreal Neurological Institute stereotactic space, providing a means to compare simulated and empirical fMRI data. The DIVA model also provides a computational and neurophysiological framework within which to interpret and organize research on speech acquisition and production in fluent and dysfluent child and adult speakers. The purpose of this review article is to demonstrate how the DIVA model is used to motivate and guide functional imaging studies. We describe how model predictions are evaluated using voxel-based, region-of-interest-based parametric analyses and inter-regional effective connectivity modeling of fMRI data. PMID:19837177

  20. Stringent restriction from the growth of large-scale structure on apparent acceleration in inhomogeneous cosmological models.

    PubMed

    Ishak, Mustapha; Peel, Austin; Troxel, M A

    2013-12-20

    Probes of cosmic expansion constitute the main basis for arguments to support or refute a possible apparent acceleration due to different expansion rates in the Universe as described by inhomogeneous cosmological models. We present in this Letter a separate argument based on results from an analysis of the growth rate of large-scale structure in the Universe as modeled by the inhomogeneous cosmological models of Szekeres. We use the models with no assumptions of spherical or axial symmetry. We find that while the Szekeres models can fit the observed expansion history very well without a Λ, they fail to produce the observed late-time suppression in the growth unless Λ is added to the dynamics. A simultaneous fit to the supernova and growth factor data shows that the cold dark matter model with a cosmological constant (ΛCDM) provides consistency with the data at a confidence level of 99.65%, while the Szekeres model without Λ achieves only a 60.46% level. When the data sets are considered separately, the Szekeres model with no Λ fits the supernova data as well as ΛCDM does, but provides a very poor fit to the growth data, with only a 31.31% consistency level compared to 99.99% for ΛCDM. This absence of late-time growth suppression in inhomogeneous models without a Λ is reinforced by a physical explanation. PMID:24483736
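
    For the ΛCDM reference point, the linear growth factor D(a) obeys the standard equation D'' + (3/a + E'/E) D' - 3 Ωm D / (2 a^5 E^2) = 0 with E(a) = H(a)/H0; a sketch integrating it with illustrative (not the Letter's fitted) parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

Om, OL = 0.3, 0.7                     # illustrative flat LCDM parameters

def E(a):
    return np.sqrt(Om / a**3 + OL)

def dEda(a):
    return -1.5 * Om / (a**4 * E(a))

def rhs(a, y):
    D, dD = y
    d2D = -(3.0 / a + dEda(a) / E(a)) * dD + 1.5 * Om * D / (a**5 * E(a)**2)
    return [dD, d2D]

# Start deep in matter domination, where D ~ a is the growing mode.
a0 = 1e-3
sol = solve_ivp(rhs, [a0, 1.0], [a0, 1.0], rtol=1e-8)
D = sol.y[0] / sol.y[0][-1]           # growth factor normalized to D(a=1) = 1
print("late-time suppression vs pure matter:", sol.y[0][-1])
```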

  1. Large Scale Earth's Bow Shock with Northern IMF as Simulated by PIC Code in Parallel with MHD Model

    NASA Astrophysics Data System (ADS)

    Baraka, Suleiman

    2016-06-01

    In this paper, we propose a 3D kinetic model (particle-in-cell, PIC) for the description of the large-scale Earth's bow shock. The proposed version is stable and does not require huge or extensive computer resources. Because PIC simulations work with scaled plasma and field parameters, we also propose to validate our code by comparing its results with the available MHD simulations under the same scaled solar wind (SW) and interplanetary magnetic field (IMF) conditions. We report new results from the two models. In both codes the Earth's bow shock position is found to be ≈14.8 R_E along the Sun-Earth line, and ≈29 R_E on the dusk side. Those findings are consistent with past in situ observations. Both simulations reproduce the theoretical jump conditions at the shock. However, the PIC code density and temperature distributions are inflated and slightly shifted sunward when compared to the MHD results. Kinetic electron motions and reflected ions upstream may cause this sunward shift. Species distributions in the foreshock region are depicted within the transition of the shock (measured ≈2 c/ω_pi for Θ_Bn = 90° and M_MS = 4.7) and in the downstream. The size of the foot jump in the magnetic field at the shock is measured to be 1.7 c/ω_pi. In the foreshock region, the thermal velocity is found to be 213 km s⁻¹ at 15 R_E, and 63 km s⁻¹ at 12 R_E (magnetosheath region). Despite the large cell size of the current version of the PIC code, it can capture the macrostructure of planetary magnetospheres in a very short time, so it can be used for pedagogical test purposes. It is also complementary to MHD in deepening our understanding of the large-scale magnetosphere.
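
    The particle push at the core of most PIC codes is the Boris scheme; a single-particle sketch with solar-wind-like field magnitudes (the paper's implementation details are not specified here):

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """Advance a particle velocity one step with the Boris rotation scheme."""
    qmdt2 = 0.5 * q_over_m * dt
    v_minus = v + qmdt2 * E                  # first half electric kick
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    return v_plus + qmdt2 * E                # second half electric kick

# Proton gyrating in a uniform nT-scale magnetic field.
v = np.array([1e5, 0.0, 0.0])                # m/s
B = np.array([0.0, 0.0, 1e-8])               # T
E = np.zeros(3)
q_over_m = 9.58e7                            # C/kg for a proton
for _ in range(1000):
    v = boris_push(v, E, B, q_over_m, dt=0.01)
```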

  2. Analyzing large-scale conservation interventions with Bayesian hierarchical models: a case study of supplementing threatened Pacific salmon

    PubMed Central

    Scheuerell, Mark D; Buhle, Eric R; Semmens, Brice X; Ford, Michael J; Cooney, Tom; Carmichael, Richard W

    2015-01-01

    Myriad human activities increasingly threaten the existence of many species. A variety of conservation interventions such as habitat restoration, protected areas, and captive breeding have been used to prevent extinctions. Evaluating the effectiveness of these interventions requires appropriate statistical methods, given the quantity and quality of available data. Historically, analysis of variance has been used with some form of predetermined before-after control-impact design to estimate the effects of large-scale experiments or conservation interventions. However, ad hoc retrospective study designs or the presence of random effects at multiple scales may preclude the use of these tools. We evaluated the effects of a large-scale supplementation program on the density of adult Chinook salmon Oncorhynchus tshawytscha from the Snake River basin in the northwestern United States currently listed under the U.S. Endangered Species Act. We analyzed 43 years of data from 22 populations, accounting for random effects across time and space using a form of Bayesian hierarchical time-series model common in analyses of financial markets. We found that varying degrees of supplementation over a period of 25 years increased the density of natural-origin adults, on average, by 0–8% relative to nonsupplementation years. Thirty-nine of the 43 year effects were at least two times larger in magnitude than the mean supplementation effect, suggesting common environmental variables play a more important role in driving interannual variability in adult density. Additional residual variation in density varied considerably across the region, but there was no systematic difference between supplemented and reference populations. Our results demonstrate the power of hierarchical Bayesian models to detect the diffuse effects of management interventions and to quantitatively describe the variability of intervention success. Nevertheless, our study could not address whether ecological
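
    A schematic of the random-effects structure such a model encodes, simulated and fit crudely with two-way de-meaning rather than a full Bayesian hierarchy (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(4)
n_pop, n_yr = 22, 43
year_eff = rng.normal(0.0, 0.5, size=n_yr)     # shared environmental year effects
pop_eff = rng.normal(0.0, 0.3, size=n_pop)     # population intercepts
beta = 0.04                                    # small supplementation effect
x = rng.integers(0, 2, size=(n_pop, n_yr))     # supplementation indicator

# log adult density: population + year + supplementation + noise
y = (pop_eff[:, None] + year_eff[None, :] + beta * x
     + rng.normal(0.0, 0.2, size=(n_pop, n_yr)))

# Crude two-way de-meaning estimate of the supplementation effect.
y_d = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0, keepdims=True) + y.mean()
x_d = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + x.mean()
beta_hat = (x_d * y_d).sum() / (x_d**2).sum()
print(f"true beta = {beta}, estimate = {beta_hat:.3f}")
```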

  3. Analyzing large-scale conservation interventions with Bayesian hierarchical models: a case study of supplementing threatened Pacific salmon.

    PubMed

    Scheuerell, Mark D; Buhle, Eric R; Semmens, Brice X; Ford, Michael J; Cooney, Tom; Carmichael, Richard W

    2015-05-01

    Myriad human activities increasingly threaten the existence of many species. A variety of conservation interventions such as habitat restoration, protected areas, and captive breeding have been used to prevent extinctions. Evaluating the effectiveness of these interventions requires appropriate statistical methods, given the quantity and quality of available data. Historically, analysis of variance has been used with some form of predetermined before-after control-impact design to estimate the effects of large-scale experiments or conservation interventions. However, ad hoc retrospective study designs or the presence of random effects at multiple scales may preclude the use of these tools. We evaluated the effects of a large-scale supplementation program on the density of adult Chinook salmon Oncorhynchus tshawytscha from the Snake River basin in the northwestern United States currently listed under the U.S. Endangered Species Act. We analyzed 43 years of data from 22 populations, accounting for random effects across time and space using a form of Bayesian hierarchical time-series model common in analyses of financial markets. We found that varying degrees of supplementation over a period of 25 years increased the density of natural-origin adults, on average, by 0-8% relative to nonsupplementation years. Thirty-nine of the 43 year effects were at least two times larger in magnitude than the mean supplementation effect, suggesting common environmental variables play a more important role in driving interannual variability in adult density. Additional residual variation in density varied considerably across the region, but there was no systematic difference between supplemented and reference populations. Our results demonstrate the power of hierarchical Bayesian models to detect the diffuse effects of management interventions and to quantitatively describe the variability of intervention success. Nevertheless, our study could not address whether ecological factors

  4. Contrasting non-local effects of shoreline stabilization methods in a model of large-scale coastline morphodynamics

    NASA Astrophysics Data System (ADS)

    Ells, K. D.; Murray, A.

    2011-12-01

    Advances in the understanding of the wave-angle dependence of large-scale sandy coastline evolution have allowed exploratory modeling investigations into the emergence of large-scale coastline features such as sandwaves, capes, and spits; the possible responses of these complex coastline shapes to changing wave climates; and the dynamic coupling of natural coastal processes with economic decisions for shoreline stabilization. Recent numerical-model experiments found that beach nourishment on a complex-shaped coastline can significantly alter rates of shoreline change on spatial scales commensurate with the alongshore distance of adjacent features (up to 100 km). While the effect of beach nourishment is to fix a given shoreline position while maintaining a saturated sediment flux locally, hard structured stabilization methods (e.g. seawalls, revetments, or groynes) tend to reduce local alongshore fluxes of sediment. In long-term numerical experiments (decades to centuries), the effects of local stabilization propagate both progressively alongshore and through a non-local mechanism (wave shadowing). Comparing these two fundamentally different methods of shoreline stabilization on various locations along a cuspate cape coastline, we find that both the local and regional responses to hard structures greatly contrast those of beach nourishment. Sustained nourishment near the tip of a cape tends to extend the cape both seaward and in the direction of alongshore flux, increasing the effect that wave shadowing would have otherwise had on distant shorelines, leading to a negative (landward) perturbation to an adjacent cape. A hard structure at the same location, however, completely fixes the cape's original location, decreasing the shadowing effect and resulting in a positive (seaward) perturbation to the downdrift cape. Recent extensions of this work examine how different stabilization methods affect long-term coastline morphodynamics on other coastline types, starting
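
    Models of this class evolve the shoreline from gradients in alongshore sediment flux driven by the relative wave angle; a one-line-model sketch with an Ashton-Murray-style flux law (coefficients illustrative, low-angle regime only):

```python
import numpy as np

def shoreline_step(y, dx, wave_angle, K=1e5, D=10.0, dt=1e-4):
    """One explicit step of a one-line shoreline model, y(x) = shoreline position."""
    theta = np.arctan(np.gradient(y, dx))           # local shoreline orientation
    phi = np.clip(wave_angle - theta, -1.4, 1.4)    # relative wave angle [rad]
    Q = K * np.cos(phi)**1.2 * np.sin(phi)          # alongshore sediment flux
    return y - dt * np.gradient(Q, dx) / D          # sediment conservation

# Toy run: a sinusoidal sandwave under obliquely incident waves.
x = np.arange(0.0, 50e3, 500.0)
y = 100.0 * np.sin(2 * np.pi * x / 10e3)
for _ in range(1000):
    y = shoreline_step(y, dx=500.0, wave_angle=0.3)
```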

  5. Calibration of a large-scale groundwater flow model using GRACE data: a case study in the Qaidam Basin, China

    NASA Astrophysics Data System (ADS)

    Hu, Litang; Jiao, Jiu Jimmy

    2015-11-01

    Traditional numerical models usually use extensive observed hydraulic-head data as calibration targets. However, this calibration process is not applicable in remote areas with limited or no monitoring data. This study presents an approach to calibrate a large-scale groundwater flow model using the monthly Gravity Recovery and Climate Experiment (GRACE) satellite data, which have been available globally on a 1° spatial grid in the geographic coordinate system since 2002. A groundwater storage anomaly isolated from the terrestrial water storage (TWS) anomaly is converted into hydraulic head at the center of each grid cell, which is then used as observed data to calibrate a numerical model and estimate aquifer hydraulic conductivity. The aquifer system in the remote and hyperarid Qaidam Basin, China, is used as a case study to demonstrate the applicability of this approach. A groundwater model using FEFLOW is constructed for the Qaidam Basin, and the GRACE-derived groundwater storage anomaly over the period 2003-2012 is used to calibrate the model with an automatic parameter estimation method (PEST). The calibrated model is then run to output hydraulic heads at three sites where long-term hydraulic-head data are available. The reasonably good fit between the calculated and observed hydraulic heads, together with the very similar groundwater storage anomalies from the numerical model and the GRACE data, demonstrates that this approach is generally applicable in regions of groundwater data scarcity.
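
    The conversion at the heart of this approach is simple: divide the groundwater storage anomaly (TWS minus the non-groundwater components), expressed as equivalent water height, by a specific yield to obtain a head anomaly; a sketch with hypothetical numbers:

```python
import numpy as np

def gws_to_head_anomaly(tws_cm, sm_cm, specific_yield=0.1):
    """Groundwater storage anomaly (TWS minus soil moisture), as head [m]."""
    gws_cm = np.asarray(tws_cm) - np.asarray(sm_cm)   # isolate groundwater
    return gws_cm / 100.0 / specific_yield            # cm of water -> m of head

# Hypothetical monthly anomalies for one grid cell [cm equivalent water height].
tws = [2.1, 1.4, -0.3, -1.8]
sm = [0.9, 0.6, -0.1, -0.7]
print(gws_to_head_anomaly(tws, sm))
```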

  6. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  7. Large-Scale Multiphase Flow Modeling of Hydrocarbon Migration and Fluid Sequestration in Faulted Cenozoic Sedimentary Basins, Southern California

    NASA Astrophysics Data System (ADS)

    Jung, B.; Garven, G.; Boles, J. R.

    2011-12-01

    Major fault systems play a first-order role in controlling fluid migration in the Earth's crust, and also in the genesis and preservation of hydrocarbon reservoirs in young sedimentary basins undergoing deformation; understanding the geohydrology of faults is therefore essential for the successful exploration of energy resources. For actively deforming systems like the Santa Barbara Basin and the Los Angeles Basin, we have found it useful to develop computational geohydrologic models to study the various coupled and nonlinear processes, including relative permeability, anisotropy, heterogeneity, capillarity, pore pressure, and phase saturation, that affect hydrocarbon mobility within fault systems, and to search for the possible hydrogeologic conditions that enable the natural sequestration of prolific hydrocarbon reservoirs in these young basins. Subsurface geology, reservoir data (fluid pressure-temperature-chemistry), structural reconstructions, and seismic profiles provide important constraints on model geometry and parameter testing, and critical insight into how large-scale faults and aquifer networks influence the distribution and hydrodynamics of liquid- and gas-phase hydrocarbon migration. For example, pore pressure changes at a methane seepage site on the seafloor have been carefully analyzed to estimate large-scale fault permeability, which helps to constrain basin-scale natural gas migration models for the Santa Barbara Basin. We have developed our own 2-D multiphase finite element/finite IMPES numerical model, and successfully modeled hydrocarbon gas/liquid movement for intensely faulted and heterogeneous basin profiles of the Los Angeles Basin. Our simulations suggest that hydrocarbon reservoirs that are today aligned with the Newport-Inglewood Fault Zone were formed by massive hydrocarbon flows from deeply buried source beds in the central synclinal region during post-Miocene time. Fault permeability, capillarity

  8. The Interaction of Trade-Wind Clouds with the Large-Scale Flow in Observations and Models

    NASA Astrophysics Data System (ADS)

    Nuijens, L.; Medeiros, B.; Sandu, I.; Ahlgrimm, M.; Vogel, R.

    2015-12-01

    Most of the (sub)tropical oceans within the Hadley circulation experience either moderate subsidence or weak ascent. In these regions shallow trade-wind clouds prevail, whose vertical and spatial distribution have emerged as key factors determining the sensitivity of our climate in global climate models. A major unknown is how large the effect of these clouds should be. For instance, how sensitive is the radiation budget to variations in the distribution of trade-wind cloudiness in nature? How variable is trade-wind cloudiness in the first place? And do we understand the role of the large-scale flow in that variability? In this talk we present how space-borne remote sensing and reanalysis products, combined with ground-based remote sensing and high-resolution modeling at a representative location, start to answer these questions and help validate climate models. We show that across regimes or seasons with moderate subsidence and weak ascent, the cloud radiative effect and low-level cloudiness vary remarkably little. A negative feedback mechanism of convection on cloudiness near the lifting condensation level is used to explain this insensitivity. The main difference across regimes is a moderate change in cloudiness in the upper cloud layer, whereby the presence of a trade-wind inversion and strong winds appear to be prerequisites for larger cloudiness. However, most variance in cloudiness at that level takes place on shorter time scales, with an important role for the deepening of individual clouds and local patterns in vertical motion induced by convection itself, which can significantly alter the trade-wind layer structure. Trade-wind cloudiness in climate models in turn is overly sensitive to changes in the large-scale flow, because relationships that separate cloudiness across regimes in long-term climatologies, which have inspired parameterizations, also act on shorter time scales. We discuss how these findings relate to recent explanations for the spread in modeled

  9. Estimation of Van Genuchten and preferential flow parameters by inverse modelling for large scale vertical flow constructed wetlands

    NASA Astrophysics Data System (ADS)

    Maier, U.

    2009-04-01

    The background of this study is the attempt to predict the capability of vertical flow constructed wetlands for cleanup of contaminated groundwater. Constructed wetlands have been used for waste water treatment for decades, and they provide a promising cost-efficient tool for large-scale contaminated groundwater remediation. Vertical soil filters are one type of such constructed wetlands, where water flows vertically under alternating unsaturated conditions (intermittent load). The present study focuses on the modelling and calibration of unsaturated water flow at two different vertical soil filters. Flow data used for the calibration correspond to measurements performed in two vertical filters used for sewage water treatment at a research pilot treatment plant. Numerical simulations were performed using the code MIN3P, in which variably saturated flow is based on the Richards equation. Soil hydraulic functions based on van Genuchten coefficients and preferential flow characteristics were obtained by calibrating the model to measured data using evolution strategies with covariance matrix adaptation (CMA-ES). The presented inverse modelling procedure not only provides best-fit parameterizations for separate and joint model objectives, but also utilizes the information from multiple re-starts of the optimization algorithm to determine suitable parameter ranges and reveal potential correlations. The sequential automatic calibration is both straightforward and efficient even if different complex objective functions are considered.
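
    A condensed sketch of such a calibration: the van Genuchten retention function fitted to synthetic data with a global evolutionary optimizer (scipy's differential evolution as a stand-in for the paper's CMA-ES; all values invented):

```python
import numpy as np
from scipy.optimize import differential_evolution

def theta_vg(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention: theta as a function of suction head h > 0."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h)**n)**m

# Synthetic "measured" retention data for a known parameter set.
h = np.logspace(-1, 3, 25)                     # suction heads [cm]
true = (0.05, 0.40, 0.02, 1.8)
obs = theta_vg(h, *true) + np.random.default_rng(5).normal(0, 0.002, h.size)

def misfit(p):
    return np.sum((theta_vg(h, *p) - obs)**2)

# Global evolutionary search over physically plausible parameter bounds.
res = differential_evolution(misfit, bounds=[(0.0, 0.2), (0.3, 0.6),
                                             (1e-3, 0.2), (1.1, 4.0)], seed=0)
print(res.x)   # should be close to `true`
```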

  10. Wind tunnel investigation of a large-scale upper surface blown-flap model having four engines

    NASA Technical Reports Server (NTRS)

    Aoyagi, K.; Falarski, M. D.; Koenig, D. G.

    1975-01-01

    Investigations were conducted in the Ames 40- by 80-Foot Wind Tunnel to determine the aerodynamic characteristics of a large-scale subsonic jet transport model with an upper-surface-blown flap system. The model had a 25 deg swept wing of aspect ratio 7.28 and four turbofan engines. The lift of the flap system was augmented by turning the turbofan exhaust over the Coanda surface. Results were obtained for several flap deflections with several wing leading-edge configurations at jet momentum coefficients from 0 to 4.0. Three-component longitudinal data are presented with four engines operating. In addition, longitudinal and lateral data are presented with an engine out. The maximum lift and stall angle of the four-engine model were lower than those obtained with a two-engine model that was previously investigated. The addition of the outboard nacelles had an adverse effect on these values. Efforts to improve these values were successful. A maximum lift coefficient of 8.8 at an angle of attack of 27 deg was obtained with a jet thrust coefficient of 2 for the landing flap configuration.

  11. Sensitivity study of a large-scale air pollution model by using high-performance computations and Monte Carlo algorithms

    NASA Astrophysics Data System (ADS)

    Ostromsky, Tz.; Dimov, I.; Georgieva, R.; Marinov, P.; Zlatev, Z.

    2013-10-01

    In this paper we present some new results of our work on the sensitivity analysis of a large-scale air pollution model, more specifically the Danish Eulerian Model (DEM). The main purpose of this study is to analyse the sensitivity of ozone concentrations with respect to the rates of some chemical reactions. The current sensitivity study considers the rates of six important chemical reactions and is done for the areas of several European cities with different geographical locations, climate, industrialization and population density. Some of the most widely used variance-based techniques for sensitivity analysis, namely Sobol estimates and their modifications, have been used in this study. A vast number of numerical experiments with a version of the Danish Eulerian Model specially adapted for the purpose (SA-DEM) were carried out to compute global Sobol sensitivity measures. SA-DEM was implemented and run on two powerful cluster supercomputers: IBM Blue Gene/P, the most powerful parallel supercomputer in Bulgaria, and IBM MareNostrum III, the most powerful parallel supercomputer in Spain. The refined (480 × 480) mesh version of the model was used in the experiments on MareNostrum III, which is a challenging computational problem even on such a powerful machine. Some optimizations of the code with respect to parallel efficiency and memory use were performed. Tables with performance results of a number of numerical experiments on IBM Blue Gene/P and on IBM MareNostrum III are presented and analysed.
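
    For reference, first-order Sobol indices can be estimated with the Saltelli pick-freeze scheme; a self-contained sketch on a toy function (unrelated to the DEM's chemistry):

```python
import numpy as np

def first_order_sobol(model, n_inputs, n=200_000, seed=0):
    """First-order Sobol indices via the Saltelli pick-freeze estimator."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_inputs))
    B = rng.uniform(size=(n, n_inputs))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]               # freeze input i at the B sample
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# Toy model y = x1 + 2 x2 on [0,1]^2; exact first-order indices are 0.2 and 0.8.
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
print(first_order_sobol(f, n_inputs=2))
```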

  12. Systems Perturbation Analysis of a Large-Scale Signal Transduction Model Reveals Potentially Influential Candidates for Cancer Therapeutics.

    PubMed

    Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš

    2016-01-01

    Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools for understanding system dynamics and identifying critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model's components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them with respect to biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched in several biological processes. The inositol pathway was found to be most influential under inactivating perturbations, whereas the kinase and small cell lung cancer pathways were identified as the most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also found among the influential components, based on the components they affect in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of most influential components that affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis
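
    A miniature version of this kind of perturbation analysis on a toy Boolean signalling network, where a node is clamped to mimic loss of function and influence is counted as the number of downstream nodes that flip (the network and rules are invented, not the paper's model):

```python
# Toy Boolean signalling network; perturbations clamp a node on or off.
rules = {
    "growth_factor": lambda s: s["growth_factor"],   # input, self-sustaining
    "PI3K":          lambda s: s["growth_factor"],
    "AKT":           lambda s: s["PI3K"],
    "apoptosis":     lambda s: not s["AKT"],
}

def steady_state(clamp=None, steps=50):
    """Synchronous updates until a fixed point (or the step limit)."""
    s = {n: False for n in rules}
    s["growth_factor"] = True
    for _ in range(steps):
        new = {n: rule(s) for n, rule in rules.items()}
        if clamp:
            new.update(clamp)
        if new == s:
            break
        s = new
    return s

base = steady_state()
# Loss-of-function perturbation of PI3K; influence = downstream nodes flipped.
ko = steady_state(clamp={"PI3K": False})
influence = sum(base[n] != ko[n] for n in rules if n != "PI3K")
print(f"PI3K knockout flips {influence} downstream node(s)")  # 2: AKT, apoptosis
```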

  13. Assimilation of satellite data to optimize large-scale hydrological model parameters: a case study for the SWOT mission

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Boone, A.; Ricci, S.; Biancamaria, S.; Mognard, N.

    2014-11-01

    During the last few decades, satellite measurements have been widely used to study the continental water cycle, especially in regions where in situ measurements are not readily available. The future Surface Water and Ocean Topography (SWOT) satellite mission will deliver maps of water surface elevation (WSE) with an unprecedented resolution and provide observations of rivers wider than 100 m and water surface areas greater than approximately 250 m × 250 m over continental surfaces between 78° S and 78° N. This study aims to investigate the potential of SWOT data for parameter optimization in large-scale river routing models. The method consists of applying a data assimilation approach, the extended Kalman filter (EKF) algorithm, to correct the Manning roughness coefficients of the ISBA (Interactions between Soil, Biosphere, and Atmosphere)-TRIP (Total Runoff Integrating Pathways) continental hydrologic system. Parameters such as the Manning coefficient, used within such models to describe water basin characteristics, are generally derived from geomorphological relationships, which leads to significant errors at reach and larger scales. The current study focuses on the Niger Basin, a transboundary river basin. Since SWOT observations are not yet available, and also to assess the proposed assimilation method, the study is carried out in the framework of an observing system simulation experiment (OSSE). It is assumed that modeling errors are due only to uncertainties in the Manning coefficient. The true Manning coefficients are then assumed to be known and are used to generate synthetic SWOT observations over the period 2002-2003. The impact of the assimilation system on the Niger Basin hydrological cycle is then quantified. The optimization of the Manning coefficient with the EKF algorithm over an 18-month period led to a significant improvement in the river water levels. The relative bias of the water level is globally improved (a 30
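
    A generic sketch of one EKF analysis step in which the model parameters (e.g., Manning coefficients) play the role of the state and the observation operator is the hydrologic model itself (the toy "model" below is invented, and the Jacobian is taken by finite differences):

```python
import numpy as np

def ekf_parameter_update(theta, P, y_obs, model, R, eps=1e-4):
    """One EKF analysis step treating model parameters as the state.

    theta: parameter vector (e.g., Manning coefficients)
    P: parameter error covariance; y_obs: observed water levels
    model: function mapping parameters to simulated observations
    R: observation error covariance
    """
    y = model(theta)
    # Jacobian of the observation operator by finite differences.
    H = np.empty((y.size, theta.size))
    for j in range(theta.size):
        dtheta = np.zeros_like(theta)
        dtheta[j] = eps
        H[:, j] = (model(theta + dtheta) - y) / eps
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    theta_new = theta + K @ (y_obs - y)
    P_new = (np.eye(theta.size) - K @ H) @ P
    return theta_new, P_new

# Hypothetical toy "model": water levels rise with Manning roughness.
model = lambda th: np.array([1.0 + 2.0 * th[0], 0.5 + 1.5 * th[0]])
theta, P = np.array([0.03]), np.eye(1) * 1e-3
theta, P = ekf_parameter_update(theta, P, np.array([1.08, 0.56]),
                                model, R=np.eye(2) * 1e-4)
```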

  14. Realist model approach to quantum mechanics

    NASA Astrophysics Data System (ADS)

    Hájíček, P.

    2013-06-01

    The paper proves that quantum mechanics is compatible with the constructive realism of modern philosophy of science. The proof is based on the observation that properties of quantum systems that are uniquely determined by their preparations can be assumed objective without the difficulties that are encountered by the same assumption about values of observables. The resulting realist interpretation of quantum mechanics is made rigorous by studying the space of quantum states—the convex set of state operators. Prepared states are classified according to their statistical structure into indecomposable and decomposable instead of pure and mixed. Simple objective properties are defined and shown to form a Boolean lattice.

  15. The application of ICOM, a non-hydrostatic, fully unstructured mesh model in large scale ocean domains

    NASA Astrophysics Data System (ADS)

    Kramer, Stephan C.; Piggott, Matthew D.; Cotter, Colin J.; Pain, Chris C.; Nelson, Rhodri B.

    2010-05-01

    given of some of the difficulties that were encountered in the application of ICOM in large scale, high aspect ratio ocean domains and how they have been overcome. A large scale application in the form of a baroclinic, wind-driven double gyre will be presented and the results are compared to two other models, the MIT general circulation model (MITgcm, [3]) and NEMO (Nucleus for European Modelling of the Ocean, [4]). Also a comparison of the performance and parallel scaling of the models on a supercomputing platform will be made. References [1] M.D. Piggott, G.J. Gorman, C.C. Pain, P.A. Allison, A.S. Candy, B.T. Martin and W.R. Wells, "A new computational framework for multi-scale ocean modelling based on adapting unstructured meshes", International Journal for Numerical Methods in Fluids 56, pp 1003 - 1015, 2008 [2] S.C. Kramer, C.J. Cotter and C.C. Pain, "Solving the Poisson equation on small aspect ratio domains using unstructured meshes", submitted to Ocean Modelling [3] J. Marshall, C. Hill, L. Perelman, and A. Adcroft, "Hydrostatic, quasi-hydrostatic, and nonhydrostatic ocean modeling", J. Geophysical Res., 102(C3), pp 5733-5752, 1997 [4] G. Madec, "NEMO ocean engine", Note du Pole de modélisation, Institut Pierre-Simon Laplace (IPSL), France, No 27 ISSN No 1288-1619

  16. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

    A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. The resulting impulse responses, or 'Green's functions', then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific, and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to use of spatially correlated errors, and to other data types.
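
    The inverse step lends itself to a compact illustration. Below is a hypothetical, damped least-squares version of the kernel inversion; the matrix sizes and noise level are invented for the example, whereas in the paper the kernels come from the linearized GCM response:

    ```python
    import numpy as np

    # Columns of G hold the model's impulse responses ("Green's
    # functions") to isolated balanced vortices, d is the
    # altimetry-minus-model misfit, and the vortex amplitudes a are
    # recovered by damped least squares.
    rng = np.random.default_rng(0)
    G = rng.standard_normal((500, 40))        # 500 data points, 40 kernels
    a_true = rng.standard_normal(40)
    d = G @ a_true + 0.1 * rng.standard_normal(500)

    lam = 1.0                                 # damping (prior-to-noise ratio)
    a_est = np.linalg.solve(G.T @ G + lam * np.eye(40), G.T @ d)
    ```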

  17. Large-scale water resources management within the framework of GLOWA-Danube—The water supply model

    NASA Astrophysics Data System (ADS)

    Nickel, Darla; Barthel, Roland; Braun, Juergen

    The research project GLOWA-Danube, financed by the German Federal Government, investigates long-term changes in the water cycle of the upper Danube river basin in light of global climatic change. Its aim is to build a fully integrated decision support tool “DANUBIA” that combines the competence of eleven institutes in domains covering all major aspects governing the water cycle. The research group “Groundwater and Water Supply” at the Institute of Hydraulic Engineering (IWS), Universitaet Stuttgart, contributes a three-dimensional groundwater flow model and a large-scale water supply model which simulate water availability and quality, water supply, and the related costs for global change scenarios. This article addresses the task of creating an agent-based model of the water supply sector. The water supply model links the various physical models determining water quality and availability on the one hand and the so-called “Actor” models calculating water demand on the other, by determining the actual water supply and the related costs, both of which are subject to technical and physical constraints (e.g., existing infrastructure and its capacity, water availability and quality, geology, elevation, etc.). In reality, water supply within the study area is organised through a three-tiered structure: long-distance, regional, and a multitude of community-based suppliers. In order to model this system, in which each supply company defines its own optimum, an agent-based modelling approach (implemented using JAVA) was chosen. This approach is novel in modelling water supply in that not only the water supply infrastructure but, more importantly, the decision makers (communities, water supply companies) are represented as generalised objects, capable of performing actions following rules determined by the class they belong to.
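
    A minimal, hypothetical sketch of the agent idea follows (the actual DANUBIA implementation is in Java; Python is used here for consistency with the other examples). Each supplier agent meets its demand from the cheapest sources first, subject to capacity; the class, rules, and numbers are illustrative only:

    ```python
    class Supplier:
        def __init__(self, name, demand, sources):
            self.name = name
            self.demand = demand              # m3/day
            self.sources = sources            # list of (cost per m3, capacity)

        def act(self):
            """Meet demand from the cheapest sources first."""
            remaining, plan = self.demand, []
            for cost, capacity in sorted(self.sources):
                take = min(remaining, capacity)
                if take > 0:
                    plan.append((cost, take))
                    remaining -= take
            return plan, remaining            # remaining > 0 signals a shortfall

    community = Supplier("community A", demand=120.0,
                         sources=[(0.5, 80.0), (1.2, 100.0)])
    plan, shortfall = community.act()
    ```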

  18. A statistical model for Windstorm Variability over the British Isles based on Large-scale Atmospheric and Oceanic Mechanisms

    NASA Astrophysics Data System (ADS)

    Kirchner-Bossi, Nicolas; Befort, Daniel J.; Wild, Simon B.; Ulbrich, Uwe; Leckebusch, Gregor C.

    2016-04-01

    Time-clustered winter storms are responsible for the majority of wind-induced losses in Europe. In recent years, different atmospheric and oceanic large-scale mechanisms such as the North Atlantic Oscillation (NAO) or the Meridional Overturning Circulation (MOC) have been shown to drive a significant portion of the windstorm variability over Europe. In this work we systematically investigate the influence of different large-scale natural variability modes: more than 20 indices related to those mechanisms with proven or potential influence on windstorm frequency variability over Europe - mostly SST- or pressure-based - are derived from the ECMWF ERA-20C reanalysis over the last century (1902-2009) and compared to the windstorm variability for the European winter (DJF). Windstorms are defined and tracked as in Leckebusch et al. (2008). The derived indices are then employed in a statistical procedure including a stepwise Multiple Linear Regression (MLR) and an Artificial Neural Network (ANN), aiming to hindcast the inter-annual (DJF) regional windstorm frequency variability in a case study for the British Isles. This case study reveals 13 indices with a statistically significant coupling with seasonal windstorm counts. The Scandinavian Pattern (SCA) showed the strongest correlation (0.61), followed by the NAO (0.48) and the Polar/Eurasia Pattern (0.46). The obtained indices (standard-normalised) are selected as predictors for a windstorm variability hindcast model applied to the British Isles. First, a stepwise linear regression is performed to identify which mechanisms explain windstorm variability best. Finally, the indices retained by the stepwise regression are used to develop a multilayer perceptron-based ANN that hindcasts seasonal windstorm frequency and clustering. Eight indices (SCA, NAO, EA, PDO, W.NAtl.SST, AMO (unsmoothed), EA/WR and Trop.N.Atl SST) are retained by the stepwise regression. Among them, SCA showed the highest linear
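
    The forward-selection core of such a stepwise procedure is easy to sketch. Below is a schematic stand-in for the stepwise MLR (it omits the usual F-test stopping rule and simply selects a fixed number of terms; column indices and shapes are assumptions):

    ```python
    import numpy as np

    def forward_stepwise(X, y, max_terms=8):
        """Greedy forward selection of climate indices (columns of X,
        assumed standard-normalised) that best explain centred seasonal
        windstorm counts y, by residual sum of squares."""
        selected = []
        for _ in range(max_terms):
            scores = []
            for j in range(X.shape[1]):
                if j in selected:
                    continue
                cols = X[:, selected + [j]]
                beta, *_ = np.linalg.lstsq(cols, y, rcond=None)
                scores.append((np.sum((y - cols @ beta) ** 2), j))
            _, j_best = min(scores)
            selected.append(j_best)
        return selected
    ```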

  19. Large-scale atmospheric processes in the Arctic region reproduced by Sl-AV model and reanalysis data

    NASA Astrophysics Data System (ADS)

    Kulikova, Irina; Kruglova, Ekaterina; Khan, Valentina; Kiktev, Dmitry; Tischenko, Vladimir

    2015-04-01

    The variability of large-scale atmospheric processes in the Arctic region was analyzed on the basis of NCEP/DOE reanalysis data and seasonal hindcasts from the global semi-Lagrangian model (SL-AV), developed by the Hydrometeorological Centre of Russia in collaboration with the Institute of Numerical Mathematics. Using factor analysis, it was shown that the model reproduces well the first major variability modes, which explain 85-90% of the accumulated variance. Teleconnection indices, as quantitative characteristics of low-frequency variability, are used to identify zonal and meridional flow regimes. Composite maps indicating the spatial distribution of anomalies of the main meteorological variables (500 hPa geopotential height, sea level pressure, temperature at 850 hPa, 2 m air temperature, precipitation, and the zonal and meridional wind components) for the positive and negative phases of each index of atmospheric circulation are created. Average values of the composite maps are accompanied by their statistical significance, assessed using the "bootstrap" technique. The main characteristics of the field configuration of the above meteorological parameters in the Arctic region, corresponding to positive and negative phases of the circulation indices, are analyzed and discussed. The ability of the SL-AV model to reproduce these characteristics at monthly and seasonal time scales is discussed as well. The results of this study are aimed at improving the quality of long-range forecasts and increasing the "limit of predictability", and can be useful in practice for developing monthly and seasonal weather forecasts for the Arctic region.
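
    A compact sketch of the resampling test mentioned above, under the assumption that the composite is a mean over the winters in one phase of an index (array shapes and the two-sided test are assumptions, not the authors' exact procedure):

    ```python
    import numpy as np

    def composite_pvalues(field, phase_mask, n_boot=1000, seed=0):
        """Bootstrap significance of a composite-mean anomaly (e.g. the
        500 hPa height anomaly for winters in the positive phase of an
        index). field: (n_years, ...) array; phase_mask: boolean mask of
        shape (n_years,) selecting the winters in that phase."""
        rng = np.random.default_rng(seed)
        anomaly = field[phase_mask].mean(axis=0) - field.mean(axis=0)
        n = int(phase_mask.sum())
        null = np.empty((n_boot,) + anomaly.shape)
        for b in range(n_boot):
            idx = rng.choice(field.shape[0], size=n, replace=True)
            null[b] = field[idx].mean(axis=0) - field.mean(axis=0)
        return (np.abs(null) >= np.abs(anomaly)).mean(axis=0)  # two-sided p-values
    ```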

  20. Vibration, performance, flutter and forced response characteristics of a large-scale propfan and its aeroelastic model

    NASA Technical Reports Server (NTRS)

    August, Richard; Kaza, Krishna Rao V.

    1988-01-01

    An investigation of the vibration, performance, flutter, and forced response of the large-scale propfan, SR7L, and its aeroelastic model, SR7A, has been performed by applying available structural and aeroelastic analytical codes and then correlating measured and calculated results. Finite element models of the blades were used to obtain modal frequencies, displacements, stresses and strains. These values were then used in conjunction with a 3-D, unsteady, lifting surface aerodynamic theory for the subsequent aeroelastic analyses of the blades. The agreement between measured and calculated frequencies and mode shapes for both models is very good. Calculated power coefficients correlate well with those measured for low advance ratios. Flutter results show that both propfans are stable at their respective design points. There is also good agreement between calculated and measured blade vibratory strains due to excitation resulting from yawed flow for the SR7A propfan. The similarity of structural and aeroelastic results shows that the SR7A propfan simulates the SR7L characteristics.

  1. Systems Perturbation Analysis of a Large-Scale Signal Transduction Model Reveals Potentially Influential Candidates for Cancer Therapeutics

    PubMed Central

    Puniya, Bhanwar Lal; Allen, Laura; Hochfelder, Colleen; Majumder, Mahbubul; Helikar, Tomáš

    2016-01-01

    Dysregulation in signal transduction pathways can lead to a variety of complex disorders, including cancer. Computational approaches such as network analysis are important tools to understand system dynamics as well as to identify critical components that could be further explored as therapeutic targets. Here, we performed perturbation analysis of a large-scale signal transduction model in extracellular environments that stimulate cell death, growth, motility, and quiescence. Each of the model’s components was perturbed under both loss-of-function and gain-of-function mutations. Using 1,300 simulations under both types of perturbations across various extracellular conditions, we identified the most and least influential components based on the magnitude of their influence on the rest of the system. Based on the premise that the most influential components might serve as better drug targets, we characterized them in terms of biological functions, housekeeping genes, essential genes, and druggable proteins. The most influential components under all environmental conditions were enriched with several biological processes. The inositol pathway was found to be most influential under inactivating perturbations, whereas the kinase and small cell lung cancer pathways were identified as most influential under activating perturbations. The most influential components were enriched with essential genes and druggable proteins. Moreover, known cancer drug targets were also classified among the influential components based on the components they affect in the network. Additionally, the systemic perturbation analysis of the model revealed a network motif of the most influential components, which affect each other. Furthermore, our analysis predicted novel combinations of cancer drug targets with various effects on other most influential components. We found that the combinatorial perturbation consisting of PI3K inactivation and overactivation of IP3R1 can lead to increased activity levels of apoptosis
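
    A toy version of this perturbation screen can clarify the idea. The sketch below clamps each node OFF (loss of function) or ON (gain of function), runs synchronous Boolean dynamics for a fixed number of updates, and ranks nodes by how many other nodes change state; the three-node rule set is illustrative, not the published model:

    ```python
    import itertools

    # Illustrative Boolean rules (hypothetical, not the published model).
    rules = {
        "A": lambda s: s["C"] and not s["B"],
        "B": lambda s: s["A"],
        "C": lambda s: not s["B"],
    }

    def simulate(clamp=None, steps=50):
        """Synchronous update; clamped nodes are held at a fixed value."""
        state = {n: False for n in rules}
        for _ in range(steps):
            state = {n: (clamp[n] if clamp and n in clamp else f(state))
                     for n, f in rules.items()}
        return state

    base = simulate()
    influence = {}
    for node, value in itertools.product(rules, [False, True]):
        perturbed = simulate(clamp={node: value})
        influence[(node, value)] = sum(perturbed[m] != base[m]
                                       for m in rules if m != node)
    print(max(influence, key=influence.get))  # most influential perturbation
    ```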

  2. A Limited-Memory BFGS Algorithm Based on a Trust-Region Quadratic Model for Large-Scale Nonlinear Equations

    PubMed Central

    Li, Yong; Yuan, Gonglin; Wei, Zengxin

    2015-01-01

    In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method. PMID:25950725
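
    For readers unfamiliar with the framework, the subproblem at the heart of such methods takes the following standard form (a generic textbook statement, not necessarily the paper's exact notation):

    ```latex
    % Trust-region subproblem solved at each iterate x_k; B_k is the
    % Hessian approximation, here supplied by the limited-memory BFGS
    % update, g_k the gradient, and Delta_k the trust-region radius.
    \min_{s \in \mathbb{R}^n} \; g_k^{\top} s + \tfrac{1}{2}\, s^{\top} B_k s
    \quad \text{subject to} \quad \| s \| \le \Delta_k
    ```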

  3. HydroSCAPE: a multi-scale framework for streamflow routing in large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Piccolroaz, S.; Di Lazzaro, M.; Zarlenga, A.; Majone, B.; Bellin, A.; Fiori, A.

    2015-09-01

    We present HydroSCAPE, a large-scale hydrological model with an innovative streamflow routing scheme based on the Width Function Instantaneous Unit Hydrograph (WFIUH) theory, designed to facilitate coupling with weather forecasting and climate models. HydroSCAPE preserves the geomorphological dispersion of the river network when dealing with horizontal hydrological fluxes, irrespective of the adopted grid size, which is typically inherited from the overlying weather forecast or climate model. This is achieved through a separate treatment of hillslope processes and routing within the river network, the latter simulated by suitable transfer functions constructed by applying the WFIUH theory to the desired level of detail. Transfer functions are constructed for each grid cell and for the network nodes where water discharge is desired, taking advantage of the detailed morphological information contained in the Digital Elevation Model of the zone of interest. These characteristics render HydroSCAPE well suited for multi-scale applications, ranging from catchment up to continental scale, and for investigating extreme events (e.g. floods) that require an accurate description of routing through the river network. The model combines reliability and robustness with a parsimonious parametrization and computational efficiency, leading to a dramatic reduction of the computational effort with respect to fully gridded models at a comparable level of routing accuracy. Additionally, HydroSCAPE is designed with a simple and flexible modular structure, which makes it particularly suitable for massive parallelization, customization according to specific user needs and preferences (e.g. choice of the rainfall-runoff model), and continuous development and improvement.
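
    The routing idea reduces to a convolution of each cell's runoff with a travel-time distribution. The sketch below is schematic, not HydroSCAPE's implementation; the gamma-like transfer function stands in for one derived from the width function of the upstream network:

    ```python
    import numpy as np

    # Placeholder travel-time distribution (a WFIUH would be derived from
    # the DEM and network width function).
    t = np.arange(200, dtype=float)
    wfiuh = t * np.exp(-t / 20.0)
    wfiuh /= wfiuh.sum()                 # unit mass, so water volume is conserved

    def route_runoff(runoff, transfer_fn):
        """Convolve a cell's runoff series with its transfer function to
        get the routed discharge contribution at the outlet node."""
        return np.convolve(runoff, transfer_fn)[: len(runoff)]

    discharge = route_runoff(np.ones(500), wfiuh)
    ```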

  4. Study of materials and machines for 3D printed large-scale, flexible electronic structures using fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Hwang, Seyeon

    Three-dimensional printing (3DP), also called additive manufacturing (AM) or rapid prototyping (RP), has emerged to revolutionize manufacturing and completely transform how products are designed and fabricated. A great deal of research activity has been carried out to apply this new technology to a variety of fields. In spite of many endeavors, much more research is still required to perfect the processes of 3D printing techniques, especially in the areas of large-scale additive manufacturing and flexible printed electronics. The principles of various 3D printing processes are briefly outlined in the Introduction. New types of thermoplastic polymer composites aimed at specific functional applications are also introduced in that section. Chapter 2 presents studies of metal/polymer composite filaments for the fused deposition modeling (FDM) process. Various metal particles, copper and iron, are added into thermoplastic polymer matrices as reinforcement fillers. The thermo-mechanical properties of the composites, such as thermal conductivity, hardness, tensile strength, and fracture mechanism, are tested to determine the effects of metal fillers on 3D printed composite structures for the large-scale printing process. In Chapter 3, carbon/polymer composite filaments are developed by a simple mechanical blending process with the aim of fabricating flexible 3D printed electronics as a single structure. Various types of carbon particles, consisting of multi-wall carbon nanotube (MWCNT), conductive carbon black (CCB), and graphite, are used as conductive fillers to provide the thermoplastic polyurethane (TPU) with improved electrical conductivity. The mechanical behavior and conduction mechanisms of the developed composite materials are examined as a function of the carbon-filler loading in this section. Finally, the prototype flexible electronics are modeled and manufactured by the FDM process using carbon/TPU composite filaments and

  5. Experimental results and numerical modeling of a high-performance large-scale cryopump. I. Test particle Monte Carlo simulation

    SciTech Connect

    Luo Xueli; Day, Christian; Haas, Horst; Varoutis, Stylianos

    2011-07-15

    For the torus of the nuclear fusion project ITER (originally the International Thermonuclear Experimental Reactor, but also Latin: the way), eight high-performance large-scale customized cryopumps must be designed and manufactured to accommodate the very high pumping speeds and throughputs of the fusion exhaust gas needed to maintain the plasma under stable vacuum conditions and to comply with other criteria which cannot be met by standard commercial vacuum pumps. Under an earlier research and development program, a model pump of reduced scale based on active cryosorption on charcoal-coated panels at 4.5 K was manufactured and tested systematically. The present article focuses on the simulation of the true three-dimensional complex geometry of the model pump by the newly developed ProVac3D Monte Carlo code. It is shown for gas throughputs of up to 1000 sccm (≈1.69 Pa m³/s at T = 0 °C) in the free molecular regime that the numerical simulation results are in good agreement with the measured pumping speeds. Meanwhile, the capture coefficient associated with the virtual region around the cryogenic panels and shields, which holds for higher throughputs, is calculated using this generic approach. This means that test particle Monte Carlo simulations in free molecular flow can be used not only for the optimization of the pumping system but also to supply the input parameters necessary for future direct simulation Monte Carlo in the full flow regime.
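
    A drastically simplified test-particle picture of a capture coefficient calculation is sketched below; the geometry is collapsed into two hit probabilities and the coefficients are invented, whereas ProVac3D traces particles through the real 3-D pump geometry:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def capture_coefficient(n_particles=100_000, sticking=0.6, p_hit_panel=0.5):
        """Toy test-particle Monte Carlo in the free molecular regime:
        each particle either reaches a cryopanel (probability p_hit_panel)
        where it sticks with probability `sticking` or is diffusely
        re-emitted, or it escapes back through the inlet."""
        captured = 0
        for _ in range(n_particles):
            while True:
                if rng.random() < p_hit_panel:        # reaches a cryopanel
                    if rng.random() < sticking:       # adsorbed on the charcoal
                        captured += 1
                        break
                    # else: diffusely re-emitted, keep flying
                else:                                  # escapes through the inlet
                    break
        return captured / n_particles

    print(capture_coefficient())
    ```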

  6. Feasibility analysis of using inverse modeling for estimating natural groundwater recharge from a large-scale soil moisture monitoring network

    NASA Astrophysics Data System (ADS)

    Wang, Tiejun; Franz, Trenton E.; Yue, Weifeng; Szilagyi, Jozsef; Zlotnik, Vitaly A.; You, Jinsheng; Chen, Xunhong; Shulski, Martha D.; Young, Aaron

    2016-02-01

    Despite the importance of groundwater recharge (GR), its accurate estimation still remains one of the most challenging tasks in the field of hydrology. In this study, with the help of inverse modeling, long-term (6 years) soil moisture data at 34 sites from the Automated Weather Data Network (AWDN) were used to estimate the spatial distribution of GR across Nebraska, USA, where significant spatial variability exists in soil properties and precipitation (P). To ensure the generality of this study and its potential broad applications, data from public domains and literature were used to parameterize the standard Hydrus-1D model. Although observed soil moisture differed significantly across the AWDN sites mainly due to the variations in P and soil properties, the simulations were able to capture the dynamics of observed soil moisture under different climatic and soil conditions. The inferred mean annual GR from the calibrated models varied over three orders of magnitude across the study area. To assess the uncertainties of the approach, estimates of GR and actual evapotranspiration (ETa) from the calibrated models were compared to the GR and ETa obtained from other techniques in the study area (e.g., remote sensing, tracers, and regional water balance). Comparison clearly demonstrated the feasibility of inverse modeling and large-scale (>10⁴ km²) soil moisture monitoring networks for estimating GR. In addition, the model results were used to further examine the impacts of climate and soil on GR. The data showed that both P and soil properties had significant impacts on GR in the study area with coarser soils generating higher GR; however, different relationships between GR and P emerged at the AWDN sites, defined by local climatic and soil conditions. In general, positive correlations existed between annual GR and P for the sites with coarser-textured soils or under wetter climatic conditions. With the rapidly expanding soil moisture monitoring networks around the
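
    A hypothetical miniature of the inverse-modelling step is sketched below: soil parameters are fitted so a stand-in soil-water model reproduces an observed moisture series. Hydrus-1D itself is driven externally and is not reproduced here; the linear-reservoir model, parameter names, and data are all invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def soil_model(params, forcing):
        """Toy bucket model: k = drainage rate constant (1/day),
        smax = storage capacity (mm); returns volumetric saturation."""
        k, smax = params
        s, out = 0.2 * smax, []
        for p in forcing:                  # daily precipitation (mm)
            s = min(s + p - k * s, smax)   # linear-reservoir drainage
            out.append(s / smax)
        return np.asarray(out)

    forcing = np.random.default_rng(3).gamma(0.5, 4.0, size=365)
    obs = soil_model([0.08, 150.0], forcing) \
        + 0.01 * np.random.default_rng(4).standard_normal(365)
    fit = least_squares(lambda q: soil_model(q, forcing) - obs,
                        x0=[0.05, 100.0],
                        bounds=([1e-3, 50.0], [1.0, 400.0]))
    ```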

  7. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Karssenberg, D.; Wanders, N.; de Roo, A.; de Jong, S.; Bierkens, M. F.

    2013-12-01

    Large-scale hydrological models are nowadays mostly calibrated using observed discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated, or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide the closest thing to a direct measurement of the state of the unsaturated zone, and thus are potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in calibration of large-scale hydrological models by addressing two research questions: 1) Which parameters of hydrological models can be identified by calibration with remotely sensed soil moisture? 2) Does calibration with remotely sensed soil moisture lead to an improved calibration of hydrological models compared to approaches that calibrate only with discharge, and does this in turn lead to improved forecasts of soil moisture content and discharge? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper Danube area. Calibration is done with discharge and remotely sensed soil moisture acquired by AMSR-E, SMOS and ASCAT. Four scenarios are studied: no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data (three satellites) and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated for a period of 2 years and validated for the calibrated model parameters on a validation period of 10 years. Results show that calibration with discharge data improves the estimation of groundwater parameters (e.g., groundwater reservoir constant) and
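
    One analysis step of a dual state-parameter ensemble Kalman filter can be sketched compactly: the augmented ensemble of states and parameters is updated jointly from the soil-moisture innovation. This is a schematic with perturbed observations, not LISFLOOD's code; shapes and the linear observation operator H are assumptions:

    ```python
    import numpy as np

    def enkf_dual_update(states, params, obs, obs_err, H):
        """states: (n_state, n_ens); params: (n_par, n_ens);
        obs: (n_obs,); H: (n_obs, n_state). The ensemble covariance
        between [state; parameters] and predicted observations spreads
        the innovation onto both."""
        rng = np.random.default_rng(2)
        Z = np.vstack([states, params])               # augmented ensemble
        Y = H @ states                                # predicted observations
        y_pert = obs[:, None] + obs_err * rng.standard_normal(Y.shape)
        Za = Z - Z.mean(axis=1, keepdims=True)
        Ya = Y - Y.mean(axis=1, keepdims=True)
        n_ens = Z.shape[1]
        Pzy = Za @ Ya.T / (n_ens - 1)
        Pyy = Ya @ Ya.T / (n_ens - 1) + obs_err**2 * np.eye(Y.shape[0])
        K = Pzy @ np.linalg.inv(Pyy)                  # Kalman gain
        Z_new = Z + K @ (y_pert - Y)
        return Z_new[: states.shape[0]], Z_new[states.shape[0]:]
    ```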

  8. Towards a Quantitative Use of Satellite Remote Sensing in Crop Growth Models for Large Scale Agricultural Production Estimate (Invited)

    NASA Astrophysics Data System (ADS)

    Defourny, P.

    2013-12-01

    Biophysical variables such as the Green Area Index (GAI), fAPAR and fcover, usually retrieved from MODIS, MERIS and SPOT-Vegetation, described the quality of the green vegetation development. The GLOBAM (Belgium) and EU FP-7 MOCCCASIN (Russia) projects improved the standard products and were demonstrated at large scale. The GAI retrieved from MODIS time series using a purity index criterion successfully depicted the inter-annual variability. Furthermore, the quantitative assimilation of these GAI time series into a crop growth model improved the yield estimates over the years. These results showed that GAI assimilation works best at the district or provincial level. In the context of the GEO Ag., the Joint Experiment of Crop Assessment and Monitoring (JECAM) was designed to enable the global agricultural monitoring community to compare such methods and results over a variety of regional cropping systems. For a network of test sites around the world, satellite and field measurements are currently collected and will be made available for collaborative effort. This experiment should facilitate international standards for data products and reporting, eventually supporting the development of a global system of systems for agricultural crop assessment and monitoring.

  9. The Climate Potentials and Side-Effects of Large-Scale terrestrial CO2 Removal - Insights from Quantitative Model Assessments

    NASA Astrophysics Data System (ADS)

    Boysen, L.; Heck, V.; Lucht, W.; Gerten, D.

    2015-12-01

    Terrestrial carbon dioxide removal (tCDR) through dedicated biomass plantations is considered one climate engineering (CE) option if implemented at large scale. While the risks and costs are supposed to be small, the effectiveness depends strongly on the spatial and temporal scales of implementation. Based on simulations with a dynamic global vegetation model (LPJmL), we comprehensively assess the effectiveness, biogeochemical side-effects and trade-offs from an Earth-system-analytic perspective. We analyzed systematic land-use scenarios in which all, 25%, or 10% of natural and/or agricultural areas are converted to tCDR plantations, under the assumption that biomass plantations are established once the 2°C target is crossed in a business-as-usual climate change trajectory. The resulting tCDR potentials in year 2100 include the net accumulated annual biomass harvests and changes in all land carbon pools. We find that only the most spatially excessive, and thus undesirable, scenario would be capable of restoring the 2°C target by 2100 under continuing high emissions (with a cooling of 3.02°C). Large-scale biomass plantations covering areas between 1.1 and 4.2 Gha would produce a warming reduction potential of 0.8-1.4°C. tCDR plantations at smaller scales do not build up enough biomass over the considered period, and their potential to reduce global warming is substantially lowered, to no more than 0.5-0.6°C. Finally, we demonstrate that the (non-economic) costs for the Earth system include negative impacts on the water cycle and on ecosystems, which are already under pressure due to both land-use change and climate change. Overall, tCDR may lead to a further transgression of land- and water-related planetary boundaries while not being able to set back the crossing of the planetary boundary for climate change. tCDR could still be considered in the near-future mitigation portfolio if implemented on small scales on wisely chosen areas.

  10. Troposphere-stratosphere response to large-scale North Atlantic Ocean variability in an atmosphere/ocean coupled model

    NASA Astrophysics Data System (ADS)

    Omrani, N.-E.; Bader, Jürgen; Keenlyside, N. S.; Manzini, Elisa

    2016-03-01

    The instrumental records indicate that basin-wide wintertime North Atlantic warm conditions are accompanied by a pattern resembling the negative North Atlantic Oscillation (NAO), and cold conditions by a pattern resembling the positive NAO. This relation is well reproduced in a control simulation by the stratosphere-resolving atmosphere-ocean coupled Max-Planck-Institute Earth System Model (MPI-ESM). Further analyses of the MPI-ESM simulation show that large-scale warm North Atlantic conditions are associated with a stratospheric precursory signal that propagates down into the troposphere, preceding the wintertime negative NAO. Additional experiments using only the atmospheric component of MPI-ESM (ECHAM6) indicate that these stratospheric and tropospheric changes are forced by the warm North Atlantic conditions. The basin-wide warming excites a wave-induced stratospheric vortex weakening, stratosphere/troposphere coupling, and a high-latitude tropospheric warming. The induced high-latitude tropospheric warming is associated with a reduction of the growth rate of low-level baroclinic waves over the North Atlantic region, contributing to the negative NAO pattern. For cold North Atlantic conditions, the strengthening of the westerlies in the coupled model is confined to the troposphere and lower stratosphere. Comparing the coupled and uncoupled models shows that in the cold phase the tropospheric changes seen in the coupled model are not well reproduced by the standalone atmospheric configuration. Our experiments provide further evidence that North Atlantic Ocean variability (NAV) impacts the coupled stratosphere/troposphere system. As NAV has been shown to be predictable on seasonal-to-decadal timescales, these results have important implications for the predictability of the extratropical atmospheric circulation on these timescales.

  11. Large-Scale Integrated Hydrologic Modeling: Response of the Susquehanna River Basin to 99-Year Climate Forcing

    NASA Astrophysics Data System (ADS)

    Sedmera, K. A.; Duffy, C. J.; Reed, P. M.

    2004-05-01

    This research focuses on large-scale (10,000-100,000 sq. km) simulation of regional water budgets using digital data sets and a fully-coupled integrated (surface/subsurface) hydrologic model for the Susquehanna River basin (SRB). The main objectives in this effort are to develop an appropriate and consistent data model for the SRB, delineate groundwater basins, assess the dominant modes and spatial scales affecting the SRB, and estimate the dominant hydrologic response of relatively ungaged sub-basins. The data model primarily consists of 1) a 99-year climate and vegetation history from PRISM and VEMAP, 2) land surface parameters from various EPA, NRCS, and USGS reports and data sets, and 3) hydrogeology from various state geologic surveys and reports. MODHMS (MODFLOW Hydrologic Modeling System) is a fully-coupled integrated hydrologic model that simulates 3-D variably saturated subsurface flow (Richards' equation), 1-D channel flow and 2-D surface runoff (diffusion wave approximation), canopy interception and evapotranspiration, and offers robust solutions to the governing equations for coupled surface/subsurface flow. The first step in this approach uses a steady-state simulation to estimate regional recharge, to delineate groundwater basins within each river basin, and to assess the validity of the hydrologic landscape concept. The long-term climate history is then used to drive a transient simulation that will be used to study the effect of seasonal, inter-annual, and decadal climate patterns and land use on the persistence of wet and dry cycles in soil moisture, on recharge, and on the regional water budget as a whole.
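
    For reference, the equation named above takes the following standard one-dimensional vertical form (MODHMS solves the 3-D variably saturated generalization; the notation here is the conventional one, not necessarily the model documentation's):

    ```latex
    % Richards' equation in mixed form: theta = volumetric water content,
    % h = pressure head, K(h) = unsaturated hydraulic conductivity,
    % z positive upward.
    \frac{\partial \theta(h)}{\partial t}
      = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right]
    ```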

  12. The benefits of using remotely sensed soil moisture in parameter identification of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Wanders, Niko; Bierkens, Marc F. P.; de Jong, Steven M.; de Roo, Ad; Karssenberg, Derek

    2013-04-01

    Nowadays, large-scale hydrological models are mostly calibrated using observed discharge. Although this may lead to accurate hydrograph estimation, calibration on discharge is restricted to parameters that directly affect discharge. As a result, a large part of the hydrological system that is not directly linked to discharge, in particular the unsaturated zone, remains uncalibrated, or might be modified unrealistically. Soil moisture observations from satellites have the potential to fill this gap, as these provide a direct measurement of the state of the unsaturated zone, and thus are potentially useful in calibrating unsaturated zone model parameters. This is expected to result in a better identification of the complete hydrological system, potentially leading to improved forecasts of the hydrograph as well. Here we evaluate this added value of remotely sensed soil moisture in the calibration of large-scale hydrological models by addressing two research questions: 1) Does calibration on remotely sensed soil moisture lead to an improved identification of hydrological models compared to approaches that calibrate on discharge alone? 2) If this is the case, what is the improvement in the forecasted hydrograph? To answer these questions we use a dual state and parameter ensemble Kalman filter to calibrate the hydrological model LISFLOOD for the Upper-Danube area. Calibration is done with discharge and remotely sensed soil moisture from AMSR-E, SMOS and ASCAT. Error estimates and their spatial correlation are derived from a previously published study on the quantification of the errors and spatial error structure of microwave remote sensing techniques. Four scenarios are studied, namely: no calibration (expert knowledge), calibration on discharge, calibration on remote sensing data, and calibration on both discharge and remote sensing data. Using a split-sample approach, the model is calibrated for a period of 2 years and validated using a validation period of 10 years with the calibrated

  13. Impact of land-surface moisture variability on local shallow convective cumulus and precipitation in large-scale models

    NASA Technical Reports Server (NTRS)

    Chen, Fei; Avissar, Roni

    1994-01-01

    Numerical experiments using a state-of-the-art high-resolution mesoscale cloud model showed that land-surface moisture significantly affects the timing of the onset of clouds and the intensity and distribution of precipitation. In general, landscape discontinuity enhances shallow convective precipitation. Two mechanisms that are strongly modulated by land-surface moisture, namely random turbulent thermal cells and organized sea-breeze-like mesoscale circulations, also determine the horizontal distribution of maximum precipitation. However, interactions between shallow cumulus and land-surface moisture are highly nonlinear and complicated by different factors, such as atmospheric thermodynamic structure and the large-scale background wind. This analysis also showed that land-surface moisture discontinuities seem to play a more important role in a relatively dry atmosphere, and that the strongest precipitation is produced by a wavelength of land-surface forcing equivalent to the local Rossby radius of deformation. A general trend relating the maximum precipitation to the normalized maximum latent heat flux was identified. In general, large values of mesoscale latent heat flux imply strongly developed mesoscale circulations and intense cloud activity, accompanied by large surface latent heat fluxes that transport more water vapor into the atmosphere.
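
    For reference, the local Rossby radius of deformation invoked above is conventionally defined as follows (standard definition, stated here for the reader; not a formula quoted from this study):

    ```latex
    % First baroclinic Rossby radius of deformation, with buoyancy
    % frequency N, fluid depth H and Coriolis parameter f.
    \lambda_R = \frac{N H}{f}
    ```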

  14. Decoupling MSW settlement into mechanical and biochemical processes--modelling and validation on large-scale setups.

    PubMed

    Gourc, J-P; Staub, M J; Conte, M

    2010-01-01

    Forecasting settlements of non-hazardous waste is essential to ensure the integrity and durability of landfill covers over time. Over a short time span, the survey of settlements may also contribute to the investigation of biodegradation processes. This paper addresses secondary settlements of Municipal Solid Waste (MSW), a heterogeneous and time-evolving material. An analysis of available experimental data from different pilots and the literature was conducted to quantify the influence of biodegradation on MSW secondary settlements. After making assumptions about the various features of the waste and their constitutive relationships, a one-dimensional biomechanical model for predicting secondary settlement was developed. The total secondary settlement is obtained as the sum of two separate parts: the mechanical settlement, due to creep, and the biochemical settlement, due to the degradation of the organic matter. The latter is evaluated based on the observed biogas production. Using data from different recent large-scale experiments that provide monitoring of biogas production, a method for predicting the biochemically induced settlements is proposed and validated on these tests. The relative contributions of mechanical and biochemical settlements are also calculated and discussed as a function of waste pre-treatment and operating conditions (biological pre-treatment, shredding, leachate injection). Finally, settlement may be considered a relevant indicator of the state of biodegradation. PMID:20381332
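
    The decomposition described above can be sketched numerically: a log-time creep law for the mechanical part plus a biochemical part proportional to cumulative biogas production. All coefficients and the first-order gas curve below are hypothetical, not the paper's calibrated values:

    ```python
    import numpy as np

    def secondary_settlement(t_days, h0, c_alpha, t0, biogas_cum, beta):
        """h0: initial waste height (m); c_alpha: creep coefficient (-);
        t0: reference time (days); biogas_cum: cumulative biogas per unit
        dry mass (Nm3/t); beta: settlement per unit of biogas (m per Nm3/t)."""
        mech = h0 * c_alpha * np.log10(np.maximum(t_days, t0) / t0)  # creep
        bio = beta * biogas_cum                                       # degradation
        return mech + bio

    t = np.linspace(1, 1000, 200)
    gas = 80.0 * (1 - np.exp(-t / 300.0))     # first-order biogas curve (Nm3/t)
    s = secondary_settlement(t, h0=10.0, c_alpha=0.03, t0=1.0,
                             biogas_cum=gas, beta=0.01)
    ```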

  15. Anti-L. donovani activity in macrophage/amastigote model of palmarumycin CP18 and its large scale production.

    PubMed

    Ortega, Humberto E; Teixeira, Eliane de Morais; Rabello, Ana; Higginbotham, Sarah; Cubilla-Ríos, Luis

    2014-01-01

    Palmarumycin CP18, isolated from an extract of the fermentation broth and mycelium of the Panamanian endophytic fungus Edenia sp., was previously reported to have strong and specific activity against Leishmania donovani. Here we report that when the same strain was cultured on different solid media--Harrold Agar, Leonian Agar, Potato Dextrose Agar (PDA), Corn Meal Agar, Honey Peptone Agar, and eight vegetables (V8) Agar--in order to determine the optimal conditions for isolation of palmarumycin CP18, no signal for this compound was observed in any of the 1H NMR spectra of fractions obtained from these extracts. However, one extract, prepared from the fungal culture on PDA, contained significant amounts of CJ-12,372, a possible biosynthetic precursor of palmarumycin CP18. Edenia sp. was cultivated on a large scale on PDA and CJ-12,372 was converted to palmarumycin CP18 by oxidation of its p-hydroquinone moiety with DDQ in dioxane. Palmarumycin CP18 showed anti-leishmanial activity against L. donovani in a macrophage/amastigote model, with an IC50 value of 23.5 μM. PMID:24660473

  16. Afterslip and viscoelastic relaxation model inferred from the large scale postseismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-03-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades) significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5-year time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (February 27, 2010) over the whole South American continent. With the first two years of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a Low Viscosity Channel along the deepest part of the plate interface, and no additional Low Viscosity Wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench, and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as a function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10¹⁸ Pa s; and (ii) a Low Viscosity Channel along the plate interface extending from depths of 55 to 135 km with viscosities below 10¹⁸ Pa s.

  17. Afterslip and viscoelastic relaxation model inferred from the large-scale post-seismic deformation following the 2010 Mw 8.8 Maule earthquake (Chile)

    NASA Astrophysics Data System (ADS)

    Klein, E.; Fleitout, L.; Vigny, C.; Garaud, J. D.

    2016-06-01

    Megathrust earthquakes of magnitude close to 9 are followed by large-scale (thousands of km) and long-lasting (decades) significant crustal and mantle deformation. This deformation can be observed at the surface and quantified with GPS measurements. Here we report on deformation observed during the 5 yr time span after the 2010 Mw 8.8 Maule Megathrust Earthquake (2010 February 27) over the whole South American continent. With the first 2 yr of those data, we use finite element modelling (FEM) to relate this deformation to slip on the plate interface and relaxation in the mantle, using a realistic layered Earth model and Burgers rheologies. Slip alone on the interface, even up to large depths, is unable to provide a satisfactory fit simultaneously to horizontal and vertical displacements. The horizontal deformation pattern requires relaxation both in the asthenosphere and in a low-viscosity channel along the deepest part of the plate interface, and no additional low-viscosity wedge is required by the data. The vertical velocity pattern (intense and quick uplift over the Cordillera) is well fitted only when the channel extends deeper than 100 km. Additionally, viscoelastic relaxation alone cannot explain the characteristics and amplitude of displacements over the first 200 km from the trench, and aseismic slip on the fault plane is needed. This aseismic slip on the interface generates stresses, which induce additional relaxation in the mantle. In the final model, all three components (relaxation due to the coseismic slip, aseismic slip on the fault plane and relaxation due to aseismic slip) are taken into account. Our best-fit model uses slip at shallow depths on the subduction interface decreasing as a function of time and includes (i) an asthenosphere extending down to 200 km, with a steady-state Maxwell viscosity of 4.75 × 10¹⁸ Pa s; and (ii) a low-viscosity channel along the plate interface extending from depths of 55-135 km with viscosities below 10¹⁸ Pa s.

  18. The eHabitat R library: Large scale modelling of habitat uniqueness for the management and assessment of protected areas

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Martínez-López, Javier; Dubois, Gregoire

    2014-05-01

    There are over 100,000 protected areas in the world that need to be assessed systematically according to their ecological values in order to support decision-making and fund-allocation processes. Ecological modelling has become an important tool for conservation and biodiversity studies. Moreover, linking remote sensing with ecological modelling can help overcome some typical limitations of ecological studies related to conservation, such as the sampling-effort bias of biodiversity inventories. Habitats offer refuge for species and can be mapped at ecoregion scale by means of remote sensing. Large-scale ecological models are thus needed to make progress on important conservation challenges, and the adoption of an open-source community approach is crucial for their implementation. R is Free and Open Source Software (FOSS) which allows the analysis of large amounts of remote sensing data through multivariate statistics and GIS capabilities, offers interoperability with other models and tools, and can be implemented and used within a web processing service as well as in a local desktop environment. The eHabitat R library, one of the Web Processing Services (WPS) supporting DOPA, the Digital Observatory for Protected Areas (http://dopa.jrc.ec.europa.eu/), computes habitat similarities and proposes a habitat replaceability index (HRI) which can be used to characterize each protected area worldwide. More exactly, eHabitat computes for each protected area a map of the probability of finding areas presenting ecological characteristics similar to those found in the selected protected area. The library is available online for use and extension by the research and end-user communities. This paper presents the eHabitat library as an example of a successful development and application of FOSS tools for geoscientific tasks, in particular for delivering critical services related to the conservation of protected areas. Some methodological aspects, such
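
    The similarity idea can be sketched compactly: describe a protected area by the mean and covariance of its environmental variables, then score every pixel by Mahalanobis distance converted to a chi-square probability. This is a schematic of the general approach, not eHabitat's code (eHabitat is an R library; Python is used here for consistency with the other examples):

    ```python
    import numpy as np
    from scipy.stats import chi2

    def habitat_similarity(pa_pixels, all_pixels):
        """pa_pixels: (n_pa, p) environmental variables inside the
        protected area; all_pixels: (n, p) the same variables everywhere.
        Returns a per-pixel probability of similar habitat."""
        mu = pa_pixels.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(pa_pixels, rowvar=False))
        d = all_pixels - mu
        d2 = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # squared Mahalanobis distance
        return chi2.sf(d2, df=pa_pixels.shape[1])      # similarity as a probability
    ```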

  19. Sensitivity and foreground modelling for large-scale cosmic microwave background B-mode polarization satellite missions

    NASA Astrophysics Data System (ADS)

    Remazeilles, M.; Dickinson, C.; Eriksen, H. K. K.; Wehus, I. K.

    2016-05-01

    The measurement of the large-scale B-mode polarization in the cosmic microwave background (CMB) is a fundamental goal of future CMB experiments. However, because of their unprecedented sensitivity, future CMB experiments will be much more sensitive to any imperfect modelling of the Galactic foreground polarization in the reconstruction of