Science.gov

Sample records for realistic large-scale model

  1. Towards a large-scale biologically realistic model of the hippocampus.

    PubMed

    Hendrickson, Phillip J; Yu, Gene J; Robinson, Brian S; Song, Dong; Berger, Theodore W

    2012-01-01

    Real neurobiological systems in the mammalian brain have a complicated and detailed structure, being composed of 1) large numbers of neurons with intricate, branching morphologies--complex morphology brings with it complex passive membrane properties; 2) active membrane properties--nonlinear sodium, potassium, calcium, etc. conductances; 3) non-uniform distributions throughout the dendritic and somal membrane surface of these non-linear conductances; 4) non-uniform and topographic connectivity between pre- and post-synaptic neurons; and 5) activity-dependent changes in synaptic function. One of the essential, and as yet unanswered questions in neuroscience is the role of these fundamental structural and functional features in determining "neural processing" properties of a given brain system. To help answer that question, we're creating a large-scale biologically realistic model of the intrinsic pathway of the hippocampus, which consists of the projection from layer II entorhinal cortex (EC) to dentate gyrus (DG), EC to CA3, DG to CA3, and CA3 to CA1. We describe the computational hardware and software tools the model runs on, and demonstrate its viability as a modeling platform with an EC-to-DG model. PMID:23366951

  2. A novel CPU/GPU simulation environment for large-scale biologically realistic neural modeling

    PubMed Central

    Hoang, Roger V.; Tanna, Devyani; Jayet Bray, Laurence C.; Dascalu, Sergiu M.; Harris, Frederick C.

    2013-01-01

    Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across eight machines with each having two video cards. PMID:24106475
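    The two built-in neuron models named above, leaky integrate-and-fire (LIF) and Izhikevich (IZH), are standard textbook update rules. A minimal Python sketch of one forward-Euler step for each is shown below; the parameter values are generic defaults for illustration, not NCS6's, and this is not the simulator's code.

    ```python
    import numpy as np

    def lif_step(v, I, dt=1.0, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0, R=1.0):
        """One Euler step of a leaky integrate-and-fire neuron.
        v and I may be scalars or NumPy arrays of membrane potentials / input currents."""
        v = v + dt / tau * (-(v - v_rest) + R * I)
        spiked = v >= v_thresh
        v = np.where(spiked, v_reset, v)
        return v, spiked

    def izhikevich_step(v, u, I, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
        """One Euler step of the Izhikevich model (regular-spiking parameters)."""
        v = v + dt * (0.04 * v ** 2 + 5.0 * v + 140.0 - u + I)
        u = u + dt * a * (b * v - u)
        spiked = v >= 30.0
        v = np.where(spiked, c, v)
        u = np.where(spiked, u + d, u)
        return v, u, spiked
    ```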

  3. Modeling of the cross-beam energy transfer with realistic inertial-confinement-fusion beams in a large-scale hydrocode.

    PubMed

    Colaïtis, A; Duchateau, G; Ribeyre, X; Tikhonchuk, V

    2015-01-01

    A method for modeling realistic laser beams smoothed by kinoform phase plates is presented. The ray-based paraxial complex geometrical optics (PCGO) model with Gaussian thick rays allows one to create intensity variations, or pseudospeckles, that reproduce the beam envelope, contrast, and high-intensity statistics predicted by paraxial laser propagation codes. A steady-state cross-beam energy-transfer (CBET) model is implemented in a large-scale radiative hydrocode based on the PCGO model. It is used in conjunction with the realistic beam modeling technique to study the effects of CBET between coplanar laser beams on the target implosion. The pseudospeckle pattern imposed by PCGO produces modulations in the irradiation field and the shell implosion pressure. Cross-beam energy transfer between beams at 20° and 40° significantly degrades the irradiation symmetry by amplifying low-frequency modes and reducing the laser-capsule coupling efficiency, ultimately leading to large modulations of the shell areal density and lower convergence ratios. These results highlight the role of laser-plasma interaction and its influence on the implosion dynamics. PMID:25679718

  4. Photorealistic large-scale urban city model reconstruction.

    PubMed

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains time-consuming, manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which, unlike existing techniques, can recover missing or occluded texture information by integrating information captured from multiple optical sensors (ground, aerial, and satellite). PMID:19423889

  5. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision, and to allow for a connection to various types of observational data (geophysical, geodetical and geological), we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian finite-element model with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires high resolution, we spent considerable effort on implementing solvers suitable for models with over 100 million degrees of freedom. We implemented additive Schwarz-type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which gives a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom, convergence became too slow to solve the system within an acceptable amount of walltime (one minute), even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented algebraic multigrid (AMG) methods from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
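    The solver pattern described above, an incomplete-LU preconditioner feeding a GMRES Krylov iteration, can be illustrated in a few lines with SciPy. This is a toy stand-in with a synthetic sparse matrix and invented tolerances, not the authors' finite-element code.

    ```python
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Synthetic sparse system standing in for the FEM matrix (illustrative only).
    n = 50_000
    A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    # Incomplete-LU preconditioner; fill_factor controls the amount of fill-in,
    # which (as the abstract notes) trades memory for preconditioner quality.
    ilu = spla.spilu(A, drop_tol=1e-5, fill_factor=10)
    M = spla.LinearOperator((n, n), matvec=ilu.solve)

    x, info = spla.gmres(A, b, M=M, restart=50, maxiter=200)
    print("converged" if info == 0 else f"GMRES stopped early, info={info}")
    ```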

  6. Large-Scale Simulations of Realistic Fluidized Bed Reactors using Novel Numerical Methods

    NASA Astrophysics Data System (ADS)

    Capecelatro, Jesse; Desjardins, Olivier; Pepiot, Perrine; National Renewable Energy Lab Collaboration

    2011-11-01

    Turbulent particle-laden flows in the form of fluidized bed reactors display good mixing properties, low pressure drops, and a fairly uniform temperature distribution. Understanding and predicting the flow dynamics within the reactor is necessary for improving efficiency and providing technologies for large-scale industrialization. A numerical strategy based on an Eulerian representation of the gas phase and Lagrangian tracking of the particles is developed in the framework of NGA, a high-order fully conservative parallel code tailored for turbulent flows. The particles are accounted for using a point-particle assumption. Once the gas-phase quantities are mapped to the particle locations, a conservative, implicit diffusion operation smooths the field. Normal and tangential collisions are handled via a soft-sphere model, modified to allow the bed to reach close packing at rest. The pressure drop across the bed is compared with theory to accurately predict the minimum fluidization velocity. 3D simulations of the National Renewable Energy Lab's 4-inch reactor are then conducted. Tens of millions of particles are tracked. The reactor's geometry is modeled using an immersed boundary scheme. Statistics for volume fraction, velocities, bed expansion, and bubble characteristics are analyzed and compared with experimental data.

  7. Adaptive Texture Synthesis for Large Scale City Modeling

    NASA Astrophysics Data System (ADS)

    Despine, G.; Colleu, T.

    2015-02-01

    Large-scale city models textured with aerial images are well suited for bird's-eye navigation, but generally the image resolution does not allow pedestrian navigation. One solution to this problem is to use high-resolution terrestrial photos, but this requires a huge amount of manual work to remove occlusions. Another solution is to synthesize generic textures with a set of procedural rules and elementary patterns like bricks, roof tiles, doors and windows. This solution may give realistic textures but with no correlation to the ground truth. Instead of using pure procedural modelling, we present a method to extract information from aerial images and adapt the texture synthesis to each building. We describe a workflow allowing the user to drive the information extraction and to select the appropriate texture patterns. We also emphasize the importance of organizing knowledge about elementary patterns in a texture catalogue that allows attaching physical information and semantic attributes and executing selection requests. Roofs are processed according to the detected building material. Façades are first described in terms of principal colours, then opening positions are detected and some window features are computed. These features allow selection of the most appropriate patterns from the texture catalogue. We tested this workflow on two samples with 20 cm and 5 cm resolution images. The roof texture synthesis and opening detection were successfully conducted on hundreds of buildings. The window characterization is still sensitive to the distortions inherent in the projection of aerial images onto the façades.

  8. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
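    The contrast drawn above between movement driven by local information and movement organized along gradients corresponds to two different diffusion operators. A schematic LaTeX summary follows; the effective coefficient \bar{\mu} in the last line is generic shorthand for the homogenized large-scale result, not the paper's derivation.

    ```latex
    \begin{align}
      \text{Fickian diffusion:} \quad
        & \frac{\partial u}{\partial t} = \nabla \cdot \bigl[ \mu(\mathbf{x}) \, \nabla u \bigr], \\
      \text{Ecological diffusion:} \quad
        & \frac{\partial u}{\partial t} = \nabla^{2} \bigl[ \mu(\mathbf{x}) \, u \bigr], \\
      \text{Homogenized (large scale):} \quad
        & \frac{\partial \bar{u}}{\partial t} \approx \bar{\mu} \, \nabla^{2} \bar{u},
        \qquad \bar{\mu} \ \text{set by the small-scale variation of } \mu .
    \end{align}
    ```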

  9. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how the precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, results provide an assessment of possible applications for various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  10. Statistical Modeling of Large-Scale Scientific Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Baldwin, C; Abdulla, G; Critchlow, T

    2003-11-15

    With the advent of massively parallel computer systems, scientists are now able to simulate complex phenomena (e.g., explosions of stars). Such scientific simulations typically generate large-scale data sets over the spatio-temporal space. Unfortunately, the sheer sizes of the generated data sets make efficient exploration of them impossible. Constructing queriable statistical models is an essential step in helping scientists glean new insight from their computer simulations. We define queriable statistical models to be descriptive statistics that (1) summarize and describe the data within a user-defined modeling error, and (2) are able to answer complex range-based queries over the spatiotemporal dimensions. In this chapter, we describe systems that build queriable statistical models for large-scale scientific simulation data sets. In particular, we present our Ad-hoc Queries for Simulation (AQSim) infrastructure, which reduces the data storage requirements and query access times by (1) creating and storing queriable statistical models of the data at multiple resolutions, and (2) evaluating queries on these models of the data instead of the entire data set. Within AQSim, we focus on three simple but effective statistical modeling techniques. AQSim's first modeling technique (called univariate mean modeler) computes the "true" (unbiased) mean of systematic partitions of the data. AQSim's second statistical modeling technique (called univariate goodness-of-fit modeler) uses the Anderson-Darling goodness-of-fit method on systematic partitions of the data. Finally, AQSim's third statistical modeling technique (called multivariate clusterer) utilizes the cosine similarity measure to cluster the data into similar groups. Our experimental evaluations on several scientific simulation data sets illustrate the value of using these statistical models on large-scale simulation data sets.
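    As a rough illustration of the three modelers named above, the following Python sketch pairs a per-partition mean, an Anderson-Darling goodness-of-fit test, and a greedy cosine-similarity clusterer. These are generic stand-ins for AQSim's modelers, and the similarity threshold is invented.

    ```python
    import numpy as np
    from scipy.stats import anderson

    def mean_model(partition):
        """Univariate mean modeler: summarize a data partition by its mean."""
        return float(np.mean(partition))

    def goodness_of_fit_model(partition, dist="norm"):
        """Anderson-Darling test on a partition; returns statistic and critical values."""
        result = anderson(np.asarray(partition, dtype=float), dist=dist)
        return result.statistic, result.critical_values

    def cosine_cluster(vectors, threshold=0.95):
        """Greedy clustering by cosine similarity: join the first cluster whose
        representative is similar enough, otherwise start a new cluster."""
        reps, labels = [], []
        for v in vectors:
            v = np.asarray(v, dtype=float)
            for i, r in enumerate(reps):
                sim = v @ r / (np.linalg.norm(v) * np.linalg.norm(r))
                if sim >= threshold:
                    labels.append(i)
                    break
            else:
                reps.append(v)
                labels.append(len(reps) - 1)
        return labels
    ```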

  11. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. PMID:26595397

  12. Challenges of Modeling Flood Risk at Large Scales

    NASA Astrophysics Data System (ADS)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    algorithm propagates the flows for each simulated event. The model incorporates a digital terrain model (DTM) at 10m horizontal resolution, which is used to extract flood plain cross-sections such that a one-dimensional hydraulic model can be used to estimate extent and elevation of flooding. In doing so, the effect of flood defenses in mitigating floods is accounted for. Finally, a suite of vulnerability relationships has been developed to estimate flood losses for a portfolio of properties that are exposed to flood hazard. Historical experience indicates that for recent floods in Great Britain more than 50% of insurance claims occur outside the flood plain and these are primarily a result of excess surface flow, hillside flooding, and flooding due to inadequate drainage. A sub-component of the model addresses this issue by considering several parameters that best explain the variability of claims off the flood plain. The challenges of modeling such a complex phenomenon at a large scale largely dictate the choice of modeling approaches that need to be adopted for each of these model components. While detailed numerically-based physical models exist and have been used for conducting flood hazard studies, they are generally restricted to small geographic regions. In a probabilistic risk estimation framework like our current model, a blend of deterministic and statistical techniques has to be employed such that each model component is independent, physically sound and is able to maintain the statistical properties of observed historical data. This is particularly important because of the highly non-linear behavior of the flooding process. With respect to vulnerability modeling, both on and off the flood plain, the challenges include the appropriate scaling of a damage relationship when applied to a portfolio of properties. This arises from the fact that the estimated hazard parameter used for damage assessment, namely maximum flood depth, has considerable uncertainty. The

  13. Large-scale electromagnetic modeling for multiple inhomogeneous domains

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Endo, M.; Cuma, M.

    2008-12-01

    We develop a new formulation of the integral equation (IE) method for three-dimensional (3D) electromagnetic (EM) field computation in large-scale models with multiple inhomogeneous domains. This problem arises in many practical applications including modeling the EM fields within the complex geoelectrical structures in geophysical exploration. In geophysical applications, it is difficult to describe an earth structure using a horizontally layered background conductivity model, which is required for the efficient implementation of the conventional IE approach. As a result, a large domain of interest with anomalous conductivity distribution needs to be discretized, which complicates the computations. The new method allows us to consider multiple inhomogeneous domains, where the conductivity distribution is different from that of the background, and to use independent discretizations for different domains. This reduces dramatically the computational resources required for large-scale modeling. In addition, by using this method, we can analyze the response of each domain separately without an inappropriate use of the superposition principle for the EM field calculations. The method was carefully tested for modeling the marine controlled-source electromagnetic (MCSEM) fields for complex geoelectrical structures with multiple inhomogeneous domains, such as a seafloor with rough bathymetry, salt domes, and reservoirs. We have also used this technique to investigate the return induction effects from regional geoelectrical structures, e.g., seafloor bathymetry and salt domes, which can distort the EM response from the geophysical exploration target.

  14. Disinformative data in large-scale hydrological modelling

    NASA Astrophysics Data System (ADS)

    Kauffeldt, Anna; Halldin, Sven; Rodhe, Allan; Xu, Chong-Yu; Westerberg, Ida

    2013-04-01

    Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aims at identifying two types of data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. Firstly, four hydrographic datasets were examined in terms of how well basin areas were represented in the flow networks. It was found that most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between hydrographic datasets and archived basin areas. Secondly, the consistency between climate data (precipitation and potential evaporation) and discharge data was examined for the possibility of water-balance closure. It was found that basins exhibiting too high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and that the occurrence of basins exhibiting losses exceeding the energy limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. These results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. Applying data-screening methods before modelling increases our chances of drawing robust conclusions from subsequent model simulations.
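    A minimal Python sketch of this kind of pre-modelling screening, flagging basins whose runoff coefficients exceed one or whose apparent losses exceed the energy limit, could look as follows. The column names and the pandas layout are assumptions for illustration, not the paper's.

    ```python
    import pandas as pd

    def screen_basins(df: pd.DataFrame) -> pd.DataFrame:
        """Flag basins with potentially disinformative water-balance data.

        Expected columns (long-term basin means in the same units, e.g. mm/yr):
        'P' precipitation, 'Q' observed discharge, 'PET' potential evaporation.
        """
        out = df.copy()
        out["runoff_coeff"] = out["Q"] / out["P"]
        # Runoff coefficient above 1: more water leaves than falls,
        # e.g. where precipitation is underestimated by snow undercatch.
        out["excess_runoff"] = out["runoff_coeff"] > 1.0
        # Losses exceeding the energy limit: P - Q larger than what PET allows.
        out["exceeds_energy_limit"] = (out["P"] - out["Q"]) > out["PET"]
        return out
    ```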

  15. Performance modeling and analysis of consumer classes in large scale systems

    NASA Astrophysics Data System (ADS)

    Al-Shukri, Sh.; Lenin, R. B.; Ramaswamy, S.; Anand, A.; Narasimhan, V. L.; Abraham, J.; Varadan, Vijay

    2009-03-01

    Peer-to-Peer (P2P) networks have been used efficiently as building blocks for overlay networks in large-scale distributed network applications with Internet Protocol (IP) based bottom-layer networks. With large-scale Wireless Sensor Networks (WSNs) becoming increasingly realistic, it is important to build such overlay networks with WSNs in the bottom layer. A suitable mathematical (stochastic) model for the overlay network over WSNs is a queueing network with multi-class customers. In this paper, we discuss how these mathematical network models can be simulated using the object-oriented simulation package OMNeT++. We discuss the Graphical User Interface (GUI) that was developed to accept the input parameter files and execute the simulation. We compare the simulation results with analytical formulas available in the literature for these mathematical models.

  16. Modeling and Dynamic Simulation of a Large Scale Helium Refrigerator

    NASA Astrophysics Data System (ADS)

    Lv, C.; Qiu, T. N.; Wu, J. H.; Xie, X. J.; Li, Q.

    In order to simulate the transient behaviors of a newly developed 2 kW helium refrigerator, a numerical model of the critical equipment, including a screw compressor with variable-frequency drive, plate-fin heat exchangers, a turbine expander, and pneumatic valves, was developed. In the simulation, the calculation of the helium thermodynamic properties is based on the 32-parameter modified Benedict-Webb-Rubin (MBWR) equation of state. The start-up process of the warm compressor station with its gas management subsystem, and the cool-down process of the cold box in actual operation, were dynamically simulated. The developed model was verified by comparing the simulated results with experimental data. In addition, system responses to increasing heat load were simulated. This model can also be used to design and optimize other large-scale helium refrigerators.

  17. MODELING THE LARGE-SCALE BIAS OF NEUTRAL HYDROGEN

    SciTech Connect

    Marín, Felipe A.; Gnedin, Nickolay Y.; Seo, Hee-Jong; Vallinotto, Alberto

    2010-08-01

    We present new analytical estimates of the large-scale bias of neutral hydrogen (H I). We use a simple, non-parametric model which monotonically relates the total mass of a halo M_tot with its H I mass M_HI at zero redshift; for earlier times we assume limiting models for the Ω_HI evolution consistent with the data presently available, as well as two main scenarios for the evolution of our M_HI-M_tot relation. We find that both the linear and the first nonlinear bias terms exhibit a strong evolution with redshift, regardless of the specific limiting model assumed for the H I density over time. These analytical predictions are then shown to be consistent with measurements performed on the Millennium Simulation. Additionally, we show that this strong bias evolution does not sensibly affect the measurement of the H I power spectrum.

  18. Fourier method for large scale surface modeling and registration.

    PubMed

    Shen, Li; Kim, Sungeun; Saykin, Andrew J

    2009-06-01

    Spherical harmonic (SPHARM) description is a powerful Fourier shape modeling method for processing arbitrarily shaped but simply connected 3D objects. As a highly promising method, SPHARM has been widely used in several domains including medical imaging. However, its primary use has been focused on modeling small or moderately-sized surfaces that are relatively smooth, due to challenges related to its applicability, robustness and scalability. This paper presents an enhanced SPHARM framework that addresses these issues and shows that the use of SPHARM can expand into broader areas. In particular, we present a simple and efficient Fourier expansion method on the sphere that enables large scale modeling, and propose a new SPHARM registration method that aims to preserve the important homological properties between 3D models. Although SPHARM is a global descriptor, our experimental results show that the proposed SPHARM framework can accurately describe complicated graphics models and highly convoluted 3D surfaces, and the proposed registration method allows for effective alignment and registration of these 3D models for further processing or analysis. These methods greatly expand the potential of applying SPHARM to broader areas such as computer graphics, medical imaging, CAD/CAM, bioinformatics, and other related geometric modeling and processing fields. PMID:20161536

  19. Can global hydrological models reproduce large scale river flood regimes?

    NASA Astrophysics Data System (ADS)

    Eisner, Stephanie; Flörke, Martina

    2013-04-01

    River flooding remains one of the most severe natural hazards. On the one hand, major flood events pose a serious threat to human well-being, causing deaths and considerable economic damage. On the other hand, the periodic occurrence of flood pulses is crucial to maintain the functioning of riverine floodplains and wetlands, and to preserve the ecosystem services the latter provide. In many regions, river floods reveal a distinct seasonality, i.e. they occur at a particular time during the year. This seasonality is related to regionally dominant flood generating processes which can be expressed in river flood types. While in data-rich regions (esp. Europe and North America) the analysis of flood regimes can be based on observed river discharge time series, this data is sparse or lacking in many other regions of the world. This gap of knowledge can be filled by global modeling approaches. However, to date most global modeling studies have focused on mean annual or monthly water availability and their change over time while simulating discharge extremes, both floods and droughts, still remains a challenge for large scale hydrological models. This study will explore the ability of the global hydrological model WaterGAP3 to simulate the large scale patterns of river flood regimes, represented by seasonal pattern and the dominant flood type. WaterGAP3 simulates the global terrestrial water balance on a 5 arc minute spatial grid (excluding Greenland and Antarctica) at a daily time step. The model accounts for human interference on river flow, i.e. water abstraction for various purposes, e.g. irrigation, and flow regulation by large dams and reservoirs. Our analysis will provide insight in the general ability of global hydrological models to reproduce river flood regimes and thus will promote the creation of a global map of river flood regimes to provide a spatially inclusive and comprehensive picture. Understanding present-day flood regimes can support both flood risk

  20. A first large-scale flood inundation forecasting model

    NASA Astrophysics Data System (ADS)

    Schumann, G. J.-P.; Neal, J. C.; Voisin, N.; Andreadis, K. M.; Pappenberger, F.; Phanthuwongpakdee, N.; Hall, A. C.; Bates, P. D.

    2013-10-01

    At present, continental- to global-scale flood forecasting predicts discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet, inundation variables are of interest and all flood impacts are inherently local in nature. This paper proposes a large-scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas. The model was built for the Lower Zambezi River to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. ECMWF ensemble forecast (ENS) data were used to force the VIC (Variable Infiltration Capacity) hydrologic model, which simulated and routed daily flows to the input boundary locations of a 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of channels that play a key role in flood wave propagation. We therefore employed a novel subgrid channel scheme to describe the river network in detail while representing the floodplain at an appropriate scale. The modeling system was calibrated using channel water levels from satellite laser altimetry and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of between one and two model resolutions compared to an observed flood edge, and inundation area agreement was on average 86%. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.

  21. Importance-truncated large-scale shell model

    NASA Astrophysics Data System (ADS)

    Stumpf, Christina; Braun, Jonas; Roth, Robert

    2016-02-01

    We propose an importance-truncation scheme for the large-scale nuclear shell model that extends its range of applicability to larger valence spaces and midshell nuclei. It is based on a perturbative measure for the importance of individual basis states that acts as an additional truncation for the many-body model space in which the eigenvalue problem of the Hamiltonian is solved numerically. Through a posteriori extrapolations of all observables to vanishing importance threshold, the full shell-model results can be recovered. In addition to simple threshold extrapolations, we explore extrapolations based on the energy variance. We apply the importance-truncated shell model for the study of 56Ni in the pf valence space and of 60Zn and 64Ge in the pfg9/2 space. We demonstrate the efficiency and accuracy of the approach, which pave the way for future applications of valence-space interactions derived in ab initio approaches in larger valence spaces.

  22. Large-scale Modeling of Inundation in the Amazon Basin

    NASA Astrophysics Data System (ADS)

    Luo, X.; Li, H. Y.; Getirana, A.; Leung, L. R.; Tesfa, T. K.

    2015-12-01

    Flood events have impacts on the exchange of energy, water and trace gases between land and atmosphere, hence potentially affecting the climate. The Amazon River basin is the world's largest river basin. Seasonal floods occur in the Amazon Basin each year. Because the basin is characterized by flat gradients, backwater effects are evident in the river dynamics. This factor, together with large uncertainties in river hydraulic geometry, surface topography and other datasets, contributes to difficulties in simulating flooding processes over this basin. We have developed a large-scale inundation scheme in the framework of the Model for Scale Adaptive River Transport (MOSART) river routing model. Both the kinematic wave and the diffusion wave routing methods are implemented in the model. A new process-based algorithm is designed to represent river channel-floodplain interactions. Uncertainties in the input datasets are partly addressed through model calibration. We will present the comparison of simulated results against satellite and in situ observations and analysis to understand factors that influence inundation processes in the Amazon Basin.
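    For reference, the kinematic-wave and diffusion-wave routing options mentioned above are standard simplifications of the Saint-Venant equations; the schematic forms below are the textbook versions, not MOSART's exact discretization. The kinematic wave drops the water-surface slope and therefore cannot represent backwater, whereas the diffusion wave retains it, which is why it matters for a flat-gradient basin like the Amazon.

    ```latex
    \begin{align}
      \text{continuity:} \quad
        & \frac{\partial A}{\partial t} + \frac{\partial Q}{\partial x} = q_{\mathrm{lat}}, \\
      \text{kinematic wave:} \quad
        & S_f = S_0 \qquad \text{(friction slope balances bed slope; no backwater),} \\
      \text{diffusion wave:} \quad
        & S_f = S_0 - \frac{\partial h}{\partial x} \qquad \text{(water-surface slope retained; backwater propagates).}
    \end{align}
    ```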

  23. Detailed investigation of flowfields within large scale hypersonic inlet models

    NASA Technical Reports Server (NTRS)

    Seebaugh, W. R.; Doran, R. W.; Decarlo, J. P.

    1971-01-01

    Analytical and experimental investigations were conducted to determine the characteristics of the internal flows in model passages representative of hypersonic inlets and also sufficiently large for meaningful data to be obtained. Three large-scale inlet models, each having a different compression ratio, were designed to provide high performance and approximately uniform static-pressure distributions at the throat stations. A wedge forebody was used to simulate the flowfield conditions at the entrance of the internal passages, thus removing the actual vehicle forebody from consideration in the design of the wind-tunnel models. Tests were conducted in a 3.5-foot hypersonic wind tunnel at a nominal test Mach number of 7.4 and freestream unit Reynolds number of 2,700,000 per foot. From flowfield survey data at the inlet entrance, the entering inviscid and viscous flow conditions were determined prior to the analysis of the data obtained in the internal passages. Detailed flowfield survey data were obtained near the centerlines of the internal passages to define the boundary-layer development on the internal surfaces and the internal shock-wave configuration. Finally, flowfield data were measured across the throats of the inlet models to evaluate the performance of the internal passages. These data and additional results from surface instrumentation and flow visualization studies were utilized to determine the internal flowfield patterns and the inlet performance.

  24. Design of a Tree-Queue Model for a Large-Scale System

    NASA Astrophysics Data System (ADS)

    Park, Byungsung; Yoo, Jaeyeong; Kim, Hagbae

    In a large queuing system, the ratio of filled data in the queue and the waiting time from the head of the queue to the service gate are important factors for process efficiency because they are too large to ignore. However, many research works have assumed these factors to be negligible, following standard queuing theory. Thus, the existing queuing models are not applicable to the design of large-scale systems. Such a system could be used as a product classification center for a home delivery service. In this paper, we propose a tree-queue model for large-scale systems that is more adaptive to efficient processes compared to existing models. We analyze and design a mean waiting time equation related to the ratio of the filled data in the queue. Based on simulations, the proposed model demonstrated improvement in process efficiency, and it is more suitable for realistic system modeling than the other models compared for large-scale systems.

  25. Numerically modelling the large scale coronal magnetic field

    NASA Astrophysics Data System (ADS)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent, owing to the large number of free parameters that are typically used in such codes. This brings us to the magnetofrictional model, which is simpler and computationally more economical. We have developed a magnetofrictional model in 3D spherical polar co-ordinates to study the large scale global coronal field. Here we present studies of changing connectivities between active regions, in response to photospheric motions.

  26. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and in answering a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
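    A compact Python sketch of the general idea, compressing a field by thresholding wavelet coefficients and answering an approximate range query from the compressed representation, is given below. It uses PyWavelets with invented thresholds and is not the AQSIM implementation.

    ```python
    import numpy as np
    import pywt

    def compress(field, wavelet="db2", level=3, keep=0.05):
        """Keep only the largest `keep` fraction of wavelet coefficients of a 1-D field."""
        coeffs = pywt.wavedec(np.asarray(field, dtype=float), wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.quantile(np.abs(arr), 1.0 - keep)
        arr[np.abs(arr) < thresh] = 0.0
        return arr, slices, wavelet

    def approximate_range_mean(arr, slices, wavelet, lo, hi):
        """Answer an approximate range query (mean over [lo, hi)) from the compressed model."""
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec")
        recon = pywt.waverec(coeffs, wavelet)
        return float(recon[lo:hi].mean())

    # Example usage:
    # data = np.sin(np.linspace(0, 20, 4096)) + 0.1 * np.random.randn(4096)
    # model = compress(data)
    # print(approximate_range_mean(*model, 100, 400))
    ```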

  27. Towards a self-consistent halo model for the nonlinear large-scale structure

    NASA Astrophysics Data System (ADS)

    Schmidt, Fabian

    2016-03-01

    The halo model is a theoretically and empirically well-motivated framework for predicting the statistics of the nonlinear matter distribution in the Universe. However, current incarnations of the halo model suffer from two major deficiencies: (i) they do not enforce the stress-energy conservation of matter; (ii) they are not guaranteed to recover exact perturbation theory results on large scales. Here, we provide a formulation of the halo model (EHM) that remedies both drawbacks in a consistent way, while attempting to maintain the predictivity of the approach. In the formulation presented here, mass and momentum conservation are guaranteed on large scales, and results of the perturbation theory and the effective field theory can, in principle, be matched to any desired order on large scales. We find that a key ingredient in the halo model power spectrum is the halo stochasticity covariance, which has been studied to a much lesser extent than other ingredients such as mass function, bias, and profiles of halos. As written here, this approach still does not describe the transition regime between perturbation theory and halo scales realistically, which is left as an open problem. We also show explicitly that, when implemented consistently, halo model predictions do not depend on any properties of low-mass halos that are smaller than the scales of interest.
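    For context, the standard halo-model decomposition that such a formulation starts from and amends is shown below, with n(M) the halo mass function, b_1(M) the linear bias, and u(k|M) the normalized Fourier-space halo profile; the paper's corrections (e.g. the halo stochasticity covariance) are not included in this sketch.

    ```latex
    \begin{align}
      P(k) &= P_{1h}(k) + P_{2h}(k), \\
      P_{1h}(k) &= \int \mathrm{d}M \; n(M) \left( \frac{M}{\bar{\rho}_m} \right)^{2} \left| u(k|M) \right|^{2}, \\
      P_{2h}(k) &= \left[ \int \mathrm{d}M \; n(M) \, \frac{M}{\bar{\rho}_m} \, b_{1}(M) \, u(k|M) \right]^{2} P_{\mathrm{lin}}(k).
    \end{align}
    ```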

  28. A first large-scale flood inundation forecasting model

    SciTech Connect

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) compared to an observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  29. Large scale modelling of bankfull flow: An example for Europe

    NASA Astrophysics Data System (ADS)

    Schneider, Christof; Flörke, Martina; Eisner, Stephanie; Voss, Frank

    2011-10-01

    Bankfull flow is a relevant parameter in the field of large-scale modelling, especially for the analysis of environmental flows and flood-related hydrological processes. In our case, bankfull flow data were required within the SCENES project in order to analyse ecologically important inundation events at selected grid cells of a European raster. In practice, the determination of bankfull flow is a complex task even at the local scale. Following a literature survey of bankfull flow studies, this paper describes a method which can be applied to estimate bankfull flow on a global or continental grid cell raster. The method is based on the partial duration series approach, taking into account a 40-year time series of daily discharge data modelled by the global water model WaterGAP. An increasing-threshold censoring procedure, a declustering scheme and the generalised Pareto distribution are applied. Modelled bankfull flow values are then validated by different efficiency criteria against bankfull flows observed at gauging stations in Europe. Thereby, the impacts of (i) the applied distribution function, (ii) the threshold setting in the partial duration series, (iii) the climate input data and (iv) using the annual maxima series are evaluated and compared to the proposed approach. The results show that bankfull flow can be reasonably estimated with a high model efficiency (E1 = 0.71) and weighted correlation (ωr² = 0.90), with a systematic overestimation of 22.8%. Finally, it turned out that in our study, which focuses on hydrological extremes, the use of daily climate input data is a basic requirement. While the choice of the distribution function had no significant impact on the final results, the threshold setting in the partial duration series was crucial.
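    A minimal Python sketch of the partial-duration-series workflow described here (thresholding, declustering, fitting a generalised Pareto distribution, and reading off a return level) is shown below. The threshold quantile, declustering window and 1.5-year return period are illustrative assumptions, not the paper's calibrated settings.

    ```python
    import numpy as np
    from scipy.stats import genpareto

    def bankfull_from_pds(daily_q, threshold_quantile=0.95, min_separation=7, return_period_yr=1.5):
        """Estimate a bankfull-like return level from a daily discharge series."""
        q = np.asarray(daily_q, dtype=float)
        u = np.quantile(q, threshold_quantile)

        # Decluster: keep only the peak of each exceedance cluster, clusters being
        # separated by at least `min_separation` days.
        idx = np.where(q > u)[0]
        peaks, cluster = [], [idx[0]]
        for i in idx[1:]:
            if i - cluster[-1] <= min_separation:
                cluster.append(i)
            else:
                peaks.append(max(cluster, key=lambda j: q[j]))
                cluster = [i]
        peaks.append(max(cluster, key=lambda j: q[j]))
        excesses = q[peaks] - u

        # Fit a generalised Pareto distribution to the excesses (location fixed at 0).
        shape, _, scale = genpareto.fit(excesses, floc=0.0)

        # Return level: lam is the mean number of exceedance events per year.
        lam = len(excesses) / (len(q) / 365.25)
        prob = 1.0 - 1.0 / (lam * return_period_yr)
        return u + genpareto.ppf(prob, shape, loc=0.0, scale=scale)
    ```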

  30. Symmetry-guided large-scale shell-model theory

    NASA Astrophysics Data System (ADS)

    Launey, Kristina D.; Dytrych, Tomas; Draayer, Jerry P.

    2016-07-01

    In this review, we present a symmetry-guided strategy that utilizes exact as well as partial symmetries for enabling a deeper understanding of and advancing ab initio studies for determining the microscopic structure of atomic nuclei. These symmetries expose physically relevant degrees of freedom that, for large-scale calculations with QCD-inspired interactions, allow the model space size to be reduced through a very structured selection of the basis states to physically relevant subspaces. This can guide explorations of simple patterns in nuclei and how they emerge from first principles, as well as extensions of the theory beyond current limitations toward heavier nuclei and larger model spaces. This is illustrated for the ab initio symmetry-adapted no-core shell model (SA-NCSM) and two significant underlying symmetries, the symplectic Sp(3,R) group and its deformation-related SU(3) subgroup. We review the broad scope of nuclei, where these symmetries have been found to play a key role: from the light p-shell systems, such as 6Li, 8B, 8Be, 12C, and 16O, and sd-shell nuclei exemplified by 20Ne, based on first-principle explorations; through the Hoyle state in 12C and enhanced collectivity in intermediate-mass nuclei, within a no-core shell-model perspective; up to strongly deformed species of the rare-earth and actinide regions, as investigated in earlier studies. A complementary picture, driven by symmetries dual to Sp(3,R), is also discussed. We briefly review symmetry-guided techniques that prove useful in various nuclear-theory models, such as Elliott model, ab initio SA-NCSM, symplectic model, pseudo-SU(3) and pseudo-symplectic models, ab initio hyperspherical harmonics method, ab initio lattice effective field theory, exact pairing-plus-shell model approaches, and cluster models, including the resonating-group method. Important implications of these approaches that have deepened our understanding of emergent phenomena in nuclei, such as enhanced

  31. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data-sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets, we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers; a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end user requirements from the discovery process. Our work contrasts existing research which applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. The difference in these procedures is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest and we will end with some observations and conclusions about this research.

  32. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    NASA Astrophysics Data System (ADS)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using

  33. Graph theoretic modeling of large-scale semantic networks.

    PubMed

    Bales, Michael E; Johnson, Stephen B

    2006-08-01

    During the past several years, social network analysis methods have been used to model many complex real-world phenomena, including social networks, transportation networks, and the Internet. Graph theoretic methods, based on an elegant representation of entities and relationships, have been used in computational biology to study biological networks; however they have not yet been adopted widely by the greater informatics community. The graphs produced are generally large, sparse, and complex, and share common global topological properties. In this review of research (1998-2005) on large-scale semantic networks, we used a tailored search strategy to identify articles involving both a graph theoretic perspective and semantic information. Thirty-one relevant articles were retrieved. The majority (28, 90.3%) involved an investigation of a real-world network. These included corpora, thesauri, dictionaries, large computer programs, biological neuronal networks, word association networks, and files on the Internet. Twenty-two of the 28 (78.6%) involved a graph comprised of words or phrases. Fifteen of the 28 (53.6%) mentioned evidence of small-world characteristics in the network investigated. Eleven (39.3%) reported a scale-free topology, which tends to have a similar appearance when examined at varying scales. The results of this review indicate that networks generated from natural language have topological properties common to other natural phenomena. It has not yet been determined whether artificial human-curated terminology systems in biomedicine share these properties. Large network analysis methods have potential application in a variety of areas of informatics, such as in development of controlled vocabularies and for characterizing a given domain. PMID:16442849

  34. Double-step truncation procedure for large-scale shell-model calculations

    NASA Astrophysics Data System (ADS)

    Coraggio, L.; Gargano, A.; Itaco, N.

    2016-06-01

    We present a procedure that is helpful to reduce the computational complexity of large-scale shell-model calculations, by preserving as much as possible the role of the rejected degrees of freedom in an effective approach. Our truncation is driven first by the analysis of the effective single-particle energies of the original large-scale shell-model Hamiltonian, in order to locate the relevant degrees of freedom to describe a class of isotopes or isotones, namely the single-particle orbitals that will constitute a new truncated model space. The second step is to perform a unitary transformation of the original Hamiltonian from its model space into the truncated one. This transformation generates a new shell-model Hamiltonian, defined in a smaller model space, that retains effectively the role of the excluded single-particle orbitals. As an application of this procedure, we have chosen a realistic shell-model Hamiltonian defined in a large model space, set up by seven proton and five neutron single-particle orbitals outside 88Sr. We study the dependence of shell-model results upon different truncations of the original model space for the Zr, Mo, Ru, Pd, Cd, and Sn isotopic chains, showing the reliability of this truncation procedure.

  15. Modeling emergent large-scale structures of barchan dune fields

    NASA Astrophysics Data System (ADS)

    Worman, S. L.; Murray, A. B.; Littlewood, R.; Andreotti, B.; Claudin, P.

    2013-10-01

    In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work and from field observations: (1) Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; (2) when dunes become sufficiently large, small dunes are born on their downwind sides (`calving'); and (3) when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first-order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.
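
    The three interaction rules can be mimicked with a minimal discrete simulation; all rates, thresholds, and the migration law in the sketch below are invented placeholders rather than the calibrated values of the published model.

```python
# Minimal sketch of the three dune-interaction rules described above:
# (1) sand exchange via leaked/captured flux, (2) calving of small dunes
# from large ones, (3) merging on collision. All thresholds and rates
# are invented placeholders, not the authors' calibrated values.
import random

random.seed(0)
dunes = [{"x": random.uniform(0, 100), "volume": random.uniform(5, 20)}
         for _ in range(30)]

LEAK_RATE, CALVE_VOLUME, MERGE_DISTANCE, INFLUX = 0.02, 25.0, 0.5, 1.0

def step(dunes):
    dunes.sort(key=lambda d: d["x"])
    # Rule 1: each dune leaks sand downwind; the next dune captures it.
    for i, d in enumerate(dunes):
        leaked = LEAK_RATE * d["volume"]
        d["volume"] -= leaked
        if i + 1 < len(dunes):
            dunes[i + 1]["volume"] += leaked
    # Upwind boundary flux feeds the first dune.
    dunes[0]["volume"] += INFLUX
    # Rule 2: sufficiently large dunes calve a small dune downwind.
    for d in list(dunes):
        if d["volume"] > CALVE_VOLUME:
            d["volume"] -= 5.0
            dunes.append({"x": d["x"] + 1.0, "volume": 5.0})
    # Rule 3: dunes that come close enough merge.
    dunes.sort(key=lambda d: d["x"])
    merged, i = [], 0
    while i < len(dunes):
        d = dunes[i]
        if i + 1 < len(dunes) and dunes[i + 1]["x"] - d["x"] < MERGE_DISTANCE:
            d = {"x": d["x"], "volume": d["volume"] + dunes[i + 1]["volume"]}
            i += 1
        merged.append(d)
        i += 1
    # Smaller dunes migrate faster (inverse dependence on size).
    for d in merged:
        d["x"] += 1.0 / max(d["volume"], 1.0)
    return merged

for _ in range(200):
    dunes = step(dunes)
print(f"{len(dunes)} dunes, total volume {sum(d['volume'] for d in dunes):.1f}")
```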

  16. Modeling emergent large-scale structures of barchan dune fields

    NASA Astrophysics Data System (ADS)

    Worman, S. L.; Murray, A.; Littlewood, R. C.; Andreotti, B.; Claudin, P.

    2013-12-01

    In nature, barchan dunes typically exist as members of larger fields that display striking, enigmatic structures that cannot be readily explained by examining the dynamics at the scale of single dunes, or by appealing to patterns in external forcing. To explore the possibility that observed structures emerge spontaneously as a collective result of many dunes interacting with each other, we built a numerical model that treats barchans as discrete entities that interact with one another according to simplified rules derived from theoretical and numerical work, and from field observations: Dunes exchange sand through the fluxes that leak from the downwind side of each dune and are captured on their upstream sides; when dunes become sufficiently large, small dunes are born on their downwind sides ('calving'); and when dunes collide directly enough, they merge. Results show that these relatively simple interactions provide potential explanations for a range of field-scale phenomena including isolated patches of dunes and heterogeneous arrangements of similarly sized dunes in denser fields. The results also suggest that (1) dune field characteristics depend on the sand flux fed into the upwind boundary, although (2) moving downwind, the system approaches a common attracting state in which the memory of the upwind conditions vanishes. This work supports the hypothesis that calving exerts a first order control on field-scale phenomena; it prevents individual dunes from growing without bound, as single-dune analyses suggest, and allows the formation of roughly realistic, persistent dune field patterns.

  17. Modeling parametric scattering instabilities in large-scale expanding plasmas

    NASA Astrophysics Data System (ADS)

    Masson-Laborde, P. E.; Hüller, S.; Pesme, D.; Casanova, M.; Loiseau, P.; Labaune, Ch.

    2006-06-01

We present results from two-dimensional simulations of long scale-length laser-plasma interaction experiments performed at LULI. With the goal of predictive modeling of such experiments with our code Harmony2D, we take into account realistic plasma density and velocity profiles, the propagation of the laser light beam and the scattered light, as well as the coupling with the ion acoustic waves in order to describe Stimulated Brillouin Scattering (SBS). Laser pulse shaping is taken into account to follow the evolution of the SBS reflectivity as closely as possible to the experiment. The light reflectivity is analyzed by distinguishing the backscattered light confined in the solid angle defined by the aperture of the incident light beam and the scattered light outside this cone. As in the experiment, it is observed that the aperture of the scattered light tends to increase with the mean intensity of the RPP-smoothed laser beam. A further common feature between simulations and experiments is the observed localization of the SBS-driven ion acoustic waves (IAW) in the front part of the target (with respect to the incoming laser beam).

  18. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    NASA Astrophysics Data System (ADS)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment

  19. An empirical model relating U.S. monthly hail occurrence to large-scale meteorological environment

    NASA Astrophysics Data System (ADS)

    Allen, John T.; Tippett, Michael K.; Sobel, Adam H.

    2015-03-01

An empirical model relating monthly hail occurrence to the large-scale environment has been developed and tested for the United States (U.S.). Monthly hail occurrence for each 1°×1° grid box is defined as the number of hail events that occur there during a month; a hail event consists of a 3 h period with at least one report of hail larger than 1 in. The model is derived using climatological annual cycle data only. Environmental variables are taken from the North American Regional Reanalysis (NARR; 1979-2012). The model includes four environmental variables: convective precipitation, convective available potential energy, storm relative helicity, and mean surface to 90 hPa specific humidity. The model differs in its choice of variables and their relative weighting from existing severe weather indices. The model realistically matches the annual cycle of hail occurrence both regionally and for the contiguous U.S. (CONUS). The modeled spatial distribution is also consistent with the observed hail climatology. However, the westward shift of maximum hail frequency during the summer months is delayed in the model relative to observations, and the model has a lower frequency of hail just east of the Rocky Mountains compared to observations. Year-to-year variability provides an independent test of the model. On monthly and annual time scales, the model reproduces observed hail frequencies. Overall model trends are small compared to observed changes, suggesting that further analysis is necessary to differentiate between physical and nonphysical trends. The empirical hail model provides a new tool for exploration of connections between large-scale climate and severe weather.
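
    The structure of such an empirical relationship (expected monthly hail events as a function of a few environmental variables) can be sketched as follows; the weights, the normalization of the inputs, and the exponential-linear form are illustrative assumptions, not the coefficients of the published model.

```python
# Sketch of the model's structure: expected monthly hail events in a grid
# box as an exponential-linear function of four environmental variables.
# Weights and synthetic monthly values are invented placeholders.
import math

# Invented weights for (convective precip, CAPE, storm relative helicity,
# mean surface-to-90-hPa specific humidity), applied to normalized inputs.
WEIGHTS = {"cprec": 0.8, "cape": 0.6, "srh": 0.4, "q": 0.5}
INTERCEPT = -1.5

def expected_hail_events(cprec, cape, srh, q):
    """Expected number of 3 h hail events in a 1x1 degree box for a month."""
    z = (INTERCEPT + WEIGHTS["cprec"] * cprec + WEIGHTS["cape"] * cape
         + WEIGHTS["srh"] * srh + WEIGHTS["q"] * q)
    return math.exp(z)

# Synthetic normalized monthly environments (spring vs. winter month).
print("May    :", round(expected_hail_events(1.2, 1.5, 1.0, 1.1), 2))
print("January:", round(expected_hail_events(0.2, 0.1, 0.6, 0.3), 2))
```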

  20. Advancing Software Architecture Modeling for Large Scale Heterogeneous Systems

    SciTech Connect

    Gorton, Ian; Liu, Yan

    2010-11-07

    In this paper we describe how incorporating technology-specific modeling at the architecture level can help reduce risks and produce better designs for large, heterogeneous software applications. We draw an analogy with established modeling approaches in scientific domains, using groundwater modeling as an example, to help illustrate gaps in current software architecture modeling approaches. We then describe the advances in modeling, analysis and tooling that are required to bring sophisticated modeling and development methods within reach of software architects.

  1. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1975-01-01

    The feasibility of extended and long-range weather prediction by means of global atmospheric models was studied. A number of computer experiments were conducted at GISS with the GISS global general circulation model. Topics discussed include atmospheric response to sea-surface temperature anomalies, and monthly mean forecast experiments with the global model.

  2. A robust and quick method to validate large scale flood inundation modelling with SAR remote sensing

    NASA Astrophysics Data System (ADS)

    Schumann, G. J.; Neal, J. C.; Bates, P. D.

    2011-12-01

With flood frequency likely to increase as a result of altered precipitation patterns triggered by climate change, there is a growing demand for more data and, at the same time, improved flood inundation modeling. The aim is to develop more reliable flood forecasting systems over large scales that account for errors and inconsistencies in observations, modeling, and output. Over the last few decades, there have been major advances in the fields of remote sensing, particularly microwave remote sensing, and flood inundation modeling. At the same time both research communities are attempting to roll out their products on a continental to global scale. In a first attempt to harmonize both research efforts on a very large scale, a two-dimensional flood model has been built for the Niger Inland Delta basin in northwest Africa on a 700 km reach of the Niger River, an area similar to the size of the UK. This scale demands a different approach to traditional 2D model structuring and we have implemented a simplified version of the shallow water equations as developed in [1] and complemented this formulation with a sub-grid structure for simulating flows in a channel much smaller than the actual grid resolution of the model. This combined formulation makes it possible to model flood flows in two dimensions at efficient computational speeds without losing channel resolution when moving to coarse model grids. Using gaged daily flows, the model was applied to simulate the wetting and drying of the Inland Delta floodplain for 7 years from 2002 to 2008, taking less than 30 minutes to simulate 365 days at 1 km resolution. In these rather data-poor regions of the world and at this type of scale, verification of flood modeling is realistically only feasible with wide swath or global mode remotely sensed imagery. Validation of the Niger model was carried out using sequential global mode SAR images over the period 2006/7. This scale not only requires different types of models and

  3. Investigation of models for large scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1982-01-01

Long-range numerical prediction and climate simulation experiments with various global atmospheric general circulation models are reported. A chronological listing of the titles of all publications and technical reports already distributed is presented together with an account of the most recent research. Several reports on a series of perpetual January climate simulations with the GISS coarse mesh climate model are listed. A set of perpetual July climate simulations with the same model is presented and the results are described.

  4. Coordinated reset stimulation in a large-scale model of the STN-GPe circuit

    PubMed Central

    Ebert, Martin; Hauptmann, Christian; Tass, Peter A.

    2014-01-01

Synchronization of populations of neurons is a hallmark of several brain diseases. Coordinated reset (CR) stimulation is a model-based stimulation technique which specifically counteracts abnormal synchrony by desynchronization. Electrical CR stimulation, e.g., for the treatment of Parkinson's disease (PD), is administered via depth electrodes. In order to get a deeper understanding of this technique, we extended the top-down approach of previous studies and constructed a large-scale computational model of the respective brain areas. Furthermore, we took into account the spatial anatomical properties of the simulated brain structures and incorporated a detailed numerical representation of 2 · 10^4 simulated neurons. We simulated the subthalamic nucleus (STN) and the globus pallidus externus (GPe). Connections within the STN were governed by spike-timing dependent plasticity (STDP). In this way, we modeled the physiological and pathological activity of the considered brain structures. In particular, we investigated how plasticity could be exploited and how the model could be shifted from strongly synchronized (pathological) activity to strongly desynchronized (healthy) activity of the neuronal populations via CR stimulation of the STN neurons. Furthermore, we investigated the impact of specific stimulation parameters, especially the electrode position, on the stimulation outcome. Our model provides a step forward toward a biophysically realistic model of the brain areas relevant to the emergence of pathological neuronal activity in PD. Furthermore, our model constitutes a test bench for the optimization of both stimulation parameters and novel electrode geometries for efficient CR stimulation. PMID:25505882
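
    The coordinated reset pattern itself, brief stimuli delivered sequentially to different sub-populations so that they are phase-reset at staggered times, can be sketched schematically; the number of stimulation sites, the cycle period, and the ON/OFF pattern below are illustrative assumptions.

```python
# Schematic sketch of a coordinated reset (CR) stimulation schedule:
# brief bursts are delivered sequentially to different stimulation sites
# so that sub-populations are phase-reset at staggered times. The number
# of sites, cycle period, and ON/OFF pattern are illustrative assumptions.
N_SITES = 4            # stimulation contacts / sub-populations
CR_PERIOD_MS = 60.0    # one CR cycle (roughly the inverse of the rhythm)
PULSE_MS = 5.0         # duration of the burst delivered to one site
ON_CYCLES, OFF_CYCLES = 3, 2   # e.g. 3 cycles ON, 2 cycles OFF

def cr_schedule(n_cycles):
    """Return (start_ms, site_index) for every burst in the schedule."""
    events = []
    for cycle in range(n_cycles):
        if (cycle % (ON_CYCLES + OFF_CYCLES)) >= ON_CYCLES:
            continue                      # OFF part of the ON/OFF pattern
        cycle_start = cycle * CR_PERIOD_MS
        for k in range(N_SITES):
            # Sites are stimulated one after another within the cycle,
            # each shifted by 1/N_SITES of the CR period.
            events.append((cycle_start + k * CR_PERIOD_MS / N_SITES, k))
    return events

for t, site in cr_schedule(n_cycles=5)[:8]:
    print(f"t = {t:6.1f} ms  ->  burst ({PULSE_MS} ms) on site {site}")
```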

  5. Propagating waves in visual cortex: a large-scale model of turtle visual cortex.

    PubMed

    Nenadic, Zoran; Ghosh, Bijoy K; Ulinski, Philip

    2003-01-01

This article describes a large-scale model of turtle visual cortex that simulates the propagating waves of activity seen in real turtle cortex. The cortex model contains 744 multicompartment models of pyramidal cells, stellate cells, and horizontal cells. Input is provided by an array of 201 geniculate neurons modeled as single compartments with spike-generating mechanisms and axons modeled as delay lines. Diffuse retinal flashes or presentation of spots of light to the retina are simulated by activating groups of geniculate neurons. The model is limited in that it does not have a retina to provide realistic input to the geniculate and the cortex, and does not incorporate all of the biophysical details of real cortical neurons. However, the model does reproduce the fundamental features of planar propagating waves. Activation of geniculate neurons produces a wave of activity that originates at the rostrolateral pole of the cortex at the point where a high density of geniculate afferents enter the cortex. Waves propagate across the cortex with velocities of 4 μm/ms to 70 μm/ms and occasionally reflect from the caudolateral border of the cortex. PMID:12567015

  6. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    PubMed Central

    2010-01-01

    Background In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure in the intra-population contact pattern of the approaches. The age breakdown analysis shows

  7. Large-scale measurement and modeling of backbone Internet traffic

    NASA Astrophysics Data System (ADS)

    Roughan, Matthew; Gottlieb, Joel

    2002-07-01

There is a brewing controversy in the traffic modeling community concerning how to model backbone traffic. The fundamental work on self-similarity in data traffic appears to be contradicted by recent findings that suggest that backbone traffic is smooth. The traffic analysis work to date has focused on high-quality but limited-scope packet trace measurements; this limits its applicability to high-speed backbone traffic. This paper uses more than one year's worth of SNMP traffic data covering an entire Tier 1 ISP backbone to address the question of how backbone network traffic should be modeled. Although the limitations of SNMP measurements do not permit us to comment on the fine timescale behavior of the traffic, careful analysis of the data suggests that irrespective of the variation at fine timescales, we can construct a simple traffic model that captures key features of the observed traffic. Furthermore, the model's parameters are measurable using existing network infrastructure, making this model practical in a present-day operational network. In addition to its practicality, the model verifies basic statistical multiplexing results, and thus provides deep insight into how smooth backbone traffic really is.

  8. Statistical Modeling of Large-Scale Simulation Data

    SciTech Connect

    Eliassi-Rad, T; Critchlow, T; Abdulla, G

    2002-02-22

With the advent of fast computer systems, scientists are now able to generate terabytes of simulation data. Unfortunately, the sheer size of these data sets has made efficient exploration of them impossible. To aid scientists in gathering knowledge from their simulation data, we have developed an ad-hoc query infrastructure. Our system, called AQSim (short for Ad-hoc Queries for Simulation), reduces the data storage requirements and access times in two stages. First, it creates and stores mathematical and statistical models of the data. Second, it evaluates queries on the models of the data instead of on the entire data set. In this paper, we present two simple but highly effective statistical modeling techniques for simulation data. Our first modeling technique computes the true mean of systematic partitions of the data. It makes no assumptions about the distribution of the data and uses a variant of the root mean square error to evaluate a model. In our second statistical modeling technique, we use the Anderson-Darling goodness-of-fit method on systematic partitions of the data. This second method evaluates a model by how well it passes the normality test on the data. Both of our statistical models summarize the data so as to answer range queries in the most effective way. We calculate precision on an answer to a query by scaling the one-sided Chebyshev Inequalities with the original mesh's topology. Our experimental evaluations on two scientific simulation data sets illustrate the value of using these statistical modeling techniques on large simulation data sets.
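
    The first technique, partition-level summaries used to answer range queries with Chebyshev-based precision, can be sketched as follows; the synthetic data, partition size, and the plain one-sided Chebyshev bound (rather than the paper's mesh-scaled version) are illustrative assumptions.

```python
# Sketch of the first AQSim-style technique described above: store the
# mean (and variance) of systematic partitions of the data, then answer
# range queries from the partition summaries. Data, partition size, and
# the plain one-sided Chebyshev bound are illustrative assumptions; the
# paper scales the bound with the original mesh topology.
import random
import statistics

random.seed(1)
data = [random.gauss(300.0, 25.0) for _ in range(10_000)]  # e.g. a field variable
PARTITION = 500

# Model construction: one (mean, variance, count) summary per partition.
summaries = []
for start in range(0, len(data), PARTITION):
    chunk = data[start:start + PARTITION]
    summaries.append((statistics.fmean(chunk), statistics.pvariance(chunk), len(chunk)))

def range_query(lo_idx, hi_idx):
    """Approximate the mean over data[lo_idx:hi_idx] using only the summaries."""
    total, count = 0.0, 0
    for p, (mean, var, n) in enumerate(summaries):
        p_lo, p_hi = p * PARTITION, p * PARTITION + n
        overlap = max(0, min(hi_idx, p_hi) - max(lo_idx, p_lo))
        total += mean * overlap
        count += overlap
    return total / count if count else float("nan")

approx = range_query(2_300, 7_800)
exact = statistics.fmean(data[2_300:7_800])
print(f"approximate mean {approx:.2f} vs exact {exact:.2f}")

# One-sided Chebyshev: P(X - mu >= k*sigma) <= 1 / (1 + k^2), per partition.
k = 2.0
print(f"P(deviation >= {k} sigma) <= {1.0 / (1.0 + k * k):.3f}")
```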

  9. Multilevel method for modeling large-scale networks.

    SciTech Connect

    Safro, I. M.

    2012-02-24

Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to the real networks, generating the artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies and other tasks. Network generation, reconstruction, and prediction of its future topology are central issues of this field. In this project, we address the questions related to the understanding of the network modeling, investigating its structure and properties, and generating artificial networks. Most of the modern network generation methods are based either on various random graph models (reinforced by a set of properties such as power law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization such as R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes and modeling that addresses the problem of preserving a set of simplified properties do not fit accurately enough the real networks. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, the randomization and satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from

  10. Large scale structures and the cubic galileon model

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Sourav; Dialektopoulos, Konstantinos F.; Tomaras, Theodore N.

    2016-05-01

The maximum size of a bound cosmic structure is computed perturbatively as a function of its mass in the framework of the cubic galileon, proposed recently to model the dark energy of our Universe. Comparison of our results with observations constrains the matter-galileon coupling of the model to 0.033 ≲ α ≲ 0.17, thus improving previous bounds based solely on solar system physics.

  11. Investigation of models for large-scale meteorological prediction experiments

    NASA Technical Reports Server (NTRS)

    Spar, J.

    1981-01-01

An attempt is made to compute the contributions of various surface boundary conditions to the monthly mean states generated by the 7 layer, 8 x 10 GISS climate model (Hansen et al., 1980), and also to examine the influence of initial conditions on the model climate simulations. Obvious climatic controls such as the shape and rotation of the Earth, the solar radiation, and the dry composition of the atmosphere are fixed, and only the surface boundary conditions are altered in the various climate simulations.

  12. Geometric algorithms for electromagnetic modeling of large scale structures

    NASA Astrophysics Data System (ADS)

    Pingenot, James

With the rapid increase in the speed and complexity of integrated circuit designs, 3D full wave and time domain simulation of chip, package, and board systems becomes more and more important for the engineering of modern designs. Much effort has been applied to the problem of electromagnetic (EM) simulation of such systems in recent years. Major advances in boundary element EM simulations have led to O(n log n) simulations using iterative methods and advanced Fast Fourier Transform (FFT), Multi-Level Fast Multi-pole Methods (MLFMM), and low-rank matrix compression techniques. These advances have been augmented with an explosion of multi-core and distributed computing technologies; however, realization of the full scale of these capabilities has been hindered by cumbersome and inefficient geometric processing. Anecdotal evidence from industry suggests that users may spend around 80% of turn-around time manipulating the geometric model and mesh. This dissertation addresses this problem by developing fast and efficient data structures and algorithms for 3D modeling of chips, packages, and boards. The methods proposed here harness the regular, layered 2D nature of the models (often referred to as "2.5D") to optimize these systems for large geometries. First, an architecture is developed for efficient storage and manipulation of 2.5D models. The architecture gives special attention to native representation of structures across various input models and special issues particular to 3D modeling. The 2.5D structure is then used to optimize the mesh systems. First, circuit/EM co-simulation techniques are extended to provide electrical connectivity between objects. This concept is used to connect independently meshed layers, allowing simple and efficient 2D mesh algorithms to be used in creating a 3D mesh. Here, adaptive meshing is used to ensure that the mesh accurately models the physical unknowns (current and charge). Utilizing the regularized nature of 2.5D objects and

  13. Simulation of large-scale rule-based models

    SciTech Connect

    Hlavacek, William S; Monnie, Michael I; Colvin, Joshua; Faseder, James

    2008-01-01

Interactions of molecules, such as signaling proteins, with multiple binding sites and/or multiple sites of post-translational covalent modification can be modeled using reaction rules. Rules comprehensively, but implicitly, define the individual chemical species and reactions that molecular interactions can potentially generate. Although rules can be automatically processed to define a biochemical reaction network, the network implied by a set of rules is often too large to generate completely or to simulate using conventional procedures. To address this problem, we present DYNSTOC, a general-purpose tool for simulating rule-based models. DYNSTOC implements a null-event algorithm for simulating chemical reactions in a homogeneous reaction compartment. The simulation method does not require that a reaction network be specified explicitly in advance, but rather takes advantage of the availability of the reaction rules in a rule-based specification of a network to determine if a randomly selected set of molecular components participates in a reaction during a time step. DYNSTOC reads reaction rules written in the BioNetGen language which is useful for modeling protein-protein interactions involved in signal transduction. The method of DYNSTOC is closely related to that of STOCHSIM. DYNSTOC differs from STOCHSIM by allowing for model specification in terms of BNGL, which extends the range of protein complexes that can be considered in a model. DYNSTOC enables the simulation of rule-based models that cannot be simulated by conventional methods. We demonstrate the ability of DYNSTOC to simulate models accounting for multisite phosphorylation and multivalent binding processes that are characterized by large numbers of reactions. DYNSTOC is free for non-commercial use. The C source code, supporting documentation and example input files are available at .
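
    The null-event idea, randomly sampling molecular components each step and advancing time whether or not a rule fires, can be sketched without BioNetGen; the single phosphorylation rule, rate constant, and population sizes below are invented placeholders.

```python
# Minimal sketch of a null-event stochastic simulation in the spirit
# described above: molecules are sampled at random each time step and a
# rule fires only if the sampled molecules match it; otherwise the step
# is a "null event" and only time advances. The single phosphorylation
# rule, rate constant, and population sizes are invented placeholders.
import random

random.seed(42)

# State: each substrate molecule is a dict of site states (rule-based style).
kinases = 50
substrates = [{"phospho": False} for _ in range(200)]

RATE = 0.005          # per kinase-substrate pair per time unit (invented)
DT = 0.01             # time step chosen so that RATE * kinases * DT << 1
T_END = 50.0

t, events = 0.0, 0
while t < T_END:
    # Randomly select one potential reactant per step.
    s = random.choice(substrates)
    # Rule: kinase + unphosphorylated substrate -> phosphorylated substrate.
    if not s["phospho"] and random.random() < RATE * kinases * DT:
        s["phospho"] = True
        events += 1
    # Otherwise: null event, nothing changes except time.
    t += DT

done = sum(1 for s in substrates if s["phospho"])
print(f"{events} reactions fired; {done}/{len(substrates)} substrates phosphorylated")
```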

  14. Modeling and simulation of large scale stirred tank

    NASA Astrophysics Data System (ADS)

    Neuville, John R.

The purpose of this dissertation is to provide a written record of the evaluation performed on the DWPF mixing process by the construction of numerical models that resemble the geometry of this process. There were seven numerical models constructed to evaluate the DWPF mixing process and four pilot plants. The models were developed with Fluent software and the results from these models were used to evaluate the structure of the flow field and the power demand of the agitator. The results from the numerical models were compared with empirical data collected from these pilot plants that had been operated at an earlier date. Mixing is commonly used in a variety of ways throughout industry to blend miscible liquids, disperse gas through liquid, form emulsions, promote heat transfer, and suspend solid particles. The DOE sites at Hanford in Richland, Washington, West Valley in New York, and Savannah River in Aiken, South Carolina have developed a process that immobilizes highly radioactive liquid waste. The radioactive liquid waste at DWPF is an opaque sludge that is mixed in a stirred tank with glass frit particles and water to form a slurry of specified proportions. The DWPF mixing process is composed of a flat bottom cylindrical mixing vessel with a centrally located helical coil and agitator. The helical coil is used to heat and cool the contents of the tank and can improve flow circulation. The agitator shaft has two impellers: a radial blade and a hydrofoil blade. The hydrofoil is used to circulate the mixture between the top region and bottom region of the tank. The radial blade sweeps the bottom of the tank and pushes the fluid in the outward radial direction. The full scale vessel contains about 9500 gallons of slurry with flow behavior characterized as a Bingham Plastic. Particles in the mixture have an abrasive characteristic that causes excessive erosion to internal vessel components at higher impeller speeds. The desire for this mixing process is to ensure the

  15. Modelling large scale human activity in San Francisco

    NASA Astrophysics Data System (ADS)

    Gonzalez, Marta

    2010-03-01

Diverse groups of people with a wide variety of schedules, activities and travel needs compose our cities nowadays. This represents a big challenge for modeling travel behaviors in urban environments; those models are of crucial interest for a wide variety of applications such as traffic forecasting, spreading of viruses, or measuring human exposure to air pollutants. The traditional means to obtain knowledge about travel behavior is limited to surveys on travel journeys. The obtained information is based on questionnaires that are usually costly to implement, with intrinsic limitations in covering large numbers of individuals and some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size containing information on the frequency of visits to different locations. Additionally we use a complementary data set given by smart subway fare cards, offering us information about the exact time each passenger enters or leaves a subway station and the coordinates of that station. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns in each visited location in further detail. Integrating the two described data sets, we provide a dynamical model of human travels that incorporates different aspects observed empirically.
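
    The two empirical ingredients mentioned, agents placed in proportion to population density and a per-agent characteristic trajectory size governing visit frequencies, can be sketched as follows; the district densities, agent counts, and the Zipf-like rank weighting are illustrative assumptions.

```python
# Sketch of the two empirical ingredients described above: agents are
# placed in proportion to population density, and each agent revisits a
# small set of locations with rank-dependent (Zipf-like) frequencies.
# Densities, agent counts, and the Zipf weighting are placeholders.
import random

random.seed(7)

# Invented relative population densities per district.
districts = {"downtown": 0.45, "mission": 0.25, "sunset": 0.20, "marina": 0.10}

def spawn_agents(n):
    names, weights = zip(*districts.items())
    homes = random.choices(names, weights=weights, k=n)
    # Each agent has a characteristic trajectory size: how many distinct
    # locations it tends to visit.
    return [{"home": h, "n_locations": random.randint(2, 8)} for h in homes]

def daily_visits(agent, trips=5):
    """Sample a day of visits; location ranks are revisited Zipf-like."""
    ranks = list(range(1, agent["n_locations"] + 1))
    weights = [1.0 / r for r in ranks]          # rank-1 place visited most
    return random.choices(ranks, weights=weights, k=trips)

agents = spawn_agents(1000)
sample = agents[0]
print(sample["home"], "visits location ranks:", daily_visits(sample))
print("agents per district:",
      {d: sum(a["home"] == d for a in agents) for d in districts})
```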

  16. Large-Scale Modeling of Wordform Learning and Representation

    ERIC Educational Resources Information Center

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn nearly…

  17. Parameterization of Fire Injection Height in Large Scale Transport Model

    NASA Astrophysics Data System (ADS)

    Paugam, R.; Wooster, M.; Atherton, J.; Val Martin, M.; Freitas, S.; Kaiser, J. W.; Schultz, M. G.

    2012-12-01

The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP) which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modeled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified; in particular, we added (i) an equation for mass conservation, (ii) a scheme to parameterize horizontal entrainment/detrainment, and (iii) a new initialization module which estimates the sensible heat released by the fire on the basis of measured FRP rather than fuel cover type. FRP and Active Fire (AF) area necessary for the initialization of the model are directly derived from a modified version of the Dozier algorithm applied to the MOD14 product. An optimization (using the simulated annealing method) of this new version of the PRM is then proposed based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set covers the main fire regions (Africa, Siberia, Indonesia, and North and South America) and is set up to (i) retain fires where plume height and FRP can be easily linked (i.e. avoid large fire clusters where individual plumes might interact), (ii) keep fires which show a decrease of FRP and AF area after the MISR overpass (i.e. to minimize the effect of the time period needed for the plume to

  18. Parameterization of Fire Injection Height in Large Scale Transport Model

    NASA Astrophysics Data System (ADS)

    Paugam, r.; Wooster, m.; Freitas, s.; Gonzi, s.; Palmer, p.

    2012-04-01

The parameterization of fire injection height in global chemistry transport models is currently a subject of debate in the atmospheric community. The approach usually proposed in the literature is based on relationships linking injection height and remote sensing products like the Fire Radiative Power (FRP) which can measure active fire properties. In this work we present an approach based on the Plume Rise Model (PRM) developed by Freitas et al (2007, 2010). This plume model is already used in different host models (e.g. WRF, BRAMS). In its original version, the fire is modelled by: a convective heat flux (CHF; pre-defined by the land cover and evaluated as a fixed part of the total heat released) and a plume radius (derived from the GOES Wildfire-ABBA product) which defines the fire extension where the CHF is homogeneously distributed. Here in our approach the Freitas model is modified. Major modifications are implemented in its initialisation module: (i) CHF and the Active Fire area are directly forced from FRP data derived from a modified version of the Dozier algorithm applied to the MOD12 product, and (ii) a new module for the buoyancy flux calculation is implemented instead of the original module based on the Morton, Taylor and Turner equation. Furthermore the dynamical core of the plume model is also modified with a new entrainment scheme inspired by the latest results from shallow convection parameterization. Optimization and validation of this new version of the Freitas PRM are based on fire plume characteristics derived from the official MISR plume height project and atmospheric profiles extracted from the ECMWF analysis. The data set is (i) built up to only keep fires where plume height and FRP can be easily linked (i.e. avoid large fire clusters where individual plumes might interact) and (ii) split per fire land cover type to optimize the constant of the buoyancy flux module and the entrainment scheme to different fire regimes. Results show that the new PRM is

  19. GIS for large-scale watershed observational data model

    NASA Astrophysics Data System (ADS)

    Patino-Gomez, Carlos

    Because integrated management of a river basin requires the development of models that are used for many purposes, e.g., to assess risks and possible mitigation of droughts and floods, manage water rights, assess water quality, and simply to understand the hydrology of the basin, the development of a relational database from which models can access the various data needed to describe the systems being modeled is fundamental. In order for this concept to be useful and widely applicable, however, it must have a standard design. The recently developed ArcHydro data model facilitates the organization of data according to the "basin" principle and allows access to hydrologic information by models. The development of a basin-scale relational database for the Rio Grande/Bravo basin implemented in a Geographic Information System is one of the contributions of this research. This geodatabase represents the first major attempt to establish a more complete understanding of the basin as a whole, including spatial and temporal information obtained from the United States of America and Mexico. Difficulties in processing raster datasets over large regions are studied in this research. One of the most important contributions is the application of a Raster-Network Regionalization technique, which utilizes raster-based analysis at the subregional scale in an efficient manner and combines the resulting subregional vector datasets into a regional database. Another important contribution of this research is focused on implementing a robust structure for handling huge temporal data sets related to monitoring points such as hydrometric and climatic stations, reservoir inlets and outlets, water rights, etc. For the Rio Grande study area, the ArcHydro format is applied to the historical information collected in order to include and relate these time series to the monitoring points in the geodatabase. Its standard time series format is changed to include a relationship to the agency from

  20. Testing model independent modified gravity with future large scale surveys

    SciTech Connect

    Thomas, Daniel B.; Contaldi, Carlo R. E-mail: c.contaldi@ic.ac.uk

    2011-12-01

Model-independent parametrisations of modified gravity have attracted a lot of attention over the past few years and numerous combinations of experiments and observables have been suggested to constrain the parameters used in these models. Galaxy clusters have been mentioned, but not looked at as extensively in the literature as some other probes. Here we look at adding galaxy clusters into the mix of observables and examine how they could improve the constraints on the modified gravity parameters. In particular, we forecast the constraints from combining Planck satellite Cosmic Microwave Background (CMB) measurements and the Sunyaev-Zeldovich (SZ) cluster catalogue with a DES-like Weak Lensing (WL) survey. We find that cluster counts significantly improve the constraints over those derived using CMB and WL. We then look at surveys further into the future, to see how much further it may be feasible to tighten the constraints.

  1. Multistability in Large Scale Models of Brain Activity

    PubMed Central

    Golos, Mathieu; Jirsa, Viktor; Daucé, Emmanuel

    2015-01-01

    Noise driven exploration of a brain network’s dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network’s capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain’s dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system’s attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a bigger number of attractors than ever reported so far. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. Those different modes share similarity with the fMRI independent components observed in the “resting state” condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors. PMID:26709852
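
    The attractor-sampling procedure, relaxing a deterministic graded-response Hopfield network from many uniformly sampled initial conditions and collecting the distinct fixed points, can be sketched on a random symmetric coupling matrix standing in for a connectome; the network size, gain, and tolerance below are illustrative assumptions.

```python
# Sketch of the attractor-sampling procedure described above: relax a
# deterministic graded-response Hopfield network from uniformly sampled
# initial conditions and collect distinct fixed points. The random
# symmetric coupling matrix stands in for a connectome; gain, threshold,
# and network size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 40
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
W = (W + W.T) / 2.0            # symmetric couplings favor fixed points
np.fill_diagonal(W, 0.0)

GAIN, THRESHOLD = 4.0, 0.0     # "low temperature" corresponds to high gain

def relax(x, steps=5000, dt=0.05):
    """Euler-integrate the graded-response dynamics toward a fixed point."""
    for _ in range(steps):
        x = x + dt * (-x + np.tanh(GAIN * (W @ x - THRESHOLD)))
    return x

attractors = []
for _ in range(200):                               # uniform sampling of ICs
    x_final = relax(rng.uniform(-1.0, 1.0, size=N))
    # Count a new attractor only if it is not close to one already stored.
    if not any(np.linalg.norm(x_final - a) < 1e-3 for a in attractors):
        attractors.append(x_final)

print(f"distinct fixed-point attractors found: {len(attractors)}")
```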

  2. Multistability in Large Scale Models of Brain Activity.

    PubMed

    Golos, Mathieu; Jirsa, Viktor; Daucé, Emmanuel

    2015-12-01

    Noise driven exploration of a brain network's dynamic repertoire has been hypothesized to be causally involved in cognitive function, aging and neurodegeneration. The dynamic repertoire crucially depends on the network's capacity to store patterns, as well as their stability. Here we systematically explore the capacity of networks derived from human connectomes to store attractor states, as well as various network mechanisms to control the brain's dynamic repertoire. Using a deterministic graded response Hopfield model with connectome-based interactions, we reconstruct the system's attractor space through a uniform sampling of the initial conditions. Large fixed-point attractor sets are obtained in the low temperature condition, with a bigger number of attractors than ever reported so far. Different variants of the initial model, including (i) a uniform activation threshold or (ii) a global negative feedback, produce a similarly robust multistability in a limited parameter range. A numerical analysis of the distribution of the attractors identifies spatially-segregated components, with a centro-medial core and several well-delineated regional patches. Those different modes share similarity with the fMRI independent components observed in the "resting state" condition. We demonstrate non-stationary behavior in noise-driven generalizations of the models, with different meta-stable attractors visited along the same time course. Only the model with a global dynamic density control is found to display robust and long-lasting non-stationarity with no tendency toward either overactivity or extinction. The best fit with empirical signals is observed at the edge of multistability, a parameter region that also corresponds to the highest entropy of the attractors. PMID:26709852

  3. Renormalizing a viscous fluid model for large scale structure formation

    NASA Astrophysics Data System (ADS)

    Führer, Florian; Rigopoulos, Gerasimos

    2016-02-01

Using the Stochastic Adhesion Model (SAM) as a simple toy model for cosmic structure formation, we study renormalization and the removal of the cutoff dependence from loop integrals in perturbative calculations. SAM shares the same symmetry with the full system of continuity+Euler equations and includes a viscosity term and a stochastic noise term, similar to the effective theories recently put forward to model CDM clustering. We show in this context that if the viscosity and noise terms are treated as perturbative corrections to the standard Eulerian perturbation theory, they are necessarily non-local in time. To ensure Galilean invariance, higher order vertices related to the viscosity and the noise must then be added and we explicitly show at one loop that these terms act as counter terms for vertex diagrams. The Ward identities ensure that the non-local-in-time theory can be renormalized consistently. Another possibility is to include the viscosity in the linear propagator, resulting in exponential damping at high wavenumber. The resulting local-in-time theory is then renormalizable to one loop, requiring fewer free parameters for its renormalization.
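
    For reference, an adhesion-type equation with a viscosity term and a stochastic noise term, as discussed above, can be written schematically as follows; the normalization and the exact noise statistics used in the paper may differ.

```latex
% Schematic adhesion-type dynamics for the velocity field u(x, tau):
% nonlinear advection balanced by viscosity plus stochastic forcing.
% The precise form and noise statistics used in the paper may differ.
\begin{equation}
  \partial_\tau \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    = \nu \nabla^{2}\mathbf{u} + \boldsymbol{\eta}(\mathbf{x},\tau),
  \qquad
  \langle \eta_i(\mathbf{x},\tau)\,\eta_j(\mathbf{x}',\tau')\rangle
    \propto \delta_{ij}\,\delta(\tau-\tau').
\end{equation}
```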

  4. Large scale molecular dynamics modeling of materials fabrication processes

    SciTech Connect

    Belak, J.; Glosli, J.N.; Boercker, D.B.; Stowers, I.F.

    1994-02-01

    An atomistic molecular dynamics model of materials fabrication processes is presented. Several material removal processes are shown to be within the domain of this simulation method. Results are presented for orthogonal cutting of copper and silicon and for crack propagation in silica glass. Both copper and silicon show ductile behavior, but the atomistic mechanisms that allow this behavior are significantly different in the two cases. The copper chip remains crystalline while the silicon chip transforms into an amorphous state. The critical stress for crack propagation in silica glass was found to be in reasonable agreement with experiment and a novel stick-slip phenomenon was observed.

  5. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-07-01

Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is: how well do large-scale models simulate the propagation from meteorological to hydrological drought? To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that were part of the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation, i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  6. Soil hydrologic characterization for modeling large scale soil remediation protocols

    NASA Astrophysics Data System (ADS)

    Romano, Nunzio; Palladino, Mario; Di Fiore, Paola; Sica, Benedetto; Speranza, Giuseppe

    2014-05-01

In Campania Region (Italy), the Ministry of Environment identified a National Interest Priority Site (NIPS) with a surface of about 200,000 ha, characterized by different levels and sources of pollution. This area, called Litorale Domitio-Agro Aversano, includes some polluted agricultural land belonging to more than 61 municipalities in the Naples and Caserta provinces. In this area, a high level of spotted soil contamination is moreover due to legal and illegal dumping of industrial and municipal wastes, with hazardous consequences also on the quality of the water table. The EU-Life+ project ECOREMED (Implementation of eco-compatible protocols for agricultural soil remediation in Litorale Domizio-Agro Aversano NIPS) has the major aim of defining an operating protocol for agriculture-based bioremediation of contaminated agricultural soils, also including the use of crops extracting pollutants to be used as biomasses for renewable energy production. In the framework of this project, soil hydrologic characterization plays a key role, and modeling water flow and solute transport poses two main challenges on which we focus. A first question is related to the fate of contaminants infiltrated from stormwater runoff and the potential for groundwater contamination. Another question is the quantification of fluxes and the spatial extent of root water uptake by the plant species employed to extract pollutants in the uppermost soil horizons. Given the high variability of the spatial distribution of pollutants, we use soil characterization at different scales, from the field scale when facing the root water uptake process, to the regional scale when simulating the interaction between soil hydrology and groundwater fluxes.

  7. Numerical models for ac loss calculation in large-scale applications of HTS coated conductors

    NASA Astrophysics Data System (ADS)

    Quéval, Loïc; Zermeño, Víctor M. R.; Grilli, Francesco

    2016-02-01

Numerical models are powerful tools to predict the electromagnetic behavior of superconductors. In recent years, a variety of models have been successfully developed to simulate high-temperature-superconducting (HTS) coated conductor tapes. While the models work well for the simulation of individual tapes or relatively small assemblies, their direct applicability to devices involving hundreds or thousands of tapes, e.g., coils used in electrical machines, is questionable. Indeed, the simulation time and memory requirement can quickly become prohibitive. In this paper, we develop and compare two different models for simulating realistic HTS devices composed of a large number of tapes: (1) the homogenized model simulates the coil using an equivalent anisotropic homogeneous bulk with specifically developed current constraints to account for the fact that each turn carries the same current; (2) the multi-scale model parallelizes and reduces the computational problem by simulating only several individual tapes at significant positions of the coil's cross-section using appropriate boundary conditions to account for the field generated by the neighboring turns. Both methods are used to simulate a coil made of 2000 tapes, and compared against the widely used H-formulation finite-element model that includes all the tapes. Both approaches allow faster simulations of large numbers of HTS tapes by 1-3 orders of magnitude, while maintaining good accuracy of the results. Both models can therefore be used to design and optimize large-scale HTS devices. This study provides a key advancement with respect to previous versions of both models. The homogenized model is extended from simple stacks to large arrays of tapes. For the multi-scale model, the importance of the choice of the current distribution used to generate the background field is underlined; the error in ac loss estimation resulting from the most obvious choice of starting from a uniform current distribution is revealed.

  8. Systematic large-scale secondary circulations in a regional climate model

    NASA Astrophysics Data System (ADS)

    Becker, Nico; Ulbrich, Uwe; Klein, Rupert

    2015-05-01

    Regional climate models (RCMs) are used to add the effects of nonresolved scales to coarser resolved model simulations by using a finer grid within a limited domain. We identify large-scale secondary circulations (SCs) relative to the driving global climate model (GCM) in an RCM simulation over Europe. By applying a clustering technique, we find that the SC depends on the large-scale flow prescribed by the driving GCM data. Evidence is presented that the SC is caused by the different representations of orographic effects in the RCM and the GCM. Flow modifications in the RCM caused by the Alps lead to large-scale vortices in the SC fields. These vortices are limited by the RCM boundaries, causing artificial boundary-parallel flows. The SC is associated with geopotential height and temperature anomalies between RCM and GCM and has the potential to produce systematic large-scale biases in RCMs.

  9. Using large-scale neural models to interpret connectivity measures of cortico-cortical dynamics at millisecond temporal resolution

    PubMed Central

    Banerjee, Arpan; Pillai, Ajay S.; Horwitz, Barry

    2012-01-01

Over the last two decades numerous functional imaging studies have shown that higher order cognitive functions are crucially dependent on the formation of distributed, large-scale neuronal assemblies (neurocognitive networks), often for very short durations. This has fueled the development of a vast number of functional connectivity measures that attempt to capture the spatiotemporal evolution of neurocognitive networks. Unfortunately, interpreting the neural basis of goal directed behavior using connectivity measures on neuroimaging data is highly dependent on the assumptions underlying the development of the measure, the nature of the task, and the modality of the neuroimaging technique that was used. This paper has two main purposes. The first is to provide an overview of some of the different measures of functional/effective connectivity that deal with high temporal resolution neuroimaging data. We will include some results that come from a recent approach that we have developed to identify the formation and extinction of task-specific, large-scale neuronal assemblies from electrophysiological recordings at a ms-by-ms temporal resolution. The second purpose of this paper is to indicate how to partially validate the interpretations drawn from this (or any other) connectivity technique by using simulated data from large-scale, neurobiologically realistic models. Specifically, we applied our recently developed method to realistic simulations of MEG data during a delayed match-to-sample (DMS) task condition and a passive viewing of stimuli condition using a large-scale neural model of the ventral visual processing pathway. Simulated MEG data using simple head models were generated from sources placed in V1, V4, IT, and prefrontal cortex (PFC) for the passive viewing condition. The results show how closely the conclusions obtained from the functional connectivity method match with what actually occurred at the neuronal network level. PMID:22291621

  10. Using large-scale neural models to interpret connectivity measures of cortico-cortical dynamics at millisecond temporal resolution.

    PubMed

    Banerjee, Arpan; Pillai, Ajay S; Horwitz, Barry

    2011-01-01

    Over the last two decades numerous functional imaging studies have shown that higher order cognitive functions are crucially dependent on the formation of distributed, large-scale neuronal assemblies (neurocognitive networks), often for very short durations. This has fueled the development of a vast number of functional connectivity measures that attempt to capture the spatiotemporal evolution of neurocognitive networks. Unfortunately, interpreting the neural basis of goal-directed behavior using connectivity measures on neuroimaging data is highly dependent on the assumptions underlying the development of the measure, the nature of the task, and the modality of the neuroimaging technique that was used. This paper has two main purposes. The first is to provide an overview of some of the different measures of functional/effective connectivity that deal with high temporal resolution neuroimaging data. We will include some results that come from a recent approach that we have developed to identify the formation and extinction of task-specific, large-scale neuronal assemblies from electrophysiological recordings at a ms-by-ms temporal resolution. The second purpose of this paper is to indicate how to partially validate the interpretations drawn from this (or any other) connectivity technique by using simulated data from large-scale, neurobiologically realistic models. Specifically, we applied our recently developed method to realistic simulations of MEG data during a delayed match-to-sample (DMS) task condition and a passive viewing of stimuli condition using a large-scale neural model of the ventral visual processing pathway. Simulated MEG data using simple head models were generated from sources placed in V1, V4, IT, and prefrontal cortex (PFC) for the passive viewing condition. The results show how closely the conclusions obtained from the functional connectivity method match with what actually occurred at the neuronal network level. PMID:22291621

  11. Forecasting and understanding cirrus clouds with the large scale Lagrangian microphysical model CLaMS-Ice

    NASA Astrophysics Data System (ADS)

    Rolf, Christian; Grooß, Jens-Uwe; Spichtinger, Peter; Costa, Anja; Krämer, Martina

    2015-04-01

    Cirrus clouds play an important role by influencing the Earth's radiation budget and the global climate (Heintzenberg and Charlson, 2009). This is shown in the recent IPCC reports, where the large error bars relating to the cloud radiative forcing underline the poor scientific knowledge of the underlying processes. The formation and further evolution of cirrus clouds are determined by the interplay of temperature, ice nuclei (IN) properties, relative humidity, cooling rates and ice crystal sedimentation. For that reason, a Lagrangian approach using meteorological wind fields is the most realistic way to simulate cirrus clouds. In addition, to represent complete cirrus systems such as frontal cirrus, three-dimensional cloud modeling on a large scale is desirable. To this end, we coupled the two-moment microphysical ice model of Spichtinger and Gierens (2009) with the 3D Lagrangian model CLaMS (McKenna et al., 2002). The new CLaMS-Ice module simulates cirrus formation by including heterogeneous and homogeneous freezing as well as ice crystal sedimentation. The box model is operated along CLaMS trajectories and individually initialized with ECMWF meteorological fields. In addition, temperature fluctuations are superimposed directly on the trajectory temperature and pressure using the parametrization of Gary et al. (2006). For a typical cirrus scenario with latitude/longitude coverage of 49° x 42° on three pressure levels, 6100 trajectories are simulated over 24 hours. To obtain model results in an acceptable time, the box model was accelerated by about a factor of 10 before being coupled to CLaMS; CLaMS-Ice now needs only about 30-40 minutes for such a simulation. During the first HALO cloud field campaign (ML-Cirrus), CLaMS-Ice has been successfully deployed as a forecast tool. Here, we give an overview of the capabilities of CLaMS-Ice for forecasting, modeling and understanding of cirrus clouds in general. In addition, examples from the recent ML
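
    As a rough illustration of the coupling strategy described above (not CLaMS-Ice code), the sketch below steps a placeholder box model point-by-point along one trajectory after superimposing small-scale temperature fluctuations on the trajectory temperature; the fluctuation function, the microphysics step, and all names are stand-in assumptions.

        import numpy as np

        def run_box_model_along_trajectory(times, temperature, pressure,
                                           fluctuation, microphysics_step, state):
            """Integrate an ice box model along one Lagrangian trajectory.
            fluctuation(t) returns a temperature perturbation (K);
            microphysics_step(state, T, p, dt) returns the updated box-model state."""
            for i in range(1, len(times)):
                dt = times[i] - times[i - 1]
                T = temperature[i] + fluctuation(times[i])   # superimposed fluctuations
                state = microphysics_step(state, T, pressure[i], dt)
            return state

        # Placeholder components, purely illustrative.
        fluct = lambda t: 0.5 * np.sin(2.0 * np.pi * t / 1200.0)      # +/- 0.5 K wave
        step = lambda s, T, p, dt: {"ice_number": s["ice_number"], "last_T": T}
        times = np.arange(0.0, 3600.0, 60.0)
        out = run_box_model_along_trajectory(times, np.full(times.size, 220.0),
                                             np.full(times.size, 25000.0),
                                             fluct, step, {"ice_number": 0.0})
        print(out)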

  12. Odor Experience Facilitates Sparse Representations of New Odors in a Large-Scale Olfactory Bulb Model.

    PubMed

    Zhou, Shanglin; Migliore, Michele; Yu, Yuguo

    2016-01-01

    Prior odor experience has a profound effect on the coding of new odor inputs by animals. The olfactory bulb, the first relay of the olfactory pathway, can substantially shape the representations of odor inputs. How prior odor experience affects the representation of new odor inputs in the olfactory bulb, and the underlying network mechanism, are still unclear. Here we carried out a series of simulations based on a large-scale realistic mitral-granule network model and found that prior odor experience not only accelerated the formation of the network but also significantly strengthened sparse responses in the mitral cell network while decreasing sparse responses in the granule cell network. This modulation of sparse representations may be due to the increase of inhibitory synaptic weights. Correlations among mitral cells within the network and correlations between mitral network responses to different odors decreased gradually when the number of prior training odors was increased, resulting in a greater decorrelation of the bulb representations of input odors. Based on these findings, we conclude that prior odor experience facilitates sparse representations of new odors in the mitral cell network through an experience-enhanced inhibition mechanism. PMID:26903819

  13. Odor Experience Facilitates Sparse Representations of New Odors in a Large-Scale Olfactory Bulb Model

    PubMed Central

    Zhou, Shanglin; Migliore, Michele; Yu, Yuguo

    2016-01-01

    Prior odor experience has a profound effect on the coding of new odor inputs by animals. The olfactory bulb, the first relay of the olfactory pathway, can substantially shape the representations of odor inputs. How prior odor experience affects the representation of new odor inputs in the olfactory bulb, and the underlying network mechanism, are still unclear. Here we carried out a series of simulations based on a large-scale realistic mitral-granule network model and found that prior odor experience not only accelerated the formation of the network but also significantly strengthened sparse responses in the mitral cell network while decreasing sparse responses in the granule cell network. This modulation of sparse representations may be due to the increase of inhibitory synaptic weights. Correlations among mitral cells within the network and correlations between mitral network responses to different odors decreased gradually when the number of prior training odors was increased, resulting in a greater decorrelation of the bulb representations of input odors. Based on these findings, we conclude that prior odor experience facilitates sparse representations of new odors in the mitral cell network through an experience-enhanced inhibition mechanism. PMID:26903819

  14. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NASA Astrophysics Data System (ADS)

    Van Loon, A. F.; Van Huijgevoort, M. H. J.; Van Lanen, H. A. J.

    2012-11-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought. To answer this question, we evaluated the simulation of drought propagation in an ensemble mean of ten large-scale models, both land-surface models and global hydrological models, that participated in the model intercomparison project of WATCH (WaterMIP). For a selection of case study areas, we studied drought characteristics (number of droughts, duration, severity), drought propagation features (pooling, attenuation, lag, lengthening), and hydrological drought typology (classical rainfall deficit drought, rain-to-snow-season drought, wet-to-dry-season drought, cold snow season drought, warm snow season drought, composite drought). Drought characteristics simulated by large-scale models clearly reflected drought propagation; i.e. drought events became fewer and longer when moving through the hydrological cycle. However, more differentiation was expected between fast and slowly responding systems, with slowly responding systems having fewer and longer droughts in runoff than fast responding systems. This was not found using large-scale models. Drought propagation features were poorly reproduced by the large-scale models, because runoff reacted immediately to precipitation, in all case study areas. This fast reaction to precipitation, even in cold climates in winter and in semi-arid climates in summer, also greatly influenced the hydrological drought typology as identified by the large-scale models. In general, the large-scale models had the correct representation of drought types, but the percentages of occurrence had some important mismatches, e.g. an overestimation of classical rainfall deficit droughts, and an underestimation of wet-to-dry-season droughts and

  15. Multi-Physics Feedback Simulations with Realistic Initial Conditions of the Formation of Star Clusters: From Large Scale Magnetized Clouds to Turbulent Clumps to Cores to Stars

    NASA Astrophysics Data System (ADS)

    Klein, R. I.; Li, P.; McKee, C. F.

    2015-10-01

    Multi-physics zoom-in adaptive mesh refinement simulations with feedback and realistic initial conditions, starting from large-scale turbulent molecular clouds, through the formation of clumps and cores, to the formation of stellar clusters, are presented. I summarize results at the different scales undergoing gravitational collapse, from cloud to core to cluster formation. Detailed comparisons with observations are made at each stage of the simulations. In particular, properties of the magnetized clumps are compared with recent observations of Crutcher et al. 2010 and Crutcher 2012 and the magnetic field orientation in cloud clumps relative to the global mean field of the inter-cloud medium (Li et al. 2009). The Initial Mass Function (IMF) obtained is compared with the Chabrier IMF and the protostellar mass function of the cluster is compared with different theories.

  16. Building a Large-Scale Computational Model of a Cortical Neuronal Network

    NASA Astrophysics Data System (ADS)

    Zemanová, Lucia; Zhou, Changsong; Kurths, Jürgen

    We introduce the general framework of the large-scale neuronal model used in the 5th Helmholtz Summer School — Complex Brain Networks. The main aim is to build a universal large-scale model of a cortical neuronal network, structured as a network of networks, which is flexible enough to implement different kinds of topology and neuronal models and which exhibits behavior in various dynamical regimes. First, we describe important biological aspects of brain topology and use them in the construction of a large-scale cortical network. Second, the general dynamical model is presented together with explanations of the major dynamical properties of neurons. Finally, we discuss the implementation of the model into parallel code and its possible modifications and improvements.

  17. Large scale groundwater modeling using globally available datasets: A test for the Rhine-Meuse basin

    NASA Astrophysics Data System (ADS)

    Sutanudjaja, Edwin H.; de Jong, Steven; van Geer, Frans C.; Bierkens, Marc F. P.

    2010-05-01

    Groundwater resources are vulnerable to global climate change and population growth. Therefore, monitoring and predicting groundwater change over large areas is imperative. However, large-scale groundwater models, especially those involving aquifers and basins that span multiple countries, are still rare due to a lack of hydro-geological data. Such data may be widely available in developed countries but are seldom available in other parts of the world. In this study, we propose a novel approach to construct large-scale groundwater models by using global datasets that are readily available. As the test-bed, we choose the combined Rhine-Meuse basin (total area: approximately 220000 km2), which contains ample data (e.g. groundwater head data) that can be used to verify the model output. However, while constructing the model, we use only globally available datasets such as the global GLCC land cover map [http://edc2.usgs.gov/glcc/glcc.php], global FAO soil map [1995], global lithological map of Dürr et al [2005], HydroSHEDS digital elevation map [Lehner et al, 2008], and global climatological datasets (e.g. the global CRU datasets [Mitchell and Jones, 2005 and New et al, 2002], ERA40 re-analysis data [Uppala et al, 2005], and ECMWF operational archive data [http://www.ecmwf.int/products/data/operational_system]). We started by building a distributed land surface model (1×1 km) to estimate groundwater recharge and river discharge. Then, a transient MODFLOW groundwater model was built and forced by the recharge and surface water levels calculated by the land surface model. We ran the models for the period 1970-2008. The current results are promising. The simulated river discharges compare well to the discharge observations, as indicated by the Nash-Sutcliffe model efficiency coefficients (68% for the Rhine and 50% for the Meuse). Moreover, the MODFLOW model can converge with realistic aquifer properties (i.e. transmissivities and storage coefficients) and can produce reasonable groundwater head
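
    The Nash-Sutcliffe efficiencies quoted above can be reproduced from paired simulated and observed discharge series with the standard formula; the sketch below is a generic implementation (the array values are made up for illustration and are not the Rhine or Meuse data).

        import numpy as np

        def nash_sutcliffe_efficiency(simulated, observed):
            """NSE = 1 - sum((Qsim - Qobs)^2) / sum((Qobs - mean(Qobs))^2).
            Returns 1 for a perfect fit, 0 when the model is no better than
            the observed mean, and negative values when it is worse."""
            simulated = np.asarray(simulated, dtype=float)
            observed = np.asarray(observed, dtype=float)
            residual_var = np.sum((simulated - observed) ** 2)
            observed_var = np.sum((observed - observed.mean()) ** 2)
            return 1.0 - residual_var / observed_var

        # Illustrative daily discharge series (m^3/s); real use would pass the
        # simulated discharge and the gauging-station observations.
        q_obs = np.array([2100.0, 2300.0, 2500.0, 2200.0, 2000.0])
        q_sim = np.array([2000.0, 2400.0, 2450.0, 2300.0, 1950.0])
        print(round(nash_sutcliffe_efficiency(q_sim, q_obs), 3))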

  18. Identification of large-scale genomic variation in cancer genomes using in silico reference models.

    PubMed

    Killcoyne, Sarah; Del Sol, Antonio

    2016-01-01

    Identifying large-scale structural variation in cancer genomes continues to be a challenge to researchers. Current methods rely on genome alignments based on a reference that can be a poor fit to highly variant and complex tumor genomes. To address this challenge we developed a method that uses available breakpoint information to generate models of structural variations. We use these models as references to align previously unmapped and discordant reads from a genome. By using these models to align unmapped reads, we show that our method can help to identify large-scale variations that have been previously missed. PMID:26264669
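
    As a rough sketch of the idea of turning breakpoint information into an alignment reference (not the authors' implementation; names and the flank length are illustrative), one can concatenate the flanking sequence on either side of a breakpoint pair into a model contig and then realign unmapped or discordant reads against it:

        def junction_reference(genome, bp_a, bp_b, flank=500):
            """Build a model sequence joining the left flank of breakpoint A to the
            right flank of breakpoint B, e.g. for a deletion or translocation.

            genome : dict mapping chromosome name -> sequence string
            bp_a, bp_b : (chromosome, 0-based position) tuples
            """
            chrom_a, pos_a = bp_a
            chrom_b, pos_b = bp_b
            left = genome[chrom_a][max(0, pos_a - flank):pos_a]
            right = genome[chrom_b][pos_b:pos_b + flank]
            return left + right   # unmapped reads are then realigned to this contig

        # Toy example: a deletion on "chr1" joining position 1000 to position 5000
        toy_genome = {"chr1": "ACGT" * 2500}
        model = junction_reference(toy_genome, ("chr1", 1000), ("chr1", 5000), flank=8)
        print(model)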

  19. Identification of large-scale genomic variation in cancer genomes using in silico reference models

    PubMed Central

    Killcoyne, Sarah; del Sol, Antonio

    2016-01-01

    Identifying large-scale structural variation in cancer genomes continues to be a challenge to researchers. Current methods rely on genome alignments based on a reference that can be a poor fit to highly variant and complex tumor genomes. To address this challenge we developed a method that uses available breakpoint information to generate models of structural variations. We use these models as references to align previously unmapped and discordant reads from a genome. By using these models to align unmapped reads, we show that our method can help to identify large-scale variations that have been previously missed. PMID:26264669

  20. On Applications of Rasch Models in International Comparative Large-Scale Assessments: A Historical Review

    ERIC Educational Resources Information Center

    Wendt, Heike; Bos, Wilfried; Goy, Martin

    2011-01-01

    Several current international comparative large-scale assessments of educational achievement (ICLSA) make use of "Rasch models", to address functions essential for valid cross-cultural comparisons. From a historical perspective, ICLSA and Georg Rasch's "models for measurement" emerged at about the same time, half a century ago. However, the…
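
    For readers unfamiliar with the term, the dichotomous Rasch model referred to here has the standard textbook form (a general statement, not taken from this article):

        P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}

    where \theta_p is the ability of person p and b_i the difficulty of item i; valid cross-cultural comparison rests on the item difficulties being invariant across the participating populations.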

  1. Measuring Growth in a Longitudinal Large-Scale Assessment with a General Latent Variable Model

    ERIC Educational Resources Information Center

    von Davier, Matthias; Xu, Xueli; Carstensen, Claus H.

    2011-01-01

    The aim of the research presented here is the use of extensions of longitudinal item response theory (IRT) models in the analysis and comparison of group-specific growth in large-scale assessments of educational outcomes. A general discrete latent variable model was used to specify and compare two types of multidimensional item-response-theory…

  2. On the Estimation of Hierarchical Latent Regression Models for Large-Scale Assessments

    ERIC Educational Resources Information Center

    Li, Deping; Oranje, Andreas; Jiang, Yanlin

    2009-01-01

    To find population proficiency distributions, a two-level hierarchical linear model may be applied to large-scale survey assessments such as the National Assessment of Educational Progress (NAEP). The model and parameter estimation are developed and a simulation was carried out to evaluate parameter recovery. Subsequently, both a hierarchical and…

  3. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    ERIC Educational Resources Information Center

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  4. An Alternative Way to Model Population Ability Distributions in Large-Scale Educational Surveys

    ERIC Educational Resources Information Center

    Wetzel, Eunike; Xu, Xueli; von Davier, Matthias

    2015-01-01

    In large-scale educational surveys, a latent regression model is used to compensate for the shortage of cognitive information. Conventionally, the covariates in the latent regression model are principal components extracted from background data. This operational method has several important disadvantages, such as the handling of missing data and…

  5. Reconciling subduction dynamics during Tethys closure with large-scale Asian tectonics: Insights from numerical modeling

    NASA Astrophysics Data System (ADS)

    Capitanio, F. A.; Replumaz, A.; Riel, N.

    2015-03-01

    We use three-dimensional numerical models to investigate the relation between subduction dynamics and large-scale tectonics of continent interiors. The models show how the balance between forces at the plate margins, such as subduction, ridge push, and far-field forces, controls the coupled evolution of the plate margins and interiors. Removal of part of the slab by lithospheric break-off during subduction destabilizes the convergent margin, forcing migration of the subduction zone, whereas in the upper plate large-scale lateral extrusion, rotations, and back-arc stretching ensue. When external forces are modeled, such as ridge push and far-field forces, indentation increases, with large collisional margin advance and thickening in the upper plate. The balance between margin and external forces leads to similar convergent margin evolutions, whereas major differences occur in the upper plate interiors. Here, three strain regimes are found: large-scale extrusion, extrusion and thickening along the collisional margin, and thickening only, when negligible far-field forces, ridge push, and larger far-field forces, respectively, add to the subduction dynamics. The extrusion tectonics develops a strong asymmetry toward the oceanic margin driven by large-scale subduction, with no need for preexisting heterogeneities in the upper plate. Because the slab break-off perturbation is transient, the ensuing plate tectonics is time-dependent. The modeled deformation and its evolution are remarkably similar to Cenozoic Asian tectonics, explaining large-scale lithospheric faulting and thickening, and coupling of indentation, extrusion and extension along the Asian convergent margin as a result of the large-scale subduction process.

  6. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

    The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R_p ≈ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
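
    For reference, the hierarchical amplitudes discussed here are conventionally defined from the two- and three-point correlation functions and the moments of the density field (standard definitions, not taken from this paper):

        \zeta(r_{12}, r_{23}, r_{31}) = Q_3 \left[ \xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12}) \right],
        \qquad S_3 = \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^2}

    where \xi and \zeta are the two- and three-point galaxy correlation functions and \delta is the density contrast; second-order perturbation theory with Gaussian initial conditions gives S_3 = 34/7 for an unsmoothed, unbiased field, so departures of the measured Q_3 and S_3 from such predictions constrain the bias parameter(s).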

  7. The three-point function as a probe of models for large-scale structure

    SciTech Connect

    Frieman, J.A.; Gaztanaga, E.

    1993-06-19

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ≈ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  8. The three-point function as a probe of models for large-scale structure

    SciTech Connect

    Frieman, J.A.; Gaztanaga, E.

    1993-06-19

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ≈ 20 h^-1 Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  9. Modeling haboob dust storms in large-scale weather and climate models

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Marsham, John H.; Panitz, Hans-Jürgen; Bischoff-Gauss, Ingeborg

    2016-03-01

    Recent field campaigns have shown that haboob dust storms, formed by convective cold pool outflows, contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in situ observations are sparse and haboobs are frequently concealed by clouds in satellite imagery. Furthermore, most large-scale weather and climate models lack haboobs, because they do not explicitly represent convection. Here a 1 year long model run with explicit representation of convection delivers the first full seasonal cycle of haboobs over northern Africa. Using conservative estimates, the model suggests that haboobs contribute one fifth of the annual dust-generating winds over northern Africa, one fourth between May and October, and one third over the western Sahel during this season. A simple parameterization of haboobs has recently been developed for models with parameterized convection, based on the downdraft mass flux of convection schemes. It is applied here to two model runs with different horizontal resolutions and assessed against the explicit run. The parameterization succeeds in capturing the geographical distribution of haboobs and their seasonal cycle over the Sahara and Sahel. It can be tuned to the different horizontal resolutions, and different formulations are discussed with respect to the frequency of extreme events. The results show that the parameterization is reliable and may solve a major and long-standing issue in simulating dust storms in large-scale weather and climate models.

  10. Development of a coupled soil erosion and large-scale hydrology modeling system

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil erosion models are usually limited in their application to the field-scale; however, the management of land resources requires information at the regional scale. Large-scale physically-based land surface schemes (LSS) provide estimates of regional scale hydrologic processes that contribute to e...

  11. Influence of a compost layer on the attenuation of 28 selected organic micropollutants under realistic soil aquifer treatment conditions: insights from a large scale column experiment.

    PubMed

    Schaffer, Mario; Kröger, Kerrin Franziska; Nödler, Karsten; Ayora, Carlos; Carrera, Jesús; Hernández, Marta; Licha, Tobias

    2015-05-01

    Soil aquifer treatment is widely applied to improve the quality of treated wastewater in its reuse as an alternative source of water. To gain a deeper understanding of the fate of the organic micropollutants introduced in this way, the attenuation of 28 compounds was investigated in column experiments using two large-scale column systems in duplicate. The influence of increasing proportions of solid organic matter (0.04% vs. 0.17%) and decreasing redox potentials (denitrification vs. iron reduction) was studied by introducing a layer of compost. Secondary effluent from a wastewater treatment plant was used as the water matrix for simulating soil aquifer treatment. For neutral and anionic compounds, sorption generally increases with compound hydrophobicity and with the solid organic matter content of the column system. Organic cations showed the highest attenuation. Among them, breakthroughs were only registered for the cationic beta-blockers atenolol and metoprolol. An enhanced degradation in the columns with an organic infiltration layer was observed for the majority of the compounds, suggesting improved degradation at higher levels of biodegradable dissolved organic carbon. Only the degradation of sulfamethoxazole could clearly be attributed to redox effects (when reaching iron-reducing conditions). The study provides valuable insights into the attenuation potential for a wide spectrum of organic micropollutants under realistic soil aquifer treatment conditions. Furthermore, the introduction of the compost layer generally showed positive effects on the removal of compounds preferentially degraded under reducing conditions and also increased the residence times in the soil aquifer treatment system via sorption. PMID:25723339

  12. Optimization of large-scale heterogeneous system-of-systems models.

    SciTech Connect

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  13. PLATO: data-oriented approach to collaborative large-scale brain system modeling.

    PubMed

    Kannon, Takayuki; Inagaki, Keiichiro; Kamiji, Nilton L; Makimura, Kouji; Usui, Shiro

    2011-11-01

    The brain is a complex information processing system, which can be divided into sub-systems, such as the sensory organs, functional areas in the cortex, and motor control systems. In this sense, most of the mathematical models developed in the field of neuroscience have mainly targeted a specific sub-system. In order to understand the details of the brain as a whole, such sub-system models need to be integrated toward the development of a neurophysiologically plausible large-scale system model. In the present work, we propose a model integration library where models can be connected by means of a common data format. Here, the common data format should be portable so that models written in any programming language, computer architecture, and operating system can be connected. Moreover, the library should be simple so that models can be adapted to use the common data format without requiring any detailed knowledge on its use. Using this library, we have successfully connected existing models reproducing certain features of the visual system, toward the development of a large-scale visual system model. This library will enable users to reuse and integrate existing and newly developed models toward the development and simulation of a large-scale brain system model. The resulting model can also be executed on high performance computers using Message Passing Interface (MPI). PMID:21767932

  14. Large-Scale Numerical Modeling of Melt and Solution Crystal Growth

    NASA Astrophysics Data System (ADS)

    Derby, Jeffrey J.; Chelikowsky, James R.; Sinno, Talid; Dai, Bing; Kwon, Yong-Il; Lun, Lisa; Pandy, Arun; Yeckel, Andrew

    2007-06-01

    We present an overview of mathematical models and their large-scale numerical solution for simulating different phenomena and scales in melt and solution crystal growth. Samples of both classical analyses and state-of-the-art computations are presented. It is argued that the fundamental multi-scale nature of crystal growth precludes any single modeling approach; rather, successful crystal growth modeling relies on an artful blend of rigor and practicality.

  15. An Efficient Simulation Environment for Modeling Large-Scale Cortical Processing

    PubMed Central

    Richert, Micah; Nageswaran, Jayram Moorkanikara; Dutt, Nikil; Krichmar, Jeffrey L.

    2011-01-01

    We have developed a spiking neural network simulator, which is both easy to use and computationally efficient, for the generation of large-scale computational neuroscience models. The simulator implements current- or conductance-based Izhikevich neuron networks with spike-timing-dependent plasticity and short-term plasticity. It uses a standard network construction interface. The simulator allows for execution on either GPUs or CPUs. The simulator, which is written in C/C++, allows for both fine-grain and coarse-grain specificity of a host of parameters. We demonstrate the ease of use and computational efficiency of this model by implementing a large-scale model of cortical areas V1, V4, and area MT. The complete model, which has 138,240 neurons and approximately 30 million synapses, runs in real-time on an off-the-shelf GPU. The simulator source code, as well as the source code for the cortical model examples, is publicly available. PMID:22007166
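
    The Izhikevich neuron mentioned above follows a standard two-variable form (Izhikevich, 2003); the single-neuron Euler integration below is a generic sketch with common regular-spiking parameters, not code from the simulator's own API.

        def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0, t_max=200.0):
            """Integrate dv/dt = 0.04 v^2 + 5 v + 140 - u + I and du/dt = a (b v - u),
            with reset v -> c, u -> u + d when v >= 30 mV. Returns spike times (ms)."""
            v, u = c, b * c
            spikes = []
            for step in range(int(t_max / dt)):
                v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
                u += dt * a * (b * v - u)
                if v >= 30.0:           # spike detected
                    spikes.append(step * dt)
                    v, u = c, u + d     # membrane reset
            return spikes

        print(izhikevich(I=10.0)[:5])   # first few spike times under constant input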

  16. Validating the runoff from the PRECIS model using a large-scale routing model

    NASA Astrophysics Data System (ADS)

    Cao, Lijuan; Dong, Wenjie; Xu, Yinlong; Zhang, Yong; Sparrow, Michael

    2007-09-01

    The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model, driven by 15 years (1979-1993) of ECMWF reanalysis data as the initial and lateral boundary conditions, and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers with a 1° × 1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast-to-northwest gradient distribution of the precipitation over the Yellow River basin. The PRECIS-LRM model combination has the capability to simulate the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow is generally coincident with the naturalized streamflow both in timing and in magnitude.

  17. Aspects of investigating STOL noise using large scale wind tunnel models

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.; Soderman, P. T.

    1972-01-01

    The applicability of the NASA Ames 40- by 80-ft wind tunnel for acoustic research on STOL concepts has been investigated. The acoustic characteristics of the wind tunnel test section have been studied with calibrated acoustic sources. Acoustic characteristics of several large-scale STOL models have been studied both in the free-field and wind tunnel acoustic environments. The results indicate that the acoustic characteristics of large-scale STOL models can be measured in the wind tunnel if the test section acoustic environment and model acoustic similitude are taken into consideration. The reverberant field of the test section must be determined with an acoustically similar noise source. Directional microphones and extrapolation of near-field data to the far field are some of the techniques being explored as possible solutions to the directivity loss in a reverberant field. The model sound pressure levels must be of sufficient magnitude to be discernible from the wind tunnel background noise.

  18. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    NASA Astrophysics Data System (ADS)

    Otterå, O. H.; Bentsen, M.; Bethke, I.; Kvamstø, N. G.

    2009-11-01

    The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  19. Modelling Convective Dust Storms in Large-Scale Weather and Climate Models

    NASA Astrophysics Data System (ADS)

    Pantillon, F.; Knippertz, P.; Marsham, J. H.; Panitz, H. J.; Bischoff-Gauss, I.

    2015-12-01

    Recent field campaigns have shown that convective dust storms - also known as haboobs or cold pool outflows - contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in-situ observations are sparse and convective dust storms are frequently concealed by clouds in satellite imagery. Therefore numerical models are often the only available source of information over the area. Here a regional climate model with explicit representation of convection delivers the first full seasonal cycle of convective dust storms over North Africa. The model suggests that they contribute one fifth of the annual dust uplift over North Africa, one fourth between May and October, and one third over the western Sahel during this season. In contrast, most large-scale weather and climate models do not explicitly represent convection and thus lack such storms.A simple parameterization of convective dust storms has recently been developed, based on the downdraft mass flux of convection schemes. The parameterization is applied here to a set of regional climate runs with different horizontal resolutions and convection schemes, and assessed against the explicit run and against sparse station observations. The parameterization succeeds in capturing the geographical distribution and seasonal cycle of convective dust storms. It can be tuned to different horizontal resolutions and convection schemes, although the details of the geographical distribution and seasonal cycle depend on the representation of the monsoon in the parent model. Different versions of the parameterization are further discussed with respect to differences in the frequency of extreme events. The results show that the parameterization is reliable and can therefore solve a long-standing problem in simulating dust storms in large-scale weather and climate models.

  20. Modelling Convective Dust Storms in Large-Scale Weather and Climate Models

    NASA Astrophysics Data System (ADS)

    Pantillon, Florian; Knippertz, Peter; Marsham, John H.; Panitz, Hans-Jürgen; Bischoff-Gauss, Ingeborg

    2016-04-01

    Recent field campaigns have shown that convective dust storms - also known as haboobs or cold pool outflows - contribute a significant fraction of dust uplift over the Sahara and Sahel in summer. However, in-situ observations are sparse and convective dust storms are frequently concealed by clouds in satellite imagery. Therefore numerical models are often the only available source of information over the area. Here a regional climate model with explicit representation of convection delivers the first full seasonal cycle of convective dust storms over North Africa. The model suggests that they contribute one fifth of the annual dust uplift over North Africa, one fourth between May and October, and one third over the western Sahel during this season. In contrast, most large-scale weather and climate models do not explicitly represent convection and thus lack such storms. A simple parameterization of convective dust storms has recently been developed, based on the downdraft mass flux of convection schemes. The parameterization is applied here to a set of regional climate runs with different horizontal resolutions and convection schemes, and assessed against the explicit run and against sparse station observations. The parameterization succeeds in capturing the geographical distribution and seasonal cycle of convective dust storms. It can be tuned to different horizontal resolutions and convection schemes, although the details of the geographical distribution and seasonal cycle depend on the representation of the monsoon in the parent model. Different versions of the parameterization are further discussed with respect to differences in the frequency of extreme events. The results show that the parameterization is reliable and can therefore solve a long-standing problem in simulating dust storms in large-scale weather and climate models.

  1. Large-scale hydrological modelling: Parameterisation of runoff generation with high-resolution topographical data

    NASA Astrophysics Data System (ADS)

    Gong, Lebing; Halldin, Sven; Xu, C.-Y.

    2010-05-01

    Runoff generation is one of the most important components in the hydrological cycle and in hydrological models at all spatial scales. The spatial distribution of the effective storage capacity accounts largely for the non-linearity of runoff generation dynamics. Many hydrological models account for this spatial variability of storage in terms of statistical distributions, and such models generally perform well. For example, both VIC and PDM account for the storage variability at the sub-grid level. Accounting for the storage distribution is especially important for large river basins, where varying land surface properties can mean large variations in both the average storage capacity and the shape of the storage-capacity distribution from one part of the basin to another. However, limited by the statistical approaches, the same runoff-generation parameters often have to be used everywhere in the basin. This is because it is harder to account for the spatial auto-correlation of those parameters than for their range alone. The Topmodel concept allows a linkage between the effective maximum storage capacity, or the maximum deficit, and the topography. It has the advantage of both a physically sound interpretation of the runoff generation mechanism and the generally good availability of topography data. However, the strict Topmodel assumptions may limit its application in parts of the world with deep groundwater systems or flat terrain. In this paper, we present a new runoff generation model designed for large-scale hydrology. The model relaxes the Topmodel assumptions and uses the topographic index only as a tool to distribute average storage to each topographic-index class. The maximum storage capacity is proportional to the range of the topographic index and is scaled by the recession parameter in Topmodel. The sub-cell distribution of storage capacity is obtained through topographic analysis. We then feed this topography
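
    A minimal sketch of the distribution step described above, assuming the topographic index lambda = ln(a / tan(beta)) has already been derived from the high-resolution DEM; the linear scaling by the recession parameter and all names are illustrative assumptions rather than the exact formulation of the paper.

        import numpy as np

        def storage_capacity_from_topographic_index(lam, m):
            """Distribute storage capacity over topographic-index classes.

            lam : array of topographic-index values, lambda = ln(a / tan(beta)),
                  one per DEM cell or class representative.
            m   : Topmodel recession parameter (units of storage, e.g. metres).

            Cells with a high index (wet, convergent areas) get a small storage
            deficit; cells with a low index get a deficit up to m * (max - min)."""
            lam = np.asarray(lam, dtype=float)
            return m * (lam.max() - lam)     # storage deficit before saturation

        # Illustrative index values for five cells and m = 0.03 m
        print(storage_capacity_from_topographic_index([4.0, 6.5, 8.0, 11.0, 14.0], m=0.03))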

  2. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    NASA Astrophysics Data System (ADS)

    Otterå, O. H.; Bentsen, M.; Bethke, I.; Kvamstø, N. G.

    2009-05-01

    The Bergen Climate Model (BCM) is a fully-coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustments and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM. Improved conservation properties in the ocean have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data except for summer where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover here. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  3. A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems

    NASA Astrophysics Data System (ADS)

    Rasekh, Ehsan

    2011-11-01

    Model reduction is an approach for fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reducing linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper, a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.
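
    For context, projection-based reduction of a linear state-space model replaces the full state with a low-dimensional subspace; the sketch below shows a plain Galerkin projection with an arbitrary orthonormal basis (the regression-based construction of the projection matrix proposed in the paper is not reproduced here, and all names and sizes are illustrative).

        import numpy as np

        def galerkin_reduce(A, B, C, V):
            """Project x' = A x + B u, y = C x onto x ~= V x_r with orthonormal V.
            Returns reduced matrices (A_r, B_r, C_r) of order k = V.shape[1]."""
            A_r = V.T @ A @ V
            B_r = V.T @ B
            C_r = C @ V
            return A_r, B_r, C_r

        # Illustrative full-order system (n = 100) reduced to k = 5 with a random
        # orthonormal basis; in practice V would come from Krylov, POD, or regression.
        rng = np.random.default_rng(0)
        n, k = 100, 5
        A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))
        B = rng.standard_normal((n, 1))
        C = rng.standard_normal((1, n))
        V, _ = np.linalg.qr(rng.standard_normal((n, k)))
        A_r, B_r, C_r = galerkin_reduce(A, B, C, V)
        print(A_r.shape, B_r.shape, C_r.shape)   # (5, 5) (5, 1) (1, 5)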

  4. Small parametric model for nonlinear dynamics of large scale cyclogenesis with wind speed variations

    NASA Astrophysics Data System (ADS)

    Erokhin, Nikolay; Shkevov, Rumen; Zolnikova, Nadezhda; Mikhailovskaya, Ludmila

    2016-07-01

    A numerical investigation of a self-consistent small parametric model (SPM) for regional large-scale cyclogenesis (RLSC) is performed, using coupled nonlinear equations for the mean wind speed and the ocean surface temperature in a tropical cyclone (TC). These equations can describe different scenarios of the temporal dynamics of a powerful atmospheric vortex during its full life cycle. The numerical calculations show that a suitable choice of the SPM's input parameters allows the seasonal behavior of regional large-scale cyclogenesis dynamics to be described for a given number of TCs during the active season. It is shown that the SPM can also describe wind speed variations inside the TC. Thus, using the nonlinear small parametric model, it is possible to study the features of the temporal dynamics of RLSC during the active season in a given region and to analyze the relationship between regional cyclogenesis parameters and external factors such as space weather, including the solar activity level and cosmic ray variations.

  5. A PRACTICAL ONTOLOGY FOR THE LARGE-SCALE MODELING OF SCHOLARLY ARTIFACTS AND THEIR USAGE

    SciTech Connect

    RODRIGUEZ, MARKO A.; BOLLEN, JOHAN; VAN DE SOMPEL, HERBERT

    2007-01-30

    The large-scale analysis of scholarly artifact usage is constrained primarily by current practices in usage data archiving, privacy issues concerned with the dissemination of usage data, and the lack of a practical ontology for modeling the usage domain. As a remedy to the third constraint, this article presents a scholarly ontology that was engineered to represent those classes for which large-scale bibliographic and usage data exist, supports usage research, and whose instantiation is scalable to the order of 50 million articles along with their associated artifacts (e.g. authors and journals) and an accompanying 1 billion usage events. The real-world instantiation of the presented abstract ontology is a semantic network model of the scholarly community which lends the scholarly process to statistical analysis and computational support. The authors present the ontology, discuss its instantiation, and provide some example inference rules for calculating various scholarly artifact metrics.

  6. Impacts of Large-Scale Circulation on Convection: A 2-D Cloud Resolving Model Study

    NASA Technical Reports Server (NTRS)

    Li, X; Sui, C.-H.; Lau, K.-M.

    1999-01-01

    Studies of the impacts of large-scale circulation on convection, and of the roles of convection in heat and water balances over the tropical region, are fundamentally important for understanding global climate change. Heat and water budgets over a warm pool (SST = 29.5 C) and a cold pool (SST = 26 C) were analyzed based on simulations with the two-dimensional cloud resolving model. Here the sensitivity of the heat and water budgets to different sizes of the warm and cold pools is examined.

  7. Computational fluid dynamics simulations of particle deposition in large-scale, multigenerational lung models.

    PubMed

    Walters, D Keith; Luke, William H

    2011-01-01

    Computational fluid dynamics (CFD) has emerged as a useful tool for the prediction of airflow and particle transport within the human lung airway. Several published studies have demonstrated the use of Eulerian finite-volume CFD simulations coupled with Lagrangian particle tracking methods to determine local and regional particle deposition rates in small subsections of the bronchopulmonary tree. However, the simulation of particle transport and deposition in large-scale models encompassing more than a few generations is less common, due in part to the sheer size and complexity of the human lung airway. Highly resolved, fully coupled flowfield solution and particle tracking in the entire lung, for example, is currently an intractable problem and will remain so for the foreseeable future. This paper adopts a previously reported methodology for simulating large-scale regions of the lung airway (Walters, D. K., and Luke, W. H., 2010, "A Method for Three-Dimensional Navier-Stokes Simulations of Large-Scale Regions of the Human Lung Airway," ASME J. Fluids Eng., 132(5), p. 051101), which was shown to produce results similar to fully resolved geometries using approximate, reduced geometry models. The methodology is extended here to particle transport and deposition simulations. Lagrangian particle tracking simulations are performed in combination with Eulerian simulations of the airflow in an idealized representation of the human lung airway tree. Results using the reduced models are compared with those using the fully resolved models for an eight-generation region of the conducting zone. The agreement between fully resolved and reduced geometry simulations indicates that the new method can provide an accurate alternative for large-scale CFD simulations while potentially reducing the computational cost of these simulations by several orders of magnitude. PMID:21186893
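
    As a rough illustration of the Lagrangian tracking step (a simplification of the drag laws and geometries used in lung-deposition CFD; the uniform flow field and all names are assumptions), a spherical particle can be advanced through a known air velocity field with Stokes drag:

        import numpy as np

        MU_AIR = 1.8e-5        # dynamic viscosity of air, Pa*s

        def track_particle(x0, v0, fluid_velocity, d_p, rho_p, dt, n_steps):
            """Advance a spherical particle with Stokes drag:
                m dv/dt = 3 * pi * mu * d_p * (u_fluid(x) - v)
            fluid_velocity(x) returns the local air velocity (e.g. from a CFD field)."""
            m = rho_p * np.pi * d_p ** 3 / 6.0         # particle mass
            x, v = np.array(x0, float), np.array(v0, float)
            for _ in range(n_steps):
                drag = 3.0 * np.pi * MU_AIR * d_p * (fluid_velocity(x) - v)
                v = v + dt * drag / m
                x = x + dt * v
            return x, v

        # Illustrative: a 4-micron particle released at rest in a uniform 1 m/s flow
        uniform_flow = lambda x: np.array([1.0, 0.0, 0.0])
        print(track_particle([0, 0, 0], [0, 0, 0], uniform_flow,
                             d_p=4e-6, rho_p=1000.0, dt=1e-5, n_steps=200))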

  8. Large-scale brain networks and psychopathology: a unifying triple network model.

    PubMed

    Menon, Vinod

    2011-10-01

    The science of large-scale brain networks offers a powerful paradigm for investigating cognitive and affective dysfunction in psychiatric and neurological disorders. This review examines recent conceptual and methodological developments which are contributing to a paradigm shift in the study of psychopathology. I summarize methods for characterizing aberrant brain networks and demonstrate how network analysis provides novel insights into dysfunctional brain architecture. Deficits in access, engagement and disengagement of large-scale neurocognitive networks are shown to play a prominent role in several disorders including schizophrenia, depression, anxiety, dementia and autism. Synthesizing recent research, I propose a triple network model of aberrant saliency mapping and cognitive dysfunction in psychopathology, emphasizing the surprising parallels that are beginning to emerge across psychiatric and neurological disorders. PMID:21908230

  9. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    NASA Technical Reports Server (NTRS)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use agent-based models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for reducing capital expenses. The models used in this paper employ computational algorithms and procedural implementations developed in Matlab to simulate agent-based models, and they are run on computing clusters that provide the high-performance parallel computation needed to execute the programs. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  10. Large-scale multi-configuration electromagnetic induction: a promising tool to improve hydrological models

    NASA Astrophysics Data System (ADS)

    von Hebel, Christian; Rudolph, Sebastian; Mester, Achim; Huisman, Johan A.; Montzka, Carsten; Weihermüller, Lutz; Vereecken, Harry; van der Kruk, Jan

    2015-04-01

    Large-scale multi-configuration electromagnetic induction (EMI) uses different coil configurations, i.e., coil offsets and coil orientations, to sense coil-specific depth volumes. The obtained apparent electrical conductivity (ECa) maps can be related to soil properties such as clay content, soil water content, and pore water conductivity, which are important characteristics influencing hydrological processes. Here, we use large-scale EMI measurements to investigate changes in soil texture that drive the available water supply, causing the crop development patterns that were observed in leaf area index (LAI) maps obtained from RapidEye satellite images taken after a drought period. The 20 ha test site is situated within the Ellebach catchment (Germany) and consists of a sand-and-gravel dominated upper terrace (UT) and a loamy lower terrace (LT). The large-scale multi-configuration EMI measurements were calibrated using electrical resistivity tomography (ERT) measurements at selected transects, and soil samples were taken at representative locations where changes in the electrical conductivity were observed and therefore changing soil properties were expected. By analyzing all the data, the observed LAI patterns could be attributed to buried paleo-river channel systems that contained a higher silt and clay content and provided a higher water holding capacity than the surrounding coarser material. Moreover, the measured EMI data showed the highest correlation with LAI for the deepest sensing coil offset (up to 1.9 m), which indicates that the deeper subsoil is responsible for root water uptake especially under drought conditions. To obtain a layered subsurface electrical conductivity model that shows the subsurface structures more clearly, a novel EMI inversion scheme was applied to the field data. The obtained electrical conductivity distributions were validated with soil probes and ERT transects that confirmed the inverted lateral and vertical large-scale electrical

  11. Cooling biogeophysical effect of large-scale tropical deforestation in three Earth System models

    NASA Astrophysics Data System (ADS)

    Brovkin, Victor; Pugh, Thomas; Robertson, Eddy; Bathiany, Sebastian; Arneth, Almut; Jones, Chris

    2015-04-01

    Vegetation cover in the tropics is limited by moisture availability. Since transpiration from forests is much greater than from grasslands, the sensitivity of precipitation in the Amazon to large-scale deforestation has long been seen as a critical parameter of climate-vegetation interactions. Most Amazon deforestation experiments to date have been performed with interactive land-atmosphere models but prescribed sea surface temperatures (SSTs). They reveal a strong reduction in evapotranspiration and precipitation, and an increase in global air surface temperature due to reduced latent heat flux. We performed large-scale tropical deforestation experiments with three Earth system models (ESMs) including interactive ocean models, which participated in the FP7 project EMBRACE. In response to tropical deforestation, all models simulate a significant reduction in tropical precipitation, similar to the experiments with prescribed SSTs. However, all three models suggest that the global temperature response to deforestation is a cooling or no change, in contrast to the global warming found in prescribed-SST runs. Presumably, changes in the hydrological cycle and in the water vapor feedback due to deforestation operate in the direction of a global cooling. In addition, one of the models simulates a local cooling over the deforested tropical region. This is opposite to the local warming in the other models. This suggests that the balance between warming due to latent heat flux decrease and cooling due to albedo increase is rather subtle and model-dependent. Last but not least, we suggest using large-scale deforestation as a standard biogeophysical experiment for model intercomparison, for example, within the CMIP6 framework.

  12. Image fusion for remote sensing using fast, large-scale neuroscience models

    NASA Astrophysics Data System (ADS)

    Brumby, Steven P.

    2011-05-01

    We present results with large-scale neuroscience-inspired models for feature detection using multi-spectral visible/ infrared satellite imagery. We describe a model using an artificial neural network architecture and learning rules to build sparse scene representations over an adaptive dictionary, fusing spectral and spatial textural characteristics of the objects of interest. Our results with fast codes implemented on clusters of graphical processor units (GPUs) suggest that visual cortex models are a promising approach to practical pattern recognition problems in remote sensing, even for datasets using spectral bands not found in natural visual systems.

  13. Modeling dynamic functional information flows on large-scale brain networks.

    PubMed

    Lv, Peili; Guo, Lei; Hu, Xintao; Li, Xiang; Jin, Changfeng; Han, Junwei; Li, Lingjiang; Liu, Tianming

    2013-01-01

    Growing evidence from the functional neuroimaging field suggests that human brain functions are realized via dynamic functional interactions on large-scale structural networks. Even in resting state, functional brain networks exhibit remarkable temporal dynamics. However, computational modeling of such dynamic functional information flows on large-scale brain networks has rarely been explored. In this paper, we present a novel computational framework to explore this problem using multimodal resting state fMRI (R-fMRI) and diffusion tensor imaging (DTI) data. Basically, recent literature reports including our own studies have demonstrated that the resting state brain networks dynamically undergo a set of distinct brain states. Within each quasi-stable state, functional information flows from one set of structural brain nodes to other sets of nodes, analogous to message-package routing on the Internet from a source node to a destination node. Therefore, based on the large-scale structural brain networks constructed from DTI data, we employ a dynamic programming strategy to infer functional information transition routines on structural networks, based on which hub routers that most frequently participate in these routines are identified. It is interesting that a majority of those hub routers are located within the default mode network (DMN), revealing a possible mechanism of the critical functional hub roles played by the DMN in resting state. Also, application of this framework on a post-traumatic stress disorder (PTSD) dataset demonstrated interesting differences in hub router distributions between PTSD patients and healthy controls. PMID:24579202
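
    The framework above infers information-transition routes on DTI-derived structural networks with a dynamic programming strategy. As a much-simplified stand-in, the Python sketch below uses shortest paths on a toy graph (via networkx) to count how often each node acts as an intermediate "router" for a set of hypothetical state transitions; the graph, the transition pairs, and the substitution of shortest paths for the authors' dynamic programming scheme are all illustrative assumptions.

```python
from collections import Counter
import networkx as nx

def hub_router_counts(G, transitions):
    """Count how often each node serves as an intermediate node ("router")
    on shortest paths between observed (source, destination) pairs."""
    counts = Counter()
    for src, dst in transitions:
        try:
            path = nx.shortest_path(G, source=src, target=dst, weight="weight")
        except nx.NetworkXNoPath:
            continue
        for node in path[1:-1]:   # exclude the endpoints themselves
            counts[node] += 1
    return counts

# Toy structural network and hypothetical state-transition pairs.
G = nx.erdos_renyi_graph(20, 0.2, seed=1)
transitions = [(0, 15), (3, 18), (0, 18), (5, 15)]
print(hub_router_counts(G, transitions).most_common(3))
```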

  14. Oscillations in large-scale cortical networks: map-based model.

    PubMed

    Rulkov, N F; Timofeev, I; Bazhenov, M

    2004-01-01

    We develop a new computationally efficient approach for the analysis of complex large-scale neurobiological networks. Its key element is the use of a new phenomenological model of a neuron capable of replicating important spike pattern characteristics and designed in the form of a system of difference equations (a map). We developed a set of map-based models that replicate spiking activity of cortical fast spiking, regular spiking and intrinsically bursting neurons. Interconnected with synaptic currents these model neurons demonstrated responses very similar to those found with Hodgkin-Huxley models and in experiments. We illustrate the efficacy of this approach in simulations of one- and two-dimensional cortical network models consisting of regular spiking neurons and fast spiking interneurons to model sleep and activated states of the thalamocortical system. Our study suggests that map-based models can be widely used for large-scale simulations and that such models are especially useful for tasks where the modeling of specific firing patterns of different cell classes is important. PMID:15306740
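
    The map-based neurons referred to above are defined by two-dimensional systems of difference equations. The Python sketch below iterates one widely used map of this family (a simplified Rulkov-type map with a fast, membrane-like variable x and a slow variable y); the specific map and parameter values are illustrative, and the published cortical models use more elaborate variants tuned to reproduce particular firing classes.

```python
import numpy as np

def rulkov_map(alpha=4.5, mu=0.001, sigma=-1.0, n_steps=5000):
    """Iterate a simplified Rulkov-type two-dimensional map neuron.

    x is the fast (membrane-like) variable, y the slow variable. Depending on
    alpha and sigma the trajectory is silent, tonically spiking, or bursting.
    """
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = -1.0, -3.5
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]
        y[n + 1] = y[n] - mu * (x[n] - sigma)
    return x, y

x, _ = rulkov_map()
print("max/min of fast variable:", x.max(), x.min())
```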

  15. UDEC-AUTODYN Hybrid Modeling of a Large-Scale Underground Explosion Test

    NASA Astrophysics Data System (ADS)

    Deng, X. F.; Chen, S. G.; Zhu, J. B.; Zhou, Y. X.; Zhao, Z. Y.; Zhao, J.

    2015-03-01

    In this study, numerical modeling of a large-scale decoupled underground explosion test with 10 tons of TNT in Älvdalen, Sweden is performed by combining DEM and FEM with codes UDEC and AUTODYN. AUTODYN is adopted to model the explosion process, blast wave generation, and its action on the explosion chamber surfaces, while UDEC modeling is focused on shock wave propagation in jointed rock masses surrounding the explosion chamber. The numerical modeling results with the hybrid AUTODYN-UDEC method are compared with empirical estimations, purely AUTODYN modeling results, and the field test data. It is found that in terms of peak particle velocity, empirical estimations are much smaller than the measured data, while purely AUTODYN modeling results are larger than the test data. The UDEC-AUTODYN numerical modeling results agree well with the test data. Therefore, the UDEC-AUTODYN method is appropriate in modeling a large-scale explosive detonation in a closed space and the following wave propagation in jointed rock masses. It should be noted that joint mechanical and spatial properties adopted in UDEC-AUTODYN modeling are determined with empirical equations and available geological data, and they may not be sufficiently accurate.

  16. Downscaling large-scale climate variability using a regional climate model: the case of ENSO over Southern Africa

    NASA Astrophysics Data System (ADS)

    Boulard, Damien; Pohl, Benjamin; Crétat, Julien; Vigaud, Nicolas; Pham-Xuan, Thanh

    2013-03-01

    This study documents methodological issues arising when downscaling modes of large-scale atmospheric variability with a regional climate model over a remote region that is nonetheless under their influence. The case study retained is El Niño Southern Oscillation and its impacts on Southern Africa and the South West Indian Ocean. Regional simulations are performed with the WRF model, driven laterally by ERA40 reanalyses over the 1971-1998 period. We document the sensitivity of simulated climate variability to the model physics, the constraint of relaxing the model solutions towards reanalyses, the size of the relaxation buffer zone towards the lateral forcings and the forcing fields through ERA-Interim driven simulations. The model's internal variability is quantified using 15-member ensemble simulations for seasons of interest, single 30-year integrations proving inappropriate for investigating the simulated interannual variability properly. The influence of SST prescription is also assessed through additional integrations using a simple ocean mixed-layer model. Results show limited skill of the model in reproducing the seasonal droughts associated with El Niño conditions. The model deficiencies are found to result from biased atmospheric forcings and/or biased response to these forcings, whatever the physical package retained. In contrast, regional SST forcing over adjacent oceans favors realistic rainfall anomalies over the continent, although their amplitude remains too weak. These results confirm the significant contribution of nearby ocean SST to the regional effects of ENSO, but also illustrate that regionalizing large-scale climate variability can be a demanding exercise.

  17. Nengo: a Python tool for building large-scale functional brain models

    PubMed Central

    Bekolay, Trevor; Bergstra, James; Hunsberger, Eric; DeWolf, Travis; Stewart, Terrence C.; Rasmussen, Daniel; Choo, Xuan; Voelker, Aaron Russell; Eliasmith, Chris

    2014-01-01

    Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results. PMID:24431999
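
    Since Nengo 2.0 is a Python package, a minimal example written in the style of its documented API may help convey what model construction looks like. The sketch below (assuming the nengo package is installed; ensemble sizes, the input signal, and the decoded function are illustrative, and exact argument names may vary between releases) builds two ensembles, decodes a squaring function on the connection between them, and probes the filtered output.

```python
import numpy as np
import nengo

# A sine-wave input represented by one ensemble and squared by a second.
with nengo.Network() as model:
    stim = nengo.Node(output=lambda t: np.sin(2 * np.pi * t))
    a = nengo.Ensemble(n_neurons=100, dimensions=1)
    b = nengo.Ensemble(n_neurons=100, dimensions=1)
    nengo.Connection(stim, a)
    nengo.Connection(a, b, function=lambda x: x ** 2)  # decode x^2 on the connection
    probe = nengo.Probe(b, synapse=0.01)               # low-pass filtered readout

with nengo.Simulator(model) as sim:
    sim.run(1.0)

print(sim.trange()[:5], sim.data[probe][:5, 0])
```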

  18. Formation and disruption of tonotopy in a large-scale model of the auditory cortex.

    PubMed

    Tomková, Markéta; Tomek, Jakub; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, Cyril

    2015-10-01

    There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100 000 Izhikevich's spiking neurons of 17 different types, almost 21 million synapses, which are evolved according to Spike-Timing-Dependent Plasticity (STDP) and have an architecture akin to existing observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena via analysing the activity of neuronal subtypes and testing different causal interventions into the simulation. Our model is able to produce experimental predictions on a cell type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulations during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As the STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available. PMID:26344164
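
    The only plasticity mechanism in the model above is STDP. For reference, the Python sketch below implements the standard pair-based additive STDP rule with exponential windows; the time constants, learning rates, and weight bound are generic textbook values, not the parameters used in the auditory-cortex model described above.

```python
import numpy as np

def stdp_update(w, dt_ms, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_max=1.0):
    """Pair-based additive STDP: dt_ms = t_post - t_pre.

    Positive dt (pre before post) potentiates; negative dt depresses.
    """
    if dt_ms >= 0:
        dw = a_plus * np.exp(-dt_ms / tau_plus)
    else:
        dw = -a_minus * np.exp(dt_ms / tau_minus)
    return float(np.clip(w + dw, 0.0, w_max))

# Example: a synapse repeatedly seeing pre-before-post pairings strengthens.
w = 0.5
for _ in range(100):
    w = stdp_update(w, dt_ms=5.0)
print(w)
```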

  19. Realistic modeling of neurons and networks: towards brain simulation

    PubMed Central

    D’Angelo, Egidio; Solinas, Sergio; Garrido, Jesus; Casellato, Claudia; Pedrocchi, Alessandra; Mapelli, Jonathan; Gandolfi, Daniela; Prestori, Francesca

    Realistic modeling is a new advanced methodology for investigating brain functions. Realistic modeling is based on a detailed biophysical description of neurons and synapses, which can be integrated into microcircuits. The latter can, in turn, be further integrated to form large-scale brain networks and eventually to reconstruct complex brain systems. Here we provide a review of the realistic simulation strategy and use the cerebellar network as an example. This network has been carefully investigated at molecular and cellular level and has been the object of intense theoretical investigation. The cerebellum is thought to lie at the core of the forward controller operations of the brain and to implement timing and sensory prediction functions. The cerebellum is well described and provides a challenging field in which one of the most advanced realistic microcircuit models has been generated. We illustrate how these models can be elaborated and embedded into robotic control systems to gain insight into how the cellular properties of cerebellar neurons emerge in integrated behaviors. Realistic network modeling opens up new perspectives for the investigation of brain pathologies and for the neurorobotic field. PMID:24139652

  20. Impact of structural heterogeneity on upscaled models for large-scale CO2 migration and trapping in saline aquifers

    NASA Astrophysics Data System (ADS)

    Gasda, Sarah E.; Nilsen, Halvor M.; Dahle, Helge K.

    2013-12-01

    Structural heterogeneity of the caprock surface influences both migration patterns and trapping efficiency for CO2 injected in open saline aquifers. Understanding these mechanisms relies on appropriate modeling tools to simulate CO2 flow over hundreds of square kilometers and several hundred years during the postinjection period. Vertical equilibrium (VE) models are well suited for this purpose. However, topographical heterogeneity below the scale of model resolution requires upscaling, for example by using traditional flow-based homogenization techniques. This can significantly simplify the geologic model and reduce computational effort while still capturing the relevant physical processes. In this paper, we identify key structural parameters, such as dominant amplitude and wavelength of the traps, that determine the form of the upscaled constitutive functions. We also compare the strength of these geologic controls on CO2 migration and trapping to other mechanisms such as capillarity. This allows for a better understanding of the dominant physical processes and their impact on storage security. It also provides intuition on which upscaling approach is best suited for the system of interest. We apply these concepts to realistic structurally heterogeneous surfaces that have been developed using different geologic depositional models. We show that while amplitude is important for determining the amount of CO2 trapped, the spacing between the traps, the distribution of spillpoint locations, and the large-scale formation dip angle affect the shape of the functions and thus the dynamics of plume migration. We also show for these cases that the topography characterized by shorter wavelength features is better suited for upscaling, while the longer wavelength surface can be sufficiently resolved. These results can inform the type of geological characterization that is required to build the most reliable upscaled models for large-scale CO2 migration.

  1. Hyper-Resolution Large Scale Flood Inundation Modeling: Development of AutoRAPID Model

    NASA Astrophysics Data System (ADS)

    Tavakoly, A. A.; Follum, M. L.; Wahl, M.; Snow, A.

    2015-12-01

    Streamflow and the resultant flood inundation are defining elements in large scale flood analyses. High-fidelity prediction of flood inundation risk requires hydrologic and hydrodynamic modeling at hyper-resolution (<100 m) scales. Using spatiotemporal data from climate models as the driver, we couple a continental scale river routing model known as Routing Application for Parallel ComputatIon of Discharge (RAPID) with a regional scale flood delineation model called AutoRoute to estimate flood extents. We demonstrate how the coupled tool, referred to as AutoRAPID, can quickly and efficiently simulate flood extents using a high resolution dataset (~10 m) at the regional scale (> 100,000 km2). The AutoRAPID framework is implemented over 230,000 km2 in the Midwestern United States (between latitude 38°N and 44°N, and longitude 86°W to 91°W, approximately 8% of the Mississippi River Basin) using a 10 m DEM. We generate the flood inundation map over the entire area for a June 2008 flood event. The model is compared with observed data at five select locations: Spencer, IN; Newberry, IN; Gays Mills, WI; Ft. Atkinson, WI; and Janesville, WI. We show that the model results generally agree well with observed flow and flood inundation data and suggest that the AutoRAPID model can be considered for several potential applications, such as: forecasting flow and flood inundation; generating flood recurrence maps using high-resolution vector river data; and supporting emergency management applications to protect/evacuate large areas when time is limited and data are sparse.

  2. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data.

    PubMed

    Nogaret, Alain; Meliza, C Daniel; Margoliash, Daniel; Abarbanel, Henry D I

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20-50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157
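
    The assimilation described above fits a nine-conductance neuron model to voltage recordings with large-scale constrained nonlinear optimization. The Python sketch below shows the same idea on a drastically reduced problem: a leaky-integrator model is fitted to a synthetic noisy voltage trace with bounded optimization (scipy's L-BFGS-B standing in for the interior-point search described in the record); the model equations, current protocol, parameter values, and bounds are all illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_voltage(params, i_inj, dt=0.1, v0=-65.0):
    """Leaky-integrator stand-in for a conductance-based neuron model."""
    g_leak, e_leak, c_m = params
    v = np.empty_like(i_inj)
    v[0] = v0
    for k in range(len(i_inj) - 1):
        dv = (-g_leak * (v[k] - e_leak) + i_inj[k]) / c_m
        v[k + 1] = v[k] + dt * dv
    return v

# Synthetic "recording": a current step plus observation noise.
rng = np.random.default_rng(0)
i_inj = np.where((np.arange(2000) > 500) & (np.arange(2000) < 1500), 2.0, 0.0)
true_params = (0.1, -70.0, 1.0)
v_obs = simulate_voltage(true_params, i_inj) + rng.normal(0, 0.5, size=2000)

def cost(p):
    return np.mean((simulate_voltage(p, i_inj) - v_obs) ** 2)

res = minimize(cost, x0=(0.05, -60.0, 0.5), method="L-BFGS-B",
               bounds=[(1e-3, 1.0), (-90.0, -50.0), (0.1, 5.0)])
print(res.x)  # recovered (g_leak, e_leak, c_m)
```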

  3. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    PubMed Central

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.

    2016-01-01

    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight. PMID:27605157

  4. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    NASA Astrophysics Data System (ADS)

    Peixoto, Tiago P.

    2015-01-01

    The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.

  5. Seemingly unrelated intervention time series models for effectiveness evaluation of large scale environmental remediation.

    PubMed

    Ip, Ryan H L; Li, W K; Leung, Kenneth M Y

    2013-09-15

    Large scale environmental remediation projects applied to sea water always involve large amounts of capital investment. Rigorous effectiveness evaluations of such projects are, therefore, necessary and essential for policy review and future planning. This study aims at investigating the effectiveness of environmental remediation using three different Seemingly Unrelated Regression (SUR) time series models with intervention effects, including Model (1) assuming no correlation within and across variables, Model (2) assuming no correlation across variables but allowing correlations within a variable across different sites, and Model (3) allowing all possible correlations among variables (i.e., an unrestricted model). The results suggested that the unrestricted SUR model is the most reliable one, consistently having the smallest variation in the estimated model parameters. We discussed our results with reference to marine water quality management in Hong Kong while bringing managerial issues into consideration. PMID:23932418

  6. Aerodynamic characteristics of a large scale model with a swept wing and augmented jet flap

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Koenig, D. G.

    1971-01-01

    Data of tests of a large-scale swept augmentor wing model in the 40- by 80-foot wind tunnel are presented. The data includes longitudinal characteristics with and without a horizontal tail as well as results of preliminary investigation of lateral-directional characteristics. The augmentor flap deflection was varied from 0 deg to 70.6 deg at isentropic jet thrust coefficients of 0 to 1.47. The tests were made at a Reynolds number from 2.43 to 4.1 times one million.

  7. Reconstruction of large-scale gene regulatory networks using Bayesian model averaging.

    PubMed

    Kim, Haseong; Gelenbe, Erol

    2012-09-01

    Gene regulatory networks provide a systematic view of molecular interactions in a complex living system. However, constructing large-scale gene regulatory networks is one of the most challenging problems in systems biology. In addition, large sets of biological data require a proper integration technique for reliable gene regulatory network construction. Here we present a new reverse engineering approach based on Bayesian model averaging which attempts to combine all the appropriate models describing interactions among genes. This Bayesian approach with a prior based on the Gibbs distribution provides an efficient means to integrate multiple sources of biological data. In a simulation study with a maximum of 2000 genes, our method shows better sensitivity than previous elastic-net and Gaussian graphical models, with a fixed specificity of 0.99. The study also shows that the proposed method outperforms the other standard methods for a DREAM dataset generated by nonlinear stochastic models. In brain tumor data analysis, three large-scale networks consisting of 4422 genes were built using the gene expression of non-tumor, low and high grade tumor mRNA expression samples, along with DNA-protein binding affinity information. We found that genes having a large variation of degree distribution among the three tumor networks are the ones most involved in regulatory and developmental processes, which possibly gives novel insight beyond conventional differentially expressed gene analysis. PMID:22987132

  8. Aspects of investigating STOL noise using large-scale wind-tunnel models.

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Soderman, P. T.; Koenig, D. G.

    1973-01-01

    The applicability of the NASA Ames 40- by 80-foot wind tunnel for acoustic research on STOL concepts has been investigated. The acoustic characteristics of the wind-tunnel test section have been studied with calibrated acoustic sources. Acoustic characteristics of several large-scale STOL models have been studied in both the free-field and wind-tunnel acoustic environments. The results of these studies indicate that the acoustic characteristics of large-scale STOL models can be measured in the wind tunnel if the test section acoustic environment and model acoustic similitude are taken into consideration. The reverberant field of the test section must be determined with an acoustically similar noise source. A directional microphone, a phased array of microphones, and extrapolation of near-field data to far-field are some of the techniques being explored as possible solutions to the directivity loss in a reverberant field. The model sound pressure levels must be of sufficient magnitude to be distinguishable from the wind-tunnel background noise.

  9. Sensitivity analysis of key components in large-scale hydroeconomic models

    NASA Astrophysics Data System (ADS)

    Medellin-Azuara, J.; Connell, C. R.; Lund, J. R.; Howitt, R. E.

    2008-12-01

    This paper explores the likely impact of different estimation methods in key components of hydro-economic models such as hydrology and economic costs or benefits, using the CALVIN hydro-economic optimization for water supply in California. We perform our analysis using two climate scenarios: historical and warm-dry. The components compared were perturbed hydrology using six versus eighteen basins, highly-elastic urban water demands, and different valuations of agricultural water scarcity. Results indicate that large-scale hydro-economic models are often rather robust to a variety of estimation methods for ancillary models and components. Increasing the level of detail in the hydrologic representation of this system might not greatly affect overall estimates of climate and its effects and adaptations for California's water supply. More price-responsive urban water demands will have a limited role in allocating water optimally among competing uses. Different estimation methods for the economic value of water and scarcity in agriculture may influence economically optimal water allocation; however, land conversion patterns may have a stronger influence on this allocation. Overall, optimization results of large-scale hydro-economic models remain useful across a wide range of assumptions for eliciting promising water management alternatives.

  10. Meteorological and photochemical modeling of large-scale albedo changes in the South Coast Air Basin

    SciTech Connect

    Tran, K.T.; Mirabella, V.A.

    1998-12-31

    The effectiveness of large-scale surface albedo changes as an ozone control strategy is investigated. These albedo changes are part of the Cool Communities strategy that calls for the use of lighter colored roofing and paving materials as well as an increase in tree planting. The advanced mesoscale model MM5 was used to analyze the associated effects on ambient temperature, mixing depth and winds. The MM5 model was modified to accept surface properties derived from a satellite-based land use database. Preprocessors were also developed to allow a research-oriented model such as MM5 to be user friendly and amenable to practical, routine air quality modeling applications. Changes in ozone air quality are analyzed with the Urban Airshed Model (UAM). Results of the MM5/UAM simulations of the SCAQS August 26--28, 1987 ozone episode are presented and compared to those obtained with the CSUMM/UAM models.

  11. Development of a coupled soil erosion and large-scale hydrology modeling system

    NASA Astrophysics Data System (ADS)

    Mao, Dazhi; Cherkauer, Keith A.; Flanagan, Dennis C.

    2010-08-01

    Soil erosion models are usually limited in their application to the field scale; however, the management of land resources requires information at the regional scale. Large-scale physically based land surface schemes (LSS) provide estimates of regional scale hydrologic processes that contribute to erosion. If scaling issues are adequately addressed, coupling an LSS to a physically based erosion model can provide a tool to study the regional impact of soil erosion. A coupling scheme was developed using the Variable Infiltration Capacity (VIC) model to produce hydrologic inputs for the stand-alone Water Erosion Prediction Project-Hillslope Erosion (WEPP-HE) program, accounting for both temporal and spatial scaling issues. Precipitation events were disaggregated from daily to hourly and used with the VIC model to generate hydrologic fluxes. Slope profiles were downscaled from 30 arc second to 30 m hillslopes. Additionally, soil texture and erodibility were adjusted with simplified assumptions based on the full WEPP model. Soil erosion at the large scale was represented on a VIC model grid cell basis by applying WEPP-HE to subsamples of 30 m hillslopes. On an average annual basis, results showed that the coupled model was comparable with full WEPP model predictions. On an event basis, the coupled model system captured more small erosion events, with erodibility adjustments of the same magnitude as from the full WEPP model simulations. Differences in results can be attributed to discrepancies in hydrologic data calculations and simplified assumptions in vegetation and soil erodibility. Overall, the coupled model demonstrated the feasibility of erosion prediction for large river basins.
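
    One of the scaling steps described above is the disaggregation of daily precipitation to hourly values before driving the hydrologic model. The Python sketch below shows a generic mass-conserving way to perform such a disaggregation (a random wet-hour block with random weights); it is not the scheme used in the coupled VIC/WEPP system, and the wet-hour count and weighting are arbitrary assumptions.

```python
import numpy as np

def disaggregate_daily_to_hourly(p_daily_mm, wet_hours=6, seed=0):
    """Distribute a daily precipitation total over a random block of hours,
    conserving the daily mass (a generic sketch, not the authors' scheme)."""
    rng = np.random.default_rng(seed)
    hourly = np.zeros(24)
    if p_daily_mm <= 0:
        return hourly
    start = rng.integers(0, 24 - wet_hours + 1)
    weights = rng.random(wet_hours)
    hourly[start:start + wet_hours] = p_daily_mm * weights / weights.sum()
    return hourly

h = disaggregate_daily_to_hourly(12.0)
print(h.sum())  # 12.0: the daily total is conserved
```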

  12. Assimilative Modeling of Large-Scale Equatorial Plasma Trenches Observed by C/NOFS

    NASA Astrophysics Data System (ADS)

    Su, Y.; Retterer, J. M.; de La Beaujardiere, O.; Burke, W. J.; Roddy, P. A.; Pfaff, R. F.; Hunton, D. E.

    2009-12-01

    Low-latitude plasma irregularities commonly observed at post-sunset local times have been studied extensively by ground-based measurements such as coherent and incoherent scatter radars and ionosondes, as well as by satellite observations. The pre-reversal enhancement in the upward plasma drift due to eastward electric fields has been identified as the primary cause of these irregularities. Reports of plasma depletions at post-midnight and early morning local times are scarce and are typically limited to storm time conditions. Such dawn plasma depletions were frequently observed by C/NOFS in June 2008 [de La Beaujardière et al., 2009]. We are able to qualitatively reproduce the large-scale density depletion observed by the Planar Langmuir Probe (PLP) on June 17, 2008 [Su et al., 2009], based on the assimilative physics-based ionospheric model (PBMOD) using available electric field data obtained from the Vector Electric Field Instrument (VEFI) as the model input. In comparison, no plasma depletion or irregularity is obtained from the climatology version of our model when large upward drift velocities caused by observed eastward electric fields were absent. In this presentation, we extend our study to the entire month of June 2008 to exercise PBMOD's capability to forecast large-scale density trenches with available VEFI data. References: Geophys. Res. Lett., 36, L00C06, doi:10.1029/2009GL038884, 2009; Geophys. Res. Lett., 36, L00C02, doi:10.1029/2009GL038946, 2009.

  13. Mutual coupling of hydrologic and hydrodynamic models - a viable approach for improved large-scale inundation estimates?

    NASA Astrophysics Data System (ADS)

    Hoch, Jannis; Winsemius, Hessel; van Beek, Ludovicus; Haag, Arjen; Bierkens, Marc

    2016-04-01

    Due to their increasing occurrence rate and associated economic costs, fluvial floods are large-scale and cross-border phenomena that need to be well understood. Sound information about temporal and spatial variations of flood hazard is essential for adequate flood risk management and climate change adaptation measures. While progress has been made in assessments of flood hazard and risk on the global scale, studies to date have made compromises between spatial resolution on the one hand and local detail that influences their temporal characteristics (rate of rise, duration) on the other. Moreover, global models cannot realistically model flood wave propagation due to a lack of detail in channel and floodplain geometry and in the representation of hydrologic processes influencing the surface water balance, such as open-water evaporation from inundated areas and re-infiltration of water in river banks. To overcome these restrictions and to obtain a better understanding of flood propagation including its spatio-temporal variations at the large scale, yet at a sufficiently high resolution, the present study aims to develop a large-scale modeling tool by coupling the global hydrologic model PCR-GLOBWB and the recently developed hydrodynamic model DELFT3D-FM. The first computes surface water volumes which are routed by the latter, solving the full Saint-Venant equations. With DELFT3D FM being capable of representing the model domain as a flexible mesh, model accuracy is only improved at relevant locations (river and adjacent floodplain) and the computation time is not unnecessarily increased. This efficiency is very advantageous for large-scale modelling approaches. The model domain is thereby schematized by 2D floodplains derived from global data sets (HydroSHEDS and G3WBM, respectively). Since a previous study with 1way-coupling showed good model performance (J.M. Hoch et al., in prep.), this approach was extended to 2way-coupling to fully represent the feedback of inundation on evaporation and re-infiltration.

  14. A New Statistically based Autoconversion rate Parameterization for use in Large-Scale Models

    NASA Technical Reports Server (NTRS)

    Lin, Bing; Zhang, Junhua; Lohmann, Ulrike

    2002-01-01

    The autoconversion rate is a key process for the formation of precipitation in warm clouds. In climate models, physical processes such as autoconversion rate, which are calculated from grid mean values, are biased, because they do not take subgrid variability into account. Recently, statistical cloud schemes have been introduced in large-scale models to account for partially cloud-covered grid boxes. However, these schemes do not include the in-cloud variability in their parameterizations. In this paper, a new statistically based autoconversion rate considering the in-cloud variability is introduced and tested in three cases using the Canadian Single Column Model (SCM) of the global climate model. The results show that the new autoconversion rate improves the model simulation, especially in terms of liquid water path in all three case studies.
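
    The grid-mean bias that motivates the scheme above arises from applying a nonlinear autoconversion rate to grid-mean cloud water. The Python sketch below illustrates the effect with a generic power-law rate and a gamma distribution of in-cloud liquid water: the rate averaged over the distribution exceeds the rate evaluated at the mean value (Jensen's inequality). The rate coefficients and the choice of a gamma distribution are illustrative, not those of the parameterization described in the record.

```python
import numpy as np
from scipy.stats import gamma

def autoconversion(q, c=1350.0, p=2.47):
    """Generic power-law autoconversion rate A = c * q**p (illustrative constants)."""
    return c * q ** p

# In-cloud liquid water described by a gamma distribution with the same mean
# as the grid-box value; because the rate is convex in q, averaging the rate
# over the distribution gives more than the rate at the mean.
q_mean = 0.3e-3                      # mean liquid water (kg/kg)
shape = 2.0
dist = gamma(a=shape, scale=q_mean / shape)

q_samples = dist.rvs(size=200_000, random_state=1)
rate_from_mean = autoconversion(q_mean)
rate_resolved = autoconversion(q_samples).mean()
print(rate_resolved / rate_from_mean)   # > 1: grid-mean calculation underestimates
```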

  15. LipidWrapper: An Algorithm for Generating Large-Scale Membrane Models of Arbitrary Geometry

    PubMed Central

    Durrant, Jacob D.; Amaro, Rommie E.

    2014-01-01

    As ever larger and more complex biological systems are modeled in silico, approximating physiological lipid bilayers with simple planar models becomes increasingly unrealistic. In order to build accurate large-scale models of subcellular environments, models of lipid membranes with carefully considered, biologically relevant curvature will be essential. In the current work, we present a multi-scale utility called LipidWrapper capable of creating curved membrane models with geometries derived from various sources, both experimental and theoretical. To demonstrate its utility, we use LipidWrapper to examine an important mechanism of influenza virulence. A copy of the program can be downloaded free of charge under the terms of the open-source FreeBSD License from http://nbcr.ucsd.edu/lipidwrapper. LipidWrapper has been tested on all major computer operating systems. PMID:25032790

  16. LipidWrapper: an algorithm for generating large-scale membrane models of arbitrary geometry.

    PubMed

    Durrant, Jacob D; Amaro, Rommie E

    2014-07-01

    As ever larger and more complex biological systems are modeled in silico, approximating physiological lipid bilayers with simple planar models becomes increasingly unrealistic. In order to build accurate large-scale models of subcellular environments, models of lipid membranes with carefully considered, biologically relevant curvature will be essential. In the current work, we present a multi-scale utility called LipidWrapper capable of creating curved membrane models with geometries derived from various sources, both experimental and theoretical. To demonstrate its utility, we use LipidWrapper to examine an important mechanism of influenza virulence. A copy of the program can be downloaded free of charge under the terms of the open-source FreeBSD License from http://nbcr.ucsd.edu/lipidwrapper. LipidWrapper has been tested on all major computer operating systems. PMID:25032790

  17. Large-scale shell-model calculations on the spectroscopy of N <126 Pb isotopes

    NASA Astrophysics Data System (ADS)

    Qi, Chong; Jia, L. Y.; Fu, G. J.

    2016-07-01

    Large-scale shell-model calculations are carried out in the model space including neutron-hole orbitals 2p1/2, 1f5/2, 2p3/2, 0i13/2, 1f7/2, and 0h9/2 to study the structure and electromagnetic properties of neutron-deficient Pb isotopes. An optimized effective interaction is used. Good agreement between full shell-model calculations and experimental data is obtained for the spherical states in the isotopes 206-194Pb. The lighter isotopes are calculated with an importance-truncation approach constructed based on the monopole Hamiltonian. The full shell-model results also agree well with our generalized seniority and nucleon-pair-approximation truncation calculations. The deviations between theory and experiment concerning the excitation energies and electromagnetic properties of low-lying 0+ and 2+ excited states and isomeric states may provide a constraint on our understanding of nuclear deformation and intruder configuration in this region.

  18. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-03-01

    The Prediction in Ungauged Basins (PUB) scientific initiative (2003-2012 by IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines to make predictions in catchments without observed runoff data. At present, there is growing interest in applying catchment models for large domains and large data samples in a multi-basin manner. However, such modelling involves several sources of uncertainty, which may be caused by imperfections in the input data, particularly regional and global databases. This may lead to inaccurate model parameterisation and incomplete process understanding. In order to bridge the gap between the best practices for single catchments and large-scale hydrology, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). By using examples from a recent HYPE hydrological model set-up on the Indian subcontinent, named India-HYPE v1.0, we explore the recommendations, indicate challenges and recommend quality checks to avoid erroneous assumptions. We identify the obstacles, ways to overcome them and describe the work process related to: (a) errors and inconsistencies in global databases, unknown human impacts, poor data quality; (b) robust approaches to identify parameters using a stepwise calibration approach, remote sensing data, expert knowledge and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong hydro-climatic gradient over the subcontinent, a single model can adequately describe the spatial variability in dominant hydrological processes at the catchment scale. Eventually, during calibration of India-HYPE, the median Kling-Gupta Efficiency for
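
    Since the evaluation above relies on flow signatures and performance metrics such as the Kling-Gupta Efficiency (KGE), the Python sketch below computes KGE in its standard form (Gupta et al., 2009) from simulated and observed flow series; the example series are made up for illustration.

```python
import numpy as np

def kling_gupta_efficiency(sim, obs):
    """KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), Gupta et al. (2009)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]          # linear correlation
    alpha = sim.std() / obs.std()            # variability ratio
    beta = sim.mean() / obs.mean()           # bias ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([10., 12., 30., 25., 8., 5.])
sim = np.array([11., 10., 27., 28., 9., 6.])
print(round(kling_gupta_efficiency(sim, obs), 3))
```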

  19. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy to cope with floods is to reduce the risk of the hazard through flood defence structures, like dikes and levees. However, it is suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. However, adaptive behaviour towards flood risk reduction and the interaction between the government, insurers, and individuals has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model to allow for a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcome the limitations of traditional large-scale flood risk models in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
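
    Household-level behaviour in such a model is typically some rule that weighs protection costs against expected avoided damages. The Python sketch below shows one simple, hypothetical decision rule of this kind (a discounted expected-damage comparison); the class name, parameter values, and discount rate are illustrative and are not taken from the model described above.

```python
from dataclasses import dataclass

@dataclass
class Household:
    """A hypothetical household agent deciding on a flood-proofing measure."""
    damage_if_flooded: float     # expected damage without protection (EUR)
    measure_cost: float          # one-off cost of the protective measure (EUR)
    damage_reduction: float      # fraction of damage avoided by the measure
    protected: bool = False

    def decide(self, annual_flood_prob, horizon_years, discount=0.04):
        # Compare discounted expected avoided damage with the measure cost.
        avoided = sum(
            annual_flood_prob * self.damage_if_flooded * self.damage_reduction
            / (1.0 + discount) ** t
            for t in range(1, horizon_years + 1)
        )
        self.protected = avoided > self.measure_cost
        return self.protected

hh = Household(damage_if_flooded=50_000, measure_cost=4_000, damage_reduction=0.4)
print(hh.decide(annual_flood_prob=0.01, horizon_years=30))
```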

  20. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    PubMed

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272
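
    The rule described above grows and deletes synaptic elements until a target activity level is reached. The Python sketch below expresses that idea generically, outside of NEST and without using its API: elements grow when activity is below target and retract when above, and free pre- and post-synaptic elements are then paired at random to form new synapses. The function names, the linear growth rule, and the pairing scheme are simplifications for illustration only.

```python
import numpy as np

def grow_or_delete_elements(activity, target=5.0, gain=0.1, elements=None):
    """Homeostatic structural-plasticity rule (generic sketch, not the NEST API):
    each neuron grows synaptic elements when its activity is below the target
    level and deletes them when activity is above it."""
    if elements is None:
        elements = np.zeros_like(activity)
    delta = gain * (target - activity)          # positive: grow, negative: retract
    return np.maximum(elements + delta, 0.0)

def rewire(pre_free, post_free, rng):
    """Randomly pair free pre- and post-synaptic elements to create synapses."""
    n_new = int(min(pre_free.sum(), post_free.sum()))
    pre = rng.choice(len(pre_free), size=n_new, p=pre_free / pre_free.sum())
    post = rng.choice(len(post_free), size=n_new, p=post_free / post_free.sum())
    return list(zip(pre, post))

rng = np.random.default_rng(0)
activity = rng.uniform(0.0, 10.0, size=8)       # firing rates (Hz)
free = grow_or_delete_elements(activity)
print(rewire(free, free, rng)[:5])
```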

  1. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity

    PubMed Central

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework. PMID:27303272

  2. Towards large scale modelling of wetland water dynamics in northern basins.

    NASA Astrophysics Data System (ADS)

    Pedinotti, V.; Sapriza, G.; Stone, L.; Davison, B.; Pietroniro, A.; Quinton, W. L.; Spence, C.; Wheater, H. S.

    2015-12-01

    Understanding the hydrological behaviour of low topography, wetland-dominated sub-arctic areas is one major issue needed for the improvement of large scale hydrological models. These wet organic soils cover a large extent of Northern America and have a considerable impact on the rainfall-runoff response of a catchment. Moreover their strong interactions with the lower atmosphere and the carbon cycle make of these areas a noteworthy component of the regional climate system. In the framework of the Changing Cold Regions Network (CCRN), this study aims at providing a model for wetland water dynamics that can be used for large scale applications in cold regions. The modelling system has two main components : a) the simulation of surface runoff using the Modélisation Environmentale Communautaire - Surface and Hydrology (MESH) land surface model driven with several gridded atmospheric datasets and b) the routing of surface runoff using the WATROUTE channel scheme. As a preliminary study, we focus on two small representative study basins in Northern Canada : Scotty Creek in the lower Liard River valley of the Northwest Territories and Baker Creek, located a few kilometers north of Yellowknife. Both areas present characteristic landscapes dominated by a series of peat plateaus, channel fens, small lakes and bogs. Moreover, they constitute important fieldwork sites with detailed data to support our modelling study. The challenge of our new wetland model is to represent the hydrological functioning of the various landscape units encountered in those watersheds and their interactions using simple numerical formulations that can be later extended to larger basins such as the Mackenzie river basin. Using observed datasets, the performance of the model to simulate the temporal evolution of hydrological variables such as the water table depth, frost table depth and discharge is assessed.

  3. The topology of large-scale structure. II - Nonlinear evolution of Gaussian models

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Weinberg, David H.; Gott, J. Richard, III

    1988-01-01

    The evolution of non-Gaussian behavior in the large-scale universe from Gaussian initial conditions is studied. Topology measures developed in previous papers are applied to the smoothed initial, final, and biased matter distributions of cold dark matter, white noise, and massive neutrino simulations. When the smoothing length is approximately twice the mass correlation length or larger, the evolved models look like the initial conditions, suggesting that random phase hypotheses in cosmology can be tested with adequate data sets. When a smaller smoothing length is used, nonlinear effects are recovered, so nonlinear effects on topology can be detected in redshift surveys after smoothing at the mean intergalaxy separation. Hot dark matter models develop manifestly non-Gaussian behavior attributable to phase correlations, with a topology reminiscent of bubble or sheet distributions. Cold dark matter models remain Gaussian, and biasing does not disguise this.

  4. Numerical modeling of water spray suppression of conveyor belt fires in a large-scale tunnel

    PubMed Central

    Yuan, Liming; Smith, Alex C.

    2015-01-01

    Conveyor belt fires in an underground mine pose a serious life threat to miners. Water sprinkler systems are usually used to extinguish underground conveyor belt fires, but because of the complex interaction between conveyor belt fires and mine ventilation airflow, more effective engineering designs are needed for the installation of water sprinkler systems. A computational fluid dynamics (CFD) model was developed to simulate the interaction between the ventilation airflow, the belt flame spread, and the water spray system in a mine entry. The CFD model was calibrated using test results from a large-scale conveyor belt fire suppression experiment. Simulations were conducted using the calibrated CFD model to investigate the effects of sprinkler location, water flow rate, and sprinkler activation temperature on the suppression of conveyor belt fires. The sprinkler location and the activation temperature were found to have a major effect on the suppression of the belt fire, while the water flow rate had a minor effect. PMID:26190905

  5. Pangolin v1.0, a conservative 2-D transport model for large scale parallel calculation

    NASA Astrophysics Data System (ADS)

    Praga, A.; Cariolle, D.; Giraud, L.

    2014-07-01

    To exploit the possibilities of parallel computers, we designed a large-scale bidimensional atmospheric transport model named Pangolin. As the basis for a future chemistry-transport model, a finite-volume approach was chosen both for mass preservation and to ease parallelization. To overcome the pole restriction on time-steps for a regular latitude-longitude grid, Pangolin uses a quasi-area-preserving reduced latitude-longitude grid. The features of the regular grid are exploited to improve parallel performance, and a custom domain decomposition algorithm is presented. To assess the validity of the transport scheme, its results are compared with state-of-the-art models on analytical test cases. Finally, parallel performance is shown in terms of strong scaling and confirms efficient scalability up to a few hundred cores.
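
    The quasi-area-preserving reduced latitude-longitude grid mentioned above thins the number of cells per latitude band toward the poles. The Python sketch below shows the basic idea (cells per band roughly proportional to the cosine of latitude, so cell areas stay comparable from equator to pole); it is a sketch of the concept, not Pangolin's actual grid-generation algorithm, and the band counts are arbitrary.

```python
import numpy as np

def reduced_latlon_grid(n_lat_bands=45, n_lon_equator=180):
    """Build a reduced latitude-longitude grid: the number of cells in each
    latitude band shrinks roughly with cos(latitude) so that cell areas stay
    comparable from equator to pole."""
    lat_edges = np.linspace(-90.0, 90.0, n_lat_bands + 1)
    lat_centers = 0.5 * (lat_edges[:-1] + lat_edges[1:])
    cells_per_band = np.maximum(
        1, np.rint(n_lon_equator * np.cos(np.deg2rad(lat_centers))).astype(int))
    return lat_centers, cells_per_band

lats, counts = reduced_latlon_grid()
print(counts.min(), counts.max(), counts.sum())   # few cells near the poles, many at the equator
```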

  6. Inclusive constraints on unified dark matter models from future large-scale surveys

    SciTech Connect

    Camera, Stefano; Carbone, Carmelita; Moscardini, Lauro E-mail: carmelita.carbone@unibo.it

    2012-03-01

    In recent years, cosmological models in which the properties of the dark components of the Universe (dark matter and dark energy) are accounted for by a single "dark fluid" have drawn increasing attention and interest. Amongst many proposals, Unified Dark Matter (UDM) cosmologies are promising candidates as effective theories. In these models, a scalar field with a non-canonical kinetic term in its Lagrangian mimics both the accelerated expansion of the Universe at late times and the clustering properties of the large-scale structure of the cosmos. However, UDM models also present peculiar behaviours, the most interesting one being the fact that the perturbations in the dark-matter component of the scalar field do have a non-negligible speed of sound. This gives rise to an effective Jeans scale for the Newtonian potential, below which the dark fluid does not cluster any more. This implies a growth of structures fairly different from that of the concordance ΛCDM model. In this paper, we demonstrate that forthcoming large-scale surveys will be able to discriminate between viable UDM models and ΛCDM to a good degree of accuracy. To this purpose, the planned Euclid satellite will be a powerful tool, since it will provide very accurate data on galaxy clustering and the weak lensing effect of cosmic shear. Finally, we also exploit the constraining power of the ongoing CMB Planck experiment. Although our approach is the most conservative, with the inclusion of only well-understood, linear dynamics, in the end we also show what could be done if some amount of non-linear information were included.

  7. Surprising Long Range Effects of Local Shoreline Stabilization in a Large-Scale Coastline Model

    NASA Astrophysics Data System (ADS)

    Slott, J.; Murray, B.; Valvo, L.; Ashton, A.

    2004-12-01

    As coastlines continue to retreat and threaten communities, roads, and other infrastructure, humans increasingly employ shoreline stabilization techniques to maintain the shoreline in its current position. Examples of shoreline stabilization techniques include beach nourishment and seawall construction. During beach nourishment, sand is typically dredged from locations offshore and placed on the beach. Seawalls or revetments, on the other hand, are hardened concrete structures which prevent the shoreline from retreating further yet do not add sand to the nearshore system. Coastal engineers and scientists have only addressed the local and relatively short-term effects of shoreline stabilization. Can beach nourishment or seawalls affect coastline behavior tens or hundreds of kilometers away in the longer term? We adapted a recently developed model of large-scale, long-term shoreline change to address such questions. On predominately sandy shorelines, waves breaking at oblique angles to the shoreline orientation drive the alongshore transport of sediment. Though traditionally believed to smooth out shoreline features, Ashton et al. (2001) have shown that alongshore-driven sediment transport can cause more complex shoreline evolution. Their model showed the spontaneous formation of large-scale features such as capes and cuspate forelands (e.g. the shape of the coastline of the Carolinas) using simple sediment transport relationships. This model accounts for non-local shoreline interactions, such as wave "shadowing." In this work, we have further developed the large-scale shoreline model to include the effects that shoreline stabilization techniques have on shoreline position and sediment supply. In one set of experiments, we chose an initial shoreline with cape-like features separated by approximately 100 kilometers, roughly similar to that of the coast off the Carolinas. In each individual experiment, we nourished a different 10 kilometer section of coastline. In
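
    The wave-driven alongshore transport mentioned above can be sketched with a minimal "one-line" shoreline model: a simplified CERC-type flux Q proportional to sin(2φ), where φ is the wave angle relative to the local shore normal, together with sand conservation. The coefficients and the flux form below are assumptions for illustration, not the cellular model used in the record.

```python
# Minimal one-line shoreline-change sketch (illustrative coefficients; the
# simplified flux Q ~ K*sin(2*phi) stands in for the full transport relation).
import numpy as np

K = 0.5        # transport coefficient (assumed, m^3/s)
D = 10.0       # closure depth (m)
dx = 1000.0    # alongshore spacing (m)
dt = 3600.0    # time step (s)
wave_angle = np.radians(30.0)   # deep-water wave direction relative to x-axis

y = 50.0 * np.exp(-((np.arange(200) - 100.0) ** 2) / 200.0)  # initial bump (m)

for _ in range(1000):
    theta = np.arctan(np.gradient(y, dx))      # local shoreline orientation
    phi = wave_angle - theta                   # wave angle relative to shore normal
    Q = K * np.sin(2.0 * phi)                  # alongshore sediment flux
    y -= dt / D * np.gradient(Q, dx)           # sand conservation: dy/dt = -(1/D) dQ/dx

print("bump height after 1000 steps:", round(y.max(), 2))  # diffuses for low-angle waves
```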

  8. Large scale nonlinear numerical optimal control for finite element models of flexible structures

    NASA Technical Reports Server (NTRS)

    Shoemaker, Christine A.; Liao, Li-Zhi

    1990-01-01

    This paper discusses the development of large-scale numerical optimal control algorithms for nonlinear systems and their application to finite element models of structures. This work is based on our expansion of the differential dynamic programming (DDP) optimal control algorithm in the following steps: improvement of convergence for initial policies in non-convex regions, development of a numerically accurate penalty function method approach for constrained DDP problems, and parallel processing on supercomputers. The expanded constrained DDP algorithm was applied to the control of a four-bay, two-dimensional truss with 12 soft members, which generates geometric nonlinearities. Using an explicit finite element model to describe the structural system requires 32 state variables and 10,000 time steps. Our numerical results indicate that for constrained or unconstrained structural problems with nonlinear dynamics, the results obtained by our expanded constrained DDP are significantly better than those obtained using linear-quadratic feedback control.

  9. GPU-Based Parallelized Solver for Large Scale Vascular Blood Flow Modeling and Simulations.

    PubMed

    Santhanam, Anand P; Neylon, John; Eldredge, Jeff; Teran, Joseph; Dutson, Erik; Benharash, Peyman

    2016-01-01

    Cardiovascular blood flow simulations are essential in understanding the blood flow behavior during normal and disease conditions. To date, such blood flow simulations have only been done at a macro-scale level due to computational limitations. In this paper, we present a GPU-based large-scale solver that enables modeling the flow even in the smallest arteries. A mechanical equivalent of the circuit-based flow modeling system is first developed to employ the GPU computing framework. Numerical studies were conducted using a set of 10 million connected vascular elements. Run-time flow analyses were performed to simulate vascular blockages, as well as arterial cut-off. Our results showed that we can achieve ~100 FPS using a GTX 680m and ~40 FPS using a Tegra K1 computing platform. PMID:27046603
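
    The circuit analogy behind the record above treats vessel segments as hydraulic resistors and junctions as pressure nodes, so steady flow follows from a linear nodal-analysis system. The tiny CPU sketch below shows that idea on a five-node network with assumed resistances; it is not the GPU solver or its data structures.

```python
# Tiny sketch of the circuit analogy for vascular flow (illustrative; the record
# above solves ~10 million elements on GPUs).
import numpy as np

# Vessel segments as (node_i, node_j, resistance): pressure drop = R * flow.
edges = [(0, 1, 1.0), (1, 2, 2.0), (1, 3, 2.0), (2, 4, 1.0), (3, 4, 1.0)]
n = 5
G = np.zeros((n, n))                      # conductance (nodal-analysis) matrix
for i, j, R in edges:
    g = 1.0 / R
    G[i, i] += g; G[j, j] += g
    G[i, j] -= g; G[j, i] -= g

# Boundary conditions: fix inlet (node 0) and outlet (node 4) pressures.
fixed = {0: 100.0, 4: 0.0}
free = [k for k in range(n) if k not in fixed]

A = G[np.ix_(free, free)]
b = -G[np.ix_(free, list(fixed))] @ np.array(list(fixed.values()))
p = np.zeros(n)
p[list(fixed)] = list(fixed.values())
p[free] = np.linalg.solve(A, b)           # zero net flow at interior nodes

flows = {(i, j): (p[i] - p[j]) / R for i, j, R in edges}
print("node pressures:", np.round(p, 2))
print("segment flows:", {k: round(v, 2) for k, v in flows.items()})
```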

  10. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    SciTech Connect

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
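
    Optimistic execution requires that any event can be rolled back. The toy sketch below shows the pattern of reverse computation plus incremental state saving: the event's inverse undoes most of the work, and only the non-invertible part (a clamp) is saved. The event and state are hypothetical illustrations, not the epidemic model or the µsik API.

```python
# Toy sketch of reverse computation with incremental state saving for rollback
# (illustrative event; not the reaction-diffusion epidemic model in the record).
class Cell:
    def __init__(self, susceptible, infected):
        self.susceptible = susceptible
        self.infected = infected

def infect_forward(cell, k):
    """Move k individuals from susceptible to infected.
    Only what cannot be inverted is saved incrementally."""
    k = min(k, cell.susceptible)          # clamping is not invertible ...
    cell.susceptible -= k
    cell.infected += k
    return {"k_applied": k}               # ... so save the applied amount

def infect_reverse(cell, saved):
    """Undo infect_forward exactly, using the small saved record."""
    k = saved["k_applied"]
    cell.susceptible += k
    cell.infected -= k

cell = Cell(susceptible=10, infected=1)
saved = infect_forward(cell, 15)          # event executed optimistically
infect_reverse(cell, saved)               # rollback on a causality violation
assert (cell.susceptible, cell.infected) == (10, 1)
print("state restored:", cell.susceptible, cell.infected)
```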

  11. Phanerozoic marine diversity: rock record modelling provides an independent test of large-scale trends

    PubMed Central

    Smith, Andrew B.; Lloyd, Graeme T.; McGowan, Alistair J.

    2012-01-01

    Sampling bias created by a heterogeneous rock record can seriously distort estimates of marine diversity and makes a direct reading of the fossil record unreliable. Here we compare two independent estimates of Phanerozoic marine diversity that explicitly take account of variation in sampling—a subsampling approach that standardizes for differences in fossil collection intensity, and a rock area modelling approach that takes account of differences in rock availability. Using the fossil records of North America and Western Europe, we demonstrate that a modelling approach applied to the combined data produces results that are significantly correlated with those derived from subsampling. This concordance between independent approaches argues strongly for the reality of the large-scale trends in diversity we identify from both approaches. PMID:22951734
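
    The subsampling idea mentioned above can be illustrated with classical rarefaction: draw the same number of fossil occurrences from every time bin and count the taxa recovered, so richness estimates are standardized for collection intensity. The sketch below uses synthetic occurrence lists and an assumed quota; it is not the actual analysis or the rock-area model.

```python
# Sketch of sampling standardization by rarefaction (synthetic data; quota and
# taxon pool are assumptions for illustration).
import numpy as np

rng = np.random.default_rng(0)

def rarefied_richness(occurrences, quota, trials=200):
    """Mean number of distinct taxa recovered when drawing `quota` occurrences
    at random (without replacement); occurrences is an array of taxon ids."""
    occurrences = np.asarray(occurrences)
    if len(occurrences) < quota:
        return np.nan                      # bin too poorly sampled to standardize
    counts = [len(np.unique(rng.choice(occurrences, size=quota, replace=False)))
              for _ in range(trials)]
    return float(np.mean(counts))

# Two hypothetical time bins with very different collection intensity.
bin_a = rng.integers(0, 50, size=2000)     # heavily sampled, 50-taxon pool
bin_b = rng.integers(0, 50, size=300)      # lightly sampled, same pool
print("raw richness:     ", len(np.unique(bin_a)), len(np.unique(bin_b)))
print("rarefied richness:", rarefied_richness(bin_a, 200), rarefied_richness(bin_b, 200))
```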

  12. Large-scale shell-model calculations of nuclei around mass 210

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Higashiyama, K.; Yoshinaga, N.

    2016-06-01

    Large-scale shell-model calculations are performed for even-even, odd-mass, and doubly odd nuclei of Pb, Bi, Po, At, Rn, and Fr isotopes in the neutron-deficient region (Z ≥ 82, N ≤ 126), assuming 208Pb as a doubly magic core. All six single-particle orbitals between the magic numbers 82 and 126, namely 0h9/2, 1f7/2, 0i13/2, 2p3/2, 1f5/2, and 2p1/2, are considered. For the phenomenological effective two-body interaction, one set of monopole-pairing and quadrupole-quadrupole interactions, including multipole-pairing interactions, is adopted for all the nuclei considered. The calculated energies and electromagnetic properties are compared with the experimental data. Furthermore, many isomeric states are analyzed in terms of the shell-model configurations.

  13. Microbranching in mode-I fracture using large-scale simulations of amorphous and perturbed-lattice models

    NASA Astrophysics Data System (ADS)

    Heizler, Shay I.; Kessler, David A.

    2015-07-01

    We study the high-velocity regime mode-I fracture instability wherein small microbranches start to appear near the main crack, using large-scale simulations. Some of the features of those microbranches have been reproduced qualitatively in smaller-scale studies [using O(10^4) atoms] on both a model of an amorphous material (via the continuous random network model) and using perturbed-lattice models. In this study, larger-scale simulations [O(10^6) atoms] were performed using multithreading computing on a GPU device, in order to achieve more physically realistic results. First, we find that the microbranching pattern appears to be converging with the lattice width. Second, the simulations reproduce the growth of the size of a microbranch as a function of the crack velocity, as well as the increase of the amplitude of the derivative of the electrical-resistance root-mean square with respect to the time as a function of the crack velocity. In addition, the simulations yield the correct branching angle of the microbranches, and the power law exponent governing the shape of the microbranches seems to be lower than unity, so that the side cracks turn over in the direction of propagation of the main crack as seen in experiment.

  14. A large-scale neurocomputational model of task-oriented behavior selection and working memory in prefrontal cortex.

    PubMed

    Chadderdon, George L; Sporns, Olaf

    2006-02-01

    The prefrontal cortex (PFC) is crucially involved in the executive component of working memory, representation of task state, and behavior selection. This article presents a large-scale computational model of the PFC and associated brain regions designed to investigate the mechanisms by which working memory and task state interact to select adaptive behaviors from a behavioral repertoire. The model consists of multiple brain regions containing neuronal populations with realistic physiological and anatomical properties, including extrastriate visual cortical regions, the inferotemporal cortex, the PFC, the striatum, and midbrain dopamine (DA) neurons. The onset of a delayed match-to-sample or delayed nonmatch-to-sample task triggers tonic DA release in the PFC causing a switch into a persistent, stimulus-insensitive dynamic state that promotes the maintenance of stimulus representations within prefrontal networks. Other modeled prefrontal and striatal units select cognitive acceptance or rejection behaviors according to which task is active and whether prefrontal working memory representations match the current stimulus. Working memory task performance and memory fields of prefrontal delay units are degraded by extreme elevation or depletion of tonic DA levels. Analyses of cellular and synaptic activity suggest that hyponormal DA levels result in increased prefrontal activation, whereas hypernormal DA levels lead to decreased activation. Our simulation results suggest a range of predictions for behavioral, single-cell, and neuroimaging response data under the proposed task set and under manipulations of DA concentration. PMID:16494684

  15. Microbranching in mode-I fracture using large-scale simulations of amorphous and perturbed-lattice models.

    PubMed

    Heizler, Shay I; Kessler, David A

    2015-07-01

    We study the high-velocity regime mode-I fracture instability wherein small microbranches start to appear near the main crack, using large-scale simulations. Some of the features of those microbranches have been reproduced qualitatively in smaller-scale studies [using O(10^4) atoms] on both a model of an amorphous material (via the continuous random network model) and using perturbed-lattice models. In this study, larger-scale simulations [O(10^6) atoms] were performed using multithreading computing on a GPU device, in order to achieve more physically realistic results. First, we find that the microbranching pattern appears to be converging with the lattice width. Second, the simulations reproduce the growth of the size of a microbranch as a function of the crack velocity, as well as the increase of the amplitude of the derivative of the electrical-resistance root-mean square with respect to the time as a function of the crack velocity. In addition, the simulations yield the correct branching angle of the microbranches, and the power law exponent governing the shape of the microbranches seems to be lower than unity, so that the side cracks turn over in the direction of propagation of the main crack as seen in experiment. PMID:26274182

  16. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides us with the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agent's beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agent's attitude towards the fluctuation of crop profits. Notice that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and not even measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based Cloud Computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than about 28 days if we ran those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
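
    Variance-based sensitivity indices of the kind described above can be sketched with a plain Monte Carlo "pick-freeze" (Jansen) estimator, shown below in place of the PCE-based decomposition used in the record. The response function is a hypothetical stand-in for the coupled MAS/groundwater model, and the parameter ranges are assumptions.

```python
# First-order Sobol index sketch via the Jansen (1999) pick-freeze estimator
# (illustrative; the record uses a PCE-based decomposition on the real model).
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Hypothetical stand-in response (e.g., crop profit) as a function of the
    five behavioral parameters; NOT the coupled MAS/groundwater model."""
    return 3.0 * x[:, 0] + 0.5 * x[:, 1] + x[:, 4] ** 2

n, d = 50_000, 5
names = ["kappa_pr", "nu_pr", "kappa_prep", "nu_prep", "lambda"]
A = rng.uniform(0.0, 1.0, size=(n, d))
B = rng.uniform(0.0, 1.0, size=(n, d))
fA, fB = model(A), model(B)
var_y = np.concatenate([fA, fB]).var()

for i, name in enumerate(names):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                        # re-sample only parameter i
    fABi = model(ABi)
    S_i = 1.0 - 0.5 * np.mean((fB - fABi) ** 2) / var_y   # Jansen first-order index
    print(f"first-order S[{name}] ~ {S_i:.2f}")
```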

  17. Large-scale Individual-based Models of Pandemic Influenza Mitigation Strategies

    NASA Astrophysics Data System (ADS)

    Kadau, Kai; Germann, Timothy; Longini, Ira; Macken, Catherine

    2007-03-01

    We have developed a large-scale stochastic simulation model to investigate the spread of a pandemic strain of influenza virus through the U.S. population of 281 million people, to assess the likely effectiveness of various potential intervention strategies including antiviral agents, vaccines, and modified social mobility (including school closure and travel restrictions) [1]. The heterogeneous population structure and mobility are based on Census and Department of Transportation data where available. Our simulations demonstrate that, in a highly mobile population, restricting travel after an outbreak is detected is likely to delay slightly the time course of the outbreak without impacting the eventual number ill. For large basic reproductive numbers R0, we predict that multiple strategies in combination (involving both social and medical interventions) will be required to achieve a substantial reduction in illness rates. [1] T. C. Germann, K. Kadau, I. M. Longini, and C. A. Macken, Proc. Natl. Acad. Sci. (USA) 103, 5935-5940 (2006).
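
    A much simpler stochastic (chain-binomial) SIR sketch, shown below, conveys the basic mechanics of the kind of experiment described above: an intervention that reduces contact once a detection threshold is crossed shifts the epidemic curve. All parameters and the intervention rule are illustrative assumptions, not the individual-based model of the record.

```python
# Stochastic SIR sketch with a threshold-triggered contact reduction
# (illustrative parameters; not the large-scale individual-based model).
import numpy as np

rng = np.random.default_rng(2)
N = 1_000_000
R0, infectious_days = 1.9, 4.0
beta, gamma = R0 / infectious_days, 1.0 / infectious_days
detect_threshold, contact_reduction = 1_000, 0.3    # assumed intervention

def run(intervene):
    S, I, R = N - 10, 10, 0
    history, reduced = [], False
    for _ in range(365):                            # daily chain-binomial steps
        if intervene and not reduced and I >= detect_threshold:
            reduced = True
        b = beta * (1.0 - contact_reduction if reduced else 1.0)
        new_inf = rng.binomial(S, 1.0 - np.exp(-b * I / N))
        new_rec = rng.binomial(I, 1.0 - np.exp(-gamma))
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append(I)
    return np.array(history), R

for flag in (False, True):
    hist, total = run(flag)
    print(f"intervention={flag}: peak day {hist.argmax()}, attack rate {total / N:.2f}")
```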

  18. Excavating the Genome: Large Scale Mutagenesis Screening for the Discovery of New Mouse Models

    PubMed Central

    Sundberg, John P.; Dadras, Soheil S.; Silva, Kathleen A.; Kennedy, Victoria E.; Murray, Stephen A.; Denegre, James; Schofield, Paul N.; King, Lloyd E.; Wiles, Michael; Pratt, C. Herbert

    2016-01-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis. While not automated to the level of the physiological phenotyping, histopathology provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being developed. PMID:26551941

  19. Large-scale sequencing and the natural history of model human RNA viruses

    PubMed Central

    Dugan, Vivien G; Saira, Kazima; Ghedin, Elodie

    2012-01-01

    RNA virus exploration within the field of medical virology has greatly benefited from technological developments in genomics, deepening our understanding of viral dynamics and emergence. Large-scale first-generation technology sequencing projects have expedited molecular epidemiology studies at an unprecedented scale for two pathogenic RNA viruses chosen as models: influenza A virus and dengue. Next-generation sequencing approaches are now leading to a more in-depth analysis of virus genetic diversity, which is greater for RNA than DNA viruses because of high replication rates and the absence of proofreading activity of the RNA-dependent RNA polymerase. In the field of virus discovery, technological advancements and metagenomic approaches are expanding the catalogs of novel viruses by facilitating our probing into the RNA virus world. PMID:23682295

  20. Structure of exotic nuclei by large-scale shell model calculations

    SciTech Connect

    Utsuno, Yutaka; Otsuka, Takaharu; Mizusaki, Takahiro; Honma, Michio

    2006-11-02

    An extensive large-scale shell-model study is conducted for unstable nuclei around N = 20 and N = 28, aiming to investigate how the shell structure evolves from stable to unstable nuclei and affects the nuclear structure. The structure around N = 20 including the disappearance of the magic number is reproduced systematically, exemplified in the systematics of the electromagnetic moments in the Na isotope chain. As a key ingredient dominating the structure/shell evolution in the exotic nuclei from a general viewpoint, we pay attention to the tensor force. Including a proper strength of the tensor force in the effective interaction, we successfully reproduce the proton shell evolution ranging from N = 20 to 28 without any arbitrary modifications in the interaction and predict the ground state of 42Si to contain a large deformed component.

  1. Polarization predictions for cosmological models with large-scale power modulation

    NASA Astrophysics Data System (ADS)

    Bunn, Emory F.; Xue, Qingyang

    2016-01-01

    Several "anomalies" have been noted on large angular scales in maps of the cosmic microwave background (CMB) radiation, although the statistical significance of these anomalies is hotly debated. Of particular interest is the evidence for large-scale power modulation: the variance in one half of the sky is larger than the other half. Either this variation is a mere fluke, or it requires a major revision of the standard cosmological paradigm. The way to determine which is the case is to make predictions for future data sets, based on the hypothesis that the anomaly is meaningful and on the hypothesis that it is a fluke. We make predictions for the CMB polarization anisotropy based on a cosmological model in which statistical isotropy is broken via coupling with a dipolar modulation field. Our predictions are constrained to match the observed Planck temperature variations. We identify the modes in CMB polarization data that most strongly distinguish between the modulation and no-modulation hypotheses.

  2. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study

    PubMed Central

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298

  3. Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study.

    PubMed

    Han, Jianda; Yin, Peng; He, Yuqing; Gu, Feng

    2016-01-01

    One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. PMID:26891298
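
    For reference, the two records above enhance the standard point-to-point ICP loop with octree search, an early-warning mechanism, and a heuristic escape from local minima. The sketch below is only the baseline loop they build on, using a plain KD-tree for correspondences and SVD (Kabsch) alignment on synthetic clouds; it does not reproduce the enhancements.

```python
# Baseline point-to-point ICP sketch (plain KD-tree + SVD alignment; the
# enhancements described in the records above are not included).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    """Rigidly align `source` (Nx3) to `target` (Mx3); returns R, t."""
    R, t = np.eye(3), np.zeros(3)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)                   # nearest-neighbour correspondences
        matched = target[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t_step = cm - R_step @ cs
        src = (R_step @ src.T).T + t_step
        R, t = R_step @ R, R_step @ t + t_step     # accumulate the transform
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(3)
cloud = rng.normal(size=(500, 3))
a = np.radians(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
moved = (R_true @ cloud.T).T + np.array([0.2, -0.1, 0.05])
R_est, t_est = icp(cloud, moved)
print("rotation error:", np.linalg.norm(R_est - R_true))
```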

  4. Investigation of airframe noise for a large-scale wing model with high-lift devices

    NASA Astrophysics Data System (ADS)

    Kopiev, V. F.; Zaytsev, M. Yu.; Belyaev, I. V.

    2016-01-01

    The acoustic characteristics of a large-scale model of a wing with high-lift devices in the landing configuration have been studied in the DNW-NWB wind tunnel with an anechoic test section. For the first time in domestic practice, data on airframe noise at high Reynolds numbers (1.1-1.8 × 10^6) have been obtained, which can be used for assessment of wing noise levels in aircraft certification tests. The scaling factor for recalculating the measurement results to natural conditions has been determined from the condition of collapsing the dimensionless noise spectra obtained at various flow velocities. The beamforming technique has been used to obtain localization of noise sources and provide their ranking with respect to intensity. For flap side-edge noise, which is an important noise component, a noise reduction method has been proposed. The efficiency of this method has been confirmed in DNW-NWB experiments.

  5. Computational framework for modeling the dynamic evolution of large-scale multi-agent organizations

    NASA Astrophysics Data System (ADS)

    Lazar, Alina; Reynolds, Robert G.

    2002-07-01

    A multi-agent system model of the origins of an archaic state is developed. Agent interaction is mediated by a collection of rules. The rules are mined from a related large-scale data base using two different techniques. One technique uses decision trees while the other uses rough sets. The latter was used since the data collection techniques were associated with a certain degree of uncertainty. The generation of the rough set rules was guided by Genetic Algorithms. Since the rules mediate agent interaction, the rule set with fewer rules and conditionals to check will make scaling up the simulation easier to do. The results suggest that explicitly dealing with uncertainty in rule formation can produce simpler rules than ignoring that uncertainty in situations where uncertainty is a factor in the measurement process.

  6. Large-scale Modeling of the Entry and Acceleration of Ions at the Magnetospheric Boundary

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Richard, R. L.; Escoubet, C. P.; Pitout, F.

    2011-12-01

    We present the results of large-scale simulations of the entry and acceleration of ions at the magnetospheric boundary. The study is based on multipoint observations made during consecutive crossings of the cusps by the Cluster spacecraft. First, we use three-dimensional magnetohydrodynamic (MHD) simulations to follow the evolution of the global topology of the dayside magnetospheric boundary during the events. Subsequently, the time-dependent electric and magnetic fields predicted by the MHD simulations are utilized to compute the trajectories of large samples of solar wind ions launched upstream of the bow shock. We assess the results of the model by comparing Cluster ion measurements with ion dispersions calculated from the simulations along the spacecraft trajectories and discuss the temporal evolution and spatial distribution of precipitating particles in the context of the reconnection process at the dayside magnetopause.
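
    The test-particle tracing described above (ion trajectories computed in fields supplied by an MHD simulation) is commonly done with the Boris scheme, sketched below. Here the electric and magnetic fields are uniform placeholders and the particle parameters are illustrative; the record interpolates time-dependent MHD fields instead.

```python
# Boris particle-pusher sketch for tracing ions in given E and B fields
# (uniform placeholder fields; illustrative, not the simulation of the record).
import numpy as np

def boris_push(x, v, E, B, q_over_m, dt):
    """Advance one particle by one time step with the Boris scheme."""
    v_minus = v + 0.5 * q_over_m * E * dt
    t = 0.5 * q_over_m * B * dt
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * E * dt
    return x + v_new * dt, v_new

# Proton-like test particle gyrating in a uniform magnetic field (assumed values).
q_over_m = 9.58e7                              # C/kg for a proton
B = np.array([0.0, 0.0, 5e-8])                 # 50 nT, magnetosheath-like
E = np.array([0.0, 0.0, 0.0])
x, v = np.zeros(3), np.array([4e5, 0.0, 0.0])  # ~400 km/s initial speed

dt = 0.01
for _ in range(10_000):
    x, v = boris_push(x, v, E, B, q_over_m, dt)

# With E = 0 the Boris scheme conserves speed to machine precision.
print("relative speed drift:", abs(np.linalg.norm(v) - 4e5) / 4e5)
```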

  7. Large-scale shell model study of the newly found isomer in 136La

    NASA Astrophysics Data System (ADS)

    Teruya, E.; Yoshinaga, N.; Higashiyama, K.; Nishibata, H.; Odahara, A.; Shimoda, T.

    2016-07-01

    The doubly-odd nucleus 136La is theoretically studied in terms of a large-scale shell model. The energy spectrum and transition rates are calculated and compared with the most updated experimental data. The isomerism is investigated for the first 14+ state, which was found to be an isomer in the previous study [Phys. Rev. C 91, 054305 (2015), 10.1103/PhysRevC.91.054305]. It is found that the 14+ state becomes an isomer due to a band crossing of two bands with completely different configurations. The yrast band with the (νh11/2^-1 ⊗ πh11/2) configuration is investigated, revealing a staggering nature in M1 transition rates.

  8. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    NASA Astrophysics Data System (ADS)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large-scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30,000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improve the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue for performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large-scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that requires overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on the Generalized Additive Models for Location, Scale

  9. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, to facilitate concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques including intermediate responses, linking variables, and compatibility constraints are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigations and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for

  10. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    NASA Astrophysics Data System (ADS)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response planning. The emergency evacuation of large commercial shopping areas, as typical service systems, is one of the hot research topics. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation within a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of the movement routes of pedestrians, the model takes into account the purchase intention of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, we can reflect the behavioral characteristics of customers and clerks in both normal and emergency evacuation situations. The distribution of individual evacuation times as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
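
    The floor-field mechanism named above can be illustrated with a minimal cellular-automaton sketch: a static field that decays with distance to the exit biases the hop probabilities of a pedestrian among empty neighbouring cells. The grid size, sensitivity parameter, and single pedestrian below are illustrative assumptions; the record's model also includes a dynamic field, clerks/customers, and event scheduling.

```python
# Minimal static floor-field CA sketch (illustrative; not the four-layer model
# described in the record above).
import numpy as np

rng = np.random.default_rng(4)
H, W = 10, 15
exit_cell = (0, 7)

# Static floor field: negative distance to the exit (larger is more attractive).
yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
S = -np.hypot(yy - exit_cell[0], xx - exit_cell[1])

occupied = np.zeros((H, W), dtype=bool)
ped = (9, 2)
occupied[ped] = True
k_S = 2.0                                    # sensitivity to the static field

for step in range(40):
    y, x = ped
    moves = [(y + dy, x + dx) for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= y + dy < H and 0 <= x + dx < W and not occupied[y + dy, x + dx]]
    if not moves:
        continue
    w = np.array([np.exp(k_S * S[m]) for m in moves])
    new = moves[rng.choice(len(moves), p=w / w.sum())]
    occupied[ped], occupied[new] = False, True
    ped = new
    if ped == exit_cell:
        print("exited at step", step)
        break

print("final position:", ped)
```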

  11. Modeling the effects of large scale turbulence in the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Kaplan, Elliot; Clark, Mike; Rahbarnia, Kian; Nornberg, Mark; Taylor, Zane; Rasmus, Alex; Forest, Cary; Spence, Erik

    2011-10-01

    Early experiments in the Madison Dynamo Experiment (MDE) demonstrated the existence of electric currents which correspond to the α and β effects of mean field MHD, i.e. currents driven parallel to B and turbulent resistivity, respectively. A magnetic dipole moment was measured parallel to the symmetry axis of the flow (α) and the induced toroidal field was less than half what would be expected from the mean flow (β). Traditionally, mean field theory requires a large separation in scale between the mean magnetic field and turbulent eddies in the conductive medium. However, the recent campaign on the MDE eliminated these effects when a baffle was added to remove the largest scale turbulent eddies. A model is presented that builds α- and β-like effects from these large scale eddies without any assumption of scale separation.

  12. Large-scale hydrological modelling by using modified PUB recommendations: the India-HYPE case

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, I. G.; Arheimer, B.

    2015-11-01

    The scientific initiative Prediction in Ungauged Basins (PUB) (2003-2012 by the IAHS) put considerable effort into improving the reliability of hydrological models to predict flow response in ungauged rivers. PUB's collective experience advanced hydrologic science and defined guidelines to make predictions in catchments without observed runoff data. At present, there is increased interest in applying catchment models to large domains and large data samples in a multi-basin manner, to explore emerging spatial patterns or learn from comparative hydrology. However, such modelling involves additional sources of uncertainty caused by the inconsistency between input data sets, in particular regional and global databases. This may lead to inaccurate model parameterisation and erroneous process understanding. In order to bridge the gap between the best practices for flow predictions in single catchments and in multi-basins at the large scale, we present a further developed and slightly modified version of the recommended best practices for PUB by Takeuchi et al. (2013). By using examples from a recent HYPE (Hydrological Predictions for the Environment) hydrological model set-up across 6000 subbasins for the Indian subcontinent, named India-HYPE v1.0, we explore the PUB recommendations, identify challenges and recommend ways to overcome them. We describe the work process related to (a) errors and inconsistencies in global databases, unknown human impacts, and poor data quality; (b) robust approaches to identify model parameters using a stepwise calibration approach, remote sensing data, expert knowledge, and catchment similarities; and (c) evaluation based on flow signatures and performance metrics, using both multiple criteria and multiple variables, and independent gauges for "blind tests". The results show that despite the strong physiographical gradient over the subcontinent, a single model can describe the spatial variability in dominant hydrological processes at the

  13. Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement

    ERIC Educational Resources Information Center

    Zheng, Xiaohui

    2009-01-01

    The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…

  14. Multi-variate spatial explicit constraining of a large scale hydrological model

    NASA Astrophysics Data System (ADS)

    Rakovec, Oldrich; Kumar, Rohini; Samaniego, Luis

    2016-04-01

    Increased availability and quality of near real-time data should lead to a better understanding of the predictive skill of distributed hydrological models. Nevertheless, prediction of regional-scale water fluxes and states remains a great challenge for the scientific community. Large-scale hydrological models are used for prediction of soil moisture, evapotranspiration and other related water states and fluxes. They are usually constrained against river discharge, which is an integral variable. Rakovec et al. (2016) recently demonstrated that constraining model parameters against river discharge is a necessary, but not a sufficient, condition. Therefore, we further aim at scrutinizing the appropriate incorporation of readily available information into a hydrological model that may help to improve the realism of hydrological processes. It is important to analyze how complementary datasets besides observed streamflow and related signature measures can improve model skill of internal model variables during parameter estimation. Among the products suitable for further scrutiny are, for example, the GRACE satellite observations. Recent developments of using this dataset in a multivariate fashion to complement traditionally used streamflow data within the distributed model mHM (www.ufz.de/mhm) are presented. The study domain consists of 80 European basins, which cover a wide range of distinct physiographic and hydrologic regimes. A first-order data quality check ensures that heavily human-influenced basins are eliminated. For river discharge simulations we show that model performance of discharge remains unchanged when complemented by information from the GRACE product (both daily and monthly time steps). Moreover, the GRACE complementary data lead to consistent and statistically significant improvements in evapotranspiration estimates, which are evaluated using an independent gridded FLUXNET product. We also show that the choice of the objective function used to estimate
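
    One simple way to picture the multivariate constraining described above is a combined objective that weights a discharge skill score against agreement with a GRACE-like total-water-storage series. The sketch below uses synthetic series, an assumed Nash-Sutcliffe/correlation combination, and assumed weights; it is not the actual mHM objective function.

```python
# Sketch of a multi-variable calibration objective (illustrative weights and
# synthetic data; not the objective function used in the record above).
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of simulated vs observed discharge."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def combined_objective(q_sim, q_obs, tws_sim, tws_obs, w_q=0.6, w_tws=0.4):
    """Higher is better; to be maximized by the parameter-estimation loop."""
    r_tws = np.corrcoef(tws_sim, tws_obs)[0, 1]
    return w_q * nse(q_sim, q_obs) + w_tws * r_tws

# Synthetic monthly series standing in for one basin.
rng = np.random.default_rng(5)
t = np.arange(120)
q_obs = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)
q_sim = q_obs + rng.normal(0, 1.0, t.size)
tws_obs = 50 * np.sin(2 * np.pi * (t - 2) / 12) + rng.normal(0, 5, t.size)
tws_sim = tws_obs + rng.normal(0, 10, t.size)

print("combined objective:", round(combined_objective(q_sim, q_obs, tws_sim, tws_obs), 3))
```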

  15. Vertical Distributions of Sulfur Species Simulated by Large Scale Atmospheric Models in COSAM: Comparison with Observations

    SciTech Connect

    Lohmann, U.; Leaitch, W. R.; Barrie, Leonard A.; Law, K.; Yi, Y.; Bergmann, D.; Bridgeman, C.; Chin, M.; Christensen, J.; Easter, Richard C.; Feichter, J.; Jeuken, A.; Kjellstrom, E.; Koch, D.; Land, C.; Rasch, P.; Roelofs, G.-J.

    2001-11-01

    A comparison of large-scale models simulating atmospheric sulfate aerosols (COSAM) was conducted to increase our understanding of global distributions of sulfate aerosols and precursors. Earlier model comparisons focused on wet deposition measurements and sulfate aerosol concentrations in source regions at the surface. They found that different models simulated the observed sulfate surface concentrations mostly within a factor of two, but that the simulated column burdens and vertical profiles were very different amongst different models. In the COSAM exercise, one aspect is the comparison of sulfate aerosol and precursor gases above the surface. Vertical profiles of SO2, SO42-, oxidants and cloud properties were measured by aircraft during the North Atlantic Regional Experiment (NARE) in August/September 1993 off the coast of Nova Scotia and during the Second Eulerian Model Evaluation Field Study (EMEFS II) in central Ontario in March/April 1990. While no single model stands out as being best or worst, the general tendency is that those models simulating the full oxidant chemistry tend to agree best with observations, although differences in transport and treatment of clouds are important as well.

  16. Static proppant-settling characteristics of non-Newtonian fracturing fluids in a large-scale test model

    SciTech Connect

    McMechan, D.E.; Shah, S.N. )

    1991-08-01

    Large-scale testing of the settling behavior of proppants in fracturing fluids was conducted with a slot configuration to model realistically the conditions observed in a hydraulic fracture. The test apparatus consists of a 1/2 × 8-in. (1.3 × 20.3-cm) rectangular slot 14 1/2 ft (4.4 m) high, faced with Plexiglas and equipped with pressure taps at 1-ft (0.3-m) intervals. This configuration allows both qualitative visual observations and quantitative density measurements for calculation of proppant concentrations and settling velocities. In this paper, the authors examine uncrosslinked hydroxypropyl guar (HPG) and hydroxyethylcellulose (HEC) fluids, as well as crosslinked guar, HPG, and carboxymethyl HPG (CMHPG) systems. Sand loadings of 2 to 15 lbm/gal (240 to 1797 kg/m³) (3 to 40 vol% of solids) were tested. Experimental results were compared with the predictions of existing particle-settling models for a 40-lbm/1,000-gal (4.8-kg/m³) HPG fluid system.

  17. Non-intrusive Ensemble Kalman filtering for large scale geophysical models

    NASA Astrophysics Data System (ADS)

    Amour, Idrissa; Kauranne, Tuomo

    2016-04-01

    Advanced data assimilation techniques, such as variational assimilation methods, present often challenging implementation issues for large-scale models, both because of computational complexity and because of complexity of implementation. We present a non-intrusive wrapper library that addresses this problem by isolating the direct model and the linear algebra employed in data assimilation from each other completely. In this approach we have adopted a hybrid Variational Ensemble Kalman filter that combines Ensemble propagation with a 3DVAR analysis stage. The inverse problem of state and covariance propagation from prior to posterior estimates is thereby turned into a time-independent problem. This feature allows the linear algebra and minimization steps required in the variational step to be conducted outside the direct model and no tangent linear or adjoint codes are required. Communication between the model and the assimilation module is conducted exclusively via standard input and output files of the model. This non-intrusive approach is tested with the comprehensive 3D lake and shallow sea model COHERENS that is used to forecast and assimilate turbidity in lake Säkylän Pyhäjärvi in Finland, using both sparse satellite images and continuous real-time point measurements as observations.
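
    The core ensemble analysis step underlying approaches like the one above can be sketched compactly. The code below implements only a basic stochastic (perturbed-observation) ensemble Kalman filter update on a toy state; the record itself uses a hybrid Variational Ensemble Kalman filter wrapped non-intrusively around COHERENS via the model's input and output files, which is not reproduced here.

```python
# Basic stochastic EnKF analysis step on a toy problem (illustrative; not the
# hybrid Variational Ensemble Kalman filter or the COHERENS coupling).
import numpy as np

rng = np.random.default_rng(6)

def enkf_analysis(X, y, H, R):
    """X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: observation operator; R: observation-error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)         # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)      # observation-space anomalies
    Pf_Ht = A @ HA.T / (n_ens - 1)                # cross covariance P^f H^T
    S = HA @ HA.T / (n_ens - 1) + R               # innovation covariance
    K = Pf_Ht @ np.linalg.inv(S)                  # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - HX)                       # perturbed-observation update

# Toy example: 20-variable state, 3 point observations, 50 members.
n, m, n_ens = 20, 3, 50
truth = np.sin(np.linspace(0, 2 * np.pi, n))
X = truth[:, None] + rng.normal(0, 0.5, (n, n_ens))
H = np.zeros((m, n)); H[0, 2] = H[1, 10] = H[2, 17] = 1.0
R = 0.05 * np.eye(m)
y = H @ truth + rng.multivariate_normal(np.zeros(m), R)

Xa = enkf_analysis(X, y, H, R)
print("prior RMSE:    ", round(float(np.sqrt(((X.mean(1) - truth) ** 2).mean())), 3))
print("posterior RMSE:", round(float(np.sqrt(((Xa.mean(1) - truth) ** 2).mean())), 3))
```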

  18. Representations of the Nordic Seas overflows and their large scale climate impact in coupled models

    NASA Astrophysics Data System (ADS)

    Wang, He; Legg, Sonya A.; Hallberg, Robert W.

    2015-02-01

    The sensitivity of large-scale ocean circulation and climate to overflow representation is studied using coupled climate models, motivated by the differences between two models that differ only in their ocean components: CM2G (which uses an isopycnal-coordinate ocean model) and CM2M (which uses a z-coordinate ocean model). Analysis of the control simulations of the two models shows that the Atlantic Meridional Overturning Circulation (AMOC) and the North Atlantic climate have some differences, which may be related to the representation of overflow processes. Firstly, in CM2G, as in the real world, overflows have two branches flowing out of the Nordic Seas, to the east and west of Iceland, respectively, while only the western branch is present in CM2M. This difference in overflow location results in different horizontal circulation in the North Atlantic. Secondly, the diapycnal mixing in the overflow downstream region is much larger in CM2M than in CM2G, which affects the entrainment and product water properties. Two sensitivity experiments are conducted in CM2G to isolate the effect of these two model differences: in the first experiment, the outlet of the eastern branch of the overflow is blocked, and the North Atlantic horizontal circulation is modified due to the absence of the eastern branch of the overflow, although the AMOC changes little; in the second experiment, the diapycnal mixing downstream of the overflow is enhanced, resulting in changes in the structure and magnitude of the AMOC.

  19. Analyzing the prediction error of large scale Vis-NIR spectroscopic models

    NASA Astrophysics Data System (ADS)

    Stevens, Antoine; Nocita, Marco; Montanarella, Luca; van Wesemael, Bas

    2013-04-01

    Based on the LUCAS soil spectral library (~20,000 samples distributed over 23 EU countries), we developed multivariate calibration models (model trees) for estimating the SOC content from visible and near infrared reflectance (Vis-NIR) spectra. The root mean square error of validation of these models ranged from 4 to 15 g C kg^-1. Prediction accuracy is usually negatively related to sample heterogeneity in a given library, so that large-scale databases typically demonstrate low prediction accuracy compared to local-scale studies. This is inherent to the empirical nature of the approach, which cannot accommodate well the changing and scale-dependent relationship between Vis-NIR spectra and soil properties. In our study, we analyzed the effect of key soil properties and environmental covariates (land cover) on the SOC prediction accuracy of the spectroscopic models. It is shown that mineralogy as well as soil texture have large impacts on prediction accuracy and that pedogenetic factors that are easily obtainable if the samples are geo-referenced can be used as input in the spectroscopic models to improve model accuracy.

  20. A modelling case study of a large-scale cirrus in the tropical tropopause layer

    NASA Astrophysics Data System (ADS)

    Podglajen, Aurélien; Plougonven, Riwal; Hertzog, Albert; Legras, Bernard

    2016-03-01

    We use the Weather Research and Forecasting (WRF) model to simulate a large-scale tropical tropopause layer (TTL) cirrus in order to understand the formation and life cycle of the cloud. This cirrus event has been previously described through satellite observations by Taylor et al. (2011). Comparisons of the simulated and observed cirrus show a fair agreement and validate the reference simulation regarding cloud extension, location and life time. The validated simulation is used to understand the causes of cloud formation. It is shown that several cirrus clouds successively form in the region due to adiabatic cooling and large-scale uplift rather than from convective anvils. The structure of the uplift is tied to the equatorial response (equatorial wave excitation) to a potential vorticity intrusion from the midlatitudes. Sensitivity tests are then performed to assess the relative importance of the choice of the microphysics parameterization and of the initial and boundary conditions. The initial dynamical conditions (wind and temperature) essentially control the horizontal location and area of the cloud. However, the choice of the microphysics scheme influences the ice water content and the cloud vertical position. Last, the fair agreement with the observations allows us to estimate the cloud impact on the TTL in the simulations. The cirrus clouds have a small but not negligible impact on the radiative budget of the local TTL. However, for this particular case, the cloud radiative heating does not significantly influence the simulated dynamics. This result is due to (1) the lifetime of air parcels in the cloud system, which is too short to significantly influence the dynamics, and (2) the fact that induced vertical motions would be comparable to or smaller than the typical mesoscale motions present. Finally, the simulation also provides an estimate of the vertical redistribution of water by the cloud and the results emphasize the importance in our case of both

  1. A modelling case study of a large-scale cirrus in the tropical tropopause layer

    NASA Astrophysics Data System (ADS)

    Podglajen, A.; Plougonven, R.; Hertzog, A.; Legras, B.

    2015-11-01

    We use the Weather Research and Forecasting (WRF) model to simulate a large-scale tropical tropopause layer (TTL) cirrus, in order to understand the formation and life cycle of the cloud. This cirrus event has been previously described through satellite observations by Taylor et al. (2011). Comparisons of the simulated and observed cirrus show a fair agreement, and validate the reference simulation regarding cloud extension, location and life time. The validated simulation is used to understand the causes of cloud formation. It is shown that several cirrus clouds successively form in the region due to adiabatic cooling and large-scale uplift rather than from ice lofting from convective anvils. The equatorial response (equatorial wave excitation) to a midlatitude potential vorticity (PV) intrusion structures the uplift. Sensitivity tests are then performed to assess the relative importance of the choice of the microphysics parametrisation and of the initial and boundary conditions. The initial dynamical conditions (wind and temperature) essentially control the horizontal location and area of the cloud. On the other hand, the choice of the microphysics scheme influences the ice water content and the cloud vertical position. Last, the fair agreement with the observations allows us to estimate the cloud impact on the TTL in the simulations. The cirrus clouds have a small but not negligible impact on the radiative budget of the local TTL. However, the cloud radiative heating does not significantly influence the simulated dynamics. The simulation also provides an estimate of the vertical redistribution of water by the cloud, and the results emphasize the importance in our case of both re- and dehydration in the vicinity of the cirrus.

  2. Development of a realistic human airway model.

    PubMed

    Lizal, Frantisek; Elcner, Jakub; Hopke, Philip K; Jedelsky, Jan; Jicha, Miroslav

    2012-03-01

    Numerous models of human lungs with various levels of idealization have been reported in the literature; consequently, results acquired using these models are difficult to compare to in vivo measurements. We have developed a set of model components based on realistic geometries, which permits the analysis of the effects of subsequent model simplification. A realistic digital upper airway geometry, lacking only an oral cavity, was created and proved suitable both for computational fluid dynamics (CFD) simulations and for the fabrication of physical models. Subsequently, an oral cavity was added to the tracheobronchial geometry. The airway geometry including the oral cavity was adjusted to enable fabrication of a semi-realistic model. Five physical models were created based on these three digital geometries. Two optically transparent models, one with and one without the oral cavity, were constructed for flow velocity measurements; two realistic segmented models, one with and one without the oral cavity, were constructed for particle deposition measurements; and a semi-realistic model with glass cylindrical airways was developed for optical measurements of flow velocity and in situ particle size measurements. One-dimensional phase Doppler anemometry measurements were made and compared to the CFD calculations for this model, and good agreement was obtained. PMID:22558834

  3. Estimating the impact of satellite observations on the predictability of large-scale hydraulic models

    NASA Astrophysics Data System (ADS)

    Andreadis, Konstantinos M.; Schumann, Guy J.-P.

    2014-11-01

    Large-scale hydraulic models are able to predict flood characteristics and are being used in forecasting applications. In this work, the potential value of satellite observations to initialize hydraulic forecasts is explored, using the Ensemble Sensitivity method. The impact estimation is based on the Local Ensemble Transform Kalman Filter, allowing the forecast error reductions to be computed without additional model runs. The experimental design consisted of two configurations of the LISFLOOD-FP model over the Ohio River basin: a baseline simulation represents a 'best effort' model using observations for parameters and boundary conditions, whereas the second simulation consists of erroneous parameters and boundary conditions. Results showed that the forecast skill was improved for water heights up to lead times of 11 days (error reductions ranged from 0.2 to 0.6 m/km), while even partial observations of the river contained information for the entire river's water surface profile and allowed forecasting 5 to 7 days ahead. Moreover, water height observations had a negative impact on discharge forecasts for longer lead times, although they did improve forecast skill for 1 and 3 days (up to 60 m³/s/km). Lastly, the inundated area forecast errors were reduced overall for all examined lead times. However, examination of a specific flood event revealed the limits of predictability, suggesting that model errors or inflows were more important than initial conditions.

  4. A Comparison of Large-Scale Atmospheric Sulphate Aerosol Models (COSAM): Overview and Highlights

    SciTech Connect

    Barrie, Leonard A.; Yi, Y.; Leaitch, W. R.; Lohmann, U.; Kasibhatla, P.; Roelofs, G.-J.; Wilson, J.; Mcgovern, F.; Benkovitz, C.; Melieres, M. A.; Law, K.; Prospero, J.; Kritz, M.; Bergmann, D.; Bridgeman, C.; Chin, M.; Christiansen, J.; Easter, Richard C.; Feichter, J.; Land, C.; Jeuken, A.; Kjellstrom, E.; Koch, D.; Rasch, P.

    2001-11-01

    The comparison of large-scale sulphate aerosol models study (COSAM) compared the performance of atmospheric models with each other and observations. It involved: (i) design of a standard model experiment for the world wide web, (ii) 10 model simulations of the cycles of sulphur and 222Rn/210Pb conforming to the experimental design, (iii) assemblage of the best available observations of atmospheric SO4=, SO2 and MSA and (iv) a workshop in Halifax, Canada to analyze model performance and future model development needs. The analysis presented in this paper and two companion papers by Roelofs, and Lohmann and co-workers examines the variance between models and observations, discusses the sources of that variance and suggests ways to improve models. Variations between models in the export of SOx from Europe or North America are not sufficient to explain an order of magnitude variation in spatial distributions of SOx downwind in the northern hemisphere. On average, models predicted surface level seasonal mean SO4= aerosol mixing ratios better (most within 20%) than SO2 mixing ratios (over-prediction by factors of 2 or more). Results suggest that vertical mixing from the planetary boundary layer into the free troposphere in source regions is a major source of uncertainty in predicting the global distribution of SO4= aerosols in climate models today. For improvement, it is essential that globally coordinated research efforts continue to address emissions of all atmospheric species that affect the distribution and optical properties of ambient aerosols in models and that a global network of observations be established that will ultimately produce a world aerosol chemistry climatology.

  5. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist

    PubMed Central

    Latif, Quresh S; Saab, Victoria A; Dudley, Jonathan G; Hollenbeck, Jeff P

    2013-01-01

    help guide managers attempting to balance salvage logging with habitat conservation in burned-forest landscapes where black-backed woodpecker nest location data are not immediately available. Ensemble modeling represents a promising tool for guiding conservation of large-scale disturbance specialists. PMID:24340177

  6. Modelling high angle wave instability and the generation of large scale shoreline sand waves

    NASA Astrophysics Data System (ADS)

    van den Berg, Niels; Falqués, Albert; Ribas, Francesca

    2010-05-01

    Sandy coasts are dynamic systems, shaped by the continuous interaction between hydrodynamics and morphology. On large time and spatial scales it is commonly assumed that the diffusive action of alongshore wave-driven sediment transport dominates and maintains a stable and straight shoreline. Ashton et al. (2001), however, showed with a cellular model that for high-angle off-shore wave incidence a coastline can be unstable and that shoreline sand waves can develop due to the feedback of shoreline changes into the wave field. These shoreline undulations can migrate and merge to form large-scale capes and spits. Falqués and Calvete (2005) confirmed the mechanism of shoreline instability and shoreline sand wave formation with a linear stability analysis. They found a typical wavelength in the range 4-15 km and a characteristic growth time of a few years. Both studies, however, have their limitations. Ashton et al. (2001) assume rectilinear depth contours and an infinite cross-shore extent of shoreline changes in the bathymetry. The linear stability analysis by Falqués and Calvete (2005) can only be applied for small-amplitude shoreline changes. Both studies neglect cross-shore dynamics, as bathymetric changes associated with shoreline changes are assumed to be instantaneous. In the current study, a nonlinear morphodynamic model is used. In this model the bathymetric lines are curvilinear and the cross-shore extent of shoreline changes in the bathymetry is dynamic due to the introduction of cross-shore dynamics. The cross-shore dynamics are parameterized by assuming a relaxation to an equilibrium cross-shore profile. The relaxation is controlled by a diffusivity which is proportional to wave energy dissipation. The new model is equivalent to N-lines models but applies sediment conservation like 2DH models instead of just moving contour lines. The main objective of this study is to extend the work of Falqués and Calvete (2005) and to study in more detail the mechanism of

  7. Statistical modeling of large-scale signal path loss in underwater acoustic networks.

    PubMed

    Llor, Jesús; Malumbres, Manuel Perez

    2013-01-01

    In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate a large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean follows a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.). PMID:23396190
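
    The ray-tracing ensemble itself is not reproduced here, but the reported statistical form can be sketched as follows; the reference loss, path-loss exponent, and spread are invented values, not the parameters inferred in the paper.

```python
import numpy as np

# Illustrative statistical path-loss model (all parameter values are assumptions):
# mean loss follows a log-distance law, and the deviation around it is Gaussian
# in dB, i.e., log-normal in linear units.

def transmission_loss_db(distance_m, n_samples=1000,
                         loss_ref_db=40.0, path_loss_exp=1.5,
                         d_ref_m=1.0, sigma_db=5.0, rng=None):
    rng = rng or np.random.default_rng()
    mean_db = loss_ref_db + 10.0 * path_loss_exp * np.log10(distance_m / d_ref_m)
    return mean_db + rng.normal(scale=sigma_db, size=n_samples)

samples = transmission_loss_db(500.0)
print(f"mean {samples.mean():.1f} dB, std {samples.std():.1f} dB")
```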

  8. Statistical Modeling of Large-Scale Signal Path Loss in Underwater Acoustic Networks

    PubMed Central

    Llor, Jesús; Malumbres, Manuel Perez

    2013-01-01

    In an underwater acoustic channel, the propagation conditions are known to vary in time, causing the deviation of the received signal strength from the nominal value predicted by a deterministic propagation model. To facilitate a large-scale system design in such conditions (e.g., power allocation), we have developed a statistical propagation model in which the transmission loss is treated as a random variable. By applying repetitive computation to the acoustic field, using ray tracing for a set of varying environmental conditions (surface height, wave activity, small node displacements around nominal locations, etc.), an ensemble of transmission losses is compiled and later used to infer the statistical model parameters. A reasonable agreement is found with a log-normal distribution, whose mean follows a log-distance law and whose variance appears to be constant for a certain range of inter-node distances in a given deployment location. The statistical model is deemed useful for higher-level system planning, where simulation is needed to assess the performance of candidate network protocols under various resource allocation policies, i.e., to determine the transmit power and bandwidth allocation necessary to achieve a desired level of performance (connectivity, throughput, reliability, etc.). PMID:23396190

  9. Morphotectonic evolution of passive margins undergoing active surface processes: large-scale experiments using numerical models.

    NASA Astrophysics Data System (ADS)

    Beucher, Romain; Huismans, Ritske S.

    2016-04-01

    Extension of the continental lithosphere can lead to the formation of a wide range of rifted margin styles with contrasting tectonic and geomorphological characteristics. It is now understood that many of these characteristics depend on the manner in which extension is distributed, which in turn depends on (among other factors) rheology, structural inheritance, thermal structure and surface processes. The relative importance and the possible interactions of these controlling factors are still largely unknown. Here we investigate the feedbacks between tectonics and the transfers of material at the surface resulting from erosion, transport, and sedimentation. We use large-scale (1200 x 600 km) and high-resolution (~1 km) numerical experiments coupling a 2D upper-mantle-scale thermo-mechanical model with a plan-form 2D surface processes model (SPM). We test the sensitivity of the coupled models to varying crust-lithosphere rheology and erosional efficiency, ranging from no erosion to very efficient erosion. We discuss how fast, when and how the topography of the continents evolves and how it can be compared to actual passive-margin escarpment morphologies. We show that although tectonics is the main factor controlling the rift geometry, mass transfer at the surface affects the timing of faulting and the initiation of sea-floor spreading. We discuss how such models may help to understand the evolution of high-elevation passive margins around the world.

  10. Gravitational waves during inflation from a 5D large-scale repulsive gravity model

    NASA Astrophysics Data System (ADS)

    Reyes, Luz M.; Moreno, Claudia; Madriz Aguilar, José Edgar; Bellini, Mauricio

    2012-10-01

    We investigate, in the transverse traceless (TT) gauge, the relic background of gravitational waves generated during the early inflationary stage, in the framework of a large-scale repulsive gravity model. We calculate the spectrum of the tensor metric fluctuations of an effective 4D Schwarzschild-de Sitter metric on cosmological scales. This metric is obtained after implementing a planar coordinate transformation on a 5D Ricci-flat metric solution, in the context of a non-compact Kaluza-Klein theory of gravity. We find that the spectrum is nearly scale invariant under certain conditions. One interesting aspect of this model is that it is possible to derive the dynamical field equations for the tensor metric fluctuations, valid not just at cosmological scales but also at astrophysical scales, from the same theoretical model. The astrophysical and cosmological scales are determined by the gravity-antigravity radius, a natural length scale of the model that indicates where gravity becomes repulsive in nature.

  11. Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models

    SciTech Connect

    Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.

    1987-01-01

    The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case.
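
    The GRESS/ADGEN systems are not reproduced here, but the contrast the abstract draws, derivative-based propagation from a handful of model runs versus many statistical samples, can be sketched for a made-up flow model; the function and parameter distributions below are assumptions.

```python
import numpy as np

# Contrast sketched in the abstract: propagate parameter uncertainty either by
# brute-force sampling or by a first-order (derivative-based) expansion.
# The "borehole-like" flow model below is a made-up stand-in.

def flow_rate(k, h, r):                     # hypothetical model Q(k, h, r)
    return 2.0 * np.pi * k * h / np.log(r)

mean = np.array([1.0e-4, 10.0, 100.0])      # assumed parameter means
std = np.array([2.0e-5, 1.0, 10.0])         # assumed parameter standard deviations

# Monte Carlo reference (many model executions).
rng = np.random.default_rng(1)
samples = rng.normal(mean, std, size=(50_000, 3))
q_mc = flow_rate(*samples.T)

# Derivative-based (first-order) propagation: a handful of executions suffice
# to estimate the gradient by finite differences.
eps = 1e-6 * mean
grad = np.array([(flow_rate(*(mean + eps[i] * np.eye(3)[i])) -
                  flow_rate(*mean)) / eps[i] for i in range(3)])
var_linear = np.sum((grad * std) ** 2)      # assumes independent parameters

print("MC std:", q_mc.std(), " first-order std:", np.sqrt(var_linear))
```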

  12. Modelling potential changes in marine biogeochemistry due to large-scale offshore wind farms

    NASA Astrophysics Data System (ADS)

    van der Molen, Johan; Rees, Jon; Limpenny, Sian

    2013-04-01

    Large-scale renewable energy generation by offshore wind farms may lead to changes in marine ecosystem processes through the following mechanism: 1) wind-energy extraction leads to a reduction in local surface wind speeds; 2) these lead to a reduction in the local wind wave height; 3) as a consequence there's a reduction in SPM resuspension and concentrations; 4) this results in an improvement in under-water light regime, which 5) may lead to increased primary production, which subsequently 6) cascades through the ecosystem. A three-dimensional coupled hydrodynamics-biogeochemistry model (GETM_ERSEM) was used to investigate this process for a hypothetical wind farm in the central North Sea, by running a reference scenario and a scenario with a 10% reduction (as was found in a case study of a small farm in Danish waters) in surface wind velocities in the area of the wind farm. The ERSEM model included both pelagic and benthic processes. The results showed that, within the farm area, the physical mechanisms were as expected, but with variations in the magnitude of the response depending on the ecosystem variable or exchange rate between two ecosystem variables (3-28%, depending on variable/rate). Benthic variables tended to be more sensitive to the changes than pelagic variables. Reduced, but noticeable changes also occurred for some variables in a region of up to two farm diameters surrounding the wind farm. An additional model run in which the 10% reduction in surface wind speed was applied only for wind speeds below the generally used threshold of 25 m/s for operational shut-down showed only minor differences from the run in which all wind speeds were reduced. These first results indicate that there is potential for measurable effects of large-scale offshore wind farms on the marine ecosystem, mainly within the farm but for some variables up to two farm diameters away. However, the wave and SPM parameterisations currently used in the model are crude and need to be

  13. A multigrid integral equation method for large-scale models with inhomogeneous backgrounds

    NASA Astrophysics Data System (ADS)

    Endo, Masashi; Čuma, Martin; Zhdanov, Michael S.

    2008-12-01

    We present a multigrid integral equation (IE) method for three-dimensional (3D) electromagnetic (EM) field computations in large-scale models with inhomogeneous background conductivity (IBC). This method combines the advantages of the iterative IBC IE method and the multigrid quasi-linear (MGQL) approximation. The new EM modelling method solves the corresponding systems of linear equations within the domains of anomalous conductivity, Da, and inhomogeneous background conductivity, Db, separately on coarse grids. The observed EM fields in the receivers are computed using grids with fine discretization. The developed MGQL IBC IE method can also be applied iteratively by taking into account the return effect of the anomalous field inside the domain of the background inhomogeneity Db, and vice versa. The iterative process described above is continued until we reach the required accuracy of the EM field calculations in both domains, Da and Db. The method was tested for modelling the marine controlled-source electromagnetic field for complex geoelectrical structures with hydrocarbon petroleum reservoirs and a rough sea-bottom bathymetry.

  14. Large scale cratering of the lunar highlands - Some Monte Carlo model considerations

    NASA Technical Reports Server (NTRS)

    Hoerz, F.; Gibbons, R. V.; Hill, R. E.; Gault, D. E.

    1976-01-01

    In an attempt to understand the scale and intensity of the moon's early, large-scale meteoritic bombardment, a Monte Carlo computer model simulated the effects of all lunar craters greater than 800 m in diameter, tracking, for example, the number of times specific fractions of the entire lunar surface were cratered and to what depths. The model used observed crater size frequencies and crater geometries compatible with the suggestions of Pike (1974) and Dence (1973); it simulated bombardment histories up to a factor of 10 more intense than those reflected by the present-day crater number density of the lunar highlands. For the present-day cratering record the model yields the following: approximately 25% of the entire lunar surface has not been cratered deeper than 100 m; 50% may have been cratered to 2-3 km depth; less than 5% of the surface has been cratered deeper than about 15 km. A typical highland site has suffered 1-2 impacts. Corresponding values for more intense bombardment histories are also presented, though it must remain uncertain what the absolute intensity of the moon's early meteorite bombardment was.
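
    As a rough illustration of the bookkeeping such a Monte Carlo model performs (not the authors' code, and with an invented size-frequency law and depth scaling), one can scatter craters over a surface grid and tally how deeply each cell has been excavated.

```python
import numpy as np

# Toy Monte Carlo cratering bookkeeping (invented parameters): scatter craters
# with a power-law size distribution over a periodic 1D surface and record, for
# each cell, the deepest excavation it has experienced.

rng = np.random.default_rng(2)
n_cells = 512
surface_km = 1000.0
deepest = np.zeros(n_cells)

n_craters = 20_000
diam_km = 0.8 * (1.0 - rng.random(n_craters)) ** (-0.5)   # D >= 0.8 km, power law
centers = rng.random(n_craters) * surface_km
depth_km = 0.2 * diam_km                                   # assumed depth/diameter ratio

x = (np.arange(n_cells) + 0.5) * surface_km / n_cells
for c, d, dep in zip(centers, diam_km, depth_km):
    hit = np.abs((x - c + surface_km / 2) % surface_km - surface_km / 2) < d / 2
    deepest[hit] = np.maximum(deepest[hit], dep)

for threshold in (0.1, 2.0, 15.0):
    print(f"fraction cratered deeper than {threshold:>4} km: {(deepest > threshold).mean():.2f}")
```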

  15. Large-scale functional models of visual cortex for remote sensing

    SciTech Connect

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E; Swaminarayan, Sriram; Bettencourt, Luis; Landecker, Will

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to enormous opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete' along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  16. Influenza epidemic spread simulation for Poland — a large scale, individual based model study

    NASA Astrophysics Data System (ADS)

    Rakowski, Franciszek; Gruziel, Magdalena; Bieniasz-Krzywiec, Łukasz; Radomski, Jan P.

    2010-08-01

    In this work the construction of an agent-based model for studying the effects of an influenza epidemic in large-scale (38 million individuals) stochastic simulations, together with the resulting scenarios of disease spread in Poland, is reported. Simple transportation rules were employed to mimic individuals’ travels in dynamic route-changing schemes, allowing for infection spread during a journey. Parameter space was checked for stable behaviour, especially with respect to variability in the effective infection transmission rate. Although the model reported here is based on quite simple assumptions, it allowed us to observe two different types of epidemic scenario, characteristic of urban and of rural areas, respectively. This differentiates it from the results obtained in the analogous studies for the UK or US, where settlement and daily commuting patterns are both substantially different and more diverse. The resulting epidemic scenarios from these ABM simulations were compared with simple, differential-equation-based SIR models; both types of results displayed strong similarities. The pDYN software platform developed here is currently used in the next stage of the project to study various epidemic mitigation strategies.
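
    The agent-based model itself is far too large to reproduce, but the differential-equation benchmark the scenarios were compared against is compact; a minimal SIR integration with made-up rate constants looks like this.

```python
import numpy as np

# Minimal SIR benchmark of the kind the ABM results were compared against.
# beta and gamma are illustrative values, not fitted to the Polish simulations.

def sir(beta=0.3, gamma=0.1, n=38_000_000, i0=100, days=200, dt=0.1):
    s, i, r = n - i0, i0, 0.0
    history = []
    for step in range(int(days / dt)):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        if step % int(1 / dt) == 0:          # store one sample per simulated day
            history.append((s, i, r))
    return np.array(history)

traj = sir()
print("peak infected:", int(traj[:, 1].max()))
```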

  17. Development of explosive event scale model testing capability at Sandia's large scale centrifuge facility

    SciTech Connect

    Blanchat, T.K.; Davie, N.T.; Calderone, J.J.

    1998-02-01

    Geotechnical structures such as underground bunkers, tunnels, and building foundations are subjected to stress fields produced by the gravity load on the structure and/or any overlying strata. These stress fields may be reproduced on a scaled model of the structure by proportionally increasing the gravity field through the use of a centrifuge. This technology can then be used to assess the vulnerability of various geotechnical structures to explosive loading. Applications of this technology include assessing the effectiveness of earth penetrating weapons, evaluating the vulnerability of various structures, counter-terrorism, and model validation. This document describes the development of expertise in scale model explosive testing on geotechnical structures using Sandia's large scale centrifuge facility. This study focused on buried structures such as hardened storage bunkers or tunnels. Data from this study were used to evaluate the predictive capabilities of existing hydrocodes and structural dynamics codes developed at Sandia National Laboratories (such as Pronto/SPH, Pronto/CTH, and ALEGRA). 7 refs., 50 figs., 8 tabs.

  18. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model.

    PubMed

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s of real-world time, with temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days, aimed at studying the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain. PMID:26973472

  19. A method to search for large-scale concavities in asteroid shape models

    NASA Astrophysics Data System (ADS)

    Devogèle, M.; Rivet, J. P.; Tanga, P.; Bendjoya, Ph.; Surdej, J.; Bartczak, P.; Hanus, J.

    2015-11-01

    Photometric light-curve inversion of minor planets has proven to produce a unique model solution only under the hypothesis that the asteroid is convex. However, it was suggested that the resulting shape model, for the case of non-convex asteroids, is the convex hull of the true non-convex shape. While a convex shape is already useful to provide the overall aspect of the target, much information about real shapes is missed, as we know that asteroids are very irregular. It is commonly accepted that large flat areas sometimes appearing on shapes derived from light curves correspond to concave areas, but this information has not been further explored and exploited so far. We present in this paper a method that makes it possible to predict the presence of concavities from such flat regions. This method analyses the distribution of the local normals to the facets composing shape models to detect abnormally large flat surfaces. In order to test our approach, we consider here its application to a large family of synthetic asteroid shapes, and to real asteroids with large-scale concavities, whose detailed shape is known from other kinds of observations (radar and spacecraft encounters). The method that we propose has proven to be reliable and capable of providing a qualitative indication of the relevance of concavities on well-constrained asteroid shapes derived from purely photometric data sets.
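
    The published algorithm is not reproduced here, but its underlying observation, that convex inversion of a non-convex body tends to yield clusters of facets with nearly parallel normals, can be sketched by grouping facet normals on a toy triangulated shape model; the mesh and thresholds below are assumptions.

```python
import numpy as np

# Sketch: flag suspiciously large "flat" regions on a triangulated shape model
# by grouping facets whose outward normals are nearly parallel.
# The vertices/faces below form a toy octahedron; tolerances are assumptions.

vertices = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                     [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
faces = np.array([[0, 2, 4], [2, 1, 4], [1, 3, 4], [3, 0, 4],
                  [2, 0, 5], [1, 2, 5], [3, 1, 5], [0, 3, 5]])

v0, v1, v2 = (vertices[faces[:, k]] for k in range(3))
normals = np.cross(v1 - v0, v2 - v0)
areas = 0.5 * np.linalg.norm(normals, axis=1)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Greedily group facets whose normals agree to within ~5 degrees and sum their
# areas; a group carrying an unusually large area fraction is a candidate flat region.
cos_tol = np.cos(np.radians(5.0))
assigned = np.full(len(faces), -1)
for i in range(len(faces)):
    if assigned[i] == -1:
        same = (normals @ normals[i]) > cos_tol
        assigned[same & (assigned == -1)] = i

groups = {g: areas[assigned == g].sum() / areas.sum() for g in np.unique(assigned)}
largest = max(groups, key=groups.get)
print(f"largest coherent-normal group covers {groups[largest]:.0%} of the surface")
```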

  20. Real-World-Time Simulation of Memory Consolidation in a Large-Scale Cerebellar Model

    PubMed Central

    Gosui, Masato; Yamazaki, Tadashi

    2016-01-01

    We report development of a large-scale spiking network model of the cerebellum composed of more than 1 million neurons. The model is implemented on graphics processing units (GPUs), which are dedicated hardware for parallel computing. Using 4 GPUs simultaneously, we achieve realtime simulation, in which computer simulation of cerebellar activity for 1 s completes within 1 s in the real-world time, with temporal resolution of 1 ms. This allows us to carry out a very long-term computer simulation of cerebellar activity in a practical time with millisecond temporal resolution. Using the model, we carry out computer simulation of long-term gain adaptation of optokinetic response (OKR) eye movements for 5 days aimed to study the neural mechanisms of posttraining memory consolidation. The simulation results are consistent with animal experiments and our theory of posttraining memory consolidation. These results suggest that realtime computing provides a useful means to study a very slow neural process such as memory consolidation in the brain. PMID:26973472

  1. Aerodynamic characteristics of a large-scale hybrid upper surface blown flap model having four engines

    NASA Technical Reports Server (NTRS)

    Carros, R. J.; Boissevain, A. G.; Aoyagi, K.

    1975-01-01

    Data are presented from an investigation of the aerodynamic characteristics of a large-scale wind-tunnel aircraft model that utilized a hybrid upper-surface blown flap to augment lift. The hybrid concept of this investigation used a portion of the turbofan exhaust air for blowing over the trailing edge flap to provide boundary layer control. The model, tested in the Ames 40- by 80-foot Wind Tunnel, had a 27.5 deg swept wing of aspect ratio 8 and 4 turbofan engines mounted on the upper surface of the wing. The lift of the model was augmented by turbofan exhaust impingement on the wing upper surface and flap system. Results were obtained for three flap deflections, for some variation of engine nozzle configuration and for jet thrust coefficients from 0 to 3.0. Six-component longitudinal and lateral data are presented with four-engine operation and with the critical engine out. In addition, a limited number of cross-plots of the data are presented. All of the tests were made with a downwash rake installed instead of a horizontal tail. Some of these downwash data are also presented.

  2. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    NASA Astrophysics Data System (ADS)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.

  3. Large-scale Folding: Implications For Effective Lithospheric Rheology And Thin Sheet Models.

    NASA Astrophysics Data System (ADS)

    Schmalholz, S. M.; Podladchikov, Yu. Yu.; Burg, J.-P.

    We show that folding of a non-Newtonian layer resting on a homogeneous Newtonian matrix with finite thickness under the influence of gravity can occur by three modes: (i) matrix-controlled folding, dependent on the effective viscosity contrast between layer and matrix, (ii) gravity-controlled folding, dependent on the Argand number (the ratio of the stress caused by gravity to the stress caused by shortening) and (iii) detachment folding, dependent on the ratio of matrix thickness to layer thickness. We construct a phase diagram that defines the transitions between each of the three folding modes. Our priority is transparency of the analytical derivations (e.g. thin-plate versus thick-plate approximations), which permits complete classification of the folding modes involving a minimum number of dimensionless parameters. Accuracy and sensitivity of the analytical results to model assumptions are investigated. In particular, depth-dependence of matrix rheology is only important for folding over a narrow range of material parameters. In contrast, strong depth-dependence of the folding layer viscosity limits applicability of ductile rheology and leads to a viscoelastic transition for layers on the crustal and lithospheric scales. This transition allows estimating the critical elastic thickness of the oceanic lithosphere, which determines whether the oceanic lithosphere deforms in an effectively ductile or elastic manner. Considering applicability conditions of thin viscous sheet models for large-scale lithospheric deformation, derived in terms of the Argand number, our results show that the uplift rates caused by folding (which are neglected by the thin sheet models) are of the same order as the uplift rates caused by layer thickening. This result further indicates that large-scale folding and not crustal thickening was the dominant deformation mode during the evolution of the Himalayan syntaxes. Our theory is applied to estimate the effective thickness of the folded Central Asian
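
    The Argand number invoked above is defined verbally as a stress ratio; one schematic way to write it (with generic placeholders for density, gravity, layer thickness, and a characteristic shortening stress, not the authors' exact nondimensionalization) is:

```latex
% Schematic form of the Argand number as stated in the abstract: the ratio of
% the gravity-induced stress to the stress driving shortening. Symbols
% (rho, g, H, sigma_s) are generic placeholders.
\[
  \mathrm{Ar} \;=\; \frac{\sigma_{\text{gravity}}}{\sigma_{\text{shortening}}}
  \;\sim\; \frac{\rho g H}{\sigma_{s}}
\]
% Loosely, larger Ar favours the gravity-controlled folding mode.
```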

  4. QSAR Modeling Using Large-Scale Databases: Case Study for HIV-1 Reverse Transcriptase Inhibitors.

    PubMed

    Tarasova, Olga A; Urusova, Aleksandra F; Filimonov, Dmitry A; Nicklaus, Marc C; Zakharov, Alexey V; Poroikov, Vladimir V

    2015-07-27

    Large-scale databases are important sources of training sets for various QSAR modeling approaches. Generally, these databases contain information extracted from different sources. This variety of sources can produce inconsistency in the data, defined as sometimes widely diverging activity results for the same compound against the same target. Because such inconsistency can reduce the accuracy of predictive models built from these data, we are addressing the question of how best to use data from publicly and commercially accessible databases to create accurate and predictive QSAR models. We investigate the suitability of commercially and publicly available databases to QSAR modeling of antiviral activity (HIV-1 reverse transcriptase (RT) inhibition). We present several methods for the creation of modeling (i.e., training and test) sets from two, either commercially or freely available, databases: Thomson Reuters Integrity and ChEMBL. We found that the typical predictivities of QSAR models obtained using these different modeling set compilation methods differ significantly from each other. The best results were obtained using training sets compiled for compounds tested using only one method and material (i.e., a specific type of biological assay). Compound sets aggregated by target only typically yielded poorly predictive models. We discuss the possibility of "mix-and-matching" assay data across aggregating databases such as ChEMBL and Integrity and their current severe limitations for this purpose. One of them is the general lack of complete and semantic/computer-parsable descriptions of assay methodology carried by these databases that would allow one to determine mix-and-matchability of result sets at the assay level. PMID:26046311

  5. Modeling long-term, large-scale sediment storage using a simple sediment budget approach

    NASA Astrophysics Data System (ADS)

    Naipal, Victoria; Reick, Christian; Van Oost, Kristof; Hoffmann, Thomas; Pongratz, Julia

    2016-05-01

    Currently, the anthropogenic perturbation of the biogeochemical cycles remains unquantified due to the poor representation of lateral fluxes of carbon and nutrients in Earth system models (ESMs). This lateral transport of carbon and nutrients between terrestrial ecosystems is strongly affected by accelerated soil erosion rates. However, the quantification of global soil erosion by rainfall and runoff, and the resulting redistribution is missing. This study aims at developing new tools and methods to estimate global soil erosion and redistribution by presenting and evaluating a new large-scale coarse-resolution sediment budget model that is compatible with ESMs. This model can simulate spatial patterns and long-term trends of soil redistribution in floodplains and on hillslopes, resulting from external forces such as climate and land use change. We applied the model to the Rhine catchment using climate and land cover data from the Max Planck Institute Earth System Model (MPI-ESM) for the last millennium (here AD 850-2005). Validation is done using observed Holocene sediment storage data and observed scaling between sediment storage and catchment area. We find that the model reproduces the spatial distribution of floodplain sediment storage and the scaling behavior for floodplains and hillslopes as found in observations. After analyzing the dependence of the scaling behavior on the main parameters of the model, we argue that the scaling is an emergent feature of the model and mainly dependent on the underlying topography. Furthermore, we find that land use change is the main contributor to the change in sediment storage in the Rhine catchment during the last millennium. Land use change also explains most of the temporal variability in sediment storage in floodplains and on hillslopes.

  6. Evaluation of large-scale meteorological patterns associated with temperature extremes in the NARCCAP regional climate model simulations

    NASA Astrophysics Data System (ADS)

    Loikith, Paul C.; Waliser, Duane E.; Lee, Huikyo; Neelin, J. David; Lintner, Benjamin R.; McGinnis, Seth; Mearns, Linda O.; Kim, Jinwon

    2015-12-01

    Large-scale meteorological patterns (LSMPs) associated with temperature extremes are evaluated in a suite of regional climate model (RCM) simulations contributing to the North American Regional Climate Change Assessment Program. LSMPs are characterized through composites of surface air temperature, sea level pressure, and 500 hPa geopotential height anomalies concurrent with extreme temperature days. Six of the seventeen RCM simulations are driven by boundary conditions from reanalysis while the other eleven are driven by one of four global climate models (GCMs). Four illustrative case studies are analyzed in detail. Model fidelity in LSMP spatial representation is high for cold winter extremes near Chicago. Winter warm extremes are captured by most RCMs in northern California, with some notable exceptions. Model fidelity is lower for cool summer days near Houston and extreme summer heat events in the Ohio Valley. Physical interpretation of these patterns and identification of well-simulated cases, such as for Chicago, boosts confidence in the ability of these models to simulate days in the tails of the temperature distribution. Results appear consistent with the expectation that the ability of an RCM to reproduce a realistically shaped frequency distribution for temperature, especially at the tails, is related to its fidelity in simulating LSMPs. Each ensemble member is ranked for its ability to reproduce LSMPs associated with observed warm and cold extremes, identifying systematically high performing RCMs and the GCMs that provide superior boundary forcing. The methodology developed here provides a framework for identifying regions where further process-based evaluation would improve the understanding of simulation error and help guide future model improvement and downscaling efforts.
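
    The compositing step described above is simple to sketch: average a gridded anomaly field over the days on which a local temperature series falls in a chosen tail of its distribution. The arrays and the 95th-percentile threshold below are placeholders.

```python
import numpy as np

# Sketch of an LSMP composite: average a gridded anomaly field over the days
# on which local temperature falls in the extreme tail. Data are synthetic.

rng = np.random.default_rng(3)
n_days, ny, nx = 3650, 40, 60
t_local = rng.normal(size=n_days)                  # daily temperature anomaly at a point
field = rng.normal(size=(n_days, ny, nx))          # e.g., SLP or Z500 anomaly grids

hot_days = t_local > np.percentile(t_local, 95)    # warm-extreme days (95th percentile)
composite = field[hot_days].mean(axis=0)           # LSMP composite for warm extremes

print("composite over", hot_days.sum(), "days; field shape", composite.shape)
```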

  7. Predictions of a non-Gaussian model for large scale structure

    SciTech Connect

    Fan, Z.H.; Bardeen, J.M.

    1992-06-26

    A modified CDM model for the origin of structure in the universe, based on an inflation model with two interacting scalar fields, is analyzed to make predictions for the statistical properties of the density and velocity fields and the microwave background anisotropy. The initial gauge-invariant potential ζ, which is defined as ζ = δρ/(ρ + p) + 3φ, where φ is the curvature perturbation amplitude and p is the pressure, is the sum of a Gaussian field φ1 and the square of a Gaussian field φ2. A Harrison-Zel'dovich scale-invariant power spectrum is assumed for φ1, and a log-normal 'peak' power spectrum for φ2. The location and the width of the peak are described by parameters k_c and a, respectively. The model is motivated to some extent by inflation models with two interacting scalar fields, but is mainly interesting as an example of a model whose statistical properties change with scale. On small scales, it is almost identical to a standard scale-invariant Gaussian CDM model. On scales near the location of the peak of the non-Gaussian field, the distributions have long tails in high positive values of the density and velocity fields. Thus, it is easier to get large-scale streaming velocities than in the standard CDM model. The quadrupole amplitude of fluctuations of the cosmic microwave background radiation and the rms variation of the temperature field smoothed with a 10° FWHM Gaussian are calculated; a reasonable agreement is found with the new COBE results.
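
    A one-dimensional toy version of this statistical construction is easy to write down: the field is a Gaussian component with a broad spectrum plus the square of a second Gaussian component whose power is concentrated near a chosen scale; all amplitudes and the peak location are invented.

```python
import numpy as np

# Toy 1D realization of zeta = phi1 + phi2**2, where phi1 has a broad spectrum
# and phi2 has power concentrated near a chosen scale k_c. All amplitudes and
# the peak location/width are illustrative, not the paper's parameters.

rng = np.random.default_rng(4)
n = 4096
k = np.fft.rfftfreq(n)

def gaussian_field(power):
    amp = np.sqrt(power)
    phases = rng.normal(size=k.size) + 1j * rng.normal(size=k.size)
    return np.fft.irfft(amp * phases, n=n)

p1 = np.where(k > 0, 1.0 / np.maximum(k, k[1]), 0.0)                 # broad spectrum
p2 = np.exp(-0.5 * ((np.log(np.maximum(k, k[1])) - np.log(0.02)) / 0.3) ** 2)  # peak near k_c

zeta = gaussian_field(p1) + 0.5 * gaussian_field(p2) ** 2

print("skewness:", ((zeta - zeta.mean()) ** 3).mean() / zeta.std() ** 3)
```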

  9. Prospective large-scale field study generates predictive model identifying major contributors to colony losses.

    PubMed

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J R; Ballam, Joan M

    2015-04-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  10. Towards a large-scale scalable adaptive heart model using shallow tree meshes

    NASA Astrophysics Data System (ADS)

    Krause, Dorian; Dickopf, Thomas; Potse, Mark; Krause, Rolf

    2015-10-01

    Electrophysiological heart models are sophisticated computational tools that place high demands on the computing hardware due to the high spatial resolution required to capture the steep depolarization front. To address this challenge, we present a novel adaptive scheme for resolving the depolarization front accurately using adaptivity in space. Our adaptive scheme is based on locally structured meshes. These tensor meshes in space are organized in a parallel forest of trees, which allows us to resolve complicated geometries and to realize high variations in the local mesh sizes with a minimal memory footprint in the adaptive scheme. We discuss both a non-conforming mortar element approximation and a conforming finite element space and present an efficient technique for the assembly of the respective stiffness matrices using matrix representations of the inclusion operators into the product space on the so-called shallow tree meshes. We analyzed the parallel performance and scalability for a two-dimensional ventricle slice as well as for a full large-scale heart model. Our results demonstrate that the method has good performance and high accuracy.

  11. Prospective Large-Scale Field Study Generates Predictive Model Identifying Major Contributors to Colony Losses

    PubMed Central

    Kielmanowicz, Merav Gleit; Inberg, Alex; Lerner, Inbar Maayan; Golani, Yael; Brown, Nicholas; Turner, Catherine Louise; Hayes, Gerald J. R.; Ballam, Joan M.

    2015-01-01

    Over the last decade, unusually high losses of colonies have been reported by beekeepers across the USA. Multiple factors such as Varroa destructor, bee viruses, Nosema ceranae, weather, beekeeping practices, nutrition, and pesticides have been shown to contribute to colony losses. Here we describe a large-scale controlled trial, in which different bee pathogens, bee population, and weather conditions across winter were monitored at three locations across the USA. In order to minimize influence of various known contributing factors and their interaction, the hives in the study were not treated with antibiotics or miticides. Additionally, the hives were kept at one location and were not exposed to potential stress factors associated with migration. Our results show that a linear association between load of viruses (DWV or IAPV) in Varroa and bees is present at high Varroa infestation levels (>3 mites per 100 bees). The collection of comprehensive data allowed us to draw a predictive model of colony losses and to show that Varroa destructor, along with bee viruses, mainly DWV replication, contributes to approximately 70% of colony losses. This correlation further supports the claim that insufficient control of the virus-vectoring Varroa mite would result in increased hive loss. The predictive model also indicates that a single factor may not be sufficient to trigger colony losses, whereas a combination of stressors appears to impact hive health. PMID:25875764

  12. Query Large Scale Microarray Compendium Datasets Using a Model-Based Bayesian Approach with Variable Selection

    PubMed Central

    Hu, Ming; Qin, Zhaohui S.

    2009-01-01

    In microarray gene expression data analysis, it is often of interest to identify genes that share similar expression profiles with a particular gene such as a key regulatory protein. Multiple studies have been conducted using various correlation measures to identify co-expressed genes. While these approaches work well for small datasets, the heterogeneity introduced by increased sample size inevitably reduces their sensitivity and specificity. This is because most co-expression relationships do not extend to all experimental conditions. With the rapid increase in the size of microarray datasets, identifying functionally related genes from large and diverse microarray gene expression datasets is a key challenge. We develop a model-based gene expression query algorithm built under the Bayesian model selection framework. It is capable of detecting co-expression profiles under a subset of samples/experimental conditions. In addition, it allows linearly transformed expression patterns to be recognized and is robust against sporadic outliers in the data. Both features are critically important for increasing the power of identifying co-expressed genes in large-scale gene expression datasets. Our simulation studies suggest that this method outperforms existing correlation coefficients or mutual information-based query tools. When we apply this new method to the Escherichia coli microarray compendium data, it identifies a majority of known regulons as well as novel potential target genes of numerous key transcription factors. PMID:19214232

  13. Constraining Large-Scale Solar Magnetic Field Models with Optical Coronal Observations

    NASA Astrophysics Data System (ADS)

    Uritsky, V. M.; Davila, J. M.; Jones, S. I.

    2015-12-01

    Scientific success of the Solar Probe Plus (SPP) and Solar Orbiter (SO) missions will depend to a large extent on the accuracy of the available coronal magnetic field models describing the connectivity of plasma disturbances in the inner heliosphere with their source regions. We argue that ground-based and satellite coronagraph images can provide robust geometric constraints for the next generation of improved coronal magnetic field extrapolation models. In contrast to the previously proposed loop segmentation codes designed for detecting compact closed-field structures above solar active regions, we focus on the large-scale geometry of the open-field coronal regions located at significant radial distances from the solar surface. Details on the new feature detection algorithms will be presented. By applying the developed image processing methodology to high-resolution Mauna Loa Solar Observatory images, we perform an optimized 3D B-line tracing for a full Carrington rotation using the magnetic field extrapolation code presented in a companion talk by S. Jones et al. Tracing results are shown to be in good qualitative agreement with the large-scale configuration of the optical corona. Subsequent phases of the project and the related data products for the SPP and SO missions as well as the supporting global heliospheric simulations will be discussed.

  14. Development of models for the planning of large-scale water-energy systems. Final report

    SciTech Connect

    Matsumoto, J.; Mays, L.W.; Rohlich, G.A.

    1982-01-01

    A mathematical optimization model has been developed to help investigate various alternatives for future water-energy systems. The capacity expansion problem of water-energy systems can be stated as follows: Given the future demands for water, electricity, gas, and coal and the availability of water and coal, determine the location, timing, and size of facilities to satisfy the demands at minimum cost, which is the sum of operating and capacity costs. Specifically, the system consists of four subsystems: water, coal, electricity, and gas systems. Their interactions are expressed explicitly in mathematical terms and equations, whereas most models describe individual constraints but do not state their interactions explicitly. Because of the large scale, decomposition techniques are extensively applied. To do this, an in-depth study was made of the mathematical structure of the water-energy system problem. The Benders decomposition is applied to the capacity expansion problem, decomposing it into a three-level problem: the capacity problem, the production problem, and the distribution problem. These problems are solved by special algorithms: the generalized upper bounding (GUB) algorithm, the simple upper bounding (SUB) algorithm, and the generalized network flow algorithm, respectively.

  15. A mass-flux cumulus parameterization scheme for large-scale models: description and test with observations

    NASA Astrophysics Data System (ADS)

    Wu, Tongwen

    2012-02-01

    A simple mass-flux cumulus parameterization scheme suitable for large-scale atmospheric models is presented. The scheme is based on a bulk-cloud approach and has the following properties: (1) Deep convection is launched at the level of maximum moist static energy above the top of the boundary layer. It is triggered if there is positive convective available potential energy (CAPE) and the relative humidity of the air at the lifting level of the convection cloud is greater than 75%; (2) Convective updrafts for mass, dry static energy, moisture, cloud liquid water and momentum are parameterized by a one-dimensional entrainment/detrainment bulk-cloud model. The lateral entrainment of the environmental air into the unstable ascending parcel before it rises to the lifting condensation level is considered. The entrainment/detrainment amount for the updraft cloud parcel is separately determined according to the increase/decrease of updraft parcel mass with altitude, and the mass change for the adiabatic ascent cloud parcel with altitude is derived from a total energy conservation equation of the whole adiabatic system, which involves the updraft cloud parcel and the environment; (3) The convective downdraft is assumed to be saturated and to originate from the level of minimum environmental saturated equivalent potential temperature within the updraft cloud; (4) The mass flux at the base of the convective cloud is determined by a closure scheme suggested by Zhang (J Geophys Res 107(D14), doi:10.1029/2001JD001005, 2002), in which the increase/decrease of CAPE due to changes of the thermodynamic states in the free troposphere resulting from convection approximately balances the decrease/increase resulting from large-scale processes. Evaluation of the proposed convection scheme is performed by using a single column model (SCM) forced by the Atmospheric Radiation Measurement Program
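
    The trigger criterion in point (1) translates directly into code; the CAPE and humidity inputs below are placeholders for values a host model would supply, and 75% is the threshold quoted above.

```python
# Sketch of the deep-convection trigger described in the scheme: convection is
# launched at the level of maximum moist static energy above the boundary-layer
# top, provided CAPE is positive and the lifting-level relative humidity exceeds
# 75%. Inputs are placeholders a host model would provide.

def deep_convection_triggered(cape_j_per_kg: float,
                              rh_lifting_level: float,
                              rh_threshold: float = 0.75) -> bool:
    return cape_j_per_kg > 0.0 and rh_lifting_level > rh_threshold

print(deep_convection_triggered(cape_j_per_kg=350.0, rh_lifting_level=0.82))  # True
print(deep_convection_triggered(cape_j_per_kg=350.0, rh_lifting_level=0.60))  # False
```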

  16. Improving urban streamflow forecasting using a high-resolution large scale modeling framework

    NASA Astrophysics Data System (ADS)

    Read, Laura; Hogue, Terri; Gochis, David; Salas, Fernando

    2016-04-01

    Urban flood forecasting is a critical component in effective water management, emergency response, regional planning, and disaster mitigation. As populations across the world continue to move to cities (~1.8% growth per year), and studies indicate that significant flood damages are occurring outside the floodplain in urban areas, the ability to model and forecast flow over the urban landscape becomes critical to maintaining infrastructure and society. In this work, we use the Weather Research and Forecasting- Hydrological (WRF-Hydro) modeling framework as a platform for testing improvements to representation of urban land cover, impervious surfaces, and urban infrastructure. The three improvements we evaluate include: updating the land cover to the latest 30-meter National Land Cover Dataset, routing flow over a high-resolution 30-meter grid, and testing a methodology for integrating an urban drainage network into the routing regime. We evaluate performance of these improvements in the WRF-Hydro model for specific flood events in the Denver-Metro Colorado domain, comparing to historic gaged streamflow for retrospective forecasts. Denver-Metro provides an interesting case study as it is a rapidly growing urban/peri-urban region with an active history of flooding events that have caused significant loss of life and property. Considering that the WRF-Hydro model will soon be implemented nationally in the U.S. to provide flow forecasts on the National Hydrography Dataset Plus river reaches - increasing capability from 3,600 forecast points to 2.7 million, we anticipate that this work will support validation of this service in urban areas for operational forecasting. Broadly, this research aims to provide guidance for integrating complex urban infrastructure with a large-scale, high resolution coupled land-surface and distributed hydrologic model.

  17. Inverse transport modeling of volcanic sulfur dioxide emissions using large-scale ensemble simulations

    NASA Astrophysics Data System (ADS)

    Heng, Y.; Hoffmann, L.; Griessbach, S.; Rößler, T.; Stein, O.

    2015-10-01

    An inverse transport modeling approach based on the concepts of sequential importance resampling and parallel computing is presented to reconstruct altitude-resolved time series of volcanic emissions, which often cannot be obtained directly with current measurement techniques. A new inverse modeling and simulation system, which implements the inversion approach with the Lagrangian transport model Massive-Parallel Trajectory Calculations (MPTRAC), is developed to provide reliable transport simulations of volcanic sulfur dioxide (SO2). In the inverse modeling system, MPTRAC is used to perform two types of simulations, i.e., large-scale ensemble simulations for the reconstruction of volcanic emissions and final transport simulations. The transport simulations are based on wind fields of the ERA-Interim meteorological reanalysis of the European Centre for Medium-Range Weather Forecasts. The reconstruction of altitude-dependent SO2 emission time series is also based on Atmospheric Infrared Sounder (AIRS) satellite observations. A case study for the eruption of the Nabro volcano, Eritrea, in June 2011, with complex emission patterns, is considered for method validation. Meteosat Visible and InfraRed Imager (MVIRI) near-real-time imagery data are used to validate the temporal development of the reconstructed emissions. Furthermore, the altitude distributions of the emission time series are compared with top and bottom altitude measurements of aerosol layers obtained by the Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) and the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) satellite instruments. The final transport simulations provide detailed spatial and temporal information on the SO2 distributions of the Nabro eruption. The SO2 column densities from the simulations are in good qualitative agreement with the AIRS observations. Our new inverse modeling and simulation system is expected to become a useful tool to also study other volcanic
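
    Stripped of the transport model, one iteration of the sequential-importance-resampling idea (weight candidate emission profiles by their fit to observed columns, then resample) can be sketched as follows; the observation operator and data are synthetic stand-ins.

```python
import numpy as np

# One sequential-importance-resampling iteration, stripped of the transport model:
# candidate altitude-resolved emission profiles are weighted by how well a fake
# linear "transport + retrieval" operator reproduces satellite SO2 columns, then
# resampled in proportion to those weights.

rng = np.random.default_rng(5)
n_particles, n_levels, n_obs = 200, 15, 30

particles = np.abs(rng.normal(size=(n_particles, n_levels)))   # candidate emission profiles
H = np.abs(rng.normal(size=(n_obs, n_levels)))                 # stand-in observation operator
truth = np.abs(rng.normal(size=n_levels))
obs = H @ truth + rng.normal(scale=0.5, size=n_obs)            # synthetic column observations

# Importance weights from a Gaussian observation-error model.
residuals = obs - particles @ H.T
log_w = -0.5 * np.sum(residuals ** 2, axis=1) / 0.5 ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

# Resample particles in proportion to their weights.
idx = rng.choice(n_particles, size=n_particles, p=w)
particles = particles[idx]

print("effective sample size:", 1.0 / np.sum(w ** 2))
```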

  18. Acoustic characteristics of a large scale wind-tunnel model of a jet flap aircraft

    NASA Technical Reports Server (NTRS)

    Falarski, M. D.; Aiken, T. N.; Aoyagi, K.

    1975-01-01

    The expanding-duct jet flap (EJF) concept is studied to determine STOL performance in turbofan-powered aircraft. The EJF is used to solve the problem of ducting the required volume of air into the wing by providing an expanding cavity between the upper and lower surfaces of the flap. The results are presented of an investigation of the acoustic characteristics of the EJF concept on a large-scale aircraft model powered by JT15D engines. The noise of the EJF is generated by acoustic dipoles, as shown by the sixth-power dependence of the noise on jet velocity. These sources result from the interaction of the flow turbulence with the internal and external surfaces of the flap and with the trailing edges. Increasing the trailing edge jet from 70 percent span to 100 percent span increased the noise 2 dB for the equivalent nozzle area. Blowing at the knee of the flap rather than the trailing edge reduced the noise 5 to 10 dB by displacing the jet from the trailing edge and providing shielding from high-frequency noise. Deflecting the flap and varying the angle of attack modified the directivity of the underwing noise but did not affect the peak noise. A forward speed of 33.5 m/sec (110 ft/sec) reduced the dipole noise by less than 1 dB.

  19. Excavating the Genome: Large-Scale Mutagenesis Screening for the Discovery of New Mouse Models.

    PubMed

    Sundberg, John P; Dadras, Soheil S; Silva, Kathleen A; Kennedy, Victoria E; Murray, Stephen A; Denegre, James M; Schofield, Paul N; King, Lloyd E; Wiles, Michael V; Pratt, C Herbert

    2015-11-01

    Technology now exists for rapid screening of mutated laboratory mice to identify phenotypes associated with specific genetic mutations. Large repositories exist for spontaneous mutants and those induced by chemical mutagenesis, many of which have never been fully studied or comprehensively evaluated. To supplement these resources, a variety of techniques have been consolidated in an international effort to create mutations in all known protein coding genes in the mouse. With targeted embryonic stem cell lines now available for almost all protein coding genes and more recently CRISPR/Cas9 technology, large-scale efforts are underway to create further novel mutant mouse strains and to characterize their phenotypes. However, accurate diagnosis of skin, hair, and nail diseases still relies on careful gross and histological analysis, and while not automated to the level of the physiological phenotyping, histopathology still provides the most direct and accurate diagnosis and correlation with human diseases. As a result of these efforts, many new mouse dermatological disease models are being characterized and developed. PMID:26551941

  20. Large-scale protein-protein interactions detection by integrating big biosensing data with computational model.

    PubMed

    You, Zhu-Hong; Li, Shuai; Gao, Xin; Luo, Xin; Ji, Zhen

    2014-01-01

    Protein-protein interactions (PPIs) are the basis of biological functions, and studying these interactions on a molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, the high-throughput experimental methods for identifying PPIs are both time-consuming and expensive, and high-throughput PPI data are often associated with high false-positive and high false-negative rates. To address these problems, we propose a method for PPI detection by integrating biosensor-based PPI data with a novel computational model. The method is based on an extreme learning machine combined with a novel protein sequence descriptor representation. When applied to a large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy with 84.08% sensitivity at a specificity of 85.53%. We conducted more extensive experiments to compare the proposed method with a state-of-the-art technique, the support vector machine. The results demonstrate that our approach is very promising for detecting new PPIs, and it can be a helpful supplement for biosensor-based PPI data detection. PMID:25215285
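
    A minimal sketch of an extreme learning machine of the kind named above: a fixed random hidden layer followed by a closed-form least-squares readout. The descriptor construction for protein pairs is assumed to have been done already; the arrays below are synthetic stand-ins, not the dataset used in the paper.

        import numpy as np

        class ELM:
            def __init__(self, n_hidden=200, seed=0):
                self.n_hidden = n_hidden
                self.rng = np.random.default_rng(seed)

            def fit(self, X, y):
                self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
                self.b = self.rng.normal(size=self.n_hidden)
                H = np.tanh(X @ self.W + self.b)      # random nonlinear projection
                self.beta = np.linalg.pinv(H) @ y     # closed-form readout weights
                return self

            def predict(self, X):
                H = np.tanh(X @ self.W + self.b)
                return (H @ self.beta > 0.5).astype(int)

        X = np.random.default_rng(1).normal(size=(200, 50))   # stand-in descriptor vectors
        y = (X[:, 0] + X[:, 1] > 0).astype(float)             # stand-in interaction labels
        print((ELM().fit(X, y).predict(X) == y).mean())       # training accuracy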

  1. Large-scale infiltration experiments into unsaturated stratified loess sediments: Monitoring and modeling

    NASA Astrophysics Data System (ADS)

    Gvirtzman, Haim; Shalev, Eyal; Dahan, Ofer; Hatzor, Yossef H.

    2008-01-01

    Two large-scale field experiments were conducted to track water flow through unsaturated stratified loess deposits. In the experiments, a trench was flooded with water, and infiltration was allowed to continue until full saturation of the sediment column, to a depth of 20 m, was achieved. The water penetrated through a sequence of alternating silty-sand and sandy-clay loess deposits. The changes in water content over time were monitored at 28 points beneath the trench, using time domain reflectometry (TDR) probes placed in four boreholes. Detailed records were obtained from a 21-day period of wetting, followed by a 3-month period of drying, and finally a second 14-day period of re-wetting. These processes were simulated using a two-dimensional numerical code that solves the flow equation. The model was calibrated using PEST. The simulations demonstrate that the propagation of the wetting front is hampered by the alternating silty-sand and sandy-clay loess layers. Wetting front propagation is further hampered by the extremely low values of the initial, unsaturated hydraulic conductivity, which increases the water content within the onion-shaped wetted zone up to full saturation. Numerical simulations indicate that above-hydrostatic pressure develops within intermediate saturated layers, enhancing wetting front propagation.

  2. Combining flux and energy balance analysis to model large-scale biochemical networks.

    PubMed

    Heuett, William J; Qian, Hong

    2006-12-01

    Stoichiometric Network Theory is a constraints-based optimization approach for quantitative analysis of the phenotypes of large-scale biochemical networks that avoids the use of detailed kinetics. This approach uses the reaction stoichiometric matrix in conjunction with constraints provided by flux balance and energy balance to guarantee mass-conserved and thermodynamically allowable predictions. However, the flux and energy balance constraints have not been effectively applied simultaneously on the genome scale because optimization under the combined constraints is non-linear. In this paper, a sequential quadratic programming algorithm that solves the non-linear optimization problem is introduced. A simple example and the fermentation system of Saccharomyces cerevisiae are used to illustrate the new method. The algorithm allows the use of non-linear objective functions. As a result, we suggest a novel optimization with respect to the heat dissipation rate of a system. We also emphasize the importance of incorporating interactions between a model network and its surroundings. PMID:17245812
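
    A toy illustration of coupling a stoichiometric steady-state constraint with a non-linear objective through sequential quadratic programming, in the spirit of the approach above; the three-reaction network, the free-energy values, and the heat-dissipation surrogate are made up for illustration.

        import numpy as np
        from scipy.optimize import minimize

        S = np.array([[1, -1,  0],    # metabolite A: produced by v0, consumed by v1
                      [0,  1, -1]])   # metabolite B: produced by v1, consumed by v2
        dG = np.array([-5.0, -2.0, -8.0])        # illustrative reaction free energies

        def heat_dissipation(v):
            return -np.dot(v, dG)                # sum over fluxes of v_i * (-dG_i)

        res = minimize(heat_dissipation, x0=np.ones(3), method="SLSQP",
                       bounds=[(0.1, 10.0)] * 3,                              # irreversible fluxes
                       constraints=[{"type": "eq", "fun": lambda v: S @ v}])  # steady state S v = 0
        print(res.x)   # with this stoichiometry the steady state forces v0 = v1 = v2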

  3. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    SciTech Connect

    Vishnu, Abhinav; Agarwal, Khushbu

    2015-09-08

    In this paper, we propose a work-stealing runtime --- Library for Work Stealing (LibWS) --- using the MPI one-sided model for designing a scalable FP-Growth --- the de facto frequent pattern mining algorithm --- on large-scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.

  4. Repurposing of open data through large scale hydrological modelling - hypeweb.smhi.se

    NASA Astrophysics Data System (ADS)

    Strömbäck, Lena; Andersson, Jafet; Donnelly, Chantal; Gustafsson, David; Isberg, Kristina; Pechlivanidis, Ilias; Strömqvist, Johan; Arheimer, Berit

    2015-04-01

    Hydrological modelling demands large amounts of spatial data, such as soil properties, land use, topography, lakes and reservoirs, ice and snow coverage, water management (e.g. irrigation patterns and regulations), meteorological data and observed water discharge in rivers. By using such data, the hydrological model will in turn provide new data that can be used for new purposes (i.e. re-purposing). This presentation gives an example of how readily available open data from public portals have been re-purposed by using the Hydrological Predictions for the Environment (HYPE) model in a number of large-scale model applications covering numerous subbasins and rivers. HYPE is a dynamic, semi-distributed, process-based, and integrated catchment model. The model output is launched as new open data at the web site www.hypeweb.smhi.se to be used for (i) climate change impact assessments on water resources and dynamics; (ii) the European Water Framework Directive (WFD) for characterization and development of measure programs to improve the ecological status of water bodies; (iii) design variables for infrastructure constructions; (iv) spatial water-resource mapping; (v) operational forecasts (1-10 days and seasonal) on floods and droughts; (vi) input to oceanographic models for operational forecasts and marine status assessments; and (vii) research. The following regional domains have been modelled so far with different resolutions (number of subbasins in brackets): Sweden (37 000), Europe (35 000), Arctic basin (30 000), La Plata River (6 000), Niger River (800), Middle East-North Africa (31 000), and the Indian subcontinent (6 000). The HYPE web site provides several interactive web applications for exploring results from the models. The user can explore an overview of various water variables for historical and future conditions. Moreover, the user can explore and download historical time series of discharge for each basin and explore the performance of the model

  5. A large-scale methane model by incorporating the surface water transport

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoliang; Zhuang, Qianlai; Liu, Yaling; Zhou, Yuyu; Aghakouchak, Amir

    2016-06-01

    The effect of surface water movement on methane emissions is not explicitly considered in most current methane models. In this study, a surface water routing scheme was coupled into our previously developed large-scale methane model. The revised methane model was then used to simulate global methane emissions during 2006-2010. In our simulations, the global mean annual maximum inundation extent is 10.6 ± 1.9 km2 and the methane emission is 297 ± 11 Tg C/yr for the study period. In comparison to the currently used TOPMODEL-based approach, we found that the incorporation of surface water routing leads to a 24.7% increase in the annual maximum inundation extent and a 30.8% increase in the methane emissions at the global scale for the study period. The effect of surface water transport on methane emissions varies in different regions: (1) the largest difference occurs in flat and moist regions, such as Eastern China; (2) high-latitude regions, hot spots in methane emissions, show a small increase in both inundation extent and methane emissions when surface water movement is considered; and (3) in arid regions, the new model yields significantly larger maximum flooded areas and a relatively small increase in methane emissions. Although surface water is a small component of the terrestrial water balance, it plays an important role in determining inundation extent and methane emissions, especially in flat regions. This study indicates that future quantification of methane emissions should consider the effects of surface water transport.

  6. Large-scale collection and annotation of gene models for date palm (Phoenix dactylifera, L.).

    PubMed

    Zhang, Guangyu; Pan, Linlin; Yin, Yuxin; Liu, Wanfei; Huang, Dawei; Zhang, Tongwu; Wang, Lei; Xin, Chengqi; Lin, Qiang; Sun, Gaoyuan; Ba Abdullah, Mohammed M; Zhang, Xiaowei; Hu, Songnian; Al-Mssallem, Ibrahim S; Yu, Jun

    2012-08-01

    The date palm (Phoenix dactylifera L.), famed for its sugar-rich fruits (dates) and cultivated by humans since 4,000 B.C., is an economically important crop in the Middle East, Northern Africa, and increasingly other places where climates are suitable. Despite a long history of human cultivation, the understanding of P. dactylifera genetics and molecular biology is rather limited, hindered by a lack of high-quality basic data from genomics and transcriptomics. Here we report a large-scale effort in generating gene models (assembled expressed sequence tags, or ESTs, mapped to a genome assembly) for P. dactylifera, using the long-read pyrosequencing platform (Roche/454 GS FLX Titanium) at high coverage. We built fourteen cDNA libraries from different P. dactylifera tissues (cultivar Khalas) and acquired 15,778,993 raw sequencing reads (about one million sequencing reads per library), and the pooled sequences were assembled into 67,651 non-redundant contigs and 301,978 singletons. We annotated 52,725 contigs based on the plant databases and 45 contigs based on functional domains by reference to the Pfam database. From the annotated contigs, we assigned GO (Gene Ontology) terms to 36,086 contigs and KEGG pathways to 7,032 contigs. Our comparative analysis showed that 70.6% (47,930), 69.4% (47,089), 68.4% (46,441), and 69.3% (47,048) of the P. dactylifera gene models are shared with rice, sorghum, Arabidopsis, and grapevine, respectively. We also classified our gene models into house-keeping and tissue-specific genes based on their tissue specificity. PMID:22736259

  7. Identification of water quality degradation hotspots in developing countries by applying large scale water quality modelling

    NASA Astrophysics Data System (ADS)

    Malsy, Marcus; Reder, Klara; Flörke, Martina

    2014-05-01

    Decreasing water quality is one of the main global issues posing risks to food security, the economy, and public health, and it is consequently crucial for ensuring environmental sustainability. During the last decades access to clean drinking water has increased, but 2.5 billion people still do not have access to basic sanitation, especially in Africa and parts of Asia. In this context not only connection to a sewage system is of high importance, but also treatment, as an increasing connection rate will lead to higher loadings and therefore higher pressure on water resources. Furthermore, poor people in developing countries use local surface waters for daily activities, e.g. bathing and washing. Water utilization and water sewerage are thus inseparably connected. In this study, large-scale water quality modelling is used to identify hotspots of water pollution and to gain insight into potential environmental impacts, in particular in regions with a low observation density and data gaps in measured water quality parameters. We applied the global water quality model WorldQual to calculate biological oxygen demand (BOD) loadings from point and diffuse sources, as well as in-stream concentrations. The regional focus of this study is on developing countries, i.e. Africa, Asia, and South America, as they are most affected by water pollution. Model runs were conducted for the year 2010 to draw a picture of the recent status of surface water quality and to identify hotspots and main causes of pollution. First results show that hotspots mainly occur in highly agglomerated regions where population density is high. Large urban areas are the initial loading hotspots, and pollution prevention and control become increasingly important as point sources are subject to connection rates and treatment levels. Furthermore, river discharge plays a crucial role due to dilution potential, especially in terms of seasonal variability. Highly varying shares of BOD sources across
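
    A minimal sketch of the loading-and-dilution bookkeeping such a model performs: BOD loads from point and diffuse sources are summed and divided by river discharge to give an in-stream concentration. Decay and routing between basins are ignored here, and all numbers are illustrative.

        def instream_bod_mg_per_l(point_load_kg_day, diffuse_load_kg_day, discharge_m3_s):
            total_load_mg_day = (point_load_kg_day + diffuse_load_kg_day) * 1e6   # kg -> mg
            discharge_l_day = discharge_m3_s * 1000.0 * 86400.0                   # m3/s -> L/day
            return total_load_mg_day / discharge_l_day

        # A large load meeting a low dry-season discharge marks a potential hotspot.
        print(instream_bod_mg_per_l(5000.0, 2000.0, discharge_m3_s=20.0))   # ~4 mg/L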

  8. Large Scale Terrestrial Modeling: A Discussion of Technical and Conceptual Challenges and Solution Approaches

    NASA Astrophysics Data System (ADS)

    Rahman, M.; Aljazzar, T.; Kollet, S.; Maxwell, R.

    2012-04-01

    A number of simulation platforms have been developed to study the spatiotemporal variability of hydrologic responses to global change. Sophisticated terrestrial models demand large data sets and considerable computing resources as they attempt to include detailed physics for all relevant processes involving the feedbacks between subsurface, land surface and atmospheric processes. Data scarcity, error and uncertainty; allocation of computing resources; and post-processing/analysis are some of the well-known challenges, and they have been discussed in previous studies dealing with catchments ranging from plot-scale research (10^2 m^2), to small experimental catchments (0.1-10 km^2), and occasionally medium-sized catchments (10^2-10^3 km^2). However, there is still a lack of knowledge about large-scale simulations of the coupled terrestrial mass and energy balance over long time scales (years to decades). In this study, the interactions between the subsurface, land surface, and atmosphere are simulated in two large-scale (>10^4 km^2) river catchments: the Luanhe catchment in the North Plain, China, and the Rur catchment, Germany. As a simulation platform, a fully coupled model (ParFlow.CLM) that links a three-dimensional variably saturated groundwater flow model (ParFlow) with a land surface model (CLM) is used. The Luanhe and the Rur catchments have areas of 54,000 and 28,224 km^2, respectively, and are being simulated using spatial resolutions on the order of 10^2 to 10^3 m in the horizontal and 10^-2 to 10^-1 m in the vertical direction. ParFlow.CLM was configured over computational domains well beyond the actual watershed boundaries to account for cross-watershed flow. The resulting catchment models consist of up to 10^8 cells, which were implemented over more than 1000 processors, each with 512 MB memory, on JUGENE hosted by the Juelich Supercomputing Centre, Germany. Consequently, large numbers of input and output files were produced for each parameter, such as soil

  9. Uncertainty analysis of channel capacity assumptions in large scale hydraulic modelling

    NASA Astrophysics Data System (ADS)

    Walsh, Alexander; Stroud, Rebecca; Willis, Thomas

    2015-04-01

    Flood modelling on national or even global scales is of great interest to re/insurers, governments and other agencies. Channel bathymetry data are not available over large areas, which is a major limitation at this scale of modelling: acquiring them requires expensive channel surveying, and the majority of remotely sensed data cannot see through water. Furthermore, representing channels as 1D models, or as explicit features in the model domain, is computationally demanding, and so it is often necessary to find ways to reduce computational costs. A more efficient methodology is to make assumptions concerning the capacity of the channel, and then to remove this volume from inflow hydrographs. Previous research has shown that natural channels generally have the capacity to carry the flow of a 1-in-2-year return period event (QMED). This assumption is widely used in large-scale modelling studies across the world. However, channels flowing through high-risk areas, such as urban environments, are often modified to increase their capacity and thus reduce flood risk. Simulated flood outlines are potentially very sensitive to assumptions made regarding these capacities. For example, under the 1-in-2-year assumption, the flooding associated with smaller events might be overestimated, with too much flow being modelled as out of bank. There are requirements to (i) quantify the impact of uncertainty in assumed channel capacity on simulated flooded areas, and (ii) develop more optimal capacity assumptions, depending on specific reach characteristics, so that the effects of channel modification can be better represented in future studies. This work will demonstrate findings from a preliminary uncertainty analysis that addresses the former requirement. A set of benchmark tests, using 2D hydraulic models, was undertaken in which different estimated return-period flows in contrasting catchments are modelled with varying channel capacity parameters. The depth and extent for each benchmark model output were
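
    A sketch of the channel-capacity assumption being tested: flow up to an assumed bankfull capacity (e.g., QMED) stays in the channel and only the excess is routed over the 2D floodplain domain. The hydrograph and capacity values below are illustrative; repeating the calculation while varying the capacity is essentially the sensitivity test described above.

        import numpy as np

        def out_of_bank_flow(hydrograph_m3_s, channel_capacity_m3_s):
            # Everything the channel can carry is removed; the remainder floods.
            return np.clip(np.asarray(hydrograph_m3_s) - channel_capacity_m3_s, 0.0, None)

        inflow = np.array([20.0, 80.0, 150.0, 120.0, 60.0, 25.0])   # inflow hydrograph, m3/s
        for capacity in (60.0, 100.0, 140.0):                       # vary the capacity assumption
            print(capacity, out_of_bank_flow(inflow, capacity).sum())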

  10. Large-scale Models Reveal the Two-component Mechanics of Striated Muscle

    PubMed Central

    Jarosch, Robert

    2008-01-01

    This paper provides a comprehensive explanation of striated muscle mechanics and contraction on the basis of filament rotations. Helical proteins, particularly the coiled-coils of tropomyosin, myosin and α-actinin, shorten their H-bonds cooperatively and produce torque and filament rotations when the Coulombic net-charge repulsion of their highly charged side-chains is diminished by interaction with ions. The classical “two-component model” of active muscle differentiated a “contractile component” which stretches the “series elastic component” during force production. The contractile components are the helically shaped thin filaments of muscle that shorten the sarcomeres by clockwise drilling into the myosin cross-bridges with torque decrease (= force-deficit). Muscle stretch means drawing out the thin filament helices off the cross-bridges under passive counterclockwise rotation with torque increase (= stretch activation). Since each thin filament is anchored by four elastic α-actinin Z-filaments (provided with force-regulating sites for Ca2+ binding), the thin filament rotations change the torsional twist of the four Z-filaments as the “series elastic components”. Large-scale models simulate the changes of structure and force in the Z-band by the different Z-filament twisting stages A, B, C, D, E, F and G. Stage D corresponds to the isometric state. The basic phenomena of muscle physiology, i.e., latency relaxation, the Fenn effect, the force-velocity relation, the length-tension relation, unexplained energy, shortening heat, the Huxley-Simmons phases, etc., are explained and interpreted with the help of the model experiments. PMID:19330099

  11. Metabolic Flux Elucidation for Large-Scale Models Using 13C Labeled Isotopes

    PubMed Central

    Suthers, Patrick F.; Burgard, Anthony P.; Dasika, Madhukar S.; Nowroozi, Farnaz; Van Dien, Stephen; Keasling, Jay D.; Maranas, Costas D.

    2007-01-01

    A key consideration in metabolic engineering is the determination of fluxes of the metabolites within the cell. This determination provides an unambiguous description of metabolism before and/or after engineering interventions. Here, we present a computational framework that combines a constraint-based modeling framework with isotopic label tracing on a large scale. When cells are fed a growth substrate with certain carbon positions labeled with 13C, the distribution of this label in the intracellular metabolites can be calculated based on the known biochemistry of the participating pathways. Most labeling studies focus on skeletal representations of central metabolism and ignore many flux routes that could contribute to the observed isotopic labeling patterns. In contrast, our approach investigates the importance of carrying out isotopic labeling studies using a more comprehensive reaction network consisting of 350 fluxes and 184 metabolites in Escherichia coli, including global metabolite balances on cofactors such as ATP, NADH, and NADPH. The proposed procedure is demonstrated on an E. coli strain engineered to produce amorphadiene, a precursor to the anti-malarial drug artemisinin. The cells were grown in continuous culture on glucose containing 20% [U-13C]glucose; the measurements were made using GC-MS performed on 13 amino acids extracted from the cells. We identify flux distributions for which the calculated labeling patterns agree well with the measurements, pointing to the accuracy of the network reconstruction. Furthermore, we explore the robustness of the flux calculations to variability in the experimental MS measurements, and highlight the key experimental measurements necessary for flux determination. Finally, we discuss the effect of reducing the model, as well as shed light on the customization of the developed computational framework to other systems. PMID:17632026
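
    A schematic of the flux-fitting loop such a framework performs: free fluxes are parameterized in the null space of the stoichiometric matrix (so mass balance holds by construction) and adjusted until simulated labeling patterns match the GC-MS measurements. The toy network, the placeholder labeling simulator, and the measurement values are illustrative assumptions; a real implementation would use isotopomer or EMU balancing.

        import numpy as np
        from scipy.linalg import null_space
        from scipy.optimize import least_squares

        S = np.array([[1, -1, -1],
                      [0,  1, -1]])        # toy stoichiometric matrix (metabolites x fluxes)
        N = null_space(S)                  # columns span all steady-state flux vectors

        def simulate_labeling(v):          # placeholder for the isotopomer/EMU model
            return np.array([0.3 * v[0] + 0.1 * v[2], 0.05 * v[1]])

        measured = np.array([0.8, 0.1])    # illustrative mass-isotopomer measurements

        def residuals(alpha):
            v = N @ alpha                  # any alpha satisfies S v = 0 exactly
            return simulate_labeling(v) - measured

        fit = least_squares(residuals, x0=np.ones(N.shape[1]))
        print(N @ fit.x)                   # fitted flux distribution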

  12. Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.

    1997-01-01

    The following is the final technical report for grant NAGW-3442, 'Observational and Model Studies of Large-Scale Mixing Processes in the Stratosphere'. Research efforts in the first year concentrated on transport and mixing processes in the polar vortices. Three papers on mixing in the Antarctic were published. The first was a numerical modeling study of wavebreaking and mixing and their relationship to the period of observed stratospheric waves (Bowman). The second paper presented evidence from TOMS for wavebreaking in the Antarctic (Bowman and Mangus 1993). The third paper used Lagrangian trajectory calculations from analyzed winds to show that there is very little transport into the Antarctic polar vortex prior to the vortex breakdown (Bowman). Mixing is significantly greater at lower levels. This research helped to confirm theoretical arguments for vortex isolation and data from the Antarctic field experiments that were interpreted as indicating isolation. A Ph.D. student, Steve Dahlberg, used the trajectory approach to investigate mixing and transport in the Arctic. While the Arctic vortex is much more disturbed than the Antarctic, there still appears to be relatively little transport across the vortex boundary at 450 K prior to the vortex breakdown. The primary reason for the absence of an ozone hole in the Arctic is the earlier warming and breakdown of the vortex compared to the Antarctic, not replenishment of ozone by greater transport. Two papers describing these results have appeared (Dahlberg and Bowman; Dahlberg and Bowman). Steve Dahlberg completed his Ph.D. thesis (Dahlberg and Bowman) and is now teaching in the Physics Department at Concordia College. We also prepared an analysis of the QBO in SBUV ozone data (Hollandsworth et al.). A numerical study in collaboration with Dr. Ping Chen investigated mixing by barotropic instability, which is the probable origin of the 4-day wave in the upper stratosphere (Bowman and Chen). The important result from

  13. Large-Scale Model-Based Assessment of Deer-Vehicle Collision Risk

    PubMed Central

    Hothorn, Torsten; Brandl, Roland; Müller, Jörg

    2012-01-01

    Ungulates, in particular the Central European roe deer Capreolus capreolus and the North American white-tailed deer Odocoileus virginianus, are economically and ecologically important. The two species are risk factors for deer–vehicle collisions and, as browsers of palatable trees, have implications for forest regeneration. However, no large-scale management systems for ungulates have been implemented, mainly because of the high effort and costs associated with attempts to estimate population sizes of free-living ungulates living in a complex landscape. Attempts to directly estimate population sizes of deer are problematic owing to poor data quality and lack of spatial representation on larger scales. We used data on 74,000 deer–vehicle collisions observed in 2006 and 2009 in Bavaria, Germany, to model the local risk of deer–vehicle collisions and to investigate the relationship between deer–vehicle collisions and both environmental conditions and browsing intensities. An innovative modelling approach for the number of deer–vehicle collisions, which allows nonlinear environment–deer relationships and assessment of spatial heterogeneity, was the basis for estimating the local risk of collisions for specific road types on the scale of Bavarian municipalities. Based on this risk model, we propose a new “deer–vehicle collision index” for deer management. We show that the risk of deer–vehicle collisions is positively correlated with browsing intensity and with harvest numbers. Overall, our results demonstrate that the number of deer–vehicle collisions can be predicted with high precision on the scale of municipalities. In the densely populated and intensively used landscapes of Central Europe and North America, a model-based risk assessment for deer–vehicle collisions provides a cost-efficient instrument for deer management on the landscape scale. The measures derived from our model provide valuable information for planning road protection and

  14. Large-scale Validation of AMIP II Land-surface Simulations: Preliminary Results for Ten Models

    SciTech Connect

    Phillips, T J; Henderson-Sellers, A; Irannejad, P; McGuffie, K; Zhang, H

    2005-12-01

    This report summarizes initial findings of a large-scale validation of the land-surface simulations of ten atmospheric general circulation models that are entries in phase II of the Atmospheric Model Intercomparison Project (AMIP II). This validation is conducted by AMIP Diagnostic Subproject 12 on Land-surface Processes and Parameterizations, which is focusing on putative relationships between the continental climate simulations and the associated models' land-surface schemes. The selected models typify the diversity of representations of land-surface climate that are currently implemented by the global modeling community. The current dearth of global-scale terrestrial observations makes exacting validation of AMIP II continental simulations impractical. Thus, selected land-surface processes of the models are compared with several alternative validation data sets, which include merged in-situ/satellite products, climate reanalyses, and off-line simulations of land-surface schemes that are driven by observed forcings. The aggregated spatio-temporal differences between each simulated process and a chosen reference data set are then quantified by means of root-mean-square error statistics; the differences among alternative validation data sets are similarly quantified as an estimate of the current observational uncertainty in the selected land-surface process. Examples of these metrics are displayed for land-surface air temperature, precipitation, and the latent and sensible heat fluxes. It is found that the simulations of surface air temperature, when aggregated over all land and seasons, agree most closely with the chosen reference data, while the simulations of precipitation agree least. In the latter case, there also is considerable inter-model scatter in the error statistics, with the reanalysis estimates of precipitation resembling the AMIP II simulations more closely than the chosen reference data. In aggregate, the simulations of land-surface latent and sensible
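
    A sketch of the aggregated error metric described above: an area-weighted root-mean-square difference between a simulated land-surface field and a reference data set, pooled over grid cells and seasons. The arrays are random stand-ins for model output and the reference product.

        import numpy as np

        def area_weighted_rmse(sim, ref, lat_deg):
            # Latitude weighting approximates grid-cell area on a regular lat-lon grid.
            w = np.cos(np.deg2rad(lat_deg))[:, None] * np.ones(sim.shape[1:])
            sq = (sim - ref) ** 2                              # (season, lat, lon)
            return np.sqrt(np.average(sq, weights=np.broadcast_to(w, sq.shape)))

        lat = np.linspace(-89.0, 89.0, 90)
        rng = np.random.default_rng(0)
        ref = rng.normal(size=(4, 90, 180))                    # reference field
        sim = ref + rng.normal(scale=0.5, size=ref.shape)      # "model" with 0.5 error
        print(area_weighted_rmse(sim, ref, lat))               # close to 0.5 by construction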

  15. Large Scale Numerical Modelling to Study the Dispersion of Persistent Toxic Substances Over Europe

    NASA Astrophysics Data System (ADS)

    Aulinger, A.; Petersen, G.

    2003-12-01

    For the past two decades, environmental research at the GKSS Research Centre has been concerned with airborne pollutants with adverse effects on human health. The research was mainly focused on investigating the dispersion and deposition of heavy metals like lead and mercury over Europe by means of numerical modelling frameworks. Lead, in particular, served as a model substance to study the relationship between emissions and human exposure. The major source of airborne lead in Germany was fuel combustion until the 1980s, when its use as a gasoline additive declined due to political decisions. Since then, the concentration of lead in ambient air and the deposition rates have decreased in the same way as the consumption of leaded fuel. These observations could further be related to the decrease of lead concentrations in human blood measured during medical studies in several German cities. Based on the experience with models for heavy metal transport and deposition, we have now started to turn our research focus to organic substances, e.g. PAHs. PAHs have been recognized as significant airborne carcinogens for several decades. However, it is not yet possible to precisely quantify the risk of human exposure to those compounds. Physical and chemical data, known from the literature, describing the partitioning of the compounds between particle and gas phase and their degradation in the gas phase are implemented in a tropospheric chemistry module. In this way, the fate of PAHs in the atmosphere for different particle types and sizes and different meteorological conditions is tested before carrying out large-scale and long-term studies. First model runs have been carried out for benzo(a)pyrene as one of the principal carcinogenic PAHs. Up to now, nearly nothing is known about degradation reactions of particle-bound BaP; thus, they could not be taken into account in the model so far. On the other hand, the proportion of BaP in the gas phase has to be considered at higher ambient

  16. Comparing wave shoaling methods used in large-scale coastal evolution modeling

    NASA Astrophysics Data System (ADS)

    Limber, P. W.; Adams, P. N.; Murray, A.

    2013-12-01

    output where wave height is approximately one-half of the water depth (a standard wave breaking threshold). The goal of this modeling exercise is to understand under what conditions a simple wave model is sufficient for simulating coastline evolution, and when a more complex shoaling routine can improve a coastline model. The Coastline Evolution Model (CEM; Ashton and Murray, 2006) is used to show how different shoaling routines affect modeled coastline behavior. The CEM currently includes the most basic wave shoaling approach to simulate cape and spit formation. We will instead couple it to SWAN, using the insight from the comprehensive wave model (above) to guide its application. This will allow waves transformed over complex bathymetry, such as cape-associated shoals and ridges, to be input to the CEM so that large-scale coastline behavior can be addressed in less idealized environments. Ashton, A., and Murray, A.B., 2006, High-angle wave instability and emergent shoreline shapes: 1. Modeling of sand waves, flying spits, and capes: Journal of Geophysical Research, v. 111, p. F04011, doi:10.1029/2005JF000422.
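
    A simple linear-theory shoaling routine of the "basic" kind referred to above, assuming energy-flux conservation and the breaking criterion quoted (wave height equal to one-half the water depth); the input wave and depths are illustrative and refraction is ignored.

        import numpy as np

        g = 9.81

        def wavenumber(T, d, n_iter=50):
            # Fixed-point solution of the linear dispersion relation w^2 = g k tanh(k d).
            omega = 2.0 * np.pi / T
            k = omega ** 2 / g                    # deep-water first guess
            for _ in range(n_iter):
                k = omega ** 2 / (g * np.tanh(k * d))
            return k

        def group_velocity(T, d):
            k = wavenumber(T, d)
            c = 2.0 * np.pi / (T * k)
            return 0.5 * c * (1.0 + 2.0 * k * d / np.sinh(2.0 * k * d))

        def shoal(H0, T, d0, d):
            # Conserve energy flux H^2 * Cg between depth d0 and depth d.
            return H0 * np.sqrt(group_velocity(T, d0) / group_velocity(T, d))

        H0, T, d0 = 1.5, 10.0, 50.0               # offshore height (m), period (s), depth (m)
        for d in np.arange(d0, 0.5, -0.5):
            H = shoal(H0, T, d0, d)
            if H >= 0.5 * d:                       # breaking threshold used above
                print(f"breaks near d = {d:.1f} m with H = {H:.2f} m")
                break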

  17. Using stochastically-generated subcolumns to represent cloud structure in a large-scale model

    SciTech Connect

    Pincus, R; Hemler, R; Klein, S A

    2005-12-08

    A new method for representing subgrid-scale cloud structure, in which each model column is decomposed into a set of subcolumns, has been introduced into the Geophysical Fluid Dynamics Laboratory's global climate model AM2. Each subcolumn in the decomposition is homogeneous but the ensemble reproduces the initial profiles of cloud properties including cloud fraction, internal variability (if any) in cloud condensate, and arbitrary overlap assumptions that describe vertical correlations. These subcolumns are used in radiation and diagnostic calculations, and have allowed the introduction of more realistic overlap assumptions. This paper describes the impact of these new methods for representing cloud structure in instantaneous calculations and long-term integrations. Shortwave radiation computed using subcolumns and the random overlap assumption differs in the global annual average by more than 4 W/m{sup 2} from the operational radiation scheme in instantaneous calculations; much of this difference is counteracted by a change in the overlap assumption to one in which overlap varies continuously with the separation distance between layers. Internal variability in cloud condensate, diagnosed from the mean condensate amount and cloud fraction, has about the same effect on radiative fluxes as does the ad hoc tuning accounting for this effect in the operational radiation scheme. Long simulations with the new model configuration show little difference from the operational model configuration, while statistical tests indicate that the model does not respond systematically to the sampling noise introduced by the approximate radiative transfer techniques introduced to work with the subcolumns.

  18. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many sciences and also for some social activities. The present paper discusses the characteristics of computing when it becomes "Large Scale" and the current state of the art for some particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) experiments are discussed in this respect; in particular, the Large Hadron Collider (LHC) experiments are analyzed. The Computing Models of the LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC Experiments' testing are discussed.

  19. Using geophysical observations to constrain dynamic models of large-scale continental deformation in Asia

    NASA Astrophysics Data System (ADS)

    Flesch, L. M.; Holt, W. E.; Haines, A. J.

    2003-04-01

    The deformation of continental lithosphere is controlled by a variety of factors, including (1) body forces, (2) basal tractions, (3) boundary forces, and (4) rheology. Obtaining unique solutions that describe the dynamics of continental lithosphere is extremely challenging. Limitations are associated with inadequate observations that can uniquely constrain the dynamics as well as inadequate numerical methods. However, the compilation of space geodetic, seismic, and geologic data over the past 10-15 years has made it possible to make significant strides toward understanding the dynamics of large-scale continental deformation. The first step in making inferences about continental dynamics involves a quantification of the kinematics of active deformation (measurement of the velocity gradient tensor field). We interpolate both GPS velocity vectors and Quaternary strain rates with continuous spline functions (bi-cubic Bessel interpolation) to define a model velocity gradient tensor field solution (strain rates, rotation rates, and relative motions). In our methodology, grid areas can be defined to be small enough that fault zones are narrow and regions between faults (crustal blocks) behave rigidly. Our dynamic models are solutions to equations for a thin sheet, accounting for body forces associated with horizontal density variations and edge forces associated with accommodation of relative plate motion. The formalism can also include basal tractions associated with coupling between the lithosphere and deeper mantle circulation. These dynamic models allow for lateral variations of viscosity and for different power-law rheologies, with power-law exponents ranging from n = 1 to 9. Thus our dynamic models account for possible block-like behavior (high effective viscosity) as well as concentrated strain within shear zones. Kinematic results to date for central Asia show block-like behavior for large regions such as South China, Tarim Basin, Amurian block

  20. Mathematical model of influenza A virus production in large-scale microcarrier culture.

    PubMed

    Möhler, Lars; Flockerzi, Dietrich; Sann, Heiner; Reichl, Udo

    2005-04-01

    A mathematical model that describes the replication of influenza A virus in animal cells in large-scale microcarrier culture is presented. The virus is produced in a two-step process, which begins with the growth of adherent Madin-Darby canine kidney (MDCK) cells. After several washing steps, serum-free virus maintenance medium is added, and the cells are infected with equine influenza virus (A/Equi 2 (H3N8), Newmarket 1/93). A time-delayed model is considered that has three state variables: the number of uninfected cells, infected cells, and free virus particles. It is assumed that uninfected cells adsorb the virus added at the time of infection. The infection rate is proportional to the number of uninfected cells and free virions. Depending on the multiplicity of infection (MOI), not necessarily all cells are infected by this first step leading to the production of free virions. Newly produced viruses can infect the remaining uninfected cells in a chain reaction. To follow the time course of virus replication, infected cells were stained with fluorescent antibodies. Quantitation of influenza viruses by a hemagglutination assay (HA) enabled the estimation of the total number of new virions produced, which is relevant for the production of inactivated influenza vaccines. It takes about 4-6 h before visibly infected cells can be identified on the microcarriers, followed by a strong increase in HA titers after 15-16 h in the medium. The maximum virus yield Vmax was about 1x10^10 virions/mL (2.4 log HA units/100 microL), which corresponds to a burst size of about 18,755 virus particles produced per cell. The model tracks the time course of uninfected and infected cells as well as virus production. It suggests that small variations (<10%) in initial values and specific rates do not have a significant influence on Vmax. The main parameters relevant for the optimization of virus antigen yields are the specific virus replication rate and the specific cell death rate due to infection
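
    A minimal sketch of the three-state structure described above (uninfected cells, infected cells, free virions), written as plain ODEs without the time delay; the rate constants and initial values are illustrative, not the fitted values from the paper.

        from scipy.integrate import solve_ivp

        k_inf = 1e-9      # infection rate per virion per cell per hour (illustrative)
        k_rel = 50.0      # virions released per infected cell per hour
        k_death = 0.05    # death rate of infected cells per hour
        k_deg = 0.1       # virion degradation rate per hour

        def rhs(t, y):
            U, I, V = y
            infection = k_inf * U * V
            return [-infection,                         # uninfected cells
                    infection - k_death * I,            # infected cells
                    k_rel * I - infection - k_deg * V]  # free virions

        y0 = [1e9, 0.0, 2.5e7]                          # cells, infected cells, seed virions
        sol = solve_ivp(rhs, (0.0, 60.0), y0, max_step=0.1)
        print(sol.y[:, -1])                             # U, I, V after 60 h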

  1. Self-consistency tests of large-scale dynamics parameterizations for single-column modeling

    SciTech Connect

    Edman, Jacob P.; Romps, David M.

    2015-03-18

    Large-scale dynamics parameterizations are tested numerically in cloud-resolving simulations, including a new version of the weak-pressure-gradient approximation (WPG) introduced by Edman and Romps (2014), the weak-temperature-gradient approximation (WTG), and a prior implementation of WPG. We perform a series of self-consistency tests with each large-scale dynamics parameterization, in which we compare the result of a cloud-resolving simulation coupled to WTG or WPG with an otherwise identical simulation with prescribed large-scale convergence. In self-consistency tests based on radiative-convective equilibrium (RCE; i.e., no large-scale convergence), we find that simulations either weakly coupled or strongly coupled to either WPG or WTG are self-consistent, but WPG-coupled simulations exhibit a nonmonotonic behavior as the strength of the coupling to WPG is varied. We also perform self-consistency tests based on observed forcings from two observational campaigns: the Tropical Warm Pool International Cloud Experiment (TWP-ICE) and the ARM Southern Great Plains (SGP) Summer 1995 IOP. In these tests, we show that the new version of WPG improves upon prior versions of WPG by eliminating a potentially troublesome gravity-wave resonance.
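
    A schematic of the weak-temperature-gradient coupling tested above: the large-scale vertical velocity is diagnosed so that adiabatic cooling relaxes the simulated free-tropospheric temperature toward a target profile over a timescale tau. The profiles and the relaxation timescale below are placeholders, not the configuration used in the paper.

        import numpy as np

        def wtg_vertical_velocity(T_model, T_target, theta, z, tau=3.0 * 3600.0):
            # Choose w so that w * d(theta)/dz ~ (T_model - T_target) / tau.
            dtheta_dz = np.maximum(np.gradient(theta, z), 1e-4)   # guard against weak stability
            return (T_model - T_target) / (tau * dtheta_dz)

        z = np.linspace(1000.0, 15000.0, 30)                      # free troposphere, m
        theta = 300.0 + 4e-3 * z                                  # idealized potential temperature
        T_target = 300.0 - 6.5e-3 * z                             # target temperature profile
        T_model = T_target + 0.5 * np.sin(np.pi * z / z[-1])      # a 0.5 K warm anomaly
        print(wtg_vertical_velocity(T_model, T_target, theta, z)[:3])   # upward w where warm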

  2. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth

    ERIC Educational Resources Information Center

    Steyvers, Mark; Tenenbaum, Joshua B.

    2005-01-01

    We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…
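
    The small-world diagnostics mentioned above (strong clustering together with short path lengths at sparse connectivity) can be computed with standard graph tooling; the sketch below uses a synthetic Watts-Strogatz graph and a size-matched random graph as stand-ins for the word-association data.

        import networkx as nx

        def mean_path_length(graph):
            # Restrict to the largest connected component so the metric is defined.
            giant = graph.subgraph(max(nx.connected_components(graph), key=len))
            return nx.average_shortest_path_length(giant)

        G = nx.connected_watts_strogatz_graph(1000, 10, 0.1, seed=0)    # stand-in network
        R = nx.gnm_random_graph(1000, G.number_of_edges(), seed=0)      # random reference

        print("clustering:", nx.average_clustering(G), nx.average_clustering(R))
        print("path length:", mean_path_length(G), mean_path_length(R))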

  4. Modeling Booklet Effects for Nonequivalent Group Designs in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Hecht, Martin; Weirich, Sebastian; Siegle, Thilo; Frey, Andreas

    2015-01-01

    Multiple matrix designs are commonly used in large-scale assessments to distribute test items to students. These designs comprise several booklets, each containing a subset of the complete item pool. Besides reducing the test burden of individual students, using various booklets allows aligning the difficulty of the presented items to the assumed…

  5. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    NASA Astrophysics Data System (ADS)

    Güntner, Andreas

    2002-07-01

    the framework of an integrated model which contains modules that do not work on the basis of natural spatial units. The target units mentioned above are disaggregated in Wasa into smaller modelling units within a new multi-scale, hierarchical approach. The landscape units defined in this scheme capture in particular the effect of structured variability of terrain, soil and vegetation characteristics along toposequences on soil moisture and runoff generation. Lateral hydrological processes at the hillslope scale, such as reinfiltration of surface runoff, which are of particular importance in semi-arid environments, can thus also be represented within the large-scale model in simplified form. Depending on the resolution of available data, small-scale variability is not represented explicitly with geographic reference in Wasa, but by the distribution of sub-scale units and by statistical transition frequencies for lateral fluxes between these units. Further model components of Wasa which respect specific features of semi-arid hydrology are: (1) A two-layer model for evapotranspiration comprises energy transfer at the soil surface (including soil evaporation), which is of importance in view of the mainly sparse vegetation cover. Additionally, vegetation parameters are differentiated in space and time depending on the occurrence of the rainy season. (2) The infiltration module represents in particular infiltration-excess surface runoff as the dominant runoff component. (3) For the aggregate description of the water balance of reservoirs that cannot be represented explicitly in the model, a storage approach respecting different reservoir size classes and their interaction via the river network is applied. (4) A model for the quantification of water withdrawal by water use in different sectors is coupled to Wasa. (5) A cascade model for the temporal disaggregation of precipitation time series, adapted to the specific characteristics of tropical convective rainfall, is applied
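
    A sketch of the infiltration-excess (Hortonian) runoff idea in component (2), including the downslope reinfiltration that motivates representing lateral hillslope fluxes; the intensities and capacities are illustrative.

        def hortonian_runoff(rain_mm_h, infiltration_capacity_mm_h):
            # Runoff is generated only when rainfall intensity exceeds what the soil can absorb.
            return max(0.0, rain_mm_h - infiltration_capacity_mm_h)

        upslope_runoff = hortonian_runoff(rain_mm_h=40.0, infiltration_capacity_mm_h=15.0)  # 25 mm/h
        # Part of the upslope runoff can reinfiltrate on a more permeable downslope unit
        # before reaching the channel, which is why lateral toposequence fluxes matter.
        downslope_runoff = max(0.0, 40.0 + upslope_runoff - 25.0)                           # 40 mm/h
        print(upslope_runoff, downslope_runoff)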

  6. Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling (Final Report)

    SciTech Connect

    William J. Schroeder

    2011-11-13

    This report contains the comprehensive summary of the work performed on the SBIR Phase II, Collaborative Visualization for Large-Scale Accelerator Electromagnetic Modeling, at Kitware Inc. in collaboration with the Stanford Linear Accelerator Center (SLAC). The goal of the work was to develop collaborative visualization tools for large-scale data as illustrated in the figure below. The solutions we proposed address the typical problems faced by geographically- and organizationally-separated research and engineering teams, who produce large data (either through simulation or experimental measurement) and wish to work together to analyze and understand their data. Because the data is large, we expect that it cannot be easily transported to each team member's work site, and that the visualization server must reside near the data. Further, we also expect that each work site has heterogeneous resources: some with large computing clients, tiled (or large) displays and high bandwidth; other sites as simple as a team member on a laptop computer. Our solution is based on the open-source, widely used ParaView large-data visualization application. We extended this tool to support multiple collaborative clients who may locally visualize data, and then periodically rejoin and synchronize with the group to discuss their findings. Options for managing session control, adding annotation, and defining the visualization pipeline, among others, were incorporated. We also developed and deployed a Web visualization framework based on ParaView that enables the Web browser to act as a participating client in a collaborative session. The ParaView Web Visualization framework leverages various Web technologies including WebGL, JavaScript, Java and Flash to enable interactive 3D visualization over the web using ParaView as the visualization server. We steered the development of this technology by teaming with the SLAC National Accelerator Laboratory. SLAC has a computationally-intensive problem

  7. Application of large-scale, multi-resolution watershed modeling framework using the Hydrologic and Water Quality System (HAWQS)

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In recent years, large-scale watershed modeling has been implemented broadly in the field of water resources planning and management. Complex hydrological, sediment, and nutrient processes can be simulated by sophisticated watershed simulation models for important issues such as water resources all...

  8. Modeling the MJO rain rates using parameterized large scale dynamics: vertical structure, radiation, and horizontal advection of dry air

    NASA Astrophysics Data System (ADS)

    Wang, S.; Sobel, A. H.; Nie, J.

    2015-12-01

    Two Madden-Julian Oscillation (MJO) events were observed during October and November 2011 in the equatorial Indian Ocean during the DYNAMO field campaign. Precipitation rates and large-scale vertical motion profiles derived from the DYNAMO northern sounding array are simulated in a small-domain cloud-resolving model using parameterized large-scale dynamics. Three parameterizations of large-scale dynamics --- the conventional weak temperature gradient (WTG) approximation, vertical-mode-based spectral WTG (SWTG), and damped gravity wave coupling (DGW) --- are employed. The target temperature profiles and radiative heating rates are taken from a control simulation in which the large-scale vertical motion is imposed (rather than directly from observations), and the model itself is significantly modified from that used in previous work. These methodological changes lead to significant improvement in the results. Simulations using all three methods, with imposed time-dependent radiation and horizontal moisture advection, capture the time variations in precipitation associated with the two MJO events well. The three methods produce significant differences in the large-scale vertical motion profile, however. WTG produces the most top-heavy and noisy profiles, while DGW's is smoother with a peak in midlevels. SWTG produces a smooth profile, somewhere between WTG and DGW, and in better agreement with observations than either of the others. Numerical experiments without horizontal advection of moisture suggest that this process significantly reduces the precipitation and suppresses the top-heaviness of the large-scale vertical motion during the MJO active phases, while experiments in which the effects of clouds on radiation are disabled indicate that cloud-radiative interaction significantly amplifies the MJO. Experiments in which interactive radiation is used produce poorer agreement with observations than those with imposed time-varying radiative heating. Our results highlight the

  9. Parameterizing mesoscale and large-scale ice clouds in general circulation models

    NASA Technical Reports Server (NTRS)

    Donner, Leo J.

    1990-01-01

    The paper discusses GCM parameterizations for two types of ice clouds: (1) ice clouds formed by large-scale lifting, often of limited vertical extent but usually of large-scale horizontal extent; and (2) ice clouds formed as anvils in convective systems, often of moderate vertical extent but of mesoscale size horizontally. It is shown that the former type of clouds can be parameterized with reference to an equilibrium between ice generation by deposition from vapor, and ice removal by crystal settling. The same mechanisms operate in the mesoscale clouds, but the ice content in these cases is considered to be more closely linked to the moisture supplied to the anvil by cumulus towers. It is shown that a GCM can simulate widespread ice clouds of both types.

  10. Large-Scale Modeling of the Entry of Solar Wind Ions into the Magnetosphere

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Richard, R. L.; Escoubet, C. P.; Pitout, F.

    2012-12-01

    Ion observations made by multiple spacecraft in the mid-altitude cusps have revealed the complexity of the entry of the solar wind plasma at the magnetospheric boundary. In particular, ion energy-latitude dispersions measured by the Cluster spacecraft often indicate the formation of large-scale structures in ion precipitation. We have carried out large-scale simulations of the entry of ions at the dayside magnetopause. Our study is based on using the time-dependent electric and magnetic fields predicted by three-dimensional global MHD simulations to compute the trajectories of large samples of solar wind ions launched upstream of the bow shock for different solar wind conditions. Particle information collected in the simulations is then analyzed to determine the relation between the structures observed in the cusp and ion injection processes at the magnetospheric boundary. We discuss the results of the study in the context of entry and acceleration processes at the dayside magnetopause.
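
    A minimal Boris-type particle push of the kind commonly used to trace ions through time-dependent E and B fields; the field functions below are placeholders standing in for interpolation into global MHD output, and the field values and time step are illustrative assumptions rather than the setup used in the study.

        import numpy as np

        Q_OVER_M = 9.58e7                        # proton charge-to-mass ratio, C/kg

        def E_field(x, t):                       # placeholder: interpolate MHD E at (x, t)
            return np.array([0.0, 1e-3, 0.0])    # V/m

        def B_field(x, t):                       # placeholder: interpolate MHD B at (x, t)
            return np.array([0.0, 0.0, 2e-8])    # T

        def boris_step(x, v, t, dt):
            qmdt2 = Q_OVER_M * dt / 2.0
            v_minus = v + qmdt2 * E_field(x, t)              # first half electric kick
            tvec = qmdt2 * B_field(x, t)
            svec = 2.0 * tvec / (1.0 + np.dot(tvec, tvec))
            v_prime = v_minus + np.cross(v_minus, tvec)      # magnetic rotation
            v_plus = v_minus + np.cross(v_prime, svec)
            v_new = v_plus + qmdt2 * E_field(x, t)           # second half electric kick
            return x + v_new * dt, v_new

        x, v = np.zeros(3), np.array([4e5, 0.0, 0.0])        # a 400 km/s solar wind proton
        for n in range(1000):
            x, v = boris_step(x, v, t=n * 0.05, dt=0.05)
        print(x, np.linalg.norm(v))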