Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2016-03-18
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Stabilization of Large Generalized Lotka-Volterra Foodwebs By Evolutionary Feedback
NASA Astrophysics Data System (ADS)
Ackland, G. J.; Gallagher, I. D.
2004-10-01
Conventional ecological models show that complexity destabilizes foodwebs, suggesting that foodwebs should have neither large numbers of species nor a large number of interactions. However, in nature the opposite appears to be the case. Here we show that if the interactions between species are allowed to evolve within a generalized Lotka-Volterra model, such stabilizing feedbacks and weak interactions emerge automatically. Moreover, we show that trophic levels also emerge spontaneously from the evolutionary approach, and the efficiency of the unperturbed ecosystem increases with time. The key to stability in large foodwebs appears to arise not from complexity per se but from evolution at the level of the ecosystem, which favors stabilizing (negative) feedbacks.
Stabilization of large generalized Lotka-Volterra foodwebs by evolutionary feedback.
Ackland, G J; Gallagher, I D
2004-10-08
Conventional ecological models show that complexity destabilizes foodwebs, suggesting that foodwebs should have neither large numbers of species nor a large number of interactions. However, in nature the opposite appears to be the case. Here we show that if the interactions between species are allowed to evolve within a generalized Lotka-Volterra model, such stabilizing feedbacks and weak interactions emerge automatically. Moreover, we show that trophic levels also emerge spontaneously from the evolutionary approach, and the efficiency of the unperturbed ecosystem increases with time. The key to stability in large foodwebs appears to arise not from complexity per se but from evolution at the level of the ecosystem, which favors stabilizing (negative) feedbacks.
NASA Astrophysics Data System (ADS)
Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.
2010-07-01
Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of eddy flux datasets for improving model simulations and reducing the variability among multi-model outputs of terrestrial biosphere models in Japan. Using nine terrestrial biosphere models (Support Vector Machine-based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two sets of simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default settings showed large deviations from observations, with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Nevertheless, site history, analysis of changes in model structure, and a more objective model calibration procedure should be included in further analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. We performed a systematic comparison of the WTG and DGW methods in different models, and a systematic comparison of the behavior of those models using the WTG method and the DGW method. The sensitivity to the SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either the WTG or DGW method show a similar relationship between mean precipitation rate and column-relative humidity, while SCMs exhibit a much wider range of behaviors. DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy compared to those produced by the WTG simulations. Lastly, these large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
Attitude Estimation for Large Field-of-View Sensors
NASA Technical Reports Server (NTRS)
Cheng, Yang; Crassidis, John L.; Markley, F. Landis
2005-01-01
The QUEST measurement noise model for unit vector observations has been widely used in spacecraft attitude estimation for more than twenty years. It was derived under the approximation that the noise lies in the tangent plane of the respective unit vector and is axially symmetrically distributed about the vector. For large field-of-view sensors, however, this approximation may be poor, especially when the measurement falls near the edge of the field of view. In this paper a new measurement noise model is derived based on a realistic noise distribution in the focal-plane of a large field-of-view sensor, which shows significant differences from the QUEST model for unit vector observations far away from the sensor boresight. An extended Kalman filter for attitude estimation is then designed with the new measurement noise model. Simulation results show that with the new measurement model the extended Kalman filter achieves better estimation performance using large field-of-view sensor observations.
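For reference, the QUEST noise assumption described above can be stated compactly. This is the standard form from the attitude-estimation literature; the notation here is ours, not the paper's:

```latex
% QUEST measurement model for a unit-vector observation:
% the noise lies in the tangent plane of the true direction b
% and is axially symmetric about it.
\tilde{\mathbf{b}} = \mathbf{b} + \Delta\mathbf{b}, \qquad
E\!\left[\Delta\mathbf{b}\,\Delta\mathbf{b}^{T}\right]
  = \sigma^{2}\left(I_{3} - \mathbf{b}\mathbf{b}^{T}\right)
```

The rank-deficient, tangent-plane covariance makes explicit why the approximation degrades near the edge of a large field of view, where the mapping from focal-plane noise to the unit sphere is no longer axially symmetric.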
Measuring the topology of large-scale structure in the universe
NASA Technical Reports Server (NTRS)
Gott, J. Richard, III
1988-01-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
Measuring the topology of large-scale structure in the universe
NASA Astrophysics Data System (ADS)
Gott, J. Richard, III
1988-11-01
An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by walls of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.
Large scale anomalies in the microwave background: causation and correlation.
Aslanyan, Grigor; Easther, Richard
2013-12-27
Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example, we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra, and that the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.
Rarefaction and blood pressure in systemic and pulmonary arteries
OLUFSEN, METTE S.; HILL, N. A.; VAUGHAN, GARETH D. A.; SAINSBURY, CHRISTOPHER; JOHNSON, MARTIN
2012-01-01
The effects of vascular rarefaction (the loss of small arteries) on the circulation of blood are studied using a multiscale mathematical model that can predict blood flow and pressure in the systemic and pulmonary arteries. We augmented a model originally developed for the systemic arteries (Olufsen et al. 1998, 1999, 2000, 2004) to (a) predict flow and pressure in the pulmonary arteries, and (b) predict pressure propagation along the small arteries in the vascular beds. The systemic and pulmonary arteries are modelled as separate, bifurcating trees of compliant and tapering vessels. Each tree is divided into two parts representing the 'large' and 'small' arteries. Blood flow and pressure in the large arteries are predicted using a nonlinear cross-sectional area-averaged model for a Newtonian fluid in an elastic tube, with inflow obtained from magnetic resonance measurements. Each terminal vessel within the network of the large arteries is coupled to a vascular bed of small 'resistance' arteries, which are modelled as asymmetric structured trees with specified area and asymmetry ratios between the parent and daughter arteries. For the systemic circulation, each structured tree represents a specific vascular bed corresponding to major organs and limbs. For the pulmonary circulation, there are four vascular beds supplied by the interlobar arteries. This manuscript presents the first theoretical calculations of the propagation of the pressure and flow waves along systemic and pulmonary large and small arteries. Results for all networks were in agreement with published observations. Two studies were done with this model. First, we showed how rarefaction can be modelled by pruning the tree of arteries in the microvascular system, accomplished by modulating the parameters used for designing the structured trees. Results showed that rarefaction leads to increased mean and decreased pulse pressure in the large arteries. Second, we investigated the impact of decreasing vessel compliance in both large and small arteries. Results showed that the effects of decreased compliance in the large arteries far outweigh the effects observed when decreasing the compliance of the small arteries. We further showed that a decrease of compliance in the large arteries results in pressure increases consistent with observations of isolated systolic hypertension, as occurs in ageing. PMID:22962497
Signals of doubly-charged Higgsinos at the CERN Large Hadron Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demir, Durmus A.; Frank, Mariana
2008-08-01
Several supersymmetric models with extended gauge structures, motivated by either grand unification or by neutrino mass generation, predict light doubly-charged Higgsinos. In this work we study the production and decay of doubly-charged Higgsinos present in left-right supersymmetric models, and show that they invariably lead to novel collider signals not found in the minimal supersymmetric model or in any of its extensions motivated by the μ problem, or even in extra dimensional theories. We investigate their distinctive signatures at the Large Hadron Collider in both pair- and single-production modes, and show that they are powerful tools in determining the underlying model via the measurements at the Large Hadron Collider experiments.
Large-scale DCMs for resting-state fMRI.
Razi, Adeel; Seghier, Mohamed L; Zhou, Yuan; McColgan, Peter; Zeidman, Peter; Park, Hae-Jeong; Sporns, Olaf; Rees, Geraint; Friston, Karl J
2017-01-01
This paper considers the identification of large directed graphs for resting-state brain networks based on biophysical models of distributed neuronal activity, that is, effective connectivity. This identification can be contrasted with functional connectivity methods based on symmetric correlations that are ubiquitous in resting-state functional MRI (fMRI). We use spectral dynamic causal modeling (DCM) to invert large graphs comprising dozens of nodes or regions. The ensuing graphs are directed and weighted, hence providing a neurobiologically plausible characterization of connectivity in terms of excitatory and inhibitory coupling. Furthermore, we show that the use of Bayesian model reduction to discover the most likely sparse graph (or model) from a parent (e.g., fully connected) graph eschews the arbitrary thresholding often applied to large symmetric (functional connectivity) graphs. Using empirical fMRI data, we show that spectral DCM furnishes connectivity estimates on large graphs that correlate strongly with the estimates provided by stochastic DCM. Furthermore, we increase the efficiency of model inversion using functional connectivity modes to place prior constraints on effective connectivity. In other words, we use a small number of modes to finesse the potentially redundant parameterization of large DCMs. We show that spectral DCM (with functional connectivity priors) is ideally suited for directed graph theoretic analyses of resting-state fMRI. We envision that directed graphs will prove useful in understanding the psychopathology and pathophysiology of neurodegenerative and neurodevelopmental disorders. We will demonstrate the utility of large directed graphs in clinical populations in subsequent reports, using the procedures described in this paper.
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
NASA Astrophysics Data System (ADS)
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
Exploring natural supersymmetry at the LHC
NASA Astrophysics Data System (ADS)
Nasir, Fariha
This dissertation demonstrates how a variety of supersymmetric grand unified theories can resolve the little hierarchy problem in the minimal supersymmetric standard model and also explain the observed deviation in the anomalous magnetic moment of the muon. The origin of the little hierarchy problem lies in the sensitive manner in which the Z boson mass depends on parameters that can be much larger than its mass. Large values of these parameters imply that a large fine tuning is required to obtain the correct Z boson mass. With large fine tuning, supersymmetry appears unnatural, which is why models that attempt to resolve this problem are referred to as natural SUSY models. We show that a possible way to exhibit natural supersymmetry is to assume non-universal gauginos in a class of supersymmetric grand unified models. We further show that considering non-universal gauginos in a class of supersymmetric models can help explain the apparent anomaly in the magnetic moment of the muon.
Simplified large African carnivore density estimators from track indices.
Winterbach, Christiaan W; Ferreira, Sam M; Funston, Paul J; Somers, Michael J
2016-01-01
The range, population size and trend of large carnivores are important parameters to assess their status globally and to plan conservation strategies. One can use linear models to assess population size and trends of large carnivores from track-based surveys on suitable substrates. The conventional linear model with intercept may not pass through zero, yet may fit the data better than a linear model through the origin. We assess whether a linear regression through the origin is more appropriate than a linear regression with intercept to model large African carnivore densities and track indices. We fitted simple linear regressions with intercept and through the origin, and evaluated the models using the confidence interval for β in the linear model y = αx + β, the Standard Error of Estimate, the Mean Square Residual and the Akaike Information Criterion. The Lion on Clay and Low Density on Sand models with intercept were not significant (P > 0.05). The other four models with intercept and the six models through the origin were all significant (P < 0.05). The models using linear regression with intercept all included zero in the confidence interval for β, and the null hypothesis that β = 0 could not be rejected. All models showed that the linear model through the origin provided a better fit than the linear model with intercept, as indicated by the Standard Error of Estimate and Mean Square Residuals. The Akaike Information Criterion showed that linear models through the origin were better and that none of the linear models with intercept had substantial support. Our results showed that linear regression through the origin is justified over the more typical linear regression with intercept for all models we tested. A general model can be used to estimate large carnivore densities from track densities across species and study areas. The formula observed track density = 3.26 × carnivore density can be used to estimate densities of large African carnivores using track counts on sandy substrates in areas where carnivore densities are 0.27 carnivores/100 km² or higher. To improve the current models, we need independent data to validate the models and data to test for a non-linear relationship between track indices and true density at low densities.
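The closing formula invites a one-liner. The sketch below (ours, not the authors' code) shows the regression-through-origin fit the abstract argues for, plus the inversion of the reported relationship; the data values are hypothetical:

```python
# Fit y = alpha * x (no intercept) by least squares and invert the
# reported relationship "track density = 3.26 x carnivore density".
import numpy as np

def fit_through_origin(x, y):
    """Least-squares slope for the no-intercept model y = alpha * x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sum(x * y) / np.sum(x * x)

# Hypothetical survey data: carnivore density (animals/100 km^2)
# versus observed track density on sandy substrate.
carnivore_density = np.array([0.3, 0.8, 1.5, 2.4, 4.0])
track_density = np.array([1.1, 2.5, 5.0, 7.6, 13.2])

alpha = fit_through_origin(carnivore_density, track_density)
print(f"fitted slope: {alpha:.2f} (the paper reports 3.26)")

# Density estimate from a new track survey:
print(f"estimated density: {6.5 / 3.26:.2f} animals/100 km^2")
```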
NASA Astrophysics Data System (ADS)
Sanchez-Gomez, Emilia; Somot, S.; Déqué, M.
2009-10-01
One of the main concerns in regional climate modeling is to what extent limited-area regional climate models (RCMs) reproduce the large-scale atmospheric conditions of their driving general circulation model (GCM). In this work we investigate the ability of a multi-model ensemble of regional climate simulations to reproduce the large-scale weather regimes of the driving conditions. The ensemble consists of a set of 13 RCMs on a European domain, driven at their lateral boundaries by the ERA40 reanalysis for the time period 1961-2000. Two sets of experiments have been completed with horizontal resolutions of 50 and 25 km, respectively. The spectral nudging technique has been applied to one of the models within the ensemble. The RCMs reproduce the weather regime behavior reasonably well in terms of composite pattern, mean frequency of occurrence and persistence. The models also simulate well the long-term trends and the inter-annual variability of the frequency of occurrence. However, there is a non-negligible spread among the models, which is stronger in summer than in winter. This spread arises for two reasons: (1) the models differ, and (2) each RCM produces its own internal variability. As far as the day-to-day weather regime history is concerned, the ensemble shows large discrepancies. At the daily time scale, the model spread also has a seasonal dependence, being stronger in summer than in winter. Results also show that the spectral nudging technique improves the model performance in reproducing the large-scale circulation of the driving field. In addition, the impact of increasing the number of grid points has been addressed by comparing the 25 and 50 km experiments. We show that the horizontal resolution does not significantly affect the model performance for the large-scale circulation.
NASA Astrophysics Data System (ADS)
Fucugauchi, J. U.; Ortiz-Aleman, C.; Martin, R.
2017-12-01
Large complex craters are characterized by central uplifts that represent large-scale differential movement of deep basement from the transient cavity. Here we investigate the central sector of the large multiring Chicxulub crater, which has been surveyed by an array of marine, aerial and land-borne geophysical methods. Despite high contrasts in physical properties, contrasting results for the central uplift have been obtained, with seismic reflection surveys showing lack of resolution in the central zone. We develop an integrated seismic and gravity model for the main structural elements, imaging the central basement uplift and melt and breccia units. The 3-D velocity model built from interpolation of seismic data is validated using perfectly matched layer seismic acoustic wave propagation modeling, optimized at grazing incidence using a shift in the frequency domain. Modeling shows a significant lack of illumination in the central sector, masking the presence of the central uplift. Seismic energy remains trapped in an upper low-velocity zone corresponding to the sedimentary infill, melt/breccias and surrounding faulted blocks. After converting seismic velocities into a volume of density values, we use massively parallel forward gravity modeling to constrain the size and shape of the central uplift, which lies at 4.5 km depth, providing a high-resolution image of the crater structure. The Bouguer anomaly and the gravity response of the modeled units show asymmetries corresponding to the crater structure and the distribution of post-impact carbonates, breccias, melt and target sediments.
Info-gap robust-satisficing model of foraging behavior: do foragers optimize or satisfice?
Carmel, Yohay; Ben-Haim, Yakov
2005-11-01
In this note we compare two mathematical models of foraging that reflect two competing theories of animal behavior: optimizing and robust satisficing. The optimal-foraging model is based on the marginal value theorem (MVT). The robust-satisficing model developed here is an application of info-gap decision theory, and it relates to the same circumstances described by the MVT. We show how these two alternatives translate into specific predictions that in some cases are quite disparate. We test these alternative predictions against available data collected in numerous field studies with a large number of species from diverse taxonomic groups. We show that a large majority of studies appear to support the robust-satisficing model and reject the optimal-foraging model.
Laboratory and modeling studies of chemistry in dense molecular clouds
NASA Technical Reports Server (NTRS)
Huntress, W. T., Jr.; Prasad, S. S.; Mitchell, G. F.
1980-01-01
A chemical evolutionary model with a large number of species and a large chemical library is used to examine the principal chemical processes in interstellar clouds. Simple chemical equilibrium arguments show the potential for synthesis of very complex organic species by ion-molecule radiative association reactions.
NASA Astrophysics Data System (ADS)
Noor-E-Alam, Md.; Doucette, John
2015-08-01
Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
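As an illustration of the kind of formulation involved, here is a toy fixed-cost covering model on a grid (our sketch in PuLP; the paper's actual ILP and its decomposition heuristic are not given in the abstract, so every name and cost below is hypothetical):

```python
import pulp

grid = [(i, j) for i in range(5) for j in range(5)]  # demand cells
candidates = grid                                    # candidate sites
fixed_cost = {c: 1.0 + 0.1 * (c[0] + c[1]) for c in candidates}

def covers(site, cell, radius=1):
    # A site covers its Chebyshev neighborhood on the grid.
    return max(abs(site[0] - cell[0]), abs(site[1] - cell[1])) <= radius

prob = pulp.LpProblem("grid_location", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", candidates, cat="Binary")

# Minimize the total fixed cost of opened sites...
prob += pulp.lpSum(fixed_cost[c] * open_site[c] for c in candidates)

# ...subject to every demand cell being covered at least once.
for cell in grid:
    prob += pulp.lpSum(open_site[c] for c in candidates
                       if covers(c, cell)) >= 1

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("opened sites:",
      [c for c in candidates if open_site[c].value() > 0.5])
```

A decomposition heuristic of the flavor the abstract describes would partition the grid into blocks, solve a model like this per block, and stitch the opened sites together, trading a little optimality for much shorter runtimes.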
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T.; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P.; Rötter, Reimund P.; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, with the bias varying widely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations. PMID:27055028
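For readers unfamiliar with the metric, one common form of the relative mean absolute error is sketched below; the paper's exact normalization may differ:

```latex
% rMAE comparing yields simulated from aggregated inputs,
% \hat{y}_i, against high-resolution reference yields y_i:
\mathrm{rMAE}
  = \frac{\tfrac{1}{n}\sum_{i=1}^{n}\bigl|\hat{y}_i - y_i\bigr|}
         {\tfrac{1}{n}\sum_{i=1}^{n} y_i}
```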
Impact of Spatial Soil and Climate Input Data Aggregation on Regional Yield Simulations.
Hoffmann, Holger; Zhao, Gang; Asseng, Senthold; Bindi, Marco; Biernath, Christian; Constantin, Julie; Coucheney, Elsa; Dechow, Rene; Doro, Luca; Eckersten, Henrik; Gaiser, Thomas; Grosz, Balázs; Heinlein, Florian; Kassie, Belay T; Kersebaum, Kurt-Christian; Klein, Christian; Kuhnert, Matthias; Lewan, Elisabet; Moriondo, Marco; Nendel, Claas; Priesack, Eckart; Raynal, Helene; Roggero, Pier P; Rötter, Reimund P; Siebert, Stefan; Specka, Xenia; Tao, Fulu; Teixeira, Edmar; Trombi, Giacomo; Wallach, Daniel; Weihermüller, Lutz; Yeluripati, Jagadeesh; Ewert, Frank
2016-01-01
We show the error in water-limited yields simulated by crop models that is associated with spatially aggregated soil and climate input data. Crop simulations at large scales (regional, national, continental) frequently use input data of low resolution. Therefore, climate and soil data are often generated via averaging and sampling by area majority. This may bias simulated yields at large scales, with the bias varying widely across models. Thus, we evaluated the error associated with spatially aggregated soil and climate data for 14 crop models. Yields of winter wheat and silage maize were simulated under water-limited production conditions. We calculated this error from crop yields simulated at spatial resolutions from 1 to 100 km for the state of North Rhine-Westphalia, Germany. Most models showed yields biased by <15% when aggregating only soil data. The relative mean absolute error (rMAE) of most models using aggregated soil data was in the range of, or larger than, the inter-annual or inter-model variability in yields. This error increased further when both climate and soil data were aggregated. Distinct error patterns indicate that the rMAE may be estimated from a few soil variables. Illustrating the range of these aggregation effects across models, this study is a first step towards an ex-ante assessment of aggregation errors in large-scale simulations.
Large-Signal Lyapunov-Based Stability Analysis of DC/AC Inverters and Inverter-Based Microgrids
NASA Astrophysics Data System (ADS)
Kabalan, Mahmoud
Microgrid stability studies have been largely based on small-signal linearization techniques. However, the validity and magnitude of the linearization domain is limited to small perturbations. Thus, there is a need to examine microgrids with large-signal nonlinear techniques to fully understand and examine their stability. Large-signal stability analysis can be accomplished by Lyapunov-based mathematical methods, which estimate the domain of asymptotic stability of the studied system. A survey of Lyapunov-based large-signal stability studies showed that few such studies have been completed on either individual systems (dc/ac inverters, dc/dc rectifiers, etc.) or microgrids. The research presented in this thesis addresses the large-signal stability of droop-controlled dc/ac inverters and inverter-based microgrids. Dc/ac power electronic inverters are what make microgrids technically feasible. Thus, as a prelude to examining the stability of microgrids, the research presented in Chapter 3 analyzes the stability of inverters. First, the 13th-order large-signal nonlinear model of a droop-controlled dc/ac inverter connected to an infinite bus is presented. The singular perturbation method is used to decompose the nonlinear model into 11th-, 9th-, 7th-, 5th-, 3rd- and 1st-order models, each of which ignores certain control or structural components of the full-order model. The aim of the study is to understand the accuracy and validity of the reduced-order models in replicating the performance of the full-order nonlinear model. The performance of each model is studied in three different areas: time domain simulations, Lyapunov's indirect method and domain of attraction estimation. The work aims to present the best model to use in each of the three domains of study. Results show that certain reduced-order models are capable of accurately reproducing the performance of the full-order model, while others can be used to gain insights into those three areas of study. This will enable future studies to save computational effort and produce the most accurate results according to the needs of the study being performed. Moreover, the effect of grid (line) impedance on the accuracy of droop control is explored using the 5th-order model. Simulation results show that traditional droop control is valid up to an R/X line impedance ratio of 2. Furthermore, the 3rd-order nonlinear model improves the currently available inverter-infinite bus models by accounting for grid impedance, active power-frequency droop and reactive power-voltage droop. Results show the 3rd-order model's ability to account for voltage and reactive power changes during a transient event. Finally, the large-signal Lyapunov-based stability analysis is completed for a 3-bus microgrid system (made up of 2 inverters and 1 linear load). The thesis provides a systematic state space large-signal nonlinear mathematical modeling method for inverter-based microgrids, in which the inverters include the dc-side dynamics associated with dc sources. The mathematical model is then used to estimate the domain of asymptotic stability of the 3-bus microgrid, which serves as a case study to highlight the design and optimization capability of a large-signal-based approach. The study explores the effect of system component sizing, load transients and generation variations on the asymptotic stability of the microgrid.
Essentially, this advancement gives microgrid designers and engineers the ability to shape the domain of asymptotic stability according to performance requirements. Most importantly, this research coupled the domain of asymptotic stability of the ac microgrid with that of the dc-side voltage source. Time domain simulations were used to corroborate the nonlinear mathematical analysis results.
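For context, the droop control referred to throughout takes the standard proportional form (textbook relations; the notation is ours, not the thesis's):

```latex
% Active power-frequency and reactive power-voltage droop laws
% for an inverter sharing load without communication:
\omega = \omega^{*} - m_{p}\,(P - P^{*}), \qquad
V = V^{*} - n_{q}\,(Q - Q^{*})
```

The nonlinear state models discussed above combine these laws with filter, line and dc-source dynamics, which is what pushes the model order as high as 13.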
Accuracy Assessment of Recent Global Ocean Tide Models around Antarctica
NASA Astrophysics Data System (ADS)
Lei, J.; Li, F.; Zhang, S.; Ke, H.; Zhang, Q.; Li, W.
2017-09-01
Due to the coverage limitation of T/P-series altimeters, the lack of bathymetric data under large ice shelves, and inaccurate definitions of coastlines and grounding lines, the accuracy of ocean tide models around Antarctica is poorer than in the deep oceans. Using tidal measurements from tide gauges, gravimetric data and GPS records, the accuracy of seven state-of-the-art global ocean tide models (DTU10, EOT11a, GOT4.8, FES2012, FES2014, HAMTIDE12, TPXO8) is assessed, as well as the most widely used conventional model, FES2004. Four regions (the Antarctic Peninsula, the Amery ice shelf, the Filchner-Ronne ice shelf and the Ross ice shelf) are reported separately. The standard deviations of the eight main constituents between the selected models are large in polar regions, especially under the large ice shelves, suggesting that the uncertainty in these regions remains large. Comparisons with in situ tidal measurements show that the most accurate model is TPXO8, and all models perform worst in the Weddell Sea and Filchner-Ronne ice shelf regions. The accuracy of tidal predictions around Antarctica is gradually improving.
Wang, Heng; Sang, Yuanjun
2017-10-01
Modeling the mechanical behavior of human soft biological tissues is a key issue for a large number of medical applications, such as surgery simulation, surgery planning and diagnosis. To develop a biomechanical model of human soft tissues under large deformation for surgery simulation, the adaptive quasi-linear viscoelastic (AQLV) model was proposed and applied to human forearm soft tissues via indentation tests. An incremental ramp-and-hold test was carried out to calibrate the model parameters. To verify the predictive ability of the AQLV model, an incremental ramp-and-hold test, a single large-amplitude ramp-and-hold test and a sinusoidal cyclic test at large strain amplitude were adopted in this study. Results showed that the AQLV model could predict the test results under all three loading conditions. It is concluded that the AQLV model is feasible for describing the nonlinear viscoelastic properties of in vivo soft tissues under large deformation, and it is a promising candidate soft tissue model for surgery simulation or diagnosis software.
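For orientation, the quasi-linear viscoelastic structure that the AQLV model adapts is Fung's classical form (standard QLV; the paper's adaptive modifications are not reproduced here):

```latex
% Stress as a convolution of a reduced relaxation function G with
% the rate of the instantaneous elastic response sigma^e(epsilon):
\sigma(t) = \int_{0}^{t} G(t-\tau)\,
            \frac{\partial \sigma^{e}(\varepsilon)}{\partial \varepsilon}\,
            \frac{d\varepsilon}{d\tau}\, d\tau
```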
Sub-Scale Analysis of New Large Aircraft Pool Fire-Suppression
2016-01-01
…discrete ordinates radiation and a single-step Khan and Greeves soot model provided radiation and soot interaction. Agent spray dynamics were… Notable differences observed showed a modeled increase in the mockup surface heat-up rate as well as a modeled decrease in the rate of soot production…
[Recoverable figure annotations: suppression started at 488 K; large deviation between sensors due to sensor alignment challenges and asymmetric fuel surface ignition; otherwise unremarkable.]
Mishra, Manoj K; Beaty, Claude A; Lesniak, Wojciech G; Kambhampati, Siva P; Zhang, Fan; Wilson, Mary A; Blue, Mary E; Troncoso, Juan C; Kannan, Sujatha; Johnston, Michael V; Baumgartner, William A; Kannan, Rangaramanujam M
2014-03-25
Treatment of brain injury following circulatory arrest is a challenging health issue with no viable therapeutic options. Based on studies in a clinically relevant large animal (canine) model of hypothermic circulatory arrest (HCA)-induced brain injury, neuroinflammation and excitotoxicity have been identified as key players in mediating the brain injury after HCA. Therapy with large doses of valproic acid (VPA) showed some neuroprotection but was associated with adverse side effects. For the first time in a large animal model, we explored whether systemically administered polyamidoamine (PAMAM) dendrimers could be effective in reaching target cells in the brain and delivering therapeutics. We showed that, upon systemic administration, hydroxyl-terminated PAMAM dendrimers are taken up in the brain of injured animals and selectively localize in the injured neurons and microglia in the brain. The biodistribution in other major organs was similar to that seen in small animal models. We studied systemic dendrimer-drug combination therapy with two clinically approved drugs, N-acetyl cysteine (NAC) (attenuating neuroinflammation) and valproic acid (attenuating excitotoxicity), building on positive outcomes in a rabbit model of perinatal brain injury. We prepared and characterized dendrimer-NAC (D-NAC) and dendrimer-VPA (D-VPA) conjugates in multigram quantities, with a glutathione-sensitive linker to enable fast intracellular release. In preliminary efficacy studies, combination therapy with D-NAC and D-VPA showed promise in this large animal model, producing 24 h neurological deficit score improvements comparable to high dose combination therapy with VPA and NAC, or free VPA, but at one-tenth the dose, while significantly reducing the adverse side effects. Since adverse side effects of drugs are exaggerated in HCA, the reduced side effects with dendrimer conjugates and suggestions of neuroprotection offer promise for these nanoscale drug delivery systems.
2015-01-01
Treatment of brain injury following circulatory arrest is a challenging health issue with no viable therapeutic options. Based on studies in a clinically relevant large animal (canine) model of hypothermic circulatory arrest (HCA)-induced brain injury, neuroinflammation and excitotoxicity have been identified as key players in mediating the brain injury after HCA. Therapy with large doses of valproic acid (VPA) showed some neuroprotection but was associated with adverse side effects. For the first time in a large animal model, we explored whether systemically administered polyamidoamine (PAMAM) dendrimers could be effective in reaching target cells in the brain and delivering therapeutics. We showed that, upon systemic administration, hydroxyl-terminated PAMAM dendrimers are taken up in the brain of injured animals and selectively localize in the injured neurons and microglia in the brain. The biodistribution in other major organs was similar to that seen in small animal models. We studied systemic dendrimer–drug combination therapy with two clinically approved drugs, N-acetyl cysteine (NAC) (attenuating neuroinflammation) and valproic acid (attenuating excitotoxicity), building on positive outcomes in a rabbit model of perinatal brain injury. We prepared and characterized dendrimer-NAC (D-NAC) and dendrimer-VPA (D-VPA) conjugates in multigram quantities, with a glutathione-sensitive linker to enable fast intracellular release. In preliminary efficacy studies, combination therapy with D-NAC and D-VPA showed promise in this large animal model, producing 24 h neurological deficit score improvements comparable to high dose combination therapy with VPA and NAC, or free VPA, but at one-tenth the dose, while significantly reducing the adverse side effects. Since adverse side effects of drugs are exaggerated in HCA, the reduced side effects with dendrimer conjugates and suggestions of neuroprotection offer promise for these nanoscale drug delivery systems. PMID:24499315
The Stochastic Multi-strain Dengue Model: Analysis of the Dynamics
NASA Astrophysics Data System (ADS)
Aguiar, Maíra; Stollenwerk, Nico; Kooi, Bob W.
2011-09-01
Dengue dynamics is well known to be particularly complex, with large fluctuations of disease incidence. An epidemic multi-strain model motivated by dengue fever epidemiology shows deterministic chaos in wide parameter regions. The addition of seasonal forcing, mimicking the vectorial dynamics, and of a low import of infected individuals, which is realistic for infectious disease epidemics, produces complex dynamics and qualitatively good agreement between empirical DHF monitoring data and the model simulations. The addition of noise can explain the fluctuations observed in the empirical data, and for large enough population size the stochastic system is well described by its deterministic skeleton.
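A minimal single-strain skeleton (ours) shows two of the ingredients named above, seasonal forcing and a small infection import; the paper's multi-strain structure and cross-immunity are omitted, and all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta0, eta, gamma, mu, rho):
    s, i = y  # susceptible and infected fractions
    beta = beta0 * (1.0 + eta * np.cos(2.0 * np.pi * t))  # seasonal forcing
    ds = mu - beta * s * (i + rho) - mu * s               # rho: import term
    di = beta * s * (i + rho) - (gamma + mu) * i
    return [ds, di]

t = np.linspace(0.0, 100.0, 10001)  # time in years
sol = odeint(sir, [0.2, 1e-4], t,
             args=(520.0, 0.35, 52.0, 1.0 / 65.0, 1e-6))
print("final (s, i):", sol[-1])
```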
A semiparametric graphical modelling approach for large-scale equity selection.
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
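The pipeline the abstract alludes to can be sketched as follows (our reading: a Kendall-tau rank correlation, its sine transform, which is consistent under elliptical copulas, then sparse graph estimation; the function names, glasso penalty and simulated data are our illustrative choices, not the paper's settings):

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

def rank_correlation(returns):
    """Kendall-tau-based correlation estimate, robust to heavy tails."""
    p = returns.shape[1]
    s = np.eye(p)
    for j in range(p):
        for k in range(j + 1, p):
            tau, _ = kendalltau(returns[:, j], returns[:, k])
            s[j, k] = s[k, j] = np.sin(0.5 * np.pi * tau)
    return s

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=(500, 8))  # fake heavy-tailed returns

s = rank_correlation(returns)
_, precision = graphical_lasso(s, alpha=0.2)   # sparse dependency graph

# Off-diagonal nonzeros of the precision matrix define the graph;
# stocks with few edges are the "most independent" candidates.
edges = (np.abs(precision) > 1e-6).sum() - precision.shape[0]
print("estimated number of edges:", int(edges) // 2)
```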
Real-time simulation of large-scale floods
NASA Astrophysics Data System (ADS)
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of real-time water conditions, real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared with those of MIKE21 show the strong performance of the proposed model.
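To make "Godunov-type finite volume" concrete, here is a 1-D dam-break sketch (ours) with piecewise-constant states, a Rusanov interface flux and a conservative update; the paper's 2-D unstructured mesh, wet/dry front treatment and source terms are all omitted:

```python
import numpy as np

g = 9.81

def flux(h, hu):
    """Physical flux of the 1-D shallow water equations."""
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov(hl, hul, hr, hur):
    """Rusanov (local Lax-Friedrichs) numerical flux at an interface."""
    ul, ur = hul / hl, hur / hr
    smax = max(abs(ul) + np.sqrt(g * hl), abs(ur) + np.sqrt(g * hr))
    return (0.5 * (flux(hl, hul) + flux(hr, hur))
            - 0.5 * smax * np.array([hr - hl, hur - hul]))

n, dx = 200, 0.05
h = np.where(np.arange(n) * dx < 5.0, 2.0, 1.0)  # dam-break depths
hu = np.zeros(n)

t, t_end = 0.0, 0.5
while t < t_end:
    dt = 0.4 * dx / np.max(np.abs(hu / h) + np.sqrt(g * h))  # CFL step
    f = np.array([rusanov(h[i], hu[i], h[i + 1], hu[i + 1])
                  for i in range(n - 1)])
    h[1:-1] -= dt / dx * (f[1:, 0] - f[:-1, 0])
    hu[1:-1] -= dt / dx * (f[1:, 1] - f[:-1, 1])
    t += dt

print(f"depth range after t = {t_end}: [{h.min():.3f}, {h.max():.3f}]")
```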
Patankar, Ravindra
2003-10-01
Statistical fatigue life of a ductile alloy specimen is traditionally divided into three stages: crack nucleation, small crack growth, and large crack growth. Crack nucleation and small crack growth show wide variation and hence a large spread on the cycles-versus-crack-length graph; large crack growth shows comparatively less variation. Therefore, different models are fitted to the different stages of the fatigue evolution process, treating the stages as distinct phenomena. With such independent models, it is impossible to predict one phenomenon from information about the other. Experimentally, crack length measurements are easier to carry out for large cracks than for nucleating and small cracks, so it is far easier to collect statistical data for large crack growth than to undertake the painstaking effort required for crack nucleation and small crack growth. This article presents a fracture mechanics-based stochastic model of fatigue crack growth in ductile alloys that are commonly encountered in mechanical structures and machine components. The model was validated for crack propagation by Ray (1998) against various statistical fatigue data. Based on the model, this article proposes a technique to predict statistical information on fatigue crack nucleation and small crack growth using the statistical properties of large crack growth under constant-amplitude stress excitation, which can be obtained via experiments.
ERIC Educational Resources Information Center
Patz, Richard J.; Junker, Brian W.; Johnson, Matthew S.; Mariano, Louis T.
2002-01-01
Discusses the hierarchical rater model (HRM) of R. Patz (1996) and shows how it can be used to scale examinees and items, model aspects of consensus among raters, and model individual rater severity and consistency effects. Also shows how the HRM fits into the generalizability theory framework. Compares the HRM to the conventional item response…
A Full-Maxwell Approach for Large-Angle Polar Wander of Viscoelastic Bodies
NASA Astrophysics Data System (ADS)
Hu, H.; van der Wal, W.; Vermeersen, L. L. A.
2017-12-01
For large-angle long-term true polar wander (TPW) there are currently two types of nonlinear methods which give approximated solutions: those assuming that the rotational axis coincides with the axis of maximum moment of inertia (MoI), which simplifies the Liouville equation, and those based on the quasi-fluid approximation, which approximates the Love number. Recent studies show that both can have a significant bias for certain models. Therefore, we still lack an (semi)analytical method which can give exact solutions for large-angle TPW for a model based on Maxwell rheology. This paper provides a method which analytically solves the MoI equation and adopts an extended iterative procedure introduced in Hu et al. (2017) to obtain a time-dependent solution. The new method can be used to simulate the effect of a remnant bulge or models in different hydrostatic states. We show the effect of the viscosity of the lithosphere on long-term, large-angle TPW. We also simulate models without hydrostatic equilibrium and show that the choice of the initial stress-free shape for the elastic (or highly viscous) lithosphere of a given model is as important as its thickness for obtaining a correct TPW behavior. The initial shape of the lithosphere can be an alternative explanation to mantle convection for the difference between the observed and model predicted flattening. Finally, it is concluded that based on the quasi-fluid approximation, TPW speed on Earth and Mars is underestimated, while the speed of the rotational axis approaching the end position on Venus is overestimated.
Decentralized model reference adaptive control of large flexible structures
NASA Technical Reports Server (NTRS)
Lee, Fu-Ming; Fong, I-Kong; Lin, Yu-Hwan
1988-01-01
A decentralized model reference adaptive control (DMRAC) method is developed for large flexible structures (LFS). The development follows that of a centralized model reference adaptive control for LFS that has been shown to be feasible. The proposed method is illustrated using a simply supported beam with collocated actuators and sensors. Results show that the DMRAC can achieve either output regulation or output tracking with adequate convergence, provided the reference model inputs and their time derivatives are integrable, bounded, and approach zero as t approaches infinity.
A biophysical model for defibrillation of cardiac tissue.
Keener, J P; Panfilov, A V
1996-01-01
We propose a new model for electrical activity of cardiac tissue that incorporates the effects of cellular microstructure. As such, this model provides insight into the mechanism of direct stimulation and defibrillation of cardiac tissue after injection of large currents. To illustrate the usefulness of the model, numerical simulations are used to show the difference between successful and unsuccessful defibrillation of large pieces of tissue. PMID:8874007
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel
2017-01-01
We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach to constructing subgrid-scale models, based on the idea that it is desirable for subgrid-scale models to be consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.
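As a point of reference, the class of models being constrained has the generic eddy-viscosity form below; the paper's new model chooses the model function based on the vortex stretching magnitude, whose exact expression is not reproduced here:

```latex
% Deviatoric subgrid stress closed with an eddy viscosity nu_e
% built from the filtered velocity gradient:
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\,\tau_{kk}
  = -2\,\nu_{e}\,\bar{S}_{ij}, \qquad
\nu_{e} = (C\Delta)^{2}\, f\!\left(\nabla \bar{u}\right)
```

In this notation, the constraints discussed above become conditions on f: its symmetries, its near-wall scaling, and the sign of the dissipation it produces.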
Current fluctuations in periodically driven systems
NASA Astrophysics Data System (ADS)
Barato, Andre C.; Chetrite, Raphael
2018-05-01
Small nonequilibrium systems driven by an external periodic protocol can be described by Markov processes with time-periodic transition rates. In general, current fluctuations in such small systems are large and may play a crucial role. We develop a theoretical formalism to evaluate the rate of such large deviations in periodically driven systems. We show that the scaled cumulant generating function that characterizes current fluctuations is given by a maximal Floquet exponent. Comparing deterministic protocols with stochastic protocols, we show that, with respect to large deviations, systems driven by a stochastic protocol with an infinitely large number of jumps are equivalent to systems driven by deterministic protocols. Our results are illustrated with three case studies: a two-state model for a heat engine, a three-state model for a molecular pump, and a biased random walk with a time-periodic affinity.
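The central object here is standard: for a time-integrated current J_t, the scaled cumulant generating function is

```latex
% SCGF whose value the paper identifies with a maximal Floquet
% exponent for time-periodic transition rates:
\lambda(s) = \lim_{t \to \infty} \frac{1}{t}
             \ln \left\langle e^{\,s\,J_{t}} \right\rangle
```

and its Legendre transform gives the large-deviation rate function of the current.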
Paleoclimate diagnostics: consistent large-scale temperature responses in warm and cold climates
NASA Astrophysics Data System (ADS)
Izumi, Kenji; Bartlein, Patrick; Harrison, Sandy
2015-04-01
The CMIP5 model simulations of the large-scale temperature responses to increased radiative forcing include enhanced land-ocean contrast, stronger response at higher latitudes than in the tropics, and differential responses in warm and cool season climates to uniform forcing. Here we show that these patterns are also characteristic of CMIP5 model simulations of past climates. The differences in the responses over land as opposed to over the ocean, between high and low latitudes, and between summer and winter are remarkably consistent (proportional and nearly linear) across simulations of both cold and warm climates. Similar patterns also appear in historical observations and paleoclimatic reconstructions, implying that such responses are characteristic features of the climate system and not simple model artifacts, thereby increasing our confidence in the ability of climate models to correctly simulate different climatic states. We also show the possibility that a small set of common mechanisms controls these large-scale responses of the climate system across multiple states.
Spatial Distribution of Large Cloud Drops
NASA Technical Reports Server (NTRS)
Marshak, A.; Knyazikhin, Y.; Larsen, M.; Wiscombe, W.
2004-01-01
By analyzing aircraft measurements of individual drop sizes in clouds, we have shown in a companion paper (Knyazikhin et al., 2004) that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and therefore a Poisson distribution of cloud drops, these models show strong drop clustering, the more so the larger the drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, explaining how rain can form so fast. It also helps explain why remotely sensed cloud drop size is generally biased and why clouds absorb more sunlight than conventional radiative transfer models predict.
Tropospheric transport differences between models using the same large-scale meteorological fields
NASA Astrophysics Data System (ADS)
Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.
2017-01-01
The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.
A semiparametric graphical modelling approach for large-scale equity selection
Liu, Han; Mulvey, John; Zhao, Tianqi
2016-01-01
We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
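A minimal sketch of the rank-based route from returns to a conditional-independence graph described above, assuming the elliptical-copula identity (the sine transform of Kendall's tau) and a graphical-lasso fit; the tuning, stability inference, and selection rule are simplified illustrations, not the authors' exact algorithm:

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.covariance import graphical_lasso

def rank_based_correlation(returns):
    """Latent Gaussian correlation from Kendall's tau via the
    elliptical-copula identity rho = sin(pi/2 * tau)."""
    n = returns.shape[1]
    rho = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            tau, _ = kendalltau(returns[:, i], returns[:, j])
            rho[i, j] = rho[j, i] = np.sin(0.5 * np.pi * tau)
    return rho

# Hypothetical T x N daily-return matrix.
rng = np.random.default_rng(0)
returns = rng.standard_normal((500, 10))

rho = rank_based_correlation(returns)
# Sparse inverse correlation (precision) matrix via the graphical lasso;
# off-diagonal zeros indicate conditional independence between stocks.
cov, prec = graphical_lasso(rho, alpha=0.1)

# Stocks with the fewest graph neighbours are the "most independent"
# candidates for the rebalancing portfolio.
degree = (np.abs(prec) > 1e-8).sum(axis=1) - 1
selected = np.argsort(degree)[:5]
```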
Modeling the Fear Effect in Predator-Prey Interactions with Adaptive Avoidance of Predators.
Wang, Xiaoying; Zou, Xingfu
2017-06-01
Recent field experiments on vertebrates showed that the mere presence of a predator would cause a dramatic change of prey demography. Fear of predators increases the survival probability of prey, but leads to a cost of prey reproduction. Based on the experimental findings, we propose a predator-prey model with the cost of fear and adaptive avoidance of predators. Mathematical analyses show that the fear effect can interplay with maturation delay between juvenile prey and adult prey in determining the long-term population dynamics. A positive equilibrium may lose stability with an intermediate value of delay and regain stability if the delay is large. Numerical simulations show that both strong adaptation of adult prey and the large cost of fear have destabilizing effect while large population of predators has a stabilizing effect on the predator-prey interactions. Numerical simulations also imply that adult prey demonstrates stronger anti-predator behaviors if the population of predators is larger and shows weaker anti-predator behaviors if the cost of fear is larger.
Modeling space-time correlations of velocity fluctuations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2018-07-01
An analytical model for the streamwise velocity space-time correlations in turbulent flows is derived and applied to the special case of velocity fluctuations in large wind farms. The model is based on the Kraichnan-Tennekes random sweeping hypothesis, capturing the decorrelation in time while including a mean wind velocity in the streamwise direction. In the resulting model, the streamwise velocity space-time correlation is expressed as a convolution of the pure space correlation with an analytical temporal decorrelation kernel. Hence, the spatio-temporal structure of velocity fluctuations in wind farms can be derived from the spatial correlations only. We then explore the applicability of the model to predict spatio-temporal correlations in turbulent flows in wind farms. Comparisons of the model with data from a large eddy simulation of flow in a large, spatially periodic wind farm are performed, where needed model parameters such as spatial and temporal integral scales and spatial correlations are determined from the large eddy simulation. Good agreement is obtained between the model and large eddy simulation data showing that spatial data may be used to model the full temporal structure of fluctuations in wind farms.
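Schematically, the model's central relation expresses the space-time correlation as the spatial correlation convolved with a temporal decorrelation kernel (notation assumed for illustration):

```latex
R(\mathbf{r}, \tau)
 \equiv \big\langle u'(\mathbf{x}, t)\, u'(\mathbf{x}+\mathbf{r}, t+\tau) \big\rangle
 = \int R\big(\mathbf{r} - \mathbf{U}\tau - \boldsymbol{\xi},\, 0\big)\,
        G(\boldsymbol{\xi}; \tau)\, \mathrm{d}\boldsymbol{\xi},
```

where U is the mean streamwise advection velocity and G is a Gaussian sweeping kernel whose width grows with the time lag (schematically, G proportional to exp[-|ξ|²/(2 u_s² τ²)] with u_s a sweeping velocity scale set by the large-scale turbulence). Given measured or simulated spatial correlations, the full temporal structure then follows by convolution.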
A Discrete Constraint for Entropy Conservation and Sound Waves in Cloud-Resolving Modeling
NASA Technical Reports Server (NTRS)
Zeng, Xi-Ping; Tao, Wei-Kuo; Simpson, Joanne
2003-01-01
Ideal cloud-resolving models accumulate little error. When their domain is large enough to accommodate synoptic large-scale circulations, they can be used for the simulation of the interaction between convective clouds and the large-scale circulations. This paper sets up a framework for such models, using moist entropy as a prognostic variable and employing conservative numerical schemes. The models possess no accumulative errors of thermodynamic variables when they comply with a discrete constraint on entropy conservation and sound waves. In other words, the discrete constraint is related to the correct representation of the large-scale convergence and advection of moist entropy. Since air density is involved in entropy conservation and sound waves, the challenge is how to compute sound waves efficiently under the constraint. To address the challenge, a compensation method is introduced on the basis of a reference isothermal atmosphere whose governing equations are solved analytically. Stability analysis and numerical experiments show that the method allows the models to integrate efficiently with a large time step.
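For context, the continuous conservation law behind the discrete constraint can be written as (a schematic form; the source terms and notation are assumptions, not the paper's exact equations):

```latex
\frac{\partial (\rho s)}{\partial t} + \nabla\cdot(\rho s\,\mathbf{u})
  = \rho\,\dot{s}_{\mathrm{src}},
```

where s is the moist entropy per unit mass and the right-hand side collects microphysical and radiative sources. A conservative (flux-form) discretization makes the domain integral of ρs change only through boundary fluxes and sources, so thermodynamic errors cannot accumulate; the paper's constraint extends this property to the acoustic terms, which also involve the air density ρ.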
Large-scale shell-model calculation with core excitations for neutron-rich nuclei beyond 132Sn
NASA Astrophysics Data System (ADS)
Jin, Hua; Hasegawa, Munetake; Tazaki, Shigeru; Kaneko, Kazunari; Sun, Yang
2011-10-01
The structure of neutron-rich nuclei with a few nucleons beyond 132Sn is investigated by means of large-scale shell-model calculations. For a considerably large model space, including neutron core excitations, a new effective interaction is determined by employing the extended pairing-plus-quadrupole model with monopole corrections. The model provides a systematic description of energy levels of A=133-135 nuclei up to high spins and reproduces available data on electromagnetic transitions. The structure of these nuclei is analyzed in detail, with emphasis on effects associated with core excitations. The results show evidence of hexadecupole correlation in addition to octupole correlation in this mass region. The suggested feature of magnetic rotation in 135Te occurs in the present shell-model calculation.
A Case Study Examining the Career Academy Model at a Large Urban Public High School
ERIC Educational Resources Information Center
Ho, Howard
2013-01-01
This study focused on how career academies were implemented at a large, urban, public high school. Research shows that the career academy model should consist of 3 core components: (a) a small learning community (SLC), (b) a theme-based curriculum, and (c) business partnerships (Stern, Dayton, & Raby, 2010). The purpose of this qualitative…
NASA Astrophysics Data System (ADS)
Coll, Marta; Navarro, Joan; Olson, Robert J.; Christensen, Villy
2013-10-01
We synthesized available information from ecological models at local and regional scales to obtain a global picture of the trophic position and ecological role of squids in marine ecosystems. First, static food-web models were used to analyze basic ecological parameters and indicators of squids: biomass, production, consumption, trophic level, omnivory index, predation mortality, diet, and ecological role. In addition, we developed various dynamic temporal simulations using two food-web models that included squids in their parameterization, and we investigated potential impacts of fishing pressure and environmental conditions for squid populations and, consequently, for marine food webs. Our results showed that squids occupy a large range of trophic levels in marine food webs and show a large trophic width, reflecting the versatility in their feeding behaviors and dietary habits. Models illustrated that squids are abundant organisms in marine ecosystems, and have high growth and consumption rates, but these parameters are highly variable because squids are adapted to a large variety of environmental conditions. Results also show that squids can have a large trophic impact on other elements of the food web, and top-down control from squids to their prey can be high. In addition, some squid species are important prey of apical predators and may be keystone species in marine food webs. In fact, we found strong interrelationships between neritic squids and the populations of their prey and predators in coastal and shelf areas, while the role of squids in open ocean and upwelling ecosystems appeared more constrained to a bottom-up impact on their predators. Therefore, large removals of squids will likely have large-scale effects on marine ecosystems. In addition, simulations confirm that squids are able to benefit from a general increase in fishing pressure, mainly due to predation release, and quickly respond to changes triggered by the environment. Squids may thus be very sensitive to the effects of fishing and climate change.
NASA Technical Reports Server (NTRS)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.; Dankers, R.; Daggupati, P.; Donnelly, C.; Florke, M.; Huang, S.; Motovilov, Y.; Buda, S.;
2017-01-01
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity to climate variability and climate change is comparable for impact models designed for either scale. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a better reproduction of reference conditions. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases, but have distinct differences in other cases, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability. Whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models calibrated and validated against observed discharge should be used.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hattermann, F. F.; Krysanova, V.; Gosling, S. N.
Ideally, the results from models operating at different scales should agree in trend direction and magnitude of impacts under climate change. However, this implies that the sensitivity of impact models designed for either scale to climate variability and change is comparable. In this study, we compare hydrological changes simulated by 9 global and 9 regional hydrological models (HM) for 11 large river basins in all continents under reference and scenario conditions. The foci are on model validation runs, sensitivity of annual discharge to climate variability in the reference period, and sensitivity of the long-term average monthly seasonal dynamics to climate change. One major result is that the global models, mostly not calibrated against observations, often show a considerable bias in mean monthly discharge, whereas regional models show a much better reproduction of reference conditions. However, the sensitivity of the two HM ensembles to climate variability is in general similar. The simulated climate change impacts in terms of long-term average monthly dynamics evaluated for HM ensemble medians and spreads show that the medians are to a certain extent comparable in some cases with distinct differences in others, and the spreads related to global models are mostly notably larger. Summarizing, this implies that global HMs are useful tools when looking at large-scale impacts of climate change and variability, but whenever impacts for a specific river basin or region are of interest, e.g. for complex water management applications, the regional-scale models validated against observed discharge should be used.
Anthropic prediction for a large multi-jump landscape
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz-Perlov, Delia, E-mail: delia@perlov.com
2008-10-15
The assumption of a flat prior distribution plays a critical role in the anthropic prediction of the cosmological constant. In a previous paper we analytically calculated the distribution for the cosmological constant, including the prior and anthropic selection effects, in a large toy 'single-jump' landscape model. We showed that it is possible for the fractal prior distribution that we found to behave as an effectively flat distribution in a wide class of landscapes, but only if the single-jump size is large enough. We extend this work here by investigating a large (N ≈ 10⁵⁰⁰) toy 'multi-jump' landscape model. The jump sizes range over three orders of magnitude and an overall free parameter c determines the absolute size of the jumps. We will show that for 'large' c the distribution of probabilities of vacua in the anthropic range is effectively flat, and thus the successful anthropic prediction is validated. However, we argue that for small c, the distribution may not be smooth.
Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows
NASA Astrophysics Data System (ADS)
Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel
2017-11-01
We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
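The structure of the proposed model can be sketched as follows (schematic notation assumed; the precise tensor form and coefficients are the paper's):

```latex
\tau_{ij}^{\mathrm{mod}}
  = -2\,\nu_e\,\bar{S}_{ij}
    \;+\; \mu\,\big(\bar{S}_{ik}\bar{\Omega}_{kj} - \bar{\Omega}_{ik}\bar{S}_{kj}\big),
```

where S and Ω are the resolved strain- and rotation-rate tensors. The nonlinear term is traceless and, since its contraction with S vanishes by the cyclic property of the trace, it performs no net work on the resolved strain: it redistributes (transports) energy without adding dissipation, which is exactly the role motivated by the Coriolis force.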
NASA Astrophysics Data System (ADS)
Kim, Dongmin; Lee, Myong-In; Jeong, Su-Jong; Im, Jungho; Cha, Dong Hyun; Lee, Sanggyun
2017-12-01
This study compares historical simulations of the terrestrial carbon cycle produced by 10 Earth System Models (ESMs) that participated in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). Using MODIS satellite estimates, this study validates the simulation of gross primary production (GPP), net primary production (NPP), and carbon use efficiency (CUE), which depend on plant function types (PFTs). The models show noticeable deficiencies compared to the MODIS data in the simulation of the spatial patterns of GPP and NPP and large differences among the simulations, although the multi-model ensemble (MME) mean provides a realistic global mean value and spatial distributions. The larger model spreads in GPP and NPP compared to those of surface temperature and precipitation suggest that the differences among simulations in terms of the terrestrial carbon cycle are largely due to uncertainties in the parameterization of terrestrial carbon fluxes by vegetation. The models also exhibit large spatial differences in their simulated CUE values and at locations where the dominant PFT changes, primarily due to differences in the parameterizations. While the MME-simulated CUE values show a strong dependence on surface temperatures, the observed CUE values from MODIS show greater complexity, as well as non-linear sensitivity. This leads to the overall underestimation of CUE using most of the PFTs incorporated into current ESMs. The results of this comparison suggest that more careful and extensive validation is needed to improve the terrestrial carbon cycle in terms of ecosystem-level processes.
Modeling the coupled return-spread high frequency dynamics of large tick assets
NASA Astrophysics Data System (ADS)
Curato, Gianbiagio; Lillo, Fabrizio
2015-01-01
Large tick assets, i.e. assets where one tick movement is a significant fraction of the price and the bid-ask spread is almost always equal to one tick, display dynamics in which price changes and spread are strongly coupled. We present an approach based on the hidden Markov model, also known in econometrics as the Markov switching model, for the dynamics of price changes, where the latent Markov process is described by the transitions between spreads. We then use a finite Markov mixture of logit regressions on past squared price changes to describe temporal dependencies in the dynamics of price changes. The model can thus be seen as a double chain Markov model. We show that the model describes the shape of the price change distribution at different time scales, volatility clustering, and the anomalous decrease of kurtosis. We calibrate our models on Nasdaq stocks and we show that this model reproduces remarkably well the statistical properties of real data.
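Schematically, conditional on the latent spread state s_t, the price change can follow a multinomial logit in past squared price changes (notation assumed for illustration):

```latex
\Pr\big(\Delta p_t = j \,\big|\, s_t,\ \Delta p_{t-1},\dots,\Delta p_{t-p}\big)
 = \frac{\exp\!\Big(\alpha_{j,s_t} + \sum_{\ell=1}^{p}\beta_{j\ell}\,\Delta p_{t-\ell}^{2}\Big)}
        {\sum_{j'}\exp\!\Big(\alpha_{j',s_t} + \sum_{\ell=1}^{p}\beta_{j'\ell}\,\Delta p_{t-\ell}^{2}\Big)},
```

with the spread states s_t evolving as a first-order Markov chain. The two coupled chains (spread and price change) give the model its double-chain character, and the squared lagged price changes carry the volatility-clustering feedback.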
NASA Astrophysics Data System (ADS)
Jiang, Zhou; Xia, Zhenhua; Shi, Yipeng; Chen, Shiyi
2018-04-01
A fully developed spanwise-rotating turbulent channel flow has been numerically investigated utilizing large-eddy simulation. Our focus is to assess the performance of the dynamic variants of eddy viscosity models, including the dynamic Vreman's model (DVM), the dynamic wall-adapting local eddy viscosity (DWALE) model, the dynamic σ (Dσ) model, and the dynamic volumetric strain-stretching (DVSS) model, in this canonical flow. The results with the dynamic Smagorinsky model (DSM) and direct numerical simulations (DNS) are used as references. Our results show that the DVM has a wrong asymptotic behavior in the near-wall region, while the other three models predict it correctly. In the high-rotation case, the DWALE yields a reliable mean velocity profile, but the turbulence intensities in the wall-normal and spanwise directions show clear deviations from the DNS data. The DVSS exhibits poor predictions of both the mean velocity profile and the turbulence intensities. In all three cases, the Dσ model performs the best.
NASA Astrophysics Data System (ADS)
Fakhari, Vahid; Choi, Seung-Bok; Cho, Chang-Hyun
2015-04-01
This work presents a new robust model reference adaptive control (MRAC) for the control of vehicle-engine-induced vibration using an electromagnetic type of active engine mount. Vibration isolation performance of the active mount associated with the robust controller is evaluated in the presence of large uncertainties. As a first step, an active mount with a linear solenoid actuator is prepared and its dynamic model is identified via experimental tests. Subsequently, a new robust MRAC based on the gradient method with σ-modification is designed by selecting a proper reference model. In designing the robust adaptive control, structured (parametric) uncertainties in the stiffness of the passive part of the mount and in the damping ratio of the active part of the mount are considered to investigate the robustness of the proposed controller. Experimental and simulation results are presented to evaluate performance, focusing on the robustness of the controller in the face of large uncertainties. The obtained results show that the proposed controller provides robust vibration control, and hence effective vibration isolation, even in the presence of large uncertainties.
Stanley, Clayton; Byrne, Michael D
2016-12-01
The growth of social media and user-created content on online sites provides unique opportunities to study models of human declarative memory. By framing the task of choosing a hashtag for a tweet and tagging a post on Stack Overflow as a declarative memory retrieval problem, 2 cognitively plausible declarative memory models were applied to millions of posts and tweets and evaluated on how accurately they predict a user's chosen tags. An ACT-R based Bayesian model and a random permutation vector-based model were tested on the large data sets. The results show that past user behavior of tag use is a strong predictor of future behavior. Furthermore, past behavior was successfully incorporated into the random permutation model that previously used only context. Also, ACT-R's attentional weight term was linked to an entropy-weighting natural language processing method used to attenuate high-frequency words (e.g., articles and prepositions). Word order was not found to be a strong predictor of tag use, and the random permutation model performed comparably to the Bayesian model without including word order. This shows that the strength of the random permutation model is not in the ability to represent word order, but rather in the way in which context information is successfully compressed. The results of the large-scale exploration show how the architecture of the 2 memory models can be modified to significantly improve accuracy, and may suggest task-independent general modifications that can help improve model fit to human data in a much wider range of domains. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
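For reference, the ACT-R declarative retrieval machinery referred to above centres on the standard activation equation (reproduced here as background; the entropy-weighting modification discussed in the abstract adjusts the attentional weights):

```latex
A_i = \underbrace{\ln\!\Big(\sum_{j=1}^{n} t_j^{-d}\Big)}_{\text{base level: past use of tag } i}
    \;+\; \underbrace{\sum_{k \in \mathrm{context}} W_k\, S_{ki}}_{\text{spreading activation from context words}},
```

where t_j are the times since previous uses of item i, d is the decay parameter, W_k is the attentional weight of context element k, and S_{ki} is the associative strength from k to i; the tag with the highest activation is the model's prediction.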
van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre
2017-09-01
Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it suggests a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohort for one of the three tested models [Area under the Receiver Operating Curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two of the three models had a lower AUC in validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohort) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
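A minimal sketch of the "cohort differences model" idea: train a classifier to tell training-cohort patients from validation-cohort patients, and read the cross-validated AUC as a measure of cohort difference. The classifier choice and feature handling here are assumptions for illustration, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

def cohort_differences_auc(X_train_cohort, X_valid_cohort):
    """AUC of a classifier predicting cohort membership from patient
    features; ~0.5 means similar cohorts (reproducibility), values
    near 1 mean large differences (validation of transferability)."""
    X = np.vstack([X_train_cohort, X_valid_cohort])
    y = np.r_[np.zeros(len(X_train_cohort)), np.ones(len(X_valid_cohort))]
    clf = LogisticRegression(max_iter=1000)
    # Cross-validated membership probabilities avoid optimistic bias.
    p = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, p)

# Hypothetical feature matrices (rows: patients, columns: predictors).
rng = np.random.default_rng(0)
auc = cohort_differences_auc(rng.normal(0, 1, (100, 5)),
                             rng.normal(0.5, 1, (54, 5)))
```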
NASA Astrophysics Data System (ADS)
Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; Woolnough, Steve J.; Jiang, Xianan; Waliser, Duane E.; Caian, Mihaela; Cole, Jason; Hagos, Samson M.; Hannay, Cecile; Kim, Daehyun; Miyakawa, Tomoki; Pritchard, Michael S.; Roehrig, Romain; Shindo, Eiki; Vitart, Frederic; Wang, Hailan
2015-05-01
An analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models is presented as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12-36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time, although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior reveals that some models exhibit strong intermittency in rainfall and that the relationship between precipitation and dynamics differs between models. The wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. In addition, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan; Ting, Eric; Chaparro, Daniel
2017-01-01
This paper investigates the effect of nonlinear large deflection bending on the aerodynamic performance of a high aspect ratio flexible wing. A set of nonlinear static aeroelastic equations are derived for the large bending deflection of a high aspect ratio wing structure. An analysis is conducted to compare the nonlinear bending theory with the linear bending theory. The results show that the nonlinear bending theory is length-preserving whereas the linear bending theory causes a non-physical effect of lengthening the wing structure under the no axial load condition. A modified lifting line theory is developed to compute the lift and drag coefficients of a wing structure undergoing a large bending deflection. The lift and drag coefficients are more accurately estimated by the nonlinear bending theory due to its length-preserving property. The nonlinear bending theory yields lower lift and span efficiency than the linear bending theory. A coupled aerodynamic-nonlinear finite element model is developed to implement the nonlinear bending theory for a Common Research Model (CRM) flexible wing wind tunnel model to be tested in the University of Washington Aeronautical Laboratory (UWAL). The structural stiffness of the model is designed to give about 10% wing tip deflection, which is large enough for the nonlinear deflection effect to become significant. The computational results show that the nonlinear bending theory yields slightly less lift than the linear bending theory for this wind tunnel model. As a result, the linear bending theory is deemed adequate for the CRM wind tunnel model.
NASA Astrophysics Data System (ADS)
LI, J.; Chen, Y.; Wang, H. Y.
2016-12-01
In large basin flood forecasting, the forecasting lead time is very important. Advances in numerical weather prediction (NWP) in the past decades provide new input to extend the flood forecasting lead time in large rivers. The current challenge in fulfilling this goal is that the uncertainty of quantitative precipitation forecasts (QPF) from such NWP models is still high, so controlling QPF uncertainty is an emerging technical requirement. The Weather Research and Forecasting (WRF) model is one of these NWPs, and how to control its QPF uncertainty is a research topic for many in the meteorological community. In this study, the QPF products in the Liujiang river basin, a big river with a drainage area of 56,000 km2, were first compared with ground observations from a rain gauge network, and the results show that the uncertainty of the WRF QPF is relatively high. A post-processing algorithm that correlates the QPF with the observed precipitation is therefore proposed to remove the systematic bias in the QPF. With this algorithm, the post-processed WRF QPF is close to the ground-observed precipitation in terms of area-averaged precipitation. The precipitation is then coupled with the Liuxihe model, a physically based distributed hydrological model that is widely used in small watershed flash flood forecasting. The Liuxihe model readily ingests gridded precipitation from NWP, can optimize model parameters even when observed hydrological data are scarce, has very high model resolution to improve performance, and runs on high-performance supercomputers with a parallel algorithm when applied to large rivers. Two flood events in the Liujiang River were collected; one was used to optimize the model parameters and the other to validate the model. The results show that the river flow simulation is improved markedly and could be used in real-time flood forecasting trials to extend the flood forecasting lead time.
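A minimal sketch of the kind of regression-based QPF post-processing described above: remove the systematic bias in forecast precipitation by regressing it against rain-gauge observations over a calibration period. The linear form and variable names are assumptions for illustration:

```python
import numpy as np

def fit_bias_correction(qpf, obs):
    """Least-squares linear map obs ~ a * qpf + b over a calibration period."""
    a, b = np.polyfit(qpf, obs, deg=1)
    return a, b

def correct(qpf, a, b):
    # Precipitation cannot be negative after correction.
    return np.maximum(a * qpf + b, 0.0)

# Example with synthetic area-averaged precipitation (mm).
rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, size=200)                   # "observed" rainfall
qpf = 1.4 * obs + rng.normal(0.0, 3.0, size=200)      # biased forecast
a, b = fit_bias_correction(qpf, obs)
qpf_corrected = correct(qpf, a, b)
```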
Large deviation approach to the generalized random energy model
NASA Astrophysics Data System (ADS)
Dorlas, T. C.; Dukes, W. M. B.
2002-05-01
The generalized random energy model is a generalization of the random energy model introduced by Derrida to mimic the ultrametric structure of the Parisi solution of the Sherrington-Kirkpatrick model of a spin glass. It was solved exactly in two special cases by Derrida and Gardner. A complete solution for the thermodynamics in the general case was given by Capocaccia et al. Here we use large deviation theory to analyse the model in a very straightforward way. We also show that the variational expression for the free energy can be evaluated easily using the Cauchy-Schwarz inequality.
NASA Astrophysics Data System (ADS)
Juhui, Chen; Yanjia, Tang; Dan, Li; Pengfei, Xu; Huilin, Lu
2013-07-01
Flow behavior of gas and particles in a circulating fluidized bed (CFB) is predicted by large eddy simulation of the gas coupled with a second-order moment model for the solids (the LES-SOM model). This study shows that the solid volume fractions along the height simulated using a two-dimensional model are in agreement with experiments. The velocity, volume fraction and second-order moments of particles are computed. The second-order moments of clusters are calculated. The solid volume fraction, velocity and second-order moments are compared for three different model constants.
Gao, Yongnian; Gao, Junfeng; Yin, Hongbin; Liu, Chuansheng; Xia, Ting; Wang, Jing; Huang, Qi
2015-03-15
Remote sensing has been widely used for water quality monitoring, but most of these monitoring studies have only focused on a few water quality variables, such as chlorophyll-a, turbidity, and total suspended solids, which have typically been considered optically active variables. Estimating the phosphorus concentration in water from remote sensing remains a challenge. The total phosphorus (TP) in lakes has been estimated from remotely sensed observations, primarily using the simple individual band ratio or its natural logarithm and the statistical regression method based on the field TP data and the spectral reflectance. In this study, we investigated the possibility of establishing a spatial modeling scheme to estimate the TP concentration of a large lake from multi-spectral satellite imagery using band combinations and regional multivariate statistical modeling techniques, and we tested the applicability of the spatial modeling scheme. The results showed that HJ-1A CCD multi-spectral satellite imagery can be used to estimate the TP concentration in a lake. The correlation and regression analysis showed a highly significant positive relationship between the TP concentration and certain remotely sensed combination variables. The proposed modeling scheme had a higher accuracy for the TP concentration estimation in the large lake compared with the traditional individual band ratio method and the whole-lake scale regression-modeling scheme. The TP concentration values showed a clear spatial variability and were high in western Lake Chaohu and relatively low in eastern Lake Chaohu. The northernmost portion, the northeastern coastal zone and the southeastern portion of western Lake Chaohu had the highest TP concentrations, and the other regions had the lowest TP concentration values, except for the coastal zone of eastern Lake Chaohu. These results strongly suggested that the proposed modeling scheme, i.e., the band combinations and the regional multivariate statistical modeling techniques, demonstrated advantages for estimating the TP concentration in a large lake and had strong potential for universal application to TP concentration estimation in large lake waters worldwide. Copyright © 2014 Elsevier Ltd. All rights reserved.
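A minimal sketch of the band-combination regression idea: build candidate combinations (ratios and their logarithms) from the multi-spectral bands and fit a multivariate regression against field TP samples. The band variables, the chosen combinations, and the plain least-squares fit are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

def band_features(bands):
    """bands: (n_samples, n_bands) reflectances -> ratios and log-ratios."""
    feats = []
    for i, j in combinations(range(bands.shape[1]), 2):
        ratio = bands[:, i] / bands[:, j]
        feats.append(ratio)
        feats.append(np.log(ratio))
    return np.column_stack(feats)

# Hypothetical reflectances for 4 bands and matched field TP samples (mg/L).
rng = np.random.default_rng(2)
bands = rng.uniform(0.01, 0.2, size=(120, 4))
tp = rng.uniform(0.05, 0.4, size=120)

model = LinearRegression().fit(band_features(bands), tp)
# In practice the fitted model is applied per pixel to map TP over the lake.
tp_estimates = model.predict(band_features(bands))
```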
Wave models for turbulent free shear flows
NASA Technical Reports Server (NTRS)
Liou, W. W.; Morris, P. J.
1991-01-01
New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time dependent motion of the large scale structure of the mixing region are made. The predictions show good agreement with experimental observations.
HELICITY CONSERVATION IN NONLINEAR MEAN-FIELD SOLAR DYNAMO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pipin, V. V.; Sokoloff, D. D.; Zhang, H.
It is believed that magnetic helicity conservation is an important constraint on large-scale astrophysical dynamos. In this paper, we study a mean-field solar dynamo model that employs two different formulations of the magnetic helicity conservation. In the first approach, the evolution of the averaged small-scale magnetic helicity is largely determined by the local induction effects due to the large-scale magnetic field, turbulent motions, and the turbulent diffusive loss of helicity. In this case, the dynamo model shows that the typical strength of the large-scale magnetic field generated by the dynamo is much smaller than the equipartition value for the magnetic Reynolds number 10⁶. This is the so-called catastrophic quenching (CQ) phenomenon. In the literature, this is considered to be typical for various kinds of solar dynamo models, including the distributed-type and the Babcock-Leighton-type dynamos. The problem can be resolved by the second formulation, which is derived from the integral conservation of the total magnetic helicity. In this case, the dynamo model shows that magnetic helicity propagates with the dynamo wave from the bottom of the convection zone to the surface. This prevents CQ because of the local balance between the large-scale and small-scale magnetic helicities. Thus, the solar dynamo can operate in a wide range of magnetic Reynolds numbers up to 10⁶.
Ren, Junjie; Zhang, Shimin
2013-01-01
The recurrence interval of large earthquakes on an active fault zone is an important parameter in assessing seismic hazard. The 2008 Wenchuan earthquake (Mw 7.9) occurred on the central Longmen Shan fault zone and ruptured the Yingxiu-Beichuan fault (YBF) and the Guanxian-Jiangyou fault (GJF). However, there is a considerable discrepancy between preseismic and postseismic estimates of the recurrence interval of large earthquakes based on slip rate and paleoseismologic results. Post-seismic trenches showed that the central Longmen Shan fault zone probably experiences events similar to the 2008 quake, suggesting a characteristic earthquake model. In this paper, we use the published seismogenic model of the 2008 earthquake based on Global Positioning System (GPS) and Interferometric Synthetic Aperture Radar (InSAR) data and construct a characteristic seismic moment accumulation/release model to estimate the recurrence interval of large earthquakes on the central Longmen Shan fault zone. Our results show that the seismogenic zone accommodates a moment rate of (2.7 ± 0.3) × 10¹⁷ N m/yr, and a recurrence interval of 3900 ± 400 yrs is necessary for accumulation of strain energy equivalent to the 2008 earthquake. This study provides a preferred interval estimation of large earthquakes for seismic hazard analysis in the Longmen Shan region.
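The quoted numbers are mutually consistent: in a characteristic-earthquake moment-balance model, the recurrence interval is the characteristic event moment divided by the accumulation rate, so

```latex
M_0 \;\approx\; T\,\dot{M}_0
    \;\approx\; (3.9\times10^{3}\ \mathrm{yr})\times(2.7\times10^{17}\ \mathrm{N\,m/yr})
    \;\approx\; 1.05\times10^{21}\ \mathrm{N\,m},
```

which, via the standard moment-magnitude relation M_w = (2/3)(log10 M_0 - 9.1), corresponds to M_w ≈ 7.95, close to the 2008 event.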
A Canonical Response in Rainfall Characteristics to Global Warming: Projections by IPCC CMIP5 Models
NASA Technical Reports Server (NTRS)
Lau, William K. M.; Wu, H. T.; Kim, K. M.
2012-01-01
Changes in rainfall characteristics induced by global warming are examined based on probability distribution function (PDF) analysis, from outputs of 14 IPCC (Intergovernmental Panel on Climate Change) CMIP5 (5th Coupled Model Intercomparison Project) models under various scenarios of increased CO2 emissions. Results show that collectively the CMIP5 models project a robust and consistent global and regional rainfall response to CO2 warming. Globally, the models show a 1-3% increase in rainfall per degree rise in temperature, with a canonical response featuring a large increase (100-250%) in the frequency of occurrence of very heavy rain, a reduction (5-10%) of moderate rain, and an increase (10-15%) of light rain events. Regionally, even though details vary among models, a majority of the models (>10 out of 14) project a consistent large-scale response with more heavy rain events in climatologically wet regions, most pronounced in the Pacific ITCZ and the Asian monsoon. Moderate rain events are found to decrease over extensive regions of the subtropical and extratropical oceans, but increase over the extratropical land regions and the Southern Oceans. The spatial distribution of light rain resembles that of moderate rain, but mostly with opposite polarity. The majority of the models also show an increase in the number of dry events (absence or only trace amounts of rain) over subtropical and tropical land regions in both hemispheres. These results suggest that rainfall characteristics are changing and that increased extreme rainfall events and drought occurrences are connected, as a consequence of a global adjustment of the large-scale circulation to global warming.
NASA Technical Reports Server (NTRS)
Zhou, Jiayu; Lau, K.-M.; Lau, William K. M. (Technical Monitor)
2002-01-01
The simulations of climatology and the response of the South American summer monsoon (SASM) to the 1997/98 El Nino are investigated using six atmospheric general circulation models. Results show all models simulate the large-scale features of the SASM reasonably well. However, both stationary and seasonal components of the surface pressure are overestimated, resulting in an excessively strong SASM in the model climatology. The low-level northwesterly jet over the eastern foothills of the Andes is not well resolved because of the coarse resolution of the models. Large rainfall simulation biases are found in association with the Andes and the Atlantic ITCZ, indicating model problems in handling steep mountains and the parameterization of convective processes. The simulation of the 1997/98 El Nino impact on the SASM is examined based on an ensemble of ten two-year (September 1996 - August 1998) integrations. Results show that most models can simulate the large-scale tropospheric warming response over the tropical central Pacific, including the dynamic response of Rossby wave propagation of the Pacific-South America (PSA) pattern that influences remote areas. Deficiencies are found in simulating the regional impacts over South America. The model simulations fail to capture the southeastward expansion of anomalously warm tropospheric air. As a result, the upper tropospheric anomalous high over the subtropical Andes is less pronounced, and the enhancement of the subtropical westerly jet is displaced 5°-10° equatorward compared to the observed. Over the Amazon basin, the shift of the Walker cell induced by El Nino is not well represented, showing anomalous easterlies in both the upper and lower troposphere.
NASA Technical Reports Server (NTRS)
Frey, H. V.
2004-01-01
A comparison of the distribution of visible and buried impact basins (Quasi-Circular Depressions or QCDs) on Mars > 200 km in diameter with free air gravity, crustal thickness and magnetization models shows some QCDs have coincident gravity anomalies but most do not. Very few QCDs have closely coincident magnetization anomalies, and only the oldest of the very large impact basins have strong magnetic anomalies within their main rings. Crustal thickness data show a large number of Circular Thinned Areas (CTAs). Some of these correspond to known impact basins, while others may represent buried impact basins not always recognized as QCDs in topography data alone. If true, the buried lowlands may be even older than we have previously estimated.
Ouyang, Wenjun; Subotnik, Joseph E
2017-05-07
Using the Anderson-Holstein model, we investigate charge transfer dynamics between a molecule and a metal surface for two extreme cases. (i) With a large barrier, we show that the dynamics follow a single exponential decay as expected; (ii) without any barrier, we show that the dynamics are more complicated. On the one hand, if the metal-molecule coupling is small, single exponential dynamics persist. On the other hand, when the coupling between the metal and the molecule is large, the dynamics follow a biexponential decay. We analyze the dynamics using the Smoluchowski equation, develop a simple model, and explore the consequences of biexponential dynamics for a hypothetical cyclic voltammetry experiment.
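The overdamped (Smoluchowski) picture used in the analysis describes diffusion of the molecular coordinate x on a free-energy surface V(x); the generic form is standard, while the specific surface comes from the Anderson-Holstein parameters:

```latex
\frac{\partial P(x,t)}{\partial t}
 = \frac{\partial}{\partial x}\!\left[ D\left(\frac{\partial P(x,t)}{\partial x}
   + \frac{1}{k_B T}\,P(x,t)\,\frac{\partial V(x)}{\partial x}\right)\right].
```

With a large barrier, the decay is dominated by a single slow (Kramers-like) mode, consistent with the single-exponential regime; a plausible reading of the barrierless, strong-coupling result is that two relaxation modes contribute comparably, producing the biexponential decay.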
Improved modeling of GaN HEMTs for predicting thermal and trapping-induced-kink effects
NASA Astrophysics Data System (ADS)
Jarndal, Anwar; Ghannouchi, Fadhel M.
2016-09-01
In this paper, an improved modeling approach has been developed and validated for GaN high electron mobility transistors (HEMTs). The proposed analytical model accurately simulates the drain current and its inherent trapping and thermal effects. A genetic-algorithm-based procedure is developed to automatically find the fitting parameters of the model. The developed modeling technique is implemented on a packaged GaN-on-Si HEMT and validated by DC and small-/large-signal RF measurements. The model is also employed for designing and realizing a switch-mode inverse class-F power amplifier. The amplifier simulations showed very good agreement with RF large-signal measurements.
Calibration of the γ-Equation Transition Model for High Reynolds Flows at Low Mach
NASA Astrophysics Data System (ADS)
Colonia, S.; Leble, V.; Steijl, R.; Barakos, G.
2016-09-01
The numerical simulation of flows over large-scale wind turbine blades without considering the transition from laminar to fully turbulent flow may result in incorrect estimates of the blade loads and performance. Thanks to its relative simplicity and promising results, the Local-Correlation based Transition Modelling concept represents a valid way to include transitional effects in practical CFD simulations. However, the model involves coefficients that need tuning. In this paper, the γ-equation transition model is assessed and calibrated, for a wide range of Reynolds numbers at low Mach, as needed for wind turbine applications. An aerofoil is used to evaluate the original model and calibrate it, while a large-scale wind turbine blade is employed to show that the calibrated model can lead to reliable solutions for complex three-dimensional flows. The calibrated model shows promising results for both two-dimensional and three-dimensional flows, even if cross-flow instabilities are neglected.
How uncertain are climate model projections of water availability indicators across the Middle East?
Hemming, Debbie; Buontempo, Carlo; Burke, Eleanor; Collins, Mat; Kaye, Neil
2010-11-28
The projection of robust regional climate changes over the next 50 years presents a considerable challenge for the current generation of climate models. Water cycle changes are particularly difficult to model in this area because major uncertainties exist in the representation of processes such as large-scale and convective rainfall and their feedback with surface conditions. We present climate model projections and uncertainties in water availability indicators (precipitation, run-off and drought index) for the 1961-1990 and 2021-2050 periods. Ensembles from two global climate models (GCMs) and one regional climate model (RCM) are used to examine different elements of uncertainty. Although all three ensembles capture the general distribution of observed annual precipitation across the Middle East, the RCM is consistently wetter than observations, especially over the mountainous areas. All future projections show decreasing precipitation (ensemble median between -5 and -25%) in coastal Turkey and parts of Lebanon, Syria and Israel and consistent run-off and drought index changes. The Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) GCM ensemble exhibits drying across the north of the region, whereas the Met Office Hadley Centre's Quantifying Uncertainties in Model Projections-Atmospheric (QUMP-A) GCM and RCM ensembles show slight drying in the north and significant wetting in the south. RCM projections also show greater sensitivity (both wetter and drier) and a wider uncertainty range than QUMP-A. The nature of these uncertainties suggests that both large-scale circulation patterns, which influence region-wide drying/wetting patterns, and regional-scale processes, which affect localized water availability, are important sources of uncertainty in these projections. To reduce the large uncertainties in water availability projections, it is suggested that efforts would be well placed to focus on the understanding and modelling of both large-scale processes and their teleconnections with Middle East climate and the localized processes involved in orographic precipitation.
Tidal bending of ice shelves as a mechanism for large-scale temporal variations in ice flow
NASA Astrophysics Data System (ADS)
Rosier, Sebastian H. R.; Hilmar Gudmundsson, G.
2018-05-01
GPS measurements reveal strong modulation of horizontal ice shelf and ice stream flow at a variety of tidal frequencies, most notably a fortnightly (Msf) frequency not present in the vertical tides themselves. Current theories largely fail to explain the strength and prevalence of this signal over floating ice shelves. We show how well-known non-linear aspects of ice rheology can give rise to widespread, long-periodic tidal modulation in ice shelf flow, generated within ice shelves themselves through tidal flexure acting at diurnal and semidiurnal frequencies. Using full-Stokes viscoelastic modelling, we show that inclusion of tidal bending within the model accounts for much of the observed tidal modulation of ice shelf flow. Furthermore, our model shows that, in the absence of vertical tidal forcing, the mean flow of the ice shelf is reduced by almost 30 % for the geometry that we consider.
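The appearance of a fortnightly signal from purely diurnal and semidiurnal forcing follows from the non-linearity: with a non-linear (Glen-law, n = 3) rheology, the response to forcing at two frequencies contains cross terms at the difference frequency, e.g.

```latex
\cos(\omega_{M2}t)\,\cos(\omega_{S2}t)
  = \tfrac{1}{2}\big[\cos((\omega_{S2}-\omega_{M2})t) + \cos((\omega_{S2}+\omega_{M2})t)\big],
\qquad
T_{Msf} = \frac{2\pi}{\omega_{S2}-\omega_{M2}} \approx 14.77\ \mathrm{days},
```

so the slowly varying Msf component emerges even though the vertical tide itself contains no energy at that period (the M2 and S2 periods are 12.42 h and 12.00 h).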
On the distinction between large deformation and large distortion for anisotropic materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
BRANNON,REBECCA M.
2000-02-24
A motion involves large distortion if the ratios of principal stretches differ significantly from unity. A motion involves large deformation if the deformation gradient tensor is significantly different from the identity. Unfortunately, rigid rotation fits the definition of large deformation, and models that claim to be valid for large deformation are often inadequate for large distortion. An exact solution for the stress in an idealized fiber-reinforced composite is used to show that conventional large deformation representations for transverse isotropy give errant results. Possible alternative approaches are discussed.
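The distinction can be made precise with the polar decomposition of the deformation gradient (standard continuum-mechanics notation, added here as a worked illustration of the abstract's definitions):

```latex
F = R\,U \quad\text{(rotation } R\text{, stretch } U\text{)}.
% Example: a 90-degree rigid rotation in the plane,
R = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \quad U = I
\;\Rightarrow\; \lVert F - I \rVert \ \text{large (large deformation)},
\quad \lambda_1 = \lambda_2 = 1 \ \text{(zero distortion)}.
```

Large deformation means F differs significantly from the identity; large distortion means the principal-stretch ratios λ_i/λ_j (eigenvalue ratios of U) differ significantly from unity. A rigid rotation satisfies the first but not the second, which is exactly the degenerate case the abstract warns about.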
NASA Astrophysics Data System (ADS)
Ushijima, Timothy T.; Yeh, William W.-G.
2013-10-01
An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
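A minimal sketch of the POD reduction that makes the GA search tractable: compress model snapshots with an SVD and score candidate well sets by the sum of squared sensitivities in the reduced space. All names and the energy threshold are illustrative assumptions:

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """snapshots: (n_nodes, n_snapshots) matrix of model states.
    Returns the reduced basis capturing the requested energy fraction."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]                       # n_nodes x r

def design_criterion(J, candidate_rows):
    """Maximal-information criterion: sum of squared sensitivities over
    the candidate observation wells (rows of the sensitivity matrix J)."""
    return np.sum(J[candidate_rows, :] ** 2)

# A GA would search over index subsets `candidate_rows`, scoring each
# subset with design_criterion; the POD basis keeps every evaluation
# cheap because sensitivities are computed in the reduced space.
```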
Mathematical study of the effects of different intrahepatic cooling on thermal ablation zones.
Peng, Tingying; O'Neill, David; Payne, Stephen
2011-01-01
Thermal ablation of a tumour in the liver with radio frequency energy can be accomplished by using a probe inserted into the tissue under the guidance of medical imaging. The extent of ablation can be significantly affected by heat loss due to the high blood perfusion in the liver, especially when the tumour is located close to large vessels. A mathematical model is thus presented here to investigate the heat-sinking effects of large vessels, combining a 3D two-equation coupled bio-heat model and a 1D model of convective heat transport across the blood vessel surface. The model simulation is able to reproduce the experimentally observed differences in intrahepatic cooling effects on thermal ablation zones: hepatic veins showed a focal indentation whereas portal veins showed broad flattening of the ablation zones. Moreover, this study also illustrates that this shape deviation can largely be attributed to the temperature variations between the microvascular branches of the portal vein as compared with the hepatic vein. In contrast, differences between these two types of veins in the amount of surface heat convection at the vessel wall have only a minor effect.
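A generic two-equation (tissue/blood) bio-heat system of the kind referred to above can be written as (a schematic form with assumed notation, not the paper's exact formulation):

```latex
(\rho c)_t \frac{\partial T_t}{\partial t}
  = \nabla\cdot(k_t \nabla T_t) + h\,(T_b - T_t) + Q_{rf},
\qquad
(\rho c)_b \left(\frac{\partial T_b}{\partial t} + \mathbf{u}\cdot\nabla T_b\right)
  = \nabla\cdot(k_b \nabla T_b) + h\,(T_t - T_b),
```

where T_t and T_b are the tissue and blood temperatures, h is an interfacial heat-transfer coefficient coupling the two phases, u is the blood velocity, and Q_rf is the radio-frequency heat source; the 1D convective model then supplies the wall boundary condition along each large vessel.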
NASA Astrophysics Data System (ADS)
Choi, Hyun-Jung; Lee, Hwa Woon; Sung, Kyoung-Hee; Kim, Min-Jung; Kim, Yoo-Keun; Jung, Woo-Sik
In order to correctly incorporate the large-scale or local-scale circulation in the model, a nudging term is introduced into the equation of motion. Nudging effects should be included properly in the model to reduce the uncertainties and improve the air flow field. To improve the meteorological components, the nudging coefficient should exert an appropriate influence over complex areas in the model initialization technique, which relates to data reliability and error suppression. Several numerical experiments were undertaken to evaluate the effects on air quality modeling by comparing the performance of the meteorological results across experiments with varying nudging coefficients. All experiments were run under both synoptic and asynoptic upper-wind conditions. Consequently, it is important to examine the model response to the nudging of wind and mass information. The MM5-CMAQ model was used to assess the ozone differences in each case during an episode day in Seoul, Korea, and we found large differences in the ozone concentration between runs. These results suggest that, for the appropriate simulation of large- or small-scale circulations, nudging with coefficients chosen for synoptic or asynoptic conditions has a clear advantage over dynamic initialization, so an appropriate limitation of the nudging coefficient values according to the upper-wind conditions is necessary before making an assessment. The statistical verifications showed that adequate nudging coefficients for both wind and temperature data throughout the model had a consistently positive impact on the atmospheric and air quality fields. In the case dominated by large-scale circulation, a large nudging coefficient shows only a minor improvement in the atmospheric and air quality fields. However, when small-scale convection is present, a large nudging coefficient produces consistent improvement in the atmospheric and air quality fields.
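The nudging (Newtonian relaxation) term mentioned above takes the standard form (generic notation):

```latex
\frac{\partial \mathbf{u}}{\partial t} = F(\mathbf{u})
  + G_u\,\big(\mathbf{u}_{\mathrm{obs}} - \mathbf{u}\big),
```

where F collects the model dynamics and physics, u_obs is the analysis or observed wind, and G_u (units of s⁻¹) is the nudging coefficient. A large G_u relaxes the model strongly toward the driving data, which helps when those data resolve the relevant circulation but can suppress legitimate small-scale structure otherwise, hence the case-dependent choice discussed above.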
Top quark forward-backward asymmetry and same-sign top quark pairs.
Berger, Edmond L; Cao, Qing-Hong; Chen, Chuan-Ren; Li, Chong Sheng; Zhang, Hao
2011-05-20
The top quark forward-backward asymmetry measured at the Tevatron collider shows a large deviation from standard model expectations. Among possible interpretations, a nonuniversal Z' model is of particular interest as it naturally predicts a top quark in the forward region of large rapidity. To reproduce the size of the asymmetry, the couplings of the Z' to standard model quarks must be large, inevitably leading to copious production of same-sign top quark pairs at the energies of the Large Hadron Collider (LHC). We explore the discovery potential for tt and ttj production in early LHC experiments at 7-8 TeV and conclude that if no tt signal is observed with 1 fb⁻¹ of integrated luminosity, then a nonuniversal Z' alone cannot explain the Tevatron forward-backward asymmetry.
Effects of selective attention on continuous opinions and discrete decisions
NASA Astrophysics Data System (ADS)
Si, Xia-Meng; Liu, Yun; Xiong, Fei; Zhang, Yan-Chao; Ding, Fei; Cheng, Hui
2010-09-01
Selective attention describes individuals' preference for information according to their motivation for involvement. Building on findings from social psychology, we propose an opinion-interaction model to improve the modeling of individuals' interacting behaviors. Two parameters govern the probability of agents interacting with opponents: individual relevance and time-openness. It is found that large individual relevance and large time-openness advance the appearance of large clusters, whereas large individual relevance and small time-openness favor the lessening of extremism. We also apply the model to identify factors leading to a successful product. Numerical simulations show that selective attention, especially individual relevance, cannot be ignored by firms launching products and by information spreaders seeking the most successful promotion.
Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.
2009-01-01
One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point.
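To make the notion of a pairwise model concrete, here is a minimal sketch of fitting a pairwise maximum-entropy (Ising-like) model to binary data by exact gradient ascent over all 2^N states; the data are synthetic and N is kept tiny, since exhaustive enumeration is only feasible in the small-subsystem regime whose extrapolation the paper questions.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
N, T = 5, 2000
data = (rng.random((T, N)) < 0.3).astype(float)          # synthetic binary "spike words"

states = np.array(list(product([0.0, 1.0], repeat=N)))   # all 2^N binary patterns
mean_d = data.mean(axis=0)                               # first moments to match
corr_d = (data.T @ data) / T                             # pairwise moments to match

h = np.zeros(N)                                          # fields
J = np.zeros((N, N))                                     # couplings (upper triangle)
for _ in range(2000):
    E = states @ h + np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E - E.max())
    p /= p.sum()                                         # model distribution over states
    mean_m = p @ states
    corr_m = states.T @ (p[:, None] * states)
    h += 0.5 * (mean_d - mean_m)                         # gradient ascent on the log-likelihood
    J += 0.5 * np.triu(corr_d - corr_m, k=1)

print("max first-moment error:", np.abs(mean_d - mean_m).max())
```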
Kharche, Sanjay R.; So, Aaron; Salerno, Fabio; Lee, Ting-Yim; Ellis, Chris; Goldman, Daniel; McIntyre, Christopher W.
2018-01-01
Dialysis prolongs life but augments cardiovascular mortality. Imaging data suggest that dialysis increases myocardial blood flow (BF) heterogeneity, but its causes remain poorly understood. A biophysical model of human coronary vasculature was used to explain the imaging observations and highlight causes of coronary BF heterogeneity. Post-dialysis CT images from patients under control, pharmacological stress (adenosine), therapy (cooled dialysate), and combined adenosine and cooled dialysate conditions were obtained. The data presented disparate phenotypes. To dissect vascular mechanisms, a 3D human vasculature model based on known experimental coronary morphometry and a space-filling algorithm was implemented. Steady-state simulations were performed to investigate the effects of altered aortic pressure and blood vessel diameters on myocardial BF heterogeneity. Imaging showed that stress and therapy potentially increased mean and total BF while reducing heterogeneity. BF histograms of one patient showed multi-modality. Using the model, it was found that total coronary BF increased as coronary perfusion pressure was increased. BF heterogeneity was differentially affected by large or small vessel blocking, and was found to be inversely related to small blood vessel diameters. Simulation of large artery stenosis indicates that BF became heterogeneous (increased relative dispersion) and gave multi-modal histograms. Total transmural BF as well as transmural BF heterogeneity were reduced by large artery stenosis, generating large patches of very low BF downstream. Blocking arteries at various orders showed that blocking larger arteries results in multi-modal BF histograms and large patches of low BF, whereas blocking smaller arteries augments relative dispersion and fractal dimension. Transmural heterogeneity was also affected. Finally, augmented aortic pressure in the presence of blood vessel blocking had differential effects on BF heterogeneity as well as transmural BF. Improved aortic blood pressure may improve total BF. Stress and therapy may be effective if they dilate small vessels. The observed complex BF distributions (multi-modal BF histograms) may indicate existing large vessel stenosis. The intuitive BF heterogeneity measures used here can be readily applied in clinical studies. Further development of the model and methods will permit personalized assessment of patient BF status.
NASA Astrophysics Data System (ADS)
Helman, E. Udi
This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC) but, due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot-competition-with-arbitrage models. This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using a DC load flow approximation). Chapter 9 shows the price results. In contrast to prior market power simulations of these markets, much greater variability in price-cost margins (PCMs) is found when using a realistic model of hourly conditions on such a large network. Chapter 10 shows that the conventional concentration indices (HHIs) are poorly correlated with PCMs. Finally, Chapter 11 proposes applying the simulation models to merger analysis and provides two large-scale merger examples. (Abstract shortened by UMI.)
An increase in aerosol burden due to the land-sea warming contrast
NASA Astrophysics Data System (ADS)
Hassan, T.; Allen, R.; Randles, C. A.
2017-12-01
Climate models simulate an increase in most aerosol species in response to warming, particularly over the tropics and Northern Hemisphere midlatitudes. This increase in aerosol burden is related to a decrease in wet removal, primarily due to reduced large-scale precipitation. Here, we show that the increase in aerosol burden, and the decrease in large-scale precipitation, is related to a robust climate change phenomenon: the land-sea warming contrast. Idealized simulations with two state-of-the-art climate models, the National Center for Atmospheric Research Community Atmosphere Model version 5 (NCAR CAM5) and the Geophysical Fluid Dynamics Laboratory Atmospheric Model 3 (GFDL AM3), show that muting the land-sea warming contrast negates the increase in aerosol burden under warming. This is related to smaller decreases in near-surface relative humidity over land and, in turn, smaller decreases in large-scale precipitation over land, especially in the NH midlatitudes. Furthermore, additional idealized simulations with an enhanced land-sea warming contrast lead to the opposite result: larger decreases in relative humidity over land, larger decreases in large-scale precipitation, and larger increases in aerosol burden. Our results, which relate the increase in aerosol burden to the robust climate projection of enhanced land warming, add confidence that a warmer world will be associated with a larger aerosol burden.
Gravity versus radiation models: on the importance of scale and heterogeneity in commuting flows.
Masucci, A Paolo; Serras, Joan; Johansson, Anders; Batty, Michael
2013-08-01
We test the recently introduced radiation model against the gravity model for the system composed of England and Wales, both for commuting patterns and for public transportation flows. The analysis is performed both at macroscopic scales, i.e., at the national scale, and at microscopic scales, i.e., at the city level. It is shown that the thermodynamic limit assumption of the original radiation model significantly underestimates the commuting flows for large cities. We then generalize the radiation model, introducing the correct normalization factor for finite systems. We show that even though the gravity model has better overall performance, the parameter-free radiation model gives competitive results, especially at large scales.
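For reference, the standard radiation-model flux and a common form of the finite-system normalization are written out below; this follows the usual formulation in the literature, and the paper's exact notation and correction factor may differ.

```latex
% Radiation-model mean flux from i to j with a finite-system normalization;
% m_i, n_j are origin/destination populations, s_ij the population inside the
% circle of radius r_ij around i (excluding i and j), T_i the trips leaving i,
% and M the total population. The last factor restores normalization when the
% thermodynamic limit (M -> infinity) does not hold.
\begin{equation}
\langle T_{ij} \rangle = T_i\,
  \frac{m_i n_j}{(m_i + s_{ij})(m_i + n_j + s_{ij})}
  \left( 1 - \frac{m_i}{M} \right)^{-1}
\end{equation}
```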
Dynamics and control of three-body tethered system in large elliptic orbits
NASA Astrophysics Data System (ADS)
Shi, Gefei; Zhu, Zhanxia; Zhu, Zheng H.
2018-03-01
This paper investigates the dynamic characteristics of a three-body tethered satellite system in large elliptic orbits and the control strategy to suppress the libration of the system during orbital transfer. The system is modeled by a two-piece dumbbell model in the domain of true anomaly. The model consists of one main satellite and two subsatellites connected by two straight, massless, and inextensible tethers. Two control strategies based on sliding mode control are developed to drive the libration to the zero state and the steady state, respectively. The results of numerical simulations show that the proposed control scheme performs well in controlling the libration motion of a three-body tethered satellite system in an elliptic orbit with large eccentricity using limited control inputs. Furthermore, the Hamiltonians in both states are examined, showing that less control input is required to bring the libration motion to the steady state than to the zero state.
Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.
Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S
2016-09-26
In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication.
NASA Astrophysics Data System (ADS)
Vermeulen, A.; Verheggen, B.; Pieterse, G.; Haszpra, L.
2007-12-01
Tall towers allow us to observe the integrated influence of carbon exchange processes from large areas on the concentrations of CO2. The signal received shows large variability at diurnal and synoptic timescales. The question remains how high the resolution and how accurate transport models need to be in order to discriminate the relevant source terms from the atmospheric signal. We examine the influence of the resolution of (ECMWF) meteorological fields and of anthropogenic and biogenic fluxes when going from resolutions of 2° to 0.2° lat-lon, using a simple Lagrangian 2D transport model. Model results are compared to other Eulerian model results and to observations at the CHIOTTO/CarboEurope tall tower network in Europe. Biogenic fluxes taken into account are from the FACEM model (Pieterse et al, 2006). Results show that the relative influence of the different CO2 exchange processes is very different at each tower and that higher model resolution clearly pays off in better model performance.
Downscaling GLOF Hazards: An in-depth look at the Nepal Himalaya
NASA Astrophysics Data System (ADS)
Rounce, D.; McKinney, D. C.; Lala, J.
2016-12-01
The Nepal Himalaya house a large number of glacial lakes that pose a flood hazard to downstream communities and infrastructure. The modeling of the entire process chain of these glacial lake outburst floods (GLOFs) has been advancing rapidly in recent years. The most common cause of failure is mass movement entering the glacial lake, which triggers a tsunami-like wave that breaches the terminal moraine and causes the ensuing downstream flood. Unfortunately, modeling the avalanche, the breach of the moraine, and the downstream flood requires a large amount of site-specific information and can be very labor-intensive. Therefore, these detailed models need to be paired with large-scale hazard assessments that identify the glacial lakes that are the biggest threat and the triggering events that threaten them. This study discusses the merger of a large-scale, remotely based hazard assessment with more detailed GLOF models to show how GLOF hazard modeling can be downscaled in the Nepal Himalaya.
Large-Scale Simulation of Multi-Asset Ising Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
2017-03-01
We perform a large-scale simulation of an Ising-based financial market model that includes 300 asset time series. The financial system simulated by the model shows a fat-tailed return distribution and volatility clustering, and exhibits unstable periods indicated by the volatility index, measured as the average of absolute returns. Moreover, we determine that the cumulative risk fraction, which measures the system risk, changes during high-volatility periods. We also calculate the inverse participation ratio (IPR) and its higher-power version, IPR6, from the absolute-return cross-correlation matrix. Finally, we show that the IPR and IPR6 also change during high-volatility periods.
NASA Astrophysics Data System (ADS)
Diokhane, Aminata Mbow; Jenkins, Gregory S.; Manga, Noel; Drame, Mamadou S.; Mbodji, Boubacar
2016-04-01
The Sahara desert transports large quantities of dust over the Sahelian region during the Northern Hemisphere winter and spring seasons (December-April). In episodic events, high dust concentrations are found at the surface, negatively impacting respiratory health. Bacterial meningitis in particular is known to affect populations living in the Sahelian zone, otherwise known as the meningitis belt. During the winter and spring of 2012, suspected meningitis cases (SMCs) were three times higher than in 2013. We show higher surface particulate matter concentrations at Dakar, Senegal and elevated atmospheric dust loading over Senegal for the period 1 January-31 May during 2012 relative to 2013. We analyze simulated particulate matter over Senegal from the Weather Research and Forecasting (WRF) model during 2012 and 2013. The results show higher simulated dust concentrations during the winter season of 2012 for Senegal. The WRF model correctly captures the large dust events from 1 January-31 March but shows less skill in the simulated dust concentrations during April and May. The results also show that the boundary conditions are the key factor for correctly simulating large dust events, while initial conditions are less important.
Projection-free approximate balanced truncation of large unstable systems
NASA Astrophysics Data System (ADS)
Flinois, Thibault L. B.; Morgans, Aimee S.; Schmid, Peter J.
2015-08-01
In this article, we show that the projection-free, snapshot-based, balanced truncation method can be applied directly to unstable systems. We prove that even for unstable systems, the unmodified balanced proper orthogonal decomposition algorithm theoretically yields a converged transformation that balances the Gramians (including the unstable subspace). We then apply the method to a spatially developing unstable system and show that it results in reduced-order models of similar quality to the ones obtained with existing methods. Due to the unbounded growth of unstable modes, a practical restriction on the final impulse response simulation time appears, which can be adjusted depending on the desired order of the reduced-order model. Recommendations are given to further reduce the cost of the method if the system is large and to improve the performance of the method if it does not yield acceptable results in its unmodified form. Finally, the method is applied to the linearized flow around a cylinder at Re = 100 to show that it actually is able to accurately reproduce impulse responses for more realistic unstable large-scale systems in practice. The well-established approximate balanced truncation numerical framework therefore can be safely applied to unstable systems without any modifications. Additionally, balanced reduced-order models can readily be obtained even for large systems, where the computational cost of existing methods is prohibitive.
Multilingual Twitter Sentiment Classification: The Role of Human Annotators
Mozetič, Igor; Grčar, Miha; Smailović, Jasmina
2016-01-01
What are the limits of automated Twitter sentiment classification? We analyze a large set of manually labeled tweets in different languages, use them as training data, and construct automated classification models. It turns out that the quality of classification models depends much more on the quality and size of training data than on the type of the model trained. Experimental results indicate that there is no statistically significant difference between the performance of the top classification models. We quantify the quality of training data by applying various annotator agreement measures, and identify the weakest points of different datasets. We show that the model performance approaches the inter-annotator agreement when the size of the training set is sufficiently large. However, it is crucial to regularly monitor the self- and inter-annotator agreements since this improves the training datasets and consequently the model performance. Finally, we show that there is strong evidence that humans perceive the sentiment classes (negative, neutral, and positive) as ordered.
Chalise, D. R.; Haj, Adel E.; Fontaine, T.A.
2018-01-01
The Hydrological Simulation Program-Fortran (HSPF) [Hydrological Simulation Program Fortran version 12.2 (Computer software). USEPA, Washington, DC] and the precipitation runoff modeling system (PRMS) [Precipitation Runoff Modeling System version 4.0 (Computer software). USGS, Reston, VA] models are semidistributed, deterministic hydrological tools for simulating the impacts of precipitation, land use, and climate on basin hydrology and streamflow. Both models have been applied independently to many watersheds across the United States. This paper reports the statistical results assessing various temporal (daily, monthly, and annual) and spatial (small versus large watershed) scale biases in HSPF and PRMS simulations using two watersheds in the Black Hills, South Dakota. The Nash-Sutcliffe efficiency (NSE), Pearson correlation coefficient (r), and coefficient of determination (R²) statistics for the daily, monthly, and annual flows were used to evaluate the models' performance. Results from the HSPF models showed that HSPF consistently simulated the annual flows for both large and small basins better than the monthly and daily flows, and simulated flows for the small watershed better than flows for the large watershed. In comparison, the PRMS model results show that PRMS simulated the monthly flows for both the large and small watersheds better than the daily and annual flows, and the range of statistical error in the PRMS models was greater than that in the HSPF models. Moreover, it can be concluded that the statistical error in the HSPF and PRMS daily, monthly, and annual flow estimates for watersheds in the Black Hills was influenced by both temporal and spatial scale variability.
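As a hedged sketch, the three statistics quoted above can be computed as follows; obs and sim are placeholder arrays standing in for gauged and simulated streamflow at any of the three temporal resolutions.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([2.1, 3.4, 5.0, 4.2, 2.8])   # placeholder observed flows
sim = np.array([2.4, 3.1, 4.6, 4.5, 2.5])   # placeholder simulated flows
r = float(np.corrcoef(obs, sim)[0, 1])      # Pearson correlation coefficient
print(f"NSE={nash_sutcliffe(obs, sim):.3f}  r={r:.3f}  R^2={r**2:.3f}")
```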
Xavier, Prince K.; Petch, Jon C.; Klingaman, Nicholas P.; ...
2015-05-26
We present an analysis of diabatic heating and moistening processes from 12 to 36 h lead time forecasts from 12 Global Circulation Models as part of the "Vertical structure and physical processes of the Madden-Julian Oscillation (MJO)" project. A lead time of 12–36 h is chosen to constrain the large-scale dynamics and thermodynamics to be close to observations while avoiding being too close to the initial spin-up of the models as they adjust to being driven from the Years of Tropical Convection (YOTC) analysis. A comparison of the vertical velocity and rainfall with the observations and YOTC analysis suggests that the phases of convection associated with the MJO are constrained in most models at this lead time although the rainfall in the suppressed phase is typically overestimated. Although the large-scale dynamics is reasonably constrained, moistening and heating profiles have large intermodel spread. In particular, there are large spreads in convective heating and moistening at midlevels during the transition to active convection. Radiative heating and cloud parameters have the largest relative spread across models at upper levels during the active phase. A detailed analysis of time step behavior shows that some models show strong intermittency in rainfall and differences in the precipitation and dynamics relationship between models. In conclusion, the wealth of model outputs archived during this project is a very valuable resource for model developers beyond the study of the MJO. Additionally, the findings of this study can inform the design of process model experiments, and inform the priorities for field experiments and future observing systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhan, Yiduo; Zheng, Qipeng P.; Wang, Jianhui
Power generation expansion planning needs to deal with future uncertainties carefully, given that the invested generation assets will be in operation for a long time. Many stochastic programming models have been proposed to tackle this challenge. However, most previous works assume predetermined future uncertainties (i.e., fixed random outcomes with given probabilities). In several recent studies of generation assets' planning (e.g., thermal versus renewable), new findings show that the investment decisions could affect the future uncertainties as well. To this end, this paper proposes a multistage decision-dependent stochastic optimization model for long-term large-scale generation expansion planning, where large amounts of wind power are involved. In the decision-dependent model, the future uncertainties are not only affecting but also affected by the current decisions. In particular, the probability distribution function is determined by not only input parameters but also decision variables. To deal with the nonlinear constraints in our model, a quasi-exact solution approach is then introduced to reformulate the multistage stochastic investment model to a mixed-integer linear programming model. The wind penetration, investment decisions, and the optimality of the decision-dependent model are evaluated in a series of multistage case studies. The results show that the proposed decision-dependent model provides effective optimization solutions for long-term generation expansion planning.
Analysis and modeling of subgrid scalar mixing using numerical data
NASA Technical Reports Server (NTRS)
Girimaji, Sharath S.; Zhou, YE
1995-01-01
Direct numerical simulations (DNS) of passive scalar mixing in isotropic turbulence are used to study, analyze, and subsequently model the role of small (subgrid) scales in the mixing process. In particular, we attempt to model the dissipation of the large-scale (supergrid) scalar fluctuations caused by the subgrid scales by decomposing it into two parts: (1) the effect due to the interaction among the subgrid scales; and (2) the effect due to interaction between the supergrid and the subgrid scales. Model comparisons with DNS data show good agreement. This model is expected to be useful in large eddy simulations of scalar mixing and reaction.
Weak Hydrological Sensitivity to Temperature Change over Land, Independent of Climate Forcing
NASA Technical Reports Server (NTRS)
Samset, B. H.; Myhre, G.; Forster, P. M.; Hodnebrog, O.; Andrews, T.; Boucher, O.; Faluvegi, G.; Flaeschner, D.; Kasoar, M.; Kharin, V.;
2018-01-01
We present the global and regional hydrological sensitivity (HS) to surface temperature changes, for perturbations to CO2, CH4, sulfate and black carbon concentrations, and solar irradiance. Based on results from ten climate models, we show how modeled global mean precipitation increases by 2-3% per kelvin of global mean surface warming, independent of driver, when the effects of rapid adjustments are removed. Previously reported differences in response between drivers are therefore mainly ascribable to rapid atmospheric adjustment processes. All models show a sharp contrast in behavior over land and over ocean, with a strong surface temperature-driven (slow) ocean HS of 3-5%/K, while the slow land HS is only 0-2%/K. Separating the response into convective and large-scale cloud processes, we find larger inter-model differences, in particular over land regions. Large-scale precipitation changes are most relevant at high latitudes, while the equatorial HS is dominated by convective precipitation changes. Black carbon stands out as the driver with the largest inter-model slow HS variability, and also the strongest contrast between a weak land and strong sea response. We identify a particular need for model investigations and observational constraints on convective precipitation in the Arctic, and large-scale precipitation around the Equator.
A Wetter Future For California?
NASA Astrophysics Data System (ADS)
Luptowitz, R.; Allen, R.
2016-12-01
Future California (CA) precipitation projections, including those from the most recent Coupled Model Intercomparison Project (CMIP5), remain uncertain. This uncertainty is related to several factors, including relatively large natural variability, model shortcomings, and the fact that CA lies within a transition zone, where mid-latitude regions are expected to become wetter and subtropical regions drier. Here, we use the Community Earth System Model (CESM) Large Ensemble Project driven by the business-as-usual scenario, and find a robust increase in CA precipitation. This implies CMIP5 model differences are the dominant cause of the large range of future CA precipitation projections. The boreal winter season, when most of the CA precipitation increase occurs, is associated with changes in the mean circulation reminiscent of an El Niño teleconnection, including a southeastward shift of the upper-level winds, an increase in storm track activity in the east Pacific, and an increase in CA moisture convergence. We further show that warming of tropical eastern Pacific sea surface temperatures, a robust feature in all models, accounts for these changes. Models that better simulate El Niño-CA precipitation teleconnections, including CESM, tend to yield larger and more consistent increases in CA precipitation. Our results show that California will become wetter in a warmer world.
Kosmopoulos, Victor; Luedke, Colten; Nana, Arvind D
2015-01-01
A smaller humerus in some patients makes the use of a large fragment fixation plate difficult. Dual small fragment plate constructs have been suggested as an alternative. This study compares the biomechanical performance of three single and one dual plate construct for mid-diaphyseal humeral fracture fixation. Five humeral shaft finite element models (1 intact and 4 fixation) were loaded in torsion, compression, posterior-anterior (PA) bending, and lateral-medial (LM) bending. A comminuted fracture was simulated by a 1-cm gap. Fracture fixation was modelled by: (A) 4.5-mm 9-hole large fragment plate (wide), (B) 4.5-mm 9-hole large fragment plate (narrow), (C) 3.5-mm 9-hole small fragment plate, and (D) one 3.5-mm 9-hole small fragment plate and one 3.5-mm 7-hole small fragment plate. Model A showed the best outcomes in torsion and PA bending, whereas Model D outperformed the others in compression and LM bending. Stress concentrations were located near and around the unused screw holes for each of the single plate models and at the neck of the screws just below the plates for all the models studied. Other than in PA bending, Model D showed the best overall screw-to-screw load sharing characteristics. The results support using a dual small fragment locking plate construct as an alternative in cases where crutch weight-bearing (compression) tolerance may be important and where anatomy limits the size of the humerus bone segment available for large fragment plate fixation.
Why large cells dominate estuarine phytoplankton
Cloern, James E.
2018-01-01
Surveys across the world oceans have shown that phytoplankton biomass and production are dominated by small cells (picoplankton) where nutrient concentrations are low, but large cells (microplankton) dominate when nutrient-rich deep water is mixed to the surface. I analyzed phytoplankton size structure in samples collected over 25 yr in San Francisco Bay, a nutrient-rich estuary. Biomass was dominated by large cells because their biomass selectively grew during blooms. Large-cell dominance appears to be a characteristic of ecosystems at the land–sea interface, and these places may therefore function as analogs to oceanic upwelling systems. Simulations with a size-structured NPZ model showed that runs of positive net growth rate persisted long enough for biomass of large, but not small, cells to accumulate. Model experiments showed that small cells would dominate in the absence of grazing, at lower nutrient concentrations, and at elevated (+5°C) temperatures. Underlying these results are two fundamental scaling laws: (1) large cells are grazed more slowly than small cells, and (2) grazing rate increases with temperature faster than growth rate. The model experiments suggest testable hypotheses about phytoplankton size structure at the land–sea interface: (1) anthropogenic nutrient enrichment increases cell size; (2) this response varies with temperature and only occurs at mid-high latitudes; (3) large-cell blooms can only develop when temperature is below a critical value, around 15°C; (4) cell size diminishes along temperature gradients from high to low latitudes; and (5) large-cell blooms will diminish or disappear where planetary warming increases temperature beyond their critical threshold.
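A toy numerical sketch of the two scaling laws is given below; all rate constants, Q10 values, and the size exponent are illustrative assumptions, chosen only so that the sign change in the net growth of large cells lands near the ~15°C threshold reported.

```python
import numpy as np

def growth(T, mu0=0.8, Q10=1.9):
    """Phytoplankton growth rate (1/d); moderate temperature dependence."""
    return mu0 * Q10 ** ((T - 10.0) / 10.0)

def grazing(T, size_um, g0=1.3, Q10=2.8):
    """Grazing loss (1/d): slower on large cells, steeper temperature dependence."""
    size_factor = (size_um / 5.0) ** -0.3   # scaling law 1: large cells grazed more slowly
    return g0 * Q10 ** ((T - 10.0) / 10.0) * size_factor  # scaling law 2: Q10(grazing) > Q10(growth)

for T in (10.0, 15.0, 20.0):
    net_large = growth(T) - grazing(T, size_um=50.0)
    net_small = growth(T) - grazing(T, size_um=2.0)
    print(f"T={T:4.1f} C  net growth: large={net_large:+.2f}/d  small={net_small:+.2f}/d")
# With these illustrative values, the net growth of large cells changes sign
# between 15 and 20 C, mirroring the critical-temperature behavior described.
```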
Is There Any Real Observational Contradiction to the LCDM Model?
NASA Astrophysics Data System (ADS)
Ma, Yin-Zhe
2011-01-01
In this talk, I question two apparent observational contradictions to LCDM cosmology: the lack of large-angle correlations in the cosmic microwave background, and the very large bulk flow of galaxy peculiar velocities. On the super-horizon scale, Copi et al. (2009) have argued that the lack of large angular correlations of the CMB temperature field provides strong evidence against the standard, statistically isotropic, LCDM cosmology. I argue that the claimed discrepancy is due to a sub-optimal estimator of the low-l multipoles and to a posteriori statistics, which exaggerate the statistical significance. On galactic scales, Watkins et al. (2008) show that the very large bulk flow prefers a very large density fluctuation, which seems to contradict the LCDM model. I show that these results are due to an underestimation of the small-scale velocity dispersion and an arbitrary way of combining catalogues. With an appropriate way of combining catalogue data, and treating the small-scale velocity dispersion as a free parameter, the peculiar velocity field provides unconvincing evidence against LCDM cosmology.
Chen, Kuan-Chou; Chuang, Chao-Ming; Lin, Li-Yun; Chiu, Wen-Ta; Wang, Hui-Er; Hsieh, Chiu-Lan; Tsai, Tsuimin; Peng, Robert Y
2010-01-01
Guava [Psidium guajava L. (Myrtaceae)] budding leaf extract (PE) has shown substantial bioactivities. Previously, we found seven major compounds in PE: gallic acid, catechin, epicatechin, rutin, quercetin, naringenin, and kaempferol. PE showed a potentially active antiglycative effect in an LDL (low-density lipoprotein) mimic biomodel, which can be attributed to its large content of polyphenolics. The glycation and antiglycative reactions showed characteristic, distinct four-phase kinetic patterns. In the presence of PE, the kinetic coefficients were 0.000438, 0.000060, 0.000, and -0.0001354 ABS-mL/mg-min for phases 1 to 4, respectively. Computer simulation supported a dose-dependent inhibition model. In conclusion, PE contains a large amount of polyphenolics, whose antiglycative bioactivity fits the inhibition model.
Woo, Jiyoung; Chen, Hsinchun
2016-01-01
As social media has become more prevalent, its influence on business, politics, and society has become significant. Due to easy access and interaction between large numbers of users, information diffuses on the web in an epidemic style. Understanding the mechanisms of information diffusion through these new publication methods is important for political and marketing purposes. Among social media, web forums, where people in online communities disseminate and receive information, provide a good environment for examining information diffusion. In this paper, we model topic diffusion in web forums using the susceptible-infected-recovered (SIR) epidemiological model, frequently used in previous research to analyze both disease outbreaks and knowledge diffusion. The model was evaluated on a large longitudinal dataset from the web forum of a major retail company and from a general political discussion forum. The fitting results showed that the SIR model is a plausible description of the diffusion process of a topic. This research shows that epidemic models can extend their application to topic discussion on the web, particularly in social media such as web forums.
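As an illustrative sketch of the fitting procedure, the snippet below integrates the SIR equations and fits the transmission and recovery rates to a noisy synthetic infection curve with scipy; the forum data, parameter values, and initial conditions are placeholders, not the paper's.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def sir(y, t, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

def infected(t, beta, gamma, i0):
    """Fraction of 'infected' (actively posting) users over time."""
    y0 = [1.0 - i0, i0, 0.0]
    return odeint(sir, y0, t, args=(beta, gamma))[:, 1]

t = np.linspace(0.0, 30.0, 31)                  # days since the topic appeared
rng = np.random.default_rng(2)
observed = infected(t, 0.6, 0.2, 0.01) + rng.normal(0.0, 0.005, t.size)

(beta, gamma, i0), _ = curve_fit(infected, t, observed, p0=[0.5, 0.1, 0.01],
                                 bounds=([0.0, 0.0, 1e-4], [5.0, 5.0, 0.1]))
print(f"beta={beta:.2f}  gamma={gamma:.2f}  R0={beta / gamma:.2f}")
```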
Daytime sky polarization calibration limitations
NASA Astrophysics Data System (ADS)
Harrington, David M.; Kuhn, Jeffrey R.; Ariste, Arturo López
2017-01-01
The daytime sky has recently been demonstrated as a useful calibration tool for deriving polarization cross-talk properties of large astronomical telescopes. The Daniel K. Inouye Solar Telescope and other large telescopes under construction can benefit from precise polarimetric calibration of large mirrors. Several atmospheric phenomena and instrumental errors potentially limit the technique's accuracy. At the 3.67-m AEOS telescope on Haleakala, we performed a large observing campaign with the HiVIS spectropolarimeter to identify limitations and develop algorithms for extracting consistent calibrations. Effective sampling of the telescope optical configurations and filtering of data for several derived parameters provide robustness to the derived Mueller matrix calibrations. Second-order scattering models of the sky show that this method is relatively insensitive to multiple-scattering in the sky, provided calibration observations are done in regions of high polarization degree. The technique is also insensitive to assumptions about telescope-induced polarization, provided the mirror coatings are highly reflective. Zemax-derived polarization models show agreement between the functional dependence of polarization predictions and the corresponding on-sky calibrations.
NASA Astrophysics Data System (ADS)
Hoch, Jannis M.; Neal, Jeffrey C.; Baart, Fedor; van Beek, Rens; Winsemius, Hessel C.; Bates, Paul D.; Bierkens, Marc F. P.
2017-10-01
Here we present GLOFRIM, a globally applicable computational framework for integrated hydrological-hydrodynamic modelling. GLOFRIM facilitates spatially explicit coupling of hydrodynamic and hydrologic models and allows an ensemble of models to be coupled. It currently encompasses the global hydrological model PCR-GLOBWB as well as the hydrodynamic models Delft3D Flexible Mesh (DFM; solving the full shallow-water equations and allowing for spatially flexible meshing) and LISFLOOD-FP (LFP; solving the local inertia equations and running on regular grids). The main advantages of the framework are its open and free access, its global applicability, its versatility, and its extensibility with other hydrological or hydrodynamic models. Before applying GLOFRIM to an actual test case, we benchmarked both DFM and LFP for a synthetic test case. Results show that for sub-critical flow conditions, the discharge response to the same input signal is near-identical for both models, which agrees with previous studies. We subsequently applied the framework to the Amazon River basin, both to test the framework thoroughly and to perform a first-ever large-scale benchmark of flexible and regular grids. Both DFM and LFP produce comparable results in terms of simulated discharge, with LFP exhibiting slightly higher accuracy as expressed by a Kling-Gupta efficiency of 0.82 compared to 0.76 for DFM. However, benchmarking inundation extent between DFM and LFP over the entire study area yields a critical success index of 0.46, indicating that the models disagree as often as they agree. Differences between models in both simulated discharge and inundation extent are to a large extent attributable to the gridding techniques employed. In fact, the results show that both the numerical scheme of the inundation model and the gridding technique can contribute to deviations in simulated inundation extent, as we control for model forcing and boundary conditions. This study shows that the presented computational framework is robust and widely applicable. GLOFRIM is designed to be open access and easily extendable, and thus we hope that other large-scale hydrological and hydrodynamic models will be added. Eventually, more locally relevant processes would be captured, and more robust model intercomparison, benchmarking, and ensemble simulations of flood hazard at large scales would become possible.
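For readers unfamiliar with the two benchmark scores quoted (a Kling-Gupta efficiency of 0.82/0.76 and a critical success index of 0.46), here is a minimal sketch of both, using their standard definitions; the arrays are placeholders for observed/simulated discharge series and binary flood-extent maps.

```python
import numpy as np

def kge(obs, sim):
    """Kling-Gupta efficiency: 1 is perfect; combines correlation, variability, bias."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

def csi(obs_wet, sim_wet):
    """Critical success index for binary maps: hits / (hits + misses + false alarms)."""
    hits = np.sum(obs_wet & sim_wet)
    misses = np.sum(obs_wet & ~sim_wet)
    false_alarms = np.sum(~obs_wet & sim_wet)
    return hits / (hits + misses + false_alarms)

obs = np.array([3.1, 4.0, 6.2, 5.1, 3.5])   # placeholder discharge series
sim = np.array([2.9, 4.4, 5.8, 5.5, 3.2])
print("KGE:", round(kge(obs, sim), 2))
```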
Edifice strength and magma transfer modulation at Piton de la Fournaise volcano
NASA Astrophysics Data System (ADS)
Peltier, A.; Got, J.; Staudacher, T.; Kowalski, P.; Boissier, P.
2013-12-01
From 2003 to 2007, eruptive activity at Piton de la Fournaise followed cycles comprising many summit/proximal eruptions and finishing with a distal eruption. GPS measurements revealed striking asymmetric deformation between its western and eastern flanks. Horizontal displacements recorded during inter-distal periods showed a characteristic amplitude at the top of the eastern flank. Displacements recorded at the base of the summit cone showed a bimodal distribution, with low amplitudes during inter-distal periods and large ones during distal eruptions. To account for the displacement asymmetry, characteristic amplitude, and large flank displacement, we modeled the volcanic edifice using a Drucker-Prager elasto-plastic rheology. Friction angles of 15° and >30° were needed to model the displacements during distal eruptions and inter-distal periods, respectively; this change shows that strain weakening occurred during distal events. Large plastic displacement in the eastern flank during distal eruptions relaxed the horizontal elastic stress accumulated during inter-distal periods; it triggered summit deflation, horizontal magma transfer, and distal flank eruption, and reset the eruptive cycle. Our elasto-plastic models also show that simple source geometries may induce large eastern-flank displacements that would require a complex geometry to explain in a linearly elastic edifice. Magma supply is often thought to control a volcano's eruptive activity, with surface deformation reflecting changes in magma supply rate and the volcano's response being linear. Our results provide evidence that, at Piton de la Fournaise, the time-space discretization of magma transfer may result from the edifice's non-linear response rather than from changes in magma supply.
The three-point function as a probe of models for large-scale structure
NASA Astrophysics Data System (ADS)
Frieman, Joshua A.; Gaztanaga, Enrique
1994-04-01
We analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, Rp ≈ 20/h Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes QJ at large scales, r ≳ Rp. Current observational constraints on the three-point amplitudes Q3 and S3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
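The standard definitions of the amplitudes referred to above are, in the usual conventions (which may differ in detail from the paper's):

```latex
% Standard hierarchical-amplitude definitions:
\begin{equation}
Q_3 = \frac{\zeta_3(r_{12}, r_{23}, r_{31})}
           {\xi(r_{12})\xi(r_{23}) + \xi(r_{23})\xi(r_{31}) + \xi(r_{31})\xi(r_{12})},
\qquad
S_3 = \frac{\langle \delta^3 \rangle}{\langle \delta^2 \rangle^{2}},
\end{equation}
% where xi and zeta_3 are the two- and three-point correlation functions of the
% density contrast delta. In hierarchical models Q_3 is roughly constant, while
% rapidly scale-dependent bias drives Q_3 down at large separations.
```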
Exposing earth surface process model simulations to a large audience
NASA Astrophysics Data System (ADS)
Overeem, I.; Kettner, A. J.; Borkowski, L.; Russell, E. L.; Peddicord, H.
2015-12-01
The Community Surface Dynamics Modeling System (CSDMS) represents a diverse group of >1300 scientists who develop and apply numerical models to better understand the Earth's surface. CSDMS has a mandate to make the public more aware of model capabilities and therefore started sharing state-of-the-art surface process modeling results with large audiences. One platform to reach audiences outside the science community is through museum displays on 'Science on a Sphere' (SOS). Developed by NOAA, SOS is a giant globe, linked with computers and multiple projectors, that can display data and animations on a sphere. CSDMS has developed and contributed model simulation datasets for the SOS system since 2014, including hydrological processes, coastal processes, and human interactions with the environment. Model simulations of a hydrological and sediment transport model (WBM-SED) illustrate global river discharge patterns. WAVEWATCH III simulations have been specifically processed to show the impacts of hurricanes on ocean waves, with focus on hurricane Katrina and superstorm Sandy. A large world dataset of dams built over the last two centuries gives an impression of the profound influence of humans on water management. Given the exposure of SOS, CSDMS aims to contribute at least two model datasets a year, and will soon provide displays of global river sediment fluxes and changes of the sea-ice-free season along the Arctic coast. Over 100 facilities worldwide show these numerical model displays to an estimated 33 million people every year. Dataset storyboards and teacher follow-up materials associated with the simulations are developed to address common core science K-12 standards. CSDMS dataset documentation aims to make people aware that they are looking at numerical model results, that the underlying models have inherent assumptions and simplifications, and that their limitations are known. CSDMS contributions aim to familiarize large audiences with the use of numerical modeling as a tool for understanding environmental processes.
General framework for dynamic large deformation contact problems based on phantom-node X-FEM
NASA Astrophysics Data System (ADS)
Broumand, P.; Khoei, A. R.
2018-04-01
This paper presents a general framework for modeling dynamic large deformation contact-impact problems based on the phantom-node extended finite element method. The large sliding penalty contact formulation is presented based on a master-slave approach which is implemented within the phantom-node X-FEM and an explicit central difference scheme is used to model the inertial effects. The method is compared with conventional contact X-FEM; advantages, limitations and implementational aspects are also addressed. Several numerical examples are presented to show the robustness and accuracy of the proposed method.
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations fail to provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to filling this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume, and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Itô stochastic differential equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, which are the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model, with the spatial moments replacing the statistical moments, can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and for the first time shows the superiority of the Beta model over both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
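A minimal sketch of the moment-matching step is given below: given the first two concentration moments, the Beta parameters follow by the method of moments, and exceedance probabilities come directly from the Beta survival function; the numbers are illustrative, not values from the Cape Cod test.

```python
from scipy.stats import beta as beta_dist

def beta_params(mean, var):
    """Method-of-moments Beta parameters for concentration scaled to [0, 1];
    requires var < mean * (1 - mean)."""
    nu = mean * (1.0 - mean) / var - 1.0
    return mean * nu, (1.0 - mean) * nu   # (alpha, beta)

m, v = 0.2, 0.01                          # illustrative first two moments of C/C0
a, b = beta_params(m, v)
threshold = 0.5
print(f"alpha={a:.2f} beta={b:.2f}  P(C/C0 > {threshold}) = {beta_dist.sf(threshold, a, b):.4f}")
```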
A Single Column Model Ensemble Approach Applied to the TWP-ICE Experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davies, Laura; Jakob, Christian; Cheung, K.
2013-06-27
Single column models (SCMs) are useful testbeds for investigating the parameterisation schemes of numerical weather prediction and climate models. The usefulness of SCM simulations is limited, however, by the accuracy of the best-estimate large-scale data prescribed. One method to address this uncertainty is to perform ensemble simulations of the SCM. This study first derives an ensemble of large-scale data for the Tropical Warm Pool International Cloud Experiment (TWP-ICE) based on an estimate of a possible source of error in the best-estimate product. These data are then used to carry out simulations with 11 SCMs and 2 cloud-resolving models (CRMs). Best-estimate simulations are also performed. All models show that moisture-related variables are close to observations and there are limited differences between the best-estimate and ensemble-mean values. The models, however, show different sensitivities to changes in the forcing, particularly when weakly forced. The ensemble simulations highlight important differences in the moisture budget between the SCMs and CRMs. Systematic differences are also apparent in the ensemble-mean vertical structure of cloud variables. The ensemble is further used to investigate relations between cloud variables and precipitation, identifying large differences between CRMs and SCMs. This study highlights that additional information can be gained by performing ensemble simulations, enhancing the information derived from models relative to the more traditional single best-estimate simulation.
Design and Application of an Ontology for Component-Based Modeling of Water Systems
NASA Astrophysics Data System (ADS)
Elag, M.; Goodall, J. L.
2012-12-01
Many Earth system modeling frameworks have adopted an approach of componentizing models so that a large model can be assembled by linking a set of smaller model components. These model components can then be more easily reused, extended, and maintained by a large group of model developers and end users. While there has been a notable increase in component-based model frameworks in the Earth sciences in recent years, there has been less work on creating framework-agnostic metadata and ontologies for model components. Well-defined model component metadata is needed, however, to facilitate sharing, reuse, and interoperability both within and across Earth system modeling frameworks. To address this need, we have designed an ontology for the water resources community, named the Water Resources Component (WRC) ontology, in order to advance the application of component-based modeling frameworks across water-related disciplines. Here we present the design of the WRC ontology and demonstrate its application for integration of model components used in watershed management. First we show how the watershed modeling system Soil and Water Assessment Tool (SWAT) can be decomposed into a set of hydrological and ecological components that adopt the Open Modeling Interface (OpenMI) standard. Then we show how the components can be used to estimate nitrogen losses from land to surface water for the Baltimore Ecosystem Study area. Results of this work are (i) a demonstration of how the WRC ontology advances the conceptual integration between components of water-related disciplines by handling the semantic and syntactic heterogeneity present when describing components from different disciplines, and (ii) an investigation of a methodology by which large models can be decomposed into a set of model components that can be well described by populating metadata according to the WRC ontology.
Simple Statistical Model to Quantify Maximum Expected EMC in Spacecraft and Avionics Boxes
NASA Technical Reports Server (NTRS)
Trout, Dawn H.; Bremner, Paul
2014-01-01
This study shows cumulative distribution function (CDF) comparisons of composite fairing electromagnetic field data obtained by 3D full-wave computational electromagnetic modeling and laboratory testing. Correlation between test and model data is shown. In addition, this presentation shows application of the power balance method and its extension to predict the variance and maximum expected mean of the E-field data. This is valuable for large-scale evaluations of transmission inside cavities.
NASA Technical Reports Server (NTRS)
Bardino, J.; Ferziger, J. H.; Reynolds, W. C.
1983-01-01
The physical bases of large eddy simulation and subgrid modeling are studied. A subgrid scale similarity model is developed that can account for system rotation. Large eddy simulations of homogeneous shear flows with system rotation were carried out. Apparently contradictory experimental results were explained. The main effect of rotation is to increase the transverse length scales in the rotation direction, and thereby decrease the rates of dissipation. Experimental results are shown to be affected by conditions at the turbulence producing grid, which make the initial states a function of the rotation rate. A two equation model is proposed that accounts for effects of rotation and shows good agreement with experimental results. In addition, a Reynolds stress model is developed that represents the turbulence structure of homogeneous shear flows very well and can account also for the effects of system rotation.
Carbon Dioxide Physiological Forcing Dominates Projected Eastern Amazonian Drying
NASA Astrophysics Data System (ADS)
Richardson, T. B.; Forster, P. M.; Andrews, T.; Boucher, O.; Faluvegi, G.; Fläschner, D.; Kasoar, M.; Kirkevåg, A.; Lamarque, J.-F.; Myhre, G.; Olivié, D.; Samset, B. H.; Shawki, D.; Shindell, D.; Takemura, T.; Voulgarakis, A.
2018-03-01
Future projections of east Amazonian precipitation indicate drying, but they are uncertain and poorly understood. In this study we analyze the Amazonian precipitation response to individual atmospheric forcings using a number of global climate models. Black carbon is found to drive reduced precipitation over the Amazon due to temperature-driven circulation changes, but the magnitude is uncertain. CO2 drives reductions in precipitation concentrated in the east, mainly due to a robustly negative, but highly variable in magnitude, fast response. We find that the physiological effect of CO2 on plant stomata is the dominant driver of the fast response due to reduced latent heating and also contributes to the large model spread. Using a simple model, we show that CO2 physiological effects dominate future multimodel mean precipitation projections over the Amazon. However, in individual models temperature-driven changes can be large, but due to little agreement, they largely cancel out in the model mean.
NASA Astrophysics Data System (ADS)
Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas
2010-05-01
In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields; however, because the driving large-scale fields are generally available at a much lower frequency than the model time step (e.g., 6-hourly analyses), with basic interpolation between the fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.
Lyashevska, Olga; Brus, Dick J; van der Meer, Jaap
2016-01-01
The objective of the study was to provide a general procedure for mapping species abundance when data are zero-inflated and spatially correlated counts. The bivalve species Macoma balthica was observed on a 500×500 m grid in the Dutch part of the Wadden Sea. In total, 66% of the 3451 counts were zeros. A zero-inflated Poisson mixture model was used to relate counts to environmental covariates. Two models were considered, one with relatively fewer covariates (model "small") than the other (model "large"). The models contained two processes: a Bernoulli (species prevalence) and a Poisson (species intensity, when the Bernoulli process predicts presence). The model was used to make predictions for sites where only environmental data are available. Predicted prevalences and intensities show that the model "small" predicts lower mean prevalence and higher mean intensity, than the model "large". Yet, the product of prevalence and intensity, which might be called the unconditional intensity, is very similar. Cross-validation showed that the model "small" performed slightly better, but the difference was small. The proposed methodology might be generally applicable, but is computer intensive.
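As a concrete illustration of the two-process structure described above, here is a minimal sketch of a zero-inflated Poisson log-likelihood fit by maximum likelihood; the design matrices and data are hypothetical stand-ins, and the study's spatial-correlation handling is omitted.

```python
# Minimal zero-inflated Poisson (ZIP) likelihood: a Bernoulli prevalence process
# and, conditional on presence, a Poisson intensity process. Xp, Xi and y are
# placeholder design matrices and counts, not the Macoma balthica data.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, gammaln

def zip_negloglik(params, Xp, Xi, y):
    k = Xp.shape[1]
    p = expit(Xp @ params[:k])           # prevalence: P(species present)
    lam = np.exp(Xi @ params[k:])        # intensity: mean count given presence
    logpois = y * np.log(lam) - lam - gammaln(y + 1)
    ll = np.where(y == 0,
                  np.log((1 - p) + p * np.exp(-lam)),  # structural + Poisson zeros
                  np.log(p) + logpois)
    return -ll.sum()

rng = np.random.default_rng(0)
Xp = Xi = np.column_stack([np.ones(200), rng.standard_normal(200)])
y = rng.poisson(2.0, 200) * (rng.random(200) > 0.66)  # zeros from both processes
fit = minimize(zip_negloglik, np.zeros(4), args=(Xp, Xi, y), method="BFGS")
print(fit.x)   # fitted prevalence and intensity coefficients
```

The product of the fitted prevalence and intensity corresponds to the unconditional intensity discussed in the abstract.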
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vakili, Masoud
1997-01-01
Data from the CCFR E770 Neutrino Deep Inelastic Scattering (DIS) experiment at Fermilab contain large Bjorken x, high Q^2 events. A comparison of the data with a model based on no nuclear effects at large x shows an excess of events in the data. Adding Fermi gas motion of the nucleons in the nucleus to the model does not explain the model's deficit. Adding a higher-momentum tail due to the formation of "quasi-deuterons" improves the agreement. Certain models based on "multi-quark clusters" and "few-nucleon correlations" predict an exponentially falling behavior for F_2 at large x, F_2 ~ e^{s(x - x_0)}. We measure s = 8.3 ± 0.8 for the best fit to our data. This corresponds to a value of F_2(x = 1, Q^2 > 50) ≈ 2 × 10^{-3} in neutrino DIS. These values agree with results from theoretical models and the SLAC E133 experiment but seem to differ from the result of the BCDMS experiment.
Topology of large-scale structure in seeded hot dark matter models
NASA Technical Reports Server (NTRS)
Beaky, Matthew M.; Scherrer, Robert J.; Villumsen, Jens V.
1992-01-01
The topology of the isodensity surfaces in seeded hot dark matter models, in which static seed masses provide the density perturbations in a universe dominated by massive neutrinos, is examined. When smoothed with a Gaussian window, the linear initial conditions in these models show no trace of non-Gaussian behavior for r0 equal to or greater than 5 Mpc (h = 1/2), except for very low seed densities, which show a shift toward isolated peaks. An approximate analytic expression is given for the genus curve expected in linear density fields from randomly distributed seed masses. The evolved models have a Gaussian topology for r0 = 10 Mpc, but show a shift toward a cellular topology with r0 = 5 Mpc; Gaussian models with an identical power spectrum show the same behavior.
An object-oriented forest landscape model and its representation of tree species
Hong S. He; David J. Mladenoff; Joel Boeder
1999-01-01
LANDIS is a forest landscape model that simulates the interaction of large landscape processes and forest successional dynamics at tree species level. We discuss how object-oriented design (OOD) approaches such as modularity, abstraction and encapsulation are integrated into the design of LANDIS. We show that using OOD approaches, model decisions (olden as model...
Self-Organized Percolation and Critical Sales Fluctuations
NASA Astrophysics Data System (ADS)
Weisbuch, Gérard; Solomon, Sorin
There is a discrepancy between the standard view of equilibrium through price adjustment in economics and the observation of large fluctuations in stock markets. We study here a simple model where agents' decisions depend not only upon their individual preferences but also upon information obtained from their neighbors in a social network. The model shows that information diffusion coupled to the adjustment process drives the system to criticality with large fluctuations, rather than converging smoothly to equilibrium.
Adaptive-Grid Methods for Phase Field Models of Microstructure Development
NASA Technical Reports Server (NTRS)
Provatas, Nikolas; Goldenfeld, Nigel; Dantzig, Jonathan A.
1999-01-01
In this work the authors show how the phase field model can be solved in a computationally efficient manner that opens a new large-scale simulational window on solidification physics. Our method uses a finite element, adaptive-grid formulation, and exploits the fact that the phase and temperature fields vary significantly only near the interface. We illustrate how our method allows efficient simulation of phase-field models in very large systems, and verify the predictions of solvability theory at intermediate undercooling. We then present new results at low undercoolings that suggest that solvability theory may not give the correct tip speed in that regime. We model solidification using the phase-field model used by Karma and Rappel.
Trending in Probability of Collision Measurements
NASA Technical Reports Server (NTRS)
Vallejo, J. J.; Hejduk, M. D.; Stamey, J. D.
2015-01-01
A simple model is proposed to predict the behavior of Probabilities of Collision (P(sub c)) for conjunction events. The model attempts to predict the location and magnitude of the peak P(sub c) value for an event by assuming the progression of P(sub c) values can be modeled to first order by a downward-opening parabola. To incorporate prior information from a large database of past conjunctions, the Bayes paradigm is utilized; and the operating characteristics of the model are established through a large simulation study. Though the model is simple, it performs well in predicting the temporal location of the peak (P(sub c)) and thus shows promise as a decision aid in operational conjunction assessment risk analysis.
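A toy sketch of the first-order idea: fit a downward-opening parabola to log-scale Pc values and read off the predicted peak location and magnitude. The numbers below are invented, and the Bayesian prior-updating described in the abstract is omitted.

```python
# Fit a quadratic to log10 Pc vs. time and predict the peak, per the
# downward-opening-parabola assumption. Data points are illustrative only.
import numpy as np

t = np.array([-5.0, -4.0, -3.0, -2.5, -2.0])    # days to closest approach (example)
log_pc = np.array([-7.2, -6.1, -5.4, -5.1, -5.0])

a, b, c = np.polyfit(t, log_pc, deg=2)           # quadratic fit
if a < 0:                                        # downward-opening: a peak exists
    t_peak = -b / (2 * a)                        # vertex location in time
    pc_peak = 10 ** np.polyval([a, b, c], t_peak)
    print(f"peak Pc ~ {pc_peak:.2e} at t = {t_peak:+.2f} d")
```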
The influence of material anisotropy on vibration at onset in a three-dimensional vocal fold model
Zhang, Zhaoyan
2014-01-01
Although vocal folds are known to be anisotropic, the influence of material anisotropy on vocal fold vibration remains largely unknown. Using a linear stability analysis, phonation onset characteristics were investigated in a three-dimensional anisotropic vocal fold model. The results showed that isotropic models had a tendency to vibrate in a swing-like motion, with vibration primarily along the superior-inferior direction. Anterior-posterior (AP) out-of-phase motion was also observed and large vocal fold vibration was confined to the middle third region along the AP length. In contrast, increasing anisotropy or increasing AP-transverse stiffness ratio suppressed this swing-like motion and allowed the vocal fold to vibrate in a more wave-like motion with strong medial-lateral motion over the entire medial surface. Increasing anisotropy also suppressed the AP out-of-phase motion, allowing the vocal fold to vibrate in phase along the entire AP length. Results also showed that such improvement in vibration pattern was the most effective with large anisotropy in the cover layer alone. These numerical predictions were consistent with previous experimental observations using self-oscillating physical models. It was further hypothesized that these differences may facilitate complete glottal closure in finite-amplitude vibration of anisotropic models as observed in recent experiments. PMID:24606284
Garrido, Luis Eduardo; Barrada, Juan Ramón; Aguasvivas, José Armando; Martínez-Molina, Agustín; Arias, Víctor B; Golino, Hudson F; Legaz, Eva; Ferrís, Gloria; Rojo-Moreno, Luis
2018-06-01
During the present decade a large body of research has employed confirmatory factor analysis (CFA) to evaluate the factor structure of the Strengths and Difficulties Questionnaire (SDQ) across multiple languages and cultures. However, because CFA can produce strongly biased estimations when the population cross-loadings differ meaningfully from zero, it may not be the most appropriate framework to model the SDQ responses. With this in mind, the current study sought to assess the factorial structure of the SDQ using the more flexible exploratory structural equation modeling approach. Using a large-scale Spanish sample composed of 67,253 youths aged between 10 and 18 years (M = 14.16, SD = 1.07), the results showed that CFA provided a severely biased and overly optimistic assessment of the underlying structure of the SDQ. In contrast, exploratory structural equation modeling revealed a generally weak factorial structure, including questionable indicators with large cross-loadings, multiple error correlations, and significant wording variance. A subsequent Monte Carlo study showed that sample sizes greater than 4,000 would be needed to adequately recover the SDQ loading structure. The findings from this study prevent recommending the SDQ as a screening tool and suggest caution when interpreting previous results in the literature based on CFA modeling.
Marketing Library and Information Services: Comparing Experiences at Large Institutions.
ERIC Educational Resources Information Center
Noel, Robert; Waugh, Timothy
This paper explores some of the similarities and differences between publicizing information services within the academic and corporate environments, comparing the marketing experiences of Abbot Laboratories (Illinois) and Indiana University. It shows some innovative online marketing tools, including an animated gif model of a large, integrated…
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low miss rate.
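A minimal sketch of the blockwise 2-D short-time Fourier analysis, under assumed block size and overlap: window each sub-block, take its 2-D spectrum, and collect per-block statistics from which sea and non-sea blocks could be separated. This illustrates the transform only, not the paper's detection rule.

```python
# Blockwise 2-D STFT: slide a window over sub-blocks, apply a 2-D Hann taper,
# and return the magnitude spectrum of each block. Sizes are assumptions.
import numpy as np

def block_stft2d(img, win=32, hop=16):
    """Return the magnitude spectrum of each windowed sub-block."""
    taper = np.outer(np.hanning(win), np.hanning(win))  # 2-D Hann window
    spectra = []
    for i in range(0, img.shape[0] - win + 1, hop):
        for j in range(0, img.shape[1] - win + 1, hop):
            block = img[i:i + win, j:j + win] * taper
            spectra.append(np.abs(np.fft.fft2(block)))
    return np.array(spectra)

img = np.random.default_rng(0).random((256, 256))       # stand-in for an image
print(block_stft2d(img).shape)                          # (n_blocks, 32, 32)
```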
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
NASA Astrophysics Data System (ADS)
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
NASA Astrophysics Data System (ADS)
Schmith, Johanne; Höskuldsson, Ármann; Holm, Paul Martin; Larsen, Guðrún
2018-04-01
Katla volcano in Iceland produces hazardous large explosive basaltic eruptions on a regular basis, but very little quantitative data for future hazard assessments exist. Here, details on fragmentation mechanism and eruption dynamics are derived from a study of deposit stratigraphy with detailed granulometry and grain morphology analysis, granulometric modeling, componentry, and the new quantitative regularity index model of fragmentation mechanism. We show that magma/water interaction is important in the ash generation process, but to a variable extent. By investigating the large explosive basaltic eruptions of 1755 and 1625, we document that eruptions of similar size and magma geochemistry can have very different fragmentation dynamics. Our models show that fragmentation in the 1755 eruption was a combination of magmatic degassing and magma/water interaction, with the most magma/water interaction at the beginning of the eruption. The fragmentation of the 1625 eruption was initially also a combination of both magmatic and phreatomagmatic processes, but magma/water interaction diminished progressively during the later stages of the eruption. However, intense magma/water interaction was reintroduced during the final stages of the eruption, dominating the fine fragmentation at the end. This detailed study of fragmentation changes documents that subglacial eruptions have highly variable interaction with the melt water, showing that the amount of and access to melt water change significantly during eruptions. While it is often difficult to reconstruct the progression of eruptions that have no quantitative observational record, this study shows that integrating field observations and granulometry with the new regularity index can form a coherent model of eruption evolution.
Conformation-controlled binding kinetics of antibodies
NASA Astrophysics Data System (ADS)
Galanti, Marta; Fanelli, Duccio; Piazza, Francesco
2016-01-01
Antibodies are large, extremely flexible molecules, whose internal dynamics is certainly key to their astounding ability to bind antigens of all sizes, from small hormones to giant viruses. In this paper, we build a shape-based coarse-grained model of IgG molecules and show that it can be used to generate 3D conformations in agreement with single-molecule Cryo-Electron Tomography data. Furthermore, we elaborate a theoretical model that can be solved exactly to compute the binding rate constant of a small antigen to an IgG in a prescribed 3D conformation. Our model shows that the antigen binding process is tightly related to the internal dynamics of the IgG. Our findings pave the way for further investigation of the subtle connection between the dynamics and the function of large, flexible multi-valent molecular machines.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
European Wintertime Windstorms and its Links to Large-Scale Variability Modes
NASA Astrophysics Data System (ADS)
Befort, D. J.; Wild, S.; Walz, M. A.; Knight, J. R.; Lockwood, J. F.; Thornton, H. E.; Hermanson, L.; Bett, P.; Weisheimer, A.; Leckebusch, G. C.
2017-12-01
Winter storms associated with extreme wind speeds and heavy precipitation are the most costly natural hazard in several European countries. Improved understanding and seasonal forecast skill for winter storms will thus help society, policy-makers and the (re)insurance industry to be better prepared for such events. We first assess the ability of three seasonal forecast ensemble suites to represent extra-tropical windstorms over the Northern Hemisphere: ECMWF System3, ECMWF System4 and GloSea5. Our results show significant skill for inter-annual variability of windstorm frequency over parts of Europe in two of these forecast suites (ECMWF-S4 and GloSea5), indicating the potential use of current seasonal forecast systems. In a regression model we further derive windstorm variability using the forecast NAO from the seasonal model suites, thus estimating the suitability of the NAO as the only predictor. We find that the NAO, as the main large-scale mode over Europe, can explain some of the achieved skill and is therefore an important source of variability in the seasonal models. However, our results show that the regression model fails to reproduce the skill level of the directly forecast windstorm frequency over large areas of central Europe. This suggests that the seasonal models also capture sources of variability/predictability of windstorms other than the NAO. In order to investigate which other large-scale variability modes steer the interannual variability of windstorms, we develop a statistical model using a Poisson GLM. We find that the Scandinavian Pattern (SCA) in fact explains a larger amount of variability for Central Europe during the 20th century than the NAO. This statistical model is able to skilfully reproduce the interannual variability of windstorm frequency, especially for the British Isles and Central Europe, with correlations up to 0.8.
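For concreteness, here is a sketch of the kind of Poisson GLM described, with synthetic stand-ins for the seasonal NAO and SCA index series and winter storm counts; the study's actual predictors and data handling are more involved.

```python
# Poisson GLM relating windstorm counts per winter to large-scale mode indices.
# nao, sca and counts below are synthetic placeholders, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
nao, sca = rng.standard_normal((2, 60))            # 60 synthetic winters
counts = rng.poisson(np.exp(1.0 + 0.4 * sca + 0.1 * nao))

X = sm.add_constant(np.column_stack([nao, sca]))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)   # relative coefficient sizes indicate which mode dominates
```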
NASA Astrophysics Data System (ADS)
Riley, W. J.; Zhu, Q.; Tang, J.
2017-12-01
Uncertainties in current Earth System Model (ESM) predictions of terrestrial carbon-climate feedbacks over the 21st century are as large as, or larger than, any other reported natural system uncertainties. Soil Organic Matter (SOM) decomposition and photosynthesis, the dominant fluxes in this regard, are tightly linked through nutrient availability, and the recent Coupled Model Intercomparison Project 5 (CMIP5) used for climate change assessment had no credible representations of these constraints. In response, many ESM land models (ESMLMs) have developed dynamic and coupled soil and plant nutrient cycles. Here we quantify terrestrial carbon cycle impacts from well-known observed plant nutrient uptake mechanisms ignored in most current ESMLMs. In particular, we estimate the global role of plant root nutrient competition with microbes and abiotic processes at night and during the non-growing season using the ACME land model (ALMv1-ECA-CNP), which explicitly represents these dynamics. We first demonstrate that short-term nutrient uptake dynamics and competition between plants and microbes are accurately predicted by the model compared to 15N and 33P isotopic tracer measurements from more than 20 sites. We then show that global nighttime and non-growing season nitrogen and phosphorus uptake account for 46 and 45%, respectively, of annual uptake, with large latitudinal variation. Model experiments show that ignoring these plant uptake periods leads to large positive biases in annual N leaching (globally 58%) and N2O emissions (globally 68%). Biases this large will affect modeled carbon cycle dynamics over time, and lead to predictions of ecosystems that have overly open nutrient cycles and therefore lower capacity to sequester carbon.
Research on Fault Rate Prediction Method of T/R Component
NASA Astrophysics Data System (ADS)
Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu
2017-07-01
The T/R component is an important part of a large phased-array radar antenna; because of the large number of components and their high fault rate, fault prediction for them is of practical importance. Aiming at the problems of the traditional grey model GM(1,1) in practical operation, this paper establishes a discrete grey model based on the original model, introduces an optimization factor to optimize the background value, and adds a linear term to the prediction model, yielding an improved discrete grey model with linear regression. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, is simple to solve, and has a wider scope of application.
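For reference, here is a compact sketch of the classical GM(1,1) baseline that the paper's discrete, background-value-optimized variant improves upon; the fault-rate series below is invented for illustration.

```python
# Classical grey model GM(1,1): accumulate the series (1-AGO), fit the grey
# parameters by least squares, and invert the accumulated forecast.
import numpy as np

def gm11_forecast(x0, steps=2):
    """Fit GM(1,1) on series x0 and forecast `steps` values ahead."""
    n = len(x0)
    x1 = np.cumsum(x0)                                  # accumulated series (1-AGO)
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]    # development / grey input
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # time-response function
    return np.diff(x1_hat, prepend=0.0)                 # inverse AGO -> forecasts

fault_rate = np.array([2.87, 3.03, 3.25, 3.36, 3.68])  # invented example series
print(gm11_forecast(fault_rate))                        # last entries are forecasts
```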
Can limited area NWP and/or RCM models improve on large scales inside their domain?
NASA Astrophysics Data System (ADS)
Mesinger, Fedor; Veljovic, Katarina
2017-04-01
In a paper in press in Meteorology and Atmospheric Physics at the time this abstract is being written, Mesinger and Veljovic point out four requirements that need to be fulfilled by a limited area model (LAM), be it in an NWP or RCM environment, to improve on large scales inside its domain. First, the NWP/RCM model needs to be run on a relatively large domain; note that domain size is quite inexpensive compared to resolution. Second, the NWP/RCM model should not use more forcing at its boundaries than required by the mathematics of the problem. That means prescribing lateral boundary conditions only at its outside boundary, with one less prognostic variable prescribed at the outflow than at the inflow parts of the boundary. Next, nudging towards the large scales of the driver model must not be used, as it would obviously be nudging in the wrong direction if the nested model can improve on large scales inside its domain. And finally, the NWP/RCM model must have features that enable development of large scales improved compared to those of the driver model; this would typically include higher resolution, but obviously does not have to. Integrations showing improvements in large scales by LAM ensemble members are summarized in the mentioned paper in press. The ensemble members referred to are run using the Eta model and are driven by ECMWF 32-day ensemble members initialized at 0000 UTC 4 October 2012. The Eta model used is the so-called "upgraded Eta," or "sloping steps Eta," which is free of the Gallus-Klemp problem of weak flow in the lee of bell-shaped topography; that problem seemed to many to suggest that the eta coordinate is ill suited for high-resolution models. The "sloping steps" in fact represent a simple version of the cut-cell scheme. Accuracy in forecasting the position of jet stream winds, chosen to be those with speeds greater than 45 m/s at 250 hPa and expressed by Equitable Threat (or Gilbert) skill scores adjusted to unit bias (ETSa), was taken to show the skill at large scales. The average rms wind difference at 250 hPa compared to ECMWF analyses was used as another verification measure. With 21 members run, and with the driver global and the nested Eta at about the same resolution during the first 10 days of the experiment, both verification measures generally demonstrate the advantage of the Eta, in particular during and after the passage of a deep upper-tropospheric trough across the Rockies in the first 2-6 days of the experiment. Rerunning the Eta ensemble switched to use sigma (Eta/sigma) showed this advantage of the Eta to come to a considerable degree, but not entirely, from its use of the eta coordinate. Compared to cumulative scores of the ensembles run, this is demonstrated to an even greater degree by the number of "wins" of one model vs. another. Thus, at the 4.5-day time, when the trough had just about crossed the Rockies, all 21 Eta/eta members have better ETSa scores than their ECMWF driver members. Eta/sigma has 19 members improving upon ECMWF, but loses to Eta/eta by a score of as much as 20 to 1. ECMWF members do better on rms scores, losing to Eta/eta by 18 vs. 3, but winning over Eta/sigma by 12 to 9. Examples of the wind plots behind these results are shown, and additional reasons possibly helping or not helping the results summarized are discussed.
Scaling predictive modeling in drug development with cloud computing.
Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola
2015-01-26
Growing data sets and the increasing time needed to analyze them are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.
Concept definition study for an extremely large aerophysics range facility
NASA Technical Reports Server (NTRS)
Swift, H.; Witcofski, R.
1992-01-01
The development of a large aerophysical ballistic range facility is considered for studying large-scale hypersonic flows at high Reynolds numbers around complex shapes. A two-stage light gas gun is considered for the hypervelocity launcher, and the extensive range tankage is discussed with respect to blast suppression, model disposition, and the sabot impact tank. A layout is given for the large aerophysics facility, and illustrations are provided for key elements such as the guide rail. The paper shows that such a facility could be used to launch models with diameters approaching 250 mm at velocities of 6.5 km/s with peak achievable accelerations of not more than 85.0 kgs. The envisioned range would provide gas-flow facilities capable of controlling the modeled quiescent atmospheric conditions. The facility is argued to be a feasible and important step in the investigation and testing of hypersonic vehicles such as the National Aerospace Plane.
Wong, William W L; Feng, Zeny Z; Thein, Hla-Hla
2016-11-01
Agent-based models (ABMs) are computer simulation models that define interactions among agents and simulate emergent behaviors that arise from the ensemble of local decisions. ABMs have been increasingly used to examine trends in infectious disease epidemiology. However, the main limitation of ABMs is the high computational cost for a large-scale simulation. To improve the computational efficiency for large-scale ABM simulations, we built a parallelizable sliding region algorithm (SRA) for ABM and compared it to a nonparallelizable ABM. We developed a complex agent network and performed two simulations to model hepatitis C epidemics based on the real demographic data from Saskatchewan, Canada. The first simulation used the SRA that processed on each postal code subregion subsequently. The second simulation processed the entire population simultaneously. It was concluded that the parallelizable SRA showed computational time saving with comparable results in a province-wide simulation. Using the same method, SRA can be generalized for performing a country-wide simulation. Thus, this parallel algorithm enables the possibility of using ABM for large-scale simulation with limited computational resources.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting, or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad hoc randomized model-building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
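A sketch of such a deterministic simple-cycle reservoir: one shared cycle weight r, fixed-magnitude input weights, and a trained linear readout. The deterministic input-sign scheme below is an assumption for illustration; the paper fixes signs by its own deterministic rule.

```python
# Simple cycle reservoir: the recurrent matrix is a single cycle with shared
# weight r; only the linear readout is trained (ridge regression).
import numpy as np

N, r, v = 100, 0.9, 0.5                       # reservoir size, cycle weight, input scale
W = np.zeros((N, N))
W[np.arange(1, N), np.arange(N - 1)] = r      # unit i-1 feeds unit i ...
W[0, N - 1] = r                               # ... closing the single cycle
V = v * np.where(np.cos(np.arange(N)) > 0, 1.0, -1.0)  # deterministic signs (assumed)

def run_reservoir(u):
    """Collect reservoir states for a scalar input sequence u."""
    x, states = np.zeros(N), []
    for ut in u:
        x = np.tanh(W @ x + V * ut)
        states.append(x.copy())
    return np.array(states)

u = np.sin(np.arange(500) * 0.2)              # toy input stream
S, y = run_reservoir(u)[:-1], u[1:]           # one-step-ahead prediction task
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(N), S.T @ y)  # ridge readout
print(np.mean((S @ W_out - y) ** 2))          # training error of the readout
```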
Building occupancy simulation and data assimilation using a graph-based agent-oriented model
NASA Astrophysics Data System (ADS)
Rai, Sanish; Hu, Xiaolin
2018-07-01
Building occupancy simulation and estimation simulates the dynamics of occupants and estimates their real-time spatial distribution in a building. It requires a simulation model and an algorithm for data assimilation that assimilates real-time sensor data into the simulation model. Existing building occupancy simulation models include agent-based models and graph-based models. The agent-based models suffer high computation cost for simulating large numbers of occupants, and graph-based models overlook the heterogeneity and detailed behaviors of individuals. Recognizing the limitations of existing models, this paper presents a new graph-based agent-oriented model which can efficiently simulate large numbers of occupants in various kinds of building structures. To support real-time occupancy dynamics estimation, a data assimilation framework based on Sequential Monte Carlo Methods is also developed and applied to the graph-based agent-oriented model to assimilate real-time sensor data. Experimental results show the effectiveness of the developed model and the data assimilation framework. The major contributions of this work are to provide an efficient model for building occupancy simulation that can accommodate large numbers of occupants and an effective data assimilation framework that can provide real-time estimations of building occupancy from sensor data.
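A generic bootstrap-filter step of the kind used in Sequential Monte Carlo data assimilation; `step_model` and `likelihood` below stand in for the graph-based occupancy model and the sensor model, which are not shown here.

```python
# One SMC assimilation cycle: propagate particles through the simulation model,
# reweight by the sensor likelihood, and resample when weights degenerate.
import numpy as np

def smc_step(particles, weights, obs, step_model, likelihood, rng):
    """Propagate, reweight by sensor data, resample if ESS is low."""
    particles = np.array([step_model(p, rng) for p in particles])
    weights = weights * np.array([likelihood(obs, p) for p in particles])
    weights /= weights.sum()
    n = len(weights)
    if 1.0 / np.sum(weights ** 2) < n / 2:     # effective sample size too low
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Toy stand-ins: a 1-D occupancy-count random walk and a noisy sensor.
step_model = lambda p, rng: max(0.0, p + rng.normal(0, 1))
likelihood = lambda obs, p: np.exp(-0.5 * (obs - p) ** 2)

rng = np.random.default_rng(0)
particles = rng.uniform(0, 50, 500)            # initial occupancy hypotheses
weights = np.full(500, 1 / 500)
particles, weights = smc_step(particles, weights, 30.0, step_model, likelihood, rng)
print(np.sum(particles * weights))             # posterior mean occupancy estimate
```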
Large memory capacity in chaotic artificial neural networks: a view of the anti-integrable limit.
Lin, Wei; Chen, Guanrong
2009-08-01
In the literature, it was reported that the chaotic artificial neural network model with sinusoidal activation functions possesses a large memory capacity as well as a remarkable ability of retrieving the stored patterns, better than the conventional chaotic model with only monotonic activation functions such as sigmoidal functions. This paper, from the viewpoint of the anti-integrable limit, elucidates the mechanism inducing the superiority of the model with periodic activation functions that includes sinusoidal functions. Particularly, by virtue of the anti-integrable limit technique, this paper shows that any finite-dimensional neural network model with periodic activation functions and properly selected parameters has much more abundant chaotic dynamics that truly determine the model's memory capacity and pattern-retrieval ability. To some extent, this paper mathematically and numerically demonstrates that an appropriate choice of the activation functions and control scheme can lead to a large memory capacity and better pattern-retrieval ability of the artificial neural network models.
Computation of rare transitions in the barotropic quasi-geostrophic equations
NASA Astrophysics Data System (ADS)
Laurie, Jason; Bouchet, Freddy
2015-01-01
We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
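A toy illustration of the minimum action idea on a one-dimensional double-well gradient system rather than the quasi-geostrophic setting: discretize the Freidlin-Wentzell action and minimize it over paths pinned at the two attractors. Constants and discretization are assumptions for the sketch.

```python
# Minimize a discretized Freidlin-Wentzell action for dx = -V'(x) dt + noise,
# with path endpoints fixed at the two attractors x = -1 and x = +1.
import numpy as np
from scipy.optimize import minimize

T, M = 20.0, 200
dt = T / M
Vp = lambda x: 4 * x ** 3 - 4 * x                  # V'(x) for V(x) = (x^2 - 1)^2

def action(interior):
    """S = 1/2 * sum |xdot - b(x)|^2 dt  (one common normalization convention)."""
    x = np.concatenate([[-1.0], interior, [1.0]])  # path pinned at the attractors
    xdot = np.diff(x) / dt
    drift = -Vp(0.5 * (x[1:] + x[:-1]))            # b(x) at interval midpoints
    return 0.5 * np.sum((xdot - drift) ** 2) * dt

x0 = np.linspace(-1.0, 1.0, M + 1)[1:-1]           # straight-line initial guess
res = minimize(action, x0, method="L-BFGS-B")
print(action(res.x))   # res.x approximates the most probable transition path
```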
Coupling large scale hydrologic-reservoir-hydraulic models for impact studies in data sparse regions
NASA Astrophysics Data System (ADS)
O'Loughlin, Fiachra; Neal, Jeff; Wagener, Thorsten; Bates, Paul; Freer, Jim; Woods, Ross; Pianosi, Francesca; Sheffied, Justin
2017-04-01
As hydraulic modelling moves to increasingly large spatial domains, it has become essential to take reservoirs and their operations into account. Large-scale hydrological models have included reservoirs for at least the past two decades, yet they cannot explicitly model the variations in spatial extent of reservoirs, and many reservoir operations in hydrological models are not undertaken during run-time operation. This requires a hydraulic model, yet to date no continental-scale hydraulic model has directly simulated reservoirs and their operations. In addition to the need to include reservoirs and their operations in hydraulic models as they move to global coverage, there is also a need to link such models to large-scale hydrology models or land surface schemes. This is especially true for Africa, where the number of river gauges has consistently declined since the middle of the twentieth century. In this study we address these two major issues by developing: (1) a coupling methodology for the VIC large-scale hydrological model and the LISFLOOD-FP hydraulic model, and (2) a reservoir module for the LISFLOOD-FP model, which currently includes four sets of reservoir operating rules taken from the major large-scale hydrological models. The Volta Basin, West Africa, was chosen to demonstrate the capability of the modelling framework, as it is a large river basin (~400,000 km2) and contains the largest man-made lake in terms of area (8,482 km2), Lake Volta, created by the Akosombo dam. Lake Volta also experiences a seasonal variation in water levels of between two and six metres, which creates a dynamic shoreline. In this study, we first run our coupled VIC and LISFLOOD-FP model without explicitly modelling Lake Volta and then compare these results with those from model runs where the dam operations and Lake Volta are included. The results show that we are able to reproduce the variation in Lake Volta water levels and that including the dam operations and Lake Volta has significant impacts on the water levels across the domain.
Nanoindentation of virus capsids in a molecular model
NASA Astrophysics Data System (ADS)
Cieplak, Marek; Robbins, Mark O.
2010-01-01
A molecular-level model is used to study the mechanical response of empty cowpea chlorotic mottle virus (CCMV) and cowpea mosaic virus (CPMV) capsids. The model is based on the native structure of the proteins that constitute the capsids and is described in terms of the Cα atoms. Nanoindentation by a large tip is modeled as compression between parallel plates. Plots of the compressive force versus plate separation for CCMV are qualitatively consistent with continuum models and experiments, showing an elastic region followed by an irreversible drop in force. The mechanical response of CPMV has not been studied, but the molecular model predicts an order of magnitude higher stiffness and a much shorter elastic region than for CCMV. These large changes result from small structural changes that increase the number of bonds by only 30% and would be difficult to capture in continuum models. Direct comparison of local deformations in continuum and molecular models of CCMV shows that the molecular model undergoes a gradual symmetry breaking rotation and accommodates more strain near the walls than the continuum model. The irreversible drop in force at small separations is associated with rupturing nearly all of the bonds between capsid proteins in the molecular model, while a buckling transition is observed in continuum models.
Interactive computer graphics and its role in control system design of large space structures
NASA Technical Reports Server (NTRS)
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model (modeling the dynamics, modal analysis, and control system design methodology), are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.
Equilibrium Shapes of Large Trans-Neptunian Objects
NASA Astrophysics Data System (ADS)
Rambaux, Nicolas; Baguet, Daniel; Chambat, Frederic; Castillo-Rogez, Julie C.
2017-11-01
The large trans-Neptunian objects (TNO) with radii larger than 400 km are thought to be in hydrostatic equilibrium. Their shapes can provide clues regarding their internal structures that would reveal information on their formation and evolution. In this paper, we explore the equilibrium figures of five TNOs, and we show that the difference between the equilibrium figures of homogeneous and heterogeneous interior models can reach several kilometers for fast rotating and low density bodies. Such a difference could be measurable by ground-based techniques. This demonstrates the importance of developing the shape up to second and third order when modeling the shapes of large and rapid rotators.
Sampling large random knots in a confined space
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.
2007-09-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
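A short sketch of the uniform random polygon construction, with a brute-force count of crossings in its xy-projection consistent with the O(n^2) growth noted above; computing knot determinants or colorings is beyond this snippet.

```python
# Uniform random polygon (URP): n vertices uniform in the unit cube, joined in
# order into a closed polygon; crossings are counted in the xy-projection.
import numpy as np

def seg_cross(p, q, r, s):
    """True if open segments pq and rs properly cross in the plane."""
    d = lambda a, b, c: (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return d(p, q, r) * d(p, q, s) < 0 and d(r, s, p) * d(r, s, q) < 0

def urp_crossings(n, rng):
    v = rng.random((n, 3))                              # n uniform vertices
    e = [(v[i, :2], v[(i + 1) % n, :2]) for i in range(n)]
    return sum(seg_cross(*e[i], *e[j])
               for i in range(n) for j in range(i + 2, n)
               if not (i == 0 and j == n - 1))          # skip edges sharing a vertex

rng = np.random.default_rng(1)
print(urp_crossings(200, rng))   # grows roughly like n^2 as n increases
```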
Effect of noble gases on an atmospheric greenhouse /Titan/.
NASA Technical Reports Server (NTRS)
Cess, R.; Owen, T.
1973-01-01
Several models for the atmosphere of Titan have been investigated, taking into account various combinations of neon and argon. The investigation shows that the addition of large amounts of Ne and/or Ar will substantially reduce the hydrogen abundance required for a given greenhouse effect. The fact that a large amount of neon should be present if the atmosphere is a relic of the solar nebula is an especially attractive feature of the models, because it is hard to justify appropriate abundances of other enhancing agents.
High storage capacity in the Hopfield model with auto-interactions—stability analysis
NASA Astrophysics Data System (ADS)
Rocchi, Jacopo; Saad, David; Tantari, Daniele
2017-11-01
Recent studies point to the potential storage of a large number of patterns in the celebrated Hopfield associative memory model, well beyond the limits obtained previously. We investigate the properties of new fixed points to discover that they exhibit instabilities for small perturbations and are therefore of limited value as associative memories. Moreover, a large deviations approach also shows that errors introduced to the original patterns induce additional errors and increased corruption with respect to the stored patterns.
Bali, Rachna; Savino, Laura; Ramirez, Diego A.; Tsvetkova, Nelly M.; Bagatolli, Luis; Tablin, Fern; Crowe, John H.; Leidy, Chad
2009-01-01
There has been ample debate on whether cell membranes can present macroscopic lipid domains as predicted by three-component phase diagrams obtained by fluorescence microscopy. Several groups have argued that membrane proteins and interactions with the cytoskeleton inhibit the formation of large domains. In contrast, some polarizable cells do show large regions with qualitative differences in lipid fluidity. It is important to ask more precisely, based on the current phase diagrams, under what conditions large domains would be expected to form in cells. In this work we study the thermotropic phase behavior of the platelet plasma membrane by FTIR, and compare it to a POPC/Sphingomyelin/Cholesterol model representing the outer leaflet composition. We find that this model closely reflects the platelet phase behavior. Previous work has shown that the platelet plasma membrane presents inhomogeneous distribution of DiI18:0 at 24°C, but not at 37°C, which suggests the formation of macroscopic lipid domains at low temperatures. We show by fluorescence microscopy, and by comparison with published phase diagrams, that the outer leaflet model system enters the macroscopic domain region only at the lower temperature. In addition, the low cholesterol content in platelets (~15 mol %) appears to be crucial for the formation of large domains during cooling. PMID:19341703
Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code
NASA Astrophysics Data System (ADS)
Sabotinov, Luben; Chevrier, Patrick
The French best-estimate thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for post-test analysis of the experiment "11% upper plenum break", conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model has been developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: the reactor model (lower plenum, core, bypass, upper plenum, downcomer), four separate loops, the pressurizer, horizontal multitube steam generators, and the break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations have been performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, loop seal clearance, etc. The experimental and calculated results are very sensitive with respect to the fuel cladding temperature, which shows a periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up have been reasonably predicted.
Accurate force field for molybdenum by machine learning large materials data
NASA Astrophysics Data System (ADS)
Chen, Chi; Deng, Zhi; Tran, Richard; Tang, Hanmei; Chu, Iek-Heng; Ong, Shyue Ping
2017-09-01
In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods do not provide satisfactory accuracy on many properties. We will show that by fitting to the energies, forces, and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including elastic constants, melting point, phonon spectra, surface energies, grain boundary energies, etc. We will outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large and long-time scale simulations.
Variability of pCO2 in surface waters and development of prediction model.
Chung, Sewoong; Park, Hyungseok; Yoo, Jisu
2018-05-01
Inland waters are substantial sources of atmospheric carbon, but relevant data are rare in Asian monsoon regions including Korea. Emissions of CO2 to the atmosphere depend largely on the partial pressure of CO2 (pCO2) in water; however, measured pCO2 data are scarce and calculated pCO2 can show large uncertainty. This study had three objectives: 1) to examine the spatial variability of pCO2 in diverse surface water systems in Korea; 2) to compare pCO2 calculated using pH-total alkalinity (Alk) and pH-dissolved inorganic carbon (DIC) with pCO2 measured by an in situ submersible nondispersive infrared detector; and 3) to characterize the major environmental variables determining the variation of pCO2 based on physical, chemical, and biological data collected concomitantly. Of 30 samples, 80% were found supersaturated in CO2 with respect to the overlying atmosphere. pCO2 calculated using pH-Alk and pH-DIC showed weak prediction capability and large variations with respect to measured pCO2. Error analysis indicated that calculated pCO2 is highly sensitive to the accuracy of pH measurements, particularly at low pH. Stepwise multiple linear regression (MLR) and random forest (RF) techniques were implemented to develop the most parsimonious model based on 10 potential predictor variables (pH, Alk, DIC, Uw, Cond, Turb, COD, DOC, TOC, Chla) by optimizing model performance. The RF model showed better performance than the MLR model, and the most parsimonious RF model (pH, Turb, Uw, Chla) improved pCO2 prediction capability considerably compared with the simple calculation approach, reducing the RMSE from 527-544 to 105 μatm at the study sites.
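A sketch of the parsimonious random-forest approach with the four selected predictors (pH, turbidity, wind speed Uw, chlorophyll-a); the arrays below are synthetic placeholders, not the study's measurements.

```python
# Random-forest regression of pCO2 on the four selected predictors, with
# cross-validated RMSE as the performance measure. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((30, 4))                      # columns: pH, Turb, Uw, Chla (scaled)
y = 400 + 2000 * (1 - X[:, 0]) + 50 * rng.standard_normal(30)   # fake pCO2 (uatm)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(rf, X, y, scoring="neg_root_mean_squared_error", cv=5)
print(f"CV RMSE ~ {-scores.mean():.0f} uatm")
```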
Short-term depression and transient memory in sensory cortex.
Gillary, Grant; Heydt, Rüdiger von der; Niebur, Ernst
2017-12-01
Persistent neuronal activity is usually studied in the context of short-term memory localized in central cortical areas. Recent studies show that early sensory areas also can have persistent representations of stimuli which emerge quickly (over tens of milliseconds) and decay slowly (over seconds). Traditional positive feedback models cannot explain sensory persistence for at least two reasons: (i) They show attractor dynamics, with transient perturbations resulting in a quasi-permanent change of system state, whereas sensory systems return to the original state after a transient. (ii) As we show, those positive feedback models which decay to baseline lose their persistence when their recurrent connections are subject to short-term depression, a common property of excitatory connections in early sensory areas. Dual time constant network behavior has also been implemented by nonlinear afferents producing a large transient input followed by much smaller steady state input. We show that such networks require unphysiologically large onset transients to produce the rise and decay observed in sensory areas. Our study explores how memory and persistence can be implemented in another model class, derivative feedback networks. We show that these networks can operate with two vastly different time courses, changing their state quickly when new information is coming in but retaining it for a long time, and that these capabilities are robust to short-term depression. Specifically, derivative feedback networks with short-term depression that acts differentially on positive and negative feedback projections are capable of dynamically changing their time constant, thus allowing fast onset and slow decay of responses without requiring unrealistically large input transients.
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
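The k-out-of-n comparison can be illustrated with a simple independence assumption: if each machine is up with probability p, the probability that at least k of 7 machines are up plays the role of a reward rate. This is a hedged toy calculation, not the thesis's measurement-based reward model.

```python
# Reward rate of a k-out-of-n system under independent machine availability p:
# the binomial tail probability that at least k machines are up.
from math import comb

def k_of_n_reward(p, k, n=7):
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.9                                   # illustrative per-machine availability
for k in (7, 6, 5, 4, 3):
    print(k, f"{k_of_n_reward(p, k):.4f}")
# the largest jump between consecutive k values mirrors the reported
# 6-out-of-7 -> 5-out-of-7 gain
```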
Comparison and Analysis of Geometric Correction Models of Spaceborne SAR
Jiang, Weihao; Yu, Anxi; Dong, Zhen; Wang, Qingsong
2016-01-01
Following the development of synthetic aperture radar (SAR), SAR images have become increasingly common. Many researchers have conducted extensive studies of geolocation models, but little work has been done on selecting among the available models for the geometric correction of SAR images of different terrain. To address the terrain issue, four different models were compared and are described in this paper: a rigorous range-Doppler (RD) model, a rational polynomial coefficients (RPC) model, a revised polynomial (PM) model and an elevation derivation (EDM) model. The results of comparisons of the geolocation capabilities of the models show that a proper model for a SAR image of a specific terrain can be determined. A solution table was obtained to recommend a suitable model to users. Three TerraSAR-X images, two ALOS-PALSAR images and one Envisat-ASAR image were used for the experiment, including flat-terrain and mountain-terrain SAR images as well as two large-area images. Geolocation accuracies of the models for SAR images of different terrain were computed and analyzed. The comparisons of the models show that the RD model was accurate but the least efficient; therefore, it is not the ideal model for real-time implementations. The RPC model is sufficiently accurate and efficient for the geometric correction of SAR images of flat terrain, with precision below 0.001 pixels. The EDM model is suitable for the geolocation of SAR images of mountainous terrain, and its precision can reach 0.007 pixels. Although the PM model does not produce results as precise as the other models, its efficiency is excellent and its potential should not be underestimated. With respect to the geometric correction of SAR images over large areas, the EDM model has higher accuracy, under one pixel, whereas the RPC model consumes one third of the time of the EDM model. PMID:27347973
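As background, here is a schematic of how an RPC model maps ground to image coordinates: each image coordinate is the ratio of two polynomials in normalized latitude, longitude and height. Real RPCs use 20-term cubic polynomials per numerator and denominator; the truncated basis below is an assumption to keep the sketch short.

```python
# Evaluate one image coordinate as a ratio of polynomials in normalized
# ground coordinates; coefficient arrays would come from image metadata.
import numpy as np

def rpc_coord(num, den, lat, lon, h):
    """num, den: coefficient arrays over the monomial basis below."""
    m = np.array([1.0, lat, lon, h, lat * lon, lat * h, lon * h,
                  lat ** 2, lon ** 2, h ** 2])   # truncated basis (assumption)
    return (num @ m) / (den @ m)

# usage: row = rpc_coord(row_num, row_den, P, L, H), and likewise for the
# column, where P, L, H are normalized latitude, longitude and height
```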
NASA Technical Reports Server (NTRS)
Scott, Carl D.
2004-01-01
Chemical kinetic models for the nucleation and growth of clusters and for single-walled carbon nanotube (SWNT) growth are developed for numerical simulations of SWNT production. Two models that involve evaporation and condensation of carbon and metal catalysts are discussed: a full model involving all carbon clusters up to C80, and a reduced model. The full model is based on a fullerene model, with nickel and carbon/nickel cluster reactions added to form SWNTs from soot and fullerenes. The full model has so many species that incorporating them into a flow-field computation for simulating laser ablation and arc processes requires simplification. The model is reduced by defining large clusters that represent many clusters of various sizes. Comparisons are given between these models for cases that may be applicable to arc and laser ablation production. Solutions to the system of chemical rate equations of these models for a ramped temperature profile show that production of various species, including SWNTs, agrees to within about 50% for a fast ramp, and within 10% for a slower temperature decay time.
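The system of chemical rate equations for a ramped temperature profile can be sketched as an ODE integration with temperature-dependent Arrhenius rates. The two-species toy below (free vs. clustered carbon, with invented rate constants and ramp) only illustrates the numerical setup, not Scott's actual mechanism:

    import numpy as np
    from scipy.integrate import solve_ivp

    def temperature(t):
        # assumed linear ramp from 4000 K down to 1500 K over 1 ms (placeholder)
        return 4000.0 - 2500.0 * min(t / 1.0e-3, 1.0)

    def rates(t, y):
        c_free, c_cluster = y
        T = temperature(t)
        k_cond = 1.0e5 * np.exp(-2000.0 / T)   # condensation (placeholder Arrhenius)
        k_evap = 1.0e4 * np.exp(-6000.0 / T)   # evaporation (placeholder Arrhenius)
        flux = k_cond * c_free - k_evap * c_cluster
        return [-flux, flux]

    sol = solve_ivp(rates, (0.0, 2.0e-3), [1.0, 0.0], method="LSODA")
    print(sol.y[:, -1])   # composition at the end of the ramp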
LARGE-SCALE PREDICTIONS OF MOBILE SOURCE CONTRIBUTIONS TO CONCENTRATIONS OF TOXIC AIR POLLUTANTS
This presentation shows concentrations and deposition of toxic air pollutants predicted by a 3-D air quality model, the Community Multi Scale Air Quality (CMAQ) modeling system. Contributions from both on-road and non-road mobile sources are analyzed.
Qin, Changbo; Jia, Yangwen; Su, Z; Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen
2008-07-29
This paper investigates whether remote sensing evapotranspiration estimates can be integrated by means of data assimilation into a distributed hydrological model for improving the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically-based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ, with the remote sensing derived evapotranspiration showing variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems.
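The EKF analysis step at the heart of such an assimilation system can be sketched in a few lines. The dimensions, Jacobian, and error covariances below are invented for illustration and do not reproduce the WEP-L/SEBS configuration of the paper:

    import numpy as np

    x = np.array([0.30, 0.55])        # prior model state (e.g., layer soil moisture)
    P = np.diag([0.02, 0.03])         # prior error covariance
    H = np.array([[2.0, 1.0]])        # linearized (Jacobian) ET observation operator
    R = np.array([[0.10]])            # observation error variance
    y = np.array([1.25])              # remotely sensed ET value (placeholder)

    S = H @ P @ H.T + R               # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_a = x + K @ (y - H @ x)         # analysis (updated) state
    P_a = (np.eye(2) - K @ H) @ P     # analysis error covariance
    print(x_a, P_a)

In the extended Kalman filter, H is re-linearized around the current state at each analysis time, which is what makes the scheme suitable for a non-linear hydrological model.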
Hierarchy in air travel: Few large and many small
NASA Astrophysics Data System (ADS)
Bejan, A.; Chen, R.; Lorente, S.; Wen, C. Y.
2017-07-01
Here, we document the diversity of commercial aircraft models and bodies in use during the past five decades. Special emphasis is on the models that have moved humanity across the globe during the past three decades. The first objective is to show that the apparent diversity is in fact underpinned (sustained) by organization, which is a distinct hierarchy of "few large and many small" coexisting and moving people harmoniously everywhere. The second objective is to rely on the emerging hierarchy in order to predict for the future how few the even bigger models will be and how much more numerous the even smaller models (e.g., drones for package delivery) will be, naturally.
Research on simulation of supercritical steam turbine system in large thermal power station
NASA Astrophysics Data System (ADS)
Zhou, Qiongyang
2018-04-01
In order to improve the stability and safety of supercritical steam turbine system operation in large thermal power stations, the steam turbine body is modeled in this paper. In accordance with the hierarchical modeling idea, the steam turbine body model, condensing system model, deaeration system model and regenerative system model are then combined into a simulation model of the steam turbine system according to the connection relationships of its subsystems. Finally, the correctness of the model is verified against design and operation data of a 600 MW supercritical unit. The results show that the maximum simulation error of the model is 2.15%, which meets engineering requirements. This research provides a platform for studying the variable operating conditions of the turbine system, and lays a foundation for the construction of a whole-plant model of the thermal power plant.
NASA Astrophysics Data System (ADS)
Farag, Mohammed; Sweity, Haitham; Fleckenstein, Matthias; Habibi, Saeid
2017-08-01
Real-time prediction of a battery's core temperature and terminal voltage is crucial for an accurate battery management system (BMS). In this paper, a combined electrochemical, heat generation, and thermal model is developed for large prismatic cells. The proposed model consists of three sub-models, an electrochemical model, a heat generation model, and a thermal model, which are coupled together in an iterative fashion through physicochemical temperature-dependent parameters. The proposed parameterization cycles identify the sub-models' parameters separately by exciting the battery under isothermal and non-isothermal operating conditions. The proposed combined model structure shows accurate terminal voltage and core temperature prediction at various operating conditions while maintaining a simple mathematical structure, making it ideal for real-time BMS applications. Finally, the model is validated against both isothermal and non-isothermal drive cycles, covering a broad range of C-rates and temperatures from -25 °C to 45 °C.
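The iterative coupling through temperature-dependent parameters can be illustrated with a toy loop in which an electrical sub-model (ohmic heating with a temperature-dependent internal resistance) and a lumped thermal sub-model are iterated to consistency within each time step. All parameter values are invented placeholders, not the paper's identified parameters:

    import numpy as np

    def internal_resistance(T_celsius):
        # assumed Arrhenius-like rise of resistance at low temperature (placeholder)
        T = T_celsius + 273.15
        return 0.001 + 0.002 * np.exp(500.0 * (1.0 / T - 1.0 / 298.15))

    I, dt = 100.0, 1.0                    # discharge current (A), time step (s)
    C_th, R_th, T_amb = 800.0, 2.0, 25.0  # lumped thermal capacity, resistance, ambient
    T_core = 25.0
    for step in range(600):               # 10 minutes of constant-current discharge
        T_guess = T_core
        for _ in range(10):               # fixed-point iteration between sub-models
            Q = I**2 * internal_resistance(T_guess)   # heat generation (W)
            T_next = T_core + dt * (Q - (T_guess - T_amb) / R_th) / C_th
            if abs(T_next - T_guess) < 1e-6:
                break
            T_guess = T_next
        T_core = T_next
    print(f"core temperature after 10 min: {T_core:.1f} degC")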
NASA Astrophysics Data System (ADS)
Shin, Sun-Hee; Kim, Ok-Yeon; Kim, Dongmin; Lee, Myong-In
2017-07-01
Using 32 CMIP5 (Coupled Model Intercomparison Project Phase 5) models, this study examines the fidelity of the simulated cloud amount and cloud radiative effects (CREs) in the historical run driven by observed external radiative forcing for 1850-2005, and their future changes in the RCP (Representative Concentration Pathway) 4.5 scenario runs for 2006-2100. Validation metrics for the historical run are designed to examine the accuracy in the representation of spatial patterns for the climatological mean, and annual and interannual variations of clouds and CREs. The models show large spread in the simulation of cloud amounts, specifically in the low cloud amount. The observed relationship between cloud amount and the controlling large-scale environment is also reproduced diversely by the various models. Based on the validation metrics, four models (ACCESS1.0, ACCESS1.3, HadGEM2-CC, and HadGEM2-ES) are selected as best models, and the average of the four models performs more skillfully than the multimodel ensemble average. All models project global-mean SST warming at the increase of the greenhouse gases, but the magnitude varies across the simulations between 1 and 2 K, which is largely attributable to differences in the change of cloud amount and distribution. The models that simulate more SST warming show a greater increase in the net CRE due to reduced low cloud and increased incoming shortwave radiation, particularly over the regions of marine boundary layer in the subtropics. Selected best-performing models project a significant reduction in global-mean cloud amount of about -0.99% K-1 and net radiative warming of 0.46 W m-2 K-1, suggesting a positive cloud feedback on global warming.
de Thoisy, Benoit; Fayad, Ibrahim; Clément, Luc; Barrioz, Sébastien; Poirier, Eddy; Gond, Valéry
2016-01-01
Tropical forests with a low human population and an absence of large-scale deforestation provide unique opportunities to study successful conservation strategies, which should be based on adequate monitoring tools. This study explored the conservation status of a large predator, the jaguar, considered an indicator of how well ecological processes are maintained. We implemented an original integrative approach, exploring successive ecosystem status proxies, from habitats and responses to threats of predators and their prey, to canopy structure and forest biomass. Niche modeling allowed identification of more suitable habitats, significantly related to canopy height and forest biomass. Capture/recapture methods showed that jaguar density was higher in habitats identified as more suitable by the niche model. Surveys of ungulates, large rodents and birds also showed higher density where jaguars were more abundant. Although jaguar density does not allow early detection of overall vertebrate community collapse, a decrease in the abundance of large terrestrial birds was noted as good first evidence of disturbance. The most promising tool comes from easily acquired LiDAR data and radar images: a decrease in canopy roughness was closely associated with the disturbance of forests and associated decreasing vertebrate biomass. This mixed approach, focusing on an apex predator, ecological modeling and remote-sensing information, not only helps detect early population declines in large mammals, but is also useful to discuss the relevance of large predators as indicators and the efficiency of conservation measures. It can also be easily extrapolated and adapted in a timely manner, since important open-source data are increasingly available and relevant for large-scale and real-time monitoring of biodiversity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Creminelli, Paolo; Gleyzes, Jérôme; Vernizzi, Filippo
2014-06-01
The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10^-3 to 10^-4 on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.
Stability of Black Holes and the Speed of Gravitational Waves within Self-Tuning Cosmological Models
NASA Astrophysics Data System (ADS)
Babichev, Eugeny; Charmousis, Christos; Esposito-Farèse, Gilles; Lehébel, Antoine
2018-06-01
The gravitational wave event GW170817 together with its electromagnetic counterparts constrains the speed of gravity to be extremely close to that of light. We first show, on the example of an exact Schwarzschild-de Sitter solution of a specific beyond-Horndeski theory, that imposing the strict equality of these speeds in the asymptotic homogeneous Universe suffices to guarantee this equality even in the vicinity of the black hole, where large curvature and scalar-field gradients are present. We also find that the solution is stable in a range of the model parameters. We finally show that an infinite class of beyond-Horndeski models satisfying the equality of gravity and light speeds still provides an elegant self-tuning: the very large bare cosmological constant entering the Lagrangian is almost perfectly counterbalanced by the energy-momentum tensor of the scalar field, yielding a tiny observable effective cosmological constant.
Simulation Based Exploration of Critical Zone Dynamics in Intensively Managed Landscapes
NASA Astrophysics Data System (ADS)
Kumar, P.
2017-12-01
The advent of high-resolution measurements of topographic and (vertical) vegetation features using aerial LiDAR is enabling us to resolve micro-scale (~1 m) landscape structural characteristics over large areas. The availability of hyperspectral measurements is further augmenting these LiDAR data by enabling the biogeochemical characterization of vegetation and soils at unprecedented spatial resolutions (~1-10 m). Such data have opened up novel opportunities for modeling Critical Zone processes and exploring questions that were not possible before. We show how an integrated 3-D model at 1 m grid resolution can enable us to resolve micro-topographic and ecological dynamics and their control on hydrologic and biogeochemical processes over large areas. We address the computational challenge of such detailed modeling by exploiting hybrid CPU and GPU computing technologies. We show results of moisture, biogeochemical, and vegetation dynamics from studies in the Critical Zone Observatory for Intensively Managed Landscapes (IMLCZO) in the Midwestern United States.
NASA Astrophysics Data System (ADS)
Werner, Micha; Blyth, Eleanor; Schellekens, Jaap
2016-04-01
Global hydrological and land-surface models are becoming increasingly available, and as their resolution and their representation of hydrological processes improve, so does their potential. They offer consistent datasets at the global scale, which can be used to establish water balances and derive policy-relevant indicators in medium to large basins, including those that are poorly gauged. However, differences in model structure, model parameterisation, and model forcing may result in quite different indicator values being derived, depending on the model used. In this paper we explore indicators developed using four land surface models (LSMs) and five global hydrological models (GHMs). Results from these models have been made available through the Earth2Observe project, a recent research initiative funded by the European Union 7th Research Framework. All models have a resolution of 0.5 arc degrees, and are forced using the same WATCH-ERA-Interim (WFDEI) meteorological re-analysis data at a daily time step for the 32-year period from 1979 to 2012. We explore three water resources indicators: an aridity index; a simplified water exploitation index; and an indicator that calculates the frequency of occurrence of root zone stress. We compare indicators derived over selected areas/basins in Europe, Colombia, Southern Africa, the Indian Subcontinent and Australia/New Zealand. The hydrological fluxes calculated show quite significant differences between the nine models, despite the common forcing dataset, and these differences are reflected in the indicators subsequently derived. The results show that the variability between models is related to the different climate types, with that variability quite logically depending largely on the availability of water. Patterns are also found in the type of models that dominate different parts of the distribution of the indicator values, with LSMs providing lower values and GHMs providing higher values in some climates, and vice versa in others. How important this variability is in supporting a policy decision depends largely on how decision thresholds are set. For example, in the case of the aridity index, with areas being denoted as arid with an index of 0.6 or above, we show that the variability is primarily of interest in transitional climates, such as the Mediterranean. The analysis shows that while both LSMs and GHMs provide useful data, indices derived to support water resources management planning may differ substantially, depending on the model used. The analysis also identifies the climates in which improvements to the models are particularly relevant to support the confidence with which decisions can be taken based on the derived indicators.
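As a concrete (hedged) example of indicator derivation, the sketch below computes a simple aridity index from gridded precipitation and potential evapotranspiration. The definition AI = 1 - P/PET is one plausible choice consistent with the convention above that values of 0.6 or more denote arid areas; the paper's exact formulas for its three indicators are not restated here:

    import numpy as np

    # Toy annual fields (mm/yr) on a 2x2 grid; in practice these come from the
    # 0.5-degree model outputs described above.
    P   = np.array([[900.0, 350.0], [150.0, 600.0]])
    PET = np.array([[1000.0, 1400.0], [1600.0, 1200.0]])

    AI = np.clip(1.0 - P / PET, 0.0, 1.0)   # assumed indicator definition
    print(AI.round(2))
    print("arid cells (AI >= 0.6):", int((AI >= 0.6).sum()))

Running the same computation against each of the nine models' fluxes, as the paper does, exposes how the inter-model spread propagates into the indicator.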
Sensitivity of the Greenland Ice Sheet to Pliocene sea surface temperatures
Hill, Daniel J.; Dolan, Aisling M.; Haywood, Alan M.; Hunter, Stephen J.; Stoll, Danielle K.
2010-01-01
PRISM3). Use of these different SSTs within the Hadley Centre GCM (General Circulation Model) and BASISM (British Antarctic Survey Ice Sheet Model) consistently shows large reductions of Pliocene Greenland ice volumes compared to modern. The changes in climate introduced by the use of different SST reconstructions do change the predicted ice volumes, mainly through precipitation feedbacks. However, the models show a relatively low sensitivity of modelled Greenland ice volumes to different mid-Piacenzian SST reconstructions, with the largest SST-induced changes being 20% of Pliocene ice volume or less than a metre of sea-level rise.
Wave functions of symmetry-protected topological phases from conformal field theories
NASA Astrophysics Data System (ADS)
Scaffidi, Thomas; Ringel, Zohar
2016-03-01
We propose a method for analyzing two-dimensional symmetry-protected topological (SPT) wave functions using a correspondence with conformal field theories (CFTs) and integrable lattice models. This method generalizes the CFT approach for the fractional quantum Hall effect wherein the wave-function amplitude is written as a many-operator correlator in the CFT. Adopting a bottom-up approach, we start from various known microscopic wave functions of SPTs with discrete symmetries and show how the CFT description emerges at large scale, thereby revealing a deep connection between group cocycles and critical, sometimes integrable, models. We show that the CFT describing the bulk wave function is often also the one describing the entanglement spectrum, but not always. Using a plasma analogy, we also prove the existence of hidden quasi-long-range order for a large class of SPTs. Finally, we show how response to symmetry fluxes is easily described in terms of the CFT.
ERIC Educational Resources Information Center
Sparrow, Wendy; Butvilofsky, Sandra; Escamilla, Kathy; Hopewell, Susan; Tolento, Teresa
2014-01-01
This longitudinal study examines the biliteracy results of Spanish-English emerging bilingual students who participated in a K-5 paired literacy model in a large school district in Oregon. Spanish and English reading and writing data show longitudinal gains in students' biliterate development, demonstrating the potential of the model in developing…
A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses
USDA-ARS?s Scientific Manuscript database
Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...
Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits
NASA Astrophysics Data System (ADS)
Fröwis, Florian; Yadin, Benjamin; Gisin, Nicolas
2018-04-01
Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schrödinger's cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting (to flux qubits) the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might exhibit quantum coherence more than 100 times larger than experiments in the classical regime. However, we argue that the often-used demonstration of an avoided crossing in the energy spectrum is not sufficient to draw a conclusion about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed.
THE CHALLENGE OF THE LARGEST STRUCTURES IN THE UNIVERSE TO COSMOLOGY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Changbom; Choi, Yun-Young; Kim, Sungsoo S.
2012-11-01
Large galaxy redshift surveys have long been used to constrain cosmological models and structure formation scenarios. In particular, the largest structures discovered observationally are thought to carry critical information on the amplitude of large-scale density fluctuations or the homogeneity of the universe, and have often challenged the standard cosmological framework. The Sloan Great Wall (SGW) recently found in the Sloan Digital Sky Survey (SDSS) region casts doubt on the concordance cosmological model with a cosmological constant (i.e., the flat ΛCDM model). Here we show that the existence of the SGW is perfectly consistent with the ΛCDM model, a result that only our very large cosmological N-body simulation (the Horizon Run 2, HR2) could supply. In addition, we report on the discovery of a void complex in the SDSS much larger than the SGW, and show that such a size for the largest void is also predicted in the ΛCDM paradigm. Our results demonstrate that an initially homogeneous isotropic universe with primordial Gaussian random phase density fluctuations growing in accordance with general relativity can explain the richness and size of the observed large-scale structures in the SDSS. Using the HR2 simulation we predict that a future galaxy redshift survey about four times deeper or with a 3 mag fainter limit than the SDSS should reveal a largest structure of bright galaxies about twice as big as the SGW.
Evaluation of a Theory of Instructional Sequences for Physics Instruction
NASA Astrophysics Data System (ADS)
Wackermann, Rainer; Trendel, Georg; Fischer, Hans E.
2010-05-01
The background of the study is the theory of basis models of teaching and learning, a comprehensive set of models of learning processes which includes, for example, learning through experience and problem-solving. The combined use of different models of learning processes has not been fully investigated and it is frequently not clear under what circumstances a particular model should be used by teachers. In contrast, the theory under investigation here gives guidelines for choosing a particular model and provides instructional sequences for each model. The aim is to investigate the implementation of the theory applied to physics instruction and to show if possible effects for the students may be attributed to the use of the theory. Therefore, a theory-oriented education programme for 18 physics teachers was developed and implemented in the 2005/06 school year. The main features of the intervention consisted of coaching physics lessons and video analysis according to the theory. The study follows a pre-treatment-post design with non-equivalent control group. Findings of repeated-measures ANOVAs show large effects for teachers' subjective beliefs, large effects for classroom actions, and small to medium effects for student outcomes such as perceived instructional quality and student emotions. The teachers/classes that applied the theory especially well according to video analysis showed the larger effects. The results showed that differentiating between different models of learning processes improves physics instruction. Effects can be followed through to student outcomes. The education programme effect was clearer for classroom actions and students' outcomes than for teachers' beliefs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Shaocheng; Klein, Stephen A.; Yio, J. John
2006-03-11
European Centre for Medium-Range Weather Forecasts (ECMWF) analysis and model forecast data are evaluated using observations collected during the Atmospheric Radiation Measurement (ARM) October 2004 Mixed-Phase Arctic Cloud Experiment (M-PACE) at its North Slope of Alaska (NSA) site. It is shown that the ECMWF analysis reasonably represents the dynamic and thermodynamic structures of the large-scale systems that affected the NSA during M-PACE. The model-analyzed near-surface horizontal winds, temperature, and relative humidity also agree well with the M-PACE surface measurements. Given the well-represented large-scale fields, the model shows overall good skill in predicting various cloud types observed during M-PACE; however, the physical properties of single-layer boundary layer clouds are in substantial error. At these times, the model substantially underestimates the liquid water path in these clouds, with the concomitant result that the model largely underpredicts the downwelling longwave radiation at the surface and overpredicts the outgoing longwave radiation at the top of the atmosphere. The model also overestimates the net surface shortwave radiation, mainly because of the underestimation of the surface albedo. The problem in the surface albedo is primarily associated with errors in the surface snow prediction. Principally because of the underestimation of the surface downwelling longwave radiation at the times of single-layer boundary layer clouds, the model shows a much larger energy loss (-20.9 W m-2) than the observation (-9.6 W m-2) at the surface during the M-PACE period.
The Influence of Internal Model Variability in GEOS-5 on Interhemispheric CO2 Exchange
NASA Technical Reports Server (NTRS)
Allen, Melissa; Erickson, David; Kendall, Wesley; Fu, Joshua; Ott, Leslie; Pawson, Steven
2012-01-01
An ensemble of eight atmospheric CO2 simulations was completed employing the National Aeronautics and Space Administration (NASA) Goddard Earth Observation System, Version 5 (GEOS-5) for the years 2000-2001, each with initial meteorological conditions corresponding to different days in January 2000, to examine internal model variability. Globally, the model runs show similar concentrations of CO2 for the two years, but in regions of high CO2 concentrations due to fossil fuel emissions, large differences among the simulations appear. The phasing and amplitude of the CO2 cycle at Northern Hemisphere locations in all of the ensemble members are similar to those of surface observations. In several Southern Hemisphere locations, however, some of the GEOS-5 model CO2 cycles are out of phase by as much as four months, and large variations occur between the ensemble members. This result indicates that there is large sensitivity to transport in these regions. The differences vary by latitude: they are most extreme in the Tropics and least at the South Pole. Examples of these differences among the ensemble members with regard to CO2 uptake and respiration of the terrestrial biosphere and CO2 emissions due to fossil fuel emissions are shown at Cape Grim, Tasmania. Integration-based flow analysis of the atmospheric circulation in the model runs shows widely varying paths of flow into the Tasmania region among the ensemble members, including sources from North America, South America, South Africa, South Asia and Indonesia. These results suggest that interhemispheric transport can be strongly influenced by internal model variability.
Variance decomposition shows the importance of human-climate feedbacks in the Earth system
NASA Astrophysics Data System (ADS)
Calvin, K. V.; Bond-Lamberty, B. P.; Jones, A. D.; Shi, X.; Di Vittorio, A. V.; Thornton, P. E.
2017-12-01
The human and Earth systems are intricately linked: climate influences agricultural production, renewable energy potential, and water availability, for example, while anthropogenic emissions from industry and land use change alter temperature and precipitation. Such feedbacks have the potential to significantly alter future climate change. Current climate change projections contain significant uncertainties, however, and because Earth System Models do not generally include dynamic human (demography, economy, energy, water, land use) components, little is known about how climate feedbacks contribute to that uncertainty. Here we use variance decomposition of a novel coupled human-earth system model to show that the influence of human-climate feedbacks can be as large as 17% of the total variance in global mean temperature rise in the near term, and 11% of the total variance in cropland area in the long term. The near-term contribution of energy and land use feedbacks to global mean temperature rise is as large as that from model internal variability, a factor typically considered in modeling studies. Conversely, the contribution of climate feedbacks to cropland extent, while non-negligible, is less than that from socioeconomics, policy, or model. Previous assessments have largely excluded these feedbacks, with the climate community focusing on uncertainty due to internal variability, scenario, and model, and the integrated assessment community focusing on uncertainty due to socioeconomics, technology, policy, and model. Our results set the stage for a new generation of models and hypothesis testing to determine when and how bidirectional feedbacks between human and Earth systems should be considered in future assessments of climate change.
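A minimal version of such a variance decomposition can be sketched with an ANOVA-style calculation over a synthetic ensemble: outcomes indexed by feedback setting, scenario, and realization, with the share of total variance attributed to each factor. All effect sizes below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    feedback = np.array([0.0, 0.4])          # human-climate feedback off/on (K)
    scenario = np.array([0.0, 0.8, 1.6])     # three emission scenarios (K)
    # temperature rise = base + factor effects + internal variability
    data = (2.0 + feedback[:, None, None] + scenario[None, :, None]
            + rng.normal(0.0, 0.3, size=(2, 3, 50)))

    total = data.var()
    share_feedback = data.mean(axis=(1, 2)).var() / total  # variance of factor means
    share_scenario = data.mean(axis=(0, 2)).var() / total
    print(f"feedback share: {share_feedback:.1%}, scenario share: {share_scenario:.1%}")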
Efficient Geological Modelling of Large AEM Surveys
NASA Astrophysics Data System (ADS)
Bach, Torben; Martlev Pallesen, Tom; Jørgensen, Flemming; Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas
2014-05-01
Combining geological expert knowledge with geophysical observations into a final 3D geological model is, in most cases, not a straightforward process. It typically involves many types of data and requires an understanding of both the data and the geological target. When dealing with very large areas, such as modelling of large AEM surveys, the manual task for the geologist of correctly evaluating and properly utilising all the data available in the survey area becomes overwhelming. In the ERGO project (Efficient High-Resolution Geological Modelling) we address these issues and propose a new modelling methodology enabling fast and consistent modelling of very large areas. The vision of the project is to build a user-friendly expert system that enables the combination of very large amounts of geological and geophysical data with geological expert knowledge. This is done through an "auto-pilot" type functionality, named Smart Interpretation, designed to aid the geologist in the interpretation process. The core of the expert system is a statistical model that describes the relation between data and the geological interpretations made by a geological expert. This facilitates fast and consistent modelling of very large areas and enables the construction of high-resolution models, as the system "learns" the geology of an area directly from interpretations made by a geological expert and instantly applies it to all hard data in the survey area, ensuring the utilisation of all the data available in the geological model. Another feature is that the statistical model the system creates for one area can be used in another area with similar data and geology. This can serve as an aid to a less experienced geologist building a geological model, guided by the experienced geologist's way of interpretation as quantified by the expert system in the core statistical model. In this project presentation we provide examples of the problems we aim to address in the project, and show some preliminary results.
NASA Technical Reports Server (NTRS)
Weinberg, David H.; Gott, J. Richard, III; Melott, Adrian L.
1987-01-01
Many models for the formation of galaxies and large-scale structure assume a spectrum of random phase (Gaussian), small-amplitude density fluctuations as initial conditions. In such scenarios, the topology of the galaxy distribution on large scales relates directly to the topology of the initial density fluctuations. Here a quantitative measure of topology - the genus of contours in a smoothed density distribution - is described and applied to numerical simulations of galaxy clustering, to a variety of three-dimensional toy models, and to a volume-limited sample of the CfA redshift survey. For random phase distributions the genus of density contours exhibits a universal dependence on threshold density. The clustering simulations show that a smoothing length of 2-3 times the mass correlation length is sufficient to recover the topology of the initial fluctuations from the evolved galaxy distribution. Cold dark matter and white noise models retain a random phase topology at shorter smoothing lengths, but massive neutrino models develop a cellular topology.
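The "universal dependence on threshold density" for random-phase initial conditions has a closed form in the Gaussian random field literature (not restated in the abstract above): if ν is the threshold in units of the standard deviation of the smoothed field, the genus per unit volume is

    g(\nu) = \frac{1}{(2\pi)^2} \left( \frac{\langle k^2 \rangle}{3} \right)^{3/2} (1 - \nu^2)\, e^{-\nu^2/2},

where ⟨k²⟩ is the power-spectrum-weighted mean-square wavenumber of the smoothed field. The amplitude thus depends on the power spectrum, but the shape does not: the genus is positive (sponge-like topology) near the median threshold and negative (isolated clusters or voids) for |ν| > 1, and departures of a measured genus curve from this shape diagnose non-random-phase initial conditions.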
Global fits of GUT-scale SUSY models with GAMBIT
NASA Astrophysics Data System (ADS)
Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin
2017-12-01
We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.
Trends in evaporation of a large subtropical lake
NASA Astrophysics Data System (ADS)
Hu, Cheng; Wang, Yongwei; Wang, Wei; Liu, Shoudong; Piao, Meihua; Xiao, Wei; Lee, Xuhui
2017-07-01
How rising temperature and changing solar radiation affect evaporation of natural water bodies remains poorly understood. In this study, evaporation from Lake Taihu, a large (area 2400 km2) freshwater lake in the Yangtze River Delta, China, was simulated by the CLM4-LISSS offline lake model and estimated with pan evaporation data. Both methods were calibrated against lake evaporation measured directly with eddy covariance in 2012. Results show a significant increasing trend of annual lake evaporation from 1979 to 2013, at a rate of 29.6 mm decade-1 according to the lake model and 25.4 mm decade-1 according to the pan method. The mean annual evaporation during this period shows good agreement between the two methods (977 mm according to the model and 1007 mm according to the pan method). A stepwise linear regression reveals that downward shortwave radiation was the most significant contributor to the modeled evaporation trend, while air temperature was the most significant contributor to the pan evaporation trend. Wind speed had little impact on the modeled lake evaporation but had a negative contribution to the pan evaporation trend, offsetting some of the temperature effect. Reference evaporation was not a good proxy for the lake evaporation because it was on average 20.6% too high and its increasing trend was too large (56.5 mm decade-1).
Combining points and lines in rectifying satellite images
NASA Astrophysics Data System (ADS)
Elaksher, Ahmed F.
2017-09-01
The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth surface using high-resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation can be used to represent the mathematical relationship between the image-space and object-space coordinate systems, and it provides the accuracy required for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images of different resolutions. The point-based parallel projection transformation model and its extended form are presented, and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- and line-based transformation models are equivalent and satisfy the requirements for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based models are insignificant. The results also showed a high correlation between differences in ground elevation and the RMS errors.
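For context, the point-based parallel projection model referred to above is commonly written in the literature as a linear (affine) mapping between object-space coordinates (X, Y, Z) and image coordinates (x, y) with eight parameters,

    x = A_1 X + A_2 Y + A_3 Z + A_4, \qquad
    y = A_5 X + A_6 Y + A_7 Z + A_8,

estimated from ground control. The extended form adds non-linear correction terms to better accommodate the actual scanner geometry, and the line-based forms replace control points with control lines by substituting a parametric line representation for (X, Y, Z); the exact extensions used in the article are not restated here.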
Estimating Animal Abundance in Ground Beef Batches Assayed with Molecular Markers
Hu, Xin-Sheng; Simila, Janika; Platz, Sindey Schueler; Moore, Stephen S.; Plastow, Graham; Meghen, Ciaran N.
2012-01-01
Estimating animal abundance in industrial scale batches of ground meat is important for mapping meat products through the manufacturing process and for effectively tracing the finished product during a food safety recall. The processing of ground beef involves a potentially large number of animals from diverse sources in a single product batch, which produces a high heterogeneity in capture probability. In order to estimate animal abundance through DNA profiling of ground beef constituents, two parameter-based statistical models were developed for incidence data. Simulations were applied to evaluate the maximum likelihood estimate (MLE) of a joint likelihood function from multiple surveys, showing superiority in the presence of high capture heterogeneity with small sample sizes, or comparable estimation in the presence of low capture heterogeneity with a large sample size, when compared to other existing models. Our model employs the full information on the pattern of the capture-recapture frequencies from multiple samples. We applied the proposed models to estimate animal abundance in six manufacturing beef batches, genotyped using 30 single nucleotide polymorphism (SNP) markers, from a large scale beef grinding facility. Results show that between 411 and 1367 animals were present in the six manufacturing beef batches. These estimates are informative as a reference for improving recall processes and tracing finished meat products back to source. PMID:22479559
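A minimal capture-recapture likelihood conveys the idea, although it ignores the capture heterogeneity that the paper's two models are designed to handle. Under the classical model M0, N animals are each detected with the same probability p in each of T independent surveys; the placeholder counts below are not the paper's data:

    from math import lgamma, log

    def log_lik(N, p, T, D, total):
        """Model-M0 log-likelihood (up to a constant): D distinct animals seen,
        `total` detections over T surveys, out of N animals in the batch."""
        if N < D or not 0.0 < p < 1.0:
            return float("-inf")
        log_choose = lgamma(N + 1) - lgamma(D + 1) - lgamma(N - D + 1)
        return log_choose + total * log(p) + (N * T - total) * log(1.0 - p)

    T, D, total = 3, 480, 700   # placeholder survey summary, NOT the paper's data
    mle_N = max(range(D, 5000),
                key=lambda N: log_lik(N, total / (N * T), T, D, total))
    print("MLE abundance:", mle_N)

The paper's joint likelihood over multiple surveys plays the same role but adds parameters for heterogeneous capture probabilities, which matters when a batch draws unevenly on many animals.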
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal as it is slowed down by the inhomogeneous curvature of the model parameters' space. We then provide a way for rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameters' dynamics including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and by sampling from the parameters' posterior avoids both under- and overfitting along all the directions of the parameters' space. Through the learning of pairwise Ising models from the recording of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method.
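The baseline that the "rectified" algorithm improves upon is plain moment matching: ascend the log-likelihood of a pairwise Ising model by comparing data statistics with statistics from Gibbs samples. The sketch below implements only that baseline on toy data; the rectification of parameter space and the posterior sampling described in the abstract are not reproduced:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 5
    data = rng.choice([-1, 1], size=(2000, n))      # placeholder binarized activity
    m_data, C_data = data.mean(0), data.T @ data / len(data)

    h, J = np.zeros(n), np.zeros((n, n))            # Ising fields and couplings

    def gibbs(h, J, n_samples=1000, burn=100):
        s = rng.choice([-1, 1], size=n)
        out = np.empty((n_samples, n))
        for t in range(n_samples + burn):
            for i in range(n):                      # one full Gibbs sweep
                p_up = 1.0 / (1.0 + np.exp(-2.0 * (h[i] + J[i] @ s)))
                s[i] = 1 if rng.random() < p_up else -1
            if t >= burn:
                out[t - burn] = s
        return out

    for epoch in range(30):                         # steepest-ascent moment matching
        samp = gibbs(h, J)
        h += 0.1 * (m_data - samp.mean(0))          # match means
        J += 0.1 * (C_data - samp.T @ samp / len(samp))  # match pairwise correlations
        np.fill_diagonal(J, 0.0)
    print("residual moment error:", np.abs(m_data - gibbs(h, J).mean(0)).max())

The slow, curvature-limited convergence of exactly this kind of loop is what the paper's rectified parameterization is designed to overcome.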
A collective phase in resource competition in a highly diverse ecosystem
NASA Astrophysics Data System (ADS)
Tikhonov, Mikhail; Monasson, Remi
Recent technological advances uncovered that most habitats, including the human body, harbor hundreds of coexisting microbial ``species''. The problem of understanding such complex communities is currently at the forefront of medical and environmental sciences. A particularly intriguing question is whether the high-diversity regime (large number of species N) gives rise to qualitatively novel phenomena that could not be intuited from analysis of low-dimensional models (with few species). However, few existing approaches allow studying this regime, except in simulations. Here, we use methods of statistical physics to show that the large- N limit of a classic ecological model of resource competition introduced by MacArthur in 1969 can be solved analytically. Our results provide a tractable model where the implications of large dimensionality of eco-evolutionary problems can be investigated. In particular, we show that at high diversity, the MacArthur model exhibits a phase transition into a curious regime where the environment constructed by the community becomes a collective property, insensitive to the external conditions such as the total resource influx supplied to the community. Supported by Harvard Center of Mathematical Sciences and Applications, and the Simons Foundation. This work was completed at the Aspen Center for Physics, supported by National Science Foundation Grant PHY-1066293.
NASA Astrophysics Data System (ADS)
Barberis, Lucas; Peruani, Fernando
2016-12-01
We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
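A minimal simulation of such a position-based vision-cone model can be written compactly; the parameter values and the instantaneous turn toward the neighborhood centroid below are simplifying assumptions for illustration, not the exact update rule of the paper:

    import numpy as np

    rng = np.random.default_rng(2)
    N, L = 100, 10.0                       # particles, periodic box size
    R, half_angle = 1.0, np.pi / 4         # interaction range and VC half-width
    v0, eta = 0.05, 0.1                    # speed and angular noise amplitude
    pos = rng.uniform(0.0, L, size=(N, 2))
    theta = rng.uniform(-np.pi, np.pi, size=N)

    for step in range(1000):
        heading = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        new_theta = theta.copy()
        for i in range(N):
            d = pos - pos[i]               # (distances not wrapped across the box here)
            dist = np.linalg.norm(d, axis=1)
            cosang = d @ heading[i] / np.where(dist > 0.0, dist, 1.0)
            # neighbors must be close AND inside the vision cone; since j may see i
            # without i seeing j, the interaction breaks Newton's third law
            mask = (dist > 0.0) & (dist < R) & (cosang > np.cos(half_angle))
            if mask.any():
                target = d[mask].mean(axis=0)   # attraction only, no alignment term
                new_theta[i] = np.arctan2(target[1], target[0])
        theta = new_theta + eta * rng.normal(size=N)
        pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L
    print(pos[:3])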
Strong control of Southern Ocean cloud reflectivity by ice-nucleating particles
NASA Astrophysics Data System (ADS)
Vergara-Temprado, Jesús; Miltenberger, Annette K.; Furtado, Kalli; Grosvenor, Daniel P.; Shipway, Ben J.; Hill, Adrian A.; Wilkinson, Jonathan M.; Field, Paul R.; Murray, Benjamin J.; Carslaw, Ken S.
2018-03-01
Large biases in climate model simulations of cloud radiative properties over the Southern Ocean cause large errors in modeled sea surface temperatures, atmospheric circulation, and climate sensitivity. Here, we combine cloud-resolving model simulations with estimates of the concentration of ice-nucleating particles in this region to show that our simulated Southern Ocean clouds reflect far more radiation than predicted by global models, in agreement with satellite observations. Specifically, we show that the clouds that are most sensitive to the concentration of ice-nucleating particles are low-level mixed-phase clouds in the cold sectors of extratropical cyclones, which have previously been identified as a main contributor to the Southern Ocean radiation bias. The very low ice-nucleating particle concentrations that prevail over the Southern Ocean strongly suppress cloud droplet freezing, reduce precipitation, and enhance cloud reflectivity. The results help explain why a strong radiation bias occurs mainly in this remote region away from major sources of ice-nucleating particles. The results present a substantial challenge to climate models to be able to simulate realistic ice-nucleating particle concentrations and their effects under specific meteorological conditions.
Fluctuations in the DNA double helix
NASA Astrophysics Data System (ADS)
Peyrard, M.; López, S. C.; Angelov, D.
2007-08-01
DNA is not the static entity suggested by the famous double helix structure. It shows large fluctuational openings, in which the bases, which contain the genetic code, are temporarily exposed. Therefore it is an interesting system in which to study the effect of nonlinearity on the physical properties of a system. A simple model for DNA, at a mesoscopic scale, can be investigated by computer simulation, in the same spirit as the original work of Fermi, Pasta and Ulam. These calculations raise fundamental questions in statistical physics because they show a temporary breaking of equipartition of energy, with regions of large-amplitude fluctuations able to coexist with regions where the fluctuations are very small, even when the model is studied in the canonical ensemble. This phenomenon can be related to nonlinear excitations in the model. The ability of the model to describe the actual properties of DNA is discussed by comparing theoretical and experimental results for the probability that base pairs open at a given temperature in specific DNA sequences. These studies give us indications on the proper description of the effect of the sequence in the mesoscopic model.
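The mesoscopic DNA model described here is commonly written in the Peyrard-Bishop-Dauxois form from the literature (sequence specificity entering through base-pair-dependent Morse parameters):

    H = \sum_n \left[ \frac{p_n^2}{2m}
        + D_n \left( e^{-a_n y_n} - 1 \right)^2
        + \frac{K}{2} \left( 1 + \rho\, e^{-\alpha (y_n + y_{n-1})} \right) (y_n - y_{n-1})^2 \right],

where y_n is the stretching of the n-th base pair, the Morse term models the hydrogen bonds within a pair, and the nonlinear stacking term (controlled by ρ and α) weakens when neighboring pairs open; it is this weakening that lets large-amplitude fluctuations localize and equipartition break down temporarily.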
The Planform Mobility of Large River Channel Confluences
NASA Astrophysics Data System (ADS)
Sambrook Smith, Greg; Dixon, Simon; Nicholas, Andrew; Bull, Jon; Vardy, Mark; Best, James; Goodbred, Steven; Sarker, Maminul
2017-04-01
Large river confluences are widely acknowledged as exerting a controlling influence upon both upstream and downstream morphology and thus channel planform evolution. Despite their importance, little is known concerning their longer-term evolution and planform morphodynamics, with much of the literature treating confluences as fixed, nodal points in the fluvial network. In contrast, some studies of large sand-bed rivers in India and Bangladesh have shown that large river confluences can be highly mobile, although the extent to which this is representative of large confluences around the world is unknown. Confluences have also been shown to generate substantial bed scours, and if the confluence location is mobile these scours could 'comb' across wide areas. This paper presents field data on large confluence morphologies in the Ganges-Brahmaputra-Meghna river basin, illustrating the spatial extent of large river bed scours and showing that scour depth can extend below base level, enhancing long-term preservation potential. Based on a global review of the planform of large river confluences using Landsat imagery from 1972 to 2014, this study demonstrates that such scour features can be highly mobile and that there is an array of confluence morphodynamic types: from freely migrating confluences, through confluences migrating on decadal timescales, to fixed confluences. Based on this analysis, a conceptual model of large river confluence types is proposed, which shows that large river confluences can be sites of extensive bank erosion and avulsion, creating substantial management challenges. We quantify the abundance of mobile confluence types by classifying all large confluences in both the Amazon and Ganges-Brahmaputra-Meghna basins, showing that these two large rivers have contrasting confluence morphodynamics. We show that large river confluences have multiple scales of planform adjustment, with important implications for river management, infrastructure and interpretation of the rock record.
Enhanced pairing susceptibility in a photodoped two-orbital Hubbard model
NASA Astrophysics Data System (ADS)
Werner, Philipp; Strand, Hugo U. R.; Hoshino, Shintaro; Murakami, Yuta; Eckstein, Martin
2018-04-01
Local spin fluctuations provide the glue for orbital-singlet spin-triplet pairing in the doped Mott insulating regime of multiorbital Hubbard models. At large Hubbard repulsion U , the pairing susceptibility is nevertheless tiny because the pairing interaction cannot overcome the suppression of charge fluctuations. Using nonequilibrium dynamical mean field simulations of the two-orbital Hubbard model, we show that out of equilibrium the pairing susceptibility in this large-U regime can be strongly enhanced by creating a photoinduced population of the relevant charge states. This enhancement is supported by the long lifetime of photodoped charge carriers and a built-in cooling mechanism in multiorbital Hubbard systems.
NASA Astrophysics Data System (ADS)
Mistrík, Pavel; Ashmore, Jonathan
2009-02-01
We describe a large-scale computational model of electrical current flow in the cochlea, constructed with a flexible Modified Nodal Analysis algorithm to incorporate electrical components representing hair cells and the intercellular radial and longitudinal current flow. The model is used as a laboratory to study the effects of changing longitudinal gap-junctional coupling, and shows how the cochlear microphonic spreads and how tuning is affected. The process for incorporating mechanical longitudinal coupling and feedback is described. We find a difference in tuning and attenuation depending on whether longitudinal or radial couplings are altered.
Utilization of Large Scale Surface Models for Detailed Visibility Analyses
NASA Astrophysics Data System (ADS)
Caha, J.; Kačmařík, M.
2017-11-01
This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
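For readers unfamiliar with how a Boolean viewshed is computed from a surface model, the following minimal sketch evaluates line-of-sight visibility along a single terrain profile by tracking a running local horizon. The elevations, observer height and unit cell spacing are assumptions for illustration, not data from the Opava case study.

```python
import numpy as np

def profile_visibility(elev, observer_height=1.6):
    """Boolean visibility of each cell along a 1-D terrain profile, seen
    from cell 0 (unit cell spacing assumed). A cell is visible if its
    elevation angle from the observer is not below that of any intervening
    cell. The running difference (a - horizon) would be the 'angle above
    the local horizon' used by extended viewsheds."""
    z0 = elev[0] + observer_height
    angles = (elev[1:] - z0) / np.arange(1, len(elev))  # tan of view angle
    visible = np.zeros(len(elev), dtype=bool)
    visible[0] = True
    horizon = -np.inf
    for i, a in enumerate(angles, start=1):
        visible[i] = a >= horizon     # above the running local horizon
        horizon = max(horizon, a)
    return visible

elev = np.array([100, 102, 101, 108, 104, 103, 115, 110], dtype=float)
print(profile_visibility(elev))
```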
Optimizing BAO measurements with non-linear transformations of the Lyman-α forest
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xinkang; Font-Ribera, Andreu; Seljak, Uroš, E-mail: xinkang.wang@berkeley.edu, E-mail: afont@lbl.gov, E-mail: useljak@berkeley.edu
2015-04-01
We explore the effect of applying a non-linear transformation to the Lyman-α forest transmitted flux F = e^{−τ} and the ability of analytic models to predict the resulting clustering amplitude. Both the large-scale bias of the transformed field (signal) and the amplitude of small-scale fluctuations (noise) can be arbitrarily modified, but we were unable to find a transformation that significantly increases the signal-to-noise ratio on large scales using a Taylor expansion up to third order. We do, however, achieve a 33% improvement in signal-to-noise for the Gaussianized field in the transverse direction. On the other hand, we explore an analytic model for the large-scale biasing of the Lyα forest, and present an extension of this model to describe the biasing of the transformed fields. Using hydrodynamic simulations we show that the model works best to describe the biasing with respect to velocity gradients, but is less successful in predicting the biasing with respect to large-scale density fluctuations, especially for very nonlinear transformations.
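One concrete example of such a non-linear transformation is rank-order Gaussianization. The sketch below is a toy illustration on a synthetic flux field, not the paper's simulations; rank-based mapping is one common way to Gaussianize a field, and may differ from the exact transformation used in the study.

```python
import numpy as np
from scipy.stats import norm

def gaussianize(field):
    """Rank-order Gaussianization: map field values onto a unit Gaussian
    while preserving their ordering -- one non-linear transformation one
    might apply before measuring large-scale clustering."""
    flat = field.ravel()
    ranks = flat.argsort().argsort()      # 0 .. n-1 rank of each value
    u = (ranks + 0.5) / flat.size         # uniform quantiles in (0, 1)
    return norm.ppf(u).reshape(field.shape)

tau = np.random.lognormal(mean=0.0, sigma=1.0, size=(64, 64))
F = np.exp(-tau)           # transmitted flux F = e^{-tau}
G = gaussianize(F)
print(G.mean(), G.std())   # ~0 and ~1 by construction
```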
State of the Art in Large-Scale Soil Moisture Monitoring
NASA Technical Reports Server (NTRS)
Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.;
2013-01-01
Soil moisture is an essential climate variable influencing land-atmosphere interactions, an essential hydrologic variable impacting rainfall-runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years, creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.
The statistical overlap theory of chromatography using power law (fractal) statistics.
Schure, Mark R; Davis, Joe M
2011-12-30
The chromatographic dimensionality was recently proposed as a measure of retention time spacing based on a power law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation and scale. Power law models exhibit a threshold region whereby below a critical saturation value no loss of peak maxima due to peak fusion occurs as saturation increases. At moderate saturation, behavior is similar to the random (Poisson) peak model. At still higher saturation, the power law model shows a loss of peaks nearly independent of the scale and dimension of the model. The physicochemical meaning of the power law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. A small scale range (small β) is shown to generate more uniform chromatograms, while a large scale range (large β) gives occasional large excursions of retention times; this is a property of power laws, where "wild" behavior occasionally occurs. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant model of the SOT shows very simple relationships between the fraction of peak maxima and the saturation, peak width and number of theoretical plates. These equations provide much insight into separations which follow power law statistics. Copyright © 2011 Elsevier B.V. All rights reserved.
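The random (Poisson) limiting case mentioned above can be checked with a short Monte Carlo experiment: placing m peak centers at random and fusing any neighbours closer than a resolution width x0 recovers the classical Poisson-SOT estimate p ≈ m·e^(−α) at saturation α = m·x0/X. The sketch below illustrates only that baseline; the paper's power-law (fractal) spacing statistics would replace the uniform draw.

```python
import numpy as np

rng = np.random.default_rng(1)

def observed_peaks(m, X, x0, trials=200):
    """Monte Carlo count of surviving peak maxima when m randomly placed
    peaks on a time axis of length X fuse whenever neighbours lie closer
    than the resolution width x0 (Poisson-like placement)."""
    counts = []
    for _ in range(trials):
        t = np.sort(rng.uniform(0.0, X, size=m))
        gaps = np.diff(t)
        counts.append(1 + np.sum(gaps >= x0))  # number of fused clusters
    return np.mean(counts)

m, X, x0 = 200, 1000.0, 2.0
alpha = m * x0 / X                  # saturation
print(observed_peaks(m, X, x0))     # simulation
print(m * np.exp(-alpha))           # classical Poisson SOT estimate
```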
A semi-analytic dynamical friction model for cored galaxies
NASA Astrophysics Data System (ADS)
Petts, J. A.; Read, J. I.; Gualandris, A.
2016-11-01
We present a dynamical friction model based on Chandrasekhar's formula that reproduces the fast inspiral and stalling experienced by satellites orbiting galaxies with a large constant-density core. We show that the fast inspiral phase is not due to resonance. Rather, it arises because the background velocity distribution function of the constant-density core is dissimilar from the usually assumed Maxwellian distribution. Using the correct background velocity distribution function and our semi-analytic model from previous work, we are able to correctly reproduce the infall rate in both cored and cusped potentials. However, in the case of large cores, our model is no longer able to correctly capture core-stalling. We show that this stalling owes to the tidal radius of the satellite approaching the size of the core. By switching off dynamical friction when rt(r) = r (where rt is the tidal radius at the satellite's position), we arrive at a model which reproduces the N-body results remarkably well. Since the tidal radius can be very large for constant-density background distributions, our model recovers the result that stalling can occur for Ms/Menc ≪ 1, where Ms and Menc are the mass of the satellite and the enclosed galaxy mass, respectively. Finally, we include the contribution to dynamical friction that comes from stars moving faster than the satellite. This next-to-leading order effect becomes the dominant driver of inspiral near the core region, prior to stalling.
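For reference, the sketch below evaluates the magnitude of the standard Chandrasekhar dynamical-friction deceleration under the usual Maxwellian assumption; all parameter values are illustrative. The paper's central point is precisely that this Maxwellian form fails inside a constant-density core, where the true distribution function must be used instead.

```python
import numpy as np
from scipy.special import erf

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def chandrasekhar_drag(v, M_sat, rho, sigma, lnLambda=5.0):
    """Magnitude of the Chandrasekhar dynamical-friction deceleration,
    in (km/s)^2 per kpc, for a satellite of mass M_sat (Msun) moving at
    speed v (km/s) through a background of density rho (Msun/kpc^3) with
    an assumed *Maxwellian* velocity dispersion sigma (km/s)."""
    X = v / (np.sqrt(2.0) * sigma)
    mx = erf(X) - 2.0 * X / np.sqrt(np.pi) * np.exp(-X * X)
    return 4.0 * np.pi * G**2 * M_sat * rho * lnLambda * mx / v**2

print(chandrasekhar_drag(v=50.0, M_sat=1e8, rho=1e7, sigma=30.0))
```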
The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.
Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun
2017-01-01
Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior on the coefficients that induces weak shrinkage on large coefficients and strong shrinkage on irrelevant ones. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates the strengths of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
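A minimal sketch of the adaptive shrinkage induced by a spike-and-slab double-exponential prior is given below. The hyperparameter values are invented for illustration, and the full EM coordinate-descent fit of the BhGLM package is not reproduced here; the sketch only shows how the effective shrinkage scale interpolates between spike and slab as a coefficient grows.

```python
import numpy as np

def de_pdf(b, s):
    """Double-exponential (Laplace) density with scale s."""
    return np.exp(-np.abs(b) / s) / (2.0 * s)

def effective_scale(b, pi=0.1, s0=0.03, s1=1.0):
    """Spike-and-slab double-exponential prior: with probability pi a
    coefficient comes from the wide 'slab' DE(0, s1), otherwise from the
    narrow 'spike' DE(0, s0). The EM step shrinks each coefficient with
    an effective scale set by the posterior slab probability."""
    p_slab = pi * de_pdf(b, s1)
    p_spike = (1.0 - pi) * de_pdf(b, s0)
    p = p_slab / (p_slab + p_spike)
    return 1.0 / (p / s1 + (1.0 - p) / s0)  # harmonic mixing of penalties

for b in [0.0, 0.05, 0.2, 1.0]:
    print(b, effective_scale(b))  # small b -> strong shrinkage (scale ~ s0)
```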
One-side forward-backward asymmetry in top quark pair production at the CERN Large Hadron Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Youkai; Xiao Bo; Zhu Shouhua
2010-11-01
Both D0 and CDF at the Tevatron reported measurements of the forward-backward asymmetry in top pair production, which showed a possible deviation from the standard model QCD prediction. In this paper, we explore how to examine the same higher-order QCD effects at the more powerful Large Hadron Collider.
Long range transport of air pollutants in Europe and acid precipitation in Norway
Jack Nordo
1976-01-01
Observations show that pollutants from large emission sources may produce significant airborne concentrations 500 to 1000 miles away. Very acid precipitation occurs in such periods. The scavenging is often intensified by the topography. Case studies will be presented, with special emphasis on acid precipitation in Scandinavia. Large scale dispersion models have been developed...
Optimal fiber design for large capacity long haul coherent transmission [Invited].
Hasegawa, Takemi; Yamamoto, Yoshinori; Hirano, Masaaki
2017-01-23
Fiber figure of merit (FOM), derived from the GN-model theory and validated by several experiments, can predict improvement in OSNR or transmission distance using advanced fibers. We review the FOM theory and present design results of optimal fiber for large capacity long haul transmission, showing variation in design results according to system configuration.
NASA Astrophysics Data System (ADS)
Norris, J. Q.
2016-12-01
Published 60 years ago, the Gutenberg-Richter law provides a universal frequency-magnitude distribution for natural and induced seismicity. The GR law is a two-parameter power law, with the b-value specifying the relative frequency of small and large events. For large catalogs of natural seismicity, the observed b-values are near one, while fracking-associated seismicity has observed b-values near two, indicating relatively fewer large events. We have developed a computationally inexpensive percolation model for fracking that allows us to generate large catalogs of fracking-associated seismicity. Using these catalogs, we show that different power-law fitting procedures produce different b-values for the same data set. This shows that care must be taken when determining and comparing b-values for fracking-associated seismicity.
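The sensitivity of the b-value to the fitting procedure is easy to demonstrate on a synthetic Gutenberg-Richter catalog. The sketch below contrasts the Aki maximum-likelihood estimator with a least-squares fit to the cumulative frequency-magnitude curve; it illustrates the general point and does not use the paper's percolation-model catalogs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic GR catalog: magnitudes above completeness Mc follow an
# exponential distribution with rate b * ln(10).
b_true, Mc, n = 2.0, 0.0, 5000
M = Mc + rng.exponential(scale=1.0 / (b_true * np.log(10.0)), size=n)

# Estimator 1: Aki maximum-likelihood b-value.
b_mle = np.log10(np.e) / (M.mean() - Mc)

# Estimator 2: least-squares fit to the cumulative curve
# log10 N(>=M) = a - b M. The two estimators generally disagree on the
# same catalog -- the point made in the abstract.
edges = np.arange(Mc, M.max(), 0.1)
N_cum = np.array([(M >= m).sum() for m in edges])
keep = N_cum > 10                    # avoid the noisy tail
b_ls = -np.polyfit(edges[keep], np.log10(N_cum[keep]), 1)[0]

print(f"b (Aki MLE)       = {b_mle:.3f}")
print(f"b (least squares) = {b_ls:.3f}")
```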
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yim, Bo; Yeh, Sang -Wook; Sohn, Byung -Ju
Observational evidence shows that the Walker circulation (WC) in the tropical Pacific has strengthened in recent decades. In this study, we examine the WC trend for 1979–2005 and its relationship with the precipitation associated with the El Niño Southern Oscillation (ENSO) using the sea surface temperature (SST)-constrained Atmospheric Model Intercomparison Project (AMIP) simulations of the Coupled Model Intercomparison Project Phase 5 (CMIP5) climate models. All of the 29 models show a strengthening of the WC trend in response to an increase in the SST zonal gradient along the equator. Despite the same SST-constrained AMIP simulations, however, a large diversity is found among the CMIP5 climate models in the magnitude of the WC trend. The relationship between the WC trend and precipitation anomalies (PRCPAs) associated with ENSO (ENSO-related PRCPAs) shows that the longitudinal position of the ENSO-related PRCPAs in the western tropical Pacific is closely related to the magnitude of the WC trend. Specifically, it is found that the strengthening of the WC trend is large (small) in the CMIP5 AMIP simulations in which the ENSO-related PRCPAs are located relatively westward (eastward) in the western tropical Pacific. Furthermore, the zonal shift of the ENSO-related precipitation in the western tropical Pacific, which is associated with the climatological mean precipitation in the tropical Pacific, could play an important role in modifying the WC trend in the CMIP5 climate models.
Parametric Investigation of Liquid Jets in Low Gravity
NASA Technical Reports Server (NTRS)
Chato, David J.
2005-01-01
An axisymmetric phase field model is developed and used to model surface tension forces on liquid jets in microgravity. The previous work in this area is reviewed and a baseline drop tower experiment selected for model comparison. This paper uses the model to parametrically investigate the influence of key parameters on the geysers formed by jets in microgravity. Investigation of the contact angle showed the expected trend: increasing the contact angle increases geyser height. Investigation of the tank radius showed some interesting effects and demonstrated that the zone of free surface deformation is quite large. Variation of the surface tension with a laminar jet showed clearly the evolution of free surface shape with Weber number, and predicted a breakthrough Weber number of 1.
Shared and Distinct Rupture Discriminants of Small and Large Intracranial Aneurysms.
Varble, Nicole; Tutino, Vincent M; Yu, Jihnhee; Sonig, Ashish; Siddiqui, Adnan H; Davies, Jason M; Meng, Hui
2018-04-01
Many ruptured intracranial aneurysms (IAs) are small. Clinical presentations suggest that small and large IAs could have different phenotypes. It is unknown if small and large IAs have different characteristics that discriminate rupture. We analyzed morphological, hemodynamic, and clinical parameters of 413 retrospectively collected IAs (training cohort; 102 ruptured IAs). Hierarchical cluster analysis was performed to determine a size cutoff to dichotomize the IA population into small and large IAs. We applied multivariate logistic regression to build rupture discrimination models for small IAs, large IAs, and an aggregation of all IAs. We validated the ability of these 3 models to predict rupture status in a second, independently collected cohort of 129 IAs (testing cohort; 14 ruptured IAs). Hierarchical cluster analysis in the training cohort confirmed that small and large IAs are best separated at 5 mm based on morphological and hemodynamic features (area under the curve=0.81). For small IAs (<5 mm), the resulting rupture discrimination model included undulation index, oscillatory shear index, previous subarachnoid hemorrhage, and absence of multiple IAs (area under the curve=0.84; 95% confidence interval, 0.78-0.88), whereas for large IAs (≥5 mm), the model included undulation index, low wall shear stress, previous subarachnoid hemorrhage, and IA location (area under the curve=0.87; 95% confidence interval, 0.82-0.93). The model for the aggregated training cohort retained all the parameters in the size-dichotomized models. Results in the testing cohort showed that the size-dichotomized rupture discrimination model had higher sensitivity (64% versus 29%) and accuracy (77% versus 74%), marginally higher area under the curve (0.75; 95% confidence interval, 0.61-0.88 versus 0.67; 95% confidence interval, 0.52-0.82), and similar specificity (78% versus 80%) compared with the aggregate-based model. Small (<5 mm) and large (≥5 mm) IAs have different hemodynamic and clinical, but not morphological, rupture discriminants. Size-dichotomized rupture discrimination models performed better than the aggregate model. © 2018 American Heart Association, Inc.
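The rupture discrimination models above are multivariate logistic regressions scored by the area under the ROC curve; a minimal sketch of that pipeline on synthetic data follows. The feature names mirror the small-IA model's predictors, but the data, coefficients and resulting AUC are entirely illustrative and do not reproduce the study's cohorts.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the small-IA rupture model: logistic regression
# on (undulation index, oscillatory shear index, prior SAH, multiple IAs).
n = 400
X = np.column_stack([
    rng.normal(0.1, 0.05, n),    # undulation index
    rng.normal(0.02, 0.01, n),   # oscillatory shear index
    rng.integers(0, 2, n),       # previous subarachnoid hemorrhage (0/1)
    rng.integers(0, 2, n),       # multiple IAs (0/1)
])
# Invented "true" effect sizes used only to generate labels.
logit = -3 + 15 * X[:, 0] + 40 * X[:, 1] + 1.2 * X[:, 2] - 0.8 * X[:, 3]
y = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```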
The topology of large-scale structure. VI - Slices of the universe
NASA Astrophysics Data System (ADS)
Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.
1992-03-01
Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cold dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology, like those of the observed slices.
The topology of large-scale structure. VI - Slices of the universe
NASA Technical Reports Server (NTRS)
Park, Changbom; Gott, J. R., III; Melott, Adrian L.; Karachentsev, I. D.
1992-01-01
Results of an investigation of the topology of large-scale structure in two observed slices of the universe are presented. Both slices pass through the Coma cluster and their depths are 100 and 230/h Mpc. The present topology study shows that the largest void in the CfA slice is divided into two smaller voids by a statistically significant line of galaxies. The topology of toy models like the white noise and bubble models is shown to be inconsistent with that of the observed slices. A large N-body simulation was made of the biased cold dark matter model and the slices are simulated by matching them in selection functions and boundary conditions. The genus curves for these simulated slices are spongelike and have a small shift in the direction of a meatball topology, like those of the observed slices.
An economic model of large Medicaid practices.
Cromwell, J; Mitchell, J B
1984-01-01
Public attention given to Medicaid "mills" prompted this more general investigation of the origins of large Medicaid practices. A dual market demand model is proposed showing how Medicaid competes with private insurers for scarce physician time. Various program parameters--fee schedules, coverage, collection costs--are analyzed along with physician preferences, specialties, and other supply-side characteristics. Maximum likelihood techniques are used to test the model. The principal finding is that in raising Medicaid fees, as many physicians opt into the program as expand their Medicaid caseloads to exceptional levels, leaving the maldistribution of patients unaffected while notably improving access. Still, the fact that Medicaid fees are lower than those of private insurers does lead to reduced access to more qualified practitioners. Where anti-Medicaid sentiment is stronger, access is also reduced and large Medicaid practices more likely to flourish. PMID:6376426
Short-Term Forecasting of Taiwanese Earthquakes Using a Universal Model of Fusion-Fission Processes
Cheong, Siew Ann; Tan, Teck Liang; Chen, Chien-Chih; Chang, Wu-Lung; Liu, Zheng; Chew, Lock Yue; Sloot, Peter M. A.; Johnson, Neil F.
2014-01-01
Predicting how large an earthquake can be, and where and when it will strike, remains an elusive goal in spite of the ever-increasing volume of data collected by earth scientists. In this paper, we introduce a universal model of fusion-fission processes that can be used to predict earthquakes starting from catalog data. We show how the equilibrium dynamics of this model very naturally explains the Gutenberg-Richter law. Using the high-resolution earthquake catalog of Taiwan between Jan 1994 and Feb 2009, we illustrate how out-of-equilibrium spatio-temporal signatures in the time interval between earthquakes and the integrated energy released by earthquakes can be used to reliably determine the times, magnitudes, and locations of large earthquakes, as well as the maximum numbers of large aftershocks that would follow. PMID:24406467
NASA Astrophysics Data System (ADS)
Bastola, S.; Bras, R. L.
2017-12-01
Feedbacks between vegetation and the soil nutrient cycle are important in ecosystems where nitrogen limits plant growth, and consequently influences the carbon balance in the plant-soil system. However, many biosphere models do not include such feedbacks, because interactions between carbon and the nitrogen cycle can be complex, and remain poorly understood. In this study we coupled a nitrogen cycle model with an eco-hydrological model by using the concept of carbon cost economics. This concept accounts for different "costs" to the plant of acquiring nitrogen via different pathways. This study builds on tRIBS-VEGGIE, a spatially explicit hydrological model coupled with a model of photosynthesis, stomatal resistance, and energy balance, by combining it with a model of nitrogen recycling. Driven by climate and spatially explicit data of soils, vegetation and topography, the model (referred to as tRIBS-VEGGIE-CN) simulates the dynamics of carbon and nitrogen in the soil-plant system; the dynamics of vegetation; and different components of the hydrological cycle. The tRIBS-VEGGIE-CN is applied in a humid tropical watershed at the Luquillo Critical Zone Observatory (LCZO). The region is characterized by high availability and cycling of nitrogen, high soil respiration rates, and large carbon stocks. We drive the model under contemporary CO2 and hydro-climatic forcing and compare results to a simulation under a doubling of CO2 and a range of future climate scenarios. The results with parameterization of nitrogen limitation based on carbon cost economics show that the carbon cost of nitrogen acquisition is 14% of the net primary productivity (NPP) and that the N uptake cost for different pathways varies over a large range depending on leaf nitrogen content, turnover rates of carbon in soil, and nitrogen cycling processes. Moreover, the N fertilization simulation experiment shows that the application of N fertilizer does not significantly change the simulated NPP. Furthermore, an experiment with a doubling of the CO2 concentration level shows a significant increase of the NPP and turnover of plant tissues. The simulation with future climate scenarios shows a consistent decrease in NPP, but the uncertainties in projected NPP arising from the selection of climate model and scenario are large.
Spin Funneling for Enhanced Spin Injection into Ferromagnets
Sayed, Shehrin; Diep, Vinh Q.; Camsari, Kerem Yunus; Datta, Supriyo
2016-01-01
It is well-established that high spin-orbit coupling (SOC) materials convert a charge current density into a spin current density which can be used to switch a magnet efficiently, and there is increasing interest in identifying materials with large spin Hall angle for lower switching current. Using experimentally benchmarked models, we show that composite structures can be designed using existing spin Hall materials such that the effective spin Hall angle is larger by an order of magnitude. The basic idea is to funnel spins from a large area of spin Hall material into a small area of ferromagnet using a normal metal (NM) with large spin diffusion length and low resistivity like Cu or Al. We show that this approach is increasingly effective as magnets get smaller. We avoid unwanted charge current shunting by the low-resistivity NM layer by utilizing the newly discovered phenomenon of pure spin conduction in ferromagnetic insulators (FMIs) via magnon diffusion. We provide a spin circuit model for magnon diffusion in an FMI that is benchmarked against recent experiments and theory. PMID:27374496
Spin Funneling for Enhanced Spin Injection into Ferromagnets
NASA Astrophysics Data System (ADS)
Sayed, Shehrin; Diep, Vinh Q.; Camsari, Kerem Yunus; Datta, Supriyo
2016-07-01
It is well-established that high spin-orbit coupling (SOC) materials convert a charge current density into a spin current density which can be used to switch a magnet efficiently, and there is increasing interest in identifying materials with large spin Hall angle for lower switching current. Using experimentally benchmarked models, we show that composite structures can be designed using existing spin Hall materials such that the effective spin Hall angle is larger by an order of magnitude. The basic idea is to funnel spins from a large area of spin Hall material into a small area of ferromagnet using a normal metal (NM) with large spin diffusion length and low resistivity like Cu or Al. We show that this approach is increasingly effective as magnets get smaller. We avoid unwanted charge current shunting by the low-resistivity NM layer by utilizing the newly discovered phenomenon of pure spin conduction in ferromagnetic insulators (FMIs) via magnon diffusion. We provide a spin circuit model for magnon diffusion in an FMI that is benchmarked against recent experiments and theory.
Impact of a large density gradient on linear and nonlinear edge-localized mode simulations
Xi, P. W.; Xu, X. Q.; Xia, T. Y.; ...
2013-09-27
Here, the impact of a large density gradient on edge-localized modes (ELMs) is studied linearly and nonlinearly by employing both two-fluid and gyro-fluid simulations. In two-fluid simulations, the ion diamagnetic stabilization on high-n modes disappears when the large density gradient is taken into account. But gyro-fluid simulations show that the finite Larmor radius (FLR) effect can effectively stabilize high-n modes, so the ion diamagnetic effect alone is not sufficient to represent the FLR stabilizing effect. We further demonstrate that additional gyroviscous terms must be kept in the two-fluid model to recover the linear results from the gyro-fluid model. Nonlinear simulations show that the density variation significantly weakens the E × B shearing at the top of the pedestal and thus leads to more energy loss during ELMs. The turbulence spectrum after an ELM crash is measured and follows the relation $P(k_{z})\propto k_{z}^{-3.3}$.
Development and application of an acceptance testing model
NASA Technical Reports Server (NTRS)
Pendley, Rex D.; Noonan, Caroline H.; Hall, Kenneth R.
1992-01-01
The process of acceptance testing large software systems for NASA has been analyzed, and an empirical planning model of the process constructed. This model gives managers accurate predictions of the staffing needed, the productivity of a test team, and the rate at which the system will pass. Applying the model to a new system shows a high level of agreement between the model and actual performance. The model also gives managers an objective measure of process improvement.
NASA Astrophysics Data System (ADS)
Pignatari, Marco; Hoppe, Peter; Trappitsch, Reto; Fryer, Chris; Timmes, F. X.; Herwig, Falk; Hirschi, Raphael
2018-01-01
Carbon-rich presolar grains are found in primitive meteorites, with isotopic measurements to date suggesting a core-collapse supernova origin for some of them. This holds for about 1-2% of presolar silicon carbide (SiC) grains, so-called Type X and C grains, and about 30% of presolar graphite grains. Presolar SiC grains of Type X show anomalous isotopic signatures for several elements heavier than iron compared to the solar abundances: most notably for strontium, zirconium, molybdenum, ruthenium and barium. We study the nucleosynthesis of zirconium and molybdenum isotopes in the He-shell of three core-collapse supernova models of 15, 20 and 25 M⊙ with solar metallicity, and compare the results to measurements of presolar grains. We find that the stellar models show a large scatter of isotopic abundances for zirconium and molybdenum, but the mass-averaged abundances are qualitatively similar to the measurements. We find all models show an excess of 96Zr relative to the measurements, but the model abundances are affected by the fractionation between Sr and Zr since a large contribution to 90Zr is due to the radiogenic decay of 90Sr. Some supernova models show excesses of 95,97Mo and depletion of 96Mo relative to solar. The mass-averaged distribution from these models shows an excess of 100Mo, but this may be alleviated by very recent neutron-capture cross section measurements. We encourage future explorations to assess the impact of the uncertainties in key neutron-capture reaction rates that lie along the n-process path.
A robust and high-performance queue management controller for large round trip time networks
NASA Astrophysics Data System (ADS)
Khoshnevisan, Ladan; Salmasi, Farzad R.
2016-05-01
Congestion management for transmission control protocol is of utmost importance to prevent packet loss within a network. This necessitates strategies for active queue management. The most widely applied active queue management strategies have inherent disadvantages which lead to suboptimal performance and even instability in the case of large round trip time and/or external disturbance. This paper presents an internal model control robust queue management scheme with two degrees of freedom in order to restrict the undesired effects of large and small round trip times and parameter variations in the queue management. Conventional approaches such as proportional integral and random early detection procedures lead to unstable behaviour due to large delay. Moreover, the internal model control-Smith scheme suffers from large oscillations due to the large round trip time. On the other hand, other schemes such as internal model control-proportional integral and derivative show excessively sluggish performance for small round trip time values. To overcome these shortcomings, we introduce a system entailing two individual controllers for queue management and disturbance rejection, simultaneously. Simulation results based on Matlab/Simulink and also Network Simulator 2 (NS2) demonstrate the effectiveness of the procedure and verify the analytical approach.
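To see why a large round trip time forces conservative tuning in internal-model-control designs, consider the classic single-controller IMC-PI rule for a first-order-plus-dead-time plant, a common low-order stand-in for TCP/AQM queue dynamics in which the RTT enters as dead time. The sketch below uses that textbook rule with invented plant numbers; the paper's two-degree-of-freedom scheme is considerably more elaborate than this.

```python
# Minimal IMC-PI tuning sketch for G(s) = K e^{-theta s} / (tau s + 1).
# As the dead time theta (a proxy for the round trip time) grows, the
# controller gain must shrink, i.e. the loop becomes more sluggish.
def imc_pi(K, tau, theta, lam):
    """Textbook IMC-PI rule: lam is the closed-loop filter time constant;
    robustness demands lam grow with the dead time theta."""
    Kc = tau / (K * (lam + theta))   # proportional gain
    Ti = tau                          # integral time
    return Kc, Ti

K, tau = 2.0, 1.5                     # illustrative plant parameters
for theta in (0.05, 0.5, 2.0):        # small vs large round trip time
    lam = max(0.25 * tau, 0.8 * theta)  # a common rule of thumb
    print(theta, imc_pi(K, tau, theta, lam))
```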
The isentropic quantum drift-diffusion model in two or three space dimensions
NASA Astrophysics Data System (ADS)
Chen, Xiuqing
2009-05-01
We investigate the isentropic quantum drift-diffusion model, a fourth-order parabolic system, in space dimensions d = 2, 3. First, we establish global weak solutions for large initial data with periodic boundary conditions. Then we show the semiclassical limit by delicate interpolation estimates and a compactness argument.
MODEL2TALK: An Intervention to Promote Productive Classroom Talk
ERIC Educational Resources Information Center
van der Veen, Chiel; van der Wilt, Femke; van Kruistum, Claudia; van Oers, Bert; Michaels, Sarah
2017-01-01
This article describes the MODEL2TALK intervention, which aims to promote young children's oral communicative competence through productive classroom talk. Productive classroom talk provides children in early childhood education with many opportunities to talk and think together. Results from a large-scale study show that productive classroom talk…
Solar luminosity variations and the climate of Mars
NASA Technical Reports Server (NTRS)
Toon, O. B.; Gierasch, P. J.; Sagan, C.
1975-01-01
A simple climatological model of Mars indicates that its climate may be more sensitive to luminosity changes than earth's because of strong positive feedback mechanisms at work on Mars. Mariner 9 photographs of Mars show an abundance of large sinuous channels that point to an epoch of higher atmospheric pressures and abundant liquid water. Such an epoch could have been the result of large-scale solar luminosity variations. The climatological model suggests that other less controversial mechanisms, such as obliquity or polar albedo changes, also could have led to such an epoch.
Neural control and transient analysis of the LCL-type resonant converter
NASA Astrophysics Data System (ADS)
Zouggar, S.; Nait Charif, H.; Azizi, M.
2000-07-01
This paper proposes a generalised inverse learning structure to control the LCL converter. A feedforward neural network is trained to act as an inverse model of the LCL converter; both are then cascaded such that the composed system results in an identity mapping between the desired response and the LCL output voltage. Using the large signal model, we analyse the transient output response of the controlled LCL converter in the case of large load variations. The simulation results show the effectiveness of using neural networks to regulate the LCL converter.
Malignant infarction of the middle cerebral artery in a porcine model. A pilot study.
Arikan, Fuat; Martínez-Valverde, Tamara; Sánchez-Guerrero, Ángela; Campos, Mireia; Esteves, Marielle; Gandara, Dario; Torné, Ramon; Castro, Lidia; Dalmau, Antoni; Tibau, Joan; Sahuquillo, Juan
2017-01-01
Interspecies variability and poor clinical translation from rodent studies indicate that large gyrencephalic animal stroke models are urgently needed. We present a proof-of-principle study describing an alternative animal model of malignant infarction of the middle cerebral artery (MCA) in the common pig and illustrate some of its potential applications. We report on metabolic patterns, ionic profile, brain partial pressure of oxygen (PtiO2), expression of sulfonylurea receptor 1 (SUR1), and the transient receptor potential melastatin 4 (TRPM4). A 5-hour ischemic infarct of the MCA territory was performed in five 2.5- to 3-month-old female hybrid pigs (Large White x Landrace) using a frontotemporal approach. The core and penumbra areas were intraoperatively monitored to determine the metabolic and ionic profiles. Infarct volume was determined by 2,3,5-triphenyltetrazolium chloride staining, and immunohistochemical analysis was performed to determine SUR1 and TRPM4 expression. PtiO2 monitoring showed an abrupt reduction in values close to 0 mmHg after MCA occlusion in the core area. Hourly cerebral microdialysis showed that the infarcted tissue was characterized by reduced concentrations of glucose (0.03 mM) and pyruvate (0.003 mM) and increases in lactate levels (8.87 mM), lactate-pyruvate ratio (4202), glycerol levels (588 μM), and potassium concentration (27.9 mmol/L). Immunohistochemical analysis showed increased expression of SUR1-TRPM4 channels. The aim of the present proof-of-principle study was to document the feasibility of a large animal model of malignant MCA infarction by performing transcranial occlusion of the MCA in the common pig, as an alternative to lissencephalic animals. This model may be useful for detailed studies of cerebral ischemia mechanisms and the development of neuroprotective strategies.
Kanagawa, Motoi; Lu, Zhongpeng; Ito, Chiyomi; Matsuda, Chie; Miyake, Katsuya; Toda, Tatsushi
2014-01-01
Defects in dystroglycan glycosylation are associated with a group of muscular dystrophies, termed dystroglycanopathies, that include Fukuyama congenital muscular dystrophy (FCMD). It is widely believed that abnormal glycosylation of dystroglycan leads to disease-causing membrane fragility. We previously generated knock-in mice carrying a founder retrotransposal insertion in fukutin, the gene responsible for FCMD, but these mice did not develop muscular dystrophy, which hindered exploring therapeutic strategies. We hypothesized that dysferlin functions may contribute to muscle cell viability in the knock-in mice; however, pathological interactions between glycosylation abnormalities and dysferlin defects remain unexplored. To investigate contributions of dysferlin deficiency to the pathology of dystroglycanopathy, we have crossed dysferlin-deficient dysferlin sjl/sjl mice to the fukutin-knock-in fukutin Hp/− and Large-deficient Large myd/myd mice, which are phenotypically distinct models of dystroglycanopathy. The fukutin Hp/− mice do not show a dystrophic phenotype; however, (dysferlin sjl/sjl: fukutin Hp/−) mice showed a deteriorated phenotype compared with (dysferlin sjl/sjl: fukutin Hp/+) mice. These data indicate that the absence of functional dysferlin in the asymptomatic fukutin Hp/− mice triggers disease manifestation and aggravates the dystrophic phenotype. A series of pathological analyses using double mutant mice for Large and dysferlin indicate that the protective effects of dysferlin appear diminished when the dystrophic pathology is severe and also may depend on the amount of dysferlin proteins. Together, our results show that dysferlin exerts protective effects on the fukutin Hp/− FCMD mouse model, and the (dysferlin sjl/sjl: fukutin Hp/−) mice will be useful as a novel model for a recently proposed antisense oligonucleotide therapy for FCMD. PMID:25198651
On the contribution of active galactic nuclei to the high-redshift metagalactic ionizing background
NASA Astrophysics Data System (ADS)
D'Aloisio, Anson; Upton Sanderbeck, Phoebe R.; McQuinn, Matthew; Trac, Hy; Shapiro, Paul R.
2017-07-01
Motivated by the claimed detection of a large population of faint active galactic nuclei (AGNs) at high redshift, recent studies have proposed models in which AGNs contribute significantly to the z > 4 H I ionizing background. In some models, AGNs are even the chief sources of reionization. If proved true, these models would make necessary a complete revision to the standard view that galaxies dominated the high-redshift ionizing background. It has been suggested that AGN-dominated models can better account for two recent observations that appear to be in conflict with the standard view: (1) large opacity variations in the z ˜ 5.5 H I Ly α forest, and (2) slow evolution in the mean opacity of the He II Ly α forest. Large spatial fluctuations in the ionizing background from the brightness and rarity of AGNs may account for the former, while the earlier onset of He II reionization in these models may account for the latter. Here we show that models in which AGN emissions source ≳50 per cent of the ionizing background generally provide a better fit to the observed H I Ly α forest opacity variations compared to standard galaxy-dominated models. However, we argue that these AGN-dominated models are in tension with constraints on the thermal history of the intergalactic medium (IGM). Under standard assumptions about the spectra of AGNs, we show that the earlier onset of He II reionization heats up the IGM well above recent temperature measurements. We further argue that the slower evolution of the mean opacity of the He II Ly α forest relative to simulations may reflect deficiencies in current simulations rather than favour AGN-dominated models as has been suggested.
Towards Modeling False Memory With Computational Knowledge Bases.
Li, Justin; Kohanyi, Emma
2017-01-01
One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.
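A minimal sketch of the kind of semantic-network spreading-activation model used for the Deese-Roediger-McDermott task is given below. The word list and association weights are invented for illustration rather than drawn from WordNet or DBpedia; the point is only that activation spreading from studied associates converges on the unstudied lure.

```python
import numpy as np

# Toy spreading-activation sketch of the DRM false-memory effect: studied
# words are all strong associates of an unstudied lure ("sleep"), so
# activation spreading through the network accumulates on the lure.
words = ["bed", "rest", "tired", "dream", "sleep"]
W = np.array([            # invented directed association strengths
    [0.0, 0.1, 0.1, 0.1, 0.6],
    [0.1, 0.0, 0.2, 0.1, 0.5],
    [0.1, 0.2, 0.0, 0.1, 0.5],
    [0.1, 0.1, 0.1, 0.0, 0.6],
    [0.3, 0.3, 0.3, 0.3, 0.0],
])

act = np.array([1.0, 1.0, 1.0, 1.0, 0.0])  # study list; lure unstudied
decay = 0.6
for _ in range(5):                          # iterative spreading with decay
    act = decay * act + (1 - decay) * W.T @ act

for w, a in zip(words, act):
    print(f"{w:6s} {a:.3f}")  # the lure ends up about as active as list items
```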
Microstructure-based hyperelastic models for closed-cell solids
Wyatt, Hayley
2017-01-01
For cellular bodies involving large elastic deformations, mesoscopic continuum models that take into account the interplay between the geometry and the microstructural responses of the constituents are developed, analysed and compared with finite-element simulations of cellular structures with different architecture. For these models, constitutive restrictions for the physical plausibility of the material responses are established, and global descriptors such as nonlinear elastic and shear moduli and Poisson’s ratio are obtained from the material characteristics of the constituents. Numerical results show that these models capture well the mechanical responses of finite-element simulations for three-dimensional periodic structures of neo-Hookean material with closed cells under large tension. In particular, the mesoscopic models predict the macroscopic stiffening of the structure when the stiffness of the cell-core increases. PMID:28484340
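For context, a standard incompressible neo-Hookean constitutive law of the kind used for the cell walls in these simulations can be written as follows; the notation here is generic and not necessarily the paper's.

```latex
% Standard incompressible neo-Hookean strain energy (generic notation):
W(\mathbf{F}) = \frac{\mu}{2}\left(I_1 - 3\right),
\qquad I_1 = \operatorname{tr}\!\left(\mathbf{F}^{\mathsf{T}}\mathbf{F}\right),
\qquad \det\mathbf{F} = 1.
% Associated Cauchy stress, with p the Lagrange multiplier enforcing
% incompressibility and B the left Cauchy-Green tensor:
\boldsymbol{\sigma} = -p\,\mathbf{I} + \mu\,\mathbf{B},
\qquad \mathbf{B} = \mathbf{F}\mathbf{F}^{\mathsf{T}}.
```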
Microstructure-based hyperelastic models for closed-cell solids.
Mihai, L Angela; Wyatt, Hayley; Goriely, Alain
2017-04-01
For cellular bodies involving large elastic deformations, mesoscopic continuum models that take into account the interplay between the geometry and the microstructural responses of the constituents are developed, analysed and compared with finite-element simulations of cellular structures with different architecture. For these models, constitutive restrictions for the physical plausibility of the material responses are established, and global descriptors such as nonlinear elastic and shear moduli and Poisson's ratio are obtained from the material characteristics of the constituents. Numerical results show that these models capture well the mechanical responses of finite-element simulations for three-dimensional periodic structures of neo-Hookean material with closed cells under large tension. In particular, the mesoscopic models predict the macroscopic stiffening of the structure when the stiffness of the cell-core increases.
Microstructure-based hyperelastic models for closed-cell solids
NASA Astrophysics Data System (ADS)
Mihai, L. Angela; Wyatt, Hayley; Goriely, Alain
2017-04-01
For cellular bodies involving large elastic deformations, mesoscopic continuum models that take into account the interplay between the geometry and the microstructural responses of the constituents are developed, analysed and compared with finite-element simulations of cellular structures with different architecture. For these models, constitutive restrictions for the physical plausibility of the material responses are established, and global descriptors such as nonlinear elastic and shear moduli and Poisson's ratio are obtained from the material characteristics of the constituents. Numerical results show that these models capture well the mechanical responses of finite-element simulations for three-dimensional periodic structures of neo-Hookean material with closed cells under large tension. In particular, the mesoscopic models predict the macroscopic stiffening of the structure when the stiffness of the cell-core increases.
Protein Structure Determination using Metagenome sequence data
Ovchinnikov, Sergey; Park, Hahnbeom; Varghese, Neha; Huang, Po-Ssu; Pavlopoulos, Georgios A.; Kim, David E.; Kamisetty, Hetunandan; Kyrpides, Nikos C.; Baker, David
2017-01-01
Despite decades of work by structural biologists, there are still ~5200 protein families with unknown structure outside the range of comparative modeling. We show that Rosetta structure prediction guided by residue-residue contacts inferred from evolutionary information can accurately model proteins that belong to large families, and that metagenome sequence data more than triples the number of protein families with sufficient sequences for accurate modeling. We then integrate metagenome data, contact based structure matching and Rosetta structure calculations to generate models for 614 protein families with currently unknown structures; 206 are membrane proteins and 137 have folds not represented in the PDB. This approach provides the representative models for large protein families originally envisioned as the goal of the protein structure initiative at a fraction of the cost. PMID:28104891
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
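A schematic one-dimensional form of such a model, and the flavor of its homogenization, can be written down as follows; these equations are generic stand-ins under stated assumptions, not the paper's exact system with its residence-index formulation.

```latex
% Generic 1-D reaction-diffusion-advection model with coefficients that
% vary rapidly on the fine scale x/epsilon (schematic only):
\frac{\partial u}{\partial t}
  = \frac{\partial}{\partial x}\!\left(
      D\!\left(\tfrac{x}{\varepsilon}\right)\frac{\partial u}{\partial x}
      - v\!\left(\tfrac{x}{\varepsilon}\right) u \right)
    + r\!\left(\tfrac{x}{\varepsilon}\right) u .
% In the purely diffusive, periodic case the homogenized coefficient is
% the harmonic mean of the fine-scale diffusivity over one period L:
\bar{D} = \left\langle D^{-1} \right\rangle^{-1},
\qquad
\langle f \rangle = \frac{1}{L}\int_0^L f(y)\,\mathrm{d}y .
```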
Barberopoulou, A.; Qamar, A.; Pratt, T.L.; Steele, W.P.
2006-01-01
Analysis of strong-motion instrument recordings in Seattle, Washington, resulting from the 2002 Mw 7.9 Denali, Alaska, earthquake reveals that amplification in the 0.2-to 1.0-Hz frequency band is largely governed by the shallow sediments both inside and outside the sedimentary basins beneath the Puget Lowland. Sites above the deep sedimentary strata show additional seismic-wave amplification in the 0.04- to 0.2-Hz frequency range. Surface waves generated by the Mw 7.9 Denali, Alaska, earthquake of 3 November 2002 produced pronounced water waves across Washington state. The largest water waves coincided with the area of largest seismic-wave amplification underlain by the Seattle basin. In the current work, we present reports that show Lakes Union and Washington, both located on the Seattle basin, are susceptible to large water waves generated by large local earthquakes and teleseisms. A simple model of a water body is adopted to explain the generation of waves in water basins. This model provides reasonable estimates for the water-wave amplitudes in swimming pools during the Denali earthquake but appears to underestimate the waves observed in Lake Union.
Chasing Perfection: Should We Reduce Model Uncertainty in Carbon Cycle-Climate Feedbacks
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Lombardozzi, D.; Wieder, W. R.; Lindsay, K. T.; Thomas, R. Q.
2015-12-01
Earth system model simulations of the terrestrial carbon (C) cycle show large multi-model spread in the carbon-concentration and carbon-climate feedback parameters. Large differences among models are also seen in their simulation of global vegetation and soil C stocks and other aspects of the C cycle, prompting concern about model uncertainty and our ability to faithfully represent fundamental aspects of the terrestrial C cycle in Earth system models. Benchmarking analyses that compare model simulations with common datasets have been proposed as a means to assess model fidelity with observations, and various model-data fusion techniques have been used to reduce model biases. While such efforts will reduce multi-model spread, they may not help reduce uncertainty (and increase confidence) in projections of the C cycle over the twenty-first century. Many ecological and biogeochemical processes represented in Earth system models are poorly understood at both the site scale and across large regions, where biotic and edaphic heterogeneity are important. Our experience with the Community Land Model (CLM) suggests that large uncertainty in the terrestrial C cycle and its feedback with climate change is an inherent property of biological systems. The challenge of representing life in Earth system models, with the rich diversity of lifeforms and complexity of biological systems, may necessitate a multitude of modeling approaches to capture the range of possible outcomes. Such models should encompass a range of plausible model structures. We distinguish between model parameter uncertainty and model structural uncertainty. Focusing on improved parameter estimates may, in fact, limit progress in assessing model structural uncertainty associated with realistically representing biological processes. Moreover, higher confidence may be achieved through better process representation, but this does not necessarily reduce uncertainty.
Investigation of low-latitude hydrogen emission in terms of a two-component interstellar gas model
NASA Technical Reports Server (NTRS)
Baker, P. L.; Burton, W. B.
1975-01-01
High-resolution 21-cm hydrogen line observations at low galactic latitude are analyzed to determine the large-scale distribution of galactic hydrogen. Distribution parameters are found by model fitting, optical depth effects are computed using a two-component gas model suggested by the observations, and calculations are made for a one-component uniform spin-temperature gas model to show the systematic departures between this model and data obtained by incorrect treatment of the optical depth effects. Synthetic 21-cm line profiles are computed from the two-component model, and the large-scale trends of the observed emission profiles are reproduced together with the magnitude of the small-scale emission irregularities. Values are determined for the thickness of the galactic hydrogen disk between half density points, the total observed neutral hydrogen mass of the galaxy, and the central number density of the intercloud hydrogen atoms. It is shown that typical hydrogen clouds must be between 1 and 13 pc in diameter and that optical thinness exists on large-scale despite the presence of optically thin gas.
Flattening the inflaton potential beyond minimal gravity
NASA Astrophysics Data System (ADS)
Lee, Hyun Min
2018-01-01
We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.
Modeling tsunami damage in Aceh: a reply
Louis R. Iverson; Anantha M. Prasad
2008-01-01
In reply to the critique of Baird and Kerr, we emphasize that our model is a generalized vulnerability model, built from easily acquired data from anywhere in the world, to identify areas with probable susceptibility to large tsunamis--and discuss their other criticisms in detail. We also show that a rejection of the role of trees in helping protect vulnerable areas is...
High subsonic flow tests of a parallel pipe followed by a large area ratio diffuser
NASA Technical Reports Server (NTRS)
Barna, P. S.
1975-01-01
Experiments were performed on a pilot model duct system in order to explore its aerodynamic characteristics. The model was scaled from a design projected for the high speed operation mode of the Aircraft Noise Reduction Laboratory. The test results show that the model performed satisfactorily and therefore the projected design will most likely meet the specifications.
Robot vibration control using inertial damping forces
NASA Technical Reports Server (NTRS)
Lee, Soo Han; Book, Wayne J.
1991-01-01
This paper concerns the suppression of the vibration of a large flexible robot by inertial forces of a small robot which is located at the tip of the large robot. A controller for generating damping forces on the large robot is designed based on a two-time-scale model. The controller does not need to calculate the quasi-steady variables and is computationally efficient. Simulation results show the effectiveness of the inertial forces and the controller designed.
Impurity effects in crystal growth from solutions: Steady states, transients and step bunch motion
NASA Astrophysics Data System (ADS)
Ranganathan, Madhav; Weeks, John D.
2014-05-01
We analyze a recently formulated model in which adsorbed impurities impede the motion of steps in crystals grown from solutions, while moving steps can remove or deactivate adjacent impurities. In this model, the chemical potential change of an atom on incorporation/desorption to/from a step is calculated for different step configurations and used in the dynamical simulation of step motion. The crucial difference between solution growth and vapor growth is related to the dependence of the driving force for growth of the main component on the size of the terrace in front of the step. This model reproduces features seen in solution-growth experiments: a dead zone with essentially no growth at low supersaturation, and the motion of large coherent step bunches at larger supersaturation. The transient behavior shows a regime wherein steps bunch together and move coherently as the bunch size increases. The behavior at large line tension is reminiscent of the kink-poisoning mechanism of impurities observed in calcite growth. Our model unifies different impurity models and gives a picture of nonequilibrium dynamics that includes both steady states and time-dependent behavior, and shows similarities with models of disordered systems and the pinning/depinning transition.
The impact of galaxy formation on satellite kinematics and redshift-space distortions
NASA Astrophysics Data System (ADS)
Orsi, Álvaro A.; Angulo, Raúl E.
2018-04-01
Galaxy surveys aim to map the large-scale structure of the Universe and use redshift-space distortions to constrain deviations from general relativity and probe the existence of massive neutrinos. However, the amount of information that can be extracted is limited by the accuracy of theoretical models used to analyse the data. Here, by using the L-Galaxies semi-analytical model run over the Millennium-XXL N-body simulation, we assess the impact of galaxy formation on satellite kinematics and the theoretical modelling of redshift-space distortions. We show that different galaxy selection criteria lead to noticeable differences in the radial distributions and velocity structure of satellite galaxies. Specifically, whereas samples of stellar mass selected galaxies feature satellites that roughly follow the dark matter, emission line satellite galaxies are located preferentially in the outskirts of haloes and display net infall velocities. We demonstrate that capturing these differences is crucial for modelling the multipoles of the correlation function in redshift space, even on large scales. In particular, we show how modelling small-scale velocities with a single Gaussian distribution leads to a poor description of the measured clustering. In contrast, we propose a parametrization that is flexible enough to model the satellite kinematics and that leads to an accurate description of the correlation function down to sub-Mpc scales. We anticipate that our model will be a necessary ingredient in improved theoretical descriptions of redshift-space distortions, which together could result in significantly tighter cosmological constraints and a more optimal exploitation of future large data sets.
Analysis of the pump-turbine S characteristics using the detached eddy simulation method
NASA Astrophysics Data System (ADS)
Sun, Hui; Xiao, Ruofu; Wang, Fujun; Xiao, Yexiang; Liu, Weichao
2015-01-01
Current research on pump-turbine units is focused on the unstable operation at off-design conditions, with the characteristic curves in generating mode being S-shaped. Unlike in traditional water turbines, pump-turbine operation along the S-shaped curve can lead to difficulties during load rejection, with unusual increases in the water pressure, which leads to machine vibrations. This paper describes both model tests and numerical simulations. A reduced scale model of a low specific speed pump-turbine was used for the performance tests, with comparisons to computational fluid dynamics (CFD) results. Predictions using the detached eddy simulation (DES) turbulence model, which is a combined Reynolds-averaged Navier-Stokes (RANS) and large eddy simulation (LES) model, are compared with the two-equation turbulence model results. The external characteristics as well as the internal flow are analysed for various guide vane openings to understand the unsteady flow along the so-called S characteristics of a pump-turbine. Comparison of the experimental data with the CFD results for various conditions and times shows that the DES model gives better agreement with experimental data than the two-equation turbulence model. For low flow conditions, the centrifugal forces and the large incident angle create large vortices between the guide vanes and the runner inlet in the runner passage, which is the main factor leading to the S-shaped characteristics. The DES model used here gives more accurate simulations of the internal flow characteristics of the pump-turbine and a more detailed force analysis which shows the mechanisms controlling the S characteristics.
NASA Technical Reports Server (NTRS)
Baurle, R. A.
2015-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, including both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit algebraic stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e., approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides the end-user with a mechanism for reducing model-form errors.
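The resolution check described above leans on the autocorrelation of the resolved velocity signal and its Fourier transform. A minimal sketch of that diagnostic in Python, using a synthetic probe signal rather than the study's PIV data; the window-length rule of thumb at the end is a common convention, not taken from the paper:

```python
import numpy as np

def autocorrelation(u):
    """Biased sample autocorrelation of a signal via FFT."""
    u = u - u.mean()
    n = len(u)
    f = np.fft.rfft(u, n=2 * n)              # zero-pad to avoid circular wrap
    acf = np.fft.irfft(f * np.conj(f))[:n]
    return acf / acf[0]                       # normalize so acf[0] == 1

def integral_time_scale(acf, dt):
    """Integrate the ACF up to its first zero crossing (rectangle rule)."""
    zero = np.argmax(acf <= 0.0)
    stop = zero if zero > 0 else len(acf)
    return dt * np.sum(acf[:stop])

# Synthetic stand-in for a velocity probe time series (not the paper's data).
rng = np.random.default_rng(0)
dt = 1e-5
u = np.convolve(rng.standard_normal(200_000), np.ones(50) / 50, mode="same")

acf = autocorrelation(u)
T = integral_time_scale(acf, dt)
# Rule of thumb: the averaging window should span many integral time scales.
print(f"integral time scale ~ {T:.2e} s, window = {len(u) * dt / T:.0f} T")
```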
Data Programming: Creating Large Training Sets, Quickly.
Ratner, Alexander; De Sa, Christopher; Wu, Sen; Selsam, Daniel; Ré, Christopher
2016-12-01
Large labeled training sets are the critical building blocks of supervised learning methods and are key enablers of deep learning techniques. For some applications, creating labeled training sets is the most time-consuming and expensive part of applying machine learning. We therefore propose a paradigm for the programmatic creation of training sets called data programming, in which users express weak supervision strategies or domain heuristics as labeling functions, which are programs that label subsets of the data, but that are noisy and may conflict. We show that by explicitly representing this training set labeling process as a generative model, we can "denoise" the generated training set, and establish theoretically that we can recover the parameters of these generative models in a handful of settings. We then show how to modify a discriminative loss function to make it noise-aware, and demonstrate our method over a range of discriminative models including logistic regression and LSTMs. Experimentally, on the 2014 TAC-KBP Slot Filling challenge, we show that data programming would have led to a new winning score, and also show that applying data programming to an LSTM model leads to a TAC-KBP score almost 6 F1 points above a state-of-the-art LSTM baseline (and into second place in the competition). Additionally, in initial user studies we observed that data programming may be an easier way for non-experts to create machine learning models when training data is limited or unavailable.
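To make the idea concrete, here is a minimal Python sketch of labeling functions over toy text snippets. The heuristics and the relation ("spouse") are invented for illustration, and the confidence-weighted vote stands in for the paper's generative model of labeling-function accuracies:

```python
import numpy as np

# Three toy labeling functions; +1 / -1 are labels, 0 means "abstain".
# These heuristics are invented for illustration, not from the paper.
def lf_keyword(x):  return 1 if "spouse" in x else 0
def lf_negation(x): return -1 if "not married" in x else 0
def lf_comma(x):    return 1 if ", wife of" in x or ", husband of" in x else 0

LFS = [lf_keyword, lf_negation, lf_comma]

def label_matrix(examples):
    return np.array([[lf(x) for lf in LFS] for x in examples])

def vote_probabilities(L):
    """Simplified denoising: confidence-weighted vote over non-abstaining
    LFs. The paper instead fits a generative model of LF accuracies."""
    score = L.sum(axis=1)
    coverage = (L != 0).sum(axis=1).clip(min=1)
    return 0.5 * (1 + score / coverage)       # P(y = +1 | LF outputs)

examples = ["ann , wife of bob", "carol is not married to dan", "eve's spouse"]
L = label_matrix(examples)
print(vote_probabilities(L))                  # -> [1.0, 0.0, 1.0]
```

The resulting probabilistic labels would then be fed to a noise-aware discriminative model (e.g. logistic regression) rather than being treated as ground truth.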
Seasonal-to-Interannual Variability and Land Surface Processes
NASA Technical Reports Server (NTRS)
Koster, Randal
2004-01-01
Atmospheric chaos severely limits the predictability of precipitation on subseasonal to interannual timescales. Hope for accurate long-term precipitation forecasts lies with simulating the atmospheric response to components of the Earth system, such as the ocean, that can be predicted beyond a couple of weeks. Indeed, seasonal forecast centers now rely heavily on forecasts of ocean circulation. Soil moisture, another slow component of the Earth system, is relatively ignored by the operational seasonal forecasting community. It is starting, however, to garner more attention. Soil moisture anomalies can persist for months. Because these anomalies can have a strong impact on evaporation and other surface energy fluxes, and because the atmosphere may respond consistently to anomalies in the surface fluxes, an accurate soil moisture initialization in a forecast system has the potential to provide additional forecast skill. This potential has motivated a number of atmospheric general circulation model (AGCM) studies of soil moisture and its contribution to variability in the climate system. Some of these studies even suggest that in continental midlatitudes during summer, oceanic impacts on precipitation are quite small relative to soil moisture impacts. The model results, though, are strongly model-dependent, with some models showing large impacts and others showing almost none at all. A validation of the model results with observations thus naturally suggests itself, but this is exceedingly difficult. The necessary contemporaneous soil moisture, evaporation, and precipitation measurements at the large scale are virtually non-existent, and even if they did exist, showing statistically that soil moisture affects rainfall would be difficult because the other direction of causality - wherein rainfall affects soil moisture - is unquestionably active and is almost certainly dominant. Nevertheless, joint analyses of observations and AGCM results do reveal some suggestions of land-atmosphere feedback in the observational record, suggestions that soil moisture can affect precipitation over seasonal timescales and across certain large continental areas. The strength of this observed feedback in nature is not large but is still significant enough to be potentially useful, e.g., for forecasts. This talk will address all of these issues. It will begin with a brief overview of land surface modeling in atmospheric models but will then focus on recent research - using both observations and models - into the impact of land surface processes on variability in the climate system.
Cortical circuitry implementing graphical models.
Litvak, Shai; Ullman, Shimon
2009-11-01
In this letter, we develop and simulate a large-scale network of spiking neurons that approximates the inference computations performed by graphical models. Unlike previous related schemes, which used sum and product operations in either the log or linear domains, the current model uses an inference scheme based on the sum and maximization operations in the log domain. Simulations show that using these operations, a large-scale circuit, which combines populations of spiking neurons as basic building blocks, is capable of finding close approximations to the full mathematical computations performed by graphical models within a few hundred milliseconds. The circuit is general in the sense that it can be wired for any graph structure, it supports multistate variables, and it uses standard leaky integrate-and-fire neuronal units. Following previous work, which proposed relations between graphical models and the large-scale cortical anatomy, we focus on the cortical microcircuitry and propose how anatomical and physiological aspects of the local circuitry may map onto elements of the graphical model implementation. We discuss in particular the roles of three major types of inhibitory neurons (small fast-spiking basket cells, large layer 2/3 basket cells, and double-bouquet neurons), subpopulations of strongly interconnected neurons with their unique connectivity patterns in different cortical layers, and the possible role of minicolumns in the realization of the population-based maximum operation.
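The inference scheme described above replaces sum-product message passing with sums and maximizations in the log domain. A minimal sketch of that max-sum computation on a chain-structured model (the mathematical operation the circuit approximates, not the spiking implementation; the potentials are random placeholders):

```python
import numpy as np

# Chain MRF with K-state variables; log-potentials are assumptions.
K, T = 3, 4
rng = np.random.default_rng(1)
log_unary = rng.standard_normal((T, K))       # log phi_t(x_t)
log_pair = rng.standard_normal((K, K))        # log psi(x_t, x_{t+1}), shared

def max_sum(log_unary, log_pair):
    """MAP assignment via max-sum: sums replace products (log domain),
    max replaces marginalization."""
    T, K = log_unary.shape
    msg = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    msg[0] = log_unary[0]
    for t in range(1, T):
        scores = msg[t - 1][:, None] + log_pair         # K x K
        back[t] = scores.argmax(axis=0)
        msg[t] = log_unary[t] + scores.max(axis=0)
    x = np.empty(T, dtype=int)
    x[-1] = msg[-1].argmax()
    for t in range(T - 1, 0, -1):                       # backtrack
        x[t - 1] = back[t, x[t]]
    return x, msg[-1].max()

assignment, log_score = max_sum(log_unary, log_pair)
print(assignment, log_score)
```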
Constitutive Behavior Modelling of AA1100-O at Large Strain and High Strain Rates
NASA Astrophysics Data System (ADS)
Testa, Gabriel; Iannitti, Gianluca; Ruggiero, Andrew; Gentile, Domenico; Bonora, Nicola
2017-06-01
The constitutive behavior of AA1100-O, provided as extruded bar, was investigated. Microscopic observation showed that the cross-section has a peculiar microstructure, consisting of an inner core with a large grain size surrounded by an external annulus with finer grains. Low and high strain rate tensile tests were carried out at different temperatures, ranging from -190 °C to 100 °C. The constitutive behavior was modelled using a modified version of the Rusinek-Klepaczko model. Parameters were calibrated on the tensile test results. Tests and numerical simulations of symmetric Taylor (RoR) impact and dynamic tensile extrusion (DTE) at different impact velocities were carried out in order to validate the model under complex deformation paths.
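The abstract does not reproduce the modified Rusinek-Klepaczko equations, so as a hedged illustration of what a strain-, strain-rate- and temperature-dependent flow stress law looks like, here is a Johnson-Cook-type stand-in in Python; all parameter values are invented for illustration and are NOT the calibrated AA1100-O constants from the paper:

```python
import numpy as np

def johnson_cook_stress(eps, eps_rate, T,
                        A=50e6, B=120e6, n=0.3, C=0.02, m=1.0,
                        eps0=1.0, T_ref=293.0, T_melt=933.0):
    """Flow stress [Pa] as f(strain, strain rate, temperature).
    Johnson-Cook form, a common stand-in; the parameters here are
    illustrative placeholders, not the paper's calibrated values."""
    T_star = np.clip((T - T_ref) / (T_melt - T_ref), 0.0, 1.0)
    return (A + B * eps**n) \
        * (1.0 + C * np.log(np.maximum(eps_rate / eps0, 1e-12))) \
        * (1.0 - T_star**m)

# Strain-rate sensitivity at 10% strain, room temperature:
for rate in (1e-3, 1.0, 1e3):
    print(f"{rate:8.0e} 1/s -> {johnson_cook_stress(0.1, rate, 293.0)/1e6:6.1f} MPa")
```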
NASA Technical Reports Server (NTRS)
Murch, Austin M.; Foster, John V.
2007-01-01
A simulation study was conducted to investigate aerodynamic modeling methods for prediction of post-stall flight dynamics of large transport airplanes. The research approach involved integrating dynamic wind tunnel data from rotary balance and forced oscillation testing with static wind tunnel data to predict aerodynamic forces and moments during highly dynamic departure and spin motions. Several state-of-the-art aerodynamic modeling methods were evaluated, and the flight dynamics predicted by these various approaches were compared. Results showed that the different modeling methods had varying effects on the predicted flight dynamics and that the differences were most significant during uncoordinated maneuvers. Preliminary wind tunnel validation data indicated the potential of the various methods for predicting steady spin motions.
NASA Astrophysics Data System (ADS)
Liu, Xiao-Huan; Zhang, Yang; Olsen, Kristen M.; Wang, Wen-Xing; Do, Bebhinn A.; Bridgers, George M.
2010-07-01
The prediction of future air quality and its responses to emission control strategies at national and state levels requires a reliable model that can replicate atmospheric observations. In this work, the Mesoscale Model (MM5) and the Community Multiscale Air Quality (CMAQ) modeling system are applied at a 4-km horizontal grid resolution for four one-month periods (January, June, July, and August 2002) to evaluate model performance and compare it with that at a 12-km resolution. The evaluation shows that the skill of MM5/CMAQ is overall consistent with current model performance. The large cold bias in temperature at 1.5 m is likely due to overly cold initial soil temperatures and inappropriate snow treatments. The large overprediction of precipitation in July is likely due to too-frequent afternoon convective rainfall and/or an overestimation of the rainfall intensity. The normalized mean biases and errors are -1.6% to 9.1% and 15.3-18.5% in January and -18.7% to -5.7% and 13.9-20.6% in July for max 1-h and 8-h O3 mixing ratios, respectively, and those for 24-h average PM2.5 concentrations are 8.3-25.9% and 27.6-38.5% in January and -57.8% to -45.4% and 46.1-59.3% in July. The large underprediction of PM2.5 in summer is attributed mainly to overpredicted precipitation, inaccurate emissions, incomplete treatments for secondary organic aerosols, and model difficulties in resolving complex meteorology and geography. While O3 prediction is relatively insensitive to horizontal grid resolution, PM2.5 and its secondary components, visibility indices, and dry and wet deposition show a moderate to high sensitivity. These results have important implications for the regulatory applications of MM5/CMAQ for future air quality attainment.
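The normalized mean bias and error statistics quoted above follow standard definitions. A minimal sketch, assuming paired model/observation arrays (the numbers are toy values, not the study's data):

```python
import numpy as np

def nmb(model, obs):
    """Normalized mean bias (%): sum(model - obs) / sum(obs) * 100."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * (model - obs).sum() / obs.sum()

def nme(model, obs):
    """Normalized mean error (%): sum(|model - obs|) / sum(obs) * 100."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return 100.0 * np.abs(model - obs).sum() / obs.sum()

# Toy paired daily PM2.5 values (ug/m3); not the study's data.
obs   = [12.0, 30.5, 8.2, 22.1]
model = [10.1, 24.0, 9.0, 15.5]
print(f"NMB = {nmb(model, obs):+.1f}%, NME = {nme(model, obs):.1f}%")
```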
Cytomegalovirus Reinfections Stimulate CD8 T-Memory Inflation.
Trgovcich, Joanne; Kincaid, Michelle; Thomas, Alicia; Griessl, Marion; Zimmerman, Peter; Dwivedi, Varun; Bergdall, Valerie; Klenerman, Paul; Cook, Charles H
2016-01-01
Cytomegalovirus (CMV) has been shown to induce large populations of CD8 T-effector memory cells that, unlike central memory cells, persist in large quantities following infection, a phenomenon commonly termed "memory inflation". Although murine models to date have shown very large and persistent CMV-specific T-cell expansions following infection, there is considerable variability in CMV-specific T-memory responses in humans. Historically, such memory inflation in humans has been assumed to be a consequence of reactivation events during the life of the host. Because basic information about CMV infection/re-infection and reactivation in immune-competent humans is not available, we used a murine model to test how primary infection, reinfection, and reactivation stimuli influence memory inflation. We show that low-titer infections induce "partial" memory inflation of both mCMV-specific CD8 T-cells and antibody. We show further that reinfection with different strains can boost partial memory inflation. Finally, we show preliminary results suggesting that a single strong reactivation stimulus does not stimulate memory inflation. Altogether, our results suggest that while high-titer primary infections can induce memory inflation, reinfections during the life of a host may be more important than previously appreciated.
A discrete element model for the investigation of the geometrically nonlinear behaviour of solids
NASA Astrophysics Data System (ADS)
Ockelmann, Felix; Dinkler, Dieter
2018-07-01
A three-dimensional discrete element model for elastic solids with large deformations is presented. To this end, a discontinuum approach for solids is adopted. The properties of the elastic material are transferred analytically into the parameters of the discrete element model. A new and improved octahedron gap-filled face-centred cubic close packing of spheres is split into unit cells to determine the parameters of the discrete element model. The symmetrical unit cells allow a model with equal shear components in each contact plane and fully isotropic behaviour for Poisson's ratio above 0. To validate the new model and show its broad field of applications, the pin-pin Euler elastica is presented and investigated. This thin and sensitive structure tends to undergo large deformations and rotations with a highly geometrically nonlinear behaviour. This behaviour of the elastica can be modelled and is compared to reference solutions. Afterwards, an improved, more realistic simulation of the elastica is presented, which softens secondary buckling phenomena. The model is capable of simulating solids with small strains but large deformations and a strongly geometrically nonlinear behaviour, correctly taking into account the shear stiffness of the material.
Impact of compressibility on heat transport characteristics of large terrestrial planets
NASA Astrophysics Data System (ADS)
Čížková, Hana; van den Berg, Arie; Jacobs, Michel
2017-07-01
We present heat transport characteristics for mantle convection in large terrestrial exoplanets (M ⩽ 8M⊕). Our thermal convection model is based on a truncated anelastic liquid approximation (TALA) for compressible fluids and takes into account a self-consistent thermodynamic description of material properties derived from mineral physics based on a multi-Einstein vibrational approach. We compare heat transport characteristics in compressible models with those obtained with incompressible models based on the classical and extended Boussinesq approximations (BA and EBA, respectively). Our scaling analysis shows that heat flux scales with the effective dissipation number as Nu ∼ Di_eff^(-0.71) and with the Rayleigh number as Nu ∼ Ra_eff^(0.27). The surface heat flux of the BA models strongly overestimates the values from the corresponding compressible models, whereas the EBA models systematically underestimate the heat flux by ∼10%-15% with respect to a corresponding compressible case. Compressible models are also systematically warmer than the EBA models. Compressibility effects are therefore important for mantle dynamic processes, especially for large rocky exoplanets, and consequently also for the formation of planetary atmospheres, through outgassing, and the existence of a magnetic field, through thermal coupling of the mantle and core dynamic systems.
Michalareas, George; Schoffelen, Jan-Mathijs; Paterson, Gavin; Gross, Joachim
2013-01-01
In this work, we investigate the feasibility of estimating causal interactions between brain regions based on multivariate autoregressive (MAR) models fitted to magnetoencephalographic (MEG) sensor measurements. We first demonstrate the theoretical feasibility of estimating source-level causal interactions after projection of the sensor-level model coefficients onto the locations of the neural sources. Next, we show with simulated MEG data that causality, as measured by partial directed coherence (PDC), can be correctly reconstructed if the locations of the interacting brain areas are known. We further demonstrate that, if a very large number of brain voxels is considered as potential activation sources, PDC as a measure to reconstruct causal interactions is less accurate; in such a case the MAR model coefficients alone contain meaningful causality information. The proposed method overcomes the problems of model non-robustness and large computation times encountered during causality analysis by existing methods, which first project MEG sensor time series onto a large number of brain locations and then build the MAR model on this large number of source-level time series. Instead, through this work, we demonstrate that by building the MAR model at the sensor level and then projecting only the MAR coefficients into source space, the true causal pathways are recovered even when a very large number of locations are considered as sources. The main contribution of this work is that with this methodology entire-brain causality maps can be efficiently derived without any a priori selection of regions of interest. PMID:22328419
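Partial directed coherence is computed directly from the fitted MAR coefficient matrices, which is what makes the project-the-coefficients strategy cheap. A minimal sketch, assuming the coefficients are already estimated (the toy two-channel example is an assumption, not the paper's data):

```python
import numpy as np

def pdc(A, f, fs=1.0):
    """Partial directed coherence at frequency f from MAR coefficients.
    A: array (p, n, n), where A[k] multiplies x(t-k-1).  PDC[i, j]
    measures the direct influence of channel j on channel i,
    normalized per column."""
    p, n, _ = A.shape
    z = np.exp(-2j * np.pi * f / fs * np.arange(1, p + 1))
    Abar = np.eye(n, dtype=complex) - np.tensordot(z, A, axes=(0, 0))
    return np.abs(Abar) / np.linalg.norm(Abar, axis=0, keepdims=True)

# Toy 2-channel MAR(1): channel 0 drives channel 1, not vice versa.
A = np.array([[[0.5, 0.0],
               [0.4, 0.3]]])
print(pdc(A, f=0.1))   # (1,0) entry is nonzero, (0,1) entry is ~0
```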
Boom and bust in continuous time evolving economic model
NASA Astrophysics Data System (ADS)
Mitchell, L.; Ackland, G. J.
2009-08-01
We show that a simple model of a spatially resolved, evolving economic system, which has a steady state under simultaneous updating, shows stable oscillations in price when updated asynchronously. The oscillations arise from a gradual decline of the mean price due to competition among sellers for the same resource. This lowers profitability and hence population, but is followed by a sharp rise as speculative sellers invade the large uninhabited areas. This cycle then begins again.
Yim, Bo; Yeh, Sang-Wook; Sohn, Byung-Ju
2016-01-29
Observational evidence shows that the Walker circulation (WC) in the tropical Pacific has strengthened in recent decades. In this study, we examine the WC trend for 1979-2005 and its relationship with the precipitation associated with the El Niño Southern Oscillation (ENSO) using the sea surface temperature (SST)-constrained Atmospheric Model Intercomparison Project (AMIP) simulations of the Coupled Model Intercomparison Project Phase 5 (CMIP5) climate models. All of the 29 models show a strengthening of the WC trend in response to an increase in the SST zonal gradient along the equator. Despite the same SST-constrained AMIP simulations, however, a large diversity is found among the CMIP5 climate models in the magnitude of the WC trend. The relationship between the WC trend and precipitation anomalies (PRCPAs) associated with ENSO (ENSO-related PRCPAs) shows that the longitudinal position of the ENSO-related PRCPAs in the western tropical Pacific is closely related to the magnitude of the WC trend. Specifically, it is found that the strengthening of the WC trend is large (small) in the CMIP5 AMIP simulations in which the ENSO-related PRCPAs are located relatively westward (eastward) in the western tropical Pacific. Furthermore, the zonal shift of the ENSO-related precipitation in the western tropical Pacific, which is associated with the climatological mean precipitation in the tropical Pacific, could play an important role in modifying the WC trend in the CMIP5 climate models.
Megascours: the morphodynamics of large river confluences
NASA Astrophysics Data System (ADS)
Dixon, Simon; Sambrook Smith, Greg; Nicholas, Andrew; Best, Jim; Bull, Jon; Vardy, Mark; Goodbred, Steve; Haque Sarker, Maminul
2015-04-01
River confluences are widely acknowledged as crucial controlling influences upon upstream and downstream morphology and thus landscape evolution. Despite their importance, very little is known about their evolution and morphodynamics, and there is a consensus in the literature that confluences represent fixed, nodal points in the fluvial network. Confluences have been shown to generate substantial bed scours, around five times greater than mean depth. Previous research on the Ganges-Jamuna junction has shown that large river confluences can be highly mobile, potentially 'combing' bed scours across a large area, although the extent to which this is representative of large confluences in general is unknown. Understanding the migration of confluences and associated scours is important for multiple applications, including: designing civil engineering infrastructure (e.g. bridges, laying cable, pipelines, etc.), sequence stratigraphic interpretation for reconstruction of past environmental and sea level change, and the hydrocarbon industry, where it is crucial to discriminate autocyclic confluence scours from widespread allocyclic surfaces. Here we present a wide-ranging global review of large river confluence planforms based on analysis of Landsat imagery from 1972 through to 2014. This demonstrates that there is an array of confluence morphodynamic types: from freely migrating confluences such as the Ganges-Jamuna, through confluences migrating on decadal timescales, to fixed confluences. Along with data from recent geophysical field studies in the Ganges-Brahmaputra-Meghna basin, we propose a conceptual model of large river confluence types and hypothesise how these influence morphodynamics and the preservation of 'megascours' in the rock record. This conceptual model has implications for sequence stratigraphic models and the correct identification of surfaces related to past sea level change. We quantify the abundance of mobile confluence types by classifying all large confluences in the Amazon and Ganges-Brahmaputra-Meghna basins, showing that these two basins have contrasting confluence morphodynamics. For the first time we show that large river confluences have multiple scales of planform adjustment, with important implications for infrastructure and interpretation of the rock record.
Renormalizable Quantum Field Theories in the Large-N Limit
NASA Astrophysics Data System (ADS)
Guruswamy, Sathya
1995-01-01
In this thesis, we study two examples of renormalizable quantum field theories in the large-N limit. Chapter one is a general introduction describing physical motivations for studying such theories. In chapter two, we describe the large-N method in field theory and discuss the pioneering work of 't Hooft in large-N two-dimensional Quantum Chromodynamics (QCD). In chapter three we study a spherically symmetric approximation to four-dimensional QCD ('spherical QCD'). We recast spherical QCD into a bilocal (constrained) theory of hadrons which in the large-N limit is equivalent to large-N spherical QCD for all energy scales. The linear approximation to this theory gives an eigenvalue equation which is the analogue of the well-known 't Hooft's integral equation in two dimensions. This eigenvalue equation is a scale invariant one and therefore leads to divergences in the theory. We give a non-perturbative renormalization prescription to cure this and obtain a beta function which shows that large-N spherical QCD is asymptotically free. In chapter four, we review the essentials of conformal field theories in two and higher dimensions, particularly in the context of critical phenomena. In chapter five, we study the O(N) non-linear sigma model on three-dimensional curved spaces in the large-N limit and show that there is a non-trivial ultraviolet stable critical point at which it becomes conformally invariant. We study this model at this critical point on examples of spaces of constant curvature and compute the mass gap in the theory, the free energy density (which turns out to be a universal function of the information contained in the geometry of the manifold) and the two-point correlation functions. The results we get give an indication that this model is an example of a three-dimensional analogue of a rational conformal field theory. A conclusion with a brief summary and remarks follows at the end.
Contribution of Large Region Joint Associations to Complex Traits Genetics
Paré, Guillaume; Asma, Senay; Deng, Wei Q.
2015-01-01
A polygenic model of inheritance, whereby hundreds or thousands of weakly associated variants contribute to a trait’s heritability, has been proposed to underlie the genetic architecture of complex traits. However, relatively few genetic variants have been positively identified so far and they collectively explain only a small fraction of the predicted heritability. We hypothesized that joint association of multiple weakly associated variants over large chromosomal regions contributes to complex trait variance. Confirmation of such regional associations can help identify new loci and lead to a better understanding of known ones. To test this hypothesis, we first characterized the ability of commonly used genetic association models to identify large region joint associations. Through theoretical derivation and simulation, we showed that multivariate linear models where multiple SNPs are included as independent predictors have the most favorable association profile. Based on these results, we tested for large region association with height in 3,740 European participants from the Health and Retirement Study (HRS). Adjusting for SNPs with known association with height, we demonstrated clustering of weak associations (p = 2 × 10^-4) in regions extending up to 433.0 kb from known height loci. The contribution of regional associations to phenotypic variance was estimated at 0.172 (95% CI 0.063-0.279; p < 0.001), which compared favorably to the 0.129 explained by known height variants. Conversely, we showed that suggestively associated regions are enriched for known height loci. To extend our findings to other traits, we also tested BMI, HDLc and CRP for large region associations, with consistent results for CRP. Our results demonstrate the presence of large region joint associations and suggest these can be used to pinpoint weakly associated SNPs. PMID:25856144
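The regional test described above amounts to entering many SNPs jointly as predictors in one linear model and testing the region as a whole. A minimal sketch with simulated genotypes and many weak effects (all values invented; a joint F-test stands in for the paper's exact procedure):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n, m = 2000, 50                        # individuals, SNPs in the region
maf = rng.uniform(0.05, 0.5, size=m)
G = rng.binomial(2, maf, size=(n, m)).astype(float)   # 0/1/2 genotypes
beta = rng.normal(0, 0.02, size=m)     # many weak effects (polygenic)
y = G @ beta + rng.standard_normal(n)

# Joint F-test of the whole region: full model vs intercept-only.
X = np.column_stack([np.ones(n), G])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
rss = float(((y - X @ coef) ** 2).sum())
tss = float(((y - y.mean()) ** 2).sum())
df1, df2 = m, n - m - 1
F = ((tss - rss) / df1) / (rss / df2)
p = stats.f.sf(F, df1, df2)
print(f"regional joint association: F = {F:.2f}, p = {p:.2e}")
```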
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. Also, we report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to recordings of real cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
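The two equations of the model are short enough to state and simulate directly. A minimal forward-Euler sketch; the parameter values are the commonly quoted tonic-spiking set from the literature, not fitted to any particular neuron:

```python
import numpy as np

def adex(I, dt=0.1e-3, T=0.5,
         C=281e-12, gL=30e-9, EL=-70.6e-3, VT=-50.4e-3, DT=2e-3,
         tauw=144e-3, a=4e-9, b=0.0805e-9, Vr=-70.6e-3, Vpeak=20e-3):
    """Adaptive exponential integrate-and-fire, forward-Euler in SI units.
    C dV/dt = -gL (V-EL) + gL DT exp((V-VT)/DT) - w + I
    tauw dw/dt = a (V-EL) - w;  at a spike: V -> Vr, w -> w + b."""
    steps = int(T / dt)
    V, w, spikes = EL, 0.0, []
    for k in range(steps):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tauw
        V, w = V + dt * dV, w + dt * dw
        if V >= Vpeak:                    # spike: reset + adaptation jump
            V, w = Vr, w + b
            spikes.append(k * dt)
    return spikes

# Step current of 1 nA; this parameter set gives adapting tonic spiking.
print(len(adex(I=1e-9)), "spikes in 0.5 s")
```

Changing only (a, b, tauw, Vr) moves the model between the firing classes the abstract enumerates, which is what makes it attractive for large network simulations.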
Quantum coherence of planar spin models with Dzyaloshinsky-Moriya interaction
NASA Astrophysics Data System (ADS)
Radhakrishnan, Chandrashekar; Ermakov, Igor; Byrnes, Tim
2017-07-01
The quantum coherence of one-dimensional planar spin models with Dzyaloshinsky-Moriya interaction is investigated. The anisotropic XY model, the isotropic XX model, and the transverse field model are studied in the large-N limit using two-qubit reduced density matrices and two-point correlation functions. From our investigations we find that coherence, as measured using the Jensen-Shannon divergence, can be used to detect quantum phase transitions and quantum critical points. The derivative of the coherence shows nonanalytic behavior at critical points, leading to the conclusion that these transitions are of second order. Further, we show that the presence of the Dzyaloshinsky-Moriya coupling suppresses the phase transition due to residual ferromagnetism, which is caused by spin canting.
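As a concrete illustration of the coherence measure, here is a sketch computing a Jensen-Shannon-divergence-based coherence of a density matrix against its dephased (diagonal) part; the square-root convention follows common usage and may differ in detail from the paper's definition:

```python
import numpy as np
from scipy.linalg import eigvalsh

def vn_entropy(rho):
    """Von Neumann entropy (base 2) of a Hermitian density matrix."""
    lam = eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

def js_coherence(rho):
    """sqrt of the Jensen-Shannon divergence between rho and its
    dephased (diagonal) counterpart -- one common coherence quantifier."""
    sigma = np.diag(np.diag(rho))
    mix = 0.5 * (rho + sigma)
    val = vn_entropy(mix) - 0.5 * (vn_entropy(rho) + vn_entropy(sigma))
    return np.sqrt(max(val, 0.0))     # clamp tiny negative rounding errors

# Example: a Bell state is highly coherent in the computational basis.
bell = np.zeros((4, 4), dtype=complex)
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(js_coherence(bell))             # > 0; vanishes for diagonal states
```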
Lower Bound on the Mean Square Displacement of Particles in the Hard Disk Model
NASA Astrophysics Data System (ADS)
Richthammer, Thomas
2016-08-01
The hard disk model is a 2D Gibbsian process of particles interacting via pure hard-core repulsion. At high particle density the model is believed to show orientational order; however, it is known not to exhibit positional order. Here we investigate to what extent particle positions may fluctuate. We consider a finite-volume version of the model in a box of dimensions 2n × 2n with arbitrary boundary configuration, and we show that the mean square displacement of particles near the center of the box is bounded from below by c log n. The result generalizes to a large class of models with fairly arbitrary interaction.
Fermionic Field Theory for Trees and Forests
NASA Astrophysics Data System (ADS)
Caracciolo, Sergio; Jacobsen, Jesper Lykke; Saleur, Hubert; Sokal, Alan D.; Sportiello, Andrea
2004-08-01
We prove a generalization of Kirchhoff’s matrix-tree theorem in which a large class of combinatorial objects are represented by non-Gaussian Grassmann integrals. As a special case, we show that unrooted spanning forests, which arise as a q→0 limit of the Potts model, can be represented by a Grassmann theory involving a Gaussian term and a particular bilocal four-fermion term. We show that this latter model can be mapped, to all orders in perturbation theory, onto the N-vector model at N=-1 or, equivalently, onto the σ model taking values in the unit supersphere in R^(1|2). It follows that, in two dimensions, this fermionic model is perturbatively asymptotically free.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing data set from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmosphere Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allow running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
Escorting commercial aircraft to reduce the MANPAD threat
NASA Astrophysics Data System (ADS)
Hock, Nicholas; Richardson, M. A.; Butters, B.; Walmsley, R.; Ayling, R.; Taylor, B.
2005-11-01
This paper studies the Man-Portable Air Defence System (MANPADS) threat against large commercial aircraft using flight profile analysis, engagement modelling and simulation. Non-countermeasure-equipped commercial aircraft are at risk during approach and departure due to the large areas around airports that would need to be secured to prevent the use of highly portable and concealable MANPADS. A software model (CounterSim) has been developed and was used to simulate an SA-7b engagement of a large commercial aircraft. The results of this simulation show that the threat is lessened when an escort fighter aircraft is flown in the 'Centreline Low' position, 25 m rearward of the large aircraft and 15 m lower, similar to the air-to-air refuelling position. In the model, a large aircraft on approach had a 50% chance of being hit or having a near miss (within 20 m), whereas when escorted by a countermeasure-equipped F-16 in the 'Centreline Low' position, this was reduced to only 14%. Departure is a particularly vulnerable time for large aircraft due to slow climb rates and the inability to fly evasive manoeuvres. The 'Centreline Low' escorted departure greatly reduced the threat, to 16% hit or near miss from 62% for an unescorted heavy aircraft. Overall, the CounterSim modelling has shown that escorting a civilian aircraft on approach and departure can reduce the MANPADS threat by a factor of 3 to 4.
The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth
ERIC Educational Resources Information Center
Steyvers, Mark; Tenenbaum, Joshua B.
2005-01-01
We present statistical analyses of the large-scale structure of 3 types of semantic networks: word associations, WordNet, and Roget's Thesaurus. We show that they have a small-world structure, characterized by sparse connectivity, short average path lengths between words, and strong local clustering. In addition, the distributions of the number of…
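The three quantities in that small-world signature (sparse connectivity, short paths, strong clustering) are straightforward to measure. A minimal sketch with networkx on a synthetic graph, since the word-association datasets are not bundled here; comparing against an edge-matched random graph is the usual convention:

```python
import networkx as nx

# Stand-in for a word-association graph; a Watts-Strogatz graph is used
# only so the example runs, not as a claim about the studies' data.
G = nx.watts_strogatz_graph(n=1000, k=10, p=0.05, seed=0)
R = nx.gnm_random_graph(n=1000, m=G.number_of_edges(), seed=0)

def small_world_stats(g):
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return (nx.average_clustering(giant),
            nx.average_shortest_path_length(giant))

C, L = small_world_stats(G)
Cr, Lr = small_world_stats(R)
# Small-world signature: C >> C_random while L stays comparable.
print(f"C = {C:.3f} (random {Cr:.3f}), L = {L:.2f} (random {Lr:.2f})")
```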
A contact layer element for large deformations
NASA Astrophysics Data System (ADS)
Weißenfels, C.; Wriggers, P.
2015-05-01
In many contact situations the material behavior of one contact member strongly influences the force acting between the two bodies. Unfortunately, standard friction models cannot reproduce all of these material effects at the contact layer, and often continuum interface elements are used instead. These elements are intrinsically tied to the fixed grid and hence cannot be used in large sliding simulations. Due to the shortcomings of the standard contact formulations and of the interface elements, a new type of contact layer element is developed in this work. The advantages of this element are the direct implementation of continuum models into the contact formulation and the applicability to arbitrarily large deformations. By exploiting a relation between continuum and contact kinematics based on the solid-shell concept, the new contact element emerges as a natural extension of the standard contact formulations into 3D. Two examples show that the continuum behavior can be exactly reproduced at the contact surface, even in large sliding situations, using this contact layer element. For the discretization of the new contact element the Mortar method is chosen as an example, but the element can be combined with all kinds of contact formulations.
NASA Astrophysics Data System (ADS)
Liu, Jiping; Zhang, Zhanhai; Hu, Yongyun; Chen, Liqi; Dai, Yongjiu; Ren, Xiaobo
2008-05-01
The surface air temperature (SAT) over the Arctic Ocean in reanalyses and global climate model simulations was assessed using the International Arctic Buoy Programme/Polar Exchange at the Sea Surface (IABP/POLES) observations for the period 1979-1999. The reanalyses, including the National Centers for Environmental Prediction Reanalysis II (NCEP2) and the European Centre for Medium-Range Weather Forecasts 40-year Reanalysis (ERA40), show encouraging agreement with the IABP/POLES observations, although some spatiotemporal discrepancies are noteworthy. The reanalyses have warm annual mean biases and underestimate the observed interannual SAT variability in summer. Additionally, NCEP2 shows an excessive warming trend. Most model simulations (coordinated by the Intergovernmental Panel on Climate Change for its Fourth Assessment Report) reproduce the annual mean, seasonal cycle, and trend of the observed SAT reasonably well, particularly the multi-model ensemble mean. However, large discrepancies are found. Some models have annual mean SAT biases far exceeding the standard deviation of the observed interannual SAT variability and the across-model standard deviation. Spatially, the largest inter-model variance of the annual mean SAT is found over the North Pole, Greenland Sea, Barents Sea and Baffin Bay. Seasonally, a large spread of the simulated SAT among the models is found in winter. The models show interannual variability and decadal trends of various amplitudes, and cannot capture the observed dominant SAT mode variability and cooling trend in winter. Further discussion of possible attributions of the identified SAT errors suggests that a model's performance in the sea ice simulation is an important factor.
A second-order impact model for forest fire regimes.
Maggi, Stefano; Rinaldi, Sergio
2006-09-01
We present a very simple "impact" model for the description of forest fires and show that it can mimic the known characteristics of wildfire regimes in savannas, boreal forests, and Mediterranean forests. Moreover, the distributions of burned biomasses in model-generated fires resemble those of burned areas in numerous large forests around the world. The model also has the merits of being the first second-order model for forest fires and the first example of the use of impact models in the study of ecosystems.
NASA Astrophysics Data System (ADS)
Wanders, Niko; Wood, Eric
2016-04-01
Sub-seasonal to seasonal weather and hydrological forecasts have the potential to provide vital information for a variety of water-related decision makers. For example, seasonal forecasts of drought risk can enable farmers to make adaptive choices on crop varieties, labour usage, and technology investments. Seasonal and sub-seasonal predictions can increase preparedness for hydrological extremes that regularly occur in all regions of the world with large impacts on society. We investigated the skill of six seasonal forecast models from the NMME-2 ensemble coupled to two global hydrological models (VIC and PCR-GLOBWB) for the period 1982-2012. The 31 years of NMME-2 hindcast data are used, in combination with an ensemble mean and an ESP forecast, to forecast important hydrological variables (e.g. soil moisture, groundwater storage, snow, reservoir levels and river discharge). By using two global hydrological models we are able to quantify both the uncertainty in the meteorological input and the uncertainty created by the different hydrological models. We show that the NMME-2 forecast outperforms the ESP forecasts in terms of anomaly correlation and Brier skill score for all forecasted hydrological variables, with a low uncertainty in the performance amongst the hydrological models. However, the continuous ranked probability score (CRPS) of the NMME-2 ensemble is inferior to the ESP due to a large spread between the individual ensemble members. We use a cost analysis to show that the damage caused by floods and droughts in large-scale rivers can globally be reduced by 48% (for leads of 1-2 months) to 20% (for leads between 6 and 9 months) when precautions are taken based on the NMME-2 ensemble instead of an ESP forecast. In collaboration with our local partner in West Africa (AGRHYMET), we looked at the performance of the sub-seasonal forecasts for crop planting dates and the high-flow season in West Africa. We show that the uncertainty in the optimal planting date is reduced from 30 days to 12 days (2.5-month lead), with an increased predictability of the high-flow season from 45 days to 20 days (3-4 months lead). Additionally, we show that snow accumulation and melt onset in the Northern hemisphere can be forecasted with an uncertainty of 10 days (2.5 months lead). Both the overall skill, and the skill found in these last two examples, indicate that the new NMME-2 forecast dataset is valuable for sub-seasonal forecast applications. The high temporal resolution (daily), long leads (up to one year) and large hindcast archive enable new sub-seasonal forecasting applications to be explored. We show that the NMME-2 has large potential for sub-seasonal hydrological forecasting and other potential hydrological applications (e.g. reservoir management), which could benefit from these new forecasts.
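The CRPS result above (good anomaly correlation but a CRPS penalty from ensemble spread) is easy to see in the standard sample estimator, which trades accuracy against sharpness. A minimal sketch with toy discharge ensembles (values invented):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample CRPS for one forecast: E|X - y| - 0.5 E|X - X'|.
    Lower is better; rewards both accuracy and sharpness."""
    x = np.asarray(members, float)
    term1 = np.abs(x - obs).mean()
    term2 = 0.5 * np.abs(x[:, None] - x[None, :]).mean()
    return term1 - term2

# Toy river-discharge forecasts (m3/s): a sharp vs. a wide ensemble
# with the same mean; the wide one is penalized despite equal bias.
obs = 120.0
sharp = np.random.default_rng(3).normal(125, 5, size=50)
wide = np.random.default_rng(3).normal(125, 40, size=50)
print(f"sharp: {crps_ensemble(sharp, obs):.1f}  wide: {crps_ensemble(wide, obs):.1f}")
```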
Spectra of eigenstates in fermionic tensor quantum mechanics
NASA Astrophysics Data System (ADS)
Klebanov, Igor R.; Milekhin, Alexey; Popov, Fedor; Tarnopolsky, Grigory
2018-05-01
We study the O(N1)×O(N2)×O(N3) symmetric quantum mechanics of 3-index Majorana fermions. When the ranks Ni are all equal, this model has a large N limit which is dominated by the melonic Feynman diagrams. We derive an integral formula which computes the number of group-invariant states for any set of Ni. It is non-vanishing only when each Ni is even. For equal ranks the number of singlets exhibits rapid growth with N: it jumps from 36 in the O(4)^3 model to 595 354 780 in the O(6)^3 model. We derive bounds on the values of energy, which show that they scale at most as N^3 in the large N limit, in agreement with expectations. We also show that the splitting between the lowest singlet and non-singlet states is of order 1/N. For N3=1 the tensor model reduces to O(N1)×O(N2) fermionic matrix quantum mechanics, and we find a simple expression for the Hamiltonian in terms of the quadratic Casimir operators of the symmetry group. A similar expression is derived for the complex matrix model with SU(N1)×SU(N2)×U(1) symmetry. Finally, we study the N3=2 case of the tensor model, which gives a more intricate complex matrix model whose symmetry is only O(N1)×O(N2)×U(1). All energies are again integers in appropriate units, and we derive a concise formula for the spectrum. The fermionic matrix models we studied possess standard 't Hooft large N limits where the ground state energies are of order N^2, while the energy gaps are of order 1.
NASA Astrophysics Data System (ADS)
Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.
2017-12-01
Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km² in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.
Competing opinions and stubbornness: Connecting models to data.
Burghardt, Keith; Rand, William; Girvan, Michelle
2016-03-01
We introduce a general contagionlike model for competing opinions that includes dynamic resistance to alternative opinions. We show that this model can describe candidate vote distributions, spatial vote correlations, and a slow approach to opinion consensus with sensible parameter values. These empirical properties of large group dynamics, previously understood using distinct models, may be different aspects of human behavior that can be captured by a more unified model, such as the one introduced in this paper.
Comparative analysis of used car price evaluation models
NASA Astrophysics Data System (ADS)
Chen, Chuancan; Hao, Lulu; Xu, Cong
2017-05-01
An accurate used car price evaluation is a catalyst for the healthy development of the used car market. Data mining has been applied to predict used car prices in several articles; however, little has been published comparing different algorithms for used car price estimation. This paper collects more than 100,000 used car dealing records throughout China for an empirical comparison of two algorithms: linear regression and random forest. These two algorithms are used to predict used car prices in three different models: a model for a certain car make, a model for a certain car series, and a universal model. Results show that random forest has a stable but not ideal effect in the price evaluation model for a certain car make, but it shows a great advantage in the universal model compared with linear regression. This indicates that random forest is an optimal algorithm when handling complex models with a large number of variables and samples, yet it shows no obvious advantage when coping with simple models with fewer variables.
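A minimal sketch of the comparison with scikit-learn, using synthetic records since the Chinese dealing data are not public; the feature set and price function are invented stand-ins chosen so that nonlinearities and interactions favor the tree ensemble, mirroring the paper's universal-model finding:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 5000
X = np.column_stack([
    rng.uniform(0, 15, n),           # age, years
    rng.uniform(0, 300_000, n),      # mileage, km
    rng.uniform(1.0, 4.0, n),        # engine displacement, L
    rng.integers(0, 20, n),          # brand code (label-encoded)
])
# Price depreciates nonlinearly; the interaction favors tree models.
y = 200_000 * np.exp(-0.15 * X[:, 0]) * (1 - X[:, 1] / 6e5) \
    + 8_000 * X[:, 2] + 1_500 * X[:, 3] + rng.normal(0, 5_000, n)

for name, model in [("linear", LinearRegression()),
                    ("random forest",
                     RandomForestRegressor(n_estimators=200, random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>13}: CV R^2 = {r2:.3f}")
```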
Comparison of statistical models for writer verification
NASA Astrophysics Data System (ADS)
Srihari, Sargur; Ball, Gregory R.
2009-01-01
A novel statistical model for determining whether a pair of documents, a known and a questioned, were written by the same individual is proposed. The goal of this formulation is to learn the specific uniqueness of style in a particular author's writing, given the known document. Since there are often insufficient samples to extrapolate a generalized model of a writer's handwriting based solely on the document, we instead generalize over the differences between the author and a large population of known different writers. This contrasts with an earlier model in which probability distributions were specified a priori, without learning. We report the performance of the model and compare it with the older, non-learning model, showing significant improvement.
NASA Astrophysics Data System (ADS)
Ercolano, Barbara; Weber, Michael L.; Owen, James E.
2018-01-01
Circumstellar discs with large dust-depleted cavities and vigorous accretion on to the central star are often considered signposts for (multiple) giant planet formation. In this Letter, we show that X-ray photoevaporation operating in discs with modest (factors 3-10) gas-phase depletion of carbon and oxygen at large radii (> 15 au) yields the inner radii and accretion rates of most of the observed discs, without the need to invoke giant planet formation. We present one-dimensional viscous evolution models of discs affected by X-ray photoevaporation assuming moderate gas-phase depletion of carbon and oxygen, well within the range reported by recent observations. Our models use a simplified prescription for scaling the X-ray photoevaporation rates and profiles at different metallicity, and our quantitative result depends on this scaling. While more rigorous hydrodynamical modelling of mass-loss profiles at low metallicities is required to constrain the observational parameter space that can be explained by our models, the general conclusion that metal sequestering at large radii may be responsible for the observed diversity of transition discs is shown to be robust. Gap opening by giant planet formation may still be responsible for a number of observed transition discs with large cavities and very high accretion rates.
Modeling needs for very large systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.
2010-10-01
Most system performance models assume a point measurement for irradiance and that, except for the impact of shading from nearby obstacles, incident irradiance is uniform across the array. Module temperature is also assumed to be uniform across the array. For small arrays and hourly-averaged simulations, this may be a reasonable assumption. Stein is conducting research to characterize variability in large systems and to develop models that can better accommodate large system factors. In large, multi-MW arrays, passing clouds may block sunlight from a portion of the array but never affect another portion. Figure 22 shows that two irradiance measurements at opposite ends of a multi-MW PV plant appear to have similar irradiance (left), but in fact the irradiance is not always the same (right). Module temperature may also vary across the array, with modules on the edges being cooler because they have greater wind exposure. Large arrays will also have long wire runs and will be subject to associated losses. Soiling patterns may also vary, with modules closer to the source of soiling, such as an agricultural field, receiving more dust load. One of the primary concerns associated with this effort is how to work with integrators to gain access to better and more comprehensive data for model development and validation.
Efficient Bayesian mixed model analysis increases association power in large cohorts
Loh, Po-Ru; Tucker, George; Bulik-Sullivan, Brendan K; Vilhjálmsson, Bjarni J; Finucane, Hilary K; Salem, Rany M; Chasman, Daniel I; Ridker, Paul M; Neale, Benjamin M; Berger, Bonnie; Patterson, Nick; Price, Alkes L
2014-01-01
Linear mixed models are a powerful statistical tool for identifying genetic associations and avoiding confounding. However, existing methods are computationally intractable in large cohorts and may not optimize power. All existing methods require time cost O(MN^2) (where N = #samples and M = #SNPs) and implicitly assume an infinitesimal genetic architecture in which effect sizes are normally distributed, which can limit power. Here, we present a far more efficient mixed model association method, BOLT-LMM, which requires only a small number of O(MN)-time iterations and increases power by modeling more realistic, non-infinitesimal genetic architectures via a Bayesian mixture prior on marker effect sizes. We applied BOLT-LMM to nine quantitative traits in 23,294 samples from the Women’s Genome Health Study (WGHS) and observed significant increases in power, consistent with simulations. Theory and simulations show that the boost in power increases with cohort size, making BOLT-LMM appealing for GWAS in large cohorts. PMID:25642633
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site-specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among-site rate variation (ASRV) models.
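The generator matrix in question has a convenient Kronecker structure: substitutions run at each rate class's speed, while classes switch independently of the sequence state. A minimal sketch building such a generator (toy Jukes-Cantor-like rates; the dense matrix exponential here is the naive approach that the paper's fast diagonalization is designed to beat):

```python
import numpy as np
from scipy.linalg import expm

# Covarion/SSRV generator on compound states (rate class, nucleotide).
Q = np.array([[-3.,  1.,  1.,  1.],      # toy Jukes-Cantor-like generator
              [ 1., -3.,  1.,  1.],
              [ 1.,  1., -3.,  1.],
              [ 1.,  1.,  1., -3.]]) / 3.0
rates = np.array([0.1, 1.0, 3.0])        # slow / medium / fast classes
S = np.array([[-0.2,  0.1,  0.1],        # class-switching generator
              [ 0.1, -0.2,  0.1],
              [ 0.1,  0.1, -0.2]])

# Markov-modulated generator: Kronecker structure, size (3*4) x (3*4).
G = np.kron(np.diag(rates), Q) + np.kron(S, np.eye(4))

P = expm(G * 0.5)                        # transition probabilities, t = 0.5
assert np.allclose(P.sum(axis=1), 1.0)   # rows of exp(Gt) sum to one
print(P.shape)                           # (12, 12)
```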
NASA Astrophysics Data System (ADS)
Masselink, Rens; Temme, Arnaud; Giménez, Rafael; Casalí, Javier; Keesstra, Saskia
2017-04-01
Soil erosion from agricultural areas is a large problem because of off-site effects like the rapid filling of reservoirs. To mitigate the problem of sediments from agricultural areas reaching the channel, reservoirs and other surface waters, it is important to understand hillslope-channel connectivity and catchment connectivity. To determine the functioning of hillslope-channel connectivity and the continuation of transport of these sediments in the channel, it is necessary to obtain data on sediment transport from the hillslopes to the channels. Simultaneously, the factors that influence sediment export out of the catchment need to be studied. For measuring hillslope-channel sediment connectivity, Rare-Earth Oxide (REO) tracers were applied to a hillslope in an agricultural catchment in Navarre, Spain, preceding the winter of 2014-2015. The results showed that during the winter there was no sediment transport from the hillslope to the channel. Analysis of precipitation data showed that total precipitation quantities did not differ much from the mean. However, precipitation intensities were low, causing little sediment mobilisation. To test the implication of the REO results at the catchment scale, two conceptual models for sediment connectivity were assessed using a Random Forest (RF) machine learning method. One model proposes that small events provide sediment for large events, while the other proposes that only large events cause sediment detachment and that small events subsequently remove these sediments from near and in the channel. The RF method was applied to a daily dataset of sediment yield from the catchment (N=2451 days), and two subsets of the whole dataset: small events (N=2319) and large events (N=132). For sediment yield prediction of small events, variables related to large preceding events were the most important. The model for large events underperformed and, therefore, we could not draw any immediate conclusions about whether small events influence the amount of sediment exported during large events. Both the REO tracers and the RF method showed that low-intensity events do not contribute any sediment to the channel in the Latxaga catchment (cf. Masselink et al., 2016). Sediment dynamics are dominated by sediment mobilisation during large (high-intensity) events. Sediments are for a large part exported during those events, but large amounts of sediment are deposited in and near the channel after these events. These sediments are gradually removed by small events. To better understand the delivery of sediments to the channel and how large and small events influence each other, more field data on hillslope-channel connectivity and within-channel sediment dynamics are necessary. Reference: Masselink, R.J.H., Keesstra, S.D., Temme, A.J.A.M., Seeger, M., Giménez, R., Casalí, J., 2016. Modelling Discharge and Sediment Yield at Catchment Scale Using Connectivity Components. Land Degrad. Dev. 27, 933-945. doi:10.1002/ldr.2512
Li, Yong; Yuan, Gonglin; Wei, Zengxin
2015-01-01
In this paper, a trust-region algorithm is proposed for large-scale nonlinear equations, where the limited-memory BFGS (L-M-BFGS) update matrix is used in the trust-region subproblem to improve the effectiveness of the algorithm for large-scale problems. The global convergence of the presented method is established under suitable conditions. The numerical results of the test problems show that the method is competitive with the norm method.
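For context, a generic form of the trust-region subproblem for nonlinear equations \(F(x)=0\) (standard notation; the paper's exact formulation may differ) is
\[
\min_{d \in \mathbb{R}^n} \; m_k(d) = \tfrac{1}{2}\|F(x_k)\|^2 + g_k^{T} d + \tfrac{1}{2} d^{T} B_k d \quad \text{subject to} \quad \|d\| \le \Delta_k ,
\]
where \(g_k = J(x_k)^{T} F(x_k)\) and \(B_k\) is, here, the limited-memory BFGS approximation of the Hessian of \(\tfrac{1}{2}\|F(x)\|^2\); the trial step is accepted and the radius \(\Delta_k\) enlarged when the ratio of actual to predicted reduction is large enough, and \(\Delta_k\) is shrunk otherwise. Because the L-M-BFGS representation keeps only a few recent curvature pairs, the subproblem can be handled with memory linear in n, which is what makes the method attractive for large-scale problems.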
NASA Astrophysics Data System (ADS)
Ambjørn, J.; Watabiki, Y.
2017-12-01
We recently formulated a model of the universe based on an underlying W3-symmetry. It allows the creation of the universe from nothing and the creation of baby universes and wormholes for spacetimes of dimension 2, 3, 4, 6 and 10. Here we show that the classical large-time and large-space limit of these universes is one of exponentially fast expansion, without the need of a cosmological constant. Under a number of simplifying assumptions, our model predicts that w = ‑1.2 in the case of four-dimensional spacetime. The possibility of obtaining a w-value less than ‑1 is linked to the ability of our model to create baby universes and wormholes.
Bayesian exponential random graph modelling of interhospital patient referral networks.
Caimo, Alberto; Pallotti, Francesca; Lomi, Alessandro
2017-08-15
Using original data that we have collected on referral relations between 110 hospitals serving a large regional community, we show how recently derived Bayesian exponential random graph models may be adopted to illuminate core empirical issues in research on relational coordination among healthcare organisations. We show how a rigorous Bayesian computation approach supports a fully probabilistic analytical framework that alleviates well-known problems in the estimation of model parameters of exponential random graph models. We also show how the main structural features of interhospital patient referral networks that prior studies have described can be reproduced with accuracy by specifying the system of local dependencies that produce - but at the same time are induced by - decentralised collaborative arrangements between hospitals. Copyright © 2017 John Wiley & Sons, Ltd.
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions have so far been missing. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large-scale biochemical reaction networks. We present the approach for time-discrete measurements and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
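A minimal sketch of the adjoint idea on a toy two-species network (the model, parameters, and terminal-time cost are illustrative assumptions; the paper treats time-discrete measurements, which add jump conditions to the adjoint):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy network: species x0 converts to x1 at rate p0; x1 degrades at rate p1.
def f(x, p):
    return np.array([-p[0] * x[0], p[0] * x[0] - p[1] * x[1]])

def f_x(x, p):  # Jacobian df/dx
    return np.array([[-p[0], 0.0], [p[0], -p[1]]])

def f_p(x, p):  # Jacobian df/dp
    return np.array([[-x[0], 0.0], [x[0], -x[1]]])

p, x0, T = np.array([1.2, 0.4]), np.array([1.0, 0.0]), 5.0

# One forward solve, stored as a dense interpolant for the backward pass.
fwd = solve_ivp(lambda t, x: f(x, p), (0, T), x0,
                dense_output=True, rtol=1e-10, atol=1e-12)

# Terminal cost J = x1(T), so lambda(T) = dJ/dx(T) = (0, 1).
lamT = np.array([0.0, 1.0])

# One backward adjoint solve: dlambda/dt = -f_x^T lambda, while accumulating
# the gradient integrand lambda^T f_p along the way.
def adjoint_rhs(t, z):
    lam, x = z[:2], fwd.sol(t)
    return np.concatenate([-f_x(x, p).T @ lam, lam @ f_p(x, p)])

bwd = solve_ivp(adjoint_rhs, (T, 0), np.concatenate([lamT, np.zeros(2)]),
                rtol=1e-10, atol=1e-12)
grad = -bwd.y[2:, -1]   # sign flip: the quadrature ran from T down to 0
print("dJ/dp ≈", grad)
```

Even in this toy, the paper's key property is visible: one forward solve plus one backward solve yields the whole gradient, independently of the number of parameters.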
Implementing Capsule Representation in a Total Hip Dislocation Finite Element Model
Stewart, Kristofer J; Pedersen, Douglas R; Callaghan, John J; Brown, Thomas D
2004-01-01
Previously validated hardware-only finite element models of THA dislocation have clarified how various component design and surgical placement variables contribute to resisting the propensity for implant dislocation. This body of work has now been enhanced with the incorporation of experimentally based capsule representation, and with anatomic bone structures. The current form of this finite element model provides for large deformation multi-body contact (including capsule wrap-around on bone and/or implant), large displacement interfacial sliding, and large deformation (hyperelastic) capsule representation. In addition, the modular nature of this model now allows for rapid incorporation of current or future total hip implant designs, accepts complex multi-axial physiologic motion inputs, and outputs case-specific component/bone/soft-tissue impingement events. This soft-tissue-augmented finite element model is being used to investigate the performance of various implant designs for a range of clinically-representative soft tissue integrities and surgical techniques. Preliminary results show that capsule enhancement makes a substantial difference in stability, compared to an otherwise identical hardware-only model. This model is intended to help put implant design and surgical technique decisions on a firmer scientific basis, in terms of reducing the likelihood of dislocation. PMID:15296198
Lepton number violation in theories with a large number of standard model copies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich
2011-03-01
We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model: because of the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.
A Category Adjustment Approach to Memory for Spatial Location in Natural Scenes
ERIC Educational Resources Information Center
Holden, Mark P.; Curby, Kim M.; Newcombe, Nora S.; Shipley, Thomas F.
2010-01-01
Memories for spatial locations often show systematic errors toward the central value of the surrounding region. This bias has been explained using a Bayesian model in which fine-grained and categorical information are combined (Huttenlocher, Hedges, & Duncan, 1991). However, experiments testing this model have largely used locations contained in…
A cellular model for sporadic ALS using patient-derived induced pluripotent stem cells
Burkhardt, Matthew F; Martinez, Fernando J; Wright, Sarah; Ramos, Carla; Volfson, Dmitri; Mason, Michael; Garnes, Jeff; Dang, Vu; Lievers, Jeffery; Shoukat-Mumtaz, Uzma; Martinez, Rita; Gai, Hui; Blake, Robert; Vaisberg, Eugeni; Grskovic, Marica; Johnson, Charles; Irion, Stefan; Bright, Jessica; Cooper, Bonnie; Nguyen, Leane; Griswold-Prenner, Irene; Javaherian, Ashkan
2016-01-01
Development of therapeutics for genetically complex neurodegenerative diseases such as sporadic amyotrophic lateral sclerosis (ALS) has largely been hampered by the lack of relevant disease models. Reprogramming of sporadic ALS patients’ fibroblasts into induced pluripotent stem cells (iPSC) and differentiation into affected neurons that show a disease phenotype could provide a cellular model for disease mechanism studies and drug discovery. Here we report the reprogramming to pluripotency of fibroblasts from a large cohort of healthy controls and ALS patients and their differentiation into motor neurons. We demonstrate that motor neurons derived from three sALS patients show de novo TDP-43 aggregation and that the aggregates recapitulate pathology in postmortem tissue from one of the same patients from which the iPSC were derived. We configured a high-content chemical screen using the TDP-43 aggregate endpoint, both in lower motor neurons and in upper motor neuron-like cells, and identified FDA-approved small-molecule modulators including Digoxin, demonstrating the feasibility of patient-derived iPSC-based disease modelling for drug screening. PMID:23891805
NASA Technical Reports Server (NTRS)
August, Richard; Kaza, Krishna Rao V.
1988-01-01
An investigation of the vibration, performance, flutter, and forced response of the large-scale propfan, SR7L, and its aeroelastic model, SR7A, has been performed by applying available structural and aeroelastic analytical codes and then correlating measured and calculated results. Finite element models of the blades were used to obtain modal frequencies, displacements, stresses and strains. These values were then used in conjunction with a 3-D, unsteady, lifting surface aerodynamic theory for the subsequent aeroelastic analyses of the blades. The agreement between measured and calculated frequencies and mode shapes for both models is very good. Calculated power coefficients correlate well with those measured for low advance ratios. Flutter results show that both propfans are stable at their respective design points. There is also good agreement between calculated and measured blade vibratory strains due to excitation resulting from yawed flow for the SR7A propfan. The similarity of structural and aeroelastic results shows that the SR7A propfan simulates the SR7L characteristics.
Continuum and atomistic description of excess electrons in TiO2
NASA Astrophysics Data System (ADS)
Maggio, Emanuele; Martsinovich, Natalia; Troisi, Alessandro
2016-02-01
The modelling of an excess electron in a semiconductor in a prototypical dye sensitised solar cell is carried out using two complementary approaches: atomistic simulation of the TiO2 nanoparticle surface is complemented by a dielectric continuum model of the solvent-semiconductor interface. The two methods are employed to characterise the bound (excitonic) states formed by the interaction of the electron in the semiconductor with a positive charge opposite the interface. Density-functional theory (DFT) calculations show that the excess electron in TiO2 in the presence of a counterion is not fully localised but extends laterally over a large region, larger than system sizes accessible to DFT calculations. The numerical description of the excess electron at the semiconductor-electrolyte interface based on the continuum model shows that the exciton is also delocalised over a large area: the exciton radius can have values from tens to hundreds of Ångströms, depending on the nature of the semiconductor (characterised by the dielectric constant and the electron effective mass in our model).
Zeng, Tao; Wang, Yuhang; Yoshida, Yasuko; Tian, Di; Russell, Armistead G; Barnard, William R
2008-11-15
Prescribed burning is a large aerosol source in the southeastern United States. Its air quality impact is investigated using 3-D model simulations and analysis of ground and satellite observations. Fire emissions for 2002 are calculated based on a recently developed VISTAS emission inventory. March was selected for the investigation because it is the most active prescribed fire month. Inclusion of fire emissions significantly improved model performance. Model results show that prescribed fire emissions lead to approximately 50% enhancements of mean OC and EC concentrations in the Southeast and a daily increase of PM2.5 of up to 25 µg m⁻³, indicating that fire emissions can lead to PM2.5 nonattainment in affected regions. Surface enhancements of CO up to 200 ppbv are found. Fire count measurements from the moderate resolution imaging spectroradiometer (MODIS) onboard the NASA Terra satellite show large springtime burning in most states, which is consistent with the emission inventory. These measurements also indicate that the inventory may underestimate fire emissions in the summer.
Improving parallel I/O autotuning with performance modeling
Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...
2014-01-01
Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique on tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving 54X I/O performance speedup.
ERIC Educational Resources Information Center
Furnham, Adrian; Guenole, Nigel; Levine, Stephen Z.; Chamorro-Premuzic, Tomas
2013-01-01
This study presents new analyses of NEO Personality Inventory-Revised (NEO-PI-R) responses collected from a large British sample in a high-stakes setting. The authors show the appropriateness of the five-factor model underpinning these responses in a variety of new ways. Using the recently developed exploratory structural equation modeling (ESEM)…
Anomalous properties of the acoustic excitations in glasses on the mesoscopic length scale.
Monaco, Giulio; Mossa, Stefano
2009-10-06
The low-temperature thermal properties of dielectric crystals are governed by acoustic excitations with large wavelengths that are well described by plane waves. This is the Debye model, which rests on the assumption that the medium is an elastic continuum, holds true for acoustic wavelengths large on the microscopic scale fixed by the interatomic spacing, and gradually breaks down on approaching it. Glasses are characterized as well by universal low-temperature thermal properties that are, however, anomalous with respect to those of the corresponding crystalline phases. Related universal anomalies also appear in the low-frequency vibrational density of states and, despite a longstanding debate, remain poorly understood. By using molecular dynamics simulations of a model monatomic glass of extremely large size, we show that in glasses the structural disorder undermines the Debye model in a subtle way: The elastic continuum approximation for the acoustic excitations breaks down abruptly on the mesoscopic, medium-range-order length scale of approximately 10 interatomic spacings, where it still works well for the corresponding crystalline systems. On this scale, the sound velocity shows a marked reduction with respect to the macroscopic value. This reduction turns out to be closely related to the universal excess over the Debye model prediction found in glasses at frequencies of approximately 1 THz in the vibrational density of states or at temperatures of approximately 10 K in the specific heat.
Atomic model of a cell-wall cross-linking enzyme in complex with an intact bacterial peptidoglycan.
Schanda, Paul; Triboulet, Sébastien; Laguri, Cédric; Bougault, Catherine M; Ayala, Isabel; Callon, Morgane; Arthur, Michel; Simorre, Jean-Pierre
2014-12-24
The maintenance of bacterial cell shape and integrity is largely attributed to peptidoglycan, a highly cross-linked biopolymer. The transpeptidases that perform this cross-linking are important targets for antibiotics. Despite this biomedical importance, to date no structure of a protein in complex with an intact bacterial peptidoglycan has been resolved, primarily due to the large size and flexibility of peptidoglycan sacculi. Here we use solid-state NMR spectroscopy to derive for the first time an atomic model of an l,d-transpeptidase from Bacillus subtilis bound to its natural substrate, the intact B. subtilis peptidoglycan. Importantly, the model obtained from protein chemical shift perturbation data shows that both domains (the catalytic domain as well as the proposed peptidoglycan recognition domain) are important for the interaction and reveals a novel binding motif that involves residues outside of the classical enzymatic pocket. Experiments on mutants and truncated protein constructs independently confirm the binding site and the implication of both domains. Through measurements of dipolar-coupling derived order parameters of bond motion we show that protein binding reduces the flexibility of peptidoglycan. This first report of an atomic model of a protein-peptidoglycan complex paves the way for the design of new antibiotic drugs targeting l,d-transpeptidases. The strategy developed here can be extended to the study of a large variety of enzymes involved in peptidoglycan morphogenesis.
Eccentricity effects on leakage of a brush seal at low speeds
NASA Technical Reports Server (NTRS)
Schlumberger, Julie A.; Proctor, Margaret P.; Hendricks, Robert C.
1991-01-01
The effects of eccentricity on brush seal leakage at low rotational speeds were investigated. Included are the leakage results for ambient temperature air and nearly saturated steam at three different rotor eccentricities at both 0 and 400 rpm. A brush seal with a nominal bore diameter of 13.647 cm (5.3730 in.) was used. It had a radial concentric interference of 0.0071 cm (0.0028 in.) and a fence height of 0.0927 cm (0.0365 in.). There were 1060 bristles per centimeter of circumference (2690 bristles per inch of circumference). Rotor eccentricities of 0.003, 0.010, 0.038, and 0.043 cm (0.001, 0.004, 0.015, and 0.017 in.) were achieved by using bushings with different offsets. The results were compared with an annular seal model (FLOWCAL) for air and with a standard labyrinth seal model for steam. The annular seal model was also compared with a bulk flow model of a concentric brush seal in air. Large eccentricities did not damage the brush seals or their Haynes 25 bristles. However, the 304 stainless steel rotor did show wear, indicating a harder surface is needed. Only the steam data showed hysteresis and were affected by shaft rotation. The brush seal had lower leakage rates than those predicted for comparable annular and labyrinth seals (conventional) because of the large clearances those seals require to accommodate large shaft excursions.
Yuan, Ke-Hai; Tian, Yubin; Yanagihara, Hirokazu
2015-06-01
Survey data typically contain many variables. Structural equation modeling (SEM) is commonly used in analyzing such data. The most widely used statistic for evaluating the adequacy of an SEM model is T_ML, a slight modification to the likelihood ratio statistic. Under the normality assumption, T_ML approximately follows a chi-square distribution when the number of observations (N) is large and the number of items or variables (p) is small. However, in practice, p can be rather large while N is always limited due to not having enough participants. Even with a relatively large N, empirical results show that T_ML rejects the correct model too often when p is not too small. Various corrections to T_ML have been proposed, but they are mostly heuristic. Following the principle of the Bartlett correction, this paper proposes an empirical approach to correct T_ML so that the mean of the resulting statistic approximately equals the degrees of freedom of the nominal chi-square distribution. Results show that empirically corrected statistics follow the nominal chi-square distribution much more closely than previously proposed corrections to T_ML, and they control type I errors reasonably well whenever N ≥ max(50, 2p). The formulations of the empirically corrected statistics are further used to predict type I errors of T_ML as reported in the literature, and they perform well.
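A minimal numeric sketch of the correction principle (the simulated T_ML values and their inflation factor are assumptions for illustration; the paper derives the correction factor empirically from model and data characteristics):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df = 35                                            # nominal chi-square df
t_ml = 1.25 * rng.chisquare(df, size=2000)         # mimics T_ML over-rejecting

c = df / t_ml.mean()                               # Bartlett-style rescaling factor
t_corrected = c * t_ml                             # corrected statistic: mean ≈ df

crit = stats.chi2.ppf(0.95, df)                    # 5% critical value
print("raw type I error:      ", np.mean(t_ml > crit))
print("corrected type I error:", np.mean(t_corrected > crit))
```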
NASA Astrophysics Data System (ADS)
Ferrari, Ulisse
A maximum entropy model provides the least constrained probability distribution that reproduces the experimental averages of a set of observables. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show how the steepest descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space which relies only on dataset properties and does not require large computational efforts. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameter posterior, avoids both under- and over-fitting along all directions of the parameter space. Through the learning of pairwise Ising models from recordings of a large population of retina neurons, we show how our algorithm outperforms the steepest descent method. This research was supported by a grant from the Human Brain Project (HBP CLAP).
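The talk's algorithm rectifies the parameter space of pairwise Ising models fitted via Gibbs sampling; as a minimal runnable illustration of the underlying idea, preconditioned ("rectified") likelihood ascent, here is the independent-spin case, where the model means and the Fisher curvature are available in closed form (all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
data = (rng.random((5000, 20)) < 0.2).astype(float)   # binary "spike" data
f_data = data.mean(0)                                  # empirical means to match

theta = np.zeros(20)
for _ in range(200):
    p = 1 / (1 + np.exp(-theta))        # model means for independent spins
    grad = f_data - p                   # moment-matching gradient of log-likelihood
    fisher_diag = p * (1 - p) + 1e-6    # local curvature, used to rectify the step
    theta += grad / fisher_diag         # preconditioned (rectified) ascent

# Convergence check: model means reproduce the empirical means.
print(np.allclose(1 / (1 + np.exp(-theta)), f_data, atol=1e-4))
```

In the full pairwise model the means and curvature must themselves be estimated by Gibbs sampling, which is exactly the stochastic setting analyzed in the abstract.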
A Quantum Shuffling Game for Teaching Statistical Mechanics
ERIC Educational Resources Information Center
Black, P. J.; And Others
1971-01-01
A game simulating an Einstein model of a crystal producing a Boltzmann distribution. Computer-made films present the results with large distributions showing heat flow and some applications to entropy. (TS)
Evaluating Arctic warming mechanisms in CMIP5 models
NASA Astrophysics Data System (ADS)
Franzke, Christian L. E.; Lee, Sukyoung; Feldstein, Steven B.
2017-05-01
Arctic warming is one of the most striking signals of global warming. The Arctic is one of the fastest-warming regions on Earth and thus constitutes a good test bed to evaluate the ability of climate models to reproduce the physics and dynamics involved in Arctic warming. Different physical and dynamical mechanisms have been proposed to explain Arctic amplification. These mechanisms include the surface albedo feedback and poleward sensible and latent heat transport processes. During the winter season, when Arctic amplification is most pronounced, the first mechanism relies on an enhancement in upward surface heat flux, while the second mechanism does not. In these mechanisms, it has been proposed that downward infrared radiation (IR) plays a role to a varying degree. Here, we show that the current generation of CMIP5 climate models all reproduce Arctic warming and there are high pattern correlations—typically greater than 0.9—between the surface air temperature (SAT) trend and the downward IR trend. However, we find that there are two groups of CMIP5 models: one with small pattern correlations between the Arctic SAT trend and the surface vertical heat flux trend (Group 1), and the other with large correlations (Group 2) between the same two variables. The Group 1 models exhibit higher pattern correlations between Arctic SAT and 500 hPa geopotential height trends than do the Group 2 models. These findings suggest that Arctic warming in Group 1 models is more closely related to changes in the large-scale atmospheric circulation, whereas in Group 2, the albedo feedback effect plays a more important role. Interestingly, while Group 1 models have a warm or weak bias in their Arctic SAT, Group 2 models show large cold biases. This stark difference in model bias leads us to hypothesize that for a given model, the dominant Arctic warming mechanism and trend may be dependent on the bias of the model mean state.
Dynamics of a neuron model in different two-dimensional parameter-spaces
NASA Astrophysics Data System (ADS)
Rech, Paulo C.
2011-03-01
We report some two-dimensional parameter-space diagrams numerically obtained for the multi-parameter Hindmarsh-Rose neuron model. Several different parameter planes are considered, and we show that regardless of the combination of parameters, a typical scenario is preserved: for every choice of two parameters, the parameter space presents a comb-shaped chaotic region immersed in a large periodic region. We also show that there exist regions close to these chaotic regions, separated by the comb teeth, that organize themselves in period-adding bifurcation cascades.
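For reference, a sketch of the Hindmarsh-Rose equations with the classic parameter values (an assumed illustrative choice; the paper scans pairs of these parameters over a grid and classifies the resulting attractors to build the diagrams):

```python
import numpy as np
from scipy.integrate import solve_ivp

def hindmarsh_rose(t, u, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, xr=-8/5, I=3.0):
    x, y, z = u                      # membrane potential, fast and slow variables
    return [y - a * x**3 + b * x**2 - z + I,
            c - d * x**2 - y,
            r * (s * (x - xr) - z)]

sol = solve_ivp(hindmarsh_rose, (0, 2000), [-1.6, 0.0, 0.0], max_step=0.1)
x = sol.y[0][sol.t > 500]            # discard the initial transient
print("membrane-potential range:", x.min(), x.max())
```

A parameter-space diagram is then obtained by repeating this integration over a two-parameter grid (e.g. b versus I) and colouring each point by a measure that separates periodic from chaotic attractors, such as the largest Lyapunov exponent.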
Assessing modelled spatial distributions of ice water path using satellite data
NASA Astrophysics Data System (ADS)
Eliasson, S.; Buehler, S. A.; Milz, M.; Eriksson, P.; John, V. O.
2010-05-01
The climate models used in the IPCC AR4 show large differences in monthly mean cloud ice. The most valuable source of information that can be used to potentially constrain the models is global satellite data. For this, the data sets must be long enough to capture the inter-annual variability of Ice Water Path (IWP). A clear distinction between ice categories in satellite retrievals, as desired from a model point of view, is currently impossible. However, long-term satellite data sets may still be used to indicate the climatology of IWP spatial distribution. We evaluated satellite data sets from CloudSat, PATMOS-x, ISCCP, MODIS and MSPPS in terms of monthly mean IWP, to determine which data sets can be used to evaluate the climate models. IWP data from the CloudSat cloud profiling radar provide the most advanced data set on clouds. As CloudSat data are too short to evaluate the model data directly, they were mainly used here to evaluate IWP from the other satellite data sets. ISCCP and MSPPS were shown to have comparatively low IWP values. ISCCP shows particularly low values in the tropics, while MSPPS has particularly low values outside the tropics. MODIS and PATMOS-x were in closest agreement with CloudSat in terms of magnitude and spatial distribution, with MODIS being the best of the two. As PATMOS-x extends over more than 25 years and is in fairly close agreement with CloudSat, it was chosen as the reference data set for the model evaluation. In general there are large discrepancies between the individual climate models, and all of the models show problems in reproducing the observed spatial distribution of cloud ice. Comparisons consistently showed that ECHAM-5 is the GCM from IPCC AR4 closest to satellite observations.
Using spiral chain models for study of nanoscroll structures
NASA Astrophysics Data System (ADS)
Savin, Alexander V.; Sakovich, Ruslan A.; Mazo, Mikhail A.
2018-04-01
Molecular nanoribbons with different chemical structures can form scrolled packings possessing outstanding properties and application perspectives due to their morphology. Here, we propose a simplified two-dimensional model of the molecular chain that allows us to describe the scrolled packings of molecular nanoribbons of various structures as a spirally packed chain. The model allows us to obtain the possible stationary states of single-layer nanoribbon scrolls of graphene, graphane, fluorographene, fluorographane (graphene hydrogenated on one side and fluorinated on the other side), graphone C4H (graphene partially hydrogenated on one side), and fluorographone C4F. The obtained states and the states of the scrolls found through all-atomic models coincide with good accuracy. We show the stability of scrolled packings and calculate the dependence of the energy, the number of coils, and the inner and outer radii of the scrolled packing on the nanoribbon length. It is shown that a scrolled packing is the most energetically favorable conformation for long nanoribbons of graphene, graphane, fluorographene, and fluorographane. A double-scrolled packing, in which the nanoribbon is symmetrically rolled into a scroll from opposite ends, is more advantageous for longer nanoribbons of graphone and fluorographone. We show the possibility of the existence of scrolled packings for nanoribbons of fluorographene and the existence of two different types of scrolls for nanoribbons of fluorographane, which correspond to the left and right Archimedean spirals of the chain model. The simplicity of the proposed model allows us to consider the dynamics of molecular nanoribbon scrolls of sufficiently large lengths and at sufficiently large time intervals.
Spatiotemporal patterns of terrestrial gross primary production: A review
NASA Astrophysics Data System (ADS)
Anav, Alessandro; Friedlingstein, Pierre; Beer, Christian; Ciais, Philippe; Harper, Anna; Jones, Chris; Murray-Tortarolo, Guillermo; Papale, Dario; Parazoo, Nicholas C.; Peylin, Philippe; Piao, Shilong; Sitch, Stephen; Viovy, Nicolas; Wiltshire, Andy; Zhao, Maosheng
2015-09-01
Great advances have been made in the last decade in quantifying and understanding the spatiotemporal patterns of terrestrial gross primary production (GPP) with ground, atmospheric, and space observations. However, although global GPP estimates exist, each data set relies upon assumptions and none of the available data are based only on measurements. Consequently, there is no consensus on the global total GPP and large uncertainties exist in its benchmarking. The objective of this review is to assess how the different available data sets predict the spatiotemporal patterns of GPP, identify the differences among data sets, and highlight the main advantages/disadvantages of each data set. We compare GPP estimates for the historical period (1990-2009) from two observation-based data sets (Model Tree Ensemble and Moderate Resolution Imaging Spectroradiometer) to coupled carbon-climate models and terrestrial carbon cycle models from the Fifth Climate Model Intercomparison Project and TRENDY projects and to a new hybrid data set (CARBONES). Results show a large range in the mean global GPP estimates. The different data sets broadly agree on GPP seasonal cycle in terms of phasing, while there is still discrepancy on the amplitude. For interannual variability (IAV) and trends, there is a clear separation between the observation-based data that show little IAV and trend, while the process-based models have large GPP variability and significant trends. These results suggest that there is an urgent need to improve observation-based data sets and develop carbon cycle modeling with processes that are currently treated either very simplistically to correctly estimate present GPP and better quantify the future uptake of carbon dioxide by the world's vegetation.
Modelling of pollen dispersion in the atmosphere: evaluation with a continuous 1β+1δ lidar
NASA Astrophysics Data System (ADS)
Sicard, Michaël; Izquierdo, Rebeca; Jorba, Oriol; Alarcón, Marta; Belmonte, Jordina; Comerón, Adolfo; De Linares, Concepción; Baldasano, José Maria
2018-04-01
Pollen allergenicity plays an important role in human health and wellness. It is thus of large public interest to increase our knowledge of pollen grain behavior in the atmosphere (source, emission, processes involved during transport, etc.) at fine temporal and spatial scales. First simulations with the Barcelona Supercomputing Center NMMB/BSC-CTM model of Platanus and Pinus dispersion in the atmosphere were performed during a 5-day pollination event observed in Barcelona, Spain, between 27 and 31 March 2015. The simulations are compared to vertical profiles measured with the continuous Barcelona Micro Pulse Lidar system. First results show that the vertical distribution is well reproduced by the model in shape, but not in intensity, with the model largely underestimating it in the afternoon. Guidelines are proposed to improve the simulation of airborne pollen dispersion by numerical prediction models.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention on networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by an affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. Once the system decomposition is finished, the online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
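A minimal sketch (synthetic data, not the Tennessee Eastman setup) of the two steps described above: cluster the controlled variables with affinity propagation, then rank candidate inputs for each cluster via canonical correlation analysis; all variable counts and signals are assumptions:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
g1, g2 = rng.normal(size=(500, 1)), rng.normal(size=(500, 1))
Y = np.hstack([g1 + 0.3 * rng.normal(size=(500, 4)),    # outputs driven by g1
               g2 + 0.3 * rng.normal(size=(500, 4))])   # outputs driven by g2
U = np.hstack([g1, g2, rng.normal(size=(500, 10))])     # candidate inputs

# Step 1: partition controlled variables, using their correlation as similarity.
ap = AffinityPropagation(affinity="precomputed", random_state=0)
labels = ap.fit_predict(np.corrcoef(Y.T))

# Step 2: for each subsystem, score candidate inputs by their CCA weights.
for k in np.unique(labels):
    cca = CCA(n_components=1).fit(U, Y[:, labels == k])
    w = np.abs(cca.x_weights_[:, 0])
    print(f"subsystem {k}: top inputs {np.argsort(w)[::-1][:3].tolist()}")
```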
Forcing, feedbacks and climate sensitivity in CMIP5 coupled atmosphere-ocean climate models
Andrews, Timothy; Gregory, Jonathan M.; Webb, Mark J.; ...
2012-05-15
We quantify forcing and feedbacks across available CMIP5 coupled atmosphere-ocean general circulation models (AOGCMs) by analysing simulations forced by an abrupt quadrupling of atmospheric carbon dioxide concentration. This is the first application of the linear forcing-feedback regression analysis of Gregory et al. (2004) to an ensemble of AOGCMs. The range of equilibrium climate sensitivity is 2.1–4.7 K. Differences in cloud feedbacks continue to be important contributors to this range. Some models show small deviations from a linear dependence of top-of-atmosphere radiative fluxes on global surface temperature change. We show that this phenomenon largely arises from shortwave cloud radiative effects over the ocean and is consistent with independent estimates of forcing using fixed sea-surface temperature methods. Moreover, we suggest that future research should focus more on understanding transient climate change, including any time-scale dependence of the forcing and/or feedback, rather than on the equilibrium response to large instantaneous forcing.
Skilful Seasonal Predictions of Summer European Rainfall
NASA Astrophysics Data System (ADS)
Dunstone, Nick; Smith, Doug; Scaife, Adam; Hermanson, Leon; Fereday, David; O'Reilly, Chris; Stirling, Alison; Eade, Rosie; Gordon, Margaret; MacLachlan, Craig; Woollings, Tim; Sheen, Katy; Belcher, Stephen
2018-04-01
Year-to-year variability in Northern European summer rainfall has profound societal and economic impacts; however, current seasonal forecast systems show no significant forecast skill. Here we show that skillful predictions are possible (r ~ 0.5, p < 0.001) using the latest high-resolution Met Office near-term prediction system over 1960-2017. The model predictions capture both low-frequency changes (e.g., wet summers 2007-2012) and some of the large individual events (e.g., dry summer 1976). Skill is linked to predictable North Atlantic sea surface temperature variability changing the supply of water vapor into Northern Europe and so modulating convective rainfall. However, dynamical circulation variability is not well predicted in general—although some interannual skill is found. Due to the weak amplitude of the forced model signal (likely caused by missing or weak model responses), very large ensembles (>80 members) are required for skillful predictions. This work is promising for the development of European summer rainfall climate services.
Russell, G.M.; Goodwin, C.R.
1987-01-01
Results of a two-dimensional, vertically averaged, computer simulation model of the Loxahatchee River estuary show that under typical low freshwater inflow and vertically well mixed conditions, water circulation is dominated by freshwater inflow rather than by tidal influence. The model can simulate tidal flow and circulation in the Loxahatchee River estuary, but is limited to low-flow and vertically well mixed conditions. Computed patterns of residual water transport show a consistent seaward flow from the northwest fork through the central embayment and out Jupiter Inlet to the Atlantic Ocean. A large residual seaward flow was computed from the North Intracoastal Waterway to the inlet channel. Although the tide produces large flood and ebb flows in the estuary, tide-induced residual transport rates are low in comparison with freshwater-induced residual transport. Model investigations of partly mixed or stratified conditions in the estuary need to await development of systems capable of simulating three-dimensional flow patterns. (Author's abstract)
The cost of conservative synchronization in parallel discrete event simulations
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
The performance of a synchronous conservative parallel discrete-event simulation protocol is analyzed. The class of simulation models considered is oriented around a physical domain and possesses a limited ability to predict future behavior. A stochastic model is used to show that as the volume of simulation activity in the model increases relative to a fixed architecture, the complexity of the average per-event overhead due to synchronization, event list manipulation, lookahead calculations, and processor idle time approaches the complexity of the average per-event overhead of a serial simulation. The method is therefore within a constant factor of optimal. The analysis demonstrates that on large problems (those for which parallel processing is ideally suited) there is often enough parallel workload so that processors are not usually idle. The viability of the method is also demonstrated empirically, showing how good performance is achieved on large problems using a thirty-two node Intel iPSC/2 distributed memory multiprocessor.
NASA Astrophysics Data System (ADS)
Meng, X.; Lyu, S.; Zhang, T.; Zhao, L.; Li, Z.; Han, B.; Li, S.; Ma, D.; Chen, H.; Ao, Y.; Luo, S.; Shen, Y.; Guo, J.; Wen, L.
2018-04-01
Systematic cold biases exist in simulations of 2 m air temperature over the Tibetan Plateau (TP) when using regional climate models and global atmospheric general circulation models. We updated the albedo in the Weather Research and Forecasting (WRF) Model lower boundary condition using the Global LAnd Surface Satellite Moderate-Resolution Imaging Spectroradiometer albedo products and demonstrated an evident improvement in the cold temperature biases in the TP. It is the large overestimation of albedo in winter and spring in the WRF model that resulted in the large cold temperature biases. The overestimated albedo was caused by the simulated precipitation biases and the over-parameterization of snow albedo. Furthermore, light-absorbing aerosols can result in a large reduction of albedo over snow and ice cover. The results suggest the necessity of developing a snow albedo parameterization based on observations in the TP, where snow cover and melting are very different from those in other low-elevation regions, and the influence of aerosols should be considered as well. Beyond the definition of snow albedo, our results point to an urgent need to improve precipitation simulation in the TP.
A new model for extinction and recolonization in two dimensions: quantifying phylogeography.
Barton, Nicholas H; Kelleher, Jerome; Etheridge, Alison M
2010-09-01
Classical models of gene flow fail in three ways: they cannot explain large-scale patterns; they predict much more genetic diversity than is observed; and they assume that loosely linked genetic loci evolve independently. We propose a new model that deals with these problems. Extinction events kill some fraction of individuals in a region. These are replaced by offspring from a small number of parents, drawn from the preexisting population. This model of evolution forwards in time corresponds to a backwards model, in which ancestral lineages jump to a new location if they are hit by an event, and may coalesce with other lineages that are hit by the same event. We derive an expression for the identity in allelic state, and show that, over scales much larger than the largest event, this converges to the classical value derived by Wright and Malécot. However, rare events that cover large areas cause low genetic diversity, large-scale patterns, and correlations in ancestry between unlinked loci. © 2010 The Author(s). Journal compilation © 2010 The Society for the Study of Evolution.
Further Investigation of the Support System Effects and Wing Twist on the NASA Common Research Model
NASA Technical Reports Server (NTRS)
Rivers, Melissa B.; Hunter, Craig A.; Campbell, Richard L.
2012-01-01
An experimental investigation of the NASA Common Research Model was conducted in the NASA Langley National Transonic Facility and NASA Ames 11-foot Transonic Wind Tunnel Facility for use in the Drag Prediction Workshop. As data from the experimental investigations was collected, a large difference in moment values was seen between the experiment and computational data from the 4th Drag Prediction Workshop. This difference led to a computational assessment to investigate model support system interference effects on the Common Research Model. The results from this investigation showed that the addition of the support system to the computational cases did increase the pitching moment so that it more closely matched the experimental results, but there was still a large discrepancy in pitching moment. This large discrepancy led to an investigation into the shape of the as-built model, which in turn led to a change in the computational grids and re-running of all the previous support system cases. The results of these cases are the focus of this paper.
Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.
2016-01-01
Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821
Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A
2016-11-23
Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.
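To make the notion of a logic model evaluated at an operating point concrete, here is a toy evaluation of a two-gene OR rule against binarised drug response (gene names, effect sizes, and the sensitivity threshold are invented for illustration; LOBICO itself infers the optimal rule by optimization):

```python
import numpy as np

rng = np.random.default_rng(0)
n_lines = 200
# Hypothetical binary mutation features for a panel of cell lines.
mut = {g: rng.random(n_lines) < 0.25 for g in ("TP53", "KRAS", "BRAF")}
# Synthetic continuous drug response: lower IC50 = more sensitive.
ic50 = rng.normal(0.0, 1.0, n_lines) - 1.5 * (mut["BRAF"] | mut["KRAS"])

# Candidate 2-term OR model: predict sensitivity if BRAF OR KRAS is mutated.
pred = mut["BRAF"] | mut["KRAS"]
sensitive = ic50 < -0.8          # binarisation around a chosen operating point

tp = np.sum(pred & sensitive);  fn = np.sum(~pred & sensitive)
fp = np.sum(pred & ~sensitive); tn = np.sum(~pred & ~sensitive)
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```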
A generic discrete-event simulation model for outpatient clinics in a large public hospital.
Weerawat, Waressara; Pichitlamken, Juta; Subsombat, Peerapong
2013-01-01
The orthopedic outpatient department (OPD) ward in a large Thai public hospital is modeled using Discrete-Event Stochastic (DES) simulation. Key Performance Indicators (KPIs) are used to measure effects across various clinical operations during different shifts throughout the day. By considering various KPIs such as wait times to see doctors, percentage of patients who can see a doctor within a target time frame, and the time that the last patient completes their doctor consultation, bottlenecks are identified and resource-critical clinics can be prioritized. The simulation model quantifies the chronic, high patient congestion that is prevalent amongst Thai public hospitals with very high patient-to-doctor ratios. Our model can be applied across five different OPD wards by modifying the model parameters. Throughout this work, we show how DES models can be used as decision-support tools for hospital management.
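A minimal sketch of this kind of DES model using SimPy (arrival and consultation rates, staffing, and shift length are toy assumptions, not the hospital's calibrated parameters):

```python
import random
import simpy

random.seed(0)
waits = []

def patient(env, doctors):
    arrival = env.now
    with doctors.request() as req:
        yield req
        waits.append(env.now - arrival)                # KPI: wait to see a doctor
        yield env.timeout(random.expovariate(1 / 8))   # ~8 min consultation

def arrivals(env, doctors):
    while True:
        yield env.timeout(random.expovariate(1 / 3))   # ~1 patient every 3 min
        env.process(patient(env, doctors))

env = simpy.Environment()
doctors = simpy.Resource(env, capacity=3)              # hypothetical staffing
env.process(arrivals(env, doctors))
env.run(until=8 * 60)                                  # one 8-hour shift

print(f"mean wait {sum(waits)/len(waits):.1f} min over {len(waits)} patients")
```

Replacing the fixed parameters with shift-dependent schedules and per-clinic rates is what lets one generic model cover several OPD wards, as described above.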
NASA Technical Reports Server (NTRS)
Okong'o, Nora; Bellan, Josette
2005-01-01
Models for large eddy simulation (LES) are assessed on a database obtained from direct numerical simulations (DNS) of supercritical binary-species temporal mixing layers. The analysis is performed at the DNS transitional states for heptane/nitrogen, oxygen/hydrogen and oxygen/helium mixing layers. The incorporation of simplifying assumptions that are validated on the DNS database leads to a set of LES equations that requires only models for the subgrid scale (SGS) fluxes, which arise from filtering the convective terms in the DNS equations. Constant-coefficient versions of three different models for the SGS fluxes are assessed and calibrated. The Smagorinsky SGS-flux model shows poor correlations with the SGS fluxes, while the Gradient and Similarity models have high correlations, as well as good quantitative agreement with the SGS fluxes when the calibrated coefficients are used.
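For reference, standard constant-coefficient forms of the three SGS-flux closures named above (generic incompressible notation; the paper works with filtered supercritical-flow equations, so the exact forms and calibrated coefficients differ). Smagorinsky:
\[
\tau_{ij} - \tfrac{1}{3}\delta_{ij}\tau_{kk} = -2 (C_s \bar{\Delta})^2 |\bar{S}|\, \bar{S}_{ij}, \qquad |\bar{S}| = \sqrt{2\, \bar{S}_{ij} \bar{S}_{ij}} ;
\]
Gradient:
\[
\tau_{ij} = C_G\, \bar{\Delta}^2\, \frac{\partial \bar{u}_i}{\partial x_k} \frac{\partial \bar{u}_j}{\partial x_k} ;
\]
Similarity:
\[
\tau_{ij} = C_L \left( \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i \hat{\bar{u}}_j \right),
\]
where the overbar denotes the LES filter, the hat a coarser test filter, and the constants are the coefficients calibrated against the DNS database. The poor performance of the Smagorinsky model reported above reflects its purely dissipative eddy-viscosity form, whereas the Gradient and Similarity models can represent the local structure of the true SGS fluxes.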
Users matter : multi-agent systems model of high performance computing cluster users.
DOE Office of Scientific and Technical Information (OSTI.GOV)
North, M. J.; Hood, C. S.; Decision and Information Sciences
2005-01-01
High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.
Harrison, David A; Brady, Anthony R; Parry, Gareth J; Carpenter, James R; Rowan, Kathy
2006-05-01
To assess the performance of published risk prediction models in common use in adult critical care in the United Kingdom and to recalibrate these models in a large representative database of critical care admissions. Prospective cohort study. A total of 163 adult general critical care units in England, Wales, and Northern Ireland, during the period of December 1995 to August 2003. A total of 231,930 admissions, of which 141,106 met inclusion criteria and had sufficient data recorded for all risk prediction models. None. The published versions of the Acute Physiology and Chronic Health Evaluation (APACHE) II, APACHE II UK, APACHE III, Simplified Acute Physiology Score (SAPS) II, and Mortality Probability Models (MPM) II were evaluated for discrimination and calibration by means of a combination of appropriate statistical measures recommended by an expert steering committee. All models showed good discrimination (the c index varied from 0.803 to 0.832) but imperfect calibration. Recalibration of the models, which was performed by both the Cox method and re-estimating coefficients, led to improved discrimination and calibration, although all models still showed significant departures from perfect calibration. Risk prediction models developed in another country require validation and recalibration before being used to provide risk-adjusted outcomes within a new country setting. Periodic reassessment is beneficial to ensure calibration is maintained.
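A minimal sketch of the Cox-style recalibration mentioned above: refit an intercept and slope on the logit of each published model's predicted risk (the simulated risks and the degree of miscalibration below are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical predicted mortality risks from a published model.
p_old = np.clip(rng.beta(2, 5, 10000), 1e-6, 1 - 1e-6)
logit = np.log(p_old / (1 - p_old))
# Simulated outcomes from a miscalibrated truth: slope 0.8, intercept -0.3.
y = rng.random(10000) < 1 / (1 + np.exp(-(0.8 * logit - 0.3)))

# Recalibration: logistic regression of outcome on the logit of the old risk.
recal = LogisticRegression().fit(logit.reshape(-1, 1), y)
p_new = recal.predict_proba(logit.reshape(-1, 1))[:, 1]

print(f"intercept {recal.intercept_[0]:.2f}, slope {recal.coef_[0, 0]:.2f}")
print(f"observed rate {y.mean():.3f}, recalibrated mean risk {p_new.mean():.3f}")
```

Re-estimating all model coefficients, the study's second strategy, is the same idea taken further: the full set of physiology variables is refit on the new case mix rather than just these two calibration parameters.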
NASA Astrophysics Data System (ADS)
Zettergren, M. D.; Snively, J. B.; Inchin, P.; Komjathy, A.; Verkhoglyadova, O. P.
2017-12-01
Ocean and solid earth responses during earthquakes are a significant source of large amplitude acoustic and gravity waves (AGWs) that perturb the overlying ionosphere-thermosphere (IT) system. IT disturbances are routinely detected following large earthquakes (M > 7.0) via GPS total electron content (TEC) observations, which often show acoustic wave (~3-4 min periods) and gravity wave (~10-15 min) signatures with amplitudes of 0.05-2 TECU. In cases of very large earthquakes (M > 8.0) the persisting acoustic waves are estimated to have 100-200 m/s compressional velocities in the conducting ionospheric E and F-regions and should generate significant dynamo currents and magnetic field signatures. Indeed, some recent reports (e.g., Hao et al., 2013, JGR, 118, 6) show evidence for magnetic fluctuations, which appear to be related to AGWs, following recent large earthquakes. However, very little quantitative information is available on: (1) the detailed spatial and temporal dependence of these magnetic fluctuations, which are usually observed at a small number of irregularly arranged stations, and (2) the relation of these signatures to TEC perturbations in terms of relative amplitudes, frequency, and timing for different events. This work investigates the space- and time-dependent behavior of both TEC and magnetic fluctuations following recent large earthquakes, with the aim to improve physical understanding of these perturbations via detailed, high-resolution, two- and three-dimensional modeling case studies with a coupled neutral atmospheric and ionospheric model, MAGIC-GEMINI (Zettergren and Snively, 2015, JGR, 120, 9). We focus on cases inspired by the large Chilean earthquakes of the past decade (viz., the M > 8.0 earthquakes from 2010 and 2015) to constrain the sources for the model, i.e., size, frequency, amplitude, and timing, based on available information from ocean buoy and seismometer data. TEC data are used to validate source amplitudes and to constrain background ionospheric conditions. Preliminary comparisons against available magnetic field and TEC data from these events provide evidence, albeit limited and localized, that supports the validity of the spatially-resolved simulation results.
Strong control of Southern Ocean cloud reflectivity by ice-nucleating particles.
Vergara-Temprado, Jesús; Miltenberger, Annette K; Furtado, Kalli; Grosvenor, Daniel P; Shipway, Ben J; Hill, Adrian A; Wilkinson, Jonathan M; Field, Paul R; Murray, Benjamin J; Carslaw, Ken S
2018-03-13
Large biases in climate model simulations of cloud radiative properties over the Southern Ocean cause large errors in modeled sea surface temperatures, atmospheric circulation, and climate sensitivity. Here, we combine cloud-resolving model simulations with estimates of the concentration of ice-nucleating particles in this region to show that our simulated Southern Ocean clouds reflect far more radiation than predicted by global models, in agreement with satellite observations. Specifically, we show that the clouds that are most sensitive to the concentration of ice-nucleating particles are low-level mixed-phase clouds in the cold sectors of extratropical cyclones, which have previously been identified as a main contributor to the Southern Ocean radiation bias. The very low ice-nucleating particle concentrations that prevail over the Southern Ocean strongly suppress cloud droplet freezing, reduce precipitation, and enhance cloud reflectivity. The results help explain why a strong radiation bias occurs mainly in this remote region away from major sources of ice-nucleating particles. The results present a substantial challenge to climate models to be able to simulate realistic ice-nucleating particle concentrations and their effects under specific meteorological conditions. Copyright © 2018 the Author(s). Published by PNAS.
Modeling Image Patches with a Generic Dictionary of Mini-Epitomes
Papandreou, George; Chen, Liang-Chieh; Yuille, Alan L.
2015-01-01
The goal of this paper is to question the necessity of features like SIFT in categorical visual recognition tasks. As an alternative, we develop a generative model for the raw intensity of image patches and show that it can support image classification performance on par with optimized SIFT-based techniques in a bag-of-visual-words setting. Key ingredient of the proposed model is a compact dictionary of mini-epitomes, learned in an unsupervised fashion on a large collection of images. The use of epitomes allows us to explicitly account for photometric and position variability in image appearance. We show that this flexibility considerably increases the capacity of the dictionary to accurately approximate the appearance of image patches and support recognition tasks. For image classification, we develop histogram-based image encoding methods tailored to the epitomic representation, as well as an “epitomic footprint” encoding which is easy to visualize and highlights the generative nature of our model. We discuss in detail computational aspects and develop efficient algorithms to make the model scalable to large tasks. The proposed techniques are evaluated with experiments on the challenging PASCAL VOC 2007 image classification benchmark. PMID:26321859
Strongly enhanced thermal transport in a lightly doped Mott insulator at low temperature.
Zlatić, V; Freericks, J K
2012-12-28
We show how a lightly doped Mott insulator has hugely enhanced electronic thermal transport at low temperature. It displays universal behavior independent of the interaction strength when the carriers can be treated as nondegenerate fermions and a nonuniversal "crossover" region where the Lorenz number grows to large values, while still maintaining a large thermoelectric figure of merit. The electron dynamics are described by the Falicov-Kimball model, which is solved for arbitrarily large on-site correlation with a dynamical mean-field theory algorithm on a Bethe lattice. We show how these results are generic for lightly doped Mott insulators as long as the renormalized Fermi liquid scale is pushed to very low temperature and the system is not magnetically ordered.
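For reference, the two transport measures invoked above have their standard definitions (with \(\kappa\) the thermal conductivity, \(\sigma\) the electrical conductivity, and \(S\) the thermopower):
\[
L = \frac{\kappa}{\sigma T}, \qquad ZT = \frac{S^2 \sigma T}{\kappa},
\]
so a Lorenz number \(L\) growing well above the Wiedemann-Franz value \(L_0 = (\pi^2/3)(k_B/e)^2\) signals strongly non-Fermi-liquid heat transport, while \(ZT\) can nonetheless remain large if the thermopower \(S\) grows as well.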
An integrated approach to reconstructing genome-scale transcriptional regulatory networks
Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.; ...
2015-02-27
Transcriptional regulatory networks (TRNs) program cells to dynamically alter their gene expression in response to changing internal or environmental conditions. In this study, we develop a novel workflow for generating large-scale TRN models that integrates comparative genomics data, global gene expression analyses, and intrinsic properties of transcription factors (TFs). An assessment of this workflow using benchmark datasets for the well-studied γ-proteobacterium Escherichia coli showed that it outperforms expression-based inference approaches, having a significantly larger area under the precision-recall curve. Further analysis indicated that this integrated workflow captures different aspects of the E. coli TRN than expression-based approaches, potentially making them highly complementary. We leveraged this new workflow and observations to build a large-scale TRN model for the α-proteobacterium Rhodobacter sphaeroides that comprises 120 gene clusters, 1211 genes (including 93 TFs), 1858 predicted protein-DNA interactions and 76 DNA binding motifs. We found that ~67% of the predicted gene clusters in this TRN are enriched for functions ranging from photosynthesis or central carbon metabolism to environmental stress responses. We also found that members of many of the predicted gene clusters were consistent with prior knowledge in R. sphaeroides and/or other bacteria. Experimental validation of predictions from this R. sphaeroides TRN model showed that high precision and recall was also obtained for TFs involved in photosynthesis (PpsR), carbon metabolism (RSP_0489) and iron homeostasis (RSP_3341). In addition, this integrative approach enabled generation of TRNs with increased information content relative to R. sphaeroides TRN models built via other approaches. We also show how this approach can be used to simultaneously produce TRN models for each related organism used in the comparative genomics analysis. Our results highlight the advantages of integrating comparative genomics of closely related organisms with gene expression data to assemble large-scale TRN models with high-quality predictions.
Fluidized-bed reactor modeling for production of silicon by silane pyrolysis
NASA Technical Reports Server (NTRS)
Dudukovic, M. P.; Ramachandran, P. A.; Lai, S.
1986-01-01
An ideal backmixed reactor model (CSTR) and a fluidized bed bubbling reactor model (FBBR) were developed for silane pyrolysis. Silane decomposition is assumed to occur via two pathways: homogeneous decomposition and heterogeneous chemical vapor deposition (CVD). Both models account for homogeneous and heterogeneous silane decomposition, homogeneous nucleation, coagulation and growth by diffusion of fines, scavenging of fines by large particles, elutriation of fines, and CVD growth of large seed particles. At present the models do not account for attrition. A preliminary comparison of the model predictions with experimental results shows reasonable agreement. The CSTR model, with no adjustable parameters, yields a lower bound on the fines formed and an upper estimate of production rates. The FBBR model overpredicts the formation of fines but could be matched to experimental data by adjusting the unknown jet-emulsion exchange coefficients. The models clearly indicate that, in order to suppress the formation of fines (smoke), good gas-solid contacting in the grid region must be achieved and the formation of bubbles suppressed.
NASA Astrophysics Data System (ADS)
Sun, T.; Fujiwara, T.; Kodaira, S.; Wang, K.; He, J.
2014-12-01
Large coseismic motion (up to ~31 m) of seafloor GPS sites during the 2011 M 9 Tohoku earthquake suggests large rupture at shallow depths of the megathrust. However, a compilation of all published rupture models, constrained by the near-field seafloor geodetic observations and various other datasets, shows large uncertainties in the slip of the near-trench (within ~50 km of the trench) part of the megathrust. Repeated multi-beam bathymetry surveys covering the trench axis, carried out by the Japan Agency for Marine-Earth Science and Technology, for the first time recorded coseismic deformation at the trench in a megathrust earthquake. Previous studies of the differential bathymetry (DB) before and after the earthquake considered only the rigid-body translation component of the upper-plate deformation when determining coseismic fault slip. In this work, we construct synthetic differential bathymetry (SDB) using an elastic deformation model and compare it with the observed DB. We use a 3-D elastic finite element model with the actual fault geometry of the Japan trench subduction zone, allowing the rupture to breach the trench. The SDB can predict short-wavelength variations in the observed DB well. Our tests using different coseismic slip models show that the internal elastic deformation of the hanging wall plays an important role in generating DB. Comparing the SDB with the observed DB suggests that the largest slip is located within ~50 km of the trench. The SDB proves to be the most effective tool to evaluate the performance of different rupture models in predicting near-trench slip. Our SDB work will further explore the updip slip variation. The SDB may help to constrain the slip gradient in the updip direction and to determine whether the large shallow slip in the Tohoku earthquake plateaued at the trench or before reaching the trench. Resolving these issues will provide some of the key tests for the various competing models that have been proposed to explain the large shallow rupture in this event.
Large Interstellar Polarisation Survey: The Dust Elongation When Combining Optical-Submm Polarisation
NASA Astrophysics Data System (ADS)
Siebenmorgen, Ralf; Voschinnikov, N.; Bagnulo, S.; Cox, N.; Cami, J.
2017-10-01
The Planck mission has shown that dust properties of the diffuse ISM vary on large scales, and here we present variability on small scales. We present FORS spectro-polarimetry obtained by the Large Interstellar Polarisation Survey along 60 sight-lines. We fit these data, combined with extinction data, with a silicate and carbon dust model with grain sizes ranging from the molecular to the sub-micron domain. Large silicates of prolate shape account for the observed polarisation. For 37 sight-lines we complement our data set with UVES high-resolution spectra that establish the presence of single or multiple clouds along individual sight-lines. We find correlations between the extinction and Serkowski parameters and the dust model, and we find that the presence of multiple clouds depolarises the incoming radiation. However, there is a degeneracy in the dust model between alignment efficiency and the elongation of the grains. This degeneracy can be broken by combining polarisation data from the optical to the submm. This is of wide general interest as it improves the accuracy of derived dust masses. We show that a flat IR/submm polarisation spectrum with substantial polarisation is predicted by dust models.
Predicting spatio-temporal failure in large scale observational and micro scale experimental systems
NASA Astrophysics Data System (ADS)
de las Heras, Alejandro; Hu, Yong
2006-10-01
Forecasting has become an essential part of modern thought, but the practical limitations are still manifold. We addressed future rates of change by comparing models that take time into account and models that focus more on space. Cox regression confirmed that linear change can be safely assumed in the short term. Spatially explicit Poisson regression provided a ceiling value for the number of deforestation spots. With several observed and estimated rates available, we chose to forecast using the more robust assumptions. A Markov-chain cellular automaton thus projected 5-year deforestation in the Amazonian Arc of Deforestation, showing that even a stable rate of change would largely deplete the forest area. More generally, the resolution and implementation of existing models could explain many of the modelling difficulties still affecting forecasting.
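A minimal sketch of a Markov-chain cellular automaton of the kind used for the 5-year projection: each forest cell is cleared with a probability mixing a fixed Markov rate with neighbourhood pressure. The rates, stencil, and grid are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(2)
FOREST, CLEARED = 1, 0
grid = (rng.random((100, 100)) < 0.9).astype(int)   # 90% forest initially
base_rate, neigh_weight = 0.01, 0.05                # assumed annual rates

def step(grid):
    cleared = 1 - grid                              # 4-neighbour clearing pressure
    pressure = (np.roll(cleared, 1, 0) + np.roll(cleared, -1, 0) +
                np.roll(cleared, 1, 1) + np.roll(cleared, -1, 1)) / 4.0
    p = base_rate + neigh_weight * pressure         # per-cell transition probability
    out = grid.copy()
    out[(grid == FOREST) & (rng.random(grid.shape) < p)] = CLEARED
    return out

for year in range(5):                               # 5-year projection
    grid = step(grid)
print("forest fraction after 5 years:", grid.mean())
```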
Effect of recent popularity on heat-conduction based recommendation models
NASA Astrophysics Data System (ADS)
Li, Wen-Jun; Dong, Qiang; Shi, Yang-Bo; Fu, Yan; He, Jia-Lin
2017-05-01
Accuracy and diversity are two important measures in evaluating the performance of recommender systems. It has been demonstrated that the recommendation model inspired by the heat conduction process has high diversity yet low accuracy. Many variants have been introduced to improve the accuracy while keeping high diversity, most of which regard the current node degree of an item as its popularity. In this way, however, a few outdated items of large degree may be recommended to an enormous number of users. In this paper, we take recent popularity (recently increased item degrees) into account in the heat-conduction based methods and accordingly propose improved recommendation models. Experimental results on two benchmark data sets show that the accuracy can be largely improved compared with the original models while keeping their high diversity.
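For context, the classic heat-conduction score averages a target user's collected items first over users and then over items, dividing by the item degree in the last step; the paper's idea is to replace that degree with a recently gained ("recent popularity") degree. The sketch below illustrates the mechanism only; the authors' exact weighting may differ, and all data are synthetic.

```python
import numpy as np

def heat_conduction(A, item_deg):
    """A: users x items 0/1 matrix; returns item scores for every user."""
    user_deg = A.sum(axis=1, keepdims=True)            # k_u
    H = (A @ A.T) / np.maximum(user_deg.T, 1)          # item -> user averaging
    scores = (H @ A) / np.maximum(item_deg, 1)         # user -> item averaging
    return scores * (1 - A)                            # mask already-collected items

rng = np.random.default_rng(3)
A = (rng.random((50, 200)) < 0.05).astype(float)
total_deg = A.sum(axis=0)                              # classic popularity
recent_deg = np.minimum(total_deg, rng.integers(1, 4, 200))  # assumed recent gain
print(heat_conduction(A, recent_deg)[0].argsort()[-5:])      # top-5 items for user 0
```

Dividing by the recent rather than the total degree penalizes items that are still accumulating links, which is how the variant avoids pushing outdated high-degree items to everyone.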
Large-Scale Chaos and Fluctuations in Active Nematics
NASA Astrophysics Data System (ADS)
Ngo, Sandrine; Peshkov, Anton; Aranson, Igor S.; Bertin, Eric; Ginelli, Francesco; Chaté, Hugues
2014-07-01
We show that dry active nematics, e.g., collections of shaken elongated granular particles, exhibit large-scale spatiotemporal chaos made of interacting dense, ordered, bandlike structures in a parameter region including the linear onset of nematic order. These results are obtained from the study of both the well-known (deterministic) hydrodynamic equations describing these systems and the self-propelled particle model they were derived from. We prove, in particular, that the chaos stems from the generic instability of the band solution of the hydrodynamic equations. Revisiting the status of the strong fluctuations and long-range correlations in the particle model, we show that the giant number fluctuations observed in the chaotic phase are a trivial consequence of density segregation. However, anomalous curvature-driven number fluctuations are present in the homogeneous quasiordered nematic phase and are characterized by a nontrivial scaling exponent.
Adsorption differences between low coverage enantiomers of alanine on the chiral Cu{421}R surface.
Gladys, Michael J; Han, Jeong Woo; Pedersen, Therese S; Tadich, Anton; O'Donnell, Kane M; Thomsen, Lars
2017-05-31
Chiral separation using heterogeneous methods has long been sought after. Chiral metal surfaces have the potential to make it possible to model these systems using small amino acids, the building blocks of proteins. A comparison of submonolayer concentrations of alanine enantiomers adsorbed onto Cu{421}R revealed large geometrical differences between the two molecules compared with the saturated coverage. Large differences were observed in HR-XPS and NEXAFS and were complemented by theoretical DFT calculations. At approximately one third of a monolayer, a comparison of the C1s XPS signal showed a shift in the methyl group of more than 300 meV, indicating that the two enantiomers are in different chemical environments. NEXAFS spectroscopy confirmed the XPS variations and showed large differences in the orientation of the adsorbed molecules. Our DFT results show that the l-enantiomer is energetically the most stable in the {311} microfacet configuration. In contrast to the full monolayer coverage, these lower coverages showed enhanced selectivity.
On the shape of giant soap bubbles.
Cohen, Caroline; Darbois Texier, Baptiste; Reyssat, Etienne; Snoeijer, Jacco H; Quéré, David; Clanet, Christophe
2017-03-07
We study the effect of gravity on giant soap bubbles and show that it becomes dominant above the critical size ℓc = a²/e, where e is the mean thickness of the soap film and a = (γ/ρg)^(1/2) is the capillary length (γ stands for vapor-liquid surface tension, and ρ stands for the liquid density). We first show experimentally that large soap bubbles do not retain a spherical shape but flatten when increasing their size. A theoretical model is then developed to account for this effect, predicting the shape based on mechanical equilibrium. In stark contrast to liquid drops, we show that there is no mechanical limit of the height of giant bubble shapes. In practice, the physicochemical constraints imposed by surfactant molecules limit the access to this large asymptotic domain. However, by an exact analogy, it is shown how the giant bubble shapes can be realized by large inflatable structures.
Could the electroweak scale be linked to the large scale structure of the Universe?
NASA Technical Reports Server (NTRS)
Chakravorty, Alak; Massarotti, Alessandro
1991-01-01
We study a model where domain walls are generated through a cosmological phase transition involving a scalar field. We assume the existence of a coupling between the scalar field and dark matter and show that the interaction between domain walls and dark matter leads to an energy-dependent reflection mechanism. For a simple Yukawa coupling, we find that the vacuum expectation value of the scalar field is θ ≈ 30 GeV-1 TeV, in order for the model to be successful in the formation of large-scale 'pancake' structures.
Asymptotic freedom in certain SO(N) and SU(N) models
NASA Astrophysics Data System (ADS)
Einhorn, Martin B.; Jones, D. R. Timothy
2017-09-01
We calculate the β-functions for SO(N) and SU(N) gauge theories coupled to adjoint and fundamental scalar representations, correcting longstanding previous results. We explore the constraints on N resulting from requiring asymptotic freedom for all couplings. When we take into account the actual allowed behavior of the gauge coupling, the minimum value of N in both cases turns out to be larger than realized in earlier treatments. We also show that in the large-N limit, both models have large regions of parameter space corresponding to total asymptotic freedom.
NASA Astrophysics Data System (ADS)
Revuelto, J.; Dumont, M.; Tuzet, F.; Vionnet, V.; Lafaysse, M.; Lecourt, G.; Vernay, M.; Morin, S.; Cosme, E.; Six, D.; Rabatel, A.
2017-12-01
Nowadays snowpack models show a good capability in simulating the evolution of snow in mountain areas. However, singular deviations of the meteorological forcing and shortcomings in the modelling of snow physical processes, when accumulated over a snow season, can produce large deviations from the real snowpack state. These deviations are usually evaluated with on-site observations from automatic weather stations. Nevertheless, the location of these stations can strongly influence the results of such evaluations, since local topography may have a marked influence on snowpack evolution. Although evaluations of snowpack models with automatic weather stations usually show good results, there is a lack of large-scale evaluations of simulation results over heterogeneous alpine terrain subject to local topographic effects. This work first presents a complete evaluation of the detailed snowpack model Crocus over an extended mountain area, the upper Arve catchment (western European Alps). This catchment has a wide elevation range, with a large area above 2000 m a.s.l. and/or glaciated. The evaluation compares results obtained with distributed and semi-distributed simulations (the latter currently used in operational forecasting). Daily observations of the snow-covered area from the MODIS satellite sensor, the seasonal glacier surface mass balance evolution measured at more than 65 locations, and the glacier annual equilibrium-line altitude from Landsat/SPOT/ASTER satellites have been used for model evaluation. Additionally, the latest advances in producing ensemble snowpack simulations for assimilating satellite reflectance data over extended areas will be presented. These advances comprise the generation of an ensemble of downscaled high-resolution meteorological forcing from meso-scale meteorological models and the application of a particle filter scheme for assimilating satellite observations. Although the results are preliminary, they show good potential for improving snowpack forecasting capabilities.
Large Extremity Peripheral Nerve Repair
2013-10-01
show that the PTB method can provide fixation strengths approaching that of conventional microsurgery and that the PTB repair is unlikely to be...biomaterial during long periods of recovery associated with large nerve deficit reconstruction and long nerve grafts. As with the human amnion nerve...functional recovery model (SFI, sciatic function index) using PTB/xHAM wrap compared to standard (suture) of care microsurgery . Demonstrated improved nerve
NASA Astrophysics Data System (ADS)
Li, Ji; Chen, Yangbo; Wang, Huanyu; Qin, Jianming; Li, Jie; Chiao, Sen
2017-03-01
Long lead time flood forecasting is very important for large watershed flood mitigation as it provides more time for flood warning and emergency responses. The latest numerical weather forecast models can provide 1-15-day quantitative precipitation forecasting products in grid format, and coupling such products with a distributed hydrological model can produce long lead time watershed flood forecasting products. This paper studied the feasibility of coupling the Liuxihe model with the Weather Research and Forecasting quantitative precipitation forecast (WRF QPF) for large watershed flood forecasting in southern China. The WRF QPF products have three lead times, 24, 48 and 72 h, with a grid resolution of 20 km × 20 km. The Liuxihe model is set up with freely downloaded terrain properties; the model parameters were previously optimized with rain-gauge-observed precipitation, and re-optimized with the WRF QPF. Results show that the WRF QPF is biased with respect to the rain gauge precipitation, and a post-processing method is proposed for the WRF QPF products, which improves the flood forecasting capability. With model parameter re-optimization, the model's performance also improves. This suggests that the model parameters should be optimized with the QPF, not the rain gauge precipitation. With increasing lead time, the accuracy of the WRF QPF decreases, as does the flood forecasting capability. Flood forecasting products produced by coupling the Liuxihe model with the WRF QPF provide a good reference for large watershed flood warning owing to their long lead time and rational results.
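The paper's own post-processing scheme for the WRF QPF is not described in enough detail here to reproduce; as a stand-in, this sketch shows the simplest common correction, a multiplicative bias factor fitted per lead time against gauge accumulations. All arrays are synthetic placeholders.

```python
import numpy as np

def fit_bias_factor(qpf, gauge, eps=1e-6):
    """Ratio of observed to forecast mean accumulation."""
    return gauge.sum() / max(qpf.sum(), eps)

rng = np.random.default_rng(4)
gauge = rng.gamma(2.0, 5.0, size=240)            # hourly gauge rainfall, mm
for lead in (24, 48, 72):
    # fake a forecast that systematically underestimates the gauge
    qpf = gauge * rng.uniform(0.6, 0.9) + rng.normal(0.0, 1.0, 240).clip(0)
    k = fit_bias_factor(qpf, gauge)
    print(f"lead {lead} h: bias factor = {k:.2f}, "
          f"corrected mean = {(k * qpf).mean():.2f} mm")
```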
NASA Astrophysics Data System (ADS)
Li, Xiaowen; Janiga, Matthew A.; Wang, Shuguang; Tao, Wei-Kuo; Rowe, Angela; Xu, Weixin; Liu, Chuntao; Matsui, Toshihisa; Zhang, Chidong
2018-04-01
The evolution of precipitation structures is simulated and compared with radar observations for the November Madden-Julian Oscillation (MJO) event during the DYNAmics of the MJO (DYNAMO) field campaign. Three ground-based, ship-borne, and spaceborne precipitation radars and three cloud-resolving models (CRMs) driven by observed large-scale forcing are used to study precipitation structures at different locations over the central equatorial Indian Ocean. Convective strength is represented by 0-dBZ echo-top heights, and convective organization by contiguous 17-dBZ areas. The multi-radar and multi-model framework allows for more stringent model validations. The emphasis is on testing the models' ability to simulate subtle differences observed at different radar sites when the MJO event passed through. The results show that CRMs forced by site-specific large-scale forcing can reproduce not only common features in cloud populations but also subtle variations observed by different radars. The comparisons also reveal common deficiencies in the CRM simulations, which underestimate radar echo-top heights for the strongest convection within large, organized precipitation features. Cross validations with multiple radars and models also enable quantitative comparisons in CRM sensitivity studies using different large-scale forcing, microphysical schemes and parameters, resolutions, and domain sizes. In terms of radar echo-top height temporal variations, many model sensitivity tests have better correlations than radar/model comparisons, indicating robustness in model performance on this aspect. It is further shown that well-validated model simulations can be used to constrain uncertainties in observed echo-top heights when the low-resolution surveillance scanning strategy is used.
The good, the bad and the ugly of marine reserves for fishery yields
De Leo, Giulio A.; Micheli, Fiorenza
2015-01-01
Marine reserves (MRs) are used worldwide as a means of conserving biodiversity and protecting depleted populations. Despite major investments in MRs, their environmental and social benefits have proven difficult to demonstrate and are still debated. Clear expectations of the possible outcomes of MR establishment are needed to guide and strengthen empirical assessments. Previous models show that reserve establishment in overcapitalized, quota-based fisheries can reduce both catch and population abundance, thereby negating fisheries and even conservation benefits. By using a stage-structured, spatially explicit stochastic model, we show that catches under quota-based fisheries that include a network of MRs can exceed maximum sustainable yield (MSY) under conventional quota management if reserves provide protection to old, large spawners that disproportionally contribute to recruitment outside the reserves. Modelling results predict that the net fishery benefit of MRs is lost when gains in fecundity of old, large individuals are small, is highest in the case of sedentary adults with high larval dispersal, and decreases with adult mobility. We also show that environmental variability may mask fishery benefits of reserve implementation and that MRs may buffer against collapse when sustainable catch quotas are exceeded owing to stock overestimation or systematic overfishing. PMID:26460129
Spatiotemporal property and predictability of large-scale human mobility
NASA Astrophysics Data System (ADS)
Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin
2018-04-01
Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied owing to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high possibility of prediction. Furthermore, we propose a scale-free mobility model with two essential ingredients, preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter; it outperforms existing human mobility models in scenarios of large geographical scales.
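A minimal sketch of the exploration / preferential-return mechanism named above: with probability ρS^(-γ) the walker visits a brand-new location, otherwise it returns to a known location in proportion to past visits, and the exploration tendency ρ is drawn from a Gaussian across the population. Parameter values are assumed, not the paper's fits.

```python
import numpy as np

def simulate(steps, rho, gamma=0.6, seed=0):
    rng = np.random.default_rng(seed)
    visits = {0: 1}                        # location id -> visit count
    next_id = 1
    for _ in range(steps):
        S = len(visits)                    # distinct locations so far
        if rng.random() < rho * S ** (-gamma):
            visits[next_id] = 1            # exploration: brand-new location
            next_id += 1
        else:                              # preferential return
            locs = list(visits)
            w = np.array([visits[l] for l in locs], dtype=float)
            visits[locs[rng.choice(len(locs), p=w / w.sum())]] += 1
    return len(visits)                     # number of distinct locations visited

pop_rng = np.random.default_rng(1)
rhos = pop_rng.normal(0.6, 0.1, size=10).clip(0.05, 1.0)  # Gaussian tendency
print([simulate(2000, rho, seed=i) for i, rho in enumerate(rhos)])
```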
Systematic observations of the slip pulse properties of large earthquake ruptures
Melgar, Diego; Hayes, Gavin
2017-01-01
In earthquake dynamics there are two end-member models of rupture: propagating cracks and self-healing pulses. These arise from different properties of faults and have implications for seismic hazard; rupture mode controls near-field strong ground motions. Past studies favor the pulse-like mode of rupture; however, due to a variety of limitations, it has proven difficult to systematically establish their kinematic properties. Here we synthesize observations from a database of >150 rupture models of earthquakes spanning M7-M9 processed in a uniform manner and show that the magnitude scaling properties of these slip pulses indicate self-similarity. Further, we find that large and very large events are statistically distinguishable relatively early (at ~15 s) in the rupture process. This suggests that, with dense regional geophysical networks, strong ground motions from a large rupture can be identified before their onset across the source region.
NASA Astrophysics Data System (ADS)
Pal, Rahul; Yang, Jinping; Qiu, Suimin; McCammon, Susan; Resto, Vicente; Vargas, Gracie
2016-03-01
Volumetric multiphoton autofluorescence microscopy (MPAM) and second harmonic generation microscopy (SHGM) show promise for revealing indicators of neoplasia in the complex microstructural organization of mucosa, potentially providing high specificity for detection of neoplasia, but are limited by their small imaging area. Large-area fluorescence methods, on the other hand, show the high sensitivity appropriate for screening but are hampered by low specificity. In this study, we apply MPAM-SHGM following guidance from large-area fluorescence, by either autofluorescence or a targeted metabolic fluorophore, as a potentially clinically viable approach for detection of oral neoplasia. Sites of high neoplastic potential were identified by large-area red/green autofluorescence or by a fluorescently labelled deoxy-glucose analog, 2-deoxy-2-[(7-nitro-2,1,3-benzoxadiazol-4-yl)amino]-D-glucose (2-NBDG), to highlight areas of high glucose uptake across the buccal pouch of a hamster model of OSCC. Follow-up MPAM-SHGM was conducted on regions of interest (ROIs) to assess whether microscopy would reveal microscopic features associated with neoplasia to confirm or exclude the large-area fluorescence findings. Parameters for analysis included cytologic metrics, 3D epithelial-connective tissue interface metrics (MPAM-SHGM) and intensity of fluorescence (widefield). Imaged sites were biopsied, processed for histology and graded by a pathologist. A small sample of human ex vivo tissues was also imaged. A generalized linear model combining image metrics from large-area fluorescence and volumetric MPAM-SHGM indicated the ability to delineate normal and inflammation from neoplasia.
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
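A minimal sketch of why the brute-force route fails, which is the problem the trajectory-based samplers above address: estimate the scaled cumulant generating function ψ(λ) = t⁻¹ ln⟨exp(λtA)⟩ of a biased random walker by direct sampling, and watch the effective sample size collapse as λ grows. The walker and parameters are illustrative; none of the paper's importance-sampling machinery is implemented here.

```python
import numpy as np

rng = np.random.default_rng(6)
n_traj, t_steps, bias = 20000, 200, 0.1
# +/-1 steps with a small bias; A = time-averaged displacement
steps = np.where(rng.random((n_traj, t_steps)) < 0.5 + bias, 1, -1)
A = steps.sum(axis=1) / t_steps

for lam in (0.1, 0.5, 1.0, 2.0):
    w = np.exp(lam * t_steps * A)                 # exponential reweighting
    psi = np.log(w.mean()) / t_steps              # SCGF estimate
    ess = w.sum() ** 2 / (w ** 2).sum()           # effective sample size
    print(f"lam={lam:3.1f}  psi={psi:+.4f}  ESS={ess:8.1f} / {n_traj}")
```

The estimate is dominated by exponentially rare trajectories, so the effective sample size shrinks rapidly with λ; guided trajectory ensembles are the standard remedy, as the abstract notes.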
Tachyon inflation in the large-N formalism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel
2015-11-01
We study tachyon inflation within the large-N formalism, which takes a prescription for the small Hubble flow slow-roll parameter ε{sub 1} as a function of the large number of e-folds N. This leads to a classification of models through their behaviour at large N. In addition to the perturbative N class, we introduce the polynomial and exponential classes for the ε{sub 1} parameter. With this formalism we reconstruct a large number of potentials used previously in the literature for tachyon inflation. We also obtain new families of potentials from the polynomial class. We characterize the realizations of tachyon inflation by computing the usual cosmological observables up to second order in the Hubble flow slow-roll parameters. This allows us to look at observable differences between tachyon and canonical single field inflation. The analysis of observables in light of the Planck 2015 data shows the viability of some of these models, mostly for certain realizations of the polynomial and exponential classes.
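As an illustration of what a large-N prescription buys, the perturbative class can be written with the standard first-order relations of canonical single-field inflation (a textbook reminder, not the paper's tachyonic second-order results, which is where the two theories differ):

```latex
\epsilon_1(N) \;=\; \frac{\beta}{N^{\alpha}}
\quad\Longrightarrow\quad
n_s \;\simeq\; 1 - \frac{2\beta}{N^{\alpha}} - \frac{\alpha}{N},
\qquad
r \;\simeq\; \frac{16\,\beta}{N^{\alpha}},
```

evaluated at N ≈ 50-60 e-folds before the end of inflation, so a single choice of (α, β) fixes the spectral index and tensor-to-scalar ratio at once.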
Importance sampling large deviations in nonequilibrium steady states. I
NASA Astrophysics Data System (ADS)
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T.
2018-03-01
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
NASA Astrophysics Data System (ADS)
Vlemmix, T.; Eskes, H. J.; Piters, A. J. M.; Schaap, M.; Sauter, F. J.; Kelder, H.; Levelt, P. F.
2015-02-01
A 14-month data set of MAX-DOAS (Multi-Axis Differential Optical Absorption Spectroscopy) tropospheric NO2 column observations in De Bilt, the Netherlands, has been compared with the regional air quality model Lotos-Euros. The model was run on a 7×7 km2 grid, the same resolution as the emission inventory used. A study was performed to assess the effect of clouds on the retrieval accuracy of the MAX-DOAS observations. Good agreement was found between modeled and measured tropospheric NO2 columns, with an average difference of less than 1% of the average tropospheric column (14.5 · 1015 molec cm-2). The comparisons show little cloud cover dependence after cloud corrections for which ceilometer data were used. Hourly differences between observations and model show a Gaussian behavior with a standard deviation (σ) of 5.5 · 1015 molec cm-2. For daily averages of tropospheric NO2 columns, a correlation of 0.72 was found for all observations, and 0.79 for cloud free conditions. The measured and modeled tropospheric NO2 columns have an almost identical distribution over the wind direction. A significant difference between model and measurements was found for the average weekly cycle, which shows a much stronger decrease during the weekend for the observations; for the diurnal cycle, the observed range is about twice as large as the modeled range. The results of the comparison demonstrate that averaged over a long time period, the tropospheric NO2 column observations are representative for a large spatial area despite the fact that they were obtained in an urban region. This makes the MAX-DOAS technique especially suitable for validation of satellite observations and air quality models in urban regions.
Assessment of Arctic and Antarctic Sea Ice Predictability in CMIP5 Decadal Hindcasts
NASA Technical Reports Server (NTRS)
Yang, Chao-Yuan; Liu, Jiping; Hu, Yongyun; Horton, Radley M.; Chen, Liqi; Cheng, Xiao
2016-01-01
This paper examines the ability of coupled global climate models to predict decadal variability of Arctic and Antarctic sea ice. We analyze decadal hindcasts/predictions of 11 Coupled Model Intercomparison Project Phase 5 (CMIP5) models. Decadal hindcasts exhibit a large multimodel spread in the simulated sea ice extent, with some models deviating significantly from the observations as the predicted ice extent quickly drifts away from the initial constraint. The anomaly correlation analysis between the decadal hindcast and observed sea ice suggests that in the Arctic, for most models, the areas showing significant predictive skill become broader associated with increasing lead times. This area expansion is largely because nearly all the models are capable of predicting the observed decreasing Arctic sea ice cover. Sea ice extent in the North Pacific has better predictive skill than that in the North Atlantic (particularly at a lead time of 3-7 years), but there is a reemerging predictive skill in the North Atlantic at a lead time of 6-8 years. In contrast to the Arctic, Antarctic sea ice decadal hindcasts do not show broad predictive skill at any timescales, and there is no obvious improvement linking the areal extent of significant predictive skill to lead time increase. This might be because nearly all the models predict a retreating Antarctic sea ice cover, opposite to the observations. For the Arctic, the predictive skill of the multi-model ensemble mean outperforms most models and the persistence prediction at longer timescales, which is not the case for the Antarctic. Overall, for the Arctic, initialized decadal hindcasts show improved predictive skill compared to uninitialized simulations, although this improvement is not present in the Antarctic.
Highly Variable Cycle Exhaust Model Test (HVC10)
NASA Technical Reports Server (NTRS)
Henderson, Brenda; Wernet, Mark; Podboy, Gary; Bozak, Rick
2010-01-01
Results from acoustic and flow-field studies using the Highly Variable Cycle Exhaust (HVC) model were presented. The model consisted of a lobed mixer on the core stream, an elliptic nozzle on the fan stream, and an ejector. For baseline comparisons, the fan nozzle was replaced with a round nozzle and the ejector doors were removed from the model. Acoustic studies showed far-field noise levels were higher for the HVC model with the ejector than for the baseline configuration. Results from Particle Image Velocimetry (PIV) studies indicated that large flow separation regions occurred along the ejector doors, thus restricting flow through the ejector. Phased array measurements showed noise sources located near the ejector doors for operating conditions where tones were present in the acoustic spectra.
NASA Astrophysics Data System (ADS)
Uno, Itsushi; Satake, Shinsuke; Carmichael, Gregory R.; Tang, Youhua; Wang, Zifa; Takemura, Toshihiko; Sugimoto, Nobuo; Shimizu, Atsushi; Murayama, Toshiyuki; Cahill, Thomas A.; Cliff, Steven; Uematsu, Mitsuo; Ohta, Sachio; Quinn, Patricia K.; Bates, Timothy S.
2004-10-01
The regional-scale aerosol transport model Chemical Weather Forecasting System (CFORS) is used for analysis of large-scale dust phenomena during the Asian Pacific Regional Characterization Experiment (ACE-Asia) intensive observation period. Dust modeling results are examined against surface weather reports, a satellite-derived dust index (Total Ozone Mapping Spectrometer (TOMS) Aerosol Index (AI)), Mie-scattering lidar observations, and surface aerosol observations. The CFORS dust results are shown to accurately reproduce many of the important observed features. Model analysis shows that the simulated dust vertical loading correlates well with TOMS AI and that the dust loading is transported with the meandering of the synoptic-scale temperature field at the 500-hPa level. Quantitative examination of aerosol optical depth shows that model predictions are within 20% of the lidar observations for the major dust episodes. The structure of the ACE-Asia Perfect Dust Storm, which occurred in early April, is clarified with the help of the CFORS model analysis. This storm consisted of two boundary layer components and one elevated dust (>6-km height) feature (resulting from the movement of two large low-pressure systems). The time variation of the CFORS dust fields shows the correct onset timing of the elevated dust for each observation site, but the model tends to overpredict dust concentrations at lower-latitude sites. The horizontal transport flux at 130°E longitude is examined, and the overall dust transport flux at 130°E during March-April is evaluated to be 55 Tg.
The dynamics of Machiavellian intelligence.
Gavrilets, Sergey; Vose, Aaron
2006-11-07
The "Machiavellian intelligence" hypothesis (or the "social brain" hypothesis) posits that large brains and distinctive cognitive abilities of humans have evolved via intense social competition in which social competitors developed increasingly sophisticated "Machiavellian" strategies as a means to achieve higher social and reproductive success. Here we build a mathematical model aiming to explore this hypothesis. In the model, genes control brains which invent and learn strategies (memes) which are used by males to gain advantage in competition for mates. We show that the dynamics of intelligence has three distinct phases. During the dormant phase only newly invented memes are present in the population. During the cognitive explosion phase the population's meme count and the learning ability, cerebral capacity (controlling the number of different memes that the brain can learn and use), and Machiavellian fitness of individuals increase in a runaway fashion. During the saturation phase natural selection resulting from the costs of having large brains checks further increases in cognitive abilities. Overall, our results suggest that the mechanisms underlying the "Machiavellian intelligence" hypothesis can indeed result in the evolution of significant cognitive abilities on the time scale of 10 to 20 thousand generations. We show that cerebral capacity evolves faster and to a larger degree than learning ability. Our model suggests that there may be a tendency toward a reduction in cognitive abilities (driven by the costs of having a large brain) as the reproductive advantage of having a large brain decreases and the exposure to memes increases in modern societies.
The dynamics of Machiavellian intelligence
Gavrilets, Sergey; Vose, Aaron
2006-01-01
The “Machiavellian intelligence” hypothesis (or the “social brain” hypothesis) posits that large brains and distinctive cognitive abilities of humans have evolved via intense social competition in which social competitors developed increasingly sophisticated “Machiavellian” strategies as a means to achieve higher social and reproductive success. Here we build a mathematical model aiming to explore this hypothesis. In the model, genes control brains which invent and learn strategies (memes) which are used by males to gain advantage in competition for mates. We show that the dynamics of intelligence has three distinct phases. During the dormant phase only newly invented memes are present in the population. During the cognitive explosion phase the population's meme count and the learning ability, cerebral capacity (controlling the number of different memes that the brain can learn and use), and Machiavellian fitness of individuals increase in a runaway fashion. During the saturation phase natural selection resulting from the costs of having large brains checks further increases in cognitive abilities. Overall, our results suggest that the mechanisms underlying the “Machiavellian intelligence” hypothesis can indeed result in the evolution of significant cognitive abilities on the time scale of 10 to 20 thousand generations. We show that cerebral capacity evolves faster and to a larger degree than learning ability. Our model suggests that there may be a tendency toward a reduction in cognitive abilities (driven by the costs of having a large brain) as the reproductive advantage of having a large brain decreases and the exposure to memes increases in modern societies. PMID:17075072
Speckle in the diffraction patterns of Hendricks-Teller and icosahedral glass models
NASA Technical Reports Server (NTRS)
Garg, Anupam; Levine, Dov
1988-01-01
It is shown that the X-ray diffraction patterns from the Hendricks-Teller model for layered systems and the icosahedral glass models for the icosahedral phases show large fluctuations between nearby scattering wave vectors and from sample to sample, that are quite analogous to laser speckle. The statistics of these fluctuations are studied analytically for the first model and via computer simulations for the second. The observability of these effects is discussed briefly.
A supersymmetric SYK-like tensor model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Cheng; Spradlin, Marcus; Volovich, Anastasia
2017-05-11
We consider a supersymmetric SYK-like model without quenched disorder that is built by coupling two kinds of fermionic N = 1 tensor-valued superfields, "quarks" and "mesons". We prove that the model has a well-defined large-N limit in which the (s)quark 2-point functions are dominated by mesonic "melon" diagrams. We sum these diagrams to obtain the Schwinger-Dyson equations and show that in the IR, the solution agrees with that of the supersymmetric SYK model.
Estimation of the linear mixed integrated Ornstein–Uhlenbeck model
Hughes, Rachael A.; Kenward, Michael G.; Sterne, Jonathan A. C.; Tilling, Kate
2017-01-01
The linear mixed model with an added integrated Ornstein–Uhlenbeck (IOU) process (linear mixed IOU model) allows for serial correlation and estimation of the degree of derivative tracking. It is rarely used, partly due to the lack of available software. We implemented the linear mixed IOU model in Stata and, using simulations, we assessed the feasibility of fitting the model by restricted maximum likelihood when applied to balanced and unbalanced data. We compared different (1) optimization algorithms, (2) parameterizations of the IOU process, (3) data structures and (4) random-effects structures. Fitting the model was practical and feasible when applied to large and moderately sized balanced datasets (20,000 and 500 observations), and to large unbalanced datasets with (non-informative) dropout and intermittent missingness. Analysis of a real dataset showed that the linear mixed IOU model was a better fit to the data than the standard linear mixed model (i.e. independent within-subject errors with constant variance). PMID:28515536
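A minimal sketch of the IOU ingredient itself (not the Stata implementation): simulate the OU "velocity" by Euler-Maruyama and integrate it. In the usual parameterization, α controls the degree of derivative tracking: large α (with σ²/α² held fixed) approaches Brownian motion, i.e. no derivative tracking, while small α gives smooth, strongly tracked trajectories. Values here are illustrative.

```python
import numpy as np

def simulate_iou(alpha, sigma, T=10.0, n=1000, seed=7):
    rng = np.random.default_rng(seed)
    dt = T / n
    u = np.zeros(n)                                  # OU "velocity" process
    for i in range(1, n):
        u[i] = u[i - 1] - alpha * u[i - 1] * dt \
               + sigma * np.sqrt(dt) * rng.standard_normal()
    return np.cumsum(u) * dt                         # IOU = integrated OU path

for alpha in (0.1, 1.0, 10.0):
    w = simulate_iou(alpha, sigma=alpha)             # hold sigma/alpha fixed
    print(f"alpha={alpha:5.1f}  sd of increments = {np.diff(w).std():.5f}")
```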
Master-slave system with force feedback based on dynamics of virtual model
NASA Technical Reports Server (NTRS)
Nojima, Shuji; Hashimoto, Hideki
1994-01-01
A master-slave system can extend the manipulating and sensing capabilities of a human operator to a remote environment. But the master-slave system has two serious problems: one is the mechanically large impedance of the system; the other is the mechanical complexity of the slave required for complex remote tasks. These two problems reduce the efficiency of the system. If the slave has local intelligence, it can help the human operator by exploiting its strengths, such as fast calculation and large memory. The authors suggest that the slave be a dextrous hand with many degrees of freedom able to manipulate an object of known shape. It is further suggested that the dimensions of the remote work space be shared by the human operator and the slave. The effect of the large impedance of the system can be reduced in a virtual model, a physical model constructed in a computer with physical parameters as if it were in the real world. A method to determine the damping parameter dynamically for the virtual model is proposed. Experimental results show that this virtual model is better than a virtual model with fixed damping.
NASA Astrophysics Data System (ADS)
Pan, Wen-hao; Liu, Shi-he; Huang, Li
2018-02-01
This study developed a three-layer velocity model for turbulent flow over large-scale roughness. Through theoretical analysis, this model couples both surface and subsurface flow. Flume experiments with a flat cobble bed were conducted to examine the theoretical model. Results show that both the turbulent flow field and the total flow characteristics are quite different from those in low-gradient flow over microscale roughness. The velocity profile in a shallow stream converges to the logarithmic law away from the bed, while inflecting over the roughness layer toward the non-zero subsurface flow. The velocity fluctuations close to a cobble bed differ from those over a sand bed, showing no comparably large peak velocity. The total flow energy loss deviates significantly from the 1/7 power-law equation when the relative flow depth is shallow. Both the coupled model and the experiments indicate a non-negligible subsurface flow that accounts for a considerable proportion of the total flow. By including the subsurface flow, the coupled model is able to predict a wider range of velocity profiles and total flow energy loss coefficients compared with existing equations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, G. H.; Pesaran, A.; Spotnitz, R.
To understand further the thermal abuse behavior of large format Li-ion batteries for automotive applications, the one-dimensional modeling approach formulated by Hatchard et al. was reproduced. Then it was extended to three dimensions so we could consider the geometrical features, which are critical in large cells for automotive applications. The three-dimensional model captures the shapes and dimensions of cell components and the spatial distributions of materials and temperatures, and is used to simulate oven tests, and to determine how a local hot spot can propagate through the cell. In simulations of oven abuse testing of cells with cobalt oxide cathode and graphite anode with standard LiPF6 electrolyte, the three-dimensional model predicts that thermal runaway will occur sooner or later than the lumped model, depending on the size of the cell. The model results showed that smaller cells reject heat faster than larger cells; this may prevent them from going into thermal runaway under identical abuse conditions. In simulations of local hot spots inside a large cylindrical cell, the three-dimensional model predicts that the reactions initially propagate in the azimuthal and longitudinal directions to form a hollow cylinder-shaped reaction zone.
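A lumped (0-D) caricature of the oven-test energy balance behind the cell-size effect reported above: Arrhenius self-heating competes with surface cooling, and the surface-to-volume ratio decides which wins. Kinetic and thermal parameters are invented for illustration, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

A_k, Ea, R = 1e11, 1.35e5, 8.314     # assumed Arrhenius kinetics (1/s, J/mol)
rho_cp, q_vol = 2.0e6, 2.0e9         # J/(m^3 K), J/m^3 reaction enthalpy
h, T_oven = 10.0, 428.0              # W/(m^2 K), ~155 C oven

def rhs(t, y, surf_per_vol):
    T, c = y                         # temperature, unreacted fraction
    rate = A_k * c * np.exp(-Ea / (R * T))
    dT = (q_vol * rate - h * surf_per_vol * (T - T_oven)) / rho_cp
    return [dT, -rate]

for radius in (0.009, 0.02):         # 18650-ish vs a large-format cylinder
    s_per_v = 2.0 / radius           # lateral area / volume, end caps ignored
    sol = solve_ivp(rhs, (0.0, 5 * 3600.0), [T_oven, 1.0],
                    args=(s_per_v,), method="LSODA", max_step=5.0)
    print(f"r = {100 * radius:.1f} cm: T_max = {sol.y[0].max():.0f} K")
```

With these assumed numbers the small cell settles a few degrees above oven temperature while the large one runs away, mirroring the qualitative size dependence the 3-D model predicts.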
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchfield, M. J.; Moriarty, P. J.; Hao, Y.
The focus of this work is the comparison of the dynamic wake meandering model and large-eddy simulation with field data from the Egmond aan Zee offshore wind plant composed of 36 3-MW turbines. The field data include meteorological mast measurements, SCADA information from all turbines, and strain-gauge data from two turbines. The dynamic wake meandering model and large-eddy simulation are means of computing unsteady wind plant aerodynamics, including the important unsteady meandering of wakes as they convect downstream and interact with other turbines and wakes. Both of these models are coupled to a turbine model such that power and mechanical loads of each turbine in the wind plant are computed. We are interested in how accurately different types of waking (e.g., direct versus partial waking) can be modeled, and how background turbulence level affects these loads. We show that both the dynamic wake meandering model and large-eddy simulation appear to underpredict power and overpredict fatigue loads because of wake effects, but it is unclear that they are really in error. This discrepancy may be caused by wind-direction uncertainty in the field data, which tends to make wake effects appear less pronounced.
Modeling Study of the Low-Temperature Oxidation of Large Methyl Esters from C11 to C19
Herbinet, Olivier; Biet, Joffrey; Hakka, Mohammed Hichem; Warth, Valérie; Glaude, Pierre Alexandre; Nicolle, André; Battin-Leclerc, Frédérique
2013-01-01
The modeling of the low-temperature oxidation of large saturated methyl esters representative of those found in biodiesel fuels has been investigated. Models have been developed for these species, and detailed kinetic mechanisms have been automatically generated using a new extended version of the EXGAS software, which includes reactions specific to the chemistry of esters. A model generated for a binary mixture of n-decane and methyl palmitate was used to simulate experimental results obtained in a jet-stirred reactor for this fuel. This model predicts very well the reactivity of the fuel and the mole fraction profiles of most reaction products. This work also shows that a model for a middle-size methyl ester such as methyl decanoate predicts fairly well the reactivity and the mole fractions of most species, with a substantial decrease in computational time. Large n-alkanes such as n-hexadecane are also good surrogates for reproducing the reactivity of methyl esters, with an important gain in computational time, but they cannot account for the formation of specific products such as unsaturated esters or cyclic ethers with an ester function. PMID:23814504
Dong, Jian; Jin, Yanli; Dong, He; Liu, Jiawei; Ye, Senbin
2018-06-26
The profile, apparent contact angle (ACA), contact angle hysteresis (CAH), and wetting state transition energy barrier (WSTEB) are important static and dynamic properties of a large-volume droplet on a hierarchical surface. Understanding them can provide important insights into functional surfaces and promote applications in the corresponding areas. In this paper, we establish three theoretical models (models 1-3) and the corresponding numerical methods, obtained by free energy minimization and a nonlinear optimization algorithm, to predict the profile, ACA, CAH, and WSTEB of a large-volume droplet on a horizontal regular dual-rough surface. By taking into account gravity, the energy barrier on the contact circle, and the dual heterogeneous structures and their roughness on the surface, the models are more universal and accurate than previous models. The predictions of the models were in good agreement with results from experiment and the literature. The models are promising as novel design approaches for functional surfaces, which are frequently applied in microfluidic chips, water self-catchment systems, and dropwise condensation heat transfer systems.
ERIC Educational Resources Information Center
Herridge, Bart; Heil, Robert
2003-01-01
Predictive modeling has been a popular topic in higher education for the last few years. This case study shows an example of an effective use of modeling combined with market segmentation to strategically divide large, unmanageable prospect and inquiry pools and convert them into applicants, and eventually, enrolled students. (Contains 6 tables.)
NASA Technical Reports Server (NTRS)
McGhee, D. S.
2004-01-01
Launch vehicles consume large quantities of propellant quickly, causing the mass properties and structural dynamics of the vehicle to change dramatically. Currently, structural load assessments account for this change with a large collection of structural models representing various propellant fill levels. This creates a large database of models, complicating the delivery of reduced models and requiring extensive work for model changes. Presented here is a method to account for these mass changes in a more efficient manner. The method allows for the subtraction of propellant mass as the propellant is used in the simulation. This subtraction is done in the modal domain of the vehicle's generalized model. The additional computation required is primarily the construction of the used-propellant mass matrix from an initial propellant model, plus further matrix multiplications and subtractions. An additional eigenvalue solution is required to uncouple the new equations of motion; however, this is a much simpler calculation, starting from a system that is already substantially uncoupled. The method was successfully tested in a simulation of Saturn V loads. Results from the method are compared to results from separate structural models for several propellant levels, showing excellent agreement. Further development to encompass more complicated propellant models, including slosh dynamics, is possible.
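A minimal sketch of the modal-domain mass subtraction described above, on a toy 6-DOF system: project the used-propellant mass matrix onto the mass-normalized mode shapes, subtract it from the identity generalized mass, and re-solve a small eigenproblem to uncouple the updated equations. When all modes are retained this reproduces the direct solution exactly, which the last line checks; the data and DOF layout are placeholders, not a launch-vehicle model.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)
n = 6
M = np.diag(rng.uniform(1.0, 3.0, n))               # physical mass matrix
K = np.diag(rng.uniform(5.0, 9.0, n))               # stiffness: SPD chain
K += np.diag([-1.0] * (n - 1), 1) + np.diag([-1.0] * (n - 1), -1)

lam, Phi = eigh(K, M)                               # mass-normalized modes: Phi^T M Phi = I
dM = np.zeros((n, n))
dM[0, 0] = 0.5 * M[0, 0]                            # propellant burned at DOF 0

# modal-domain update: generalized mass loses Phi^T dM Phi; stiffness unchanged
M_gen = np.eye(n) - Phi.T @ dM @ Phi
lam_new, Q = eigh(np.diag(lam), M_gen)              # small re-eigensolution
Phi_new = Phi @ Q                                   # updated physical mode shapes

lam_ref, _ = eigh(K, M - dM)                        # direct reduced-mass solve
print(np.allclose(lam_new, lam_ref))                # True: exact when all modes kept
```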
Contrasting Models of Posttraumatic Stress Disorder: Reply to Monroe and Mineka (2008)
Berntsen, Dorthe; Rubin, David C.; Johansen, Malene Klindt
2009-01-01
We address the four main points in the Comment by Monroe and Mineka (2008). First, we show that the DSM PTSD diagnosis includes an etiology and that it is based on a theoretical model with a distinguished history in psychology and psychiatry. Two tenets of this theoretical model are that voluntary (strategic) recollections of the trauma are fragmented and incomplete, while involuntary (spontaneous) recollections are vivid and persistent and yield privileged access to traumatic material. Second, we describe differences between our model and other cognitive models of PTSD. We argue that these other models share the same two tenets as the diagnosis, and we show that these two tenets are largely unsupported by empirical evidence. Third, we counter arguments about the strength of the evidence favoring the mnemonic model, and fourth, we show that concerns about the causal role of memory in PTSD are based on views of causality that are generally inappropriate for the explanation of PTSD in the social and biological sciences. PMID:20808720
On synthetic gravitational waves from multi-field inflation
NASA Astrophysics Data System (ADS)
Ozsoy, Ogan
2018-04-01
We revisit the possibility of producing observable tensor modes through a continuous particle production process during inflation. In particular, we focus on the multi-field realization of inflation where a spectator pseudoscalar σ induces a significant amplification of U(1) gauge fields through the coupling ∝ σFμνF̃μν. In this model, both the scalar σ and the Abelian gauge fields are gravitationally coupled to the inflaton sector, therefore they can only affect the primordial scalar and tensor fluctuations through their mixing with gravitational fluctuations. Recent studies of this scenario show that the sourced contributions to the scalar correlators can be dangerously large, invalidating a large tensor power spectrum generated through the particle production mechanism. In this paper, we re-examine these recent claims by explicitly calculating the dominant contribution to the scalar power spectrum and bispectrum. In particular, we show that once the current limits from CMB data are taken into account, it is still possible to generate a signal as large as r ≈ 10⁻³, and the limitations on model building are more relaxed than previously considered.
Training and Scoring Issues Involved in Large-Scale Writing Assessments.
ERIC Educational Resources Information Center
Moon, Tonya R.; Hughes, Kevin R.
2002-01-01
Examined a scoring anomaly that became apparent in a state-mandated writing assessment. Results for 3,660 essays by sixth graders show that using a spiral model for training raters and scoring papers results in higher mean ratings than does using a sequential model for training and scoring. Findings demonstrate the importance of making decisions…
Jeffrey P. Prestemon
2009-01-01
Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
The western Pacific monsoon in CMIP5 models: Model evaluation and projections
NASA Astrophysics Data System (ADS)
Brown, Josephine R.; Colman, Robert A.; Moise, Aurel F.; Smith, Ian N.
2013-11-01
The ability of 35 models from the Coupled Model Intercomparison Project Phase 5 (CMIP5) to simulate the western Pacific (WP) monsoon is evaluated over four representative regions around Timor, New Guinea, the Solomon Islands and Palau. Coupled model simulations are compared with atmosphere-only model simulations (with observed sea surface temperatures, SSTs) to determine the impact of SST biases on model performance. Overall, the CMIP5 models simulate the WP monsoon better than previous-generation Coupled Model Intercomparison Project Phase 3 (CMIP3) models, but some systematic biases remain. The atmosphere-only models are better able to simulate the seasonal cycle of zonal winds than the coupled models, but display comparable biases in the rainfall. The CMIP5 models are able to capture features of interannual variability in response to the El Niño-Southern Oscillation. In climate projections under the RCP8.5 scenario, monsoon rainfall is increased over most of the WP monsoon domain, while wind changes are small. Widespread rainfall increases at low latitudes in the summer hemisphere appear robust, as a large majority of models agree on the sign of the change. There is less agreement on rainfall changes in winter. Interannual variability of monsoon wet season rainfall is increased in a warmer climate, particularly over Palau, Timor and the Solomon Islands. A subset of the models showing greatest skill in the current climate confirms the overall projections, although showing markedly smaller rainfall increases in the western equatorial Pacific. The changes found here may have large impacts on Pacific island countries influenced by the WP monsoon.
Effect of increasing disorder on domains of the 2d Coulomb glass.
Bhandari, Preeti; Malik, Vikas
2017-12-06
We have studied a two-dimensional lattice model of the Coulomb glass over a wide range of disorders at [Formula: see text]. The system was first annealed using Monte Carlo simulation. Further minimization of the total energy of the system was done using an algorithm developed by Baranovskii et al, followed by cluster flipping, to obtain the pseudo-ground states. We have shown that the energy required to create a domain of linear size L in d dimensions is proportional to [Formula: see text]. Using the Imry-Ma arguments given for the random field Ising model, one gets critical dimension [Formula: see text] for the Coulomb glass. The investigation of domains in the transition region shows a discontinuity in the staggered magnetization, which is an indication of a first-order-type transition from the charge-ordered phase to the disordered phase. The structure and nature of the random field fluctuations of the second-largest domain in the Coulomb glass are inconsistent with the assumptions of Imry and Ma, as was also reported for the random field Ising model. The study of domains showed that in the transition region there were mostly two large domains, and that as disorder was increased the two large domains remained, but a large number of small domains also opened up. We have also studied the properties of the second-largest domain as a function of disorder. We furthermore analysed the effect of disorder on the density of states, and showed a transition from a hard gap at low disorder to a soft gap at higher disorder. At [Formula: see text], we have analysed the soft gap in detail, and found that the density of states deviates slightly ([Formula: see text]) from the linear behaviour in two dimensions. Analysis of the local minima shows that the pseudo-ground states have similar structure.
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2017-08-05
Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors in these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. Gridded large-scale forcing data from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site are used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem is largely reduced by using the gridded forcing data, which allows SCAM5 to be run in each subcolumn and the results to be averaged within the domain, because the subcolumns have a better chance of capturing the timing of the frontal propagation and the small-scale systems. Other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.
Imprint of thawing scalar fields on the large scale galaxy overdensity
NASA Astrophysics Data System (ADS)
Dinda, Bikash R.; Sen, Anjan A.
2018-04-01
We investigate the observed galaxy power spectrum for the thawing class of scalar field models, taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power relative to ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful for distinguishing scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar fields using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in the observed galaxy power spectra on large scales and at smaller redshifts due to different GR effects, but on smaller scales and at larger redshifts the difference is small and is mainly due to the difference in background expansion.
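As a minimal illustration of a thawing background (not the authors' full perturbed autonomous system), the sketch below integrates the Friedmann and Klein-Gordon equations for one assumed potential; the potential, initial conditions and parameter values are all illustrative, in units with 8πG = 1 and H0 of order unity.

```python
# Minimal sketch: background evolution of a thawing scalar field plus matter.
# The exponential potential and all values are assumptions chosen only to
# illustrate w_phi thawing away from -1.
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3
V0, lam = 3.0 * (1.0 - Om0), 0.1          # nearly flat potential => thawing behaviour
V = lambda phi: V0 * np.exp(-lam * phi)
dV = lambda phi: -lam * V(phi)

def rhs(t, y):
    a, phi, dphi = y
    rho = 3.0 * Om0 / a**3 + 0.5 * dphi**2 + V(phi)   # matter + scalar field energy
    H = np.sqrt(rho / 3.0)
    return [a * H, dphi, -3.0 * H * dphi - dV(phi)]   # Friedmann + Klein-Gordon

sol = solve_ivp(rhs, [1e-3, 1.0], [1e-2, 0.1, 0.0], rtol=1e-8)
a, phi, dphi = sol.y
w_phi = (0.5 * dphi**2 - V(phi)) / (0.5 * dphi**2 + V(phi))
print(w_phi[0], w_phi[-1])   # starts at -1 (Hubble-frozen field), thaws as it rolls
```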
Carbon recombination lines as a diagnostic of photodissociation regions
NASA Technical Reports Server (NTRS)
Natta, A.; Walmsley, C. M.; Tielens, A. G. G. M.
1994-01-01
We have observed the C91 alpha radio recombination line toward the Orion H II region. This narrow (approximately 3-5 km/s full width at half maximum (FWHM)) line is spatially very extended (approximately 8 arcmin or 1 pc). These characteristics compare well with the observed characteristics of the C II fine-structure line at 158 microns. Thus, the C91 alpha line originates in the predominantly neutral photodissociation regions separating the H II region from the molecular cloud. We have developed theoretical models for the C II radio recombination lines from photodissociation regions. The results show that the I(C91 alpha)/I(C158) intensity ratio is a sensitive function of the temperature and density of the emitting gas. We have also extended theoretical models for photodissociation regions to include the C II recombination lines. Comparison with these models shows that, in the central portion of the Orion region, the C91 alpha line originates in dense (10^6 cm^-3), warm (500-1000 K) gas. Even at large projected distances (approximately 1 pc), the inferred density is still high (10^5 cm^-3) and implies extremely high thermal pressures. As in the case of the C II 158 micron line, the large extent of the C91 alpha line shows that far-ultraviolet (FUV) photons can penetrate to large distances from the illuminating source. The decline of the intensity of the incident radiation field with distance from Theta^1 C seems to be dominated by geometrical dilution, rather than dust extinction. Finally, we have used our models to calculate the intensity of the 9850 A recombination line of C II. The physical conditions inferred from this line are in good agreement with those determined from the radio recombination and the far-infrared fine-structure lines. We show that the ratio of the 9850 A to the C91 alpha lines is a very good probe of very high density clumps.
Particle Interactions Mediated by Dynamical Networks: Assessment of Macroscopic Descriptions
NASA Astrophysics Data System (ADS)
Barré, J.; Carrillo, J. A.; Degond, P.; Peurichard, D.; Zatorska, E.
2018-02-01
We provide a numerical study of the macroscopic model of Barré et al. (Multiscale Model Simul, 2017, to appear) derived from an agent-based model for a system of particles interacting through a dynamical network of links. Assuming that the network remodeling process is very fast, the macroscopic model takes the form of a single aggregation-diffusion equation for the density of particles. The theoretical study of the macroscopic model gives precise criteria for the phase transitions of the steady states, and in the one-dimensional case, we show numerically that the stationary solutions of the microscopic model undergo the same phase transitions and bifurcation types as the macroscopic model. In the two-dimensional case, we show that the numerical simulations of the macroscopic model are in excellent agreement with the predicted theoretical values. This study provides a partial validation of the formal derivation of the macroscopic model from a microscopic formulation and shows that the former is a consistent approximation of an underlying particle dynamics, making it a powerful tool for the modeling of dynamical networks at a large scale.
[Establishment of a 3D finite element model of human skull using MSCT images and mimics software].
Huang, Ping; Li, Zheng-dong; Shao, Yu; Zou, Dong-hua; Liu, Ning-guo; Li, Li; Chen, Yuan-yuan; Wan, Lei; Chen, Yi-jiu
2011-02-01
To establish a human 3D finite element skull model and to explore its value in biomechanical analysis, a cadaveric head was scanned and a 3D skull model was created using Mimics software based on 2D CT axial images. The 3D skull model was optimized with a preprocessor, along with creation of the surface and volume meshes. The stress changes after the head was struck by an object, or after the head hit the ground directly, were analyzed using ANSYS software. The original 3D skull model contained a large number of poor-quality triangles but showed high similarity to the real head, while the optimized model showed high-quality surface and volume meshes with comparatively few triangles. The model could show the local and global stress changes effectively. The human 3D skull model can thus be established using MSCT and Mimics software and provides a good finite element model for biomechanical analysis. This model may also provide a basis for the study of head stress changes under different forces.
Feng, Yuan; Lee, Chung-Hao; Sun, Lining; Ji, Songbai; Zhao, Xuefeng
2017-01-01
Characterizing the mechanical properties of white matter is important to understand and model brain development and injury. With embedded aligned axonal fibers, white matter is typically modeled as a transversely isotropic material. However, most studies characterize the white matter tissue using models with a single anisotropic invariant or in a small-strain regime. In this study, we combined a single experimental procedure - asymmetric indentation - with inverse finite element (FE) modeling to estimate the nearly incompressible transversely isotropic material parameters of white matter. A minimal form comprising three parameters was employed to simulate indentation responses in the large-strain regime. The parameters were estimated using a global optimization procedure based on a genetic algorithm (GA). Experimental data from two indentation configurations of porcine white matter, parallel and perpendicular to the axonal fiber direction, were utilized to estimate model parameters. Results in this study confirmed a strong mechanical anisotropy of white matter in large strain. Further, our results suggested that both indentation configurations are needed to estimate the parameters with sufficient accuracy, and that the indenter-sample friction is important. Finally, we also showed that the estimated parameters were consistent with those previously obtained via a trial-and-error forward FE method in the small-strain regime. These findings are useful in modeling and parameterization of white matter, especially under large deformation, and demonstrate the potential of the proposed asymmetric indentation technique to characterize other soft biological tissues with transversely isotropic properties.
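To make the inverse-fitting loop concrete, here is a toy sketch in which scipy's differential evolution (a global optimizer of the same evolutionary family as the paper's GA) recovers parameters of a closed-form surrogate standing in for the FE forward model; the two-configuration setup illustrates why both indentation directions are needed for identifiability. All functions and values below are assumptions, not the paper's models or data.

```python
# Toy inverse parameter estimation: a global optimizer recovers three material-like
# parameters from two indentation configurations. differential_evolution stands in
# for the paper's genetic algorithm; the polynomial "forward models" are made up.
import numpy as np
from scipy.optimize import differential_evolution

depths = np.linspace(0.0, 2.0, 20)            # indentation depths (arbitrary units)

def force_parallel(p, d):                     # parallel-to-fiber response (surrogate)
    mu, k_par, k_perp = p
    return mu * d + k_par * d**2

def force_perpendicular(p, d):                # perpendicular response (surrogate)
    mu, k_par, k_perp = p
    return mu * d + k_perp * d**2

true = (0.5, 0.2, 0.05)
data_par = force_parallel(true, depths)
data_perp = force_perpendicular(true, depths)

def misfit(p):
    # Both configurations are fitted jointly; either one alone leaves one
    # parameter unconstrained, mirroring the paper's identifiability finding.
    return (np.sum((force_parallel(p, depths) - data_par) ** 2)
            + np.sum((force_perpendicular(p, depths) - data_perp) ** 2))

result = differential_evolution(misfit, bounds=[(0, 2), (0, 1), (0, 1)], seed=0)
print(result.x)   # should approach (0.5, 0.2, 0.05)
```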
Liu, Yuan; Chen, Wei-Hua; Hou, Qiao-Juan; Wang, Xi-Chang; Dong, Ruo-Yan; Wu, Hao
2014-04-01
Near infrared spectroscopy (NIR) was used in this experiment to evaluate the freshness of ice-stored large yellow croaker (Pseudosciaena crocea) over different storage periods, with TVB-N used as the freshness index. By comparing the correlation coefficients and standard deviations of the calibration and validation sets for models built with different pretreatment methods (used singly and in combination), different modeling methods and different wavelength regions, the best TVB-N models for market-sold ice-stored large yellow croaker were established for rapid freshness prediction. The results show that the best-performing model was obtained using normalization by closure (Ncl) with 1st derivative (Dbl), and normalization to unit length (Nle) with 1st derivative, as the pretreatment methods, partial least squares (PLS) as the modeling method, and the wavelength regions of 5 000-7 144 and 7 404-10 000 cm(-1). The calibration model gave a correlation coefficient of 0.992 with a standard error of calibration of 1.045, and the validation model gave a correlation coefficient of 0.999 with a standard error of prediction of 0.990. The combination of several pretreatment methods with selection of the best wavelength region gave good results, and the approach shows promise for rapid freshness detection and quality evaluation of large yellow croaker in the market.
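A minimal sketch of this modeling recipe (first-derivative pretreatment, unit-length normalization, PLS regression) using scikit-learn; the spectra, TVB-N values, component count and filter window are all placeholders, not the study's data or settings.

```python
# Sketch: derivative pretreatment + PLS regression of NIR spectra against TVB-N.
# All data are synthetic stand-ins; only the pipeline shape mirrors the abstract.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))                          # stand-in absorbance spectra
y = X[:, 100] - X[:, 300] + 0.1 * rng.normal(size=60)   # stand-in TVB-N values

X_p = savgol_filter(X, window_length=11, polyorder=2, deriv=1, axis=1)  # 1st derivative
X_p = X_p / np.linalg.norm(X_p, axis=1, keepdims=True)                  # unit-length norm

X_cal, X_val, y_cal, y_val = train_test_split(X_p, y, test_size=0.3, random_state=0)
pls = PLSRegression(n_components=8).fit(X_cal, y_cal)

y_hat = pls.predict(X_val).ravel()
r = np.corrcoef(y_val, y_hat)[0, 1]                  # validation correlation coefficient
sep = np.sqrt(np.mean((y_hat - y_val) ** 2))         # standard error of prediction
print(r, sep)
```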
NASA Astrophysics Data System (ADS)
Bonan, G. B.; Wieder, W. R.
2012-12-01
Decomposition is a large term in the global carbon budget, but models of the earth system that simulate carbon cycle-climate feedbacks are largely untested with respect to litter decomposition. Here, we demonstrate a protocol to document model performance with respect to both long-term (10 year) litter decomposition and steady-state soil carbon stocks. First, we test the soil organic matter parameterization of the Community Land Model version 4 (CLM4), the terrestrial component of the Community Earth System Model, with data from the Long-term Intersite Decomposition Experiment Team (LIDET). The LIDET dataset is a 10-year study of litter decomposition at multiple sites across North America and Central America. We show results for 10-year litter decomposition simulations compared with LIDET for 9 litter types and 20 sites in tundra, grassland, and boreal, conifer, deciduous, and tropical forest biomes. We show additional simulations with DAYCENT, a version of the CENTURY model, to ask how well an established ecosystem model matches the observations. The results reveal large discrepancy between the laboratory microcosm studies used to parameterize the CLM4 litter decomposition and the LIDET field study. Simulated carbon loss is more rapid than the observations across all sites, despite using the LIDET-provided climatic decomposition index to constrain temperature and moisture effects on decomposition. Nitrogen immobilization is similarly biased high. Closer agreement with the observations requires much lower decomposition rates, obtained with the assumption that nitrogen severely limits decomposition. DAYCENT better replicates the observations, for both carbon mass remaining and nitrogen, without requirement for nitrogen limitation of decomposition. Second, we compare global observationally-based datasets of soil carbon with simulated steady-state soil carbon stocks for both models. The model simulations were forced with observationally-based estimates of annual litterfall and model-derived climatic decomposition index. While comparison with the LIDET 10-year litterbag study reveals sharp contrasts between CLM4 and DAYCENT, simulations of steady-state soil carbon show less difference between models. Both CLM4 and DAYCENT significantly underestimate soil carbon. Sensitivity analyses highlight causes of the low soil carbon bias. The terrestrial biogeochemistry of earth system models must be critically tested with observations, and the consequences of particular model choices must be documented. Long-term litter decomposition experiments such as LIDET provide a real-world process-oriented benchmark to evaluate models and can critically inform model development. Analysis of steady-state soil carbon estimates reveals additional, but here different, inferences about model performance.
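For readers unfamiliar with the benchmark's core quantity, below is a schematic of the first-order decay that such litterbag comparisons evaluate, with a climatic decomposition index (CDI) scaling a base rate; the constants are illustrative assumptions, not LIDET or CLM4 values.

```python
# Schematic single-pool litter decomposition: mass remaining follows
# dM/dt = -k * CDI * M, where CDI is a climatic decomposition index that
# scales the base rate k for site temperature and moisture.
import numpy as np

k_base = 0.8          # assumed base decay rate (1/yr) for one litter type
cdi = 0.6             # assumed site climatic decomposition index (0-1)
years = np.arange(0, 11)
mass_remaining = 100.0 * np.exp(-k_base * cdi * years)   # percent of initial mass

for t, m in zip(years, mass_remaining):
    print(f"year {t:2d}: {m:5.1f}% remaining")
```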
The origin of the structure of large-scale magnetic fields in disc galaxies
NASA Astrophysics Data System (ADS)
Nixon, C. J.; Hands, T. O.; King, A. R.; Pringle, J. E.
2018-07-01
The large-scale magnetic fields observed in spiral disc galaxies are often thought to result from dynamo action in the disc plane. However, the increasing importance of Faraday depolarization along any line of sight towards the galactic plane suggests that the strongest polarization signal may come from well above (˜0.3-1 kpc) this plane, from the vicinity of the warm interstellar medium (WIM)/halo interface. We propose (see also Henriksen & Irwin 2016) that the observed spiral fields (polarization patterns) result from the action of vertical shear on an initially poloidal field. We show that this simple model accounts for the main observed properties of large-scale fields. We speculate as to how current models of optical spiral structure may generate the observed arm/interarm spiral polarization patterns.
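For reference, the mechanism proposed here, vertical shear acting on a poloidal field, corresponds to the standard Omega-effect term of the induction equation (written in its generic textbook form, which may differ in detail from the authors' formulation):

$$\frac{\partial B_\phi}{\partial t} \simeq r\,(\mathbf{B}_p \cdot \nabla)\,\Omega \;\approx\; r\,B_z\,\frac{\partial \Omega}{\partial z},$$

so a vertical gradient in the rotation rate Ω shears the vertical field component B_z into an azimuthal (spiral) component B_φ.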
On the relativistic large-angle electron collision operator for runaway avalanches in plasmas
NASA Astrophysics Data System (ADS)
Embréus, O.; Stahl, A.; Fülöp, T.
2018-02-01
Large-angle Coulomb collisions lead to an avalanching generation of runaway electrons in a plasma. We present the first fully conservative large-angle collision operator, derived from the relativistic Boltzmann operator. The relation to previous models for large-angle collisions is investigated, and their validity assessed. We present a form of the generalized collision operator which is suitable for implementation in a numerical kinetic equation solver, and demonstrate the effect on the runaway-electron growth rate. Finally we consider the reverse avalanche effect, where runaways are slowed down by large-angle collisions, and show that the choice of operator is important if the electric field is close to the avalanche threshold.
Comment on "Continuum Lowering and Fermi-Surface Rising in Stromgly Coupled and Degenerate Plasmas"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iglesias, C. A.; Sterne, P. A.
2018-03-16
In a recent Letter, Hu [1] reported photon absorption cross sections in strongly coupled, degenerate plasmas from quantum molecular dynamics (QMD). The Letter claims that the K-edge shift as a function of plasma density computed with simple ionization potential depression (IPD) models is in violent disagreement with the QMD results: the QMD calculations displayed an increase in K-edge shift with increasing density, while the simpler models yielded a decrease. This Comment shows that the claimed large errors reported by Hu for the widely used Stewart-Pyatt (SP) model [2] stem from an invalid comparison of disparate physical quantities and are largely resolved by including well-known corrections for degenerate systems.
Comment on "Continuum Lowering and Fermi-Surface Rising in Stromgly Coupled and Degenerate Plasmas"
Iglesias, C. A.; Sterne, P. A.
2018-03-16
In a recent Letter, Hu [1] reported photon absorption cross sections in strongly coupled, degenerate plasmas from quantum molecular dynamics (QMD). The Letter claims that the K-edge shift as a function of plasma density computed with simple ionization potential depression (IPD) models are in violent disagreement with the QMD results. The QMD calculations displayed an increase in Kedge shift with increasing density while the simpler models yielded a decrease. Here, this Comment shows that the claimed large errors reported by Hu for the widely used Stewart- Pyatt (SP) model [2] stem from an invalid comparison of disparate physical quantities andmore » is largely resolved by including well-known corrections for degenerate systems.« less
An electromagnetism-like metaheuristic for open-shop problems with no buffer
NASA Astrophysics Data System (ADS)
Naderi, Bahman; Najafi, Esmaeil; Yazdani, Mehdi
2012-12-01
This paper considers open-shop scheduling with no intermediate buffer to minimize total tardiness, a problem that occurs in many production settings, such as the plastic molding, chemical, and food processing industries. The paper formulates the problem mathematically as a mixed integer linear program, with which the problem can be solved optimally. The paper also develops a novel metaheuristic based on an electromagnetism algorithm to solve large-sized problems. Two computational experiments are conducted. The first includes small-sized instances, with which the mathematical model and the general performance of the proposed metaheuristic are evaluated. The second evaluates the performance of the metaheuristic on large-sized instances. The results show that the model and the algorithm are effective in dealing with the problem.
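For readers unfamiliar with the electromagnetism-like family of metaheuristics, below is a hedged skeleton of the original continuous-optimization version (the attraction-repulsion scheme of Birbil and Fang); the paper's adaptation to open-shop schedules is not reproduced, and the objective, bounds and parameters are illustrative.

```python
# Skeleton of an electromagnetism-like (EM) metaheuristic: points carry charges
# proportional to solution quality and move under attraction to better points
# and repulsion from worse ones. Objective and settings are stand-ins.
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                         # stand-in objective (minimize)
    return np.sum((x - 0.3) ** 2)

pop, dim, iters = 20, 5, 200
X = rng.uniform(0.0, 1.0, size=(pop, dim))

for _ in range(iters):
    f = np.array([objective(x) for x in X])
    best = X[np.argmin(f)]
    # Charges: better points get larger charge
    q = np.exp(-dim * (f - f.min()) / max(f.sum() - pop * f.min(), 1e-12))
    F = np.zeros_like(X)
    for i in range(pop):
        for j in range(pop):
            if i == j:
                continue
            d = X[j] - X[i]
            r2 = d @ d + 1e-12
            if f[j] < f[i]:               # attraction toward better points
                F[i] += q[i] * q[j] * d / r2
            else:                         # repulsion from worse points
                F[i] -= q[i] * q[j] * d / r2
    # Move every point except the current best along its normalized total force
    for i in range(pop):
        if np.allclose(X[i], best):
            continue
        step = rng.uniform()
        X[i] = np.clip(X[i] + step * F[i] / (np.linalg.norm(F[i]) + 1e-12), 0.0, 1.0)

print(best, objective(best))
```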
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
Universally Sloppy Parameter Sensitivities in Systems Biology Models
Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P
2007-01-01
Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568
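An illustrative computation of such a sensitivity spectrum, using the eigenvalues of the Gauss-Newton Hessian J^T J for a sum-of-exponentials fit, a classic sloppy model; the model form, time points, and rate values are assumptions for illustration only.

```python
# "Sloppy" spectrum sketch: eigenvalues of J^T J for y(t) = sum_k exp(-r_k t),
# differentiated with respect to log r_k. Nearly collinear Jacobian columns
# (similar rates) spread the eigenvalues over many decades.
import numpy as np

t = np.linspace(0.1, 5.0, 50)
rates = np.array([1.0, 1.2, 1.5, 1.9, 2.4])    # closely spaced decay rates (assumed)

# Jacobian column k: d/d(log r_k) exp(-r_k t) = -r_k t exp(-r_k t)
J = np.stack([-r * t * np.exp(-r * t) for r in rates], axis=1)
H = J.T @ J                                     # Gauss-Newton Hessian
eigvals = np.linalg.eigvalsh(H)[::-1]           # descending order

print(eigvals / eigvals[0])                     # ratios span many decades
```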
NASA Astrophysics Data System (ADS)
Liu, J.; Allen, S. E.; Soontiens, N. K.
2016-02-01
The Fraser River is the largest river on the west coast of Canada. It empties into the Strait of Georgia, a large, semi-enclosed body of water between Vancouver Island and the mainland of British Columbia. We have developed a three-dimensional model of the Strait of Georgia, including the Fraser River plume, using the NEMO model in its regional configuration. This operational model produces daily nowcasts and forecasts of salinity, temperature, currents and sea surface height. Observational data available for evaluating the model include daily British Columbia ferry salinity data, profile data and surface drifter data. The salinity of the modelled Fraser River plume agrees well with the ferry-based measurements of salinity. However, large discrepancies exist between the modelled and observed position of the plume. Modelled surface currents compared to drifter observations show that the model has too-strong along-strait velocities and too-weak cross-strait velocities. We investigated the impact of river geometry: a sensitivity experiment compared the original short, shallow river channel to an extended and deepened river channel. With the latter bathymetry, tidal amplitudes within the Fraser River correspond well with observations, and comparisons to drifter tracks show that the surface currents are improved. However, substantial discrepancies remain. We will discuss how reducing the vertical eddy viscosity and other changes further improve the modelled position of the plume.
NASA Astrophysics Data System (ADS)
Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.
2018-04-01
Realistic simulation of the large-scale circulation patterns associated with the El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of the globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. The CMIP5 models have been classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: models in group 1 (G1) overestimate El Niño-ISM teleconnections, group 3 (G3) models underestimate them, and these teleconnections are better represented in group 2 (G2) models. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation anomalies over the southeastern TIO and the western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E respectively. This bias in circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition, the large-scale upper-level convergence together with the lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, an unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Furthermore, large-scale circulation anomalies over the Pacific and the ISM region are misrepresented during El Niño years in G3 models. Too-strong upper-level convergence away from the Indian subcontinent and a too-weak WSNP cyclonic circulation are prominent in most of the G3 models, in which ENSO-ISM teleconnections are underestimated. On the other hand, many G2 models are able to represent most of the large-scale circulation over the Indo-Pacific region associated with El Niño and hence provide more realistic ENSO-ISM teleconnections. This study therefore advocates the importance of simulating large-scale circulation patterns during El Niño years in coupled models in order to capture El Niño-monsoon teleconnections well.
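A sketch of the grouping diagnostic described above: correlate a Niño3.4 SST index with ISM rainfall anomalies for each model and bin models by correlation strength. The arrays and thresholds are placeholders, not the study's data or cutoffs.

```python
# Group models by the strength of their simulated ENSO-ISM teleconnection,
# measured as the correlation between Nino3.4 SST and ISM rainfall anomalies.
import numpy as np

rng = np.random.default_rng(2)
n_years = 50
nino34 = rng.normal(size=n_years)                        # stand-in SST index anomalies

models = {f"model_{k:02d}": -0.5 * nino34 + rng.normal(size=n_years)
          for k in range(23)}                            # stand-in ISM rainfall anomalies

groups = {"G1 (too strong)": [], "G2 (realistic)": [], "G3 (too weak)": []}
for name, rain in models.items():
    r = np.corrcoef(nino34, rain)[0, 1]                  # teleconnection strength
    if r < -0.6:                                         # illustrative cutoffs
        groups["G1 (too strong)"].append(name)
    elif r < -0.3:
        groups["G2 (realistic)"].append(name)
    else:
        groups["G3 (too weak)"].append(name)

for g, names in groups.items():
    print(g, len(names))
```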
The Effect of Sea-Surface Sun Glitter on Microwave Radiometer Measurements
NASA Technical Reports Server (NTRS)
Wentz, F. J.
1981-01-01
A relatively simple model for the microwave brightness temperature of sea surface Sun glitter is presented. The model is an accurate closed-form approximation for the fourfold Sun glitter integral. The model computations indicate that Sun glitter contamination of on-orbit radiometer measurements is appreciable over a large swath area. For winds near 20 m/s, Sun glitter affects the retrieval of environmental parameters for Sun angles as large as 20 to 25 deg. The model-predicted biases in retrieved wind speed and sea surface temperature due to neglecting Sun glitter are consistent with those experimentally observed in SEASAT SMMR retrievals. A least-squares retrieval algorithm that uses a combined sea and Sun model function shows the potential of retrieving accurate environmental parameters in the presence of Sun glitter, so long as the Sun angle and wind speed are above 5 deg and 2 m/s, respectively.
NASA Astrophysics Data System (ADS)
Kiely, Thomas G.; Freericks, J. K.
2018-02-01
In a large transverse field, there is an energy cost associated with flipping spins along the axis of the field. This penalty can be employed to relate the transverse-field Ising model in a large field to the XY model in no field (when measurements are performed at the proper stroboscopic times). We describe the details of how this relationship works and, in particular, we also show under what circumstances it fails. We examine wave-function overlap between the two models and observables such as spin-spin Green's functions. In general, the mapping is quite robust at short times, but it will ultimately fail if the run time becomes too long. There is also a tradeoff, which must be balanced when planning to employ this mapping, between how long a simulation can be run and the time jitter of the stroboscopic measurements.
Fermiophobia in a Higgs triplet model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akeroyd, A. G.; NExT Institute and School of Physics and Astronomy, University of Southampton, Highfield, Southampton SO17 1BJ; Diaz, Marco A.
2011-05-01
A fermiophobic Higgs boson can arise in models with an extended Higgs sector, such as models with scalars in an isospin triplet representation. In a specific model with a scalar triplet and spontaneous violation of lepton number induced by a scalar singlet field, we show that fermiophobia is not a fine-tuned situation, unlike in two-Higgs-doublet models. We study distinctive signals of fermiophobia which can be probed at the LHC. For the case of a small Higgs mass, a characteristic signal would be a moderate B(H→γγ) accompanied by a large B(H→JJ) (where J is a Majoron), the latter being an invisible decay. For the case of a large Higgs mass there is the possibility of dominant H→ZZ, WW and suppressed H→JJ decay modes. In this situation, B(H→ZZ) is larger than B(H→WW), which differs from the SM prediction.
Probing the lepton flavor violation signal via γγ → ℓ̄_iℓ_j in the left-right twin Higgs model at the ILC
NASA Astrophysics Data System (ADS)
Liu, Guo-Li; Wang, Fei; Xie, Kuan; Guo, Xiao-Fei
2017-08-01
To explain the small neutrino masses, heavy Majorana neutrinos are introduced in the left-right twin Higgs model. The heavy neutrinos—together with the charged scalars and the heavy gauge bosons—may contribute large mixings between the neutrinos and the charged leptons, which may induce some distinct lepton-flavor-violating processes. We examine ℓ̄_iℓ_j (i, j = e, μ, τ; i ≠ j) production in γγ collisions in the left-right twin Higgs model, and find that the production rates may be large in some specific regions of parameter space. In optimal cases, it is even possible to detect these processes with reasonable kinematical cuts. We also show that these collisions can effectively constrain the model parameters—such as the Higgs vacuum expectation value and the right-handed neutrino mass—and may serve as a sensitive probe of this new physics model.
Identifying fMRI Model Violations with Lagrange Multiplier Tests
Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor
2013-01-01
The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
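As a generic illustration of the Lagrange multiplier machinery (the textbook auxiliary-regression form, not the paper's fMRI-specific statistics), the sketch below tests a linear model for omitted nonlinearity via the n·R² statistic.

```python
# Generic Lagrange multiplier (score) test: fit the restricted linear model,
# regress its residuals on candidate nonlinear terms, and compare n * R^2 of
# that auxiliary regression to a chi-squared distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(size=n)   # truth is mildly nonlinear

X0 = np.column_stack([np.ones(n), x])                  # restricted (linear) design
beta0, *_ = np.linalg.lstsq(X0, y, rcond=None)
resid = y - X0 @ beta0

Z = np.column_stack([X0, x**2, x**3])                  # augmented with nonlinear terms
gamma, *_ = np.linalg.lstsq(Z, resid, rcond=None)
fitted = Z @ gamma
r2 = 1.0 - np.sum((resid - fitted) ** 2) / np.sum((resid - resid.mean()) ** 2)

lm_stat = n * r2                                       # LM statistic
p_value = stats.chi2.sf(lm_stat, df=2)                 # 2 added regressors
print(lm_stat, p_value)                                # small p => model violation
```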
A Study on Phase Changes of Heterogeneous Composite Materials
NASA Astrophysics Data System (ADS)
Hirasawa, Yoshio; Saito, Akio; Takegoshi, Eisyun
In this study, a phase change process in heterogeneous composite materials consisting of water and coiled copper wires as the conductive solid is investigated using four typical calculation models: 1) model-1, in which the effective thermal conductivity of the composite material is used; 2) model-2, in which a fin metal stands in for the many conductive solids; 3) model-3, in which the effective thermal conductivities between nodes are estimated and a three-dimensional calculation is performed; and 4) model-4, proposed by the authors in a previous paper, in which an effective thermal conductivity is not needed. Model-1 showed a phase change rate considerably lower than the experimental results, while model-2 gave a considerably larger phase change rate. Model-3 agreed well with the experiment in the case of small coil diameter and relatively large Vd. Model-4 showed very good agreement with the experiment over the range of this study.
Empirical validation of an agent-based model of wood markets in Switzerland
Hilty, Lorenz M.; Lemm, Renato; Thees, Oliver
2018-01-01
We present an agent-based model of wood markets and show our efforts to validate this model using empirical data from different sources, including interviews, workshops, experiments, and official statistics. Our own surveys closed gaps where data were not available. Our approach to model validation used a variety of techniques, including the replication of historical production amounts, prices, and survey results, as well as a historical case study of a large sawmill entering the market and becoming insolvent only a few years later. Validating the model using this case provided additional insights, showing how the model can be used to simulate scenarios of resource availability and resource allocation. We conclude that the outcome of the rigorous validation qualifies the model to simulate scenarios concerning resource availability and allocation in our study region. PMID:29351300
Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows
NASA Astrophysics Data System (ADS)
Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang
2018-03-01
In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, while the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including the static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. It is found that the present model achieves relatively small spurious velocities by LB-community standards, and the obtained numerical results also show good agreement with the analytical solutions or available reference results. Lastly, we use the present model to investigate droplet impact on a thin liquid film with a large density ratio of 1000 and Reynolds numbers ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.
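As background, and explicitly not the paper's lattice Boltzmann scheme, a minimal finite-difference solver for the plain Allen-Cahn equation shows the interface-relaxing dynamics on which such phase-field models are built; the grid and constants are illustrative.

```python
# Plain 1D Allen-Cahn equation: phi_t = M (eps^2 phi_xx - f'(phi)) with the
# double-well f = (phi^2 - 1)^2 / 4, so f'(phi) = phi^3 - phi. Explicit Euler,
# periodic boundaries; phases relax to +/-1 separated by diffuse interfaces.
import numpy as np

nx, dx, dt = 200, 0.05, 1e-4
M, eps = 1.0, 0.05
x = (np.arange(nx) - nx / 2) * dx
phi = np.sign(x) + 0.2 * np.random.default_rng(4).normal(size=nx)  # noisy step

for _ in range(20000):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2   # periodic Laplacian
    phi += dt * M * (eps**2 * lap - (phi**3 - phi))

print(phi.min(), phi.max())   # close to -1 and +1 after relaxation
```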
DOE Office of Scientific and Technical Information (OSTI.GOV)
Varble, A. C.; Zipser, Edward J.; Fridlind, Ann
2014-12-27
Ten 3D cloud-resolving model (CRM) simulations and four 3D limited area model (LAM) simulations of an intense mesoscale convective system observed on January 23-24, 2006 during the Tropical Warm Pool - International Cloud Experiment (TWP-ICE) are compared with each other and with observed radar reflectivity fields and dual-Doppler retrievals of vertical wind speeds in an attempt to explain published results showing a high bias in simulated convective radar reflectivity aloft. This high bias results from large ice water contents, which are a product of large, strong convective updrafts, although hydrometeor size distribution assumptions modulate the size of this bias. Snow reflectivity can exceed 40 dBZ in a two-moment scheme when a constant bulk density of 100 kg m-3 is used; making snow mass more realistically proportional to area rather than volume should somewhat alleviate this problem. Graupel, unlike snow, produces high-biased reflectivity in all simulations, associated with large amounts of liquid water above the freezing level in updraft cores. Peak vertical velocities in deep convective updrafts are greater than dual-Doppler retrieved values, especially in the upper troposphere. Freezing of large rainwater contents lofted above the freezing level in simulated updraft cores greatly contributes to these excessive upper-tropospheric vertical velocities. Strong simulated updraft cores are nearly undiluted, with some showing supercell characteristics. Decreasing horizontal grid spacing from 900 meters to 100 meters weakens strong updrafts, but not enough to match observational retrievals. Therefore, overly intense simulated updrafts may partly be a product of interactions between convective dynamics, parameterized microphysics, and large-scale environmental biases that promote different convective modes and strengths than observed.
Seismic Imaging of the Source Physics Experiment Site with the Large-N Seismic Array
NASA Astrophysics Data System (ADS)
Chen, T.; Snelson, C. M.; Mellors, R. J.
2017-12-01
The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. The goal of SPE is to understand seismic wave generation and propagation from these explosions. To achieve this goal, we need an accurate geophysical model of the SPE site. A Large-N seismic array deployed at the SPE site during one of the chemical explosions (SPE-5) helps us construct a high-resolution local geophysical model. The Large-N seismic array consists of 996 geophones and covers an area of approximately 2 × 2.5 km. The array is located at the northern end of the Yucca Flat basin, at a transition from Climax Stock (granite) to Yucca Flat (alluvium). In addition to the SPE-5 explosion, the Large-N array also recorded 53 weight drops. Using the Large-N seismic array recordings, we perform body wave and surface wave velocity analysis and obtain 3D seismic imaging of the top approximately 1 km of the crust at the SPE site. The imaging results show clear variation of geophysical parameters with local geological structures, including a heterogeneous weathering layer and various rock types. The results of this work are being incorporated into the larger 3D modeling effort of the SPE program to validate the predictive models developed for the site.
Robust large-scale parallel nonlinear solvers for simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step from a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long as Newton-GMRES to solve general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
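For concreteness, here is a minimal sketch of the (good) Broyden update at the heart of the approach described above, applied to an arbitrary 2D test system rather than any of the report's applications.

```python
# Broyden's (good) method: maintain a Jacobian approximation B and update it
# from secant pairs, so no analytic Jacobian is ever evaluated.
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 4.0,   # circle of radius 2
                     x[0] - x[1]])              # line x = y; root near (sqrt(2), sqrt(2))

x = np.array([1.5, 1.5])
B = np.eye(2)                                   # initial Jacobian approximation
f = F(x)

for _ in range(50):
    dx = np.linalg.solve(B, -f)                 # quasi-Newton step
    x_new = x + dx
    f_new = F(x_new)
    df = f_new - f
    # Broyden "good" update: B += (df - B dx) dx^T / (dx^T dx)
    B += np.outer(df - B @ dx, dx) / (dx @ dx)
    x, f = x_new, f_new
    if np.linalg.norm(f) < 1e-10:
        break

print(x)   # approximately [sqrt(2), sqrt(2)]
```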
Second look at the spread of epidemics on networks
NASA Astrophysics Data System (ADS)
Kenah, Eben; Robins, James M.
2007-09-01
In an important paper, Newman [Phys. Rev. E 66, 016128 (2002)] claimed that a general network-based stochastic Susceptible-Infectious-Removed (SIR) epidemic model is isomorphic to a bond percolation model, where the bonds are the edges of the contact network and the bond occupation probability is equal to the marginal probability of transmission from an infected node to a susceptible neighbor. In this paper, we show that this isomorphism is incorrect and define a semidirected random network we call the epidemic percolation network that is exactly isomorphic to the SIR epidemic model in any finite population. In the limit of a large population, (i) the distribution of (self-limited) outbreak sizes is identical to the size distribution of (small) out-components, (ii) the epidemic threshold corresponds to the phase transition where a giant strongly connected component appears, (iii) the probability of a large epidemic is equal to the probability that an initial infection occurs in the giant in-component, and (iv) the relative final size of an epidemic is equal to the proportion of the network contained in the giant out-component. For the SIR model considered by Newman, we show that the epidemic percolation network predicts the same mean outbreak size below the epidemic threshold, the same epidemic threshold, and the same final size of an epidemic as the bond percolation model. However, the bond percolation model fails to predict the correct outbreak size distribution and probability of an epidemic when there is a nondegenerate infectious period distribution. We confirm our findings by comparing predictions from percolation networks and bond percolation models to the results of simulations. In the Appendix, we show that an isomorphism to an epidemic percolation network can be defined for any time-homogeneous stochastic SIR model.
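The bond-percolation side of this comparison is easy to sketch: occupy each edge of a contact network with the marginal transmission probability T and read outbreak statistics off the connected components. Note that this is the simple undirected mapping the paper critiques (exact only for degenerate infectious periods); the semidirected epidemic percolation network is not reproduced here, and the network and T below are illustrative.

```python
# Bond percolation on a contact network: keep each edge with probability T,
# then the component sizes approximate outbreak sizes under the simple mapping.
import numpy as np
import networkx as nx

rng = np.random.default_rng(5)
G = nx.gnp_random_graph(2000, 3.0 / 2000, seed=5)      # contact network, mean degree ~3

T = 0.5                                                # per-edge transmission probability
H = nx.Graph()
H.add_nodes_from(G)
H.add_edges_from(e for e in G.edges if rng.random() < T)

sizes = sorted((len(c) for c in nx.connected_components(H)), reverse=True)
print("largest component fraction:", sizes[0] / G.number_of_nodes())
# Above the percolation threshold a giant component appears; its relative size
# estimates the final size of a large epidemic under this mapping.
```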
The use of Argo for validation and tuning of mixed layer models
NASA Astrophysics Data System (ADS)
Acreman, D. M.; Jeffery, C. D.
We present results from validation and tuning of 1-D ocean mixed layer models using data from Argo floats and data from Ocean Weather Station Papa (145°W, 50°N). Model tests at Ocean Weather Station Papa showed that a bulk model could perform well provided it was tuned correctly. The Large et al. [Large, W.G., McWilliams, J.C., Doney, S.C., 1994. Oceanic vertical mixing: a review and a model with a nonlocal boundary layer parameterisation. Rev. Geophys. 32 (Novermber), 363-403] K-profile parameterisation (KPP) model also gave a good representation of mixed layer depth provided the vertical resolution was sufficiently high. Model tests using data from a single Argo float indicated a tendency for the KPP model to deepen insufficiently over an annual cycle, whereas the tuned bulk model and general ocean turbulence model (GOTM) gave a better representation of mixed layer depth. The bulk model was then tuned using data from a sample of Argo floats and a set of optimum parameters was found; these optimum parameters were consistent with the tuning at OWS Papa.
Li, Min; Zhang, John Z H
2017-03-08
The development of polarizable water models at coarse-grained (CG) levels is of much importance to CG molecular dynamics simulations of large biomolecular systems. In this work, we combined the newly developed two-bead multipole force field (TMFF) for proteins with the two-bead polarizable water models to carry out CG molecular dynamics simulations for benchmark proteins. In our simulations, two different two-bead polarizable water models are employed, the RTPW model representing five water molecules by Riniker et al. and the LTPW model representing four water molecules. The LTPW model is developed in this study based on the Martini three-bead polarizable water model. Our simulation results showed that the combination of TMFF with the LTPW model significantly stabilizes the protein's native structure in CG simulations, while the use of the RTPW model gives better agreement with all-atom simulations in predicting the residue-level fluctuation dynamics. Overall, the TMFF coupled with the two-bead polarizable water models enables one to perform an efficient and reliable CG dynamics study of the structural and functional properties of large biomolecules.
Computing the universe: how large-scale simulations illuminate galaxies and dark energy
NASA Astrophysics Data System (ADS)
O'Shea, Brian
2015-04-01
High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these structures operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and their complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.
The influence of lateral Earth structure on glacial isostatic adjustment in Greenland
NASA Astrophysics Data System (ADS)
Milne, Glenn A.; Latychev, Konstantin; Schaeffer, Andrew; Crowley, John W.; Lecavalier, Benoit S.; Audette, Alexandre
2018-05-01
We present the first results that focus on the influence of lateral Earth structure on Greenland glacial isostatic adjustment (GIA) using a model that can explicitly incorporate 3-D Earth structure. In total, eight realisations of lateral viscosity structure were developed using four global seismic velocity models and two global lithosphere (elastic) thickness models. Our results show that lateral viscosity structure has a significant influence on model output of both deglacial relative sea level (RSL) changes and present-day rates of vertical land motion. For example, lateral structure changes the RSL predictions in the Holocene by several tens of metres in many locations relative to the 1-D case. Modelled rates of vertical land motion are also significantly affected, with differences from the 1-D case commonly at the mm/yr level and exceeding 2 mm/yr in some locations. The addition of lateral structure was unable to account for previously identified data-model RSL misfits in northern and southern Greenland, suggesting limitations in the adopted ice model (Lecavalier et al. 2014) and/or the existence of processes not included in our model. Our results show large data-model discrepancies in uplift rates when applying a 1-D viscosity model tuned to fit the RSL data; these discrepancies cannot be reconciled by adding the realisations of lateral structure considered here. In many locations, the spread in model output for the eight different 3-D Earth models is of similar amplitude to, or larger than, the influence of lateral structure (as defined by the average of all eight model runs). This reflects the differences between the four seismic and two lithosphere models used and implies a large uncertainty in defining the GIA signal, given that other aspects that contribute to this uncertainty (e.g. scaling from seismic velocity to viscosity) were not considered in this study. In order to reduce this large model uncertainty, an important next step is to develop more accurate constraints on Earth structure beneath Greenland based on regional geophysical data sets.
Zyvoloski, G.; Kwicklis, E.; Eddebbarh, A.-A.; Arnold, B.; Faunt, C.; Robinson, B.A.
2003-01-01
This paper presents several different conceptual models of the Large Hydraulic Gradient (LHG) region north of Yucca Mountain and describes the impact of those models on groundwater flow near the potential high-level repository site. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain. This model is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. The numerical model is calibrated by matching available water level measurements using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (the hydrologic simulation code FEHM and the parameter estimation software PEST) and model setup allow for efficient calibration of multiple conceptual models. Until now, the Large Hydraulic Gradient has been simulated using a low-permeability, east-west oriented feature, even though direct evidence for this feature is lacking. In addition to this model, we investigate and calibrate three additional conceptual models of the Large Hydraulic Gradient, all of which are based on a presumed zone of hydrothermal chemical alteration north of Yucca Mountain. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the potential repository that record differences in the predicted groundwater flow regime. The results show that the Large Hydraulic Gradient can be represented with the alternative conceptual models that include the hydrothermally altered zone. The predicted pathways are mildly sensitive to the choice of conceptual model and more sensitive to the quality of calibration in the vicinity of the repository. These differences are most likely due to different degrees of fit of model to data, and do not represent important differences in hydrologic conditions among the different conceptual models.
Simulating Conditional Deterministic Predictability within Ocean Frontogenesis
2014-03-26
Prediction System (COAMPS; Hodur, 1997) across the inner domain. The surface wind stress is determined from the atmospheric model wind velocity... layers on the light side of the front. Increasing the strength of the down-front wind increases the frontogenesis. Mahadevan and Tandon (2006) showed... Filaments of shallow MLD, large frontogenesis and large surface divergence (upwelling) are found in the OSEs, but at different locations and strengths.
Attack risk for butterflies changes with eyespot number and size
Ho, Sebastian; Schachat, Sandra R.; Piel, William H.; Monteiro, Antónia
2016-01-01
Butterfly eyespots are known to function in predator deflection and predator intimidation, but it is still unclear what factors cause eyespots to serve one function over the other. Both functions have been demonstrated in different species that varied in eyespot size, eyespot number and wing size, leaving the contribution of each of these factors to butterfly survival unclear. Here, we study how each of these factors contributes to eyespot function by using paper butterfly models, where each factor is varied in turn, and exposing these models to predation in the field. We find that the presence of multiple, small eyespots results in high predation, whereas single large eyespots (larger than 6 mm in diameter) result in low predation. These data indicate that single large eyespots intimidate predators, whereas multiple small eyespots produce a conspicuous, but non-intimidating signal to predators. We propose that eyespots may gain an intimidation function by increasing in size. Our measurements of eyespot size in 255 nymphalid butterfly species show that large eyespots are relatively rare and occur predominantly on ventral wing surfaces. By mapping eyespot size on the phylogeny of the family Nymphalidae, we show that these large eyespots, with a potential intimidation function, are dispersed throughout multiple nymphalid lineages, indicating that phylogeny is not a strong predictor of eyespot size. PMID:26909190
An iterated cubature unscented Kalman filter for large-DoF systems identification with noisy data
NASA Astrophysics Data System (ADS)
Ghorbani, Esmaeil; Cha, Young-Jin
2018-04-01
Structural and mechanical system identification under dynamic loading has been an important research topic over the last three or four decades. Many Kalman-filtering-based approaches have been developed for linear and nonlinear systems. For example, to predict nonlinear systems, the unscented Kalman filter has been applied. However, extensive literature reviews show that the unscented Kalman filter still performs poorly on systems with large degrees of freedom. In this research, a modified unscented Kalman filter is proposed by integrating a cubature Kalman filter to improve the identification performance for systems with large degrees of freedom. The novelty of this work lies in combining the unscented transform with the cubature integration concept to obtain a more accurate output from the transformation of the state vector and its related covariance matrix. To evaluate the proposed method, three different numerical models (i.e., the single-degree-of-freedom Bouc-Wen model, a linear 3-degrees-of-freedom system, and a 10-degrees-of-freedom system) are investigated. To evaluate the robustness of the proposed method, high levels of noise in the measured response data are considered. The results show that the proposed method is significantly superior to the traditional UKF for noisy measured data in systems with large degrees of freedom.
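For readers unfamiliar with the cubature construction the abstract refers to, the following minimal sketch generates the third-degree spherical-radial cubature point set and pushes it through a nonlinear map. This is the generic textbook scheme, not the authors' modified filter, and all names are illustrative.

```python
import numpy as np

def cubature_points(x, P):
    """Third-degree spherical-radial cubature point set: 2n equally
    weighted points built from the state mean x (n,) and covariance
    P (n, n), used by the cubature Kalman filter in place of the
    unscented transform's sigma points."""
    n = x.size
    S = np.linalg.cholesky(P)                    # P = S @ S.T
    offsets = np.sqrt(n) * np.hstack([S, -S])    # shape (n, 2n)
    points = x[:, None] + offsets                # one column per point
    weights = np.full(2 * n, 1.0 / (2 * n))
    return points, weights

def propagate(f, points, weights):
    """Push the points through a nonlinear map f and recover the
    transformed mean and covariance."""
    Y = np.column_stack([f(points[:, i]) for i in range(points.shape[1])])
    mean = Y @ weights
    dev = Y - mean[:, None]
    return mean, (dev * weights) @ dev.T
```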
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model; in fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single-variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data, applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
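As a toy illustration of a single-variable CER of the kind described above, one can fit a power law cost = a·D^b by linear regression in log-log space. The aperture/cost pairs below are invented for the example and are not the paper's data or coefficients.

```python
import numpy as np

# Hypothetical data: aperture diameter (m) vs. telescope cost ($M).
# Illustrative numbers only, not the CERs from the paper.
diameter = np.array([0.3, 0.85, 1.0, 2.4, 3.5])
cost = np.array([50.0, 250.0, 320.0, 2500.0, 4000.0])

# Fit cost = a * D**b in log-log space; polyfit returns [slope, intercept].
b, log_a = np.polyfit(np.log(diameter), np.log(cost), 1)
a = np.exp(log_a)
print(f"cost ~ {a:.1f} * D^{b:.2f}  ($M, D in meters)")
```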
Modeling correlated motion in thermoelectric skutterudite materials
NASA Astrophysics Data System (ADS)
Keiber, Trevor; Bridges, Frank; Bridges Lab Team
2014-03-01
Filled skutterudite compounds, LnT4X12 (Ln=rare earth; T=Fe,Ru,Os; X=P,As,Sb), have previously been modeled using a rigid cage approximation for the ``rattling'' rare earth atom. The large thermal broadening with temperature of the rattler can be fit using an Einstein model. Recent measurements of the second neighbor Ln-T peaks show an unusually large thermal broadening suggesting motion of the cage of atoms. To incorporate these results we developed three and four mass spring models to give the acoustic and optical phonon mode spectra. For the simplest three mass model we identify the low energy optical mode as the rattling mode. This rattling mode is likely coupled to the acoustic mode, and responsible for the low thermal conductivity of the skutterudite compound. We extend this model to four atoms to describe the CuO4 rings in oxy-skutterudites and the X4 rings in LnT4X12. This talk provides a model for the experimental results of the previous presentation. Support: NSF DMR1005568.
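The thermal broadening mentioned above is conventionally fit with the correlated-Einstein expression for the mean-square relative displacement of a bond. A sketch follows; the Einstein temperature and reduced mass are chosen purely for illustration and are not the values reported in the talk.

```python
import numpy as np
from scipy.constants import hbar, k as k_B, u as amu

def einstein_sigma2(T, theta_E, mu_amu):
    """Correlated-Einstein-model mean-square relative displacement
    sigma^2(T) for a bond with reduced mass mu (in amu) and Einstein
    temperature theta_E (K):
        sigma^2 = (hbar / (2 * mu * omega_E)) * coth(theta_E / (2 T))
    """
    mu = mu_amu * amu
    omega_E = k_B * theta_E / hbar
    return hbar / (2.0 * mu * omega_E) / np.tanh(theta_E / (2.0 * T))

# Illustrative values: a ~100 K rattler mode, reduced mass ~70 amu.
T = np.linspace(5.0, 300.0, 60)
sigma2_A2 = einstein_sigma2(T, theta_E=100.0, mu_amu=70.0) * 1e20  # m^2 -> A^2
```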
Kim, Jong Ok; Lee, Jong-Ho; Kim, Kwang-Sup; Ji, Jong-Hun; Koh, Sung-Jun; Lee, Jae-Ho
2017-11-01
This study investigated the efficacy of bridging repair using an acellular dermal matrix (ADM) and an ADM with stem cells in rabbits. Also investigated were clinical outcomes of ADM bridging repair for large to massive rotator cuff tears. ADM, with and without stem cells, was used to cover a 5- × 5-mm cuff defect in 17 rabbits, and biomechanical, histologic, and immunohistochemical analyses were conducted. Also evaluated were 24 patients with large to massive rotator cuff tears after ADM bridging repair. In the biomechanical test, the normal rotator cuff, cuff with ADM plus stem cells, and cuff with ADM in the rabbit model showed a maximum load (N) of 287.3, 217.5, and 170.3 and ultimate tensile strength (N/mm²) of 11.1, 8.0, and 5.2, respectively. Histologically, the cuff tendons with the ADM or ADM plus stem cells showed characteristically mature tendons as time passed. In the clinical study, the mean American Shoulder and Elbow Surgeons score improved from 50 preoperatively to 83 postoperatively, the University of California Los Angeles Shoulder Rating Scale from 17 to 30, and the Simple Shoulder Test from 4 to 8. No further fatty deterioration or muscle atrophy was observed on follow-up magnetic resonance imaging. A retear was found in 5 of 24 patients (21%). Bridging repair with ADM or stem cells in the rabbit model showed cellular infiltration into the graft and some evidence of neotendon formation. Clinically, ADM repair was a safe alternative that did not show any further fatty deterioration or muscle atrophy in large to massive rotator cuff tears. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Deformable image registration for tissues with large displacements
Huang, Xishi; Ren, Jing; Green, Mark
2017-01-01
Image registration for internal organs and soft tissues is considered extremely challenging due to organ shifts and tissue deformation caused by patients' movements such as respiration and repositioning. In our previous work, we proposed a fast registration method for deformable tissues with small rotations. We extend our method to deformable registration of soft tissues with large displacements. We analyzed the deformation field of the liver by decomposing the deformation into shift, rotation, and pure deformation components and concluded that in many clinical cases, the liver deformation contains large rotations and small deformations. This analysis justified the use of linear elastic theory in our image registration method. We also proposed a region-based neuro-fuzzy transformation model to seamlessly stitch together local affine and local rigid models in different regions. We performed experiments on a liver MRI image set and showed the effectiveness of the proposed registration method. We also compared the performance of the proposed method with the previous method on tissues with large rotations and showed that the proposed method outperformed the previous method when dealing with the combination of pure deformation and large rotations. Validation results show that we can achieve a target registration error of 1.87±0.87 mm and an average centerline distance error of 1.28±0.78 mm. The proposed technique has the potential to significantly improve registration capabilities and the quality of intraoperative image guidance. To the best of our knowledge, this is the first time that the complex displacement of the liver is explicitly separated into local pure deformation and rigid motion. PMID:28149924
A Computational Approach to Qualitative Analysis in Large Textual Datasets
Evans, Michael S.
2014-01-01
In this paper I introduce computational techniques to extend qualitative analysis into the study of large textual datasets. I demonstrate these techniques by using probabilistic topic modeling to analyze a broad sample of 14,952 documents published in major American newspapers from 1980 through 2012. I show how computational data mining techniques can identify and evaluate the significance of qualitatively distinct subjects of discussion across a wide range of public discourse. I also show how examining large textual datasets with computational methods can overcome methodological limitations of conventional qualitative methods, such as how to measure the impact of particular cases on broader discourse, how to validate substantive inferences from small samples of textual data, and how to determine if identified cases are part of a consistent temporal pattern. PMID:24498398
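The paper uses probabilistic topic modeling; as a point of reference, a minimal latent Dirichlet allocation pipeline in scikit-learn looks like the sketch below. The toy corpus and parameter choices are placeholders, not the study's 14,952-document dataset or settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus standing in for the newspaper articles.
docs = [
    "school board votes on science curriculum",
    "new telescope images reveal distant galaxies",
    "city council debates school funding",
    "astronomers model galaxy formation with simulations",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)                    # document-term count matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(X)              # per-document topic mixtures

terms = vec.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = comp.argsort()[-5:][::-1]            # highest-weight words
    print(f"topic {k}:", [terms[i] for i in top])
```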
NASA Astrophysics Data System (ADS)
McNider, R. T.; Steeneveld, G.; Holtslag, B.; Pielke, R. A.; Mackaro, S.; Nair, U. S.; Biazar, A. P.; Christy, J. R.; Walters, J.
2012-12-01
One of the most significant signals in the thermometer-observed temperature record since 1900 is the decrease in the diurnal temperature range (DTR) over land. CMIP3 climate models only captured about 20% of this trend difference. An update of observed trends through 2010 indicates that CMIP5 models still only capture about 28%. Because climate models have not captured this asymmetry, many investigators have looked to forcings or processes that models have not included to explain the models' lack of fidelity. Our paper takes an alternative view: the nonlinear dynamics of the stable nocturnal boundary layer (SNBL) may provide a general explanation of the asymmetry. This was first postulated in a nonlinear analysis of a simple two-layer model, which found that slight changes in incoming longwave radiation might result in large changes in the near-surface temperature as the boundary layer is destabilized slightly by the added downward radiation. This produced a mixing of warmer temperatures from aloft to the surface as the turbulent mixing was enhanced. In the present study we examine whether this behavior is retained in a more complete multi-layer column model with a state-of-the-art radiation scheme for the stable boundary layer. The response of a nocturnal boundary layer to an added increment of downward radiation from CO2 and water vapor (4.8 W m⁻²) was compared to the solution without this forcing. These experiments showed that indeed the SNBL grew slightly and was less stable due to the added longwave radiation. The model showed that the shelter temperature warmed substantially due to this destabilization. Moreover, the budget calculations showed that only about 20% of the warming was due to the added longwave energy; most of the warming at shelter height was due to the redistribution. Budget calculations in the paper also showed that the ultimate fate of the added input of longwave energy was highly sensitive to boundary layer parameters and turbulence parameterizations. The model showed that at light winds (weak turbulence) the boundary layer was not able to lift this energy off the surface and into the atmosphere; thus, more radiation was emitted from the surface. If soil conductivity or heat capacity were large, then more of the energy would heat the ground. Parameterizations of the type used in large-scale models added much more sensible heat to the atmosphere. Based on these model analyses, it is likely that part of the observed long-term increase in minimum temperature reflects a redistribution of heat by changes in turbulence and not an accumulation of heat in the SNBL. Because of the sensitivity of the shelter temperature to parameters and to uncertain turbulence parameterization in the SNBL, there should be caution about the use of minimum temperatures as a global warming metric in either observations or models.
Persistent model order reduction for complex dynamical systems using smooth orthogonal decomposition
NASA Astrophysics Data System (ADS)
Ilbeigi, Shahab; Chelidze, David
2017-11-01
Full-scale complex dynamic models are not effective for parametric studies due to the inherent constraints on available computational power and storage resources. A persistent reduced order model (ROM) that is robust, stable, and provides high-fidelity simulations for a relatively wide range of parameters and operating conditions can provide a solution to this problem. The fidelity of a new framework for persistent model order reduction of large and complex dynamical systems is investigated. The framework is validated using several numerical examples including a large linear system and two complex nonlinear systems with material and geometrical nonlinearities. While the framework is used for identifying the robust subspaces obtained from both proper and smooth orthogonal decompositions (POD and SOD, respectively), the results show that SOD outperforms POD in terms of stability, accuracy, and robustness.
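For reference, the two decompositions compared in this abstract can be computed from a snapshot matrix in a few lines. This is a generic sketch of POD via SVD and SOD via a generalized symmetric eigenproblem (assuming the derivative covariance is positive definite), not the authors' framework.

```python
import numpy as np
from scipy.linalg import eigh

def pod_modes(X):
    """Proper orthogonal decomposition of a snapshot matrix X
    (n_dof, n_snapshots); columns of U are POD modes ranked by energy."""
    Xc = X - X.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    return U, s**2 / (X.shape[1] - 1)            # modes, modal energies

def sod_modes(X, dt):
    """Smooth orthogonal decomposition: generalized eigenproblem between
    the snapshot covariance and the covariance of its time derivative.
    Assumes enough snapshots that the derivative covariance is
    positive definite."""
    Xc = X - X.mean(axis=1, keepdims=True)
    V = np.gradient(Xc, dt, axis=1)              # finite-difference velocities
    Sx = Xc @ Xc.T / (X.shape[1] - 1)
    Sv = V @ V.T / (X.shape[1] - 1)
    lam, Phi = eigh(Sx, Sv)                      # largest lam = smoothest mode
    return Phi[:, ::-1], lam[::-1]
```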
NASA Astrophysics Data System (ADS)
Bier, Martin; Brak, Bastiaan
2015-04-01
In the Netherlands there has been nationwide vaccination against the measles since 1976. However, in small clustered communities of orthodox Protestants there is widespread refusal of the vaccine. After 1976, three large outbreaks with about 3000 reported cases of the measles have occurred among these orthodox Protestants. The outbreaks appear to occur about every twelve years. We show how a simple Kermack-McKendrick-like model can quantitatively account for the periodic outbreaks. Approximate analytic formulae connecting the period, size, and outbreak duration are derived. With an enhanced model we take the latency period into account. We also expand the model to follow how different age groups are affected. Like other researchers using other methods, we conclude that large-scale underreporting of the disease must occur.
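A minimal sketch of the kind of Kermack-McKendrick dynamics described here, for a partially vaccinated community; the population size, vaccination coverage, and rate constants are illustrative stand-ins, not the paper's fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, vacc = 250_000, 0.6          # community size and vaccinated fraction
gamma = 1.0 / 9.0               # recovery rate (1/day)
R0 = 15.0                       # measles-like basic reproduction number
beta = R0 * gamma / N           # transmission rate per S-I pair

def sir(t, y):
    S, I, R = y
    new_inf = beta * S * I
    return [-new_inf, new_inf - gamma * I, gamma * I]

S0 = (1.0 - vacc) * N           # only the unvaccinated pool is susceptible
sol = solve_ivp(sir, (0.0, 365.0), [S0 - 10.0, 10.0, 0.0], max_step=1.0)
print(f"final outbreak size: {sol.y[2, -1]:.0f} recovered cases")
```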
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
Sodium Intake and Osteoporosis. Findings From the Women's Health Initiative.
Carbone, Laura; Johnson, Karen C; Huang, Ying; Pettinger, Mary; Thomas, Fridjtof; Cauley, Jane; Crandall, Carolyn; Tinker, Lesley; LeBoff, Meryl Susan; Wactawski-Wende, Jean; Bethel, Monique; Li, Wenjun; Prentice, Ross
2016-04-01
In this large, prospective, observational cohort study of postmenopausal women in the WHI, Cox proportional hazard regression models showed that sodium intake at or near recommended levels is not likely to impact bone metabolism.
Environmental modeling of trans-arctic and re-routed flights.
DOT National Transportation Integrated Search
2010-02-01
Recent work by researchers at Stanford University showed potentially large impacts on Arctic temperature increases due to aircraft over-flights. The FAA's Office of Environment and Energy tasked the Volpe Center, the MITRE Corporation, and Stanford...
This paper provides an overview of existing statistical methodologies for the estimation of site-specific and regional trends in wet deposition. The interaction of atmospheric processes and emissions tends to produce wet deposition data patterns that show large spatial and tempora...
Modeling stochastic noise in gene regulatory systems
Meister, Arwen; Du, Chao; Li, Ye Henry; Wong, Wing Hung
2014-01-01
The Master equation is considered the gold standard for modeling the stochastic mechanisms of gene regulation in molecular detail, but it is too complex to solve exactly in most cases, so approximation and simulation methods are essential. However, there is still a lack of consensus about the best way to carry these out. To help clarify the situation, we review Master equation models of gene regulation, theoretical approximations based on an expansion method due to N.G. van Kampen and R. Kubo, and simulation algorithms due to D.T. Gillespie and P. Langevin. Expansion of the Master equation shows that for systems with a single stable steady-state, the stochastic model reduces to a deterministic model in a first-order approximation. Additional theory, also due to van Kampen, describes the asymptotic behavior of multistable systems. To support and illustrate the theory and provide further insight into the complex behavior of multistable systems, we perform a detailed simulation study comparing the various approximation and simulation methods applied to synthetic gene regulatory systems with various qualitative characteristics. The simulation studies show that for large stochastic systems with a single steady-state, deterministic models are quite accurate, since the probability distribution of the solution has a single peak tracking the deterministic trajectory whose variance is inversely proportional to the system size. In multistable stochastic systems, large fluctuations can cause individual trajectories to escape from the domain of attraction of one steady-state and be attracted to another, so the system eventually reaches a multimodal probability distribution in which all stable steady-states are represented proportional to their relative stability. However, since the escape time scales exponentially with system size, this process can take a very long time in large systems. PMID:25632368
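As a concrete anchor for the simulation methods surveyed here, below is a minimal Gillespie stochastic simulation of a birth-death gene expression model (constitutive transcription plus first-order mRNA decay); the rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gillespie simulation of a minimal birth-death gene expression model:
#   DNA -> DNA + mRNA  (rate k),   mRNA -> 0  (rate g * m)
k, g = 10.0, 0.5                       # illustrative rates
t, m, t_end = 0.0, 0, 100.0
times, counts = [0.0], [0]
while t < t_end:
    a = np.array([k, g * m])           # reaction propensities
    a0 = a.sum()
    t += rng.exponential(1.0 / a0)     # exponential waiting time
    m += 1 if rng.random() < a[0] / a0 else -1
    times.append(t); counts.append(m)
# Steady state: mean copy number ~ k/g = 20, Poisson-distributed.
```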
Mantle flow influence on subduction evolution
NASA Astrophysics Data System (ADS)
Chertova, Maria V.; Spakman, Wim; Steinberger, Bernhard
2018-05-01
The impact of remotely forced mantle flow on regional subduction evolution is largely unexplored. Here we investigate this by means of 3D thermo-mechanical numerical modeling using a regional modeling domain. We start with simplified models consisting of a 600 km (or 1400 km) wide subducting plate surrounded by other plates. Mantle inflow of ∼3 cm/yr is prescribed during 25 Myr of slab evolution on a subset of the domain boundaries while the other side boundaries are open. Our experiments show that the influence of imposed mantle flow on subduction evolution is the least for trench-perpendicular mantle inflow from either the back or front of the slab leading to 10-50 km changes in slab morphology and trench position while no strong slab dip changes were observed, as compared to a reference model with no imposed mantle inflow. In experiments with trench-oblique mantle inflow we notice larger effects of slab bending and slab translation of the order of 100-200 km. Lastly, we investigate how subduction in the western Mediterranean region is influenced by remotely excited mantle flow that is computed by back-advection of a temperature and density model scaled from a global seismic tomography model. After 35 Myr of subduction evolution we find 10-50 km changes in slab position and slab morphology and a slight change in overall slab tilt. Our study shows that remotely forced mantle flow leads to secondary effects on slab evolution as compared to slab buoyancy and plate motion. Still these secondary effects occur on scales, 10-50 km, typical for the large-scale deformation of the overlying crust and thus may still be of large importance for understanding geological evolution.
Lin, Fu; Leyffer, Sven; Munson, Todd
2016-04-12
We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in provably finitely many steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
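The constraint-aggregation step can be illustrated in a few lines: replacing each group of ≤-constraints by a convex combination (here a plain average) yields a relaxation, since any point satisfying all rows in a group also satisfies their average. This is a generic sketch of the idea, not the authors' implementation.

```python
import numpy as np

def aggregate_constraints(A, b, groups):
    """Coarsen A x <= b by replacing each group of rows with one convex
    combination (a plain average).  Any x feasible for the original rows
    is feasible for the aggregated row, so the coarse model relaxes the
    original one."""
    A_c = np.vstack([A[g].mean(axis=0) for g in groups])
    b_c = np.array([b[g].mean() for g in groups])
    return A_c, b_c

# Toy example: 4 constraints collapsed into 2.
A = np.array([[1.0, 2.0], [2.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([4.0, 4.0, 3.0, 3.0])
A_c, b_c = aggregate_constraints(A, b, groups=[[0, 1], [2, 3]])
```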
Damage Mechanics in the Community Ice Sheet Model
NASA Astrophysics Data System (ADS)
Whitcomb, R.; Cathles, L. M. M., IV; Bassis, J. N.; Lipscomb, W. H.; Price, S. F.
2016-12-01
Half of the mass that floating ice shelves lose to the ocean comes from iceberg calving, which is a difficult process to simulate accurately. This is especially true in the large-scale ice dynamics models that couple changes in the cryosphere to climate projections. Damage mechanics provide a powerful technique with the potential to overcome this obstacle by describing how fractures in ice evolve over time. Here, we demonstrate the application of a damage model to ice shelves that predicts realistic geometries. We incorporated this solver into the Community Ice Sheet Model, a three dimensional ice sheet model developed at Los Alamos National Laboratory. The damage mechanics formulation that we use comes from a first principles-based evolution law for the depth of basal and surface crevasses and depends on the large scale strain rate, stress state, and basal melt. We show that under idealized conditions it produces ice tongue lengths that match well with observations for a selection of natural ice tongues, including Erebus, Drygalski, and Pine Island in Antarctica, as well as Petermann in Greenland. We also apply the model to more generalized ideal ice shelf geometries and show that it produces realistic calving front positions. Although our results are preliminary, the damage mechanics model that we developed provides a promising first principles method for predicting ice shelf extent and how the calving margins of ice shelves respond to climate change.
NASA Astrophysics Data System (ADS)
Hu, J.; Zhang, R.; Wang, Y.; Ming, Y.; Lin, Y.; Pan, B.
2015-12-01
Aerosols can alter atmospheric radiation and cloud physics, which further exert impacts on weather and global climate. With the development and industrialization of developing Asian countries, anthropogenic aerosols have received considerable attention and remain the largest uncertainty in climate projections. Here we assess the performance of two state-of-the-art global climate models (the National Center for Atmospheric Research Community Atmosphere Model 5 (CAM5) and the Geophysical Fluid Dynamics Laboratory Atmosphere Model 3 (AM3)) in simulating the impacts of anthropogenic aerosols on the North Pacific storm track region. By contrasting two aerosol scenarios, i.e. present day (PD) and pre-industrial (PI), both models show aerosol optical depth (AOD) enhanced by about 22%, with the CAM5 AOD 40% lower in magnitude due to the long-range transport of anthropogenic aerosols. Aerosol effects on the ice water path (IWP), stratiform precipitation, convergence, and convection strengths in the two models are distinctive in patterns and magnitudes. AM3 shows qualitatively good agreement with long-term satellite observations, while CAM5 overestimates convection and liquid water path, resulting in an underestimation of large-scale precipitation and IWP. Due to coarse resolution and parameterization in convection schemes, both models' performance on convection needs to be improved. Aerosol effects on the large-scale circulation and radiative budget are also examined in this study.
NASA Technical Reports Server (NTRS)
Van Norman, John W.; Dyakonov, Artem; Schoenenberger, Mark; Davis, Jody; Muppidi, Suman; Tang, Chun; Bose, Deepak; Mobley, Brandon; Clark, Ian
2015-01-01
An overview of pre-flight aerodynamic models for the Low Density Supersonic Decelerator (LDSD) Supersonic Flight Dynamics Test (SFDT) campaign is presented, with comparisons to reconstructed flight data and discussion of model updates. The SFDT campaign objective is to test Supersonic Inflatable Aerodynamic Decelerator (SIAD) and large supersonic parachute technologies at high altitude Earth conditions relevant to entry, descent, and landing (EDL) at Mars. Nominal SIAD test conditions are attained by lifting a test vehicle (TV) to 36 km altitude with a large helium balloon, then accelerating the TV to Mach 4 and 53 km altitude with a solid rocket motor. The first flight test (SFDT-1) delivered a 6 meter diameter robotic mission class decelerator (SIAD-R) to several seconds of flight on June 28, 2014, and was successful in demonstrating the SFDT flight system concept and SIAD-R. The trajectory was off-nominal, however, lofting to over 8 km higher than predicted in flight simulations. Comparisons between reconstructed flight data and aerodynamic models show that SIAD-R aerodynamic performance was in good agreement with pre-flight predictions. Similar comparisons of powered ascent phase aerodynamics show that the pre-flight model overpredicted TV pitch stability, leading to underprediction of trajectory peak altitude. Comparisons between pre-flight aerodynamic models and reconstructed flight data are shown, and changes to aerodynamic models using improved fidelity and knowledge gained from SFDT-1 are discussed.
The impact of accelerating faster than exponential population growth on genetic variation.
Reppell, Mark; Boehnke, Michael; Zöllner, Sebastian
2014-03-01
Current human sequencing projects observe an abundance of extremely rare genetic variation, suggesting recent acceleration of population growth. To better understand the impact of such accelerating growth on the quantity and nature of genetic variation, we present a new class of models capable of incorporating faster than exponential growth in a coalescent framework. Our work shows that such accelerated growth affects only the population size in the recent past and thus large samples are required to detect the models' effects on patterns of variation. When we compare models with fixed initial growth rate, models with accelerating growth achieve very large current population sizes and large samples from these populations contain more variation than samples from populations with constant growth. This increase is driven almost entirely by an increase in singleton variation. Moreover, linkage disequilibrium decays faster in populations with accelerating growth. When we instead condition on current population size, models with accelerating growth result in less overall variation and slower linkage disequilibrium decay compared to models with exponential growth. We also find that pairwise linkage disequilibrium of very rare variants contains information about growth rates in the recent past. Finally, we demonstrate that models of accelerating growth may substantially change estimates of present-day effective population sizes and growth times.
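As a rough illustration of what "faster than exponential" means here, one can compare a super-exponential population curve against plain exponential growth; the functional form and parameters below are invented for illustration and are not the paper's coalescent parameterization.

```python
import numpy as np

# Super-exponential growth N(t) = N0 * exp(r * t**(1 + accel)) versus
# plain exponential growth (accel = 0).  Illustrative values only.
def pop_size(t, N0=10_000.0, r=0.05, accel=0.5):
    return N0 * np.exp(r * t ** (1.0 + accel))

t = np.linspace(0.0, 100.0, 101)       # time in generations
N_accel = pop_size(t, accel=0.5)
N_exp = pop_size(t, accel=0.0)
# The two curves are nearly identical early on and diverge strongly late,
# matching the abstract's point that acceleration mainly reshapes the
# recent past of the population history.
```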
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koll, Daniel D. B.; Abbot, Dorian S., E-mail: dkoll@uchicago.edu
Next-generation space telescopes will observe the atmospheres of rocky planets orbiting nearby M-dwarfs. Understanding these observations will require well-developed theory in addition to numerical simulations. Here we present theoretical models for the temperature structure and atmospheric circulation of dry, tidally locked rocky exoplanets with gray radiative transfer and test them using a general circulation model (GCM). First, we develop a radiative-convective (RC) model that captures surface temperatures of slowly rotating and cool atmospheres. Second, we show that the atmospheric circulation acts as a global heat engine, which places strong constraints on large-scale wind speeds. Third, we develop an RC-subsiding model, which extends our RC model to hot and thin atmospheres. We find that rocky planets develop large day–night temperature gradients at a ratio of wave-to-radiative timescales up to two orders of magnitude smaller than the value suggested by work on hot Jupiters. The small ratio is due to the heat engine inefficiency and asymmetry between updrafts and subsidence in convecting atmospheres. Fourth, we show, using GCM simulations, that rotation only has a strong effect on temperature structure if the atmosphere is hot or thin. Our models let us map out atmospheric scenarios for planets such as GJ 1132b and show how thermal phase curves could constrain them. Measuring phase curves of short-period planets will require similar amounts of time on the James Webb Space Telescope as detecting molecules via transit spectroscopy, so future observations should pursue both techniques.
Startup analysis for a high temperature gas loaded heat pipe
NASA Technical Reports Server (NTRS)
Sockol, P. M.
1973-01-01
A model for the rapid startup of a high-temperature gas-loaded heat pipe is presented. A two-dimensional diffusion analysis is used to determine the rate of energy transport by the vapor between the hot and cold zones of the pipe. The vapor transport rate is then incorporated in a simple thermal model of the startup of a radiation-cooled heat pipe. Numerical results for an argon-lithium system show that radial diffusion to the cold wall can produce large vapor flow rates during a rapid startup. The results also show that startup is not initiated until the vapor pressure p_v in the hot zone reaches a precise value proportional to the initial gas pressure p_i. Through proper choice of p_i, startup can be delayed until p_v is large enough to support a heat-transfer rate sufficient to overcome a thermal load on the heat pipe.
NASA Technical Reports Server (NTRS)
Soula, Serge
1994-01-01
The evolution of the vertical electric field profile deduced from simultaneous field measurements at several levels below a thundercloud shows the development of a space charge layer at least up to 600 m. The average charge density in the whole layer from 0 m to 600 m can reach about 1 nC m⁻³. The ions are generated at the ground by the corona effect, and the production rate is evaluated with a new method from the comparison of field evolutions at the ground and at altitude after a lightning flash. The modeling of the relevant processes shows that ground corona accounts for the observed field evolutions and that the aerosol particle concentration has a very large effect on the evolution of corona ions. However, with a realistic value for this concentration, a large amount of ground corona ions reach the level of 600 m.
Spatial organization of foreshocks as a tool to forecast large earthquakes.
Lippiello, E; Marzocchi, W; de Arcangelis, L; Godano, C
2012-01-01
An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04° × 0.04°), with significant probability gains with respect to standard models.
Spatial organization of foreshocks as a tool to forecast large earthquakes
Lippiello, E.; Marzocchi, W.; de Arcangelis, L.; Godano, C.
2012-01-01
An increase in the number of smaller magnitude events, retrospectively named foreshocks, is often observed before large earthquakes. We show that the linear density probability of earthquakes occurring before and after small or intermediate mainshocks displays a symmetrical behavior, indicating that the size of the area fractured during the mainshock is encoded in the foreshock spatial organization. This observation can be used to discriminate spatial clustering due to foreshocks from that induced by aftershocks and is implemented in an alarm-based model to forecast m > 6 earthquakes. A retrospective study of the last 19 years of the Southern California catalog shows that the daily occurrence probability presents isolated peaks closely located in time and space to the epicenters of five of the six m > 6 earthquakes. We find daily probabilities as high as 25% (in cells of size 0.04° × 0.04°), with significant probability gains with respect to standard models. PMID:23152938
Anomalous Impact in Reaction-Diffusion Financial Models
NASA Astrophysics Data System (ADS)
Mastromatteo, I.; Tóth, B.; Bouchaud, J.-P.
2014-12-01
We generalize the reaction-diffusion model A + B → ∅ in order to study the impact of an excess of A (or B) at the reaction front. We provide an exact solution of the model, which shows that the linear response breaks down: the average displacement of the reaction front grows as the square root of the imbalance. We argue that this model provides a highly simplified but generic framework to understand the square-root impact of large orders in financial markets.
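A mean-field numerical sketch of the generalized model: integrate the coupled reaction-diffusion equations with a small excess of A and watch the front drift into the B side. Grid, rates, and the 10% imbalance are arbitrary illustration choices; the paper's result is an exact solution, not this numeric experiment.

```python
import numpy as np

# Mean-field A + B -> 0 front:  da/dt = D a'' - k a b,  db/dt = D b'' - k a b.
L, n, D, kr, dt = 1.0, 400, 1e-4, 50.0, 1e-3
dx = L / n
x = np.linspace(0.0, L, n)
a = np.where(x < 0.5, 1.1, 0.0)    # A on the left, with a 10% excess
b = np.where(x >= 0.5, 1.0, 0.0)   # B on the right

def lap(u):
    out = np.zeros_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    return out

for _ in range(20_000):
    r = kr * a * b                 # local annihilation rate
    a += dt * (D * lap(a) - r)
    b += dt * (D * lap(b) - r)

front = x[np.argmin(np.abs(a - b))]  # front position drifts into the B side
```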
NASA Technical Reports Server (NTRS)
Walker, Raymond J.; Ogino, Tatsuki
1988-01-01
A time-dependent three-dimensional MHD model was used to investigate the magnetospheric configuration as a function of the interplanetary magnetic field direction when it was in the y-z plane in geocentric solar magnetospheric coordinates. The model results show large global convection cells, tail lobe cells, high-latitude polar-cap cells, and low-latitude cells. The field-aligned currents generated in the model magnetosphere and the model convection system are compared with observations from low-altitude polar-orbiting satellites.
Simulating a Skilled Typist: A Study of Skilled Cognitive-Motor Performance.
1981-05-01
points out, such behavior is to be expected from a metronome model of typing in which the typist initiates a stroke regularly to some sort of...long. As we show, this behavior is also to be expected from models not involving such an internal clock. All other things being equal, the model...behavior actually engaged in by expert typists. The Units of Typing Seem to Be Largely at the Word Level or Smaller The units of typing in our model are
Development and Application of a Process-based River System Model at a Continental Scale
NASA Astrophysics Data System (ADS)
Kim, S. S. H.; Dutta, D.; Vaze, J.; Hughes, J. D.; Yang, A.; Teng, J.
2014-12-01
Existing global and continental scale river models, mainly designed for integration with global climate models, have very coarse spatial resolutions and lack many important hydrological processes, such as overbank flow, irrigation diversion, and groundwater seepage/recharge, which operate at much finer resolutions. Thus, these models are not suitable for producing streamflow forecasts at fine spatial resolution or water accounts at sub-catchment levels, which are important for water resources planning and management at regional and national scales. A large-scale river system model has been developed and implemented for water accounting in Australia as part of the Water Information Research and Development Alliance between Australia's Bureau of Meteorology (BoM) and CSIRO. The model, developed using a node-link architecture, includes all major hydrological processes, anthropogenic water utilisation, and storage routing that influence streamflow in both regulated and unregulated river systems. It includes an irrigation model to compute water diversion for irrigation use and the associated fluxes and stores, and a storage-based floodplain inundation model to compute overbank flow from river to floodplain and the associated floodplain fluxes and stores. An auto-calibration tool has been built within the modelling system to automatically calibrate the model in large river systems using the Shuffled Complex Evolution optimiser and user-defined objective functions. The auto-calibration tool makes the model computationally efficient and practical for large basin applications. The model has been implemented in several large basins in Australia, including the Murray-Darling Basin, covering more than 2 million km2. The results of calibration and validation of the model show highly satisfactory performance. The model has been operationalised in BoM for producing various fluxes and stores for national water accounting. This paper introduces this newly developed river system model, describing the conceptual hydrological framework, the methods used for representing different hydrological processes in the model, and the results and evaluation of the model performance. The operational implementation of the model for water accounting is discussed.
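A minimal sketch of storage routing through a node-link chain of the kind described above, with each link treated as a linear reservoir (dS/dt = I − S/k); the reach count, residence times, and inflow pulse are invented for illustration and are not the model's actual configuration.

```python
import numpy as np

# Storage routing along a chain of links, each a linear reservoir with
# residence time k (days); outflow of one link is inflow to the next.
n_links = 5
k = np.array([2.0, 3.0, 2.5, 4.0, 3.0])   # residence times (days)
dt = 0.25                                  # time step (days)
S = np.zeros(n_links)                      # storage per link
hydrograph = []

for t in np.arange(0.0, 30.0, dt):
    inflow = 100.0 if 2.0 <= t < 4.0 else 5.0   # upstream pulse (m3/s)
    for i in range(n_links):
        out = S[i] / k[i]                  # linear-reservoir outflow
        S[i] += dt * (inflow - out)
        inflow = out                       # feeds the next link downstream
    hydrograph.append(inflow)              # discharge at the basin outlet
# The pulse attenuates and lags as it propagates downstream.
```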
Weak hydrological sensitivity to temperature change over land, independent of climate forcing
NASA Astrophysics Data System (ADS)
Samset, Bjorn H.
2017-04-01
As the global surface temperature changes, so will patterns and rates of precipitation. Theoretically, these changes can be understood in terms of changes to the energy balance of the atmosphere, caused by introducing drivers of climate change such as greenhouse gases, aerosols and altered insolation. Climate models, however, disagree strongly in their prediction of precipitation changes, both for historical and future emission pathways, and per degree of surface warming in idealized experiments. The latter value, often termed the apparent hydrological sensitivity, has also been found to differ substantially between climate drivers. Here, we present the global and regional hydrological sensitivity (HS) to surface temperature changes, for perturbations to CO2, CH4, sulfate and black carbon concentrations, and solar irradiance. Based on results from 10 climate models participating in the Precipitation Driver and Response Model Intercomparison Project (PDRMIP), we show how modeled global mean precipitation increases by 2-3 % per kelvin of global mean surface warming, independent of driver, when the effects of rapid adjustments are removed. Previously reported differences in response between drivers are therefore mainly ascribable to rapid atmospheric adjustment processes. All models show a sharp contrast in behavior over land and over ocean, with a strong surface temperature driven (slow) ocean HS of 3-5 %/K, while the slow land HS is only 0-2 %/K. Separating the response into convective and large-scale cloud processes, we find larger inter-model differences, in particular over land regions. Large-scale precipitation changes are most relevant at high latitudes, while the equatorial HS is dominated by convective precipitation changes. Black carbon stands out as the driver with the largest inter-model slow HS variability, and also the strongest contrast between a weak land and strong sea response. Convective precipitation in the Arctic and large scale precipitation around the Equator are found to be topics where further model investigations and observational constraints may provide rapid improvements to modelling of the precipitation response to future, CO2 dominated climate change.
Fog Simulations Based on Multi-Model System: A Feasibility Study
NASA Astrophysics Data System (ADS)
Shi, Chune; Wang, Lei; Zhang, Hao; Zhang, Su; Deng, Xueliang; Li, Yaosun; Qiu, Mingyan
2012-05-01
Accurate forecasts of fog and visibility are very important to air and highway traffic, and are still a big challenge. A 1D fog model (PAFOG) is coupled to MM5 by obtaining the initial and boundary conditions (IC/BC) and some other necessary input parameters from MM5. Thus, PAFOG can be run for any area of interest. On the other hand, MM5 itself can be used to simulate fog events over a large domain. This paper presents evaluations of the fog predictability of these two systems for December of 2006 and December of 2007, with nine regional fog events observed in a field experiment, as well as over a large domain in eastern China. Among the simulations of the nine fog events by the two systems, two cases were investigated in detail. Daily results of ground-level meteorology were validated against the routine observations of the CMA observational network. Daily fog occurrences for the two study periods were validated in Nanjing. The general performance of the two models for the nine fog cases is presented by comparison with routine and field observational data. The results of MM5 and PAFOG for two typical fog cases are verified in detail against field observations. The verifications demonstrated that all methods tended to overestimate fog occurrence, especially for near-fog cases. In terms of TS/ETS, the LWC-only threshold with MM5 showed the best performance, while PAFOG showed the worst. MM5 performed better for advection-radiation fog than for radiation fog, and PAFOG could be an alternative tool for forecasting radiation fogs. PAFOG did show advantages over MM5 on the fog dissipation time. The performance of PAFOG depended strongly on the quality of the MM5 output. Sensitivity runs of PAFOG with different IC/BC showed the feasibility of using MM5 output to drive the 1D model and the high sensitivity of PAFOG to cloud cover. Future work should focus on improving the quality of input data (e.g., cloud cover, advection, large-scale subsidence) for the 1D model, particularly on how to eliminate near-fog cases in fog forecasting.
Five-Photon Absorption and Selective Enhancement of Multiphoton Absorption Processes
2015-01-01
We study one-, two-, three-, four-, and five-photon absorption of three centrosymmetric molecules using density functional theory. These calculations are the first ab initio calculations of five-photon absorption. Even- and odd-order absorption processes show different trends in the absorption cross sections. The behavior of all even- and odd-photon absorption properties shows a semiquantitative similarity, which can be explained using few-state models. This analysis shows that odd-photon absorption processes are largely determined by the one-photon absorption strength, whereas all even-photon absorption strengths are largely dominated by the two-photon absorption strength, in both cases modulated by powers of the polarizability of the final excited state. We demonstrate how to selectively enhance a specific multiphoton absorption process. PMID:26120588
Five-Photon Absorption and Selective Enhancement of Multiphoton Absorption Processes.
Friese, Daniel H; Bast, Radovan; Ruud, Kenneth
2015-05-20
We study one-, two-, three-, four-, and five-photon absorption of three centrosymmetric molecules using density functional theory. These calculations are the first ab initio calculations of five-photon absorption. Even- and odd-order absorption processes show different trends in the absorption cross sections. The behavior of all even- and odd-photon absorption properties shows a semiquantitative similarity, which can be explained using few-state models. This analysis shows that odd-photon absorption processes are largely determined by the one-photon absorption strength, whereas all even-photon absorption strengths are largely dominated by the two-photon absorption strength, in both cases modulated by powers of the polarizability of the final excited state. We demonstrate how to selectively enhance a specific multiphoton absorption process.
Phase-locked patterns of the Kuramoto model on 3-regular graphs
NASA Astrophysics Data System (ADS)
DeVille, Lee; Ermentrout, Bard
2016-09-01
We consider the existence of non-synchronized fixed points to the Kuramoto model defined on sparse networks: specifically, networks where each vertex has degree exactly three. We show that "most" such networks support multiple attracting phase-locked solutions that are not synchronized and study the depth and width of the basins of attraction of these phase-locked solutions. We also show that it is common in "large enough" graphs to find phase-locked solutions where one or more of the links have angle difference greater than π/2.
Phase-locked patterns of the Kuramoto model on 3-regular graphs.
DeVille, Lee; Ermentrout, Bard
2016-09-01
We consider the existence of non-synchronized fixed points to the Kuramoto model defined on sparse networks: specifically, networks where each vertex has degree exactly three. We show that "most" such networks support multiple attracting phase-locked solutions that are not synchronized and study the depth and width of the basins of attraction of these phase-locked solutions. We also show that it is common in "large enough" graphs to find phase-locked solutions where one or more of the links have angle difference greater than π/2.
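A minimal numerical experiment in the spirit of both records above: relax identical-frequency Kuramoto dynamics on a random 3-regular graph and inspect the edge angle differences of the resulting phase-locked state. Sizes, seeds, and tolerances are arbitrary.

```python
import numpy as np
import networkx as nx
from scipy.integrate import solve_ivp

n = 30
G = nx.random_regular_graph(3, n, seed=1)     # every vertex has degree 3
A = nx.to_numpy_array(G)
rng = np.random.default_rng(1)
theta0 = rng.uniform(0.0, 2.0 * np.pi, n)

def kuramoto(t, theta):
    # d(theta_i)/dt = sum_j A_ij sin(theta_j - theta_i)
    diff = theta[None, :] - theta[:, None]
    return (A * np.sin(diff)).sum(axis=1)

sol = solve_ivp(kuramoto, (0.0, 500.0), theta0, rtol=1e-8, atol=1e-8)
theta = sol.y[:, -1]

# Angle differences across edges; a non-synchronized lock can have some
# |difference| > pi/2, as in the paper's "large enough" graphs.
edges = np.array(list(G.edges()))
gaps = np.angle(np.exp(1j * (theta[edges[:, 0]] - theta[edges[:, 1]])))
print("max |edge angle difference|:", np.abs(gaps).max())
```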
NASA Technical Reports Server (NTRS)
Burris, John; McGee, Thomas J.; Hoegy, Walt; Lait, Leslie; Sumnicht, Grant; Twigg, Larry; Heaps, William
2000-01-01
Temperature profiles acquired by Goddard Space Flight Center's AROTEL lidar during the SOLVE mission onboard NASA's DC-8 are compared with predicted values from several atmospheric models (DAO, NCEP and UKMO). The variability in the differences between measured and calculated temperature fields was approximately 5 K. Retrieved temperatures within the polar vortex showed large regions that were significantly colder than predicted by the atmospheric models.
Improving Domain-specific Machine Translation by Constraining the Language Model
2012-07-01
performance. To make up for the lack of parallel training data, one assumption is that more monolingual target language data should be used in building the...target language model. Prior work on domain-specific MT has focused on training target language models with monolingual domain-specific data...showed that using a large dictionary extracted from medical domain documents in a statistical MT system to generalize the training data significantly
A first large-scale flood inundation forecasting model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie
2013-11-04
At present, continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.
Future constraints on angle-dependent non-Gaussianity from large radio surveys
NASA Astrophysics Data System (ADS)
Raccanelli, Alvise; Shiraishi, Maresuke; Bartolo, Nicola; Bertacca, Daniele; Liguori, Michele; Matarrese, Sabino; Norris, Ray P.; Parkinson, David
2017-03-01
We investigate how well future large-scale radio surveys could measure different shapes of primordial non-Gaussianity; in particular we focus on angle-dependent non-Gaussianity arising from primordial anisotropic sources, whose bispectrum has an angle dependence between the three wavevectors that is characterized by Legendre polynomials P_L and expansion coefficients c_L. We provide forecasts for measurements of the galaxy power spectrum, finding that Large-Scale Structure (LSS) data could allow measurements of primordial non-Gaussianity that would be competitive with, or improve upon, current constraints set by CMB experiments, for all the shapes considered. We argue that the best constraints will come from the possibility to assign redshift information to radio galaxy surveys, and investigate a few possible scenarios for the EMU and SKA surveys. A realistic (futuristic) modeling could provide constraints of fNL^loc ≈ 1 (0.5) for the local shape, fNL of O(10) (O(1)) for the orthogonal, equilateral and folded shapes, and c_{L=1} ≈ 80 (2), c_{L=2} ≈ 400 (10) for angle-dependent non-Gaussianity, showing that only futuristic galaxy surveys will be able to set strong constraints on these models. Nevertheless, the more futuristic forecasts show the potential of LSS analyses to considerably improve current constraints on non-Gaussianity, and so on models of the primordial Universe. Finally, we find the minimum requirements that would be needed to reach σ(c_{L=1}) = 10, which can be considered as a typical (lower) value predicted by some (inflationary) models.
Miklós, István
2009-01-01
Homologous genes originate from a common ancestor through vertical inheritance, duplication, or horizontal gene transfer. Entire homolog families spawned by a single ancestral gene can be identified across multiple genomes based on protein sequence similarity. The sequences, however, do not always reveal conclusively the history of large families. To study the evolution of complete gene repertoires, we propose here a mathematical framework that does not rely on resolved gene family histories. We show that so-called phylogenetic profiles, formed by family sizes across multiple genomes, are sufficient to infer principal evolutionary trends. The main novelty in our approach is an efficient algorithm to compute the likelihood of a phylogenetic profile in a model of birth-and-death processes acting on a phylogeny. We examine known gene families in 28 archaeal genomes using a probabilistic model that involves lineage- and family-specific components of gene acquisition, duplication, and loss. The model enables us to consider all possible histories when inferring statistics about archaeal evolution. According to our reconstruction, most lineages are characterized by a net loss of gene families. Major increases in gene repertoire have occurred only a few times. Our reconstruction underlines the importance of persistent streamlining processes in shaping genome composition in Archaea. It also suggests that early archaeal genomes were as complex as typical modern ones, and even show signs, in the case of the methanogenic ancestor, of an extremely large gene repertoire. PMID:19570746
Development of a 3D Stream Network and Topography for Improved Large-Scale Hydraulic Modeling
NASA Astrophysics Data System (ADS)
Saksena, S.; Dey, S.; Merwade, V.
2016-12-01
Most digital elevation models (DEMs) used for hydraulic modeling do not include channel bed elevations. As a result, the DEMs are complemented with additional bathymetric data for accurate hydraulic simulations. Existing methods to acquire bathymetric information through field surveys or through conceptual models are limited to reach-scale applications. With an increasing focus on large scale hydraulic modeling of rivers, a framework to estimate and incorporate bathymetry for an entire stream network is needed. This study proposes an interpolation-based algorithm to estimate bathymetry for a stream network by modifying the reach-based empirical River Channel Morphology Model (RCMM). The effect of a 3D stream network that includes river bathymetry is then investigated by creating a 1D hydraulic model (HEC-RAS) and a 2D hydrodynamic model (Integrated Channel and Pond Routing) for the Upper Wabash River Basin in Indiana, USA. Results show improved simulation of flood depths and storage in the floodplain. Similarly, the impact of incorporating river bathymetry is more significant in the 2D model than in the 1D model.
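The flavor of the conceptual bathymetry step can be shown with a toy cross-section generator: impose a parabolic bed, deepest at the thalweg, beneath a flat channel mask. Bank elevation and depth below are illustrative; the actual RCMM-based algorithm interpolates channel properties along the whole network.

```python
import numpy as np

def parabolic_bed(bank_elev, max_depth, n_pts=11):
    """Bed elevations (m) across a channel with the given bank elevation
    and thalweg depth, sampled at n_pts stations across the width.  The
    parabola is deepest at the channel centerline."""
    s = np.linspace(-1.0, 1.0, n_pts)      # normalized cross-channel axis
    return bank_elev - max_depth * (1.0 - s**2)

# Illustrative cross-section: banks at 120 m, thalweg 4.5 m below them.
section = parabolic_bed(bank_elev=120.0, max_depth=4.5)
```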
On the Subgrid-Scale Modeling of Compressible Turbulence
NASA Technical Reports Server (NTRS)
Squires, Kyle; Zeman, Otto
1990-01-01
A new sub-grid scale model is presented for the large-eddy simulation of compressible turbulence. In the proposed model, compressibility contributions have been incorporated in the sub-grid scale eddy viscosity, which, in the incompressible limit, reduces to a form originally proposed by Smagorinsky (1963). The model has been tested against a simple extension of the traditional Smagorinsky eddy viscosity model using simulations of decaying, compressible homogeneous turbulence. Simulation results show that the proposed model provides greater dissipation of the compressive modes of the resolved-scale velocity field than does the Smagorinsky eddy viscosity model. For an initial r.m.s. turbulence Mach number of 1.0, simulations performed using the Smagorinsky model become physically unrealizable (i.e., negative energies) because of the inability of the model to sufficiently dissipate fluctuations due to resolved-scale velocity dilations. The proposed model is able to provide the necessary dissipation of this energy and maintain the realizability of the flow. Following Zeman (1990), turbulent shocklets are considered to dissipate energy independently of the Kolmogorov energy cascade. A possible parameterization of dissipation by turbulent shocklets for large-eddy simulation is also presented.
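For context, the incompressible-limit form the model reduces to is the classical Smagorinsky eddy viscosity ν_t = (C_s Δ)² |S|, with |S| = sqrt(2 S_ij S_ij). A generic sketch on a uniform grid follows; the compressibility corrections of the proposed model are not included, and the grid spacing and constant are illustrative.

```python
import numpy as np

def smagorinsky_nu_t(u, v, w, dx, Cs=0.17):
    """Classical Smagorinsky eddy viscosity on a uniform grid:
    nu_t = (Cs * dx)**2 * sqrt(2 * S_ij * S_ij), where
    S_ij = 0.5 * (du_i/dx_j + du_j/dx_i)."""
    grads = [np.gradient(f, dx) for f in (u, v, w)]  # grads[i][j] = du_i/dx_j
    S2 = 0.0
    for i in range(3):
        for j in range(3):
            Sij = 0.5 * (grads[i][j] + grads[j][i])
            S2 += 2.0 * Sij * Sij
    return (Cs * dx) ** 2 * np.sqrt(S2)

# Example on a synthetic 16^3 velocity field.
rng = np.random.default_rng(0)
u, v, w = (rng.standard_normal((16, 16, 16)) for _ in range(3))
nu_t = smagorinsky_nu_t(u, v, w, dx=0.1)
```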
Wei, Jiangyong; Hu, Xiaohua; Zou, Xiufen; Tian, Tianhai
2017-12-28
Recent advances in omics technologies have raised great opportunities to study large-scale regulatory networks inside the cell. In addition, single-cell experiments have measured the gene and protein activities in a large number of cells under the same experimental conditions. However, a significant challenge in computational biology and bioinformatics is how to derive quantitative information from the single-cell observations and how to develop sophisticated mathematical models to describe the dynamic properties of regulatory networks using the derived quantitative information. This work designs an integrated approach to reverse-engineer gene networks for regulating early blood development based on single-cell experimental observations. The wanderlust algorithm is initially used to develop the pseudo-trajectory for the activities of a number of genes. Since the gene expression data in the developed pseudo-trajectory show large fluctuations, we then use Gaussian process regression methods to smooth the gene expression data in order to obtain pseudo-trajectories with much smaller fluctuations. The proposed integrated framework consists of both bioinformatics algorithms to reconstruct the regulatory network and mathematical models using differential equations to describe the dynamics of gene expression. The developed approach is applied to study the network regulating early blood cell development. A graphic model is constructed for a regulatory network with forty genes, and a dynamic model using differential equations is developed for a network of nine genes. Numerical results suggest that the proposed model is able to match experimental data very well. We also examine networks with more regulatory relations, and numerical results show that more regulations may exist. We test the possibility of auto-regulation, but numerical simulations do not support positive auto-regulation. In addition, robustness is used as an important additional criterion to select candidate networks. The results of this work show that the developed approach is an efficient and effective method to reverse-engineer gene networks using single-cell experimental observations.
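The Gaussian-process smoothing step can be sketched with standard tools; a minimal example with scikit-learn is below, using synthetic data in place of the wanderlust-derived pseudo-trajectory (the kernel choice and scales are assumptions, not the authors' settings).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Pseudo-time ordering from a trajectory-inference step (e.g. wanderlust)
# and a noisy expression profile for one gene along that ordering.
rng = np.random.default_rng(0)
pseudotime = np.linspace(0, 1, 200)[:, None]
expression = np.sin(3 * pseudotime).ravel() + 0.3 * rng.standard_normal(200)

# The RBF kernel captures the smooth trend; WhiteKernel absorbs fluctuations.
kernel = 1.0 * RBF(length_scale=0.1) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(pseudotime, expression)
smoothed, std = gpr.predict(pseudotime, return_std=True)
```

The smoothed curve (plus its uncertainty band) then serves as the low-noise pseudo-trajectory fed to the downstream ODE model.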
Malignant infarction of the middle cerebral artery in a porcine model. A pilot study
Martínez-Valverde, Tamara; Sánchez-Guerrero, Ángela; Campos, Mireia; Esteves, Marielle; Gandara, Dario; Torné, Ramon; Castro, Lidia; Dalmau, Antoni; Tibau, Joan
2017-01-01
Background and purpose Interspecies variability and poor clinical translation from rodent studies indicate that large gyrencephalic animal stroke models are urgently needed. We present a proof-of-principle study describing an alternative animal model of malignant infarction of the middle cerebral artery (MCA) in the common pig and illustrate some of its potential applications. We report on metabolic patterns, ionic profile, brain partial pressure of oxygen (PtiO2), expression of sulfonylurea receptor 1 (SUR1), and the transient receptor potential melastatin 4 (TRPM4). Methods A 5-hour ischemic infarct of the MCA territory was performed in five 2.5- to 3-month-old female hybrid pigs (Large White x Landrace) using a frontotemporal approach. The core and penumbra areas were intraoperatively monitored to determine the metabolic and ionic profiles. Infarct volume was determined by 2,3,5-triphenyltetrazolium chloride staining, and immunohistochemistry analysis was performed to determine SUR1 and TRPM4 expression. Results PtiO2 monitoring showed an abrupt reduction in values close to 0 mmHg after MCA occlusion in the core area. Hourly cerebral microdialysis showed that the infarcted tissue was characterized by reduced concentrations of glucose (0.03 mM) and pyruvate (0.003 mM) and increases in lactate levels (8.87 mM), lactate-pyruvate ratio (4202), glycerol levels (588 μM), and potassium concentration (27.9 mmol/L). Immunohistochemical analysis showed increased expression of SUR1-TRPM4 channels. Conclusions The aim of the present proof-of-principle study was to document the feasibility of a large animal model of malignant MCA infarction by performing transcranial occlusion of the MCA in the common pig, as an alternative to lissencephalic animals. This model may be useful for detailed studies of cerebral ischemia mechanisms and the development of neuroprotective strategies. PMID:28235044
NASA Astrophysics Data System (ADS)
de Simone, Andrea; Franceschini, Roberto; Giudice, Gian Francesco; Pappadopulo, Duccio; Rattazzi, Riccardo
2011-05-01
It has been recently pointed out that the unavoidable tuning among supersymmetric parameters required to raise the Higgs boson mass beyond its experimental limit opens up new avenues for dealing with the so-called μ-Bμ problem of gauge mediation. In fact, it allows for accommodating, with no further parameter tuning, large values of Bμ and of the other Higgs-sector soft masses, as predicted in models where both μ and Bμ are generated at one-loop order. This class of models, called Lopsided Gauge Mediation, offers an interesting alternative to conventional gauge mediation and is characterized by a strikingly different phenomenology, with light higgsinos, very large Higgs pseudoscalar mass, and moderately light sleptons. We discuss general parametric relations involving the fine-tuning of the model and various observables such as the chargino mass and the value of tan β. We build an explicit model and we study the constraints coming from LEP and Tevatron. We show that in spite of new interactions between the Higgs and the messenger superfields, the theory can remain perturbative up to very large scales, thus retaining gauge coupling unification.
Small-Scale Drop-Size Variability: Empirical Models for Drop-Size-Dependent Clustering in Clouds
NASA Technical Reports Server (NTRS)
Marshak, Alexander; Knyazikhin, Yuri; Larsen, Michael L.; Wiscombe, Warren J.
2005-01-01
By analyzing aircraft measurements of individual drop sizes in clouds, it has been shown in a companion paper that the probability of finding a drop of radius r at a linear scale l decreases as l^D(r), where 0 ≤ D(r) ≤ 1. This paper shows striking examples of the spatial distribution of large cloud drops using models that simulate the observed power laws. In contrast to currently used models that assume homogeneity and a Poisson distribution of cloud drops, these models illustrate strong drop clustering, especially with larger drops. The degree of clustering is determined by the observed exponents D(r). The strong clustering of large drops arises naturally from the observed power-law statistics. This clustering has vital consequences for rain physics, including how fast rain can form. For radiative transfer theory, clustering of large drops enhances their impact on the cloud optical path. The clustering phenomenon also helps explain why remotely sensed cloud drop size is generally larger than that measured in situ.
A spatial age-structured model for describing sea lamprey (Petromyzon marinus) population dynamics
Robinson, Jason M.; Wilberg, Michael J.; Adams, Jean V.; Jones, Michael L.
2013-01-01
The control of invasive sea lampreys (Petromyzon marinus) presents large-scale management challenges in the Laurentian Great Lakes. No modeling approach has been developed that describes the spatial dynamics of lamprey populations. We developed and validated a spatial, age-structured model and applied it to a sea lamprey population in a large river in the Great Lakes basin. We considered 75 discrete spatial areas and included a stock-recruitment function, spatial recruitment patterns, natural mortality, chemical treatment mortality, and larval metamorphosis. Recruitment was variable, and an upstream shift in recruitment location was observed over time. From 1993 to 2011, recruitment, larval abundance, and the abundance of metamorphosing individuals decreased by 80, 84, and 86%, respectively. The model successfully identified areas of high larval abundance and showed that areas of low larval density contribute significantly to the population. Estimated treatment mortality was less than expected but had a large population-level impact. The results and general approach of this work have applications for sea lamprey control throughout the Great Lakes and for the restoration and conservation of native lamprey species globally.
Sabatino, Denise E.; Nichols, Timothy C.; Merricks, Elizabeth; Bellinger, Dwight A.; Herzog, Roland W.; Monahan, Paul E.
2013-01-01
The X-linked bleeding disorder hemophilia is caused by mutations in coagulation factor VIII (hemophilia A) or factor IX (hemophilia B). Unless prophylactic treatment is provided, patients with severe disease (less than 1% clotting activity) typically experience frequent spontaneous bleeds. Current treatment is largely based on intravenous infusion of recombinant or plasma-derived coagulation factor concentrate. More effective factor products are being developed. Moreover, gene therapies for sustained correction of hemophilia are showing much promise in pre-clinical studies and in clinical trials. These advances in molecular medicine heavily depend on availability of well-characterized small and large animal models of hemophilia, primarily hemophilia mice and dogs. Experiments in these animals represent important early and intermediate steps of translational research aimed at development of better and safer treatments for hemophilia, such a protein and gene therapies or immune tolerance protocols. While murine models are excellent for studies of large groups of animals using genetically defined strains, canine models are important for testing scale-up and for longer-term follow-up as well as for studies that require larger blood volumes. PMID:22137432
Modeling CMB lensing cross correlations with CLEFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modi, Chirag; White, Martin; Vlah, Zvonimir, E-mail: modichirag@berkeley.edu, E-mail: mwhite@berkeley.edu, E-mail: zvlah@stanford.edu
2017-08-01
A new generation of surveys will soon map large fractions of sky to ever greater depths and their science goals can be enhanced by exploiting cross correlations between them. In this paper we study cross correlations between the lensing of the CMB and biased tracers of large-scale structure at high z. We motivate the need for more sophisticated bias models for modeling increasingly biased tracers at these redshifts and propose the use of perturbation theories, specifically Convolution Lagrangian Effective Field Theory (CLEFT). Since such signals reside at large scales and redshifts, they can be well described by perturbative approaches. We compare our model with the current approach of using scale-independent bias coupled with fitting functions for non-linear matter power spectra, showing that the latter will not be sufficient for upcoming surveys. We illustrate our ideas by estimating σ8 from the auto- and cross-spectra of mock surveys, finding that CLEFT returns accurate and unbiased results at high z. We discuss uncertainties due to the redshift distribution of the tracers, and several avenues for future development.
Hieu, Nguyen Trong; Brochier, Timothée; Tri, Nguyen-Huu; Auger, Pierre; Brehmer, Patrice
2014-09-01
We consider a fishery model with two sites: (1) a marine protected area (MPA) where fishing is prohibited and (2) an area where the fish population is harvested. We assume that fish can migrate from the MPA to the fishing area at a very fast time scale and that the fish spatial organisation can change from small to large clusters of schools at a fast time scale. The growth of the fish population and the catch are assumed to occur at a slow time scale. The complete model is a system of five ordinary differential equations with three time scales. We take advantage of the time scales, using aggregation of variables methods, to derive a reduced model governing the total fish density and fishing effort at the slow time scale. We analyze this aggregated model and show that under some conditions there exists an equilibrium corresponding to a sustainable fishery. Our results suggest that in small pelagic fisheries the yield is maximum for a fish population distributed among both small and large clusters of schools.
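A minimal sketch of the kind of aggregated model described here is given below: fast migration is assumed to fix the fraction of fish exposed to harvesting, leaving slow logistic growth of the total density n and revenue-driven effort E. All parameter values and the effort rule are illustrative assumptions, not the paper's calibrated system.

```python
from scipy.integrate import solve_ivp

# Hypothetical parameters for the reduced (aggregated) two-site fishery.
r, K = 0.5, 1.0                  # logistic growth of total fish density
q, price, cost = 0.8, 1.0, 0.2   # catchability, fish price, cost per unit effort
nu = 0.6                         # fast-equilibrium fraction of fish outside the MPA

def aggregated(t, y):
    n, E = y
    dn = r * n * (1.0 - n / K) - q * nu * n * E   # only the fished fraction is harvested
    dE = E * (price * q * nu * n - cost)          # effort follows net revenue
    return [dn, dE]

sol = solve_ivp(aggregated, (0.0, 300.0), [0.5, 0.1], max_step=0.5)
n_eq, E_eq = sol.y[0, -1], sol.y[1, -1]   # sustainable-fishery equilibrium
```

In this sketch, enlarging the MPA (lowering nu) raises the equilibrium stock n_eq at the cost of lower catch per unit effort, which is the trade-off the aggregated analysis makes explicit.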
A 2D nonlinear multiring model for blood flow in large elastic arteries
NASA Astrophysics Data System (ADS)
Ghigo, Arthur R.; Fullana, Jose-Maria; Lagrée, Pierre-Yves
2017-12-01
In this paper, we propose a two-dimensional nonlinear "multiring" model to compute blood flow in axisymmetric elastic arteries. This model is designed to overcome the numerical difficulties of three-dimensional fluid-structure interaction simulations of blood flow without using the over-simplifications necessary to obtain one-dimensional blood flow models. This multiring model is derived by integrating over concentric rings of fluid the simplified long-wave Navier-Stokes equations coupled to an elastic model of the arterial wall. The resulting system of balance laws provides a unified framework in which both the motion of the fluid and the displacement of the wall are dealt with simultaneously. The mathematical structure of the multiring model allows us to use a finite volume method that guarantees the conservation of mass and the positivity of the numerical solution and can deal with nonlinear flows and large deformations of the arterial wall. We show that the finite volume numerical solution of the multiring model provides at a reasonable computational cost an asymptotically valid description of blood flow velocity profiles and other averaged quantities (wall shear stress, flow rate, ...) in large elastic and quasi-rigid arteries. In particular, we validate the multiring model against well-known solutions such as the Womersley or the Poiseuille solutions as well as against steady boundary layer solutions in quasi-rigid constricted and expanded tubes.
Hippen, Keli L; Watkins, Benjamin; Tkachev, Victor; Lemire, Amanda M; Lehnen, Charles; Riddle, Megan J; Singh, Karnail; Panoskaltsis-Mortari, Angela; Vanhove, Bernard; Tolar, Jakub; Kean, Leslie S; Blazar, Bruce R
2016-12-01
Graft-versus-host disease (GVHD) is a severe complication of hematopoietic stem cell transplantation. Current therapies to prevent alloreactive T cell activation largely cause generalized immunosuppression and may result in adverse drug, antileukemia and antipathogen responses. Recently, several immunomodulatory therapeutics have been developed that show efficacy in maintaining antileukemia responses while inhibiting GVHD in murine models. To analyze efficacy and better understand immunological tolerance, escape mechanisms, and side effects of clinical reagents, testing of species cross-reactive human agents in large animal GVHD models is critical. We have previously developed and refined a nonhuman primate (NHP) large animal GVHD model. However, this model is not readily amenable to semi-high throughput screening of candidate clinical reagents. Here, we report a novel, optimized NHP xenogeneic GVHD (xeno-GVHD) small animal model that recapitulates many aspects of NHP and human GVHD. This model was validated using a clinically available blocking, monovalent anti-CD28 antibody (FR104) whose effects in a human xeno-GVHD rodent model are known. Because human-reactive reagents may not be fully cross-reactive or effective in vivo on NHP immune cells, this NHP xeno-GVHD model provides immunological insights and direct testing on NHP-induced GVHD before committing to the intensive NHP studies that are being increasingly used for detailed evaluation of new immune therapeutic strategies before human trials.
NASA Technical Reports Server (NTRS)
Nowak, Michael A.; Wilms, Joern; Vaughan, Brian A.; Dove, James B.; Begelman, Mitchell C.
1999-01-01
We have recently shown that a 'sphere + disk' geometry Compton corona model provides a good description of Rossi X-ray Timing Explorer (RXTE) observations of the hard/low state of Cygnus X-1. Separately, we have analyzed the temporal data provided by RXTE. In this paper we consider the implications of this timing analysis for our best-fit 'sphere + disk' Comptonization models. We focus our attention on the observed Fourier frequency-dependent time delays between hard and soft photons. We consider whether the observed time delays are: created in the disk but are merely reprocessed by the corona; created by differences between the hard and soft photon diffusion times in coronae with extremely large radii; or are due to 'propagation' of disturbances through the corona. We find that the time delays are most likely created directly within the corona; however, it is currently uncertain which specific model is the most likely explanation. Models that posit a large coronal radius [or equivalently, a large Advection Dominated Accretion Flow (ADAF) region] do not fully address all the details of the observed spectrum. The Compton corona models that do address the full spectrum do not contain dynamical information. We show, however, that simple phenomenological propagation models for the observed time delays for these latter models imply extremely slow characteristic propagation speeds within the coronal region.
Modelling the light-scattering properties of a planetary-regolith analog sample
NASA Astrophysics Data System (ADS)
Vaisanen, T.; Markkanen, J.; Hadamcik, E.; Levasseur-Regourd, A. C.; Lasue, J.; Blum, J.; Penttila, A.; Muinonen, K.
2017-12-01
Solving the scattering properties of asteroid surfaces can be made cheaper, faster, and more accurate with reliable physics-based electromagnetic scattering programs for large and dense random media. Existing exact methods fail to produce solutions for such large systems, and it is essential to develop approximate methods. Radiative transfer (RT) is an approximate method which works for sparse random media such as atmospheres but fails when applied to dense media. In order to make the method applicable to dense media, we have developed a radiative-transfer coherent-backscattering method (RT-CB) with incoherent interactions. To show the current progress with the RT-CB, we have modeled a planetary-regolith analog sample. The analog sample is a low-density agglomerate produced by random ballistic deposition of almost equisized silicate spheres, studied using the PROGRA2-surf experiment. The scattering properties were then computed with the RT-CB assuming that the silicate spheres were equisized and that there was a Gaussian particle size distribution. The results were then compared to the measured data. The phase functions are normalized to unity at the 40-deg phase angle. The tentative intensity modeling shows a good match with the measured data, whereas the polarization modeling shows discrepancies. In summary, the current RT-CB modeling is promising, but more work needs to be carried out, in particular, for modeling the polarization. Acknowledgments. Research supported by European Research Council with Advanced Grant No. 320773 SAEMPL, Scattering and Absorption of ElectroMagnetic waves in ParticuLate media. Computational resources provided by CSC - IT Centre for Science Ltd, Finland.
Leohr, Jennifer; Heathman, Michael; Kjellsson, Maria C
2018-03-01
To quantify the postprandial triglyceride (TG) response of chylomicrons and very-low-density lipoprotein-V6 (VLDL-V6) after a high-fat meal in lean, obese and very obese healthy individuals, using a mechanistic population lipokinetic modelling approach. Healthy individuals from three body mass index population categories: lean (18.5-24.9 kg/m²), obese (30-33 kg/m²), and very obese (34-40 kg/m²) were enrolled in a clinical study to assess the TG response after a high-fat meal containing 60% fat. Non-linear mixed-effect modelling was used to analyse the TG concentrations of chylomicrons and large VLDL-V6 particles. The TGs in chylomicrons and VLDL-V6 particles had a prominent postprandial peak and represented the majority of the postprandial response; only the VLDL-V6 showed a difference across the populations. A turn-over model successfully described the TG concentration-time profiles of both chylomicrons and large VLDL-V6 particles after the high-fat meal. This model consisted of four compartments: two transit compartments for the lag between meal consumption and appearance of TGs in the blood, and one compartment each for the chylomicrons and large VLDL-V6 particles. The rate constants for the production of chylomicrons and elimination of large VLDL-V6 particles, along with the conversion rate of chylomicrons to large VLDL-V6 particles, were well defined. This is the first lipokinetic model to describe the absorption of TGs from dietary fats into the blood stream, and it compares the dynamics of TGs in chylomicrons and large VLDL-V6 particles among lean, obese and very obese people. Such a model can be used to identify where pharmacological therapies act, thereby improving the determination of efficacy and identifying complementary mechanisms for combinational drug therapies.
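The compartmental structure described above can be sketched as a small ODE system; the rate constants below are placeholders, and the population (mixed-effects) layer of the published model is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two transit compartments for the meal-to-blood lag, then chylomicron TG,
# then large VLDL-V6 TG. Rate constants (1/h) are illustrative only.
k_tr, k_conv, k_elim = 2.0, 0.8, 0.6

def tg_model(t, y):
    a1, a2, chylo, vldl6 = y
    da1 = -k_tr * a1                           # meal TG entering the transit chain
    da2 = k_tr * (a1 - a2)
    dchylo = k_tr * a2 - k_conv * chylo        # appearance in chylomicrons
    dvldl6 = k_conv * chylo - k_elim * vldl6   # conversion to large VLDL-V6
    return [da1, da2, dchylo, dvldl6]

sol = solve_ivp(tg_model, (0.0, 12.0), [1.0, 0.0, 0.0, 0.0],
                t_eval=np.linspace(0.0, 12.0, 121))
chylo_curve, vldl6_curve = sol.y[2], sol.y[3]  # postprandial TG profiles
```

The delayed VLDL-V6 peak relative to the chylomicron peak falls out of the series structure: VLDL-V6 is fed by the chylomicron pool rather than directly by the meal.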
Enhanced axion-photon coupling in GUT with hidden photon
NASA Astrophysics Data System (ADS)
Daido, Ryuji; Takahashi, Fuminobu; Yokozaki, Norimi
2018-05-01
We show that the axion coupling to photons can be enhanced in simple models with a single Peccei-Quinn field, if the gauge coupling unification is realized by a large kinetic mixing χ = O(0.1) between hypercharge and an unbroken hidden U(1)H. The key observation is that the U(1)H gauge coupling should be rather strong to induce such large kinetic mixing, leading to enhanced contributions of hidden matter fields to the electromagnetic anomaly. We find that the axion-photon coupling is enhanced by about a factor of 10-100 with respect to the GUT-axion models with E/N = 8/3.
Coniferous canopy BRF simulation based on 3-D realistic scene.
Wang, Xin-Yun; Guo, Zhi-Feng; Qin, Wen-Han; Sun, Guo-Qing
2011-09-01
It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was therefore investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more powerful for remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model was used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. Results in this study show that in most cases the two agreed well; at the tree and forest level, the results are also good.
ERIC Educational Resources Information Center
Rupp, Andre A.
2012-01-01
In the focus article of this issue, von Davier, Naemi, and Roberts essentially coupled: (1) a short methodological review of structural similarities of latent variable models with discrete and continuous latent variables; and (2) two short empirical case studies that show how these models can be applied to real, rather than simulated, large-scale…
The Spin Pulse of the Intermediate Polar V1062 Tauri
NASA Technical Reports Server (NTRS)
Hellier, Coel; Beardmore, A. P.; Mukai, Koji; White, Nicholas E. (Technical Monitor)
2002-01-01
We combine ASCA and RXTE data of V1062 Tau to confirm the presence of a 62-min X-ray pulsation. We show that the pulsation is caused largely by the variation of dense partial absorption, in keeping with current models of accretion onto magnetic white dwarfs. Further parameterisation of the spin pulse is, however, hampered by ambiguities in the models.
Dynamical analysis of rumor spreading model with impulse vaccination and time delay
NASA Astrophysics Data System (ADS)
Huo, Liang'an; Ma, Chenyang
2017-04-01
Rumors cause unnecessary conflicts and confusion by misleading the cognition of the public, and their spreading has a large influence on human affairs. All kinds of rumors and public suspicion are often caused by a lack of official information, so officials should use a variety of channels to deny rumors. The promotion of scientific knowledge is implemented to improve the quality of the whole nation and reduce the harm caused by rumor spreading. In this paper, treating the repeated official science-education campaigns that deny a rumor as periodic impulses, we propose an XWYZ rumor-spreading model with impulse vaccination and time delay, and we analyze the global dynamical behaviors of the model. Using the discrete dynamical system determined by the comparison theorem and Floquet theory, we show that there exists a rumor-free periodic solution. Further, we show that the rumor-free periodic solution is globally attractive under appropriate conditions. We also obtain a sufficient condition for the permanence of the model. Finally, with numerical simulation, our results indicate that a large vaccination rate, a short impulse period, or a long latent period is a sufficient condition for the extinction of rumors.
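The impulsive-vaccination mechanism can be illustrated with a reduced three-class sketch (ignorants X, spreaders Y, stiflers Z): the ODEs are integrated between pulses, and at each education pulse a fraction of ignorants jumps to the stifler class. Parameters are illustrative, and the time delay and W class of the full XWYZ model are omitted.

```python
from scipy.integrate import solve_ivp

# Illustrative rates: contact, stifling, population turnover.
beta, gamma, mu = 0.6, 0.2, 0.05
p_pulse, T_pulse = 0.3, 5.0        # fraction educated per pulse, pulse period

def rhs(t, s):
    X, Y, Z = s
    dX = mu - beta * X * Y - mu * X
    dY = beta * X * Y - gamma * Y - mu * Y
    dZ = gamma * Y - mu * Z
    return [dX, dY, dZ]

state, t0, segments = [0.9, 0.1, 0.0], 0.0, []
for _ in range(20):                # 20 education pulses
    sol = solve_ivp(rhs, (t0, t0 + T_pulse), state, max_step=0.1)
    segments.append(sol)
    X, Y, Z = sol.y[:, -1]
    state = [(1 - p_pulse) * X, Y, Z + p_pulse * X]   # impulsive vaccination
    t0 += T_pulse
```

Raising p_pulse or shortening T_pulse in this sketch drives the spreader class toward zero, mirroring the extinction condition stated in the abstract.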
Large eddy simulation of dust-uplift by haboob density currents
NASA Astrophysics Data System (ADS)
Huang, Q.
2017-12-01
Cold pool outflows have been shown from both observations and convection-permitting models to be a dominant source of dust uplift ("haboobs") in the summertime Sahel and Sahara, and to cause dust uplift over deserts across the world. In this paper, large eddy model (LEM) simulations, which resolve the turbulence within cold pools much better than the convection-permitting models used in previous studies of haboobs, are used to investigate the winds that cause dust uplift in cold pools and the resultant dust uplift and transport. Dust uplift largely occurs in the head of the density current, consistent with the few existing observations. In the modeled density current, dust is largely restricted to the lowest, coldest, and well-mixed layer of the cold pool outflow (below around 400 m), except above the head of the cold pool, where some dust reaches 2.5 km. This rapid transport to high altitude will contribute to long atmospheric lifetimes of large dust particles from haboobs. Decreasing the model horizontal grid-spacing from 1.0 km to 100 m resolves more turbulence, locally increasing winds, increasing mixing and reducing the propagation speed of the density current. Total accumulated dust uplift is approximately twice as large in 1.0 km runs as in 100 m runs, suggesting that the representation of turbulence and mixing is significant for studying haboobs in convection-permitting runs. Simulations with surface sensible heat fluxes representative of those from a desert region in daytime show that increasing surface fluxes slows the density current due to increased mixing but increases dust uplift rates, due to increased downward transport of momentum to the surface.
Explorations in fuzzy physics and non-commutative geometry
NASA Astrophysics Data System (ADS)
Kurkcuoglu, Seckin
Fuzzy spaces arise as discrete approximations to continuum manifolds. They are usually obtained through quantizing coadjoint orbits of compact Lie groups and they can be described in terms of finite-dimensional matrix algebras, which for large matrix sizes approximate the algebra of functions of the limiting continuum manifold. Their ability to exactly preserve the symmetries of their parent manifolds is especially appealing for physical applications. Quantum field theories are built over them as finite-dimensional matrix models preserving almost all the symmetries of their respective continuum models. In this dissertation, we first focus our attention on the study of fuzzy supersymmetric spaces. In this regard, we obtain the fuzzy supersphere S_F^{2,2} through quantizing the supersphere, and demonstrate that it has exact supersymmetry. We derive a finite series formula for the *-product of functions over S_F^{2,2} and analyze the differential geometric information encoded in this formula. Subsequently, we show that quantum field theories on S_F^{2,2} are realized as finite-dimensional supermatrix models, and in particular we obtain the non-linear sigma model over the fuzzy supersphere by constructing the fuzzy supersymmetric extensions of a certain class of projectors. We show that this model, too, is realized as a finite-dimensional supermatrix model with exact supersymmetry. Next, we show that fuzzy spaces have a generalized Hopf algebra structure. By focusing on the fuzzy sphere, we establish that there is a *-homomorphism from the group algebra SU(2)* of SU(2) to the fuzzy sphere. Using this and the canonical Hopf algebra structure of SU(2)*, we show that both the fuzzy sphere and their direct sum are Hopf algebras. Using these results, we discuss processes in which a fuzzy sphere with angular momentum J splits into fuzzy spheres with angular momenta K and L. Finally, we study the formulation of Chern-Simons (CS) theory on an infinite strip of the non-commutative plane. We develop a finite-dimensional matrix model, whose large-size limit approximates the CS theory on the infinite strip, and show that there are edge observables in this model obeying a finite-dimensional Lie algebra that resembles the Kac-Moody algebra.
Nonlinear Aerodynamic Modeling From Flight Data Using Advanced Piloted Maneuvers and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Brandon, Jay M.; Morelli, Eugene A.
2012-01-01
Results of the Aeronautics Research Mission Directorate Seedling Project Phase I research project entitled "Nonlinear Aerodynamics Modeling using Fuzzy Logic" are presented. Efficient and rapid flight test capabilities were developed for estimating highly nonlinear models of airplane aerodynamics over a large flight envelope. Results showed that the flight maneuvers developed, used in conjunction with the fuzzy-logic system identification algorithms, produced very good model fits of the data, with no model structure inputs required, for flight conditions ranging from cruise to departure and spin conditions.
Large-scale topology and the default mode network in the mouse connectome
Stafford, James M.; Jarrett, Benjamin R.; Miranda-Dominguez, Oscar; Mills, Brian D.; Cain, Nicholas; Mihalas, Stefan; Lahvis, Garet P.; Lattal, K. Matthew; Mitchell, Suzanne H.; David, Stephen V.; Fryer, John D.; Nigg, Joel T.; Fair, Damien A.
2014-01-01
Noninvasive functional imaging holds great promise for serving as a translational bridge between human and animal models of various neurological and psychiatric disorders. However, despite a depth of knowledge of the cellular and molecular underpinnings of atypical processes in mouse models, little is known about the large-scale functional architecture measured by functional brain imaging, limiting translation to human conditions. Here, we provide a robust processing pipeline to generate high-resolution, whole-brain resting-state functional connectivity MRI (rs-fcMRI) images in the mouse. Using a mesoscale structural connectome (i.e., an anterograde tracer mapping of axonal projections across the mouse CNS), we show that rs-fcMRI in the mouse has strong structural underpinnings, validating our procedures. We next directly show that large-scale network properties previously identified in primates are present in rodents, although they differ in several ways. Last, we examine the existence of the so-called default mode network (DMN)—a distributed functional brain system identified in primates as being highly important for social cognition and overall brain function and atypically functionally connected across a multitude of disorders. We show the presence of a potential DMN in the mouse brain both structurally and functionally. Together, these studies confirm the presence of basic network properties and functional networks of high translational importance in structural and functional systems in the mouse brain. This work clears the way for an important bridge measurement between human and rodent models, enabling us to make stronger conclusions about how regionally specific cellular and molecular manipulations in mice relate back to humans. PMID:25512496
Axonal degeneration and regeneration in sensory roots in a genital herpes model.
Soffer, D; Martin, J R
1989-01-01
In a mouse model of genital herpes simplex virus type 2 (HSV-2) infection, roots of the lower spinal cord were examined 5 days to 6 months after inoculation. Using immunoperoxidase methods on paraffin sections, viral antigen was found in sensory ganglia, their proximal roots and distal nerves on days 5 and 6 after infection. In Epon sections, most mice had focal sensory root abnormalities in lower thoracic, lumbar or sacral levels. At days 7 and 10, lesions showed chiefly nerve fiber degeneration, particularly of large myelinated fibers. At 2 weeks, lesions contained relatively large bundles of small unmyelinated fibers with immature axon-Schwann cell relationships. From 3 to 6 weeks, lesions again contained many more small unmyelinated fibers than normal but, in increasing proportions, axons in bundles were isolated from their neighbors by Schwann cell cytoplasm, and Schwann cells having 1:1 relationships with axons showed mesaxon or thin myelin sheath formation. At later times, the proportion of small unmyelinated axons decreased in parallel with increased numbers of small myelinated axons. By 6 months, affected roots showed a relative reduction in large myelinated fibers, increased proportions of small myelinated fibers and Schwann cell nuclei. Numbers of unmyelinated fibers were reduced relative to 3- to 6-week lesions. Axonal degeneration and regeneration appears to be the chief pathological change in sensory roots in this model. If regenerated fibers arise from latently infected neurons, then establishment of latency is not a relatively silent event, but is associated with major long-lasting, morphologically detectable effects.
Li, Lun; Long, Yan; Zhang, Libin; Dalton-Morgan, Jessica; Batley, Jacqueline; Yu, Longjiang; Meng, Jinling; Li, Maoteng
2015-01-01
The prediction of the flowering time (FT) trait in Brassica napus based on genome-wide markers and the detection of the underlying genetic factors are important not only for oilseed producers around the world but also for other crops in the rotation system in China. In previous studies, the low density and heterogeneity of the markers used obstructed genomic selection in B. napus and comprehensive mapping of FT-related loci. In this study, a high-density genome-wide SNP set was genotyped from a double-haploid population of B. napus. We first performed genomic prediction of FT traits in B. napus using SNPs across the genome under ten environments of three geographic regions via eight existing genomic prediction models. The results showed that all the models achieved comparably high accuracies, verifying the feasibility of genomic prediction in B. napus. Next, we performed a large-scale mapping of FT-related loci among three regions, and found 437 associated SNPs, some of which represented known FT genes, such as AP1 and PHYE. The genes tagged by the associated SNPs were enriched in biological processes involved in the formation of flowers. Epistasis analysis showed that significant interactions were found between detected loci, even among some known FT-related genes. All the results showed that our large-scale, high-density genotype data are of great practical and scientific value for B. napus. To the best of our knowledge, this is the first evaluation of genomic selection models in B. napus based on a high-density SNP dataset and large-scale mapping of FT loci.
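Among the simplest model families typically evaluated in such comparisons is whole-genome ridge regression (equivalent to GBLUP up to the choice of penalty); a sketch with synthetic genotypes is shown below, where the data sizes and penalty are assumptions, not this study's settings.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data: genotype matrix coded 0/1/2 and a phenotype with
# a handful of causal SNPs plus noise.
rng = np.random.default_rng(0)
geno = rng.integers(0, 3, size=(200, 5000)).astype(float)   # lines x SNPs
ft = geno[:, :20] @ rng.normal(size=20) + rng.normal(size=200)

# Ridge regression on all SNPs jointly; cross-validated R^2 approximates
# the genomic prediction accuracy reported in such studies.
model = Ridge(alpha=float(len(ft)))
accuracy = cross_val_score(model, geno, ft, cv=5, scoring="r2")
print(accuracy.mean())
```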
Assembly and control of large microtubule complexes
NASA Astrophysics Data System (ADS)
Korolev, Kirill; Ishihara, Keisuke; Mitchison, Timothy
Motility, division, and other cellular processes require rapid assembly and disassembly of microtubule structures. We report a new mechanism for the formation of asters, radial microtubule complexes found in very large cells. The standard model of aster growth assumes elongation of a fixed number of microtubules originating from the centrosomes. However, aster morphology in this model does not scale with cell size, and we found evidence for microtubule nucleation away from centrosomes. By combining polymerization dynamics and auto-catalytic nucleation of microtubules, we developed a new biophysical model of aster growth. The model predicts an explosive transition from an aster with a steady-state radius to one that expands as a travelling wave. At the transition, microtubule density increases continuously, but the aster growth rate jumps discontinuously to a nonzero value. We tested our model with biochemical perturbations in egg extract and confirmed the main theoretical predictions, including the jump in the growth rate. Our results show that asters can grow even though individual microtubules are short and unstable. The dynamic balance between microtubule collapse and nucleation could be a general framework for the assembly and control of large microtubule complexes. NIH GM39565; Simons Foundation 409704; Honjo International Scholarship Foundation.
NASA Astrophysics Data System (ADS)
Le Doussal, Pierre; Petković, Aleksandra; Wiese, Kay Jörg
2012-06-01
We study the motion of an elastic object driven in a disordered environment in the presence of both dissipation and inertia. We consider random forces with the statistics of random walks and reduce the problem to a single degree of freedom. It is the extension of the mean-field Alessandro-Beatrice-Bertotti-Montorsi (ABBM) model in the presence of an inertial mass m. While the ABBM model can be solved exactly, its extension to inertia exhibits complicated history dependence due to oscillations and backward motion. The characteristic scales for avalanche motion are studied from numerics and qualitative arguments. To make analytical progress, we consider two variants which coincide with the original model whenever the particle moves only forward. Using a combination of analytical and numerical methods together with simulations, we characterize the distributions of instantaneous acceleration and velocity, and compare them in these three models. We show that for large driving velocity, all three models share the same large-deviation function for positive velocities, which is obtained analytically for small and large m, as well as for m=6/25. The effect of small additional thermal and quantum fluctuations can be treated within an approximate method.
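For intuition, a minimal Euler scheme for the inertial ABBM equation m dv/dt = -ηv + k(Vt - x) + F(x) is sketched below. Here the random-force increments are redrawn at every step, which matches the full model only during forward motion (in the spirit of the variants discussed above); all parameter values are illustrative.

```python
import numpy as np

# Illustrative parameters: mass, damping, spring, drive velocity, disorder.
m, eta, k, V, sigma = 0.5, 1.0, 0.2, 1.0, 1.0
dt, n_steps = 1e-3, 200_000
rng = np.random.default_rng(1)

x, v, F, t = 0.0, 0.0, 0.0, 0.0
velocities = np.empty(n_steps)
for i in range(n_steps):
    a = (-eta * v + k * (V * t - x) + F) / m
    v += a * dt
    x += v * dt
    # Random-walk force in x: Var(dF) = 2*sigma*|dx| (increments redrawn,
    # so no memory of previously visited positions).
    F += np.sqrt(2.0 * sigma * abs(v) * dt) * rng.normal()
    t += dt
    velocities[i] = v

# A histogram of `velocities` approximates the stationary velocity
# distribution whose tails the large-deviation analysis characterizes.
```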
NASA Technical Reports Server (NTRS)
Kashlinsky, A.
1993-01-01
Modified cold dark matter (CDM) models were recently suggested to account for large-scale optical data, which fix the power spectrum on large scales, and the COBE results, which would then fix the bias parameter, b. We point out that all such models have a deficit of small-scale power where density fluctuations are presently nonlinear, and should then lead to late epochs of collapse of scales M between 10^9-10^10 solar masses and (1-5) x 10^14 solar masses. We compute the probabilities and comoving space densities of various scale objects at high redshifts according to the CDM models and compare these with observations of high-z QSOs, high-z galaxies and the protocluster-size object found recently by Uson et al. (1992) at z = 3.4. We show that the modified CDM models are inconsistent with the observational data on these objects. We thus suggest that in order to account for the high-z objects, as well as the large-scale and COBE data, one needs a power spectrum with more power on small scales than CDM models allow and an open universe.
Comparison of Numerical Modeling Methods for Soil Vibration Cutting
NASA Astrophysics Data System (ADS)
Jiang, Jiandong; Zhang, Enguang
2018-01-01
In this paper, we studied appropriate numerical simulation methods for vibration soil cutting. Three numerical simulation methods commonly used for uniform-speed soil cutting, Lagrange, ALE and DEM, are analyzed. Three vibration soil cutting simulation models are established using LS-DYNA. The applicability of the three methods to this problem is analyzed in combination with the model mechanism and simulation results. Both the Lagrange method and the DEM method can show the force oscillation of the tool and the large deformation of the soil in vibration cutting. The Lagrange method shows a better effect for soil debris breaking. Because of the poor stability of the ALE method, it is not suitable for the soil vibration cutting problem.
Testing the gravitational instability hypothesis?
NASA Technical Reports Server (NTRS)
Babul, Arif; Weinberg, David H.; Dekel, Avishai; Ostriker, Jeremiah P.
1994-01-01
We challenge a widely accepted assumption of observational cosmology: that successful reconstruction of observed galaxy density fields from measured galaxy velocity fields (or vice versa), using the methods of gravitational instability theory, implies that the observed large-scale structures and large-scale flows were produced by the action of gravity. This assumption is false, in that there exist nongravitational theories that pass the reconstruction tests and gravitational theories with certain forms of biased galaxy formation that fail them. Gravitational instability theory predicts specific correlations between large-scale velocity and mass density fields, but the same correlations arise in any model where (a) structures in the galaxy distribution grow from homogeneous initial conditions in a way that satisfies the continuity equation, and (b) the present-day velocity field is irrotational and proportional to the time-averaged velocity field. We demonstrate these assertions using analytical arguments and N-body simulations. If large-scale structure is formed by gravitational instability, then the ratio of the galaxy density contrast to the divergence of the velocity field yields an estimate of the density parameter Omega (or, more generally, an estimate of beta ≡ Omega^0.6/b, where b is an assumed constant of proportionality between galaxy and mass density fluctuations). In nongravitational scenarios, the values of Omega or beta estimated in this way may fail to represent the true cosmological values. However, even if nongravitational forces initiate and shape the growth of structure, gravitationally induced accelerations can dominate the velocity field at late times, long after the action of any nongravitational impulses. The estimated beta approaches the true value in such cases, and in our numerical simulations the estimated beta values are reasonably accurate for both gravitational and nongravitational models. Reconstruction tests that show correlations between galaxy density and velocity fields can rule out some physically interesting models of large-scale structure. In particular, successful reconstructions constrain the nature of any bias between the galaxy and mass distributions, since processes that modulate the efficiency of galaxy formation on large scales in a way that violates the continuity equation also produce a mismatch between the observed galaxy density and the density inferred from the peculiar velocity field. We obtain successful reconstructions for a gravitational model with peaks biasing, but we also show examples of gravitational and nongravitational models that fail reconstruction tests because of more complicated modulations of galaxy formation.
Large-scale and Long-duration Simulation of a Multi-stage Eruptive Solar Event
NASA Astrophysics Data System (ADS)
Jiang, chaowei; Hu, Qiang; Wu, S. T.
2015-04-01
We employ a data-driven 3D MHD active region evolution model using the Conservation Element and Solution Element (CESE) numerical method. This newly developed model retains the full MHD effects, allowing time-dependent boundary conditions and time evolution studies. The time-dependent simulation is driven by measured vector magnetograms and the method of MHD characteristics on the bottom boundary. We have applied the model to investigate the coronal magnetic field evolution of AR 11283, which was characterized by a pre-existing sigmoid structure in the core region and multiple eruptions on both relatively small and large scales. We have succeeded in producing the core magnetic field structure and the subsequent eruptions of flux-rope structures (see https://dl.dropboxusercontent.com/u/96898685/large.mp4 for an animation) as the measured vector magnetograms on the bottom boundary evolve in time with constant flux emergence. The whole process, lasting for about an hour in real time, compares well with the corresponding SDO/AIA and coronagraph imaging observations. These results demonstrate the capability of this largely data-driven model to simulate complex, topologically rich, and highly dynamic active region evolution. (We acknowledge partial support of NSF grants AGS 1153323 and AGS 1062050, and data support from SDO/HMI and AIA teams.)
Heterogeneity-induced large deviations in activity and (in some cases) entropy production
NASA Astrophysics Data System (ADS)
Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.
2014-10-01
We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.
Large strain cruciform biaxial testing for FLC detection
NASA Astrophysics Data System (ADS)
Güler, Baran; Efe, Mert
2017-10-01
Selection of a proper test method, specimen design and analysis method are key issues for studying the formability of sheet metals and detecting their forming limit curves (FLC). Materials with complex microstructures may need additional micro-mechanical investigation and accurate modelling. The cruciform biaxial test stands as an alternative to standard tests as it achieves frictionless, in-plane, multi-axial stress states with a single sample geometry. In this study, we introduce a small-scale (less than 10 cm) cruciform sample allowing micro-mechanical investigation at stress states ranging from plane strain to equibiaxial. With successful specimen design and surface finish, large forming limit strains are obtained in the test region of the sample. The large forming limit strains obtained by experiments are compared to the values obtained from the Marciniak-Kuczynski (M-K) local necking model and the Cockcroft-Latham damage model. This comparison shows that the experimental limiting strains are beyond the theoretical values, approaching the fracture strain of the two test materials: Al-6061-T6 aluminum alloy and DC-04 high-formability steel.
Energy gain calculations in Penning fusion systems using a bounce-averaged Fokker-Planck model
NASA Astrophysics Data System (ADS)
Chacón, L.; Miley, G. H.; Barnes, D. C.; Knoll, D. A.
2000-11-01
In spherical Penning fusion devices, a spherical cloud of electrons, confined in a Penning-like trap, creates the ion-confining electrostatic well. Fusion energy gains for these systems have been calculated in optimistic conditions (i.e., spherically uniform electrostatic well, no collisional ion-electron interactions, single ion species) using a bounce-averaged Fokker-Planck (BAFP) model. Results show that steady-state distributions in which the Maxwellian ion population is dominant correspond to the lowest ion recirculation powers (and hence the highest fusion energy gains). It is also shown that realistic parabolic-like wells result in better energy gains than square wells, particularly at large well depths (>100 kV). Operating regimes with fusion power to ion input power ratios (Q-value) >100 have been identified. The effect of electron losses on the Q-value has been addressed heuristically using a semianalytic model, indicating that large Q-values are still possible provided that electron particle losses are kept small and well depths are large.
DeepDeath: Learning to predict the underlying cause of death with Big Data.
Hassanzadeh, Hamid Reza; Ying Sha; Wang, May D
2017-07-01
Multiple cause-of-death data provide a valuable source of information that can be used to enhance health standards by predicting health-related trajectories in societies with large populations. These data are often available in large quantities across U.S. states and require Big Data techniques to uncover complex hidden patterns. We design two different classes of models suitable for large-scale analysis of mortality data: a Hadoop-based ensemble of random forests trained over N-grams, and DeepDeath, a deep classifier based on the recurrent neural network (RNN). We apply both classes to the mortality data provided by the National Center for Health Statistics and show that while both perform significantly better than the random classifier, the deep model, which utilizes long short-term memory networks (LSTMs), surpasses the N-gram-based models and is capable of learning the temporal aspect of the data without a need for building ad-hoc, expert-driven features.
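A minimal sketch of an LSTM classifier over sequences of condition codes is shown below (Keras); the vocabulary size and number of cause classes are placeholders, not the architecture of DeepDeath.

```python
import tensorflow as tf

# Placeholder dimensions: code vocabulary and number of underlying-cause classes.
vocab_size, n_classes = 2000, 40

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 64),   # integer codes -> dense vectors
    tf.keras.layers.LSTM(128),                   # order information in the record
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(code_sequences, cause_labels, epochs=5)  # padded integer sequences
```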
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms, which are only applicable to isotropic networks, and therefore has a strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
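The regularized extreme learning step, a random hidden layer followed by a ridge-regularized linear readout, can be sketched in a few lines; the feature construction and hyperparameters here are assumptions, not the paper's.

```python
import numpy as np

def elm_fit(hops, dists, n_hidden=100, lam=1e-2, seed=0):
    """Regularized extreme learning machine.

    hops:  (n_pairs, d) input features (e.g. hop-count vectors to anchors)
    dists: (n_pairs,)   target physical distances
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(hops.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                    # random biases
    H = np.tanh(hops @ W + b)                        # random feature map
    # Ridge (Tikhonov-regularized) closed-form solution for output weights.
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ dists)
    return W, b, beta

def elm_predict(hops, W, b, beta):
    return np.tanh(hops @ W + b) @ beta
```

Because only the output weights are trained (in closed form), the fit is cheap, which is consistent with the low computational cost claimed above.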
Quantifying O3 Impacts in Urban Areas Due to Wildfires Using a Generalized Additive Model.
Gong, Xi; Kaulfus, Aaron; Nair, Udaysankar; Jaffe, Daniel A
2017-11-21
Wildfires emit O3 precursors, but there are large variations in emissions, plume heights, and photochemical processing. These factors make it challenging to model O3 production from wildfires using Eulerian models. Here we describe a statistical approach to characterize the maximum daily 8-h average O3 (MDA8) for 8 cities in the U.S. for typical, nonfire, conditions. The statistical model represents between 35% and 81% of the variance in MDA8 for each city. We then examine the residual from the model under conditions with elevated particulate matter (PM) and satellite-observed smoke ("smoke days"). For these days, the residuals are elevated by an average of 3-8 ppb (MDA8) compared to nonsmoke days. We found that while smoke days are only 4.1% of all days (May-Sept), they are 19% of days with an MDA8 greater than 75 ppb. We also show that a published method that does not account for transport patterns gives rise to large overestimates in the amount of O3 from fires, particularly for coastal cities. Finally, we apply this method to a case study from August 2015, and show that the method gives results that are directly applicable to the EPA guidance on excluding data due to an uncontrollable source.
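A GAM-style nonfire baseline of this kind can be approximated with per-feature spline expansions plus a linear fit; the sketch below uses synthetic meteorological predictors, and the covariates, knots, and penalty are placeholders rather than the published model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 500                                    # placeholder nonfire training days
temp = rng.uniform(10, 40, n)              # daily max temperature (deg C)
wind = rng.uniform(0, 10, n)               # wind speed (m/s)
doy = rng.uniform(120, 270, n)             # day of year (May-Sept)
mda8 = 30 + 0.8 * temp - 1.5 * wind + rng.normal(0, 5, n)  # synthetic MDA8 (ppb)

# Per-feature spline expansion + linear fit gives an additive (GAM-like) model.
X = np.column_stack([temp, wind, doy])
baseline = make_pipeline(SplineTransformer(degree=3, n_knots=8), Ridge(alpha=1.0))
baseline.fit(X, mda8)

# On a smoke-impacted day, the residual (observed minus baseline) estimates
# the wildfire O3 enhancement:
x_day = np.array([[35.0, 2.0, 210.0]])
enhancement = 85.0 - baseline.predict(x_day)[0]
```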
NASA Astrophysics Data System (ADS)
Wang, Z.
2015-12-01
For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-resolution hydrological simulation has refined spatial descriptions of hydrological behavior. Meanwhile, this trend is accompanied by increased model complexity and larger numbers of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE) has been widely used for uncertainty analysis of hydrological models, combining Monte Carlo sampling with Bayesian estimation. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms utilizing iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted genetic algorithms, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain the parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
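The GLUE step itself is compact once a sample of parameter sets and their simulations is available; a sketch using the Nash-Sutcliffe efficiency as the informal likelihood is below (the threshold and measure are common choices, not necessarily this study's).

```python
import numpy as np

def glue_weights(simulated, observed, threshold=0.5):
    """One GLUE step: keep 'behavioral' parameter sets and weight them.

    simulated: (n_sets, n_times) model output, one row per sampled parameter set
    observed:  (n_times,) observations
    Uses Nash-Sutcliffe efficiency as the informal likelihood and assumes at
    least one set exceeds the behavioral threshold.
    """
    sse = np.sum((simulated - observed) ** 2, axis=1)
    nse = 1.0 - sse / np.sum((observed - observed.mean()) ** 2)
    behavioral = nse > threshold
    weights = np.where(behavioral, nse, 0.0)
    return behavioral, weights / weights.sum()
```

Prediction bounds then follow as weighted quantiles of the behavioral simulations at each time step; with heuristic samplers (GA, DE, SCE), the rows of `simulated` come from the parameter sets visited during the search rather than from plain Monte Carlo draws.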
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach estimates fused fields in areas with a large number of observations better than the previous interpolation-based method does. The optimized approach was applied to correct the LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of both magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
NASA Astrophysics Data System (ADS)
Tassi, R.; Lorenzini, F.; Allasia, D. G.
2015-06-01
In recent decades, new approaches were adopted to manage stormwater as close to its source as possible through technologies and devices that preserve and recreate natural landscape features. Green roofs (GR) are examples of these devices and are also incentivized by cities' stormwater management plans. Several studies show that GRs decrease on-site runoff from impervious surfaces; however, the effect of widespread GR implementation on flood characteristics at the urban basin scale in subtropical areas is little discussed, mainly because of the absence of data. This paper therefore presents results from the monitoring of an extensive modular GR under subtropical weather conditions, the development of a rainfall-runoff model based on the modified Curve Number (CN) and SCS Triangular Unit Hydrograph (TUH) methods, and the analysis of the large-scale impact of GRs by modelling different basins. The model was calibrated against observed data and showed that the GR absorbed almost all of the smaller storms and reduced runoff even during the most intense rainfall. The overall CN was estimated at 83 (consistent with the available literature), with the shape of the hydrographs well reproduced. Large-scale modelling (in basins ranging from 0.03 ha to several square kilometers) showed that the widespread use of GRs reduced peak flows (volumes) by around 57% (48%) at the source and 38% (32%) at the basin scale. Thus, this research validated a tool for the assessment of structural management measures (specifically GRs) to address changes in flood characteristics in city water management planning. From the application of this model, it was concluded that even if the efficiency of GRs decreases as the basin scale increases, they still provide a good option to cope with urbanization impacts.
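For reference, the standard SCS Curve Number runoff relation used in such models is Q = (P - Ia)^2 / (P - Ia + S) with S = 25400/CN - 254 (in mm) and Ia ≈ 0.2S; a direct implementation follows, using the calibrated CN of 83 reported above.

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """SCS Curve Number runoff depth (mm) for a storm of depth p_mm (mm).

    S = 25400/CN - 254 (mm); Ia = ia_ratio * S;
    Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else 0.
    """
    s = 25400.0 / cn - 254.0
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# With the calibrated CN of 83 for the green roof, a 50 mm storm yields
# roughly 17 mm of runoff:
print(scs_cn_runoff(50.0, 83))
```

The triangular unit hydrograph step then distributes this runoff depth in time to produce the simulated hydrograph shape.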
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, S.; Wang, Minghuai; Ghan, Steven J.
Aerosol-cloud interactions continue to constitute a major source of uncertainty in estimates of climate radiative forcing. The variation of aerosol indirect effects (AIE) in climate models is investigated across different dynamical regimes, determined by monthly mean 500 hPa vertical pressure velocity (ω500), lower-tropospheric stability (LTS) and large-scale surface precipitation rate derived from several global climate models (GCMs), with a focus on the liquid water path (LWP) response to cloud condensation nuclei (CCN) concentrations. The LWP sensitivity to aerosol perturbations within dynamical regimes exhibits a large spread among these GCMs; the models differ most in regimes of strong large-scale ascent (ω500 < -25 hPa/d) and of low clouds (stratocumulus and trade-wind cumulus). Shortwave aerosol indirect forcing also differs significantly among regimes. Shortwave aerosol indirect forcing in ascending regimes is as large as that in stratocumulus regimes, which indicates that regimes with strong large-scale ascent are as important as stratocumulus regimes in studying AIE. It is further shown that shortwave aerosol indirect forcing over regions with high monthly large-scale surface precipitation rates (> 0.1 mm/d) contributes the most to the total aerosol indirect forcing (from 64% to nearly 100%). The results show that the uncertainty in AIE is even larger within specific dynamical regimes than it is globally, pointing to the need to reduce the uncertainty in AIE within each dynamical regime.
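The regime-sorting diagnostic itself is straightforward to illustrate. The sketch below bins synthetic grid-cell-months by monthly-mean ω500, using the abstract's -25 hPa/d ascent threshold, and computes λ = d ln(LWP)/d ln(CCN) per regime; all arrays are placeholders for paired pre-industrial and present-day GCM output:

```python
# Regime-sorting sketch: bin grid-cell-months by monthly-mean omega500
# and compute the LWP sensitivity lambda = dln(LWP)/dln(CCN) per regime.
# All arrays are synthetic placeholders for paired GCM simulations.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
omega500 = rng.normal(0.0, 40.0, n)          # hPa/d; negative = ascent

ccn_pi = rng.lognormal(4.0, 0.4, n)          # pre-industrial CCN (cm-3)
ccn_pd = ccn_pi * rng.uniform(1.5, 3.0, n)   # present-day CCN
lwp_pi = rng.lognormal(4.5, 0.3, n)          # LWP (g/m2)
lwp_pd = lwp_pi * (ccn_pd / ccn_pi) ** 0.1   # assumed weak LWP response

labels = ["strong ascent", "weak ascent", "weak subsidence", "strong subsidence"]
idx = np.digitize(omega500, [-25.0, 0.0, 25.0])

for k, name in enumerate(labels):
    m = idx == k
    lam = np.mean(np.log(lwp_pd[m] / lwp_pi[m]) / np.log(ccn_pd[m] / ccn_pi[m]))
    print(f"{name:18s} n={m.sum():4d}  lambda={lam:+.3f}")
```

Applied to real model output, the inter-model spread of λ within each bin is the regime-resolved uncertainty the abstract highlights.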
NASA Astrophysics Data System (ADS)
Das, M.; Nath, P.; Sarkar, D.
2016-02-01
In this article, the effect of etching current density (J) on the microstructural, optical and electrical properties of a photoelectrochemically prepared heterostructure is reported. The prepared samples are characterized by FESEM, XRD, UV-visible, Raman and photoluminescence (PL) spectroscopy, and by current-voltage (I-V) measurements. FESEM shows the presence of a mixture of randomly distributed meso- and micropores. The porous-layer thickness, determined from cross-sectional SEM views, is proportional to J. XRD shows a crystalline nature, but the extent of crystallinity gradually decreases with increasing J. Raman spectra show a large red-shift and asymmetric broadening with respect to crystalline silicon (c-Si). The UV-visible reflectance and PL peaks blue-shift with increasing J. The I-V characteristics are analyzed with the conventional thermionic emission (TE) model and with Cheung's model to estimate the barrier height (φb), ideality factor (n) and series resistance (Rs), allowing a comparison between the two models; the latter model is found to fit better.
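Both extraction procedures are standard and can be sketched. The diode area, Richardson constant, and the synthetic I-V curve below (n = 2, φb = 0.7 V, Rs = 150 Ω) are assumptions for illustration, not values from the paper:

```python
# TE-model and Cheung-function extraction sketch on a synthetic diode.
import numpy as np

T = 300.0
kT = 1.381e-23 * T / 1.602e-19               # thermal voltage at 300 K (V)
A_area, A_star = 1e-2, 112.0                 # cm2; A cm-2 K-2 (assumed n-Si)
I0 = A_area * A_star * T**2 * np.exp(-0.7 / kT)   # phi_b = 0.7 V assumed

def diode_current(v, n=2.0, Rs=150.0):
    # Solve I = I0*exp((v - I*Rs)/(n*kT)) by bisection (monotone in I).
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if I0 * np.exp((v - mid * Rs) / (n * kT)) > mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

V = np.linspace(0.05, 1.0, 120)
I = np.array([diode_current(v) for v in V])

# TE fit on the low-bias region, where the I*Rs drop is negligible:
# ln I = ln I0 + V/(n*kT); the slope gives n, the intercept gives phi_b.
m = (V > 0.1) & (V < 0.3)
slope, icpt = np.polyfit(V[m], np.log(I[m]), 1)
n_te = 1.0 / (slope * kT)
phi_te = kT * np.log(A_area * A_star * T**2 / np.exp(icpt))

# Cheung's functions use the high-bias, Rs-dominated region:
# dV/d(lnI) = n*kT + I*Rs and
# H(I) = V - n*kT*ln(I/(A_area*A_star*T**2)) = n*phi_b + I*Rs.
h = V > 0.5
dV_dlnI = np.gradient(V[h], np.log(I[h]))
Rs_c, nkT_c = np.polyfit(I[h], dV_dlnI, 1)
n_c = nkT_c / kT
H = V[h] - n_c * kT * np.log(I[h] / (A_area * A_star * T**2))
Rs_c2, nphi = np.polyfit(I[h], H, 1)
print(f"TE fit:  n = {n_te:.2f}, phi_b = {phi_te:.3f} V")
print(f"Cheung:  n = {n_c:.2f}, Rs = {Rs_c:.0f} ohm, phi_b = {nphi/n_c:.3f} V")
```

The practical difference is visible even in this toy example: the TE fit ignores Rs and is confined to low bias, while Cheung's functions extract n, Rs and φb directly from the Rs-dominated region, which is why they often fit real devices better.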
From the 2008 to the 2014 Crisis: Response of the Labor Market of Russia's Largest Cities
ERIC Educational Resources Information Center
Khmeleva, Galina A.; Bulavko, Olga A.
2016-01-01
The model of shift-share analysis was improved to show that, in the case of developing countries, the foundation for an economy's transition to an industrially innovational type of development is created at the local level. Analysis of structural shifts in 28 large cities over 2008-2014 showed that the perspective of industrially innovational development is yet…
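Although the abstract is truncated, the classical shift-share decomposition it builds on is easy to state: a city's employment change in each industry splits into a national-growth, an industry-mix, and a regional (competitive) component. The sketch below uses invented figures:

```python
# Classical shift-share decomposition sketch; all employment figures
# (thousands of jobs) are invented for illustration.
import numpy as np

industries = ["manufacturing", "services", "IT"]
e_city_t0 = np.array([120.0, 200.0, 30.0])      # city, base year
e_city_t1 = np.array([110.0, 230.0, 45.0])      # city, end year
e_nat_t0 = np.array([9000.0, 22000.0, 1500.0])  # national, same industries
e_nat_t1 = np.array([8700.0, 24000.0, 2100.0])

g_nat = e_nat_t1.sum() / e_nat_t0.sum() - 1.0   # national growth rate
g_ind = e_nat_t1 / e_nat_t0 - 1.0               # national per-industry rates
g_loc = e_city_t1 / e_city_t0 - 1.0             # city per-industry rates

national_share = e_city_t0 * g_nat
industry_mix = e_city_t0 * (g_ind - g_nat)
regional_shift = e_city_t0 * (g_loc - g_ind)    # the local "structural shift"

for i, name in enumerate(industries):
    print(f"{name:13s} NS={national_share[i]:+7.1f} IM={industry_mix[i]:+7.1f} "
          f"RS={regional_shift[i]:+7.1f}")
# Sanity check: the three components sum to the observed change.
assert np.allclose(national_share + industry_mix + regional_shift,
                   e_city_t1 - e_city_t0)
```

A positive regional-shift component in innovation-intensive industries is the kind of local signal the study interprets as the foundation of industrially innovational development.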
Analysis of Covariance: Is It the Appropriate Model to Study Change?
ERIC Educational Resources Information Center
Marston, Paul T.; Borich, Gary D.
The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true-score analysis produced a large number of Type-I errors. When corrected for this error, the method showed the least power of the four. This outcome was clearly the result of the…
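The kind of comparison the study describes can be reproduced in outline: simulate pre/post scores with measurement error under a true null and count rejections for each analysis. The reliability, sample sizes, and the restriction to two of the four methods below are illustrative assumptions:

```python
# Type-I error comparison sketch under a true null (no treatment effect):
# raw-gain t-test vs a covariance (ANCOVA-style) analysis of post on pre.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_per_group, reps, alpha = 30, 2000, 0.05
rej_gain = rej_ancova = 0

for _ in range(reps):
    true = rng.normal(50, 10, 2 * n_per_group)     # latent ability
    pre = true + rng.normal(0, 5, true.size)       # unreliable measurement
    post = true + rng.normal(0, 5, true.size)      # no treatment effect
    g = np.repeat([0, 1], n_per_group)             # random assignment

    # Raw gain: t-test on (post - pre) between the two groups.
    gain = post - pre
    _, p1 = stats.ttest_ind(gain[g == 0], gain[g == 1])
    rej_gain += p1 < alpha

    # Covariance approach: regress post on pre and group; test group term.
    X = np.column_stack([np.ones(true.size), pre, g])
    beta, res, *_ = np.linalg.lstsq(X, post, rcond=None)
    dof = true.size - 3
    cov_b = (res[0] / dof) * np.linalg.inv(X.T @ X)
    t = beta[2] / np.sqrt(cov_b[2, 2])
    rej_ancova += 2 * stats.t.sf(abs(t), dof) < alpha

print(f"Type-I rate, raw gain: {rej_gain / reps:.3f}")
print(f"Type-I rate, ANCOVA:   {rej_ancova / reps:.3f}")
```

With random assignment both rates should sit near the nominal 5%; the study's finding concerns how true-score analysis departs from this under its correction, which this sketch does not attempt to reproduce.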
Likelihood inference of non-constant diversification rates with incomplete taxon sampling.
Höhna, Sebastian
2014-01-01
Large-scale phylogenies provide a valuable resource for studying background diversification rates and for investigating whether those rates have changed over time. Unfortunately, most large-scale, dated phylogenies are sparsely sampled (fewer than 5% of the described species), and taxon sampling is not uniform. Instead, taxa are frequently sampled to obtain at least one representative per subgroup (e.g. family) and thus to maximize diversity (diversified sampling). So far, such complications have been ignored, potentially biasing the conclusions reached. In this study I derive the likelihood of a birth-death process with non-constant (time-dependent) diversification rates and diversified taxon sampling. Using simulations, I test whether the true parameters and the sampling method can be recovered when the trees are small or medium-sized (fewer than 200 taxa). The results show that the diversification rates can be inferred, and that the estimates are unbiased for large trees but biased for small trees (fewer than 50 taxa). Furthermore, model selection by means of Akaike's Information Criterion favors the true model if the true rates differ sufficiently from those of the alternative models (e.g., a birth-death model is favored over a pure-birth model when the extinction rate is large). Finally, I applied six diversification-rate models, ranging from a constant-rate pure-birth process to a birth-death process with decreasing speciation rate but excluding rate-shift models, to three large-scale empirical phylogenies (ants, mammals and snakes, with 149, 164 and 41 sampled species, respectively). All three phylogenies were constructed by diversified taxon sampling, as stated by their authors; however, only the snake phylogeny supported diversified sampling. Moreover, a parametric bootstrap test revealed that none of the tested models provided a good fit to the observed data: model assumptions, such as homogeneous rates across species or the absence of rate shifts, appear to be violated.
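For orientation, the constant-rate, uniform-sampling special case of such a likelihood can be sketched compactly. The form below follows Stadler's (2009) ρ-sampled birth-death process, conditioned on the crown age and survival of both root lineages; it is not the time-dependent, diversified-sampling likelihood derived in the paper, and the branching times are toy values:

```python
# Constant-rate, uniformly-sampled birth-death log-likelihood sketch
# (uniform sampling fraction rho, conditioned on crown age and survival
# of both root lineages). Branching times below are invented.
import numpy as np
from scipy.optimize import minimize

x = np.array([40.0, 33.0, 21.0, 15.0, 8.0, 3.0])  # branching times (Myr); x[0] = crown age
rho = 0.05                                        # fraction of species sampled

def p0(t, lam, mu):
    # P(a lineage of age t leaves no sampled descendants).
    r = lam - mu
    d = rho * lam + (lam * (1 - rho) - mu) * np.exp(-r * t)
    return 1.0 - rho * r / d

def p1(t, lam, mu):
    # P(a lineage of age t leaves exactly one sampled descendant).
    r = lam - mu
    d = rho * lam + (lam * (1 - rho) - mu) * np.exp(-r * t)
    return rho * r**2 * np.exp(-r * t) / d**2

def negloglik(params):
    lam, mu = np.exp(params)              # optimize on the log scale
    if mu >= lam:                         # sketch assumes lam > mu
        return np.inf
    n = x.size + 1                        # number of sampled tips
    ll = ((n - 2) * np.log(lam)
          + 2 * np.log(p1(x[0], lam, mu)) - 2 * np.log(1.0 - p0(x[0], lam, mu))
          + np.log(p1(x[1:], lam, mu)).sum())
    return -ll

fit = minimize(negloglik, np.log([0.1, 0.05]), method="Nelder-Mead")
lam_hat, mu_hat = np.exp(fit.x)
print(f"lambda = {lam_hat:.3f}, mu = {mu_hat:.3f} /Myr; AIC = {2*fit.fun + 4:.1f}")
```

Comparing such AIC values across nested rate models is the model-selection step described above; the paper's contribution is to replace the uniform-ρ terms with ones appropriate to diversified sampling and time-dependent rates.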