Sample records for the query "predict large scale"

  1. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    PubMed

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale with other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with that of other published ELVO scales. The PASS scale comprises 3 NIHSS items: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. The scale was derived on 2/3 of the test cohort and showed an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores yielded sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale performed on par with other published ELVO scales despite being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
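
    As an illustration of how such a three-item scale could be scored in triage software, the sketch below counts abnormal findings on the three PASS items and applies the ≥2 cut point reported above. The function names and input format are assumptions for illustration; only the three items and the cut point come from the record.

    ```python
    # Minimal sketch (function names and input format are assumptions): score the
    # Prehospital Acute Stroke Severity (PASS) scale and flag a suspected emergent
    # large vessel occlusion (ELVO) using the >= 2 cut point reported above.

    def pass_score(loc_abnormal: bool, gaze_palsy: bool, arm_weakness: bool) -> int:
        """Count abnormal findings on the three PASS items: level of consciousness
        (month/age), gaze palsy/deviation, and arm weakness."""
        return sum([loc_abnormal, gaze_palsy, arm_weakness])

    def suspect_elvo(score: int, cut_point: int = 2) -> bool:
        """Apply the cut point of >= 2 abnormal items."""
        return score >= cut_point

    score = pass_score(loc_abnormal=True, gaze_palsy=True, arm_weakness=False)
    print(score, suspect_elvo(score))   # 2 True
    ```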

  2. Latest COBE results, large-scale data, and predictions of inflation

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    One of the predictions of the inflationary scenario of cosmology is that the initial spectrum of primordial density fluctuations (PDFs) must have the Harrison-Zeldovich (HZ) form. Here, in order to test the inflationary scenario, predictions of the microwave background radiation (MBR) anisotropies measured by COBE are computed based on large-scale data for the universe, assuming Omega = 1 and the HZ spectrum on large scales. The minimal scale at which the spectrum can first enter the HZ regime is found, constraining the power spectrum of the mass distribution to within the bias factor b. This factor is determined and used to predict parameters of the MBR anisotropy field. For the spectrum of PDFs that reaches the HZ regime immediately after the scale accessible to the APM catalog, the numbers on MBR anisotropies are consistent with the COBE detections, and thus standard inflation can indeed be considered a viable theory for the origin of the large-scale structure in the universe.

  3. Spatiotemporal property and predictability of large-scale human mobility

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied because of their potential applications in human behavior prediction, recommendation, and control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile-tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high potential for prediction. Furthermore, a scale-free mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.

  4. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories

    NASA Astrophysics Data System (ADS)

    Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  5. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories.

    PubMed

    Park, Kiwan; Blackman, Eric G; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  6. Large-scale linear programs in planning and prediction.

    DOT National Transportation Integrated Search

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  7. Attribution of Large-Scale Climate Patterns to Seasonal Peak-Flow and Prospects for Prediction Globally

    NASA Astrophysics Data System (ADS)

    Lee, Donghoon; Ward, Philip; Block, Paul

    2018-02-01

    Flood-related fatalities and impacts on society surpass those from all other natural disasters globally. While the inclusion of large-scale climate drivers in streamflow (or high-flow) prediction has been widely studied, an explicit link to global-scale long-lead prediction is lacking; such a link could improve understanding of potential flood propensity. Here we attribute seasonal peak-flow to large-scale climate patterns, including the El Niño Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO), and Atlantic Multidecadal Oscillation (AMO), using streamflow station observations and simulations from PCR-GLOBWB, a global-scale hydrologic model. Statistically significantly correlated climate patterns and streamflow autocorrelation are subsequently applied as predictors to build a global-scale season-ahead prediction model, with prediction performance evaluated by the mean squared error skill score (MSESS) and the categorical Gerrity skill score (GSS). Globally, fair-to-good prediction skill (20% ≤ MSESS and 0.2 ≤ GSS) is evident for a number of locations (28% of stations and 29% of land area), most notably in data-poor regions (e.g., West and Central Africa). The persistence of such relevant climate patterns can improve understanding of the propensity for floods at the seasonal scale. The prediction approach developed here lays the groundwork for further improving local-scale seasonal peak-flow prediction by identifying relevant global-scale climate patterns. This is especially attractive for regions with limited observations and/or little capacity to develop flood early warning systems.
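
    The mean squared error skill score used above has a simple closed form, MSESS = 1 - MSE(forecast)/MSE(climatology). Below is a minimal sketch with made-up peak-flow numbers; the GSS and the actual station data are not reproduced.

    ```python
    # Minimal sketch of the mean squared error skill score (MSESS) referenced above,
    # with climatology (the observed mean) as the reference forecast. The peak-flow
    # numbers are made up for illustration.
    import numpy as np

    def msess(obs, fcst):
        """MSESS = 1 - MSE(forecast) / MSE(climatology); 1 is perfect, <= 0 is no skill."""
        obs, fcst = np.asarray(obs, float), np.asarray(fcst, float)
        mse_fcst = np.mean((fcst - obs) ** 2)
        mse_clim = np.mean((obs - obs.mean()) ** 2)
        return 1.0 - mse_fcst / mse_clim

    peak_obs = [120.0, 95.0, 210.0, 160.0, 140.0, 180.0]    # hypothetical seasonal peak flows
    peak_fcst = [130.0, 90.0, 190.0, 150.0, 150.0, 170.0]
    skill = msess(peak_obs, peak_fcst)
    print(f"MSESS = {skill:.2f}; 'fair-to-good' if >= 0.20: {skill >= 0.20}")
    ```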

  8. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100 member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalence, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  9. Wave models for turbulent free shear flows

    NASA Technical Reports Server (NTRS)

    Liou, W. W.; Morris, P. J.

    1991-01-01

    New predictive closure models for turbulent free shear flows are presented. They are based on an instability wave description of the dominant large-scale structures in these flows using a quasi-linear theory. Three models were developed to study the structural dynamics of turbulent motions of different scales in free shear flows. The local characteristics of the large-scale motions are described using linear theory. Their amplitude is determined from an energy integral analysis. The models were applied to the study of an incompressible free mixing layer. In all cases, predictions are made for the development of the mean flow field. In the last model, predictions of the time-dependent motion of the large-scale structure of the mixing region are made. The predictions show good agreement with experimental observations.

  10. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    PubMed

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.
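
    For readers unfamiliar with the model family named above, the sketch below fits a generic supervised, stochastic gradient boosting regressor on synthetic data; it is not the Envision pipeline or its features, only an illustration of the technique.

    ```python
    # Generic sketch of supervised, stochastic gradient boosting for a continuous
    # variant-effect score; NOT the Envision pipeline or its features, only the model
    # family named in the record, trained on synthetic data.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 12))                                  # stand-in variant features
    y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.3, size=2000)   # stand-in effect scores

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor(subsample=0.8,   # subsampling makes the boosting "stochastic"
                                      n_estimators=300, learning_rate=0.05, random_state=0)
    model.fit(X_tr, y_tr)
    print("held-out R^2:", round(model.score(X_te, y_te), 3))
    ```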

  11. Constraints on the power spectrum of the primordial density field from large-scale data - Microwave background and predictions of inflation

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    It is shown here that, by using galaxy catalog correlation data as input, measurements of microwave background radiation (MBR) anisotropies should soon be able to test two of the inflationary scenario's most basic predictions: (1) that the primordial density fluctuations produced were scale-invariant and (2) that the universe is flat. They should also be able to detect anisotropies of large-scale structure formed by gravitational evolution of density fluctuations present at the last scattering epoch. Computations of MBR anisotropies corresponding to the minimum of the large-scale variance of the MBR anisotropy are presented which favor an open universe with P(k) significantly different from the Harrison-Zeldovich spectrum predicted by most inflationary models.

  12. Linking crop yield anomalies to large-scale atmospheric circulation in Europe.

    PubMed

    Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J

    2017-06-15

    Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step to assess the resilience of agricultural systems to changing climate conditions. This study investigates the links between the large-scale atmospheric circulation and crop yields in Europe, providing the basis to develop seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. As for grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the large-scale atmospheric variability impact on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than the one of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of derived statistical models exploiting the dynamical seasonal forecast of large-scale atmospheric circulation.
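
    A minimal sketch of the kind of statistical model described above: regress yield anomalies on the four circulation indices and report the explained variance. Index values and yields here are synthetic placeholders, not the data used in the study.

    ```python
    # Sketch of the statistical model family described above: regress yield anomalies
    # on four circulation indices (NAO, EA, SCA, EA-WR) and report explained variance.
    # All values are synthetic placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    n_years = 40
    indices = rng.normal(size=(n_years, 4))     # columns: NAO, EA, SCA, EA-WR
    yield_anom = 0.6 * indices[:, 0] - 0.4 * indices[:, 2] + rng.normal(scale=0.8, size=n_years)

    model = LinearRegression().fit(indices, yield_anom)
    r2 = model.score(indices, yield_anom)
    print(f"explained inter-annual yield variability: {100 * r2:.0f}%")
    ```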

  13. Large-scale model quality assessment for improving protein tertiary structure prediction.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-06-15

    Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It unprecedentedly applied 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e. averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
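
    The consensus idea can be illustrated with a small sketch: average the score each QA method assigns to every candidate model and rank models by the consensus score. The method names and scores below are placeholders, not the 14 QA methods used by MULTICOM.

    ```python
    # Sketch of the consensus-ranking idea: average the score each QA method assigns
    # to every candidate model and rank models by the mean. Method names and scores
    # are placeholders, not the 14 QA methods used by MULTICOM.
    import numpy as np

    qa_scores = {                    # rows: QA methods, columns: candidate structural models
        "qa_method_1": [0.62, 0.71, 0.55],
        "qa_method_2": [0.60, 0.68, 0.59],
        "qa_method_3": [0.58, 0.74, 0.52],
    }
    score_matrix = np.array(list(qa_scores.values()))
    consensus = score_matrix.mean(axis=0)        # consensus score per model
    ranking = np.argsort(consensus)[::-1]        # indices of models, best first
    print("model ranking (best first):", ranking, "consensus scores:", consensus[ranking])
    ```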

  14. Large Eddy Simulation in the Computation of Jet Noise

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Goldstein, M. E.; Povinelli, L. A.; Hayder, M. E.; Turkel, E.

    1999-01-01

    Noise can, in principle, be predicted by solving the full (time-dependent) compressible Navier-Stokes equations (FCNSE) with the computational domain extended to the far field. The fluctuating near field of the jet produces propagating pressure waves that generate far-field sound, so the fluctuating flow field as a function of time is needed to calculate sound from first principles. However, at the high Reynolds numbers of technological interest, turbulence contains a large range of scales, and direct numerical simulation (DNS) cannot capture the small scales of turbulence; solving the full equations out to the far field is therefore not feasible. Because the large scales are more efficient than the small scales in radiating sound, the emphasis is on calculating the sound radiated by the large scales.

  15. The predictability of consumer visitation patterns

    NASA Astrophysics Data System (ADS)

    Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban

    2013-04-01

    We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population.
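
    A minimal sketch of the Markov model mentioned above, assuming a simple visit sequence: estimate first-order transition counts and predict the next merchant as the most frequent successor of the current one. The population-level priors used in the study to improve accuracy are omitted.

    ```python
    # Sketch of a first-order Markov model over merchant locations: estimate transition
    # counts from a visit sequence and predict the next location as the most frequent
    # successor of the current one. The sequence is synthetic; the population-level
    # priors used in the study are omitted.
    from collections import Counter, defaultdict

    visits = ["grocer", "cafe", "grocer", "gas", "cafe", "grocer", "cafe", "grocer"]

    transitions = defaultdict(Counter)
    for current, nxt in zip(visits, visits[1:]):
        transitions[current][nxt] += 1

    def predict_next(location):
        """Most frequent next merchant given the current one (None if unseen)."""
        counts = transitions[location]
        return counts.most_common(1)[0][0] if counts else None

    print(predict_next("cafe"))   # -> 'grocer' for this toy sequence
    ```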

  16. The predictability of consumer visitation patterns

    PubMed Central

    Krumme, Coco; Llorente, Alejandro; Cebrian, Manuel; Pentland, Alex ("Sandy"); Moro, Esteban

    2013-01-01

    We consider hundreds of thousands of individual economic transactions to ask: how predictable are consumers in their merchant visitation patterns? Our results suggest that, in the long-run, much of our seemingly elective activity is actually highly predictable. Notwithstanding a wide range of individual preferences, shoppers share regularities in how they visit merchant locations over time. Yet while aggregate behavior is largely predictable, the interleaving of shopping events introduces important stochastic elements at short time scales. These short- and long-scale patterns suggest a theoretical upper bound on predictability, and describe the accuracy of a Markov model in predicting a person's next location. We incorporate population-level transition probabilities in the predictive models, and find that in many cases these improve accuracy. While our results point to the elusiveness of precise predictions about where a person will go next, they suggest the existence, at large time-scales, of regularities across the population. PMID:23598917

  17. Predicting the effect of fire on large-scale vegetation patterns in North America.

    Treesearch

    Donald McKenzie; David L. Peterson; Ernesto. Alvarado

    1996-01-01

    Changes in fire regimes are expected across North America in response to anticipated global climatic changes. Potential changes in large-scale vegetation patterns are predicted as a result of altered fire frequencies. A new vegetation classification was developed by condensing Kuchler potential natural vegetation types into aggregated types that are relatively...

  18. Prediction of Indian Summer-Monsoon Onset Variability: A Season in Advance.

    PubMed

    Pradhan, Maheswar; Rao, A Suryachandra; Srivastava, Ankur; Dakate, Ashish; Salunke, Kiran; Shameera, K S

    2017-10-27

    Monsoon onset is an inherent transient phenomenon of the Indian Summer Monsoon, and it was never envisaged that this transience could be predicted at long lead times. Though onset is precipitous, its variability exhibits strong teleconnections with large-scale forcing such as ENSO and IOD and hence may be predictable. Despite the tremendous skill achieved by state-of-the-art models in predicting such large-scale processes, the prediction of monsoon onset variability by the models is still limited to just 2-3 weeks in advance. Using an objective definition of onset in a global coupled ocean-atmosphere model, it is shown that skillful prediction of onset variability is feasible in a seasonal prediction framework. The improved representation of not only the large-scale processes but also the synoptic and intraseasonal features during the evolution of monsoon onset underlies the skillful simulation of monsoon onset variability. The changes observed in convection, tropospheric circulation and moisture availability prior to and after the onset are evident in the model simulations, which results in a high hit rate for early/delayed monsoon onset in the high-resolution model.

  19. Analysis on the Critical Rainfall Value For Predicting Large Scale Landslides Caused by Heavy Rainfall In Taiwan.

    NASA Astrophysics Data System (ADS)

    Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey

    2017-04-01

    The accumulated rainfall brought by Typhoon Morakot in August 2009 exceeded 2,900 mm within 3 consecutive days. This extreme rainfall event induced very serious landslides and sediment-related disasters. A satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas in southern Taiwan were characterized by disaster type, scale, topography, major bedrock formations and geologic structures during this period of extremely heavy rainfall. Characteristics and mechanisms of large-scale landslides are compiled from field investigations integrated with GPS/GIS/RS techniques. To reduce the risk of large-scale landslides on slope land, a slope-land conservation strategy and a critical rainfall database should be established and implemented as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanisms of large-scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under the extreme climate conditions of the past 10 years are addressed by this research. The results are intended to serve as a warning system for predicting large-scale landslides in southern Taiwan. Keywords: heavy rainfall, large-scale landslides, critical rainfall value.

  20. Prediction of monthly rainfall on homogeneous monsoon regions of India based on large scale circulation patterns using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Kashid, Satishkumar S.; Maity, Rajib

    2012-08-01

    Prediction of Indian Summer Monsoon Rainfall (ISMR) is of vital importance for the Indian economy, and it has remained a great challenge for hydro-meteorologists due to inherent complexities in the climatic system. Large-scale atmospheric circulation patterns from the tropical Pacific Ocean (ENSO) and the tropical Indian Ocean (EQUINOO) are established to influence the Indian Summer Monsoon Rainfall. The information from these two large-scale atmospheric circulation patterns, in terms of their indices, is used to model the complex relationship between Indian Summer Monsoon Rainfall and the ENSO and EQUINOO indices. However, extracting the signal from such large-scale indices for modeling such complex systems is significantly difficult. Rainfall predictions have been made for 'All India' as one unit, as well as for five 'homogeneous monsoon regions of India' defined by the Indian Institute of Tropical Meteorology. The 'artificial intelligence' tool Genetic Programming (GP) is employed for modeling this problem. The Genetic Programming approach is found to capture the complex relationship between monthly Indian Summer Monsoon Rainfall and the large-scale atmospheric circulation pattern indices ENSO and EQUINOO. The findings of this study indicate that GP-derived monthly rainfall forecasting models that use large-scale atmospheric circulation information are successful in predicting All India Summer Monsoon Rainfall with a correlation coefficient as high as 0.866, which is attractive for such a complex system. A separate analysis is carried out for All India Summer Monsoon Rainfall, for India as one unit and for the five homogeneous monsoon regions, based on ENSO and EQUINOO indices of March, April and May only, performed at the end of May. In this case, All India Summer Monsoon Rainfall could be predicted with a correlation coefficient of 0.70, with somewhat lower correlation coefficient (C.C.) values for the different 'homogeneous monsoon regions'.
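
    A sketch of genetic-programming regression in the spirit of the study, using the gplearn library as a stand-in (the record does not name a GP implementation); the two inputs stand in for monthly ENSO and EQUINOO indices and the rainfall series is synthetic.

    ```python
    # Sketch of genetic-programming regression using the gplearn library (an assumption;
    # the study does not name its GP implementation). The two inputs stand in for monthly
    # ENSO and EQUINOO indices; the rainfall series is synthetic.
    import numpy as np
    from gplearn.genetic import SymbolicRegressor

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 2))                # columns: ENSO index, EQUINOO index
    rainfall = 5.0 - 2.0 * X[:, 0] + 1.5 * X[:, 0] * X[:, 1] + rng.normal(scale=0.5, size=200)

    gp = SymbolicRegressor(population_size=500, generations=20,
                           function_set=("add", "sub", "mul", "div"), random_state=0)
    gp.fit(X, rainfall)
    print(gp._program)                           # evolved symbolic expression for rainfall
    ```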

  1. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.

  2. Data assimilation in optimizing and integrating soil and water quality water model predictions at different scales

    USDA-ARS?s Scientific Manuscript database

    Relevant data about subsurface water flow and solute transport at relatively large scales that are of interest to the public are inherently laborious and in most cases simply impossible to obtain. Upscaling in which fine-scale models and data are used to predict changes at the coarser scales is the...

  3. Improving our fundamental understanding of the role of aerosol-cloud interactions in the climate system.

    PubMed

    Seinfeld, John H; Bretherton, Christopher; Carslaw, Kenneth S; Coe, Hugh; DeMott, Paul J; Dunlea, Edward J; Feingold, Graham; Ghan, Steven; Guenther, Alex B; Kahn, Ralph; Kraucunas, Ian; Kreidenweis, Sonia M; Molina, Mario J; Nenes, Athanasios; Penner, Joyce E; Prather, Kimberly A; Ramanathan, V; Ramaswamy, Venkatachalam; Rasch, Philip J; Ravishankara, A R; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert

    2016-05-24

    The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.

  4. Improving Our Fundamental Understanding of the Role of Aerosol Cloud Interactions in the Climate System

    NASA Technical Reports Server (NTRS)

    Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kahn, Ralph

    2016-01-01

    The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth's clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.

  5. Improving our fundamental understanding of the role of aerosol-cloud interactions in the climate system

    DOE PAGES

    Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; ...

    2016-05-24

    The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from pre-industrial time. General Circulation Models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol-cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol-cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. Lastly, we suggest strategies for improving estimates of aerosol-cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty.

  6. Improving our fundamental understanding of the role of aerosol−cloud interactions in the climate system

    PubMed Central

    Seinfeld, John H.; Bretherton, Christopher; Carslaw, Kenneth S.; Coe, Hugh; DeMott, Paul J.; Dunlea, Edward J.; Feingold, Graham; Ghan, Steven; Guenther, Alex B.; Kraucunas, Ian; Molina, Mario J.; Nenes, Athanasios; Penner, Joyce E.; Prather, Kimberly A.; Ramanathan, V.; Ramaswamy, Venkatachalam; Rasch, Philip J.; Ravishankara, A. R.; Rosenfeld, Daniel; Stephens, Graeme; Wood, Robert

    2016-01-01

    The effect of an increase in atmospheric aerosol concentrations on the distribution and radiative properties of Earth’s clouds is the most uncertain component of the overall global radiative forcing from preindustrial time. General circulation models (GCMs) are the tool for predicting future climate, but the treatment of aerosols, clouds, and aerosol−cloud radiative effects carries large uncertainties that directly affect GCM predictions, such as climate sensitivity. Predictions are hampered by the large range of scales of interaction between various components that need to be captured. Observation systems (remote sensing, in situ) are increasingly being used to constrain predictions, but significant challenges exist, to some extent because of the large range of scales and the fact that the various measuring systems tend to address different scales. Fine-scale models represent clouds, aerosols, and aerosol−cloud interactions with high fidelity but do not include interactions with the larger scale and are therefore limited from a climatic point of view. We suggest strategies for improving estimates of aerosol−cloud relationships in climate models, for new remote sensing and in situ measurements, and for quantifying and reducing model uncertainty. PMID:27222566

  7. Classification Accuracy of Oral Reading Fluency and Maze in Predicting Performance on Large-Scale Reading Assessments

    ERIC Educational Resources Information Center

    Decker, Dawn M.; Hixson, Michael D.; Shaw, Amber; Johnson, Gloria

    2014-01-01

    The purpose of this study was to examine whether using a multiple-measure framework yielded better classification accuracy than oral reading fluency (ORF) or maze alone in predicting pass/fail rates for middle-school students on a large-scale reading assessment. Participants were 178 students in Grades 7 and 8 from a Midwestern school district.…

  8. A dynamical systems approach to studying midlatitude weather extremes

    NASA Astrophysics Data System (ADS)

    Messori, Gabriele; Caballero, Rodrigo; Faranda, Davide

    2017-04-01

    Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. The ability to predict these events is therefore a topic of crucial importance. Here we propose a novel predictability pathway for extreme events, by building upon recent advances in dynamical systems theory. We show that simple dynamical systems metrics can be used to identify sets of large-scale atmospheric flow patterns with similar spatial structure and temporal evolution on time scales of several days to a week. In regions where these patterns favor extreme weather, they afford a particularly good predictability of the extremes. We specifically test this technique on the atmospheric circulation in the North Atlantic region, where it provides predictability of large-scale wintertime surface temperature extremes in Europe up to 1 week in advance.

  9. Statistical Learning Theory for High Dimensional Prediction: Application to Criterion-Keyed Scale Development

    PubMed Central

    Chapman, Benjamin P.; Weiss, Alexander; Duberstein, Paul

    2016-01-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in “big data” problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how three common SLT algorithms–Supervised Principal Components, Regularization, and Boosting—can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach—or perhaps because of them–SLT methods may hold value as a statistically rigorous approach to exploratory regression. PMID:27454257
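
    The core idea described above, minimizing expected prediction error by cross-validation rather than maximizing in-sample fit, can be sketched with one of the named algorithm families (regularization). The item pool and outcome below are synthetic, and a continuous outcome stands in for the mortality endpoint used in the study.

    ```python
    # Sketch of minimizing expected prediction error via cross-validation, using the
    # "regularization" family named above: LassoCV picks the penalty that predicts
    # held-out outcomes best over a large item pool. Item responses and the (continuous)
    # outcome are synthetic stand-ins, not the mortality data used in the study.
    import numpy as np
    from sklearn.linear_model import LassoCV
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    items = rng.integers(1, 6, size=(1000, 300)).astype(float)       # 300-item pool, Likert 1-5
    outcome = items[:, 5] - items[:, 42] + rng.normal(scale=2.0, size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(items, outcome, random_state=0)
    model = LassoCV(cv=10).fit(X_tr, y_tr)       # 10-fold cross-validation selects the penalty
    kept = np.flatnonzero(model.coef_)           # items retained in the "keyed scale"
    print(len(kept), "items kept; held-out R^2 =", round(model.score(X_te, y_te), 3))
    ```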

  10. Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction

    NASA Technical Reports Server (NTRS)

    Li, Zhijin; Chao, Yi; Li, P. Peggy

    2012-01-01

    A multi-scale three-dimensional variational data assimilation system (MS-3DVAR) has been formulated and the associated software system has been developed for improving high-resolution coastal ocean prediction. This system helps improve coastal ocean prediction skill, and has been used in support of operational coastal ocean forecasting systems and field experiments. The system has been developed to improve the capability of data assimilation for assimilating, simultaneously and effectively, sparse vertical profiles and high-resolution remote sensing surface measurements into coastal ocean models, as well as for constraining model biases. In this system, the cost function is decomposed into two separate units for the large- and small-scale components, respectively. As such, data assimilation is implemented sequentially from large to small scales, the background error covariance is constructed to be scale-dependent, and a scale-dependent dynamic balance is incorporated. This scheme allows the large scales and model bias to be constrained effectively by assimilating sparse vertical profiles, and the small scales by assimilating high-resolution surface measurements. MS-3DVAR enhances the capability of traditional 3DVAR for assimilating highly heterogeneously distributed observations, such as along-track satellite altimetry data, and in particular maximizes the extraction of information from limited numbers of vertical profile observations.
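
    Schematically, the two-unit cost function described above can be written as a pair of sequential 3DVAR minimizations with scale-dependent background error covariances (a generic form, not the exact operational MS-3DVAR equations):

    ```latex
    % Generic two-scale 3DVAR sketch (an assumption about the form, not the exact
    % MS-3DVAR equations): delta x_L and delta x_S are the large- and small-scale
    % analysis increments, B_L and B_S the scale-dependent background error covariances,
    % H the observation operators, R the observation error covariance, d_L the innovation,
    % and d_S the residual innovation after the large-scale update.
    \begin{align}
      J_L(\delta x_L) &= \tfrac{1}{2}\,\delta x_L^{\mathsf{T}} B_L^{-1}\,\delta x_L
        + \tfrac{1}{2}\,(H_L\,\delta x_L - d_L)^{\mathsf{T}} R^{-1} (H_L\,\delta x_L - d_L), \\
      J_S(\delta x_S) &= \tfrac{1}{2}\,\delta x_S^{\mathsf{T}} B_S^{-1}\,\delta x_S
        + \tfrac{1}{2}\,(H_S\,\delta x_S - d_S)^{\mathsf{T}} R^{-1} (H_S\,\delta x_S - d_S).
    \end{align}
    ```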

  11. Drivers and seasonal predictability of extreme wind speeds in the ECMWF System 4 and a statistical model

    NASA Astrophysics Data System (ADS)

    Walz, M. A.; Donat, M.; Leckebusch, G. C.

    2017-12-01

    As extreme wind speeds are responsible for large socio-economic losses in Europe, a skillful prediction would be of great benefit for disaster prevention as well as for the actuarial community. Here we evaluate patterns of large-scale atmospheric variability and the seasonal predictability of extreme wind speeds (e.g. >95th percentile) in the European domain in the dynamical seasonal forecast system ECMWF System 4, and compare to the predictability based on a statistical prediction model. The dominant patterns of atmospheric variability show distinct differences between reanalysis and ECMWF System 4, with most patterns in System 4 extended downstream in comparison to ERA-Interim. The dissimilar manifestations of the patterns within the two models lead to substantially different drivers associated with the occurrence of extreme winds in the respective model. While the ECMWF System 4 is shown to provide some predictive power over Scandinavia and the eastern Atlantic, only very few grid cells in the European domain have significant correlations for extreme wind speeds in System 4 compared to ERA-Interim. In contrast, a statistical model predicts extreme wind speeds during boreal winter in better agreement with the observations. Our results suggest that System 4 does not seem to capture the potential predictability of extreme winds that exists in the real world, and therefore fails to provide reliable seasonal predictions for lead months 2-4. This is likely related to the unrealistic representation of large-scale patterns of atmospheric variability. Hence our study points to potential improvements of dynamical prediction skill by improving the simulation of large-scale atmospheric dynamics.

  12. Some aspects of control of a large-scale dynamic system

    NASA Technical Reports Server (NTRS)

    Aoki, M.

    1975-01-01

    Techniques of predicting and/or controlling the dynamic behavior of large scale systems are discussed in terms of decentralized decision making. Topics discussed include: (1) control of large scale systems by dynamic team with delayed information sharing; (2) dynamic resource allocation problems by a team (hierarchical structure with a coordinator); and (3) some problems related to the construction of a model of reduced dimension.

  13. Prediction of Large Vessel Occlusions in Acute Stroke: National Institute of Health Stroke Scale Is Hard to Beat.

    PubMed

    Vanacker, Peter; Heldner, Mirjam R; Amiguet, Michael; Faouzi, Mohamed; Cras, Patrick; Ntaios, George; Arnold, Marcel; Mattle, Heinrich P; Gralla, Jan; Fischer, Urs; Michel, Patrik

    2016-06-01

    Endovascular treatment for acute ischemic stroke with a large vessel occlusion was recently shown to be effective. We aimed to develop a score capable of predicting large vessel occlusion eligible for endovascular treatment during early hospital management. This was a retrospective cohort study conducted at two tertiary Swiss stroke centers, with no study interventions. Consecutive acute ischemic stroke patients (1,645 patients; Acute STroke Registry and Analysis of Lausanne registry), who had CT angiography within 6 and 12 hours of symptom onset, were categorized according to the occlusion site. Demographic and clinical information was used in logistic regression analysis to derive predictors of large vessel occlusion (defined as intracranial carotid, basilar, and M1 segment of middle cerebral artery occlusions). Based on the logistic regression coefficients, an integer score was created and validated internally and externally (848 patients; Bernese Stroke Registry). Large vessel occlusions were present in 316 patients (21%) in the derivation cohort and 566 (28%) in the external validation cohort. Five predictors added significantly to the score: National Institute of Health Stroke Scale at admission, hemineglect, female sex, atrial fibrillation, and no history of stroke and no prestroke handicap (modified Rankin Scale score < 2). Diagnostic accuracy in the internal and external validation cohorts was excellent (area under the receiver operating characteristic curve, 0.84 for both). The score performed slightly better than the National Institute of Health Stroke Scale alone regarding prediction error (Wilcoxon signed rank test, p < 0.001) and regarding discriminatory power in the derivation and pooled cohorts (area under the receiver operating characteristic curve, 0.81 vs 0.80; DeLong test, p = 0.02). Our score accurately predicts the presence of emergent large vessel occlusions, which are eligible for endovascular treatment. However, incorporation of additional demographic and historical information available on hospital arrival provides minimal incremental predictive value compared with the National Institute of Health Stroke Scale alone.
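
    The derivation step described above, turning logistic regression coefficients into an integer score, can be sketched as follows. Predictor effect sizes and data are simulated for illustration and do not reproduce the published score.

    ```python
    # Sketch of turning logistic regression coefficients into an integer bedside score:
    # fit on admission predictors, then express each coefficient in integer points
    # relative to the smallest effect. Data and effect sizes are simulated; this does
    # not reproduce the published score.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    n = 1500
    X = np.column_stack([rng.integers(0, 30, n),      # NIHSS at admission
                         rng.integers(0, 2, n),       # hemineglect
                         rng.integers(0, 2, n),       # female sex
                         rng.integers(0, 2, n),       # atrial fibrillation
                         rng.integers(0, 2, n)])      # no prior stroke / no prestroke handicap
    logit = -4.0 + 0.20 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] + 0.5 * X[:, 3] + 0.4 * X[:, 4]
    lvo = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # simulated large vessel occlusion

    fit = LogisticRegression(max_iter=1000).fit(X, lvo)
    points = np.round(fit.coef_[0] / np.abs(fit.coef_[0]).min()).astype(int)
    print("integer points per predictor:", points)       # total score = sum(points * predictor values)
    ```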

  14. Predicting the propagation of concentration and saturation fronts in fixed-bed filters.

    PubMed

    Callery, O; Healy, M G

    2017-10-15

    The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents with the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess the suitability of novel media for use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests were used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful, and just 24-36 h of experimental data from the small-scale column experiments provided sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
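
    The record does not name the authors' breakthrough model, so as a hedged illustration the sketch below uses one widely applied fixed-bed model (the Thomas model) to generate an effluent breakthrough curve from column parameters; all parameter values are invented.

    ```python
    # Illustration only: the Thomas model, one widely used fixed-bed breakthrough model
    # (the record does not name the authors' formulation). It predicts the effluent/influent
    # concentration ratio C/C0 as a function of treated volume; parameter values are invented.
    import numpy as np

    def thomas_breakthrough(volume_l, k_th, q0_mg_g, mass_g, c0_mg_l, flow_l_min):
        """C/C0 = 1 / (1 + exp((k_th/Q) * (q0*m - C0*V)))."""
        exponent = (k_th / flow_l_min) * (q0_mg_g * mass_g - c0_mg_l * volume_l)
        return 1.0 / (1.0 + np.exp(exponent))

    V = np.linspace(0.0, 500.0, 6)               # cumulative treated volume (L)
    curve = thomas_breakthrough(V, k_th=0.002, q0_mg_g=5.0, mass_g=200.0,
                                c0_mg_l=2.0, flow_l_min=0.05)
    print(np.round(curve, 3))                    # breakthrough curve rises toward C/C0 = 1
    ```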

  15. Support Vector Machines Trained with Evolutionary Algorithms Employing Kernel Adatron for Large Scale Classification of Protein Structures.

    PubMed

    Arana-Daniel, Nancy; Gallegos, Alberto A; López-Franco, Carlos; Alanís, Alma Y; Morales, Jacob; López-Franco, Adriana

    2016-01-01

    With the increasing power of computers, the amount of data that can be processed in small periods of time has grown exponentially, as has the importance of classifying large-scale data efficiently. Support vector machines have shown good results classifying large amounts of high-dimensional data, such as data generated by protein structure prediction, spam recognition, medical diagnosis, optical character recognition and text classification, etc. Most state of the art approaches for large-scale learning use traditional optimization methods, such as quadratic programming or gradient descent, which makes the use of evolutionary algorithms for training support vector machines an area to be explored. The present paper proposes an approach that is simple to implement based on evolutionary algorithms and Kernel-Adatron for solving large-scale classification problems, focusing on protein structure prediction. The functional properties of proteins depend upon their three-dimensional structures. Knowing the structures of proteins is crucial for biology and can lead to improvements in areas such as medicine, agriculture and biofuels.
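
    The Kernel-Adatron component named above is a simple perceptron-like update on the kernel expansion coefficients. The sketch below shows a batch variant on a synthetic two-class problem; the evolutionary-algorithm layer proposed in the paper for training is not reproduced.

    ```python
    # Sketch of a batch Kernel-Adatron update (the classical kernel perceptron-style SVM
    # trainer named above) with an RBF kernel on a synthetic two-class problem. The
    # evolutionary-algorithm training layer proposed in the paper is not reproduced here.
    import numpy as np

    def rbf_kernel(X, gamma=0.5):
        sq = np.sum(X ** 2, axis=1)
        return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

    def kernel_adatron(K, y, eta=0.1, epochs=200):
        """alpha_i <- max(0, alpha_i + eta * (1 - y_i * sum_j alpha_j y_j K_ij))."""
        alpha = np.zeros(len(y))
        for _ in range(epochs):
            margins = y * (K @ (alpha * y))
            alpha = np.maximum(0.0, alpha + eta * (1.0 - margins))
        return alpha

    rng = np.random.default_rng(5)
    X = np.vstack([rng.normal(-1.0, 0.5, (40, 2)), rng.normal(1.0, 0.5, (40, 2))])
    y = np.array([-1.0] * 40 + [1.0] * 40)

    K = rbf_kernel(X)
    alpha = kernel_adatron(K, y)
    pred = np.sign(K @ (alpha * y))
    print("training accuracy:", float(np.mean(pred == y)))
    ```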

  16. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

    An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components: fully coupled subgrid-scale turbulence, combustion, soot and radiation models. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m long test hall facility. Turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5 and Smagorinsky constants ranging from 0.18 to 0.23 were investigated. The temperature and flow field predictions were most accurate with turbulent Prandtl and Schmidt numbers of 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
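
    For reference, the constants examined above enter the standard Smagorinsky-type subgrid-scale closure as follows (a generic form, not necessarily the authors' full in-house formulation):

    ```latex
    % Standard Smagorinsky subgrid-scale closure and the role of the constants studied
    % above: C_s is the Smagorinsky constant, Delta the filter width, and Pr_t / Sc_t
    % the turbulent Prandtl / Schmidt numbers controlling subgrid heat and species transport.
    \begin{align}
      \nu_{t} &= (C_s \Delta)^{2}\,\lvert \bar{S} \rvert,
        \qquad \lvert \bar{S} \rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \\
      \alpha_{t} &= \frac{\nu_{t}}{\mathrm{Pr}_{t}},
        \qquad D_{t} = \frac{\nu_{t}}{\mathrm{Sc}_{t}}.
    \end{align}
    ```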

  17. Building spatially-explicit model predictions for ecological condition of streams in the Pacific Northwest: An assessment of landscape variables, models, endpoints and prediction scale

    EPA Science Inventory

    While large-scale, randomized surveys estimate the percentage of a region’s streams in poor ecological condition, identifying particular stream reaches or watersheds in poor condition is an equally important goal for monitoring and management. We built predictive models of strea...

  18. Analysis of the ability of large-scale reanalysis data to define Siberian fire danger in preparation for future fire prediction

    NASA Astrophysics Data System (ADS)

    Soja, Amber; Westberg, David; Stackhouse, Paul, Jr.; McRae, Douglas; Jin, Ji-Zhong; Sukhinin, Anatoly

    2010-05-01

    Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under current climate change scenarios. Therefore, changes in fire regimes have the potential to compel ecological change, moving ecosystems more quickly towards equilibrium with a new climate. The ultimate goal of this research is to assess the viability of large-scale (1°) data for defining fire weather danger and fire regimes, so that large-scale fire weather data, like those available from current Intergovernmental Panel on Climate Change (IPCC) climate change scenarios, can be used with confidence to predict future fire regimes. In this talk, we intend to: (1) evaluate Fire Weather Indices (FWI) derived using reanalysis and interpolated station data; (2) discuss the advantages and disadvantages of using these distinct data sources; and (3) highlight established relationships between large-scale fire weather data, area burned, active fires and ecosystems burned. Specifically, the Canadian Forestry Service (CFS) Fire Weather Index (FWI) will be derived using: (1) NASA Goddard Earth Observing System version 4 (GEOS-4) large-scale reanalysis and NASA Global Precipitation Climatology Project (GPCP) data; and (2) National Climatic Data Center (NCDC) surface station-interpolated data. The FWI requires local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-to-noon) rainfall. The GEOS-4 reanalysis and NCDC station-interpolated fire weather indices are generally consistent spatially, temporally and quantitatively. Additionally, increased fire activity coincides with increased FWI ratings in both data products. Relationships have been established between the large-scale FWI and area burned, fire frequency, and ecosystem types, and these can be used to estimate historic and future fire regimes.

  19. Statistical learning theory for high dimensional prediction: Application to criterion-keyed scale development.

    PubMed

    Chapman, Benjamin P; Weiss, Alexander; Duberstein, Paul R

    2016-12-01

    Statistical learning theory (SLT) is the statistical formulation of machine learning theory, a body of analytic methods common in "big data" problems. Regression-based SLT algorithms seek to maximize predictive accuracy for some outcome, given a large pool of potential predictors, without overfitting the sample. Research goals in psychology may sometimes call for high dimensional regression. One example is criterion-keyed scale construction, where a scale with maximal predictive validity must be built from a large item pool. Using this as a working example, we first introduce a core principle of SLT methods: minimization of expected prediction error (EPE). Minimizing EPE is fundamentally different than maximizing the within-sample likelihood, and hinges on building a predictive model of sufficient complexity to predict the outcome well, without undue complexity leading to overfitting. We describe how such models are built and refined via cross-validation. We then illustrate how 3 common SLT algorithms-supervised principal components, regularization, and boosting-can be used to construct a criterion-keyed scale predicting all-cause mortality, using a large personality item pool within a population cohort. Each algorithm illustrates a different approach to minimizing EPE. Finally, we consider broader applications of SLT predictive algorithms, both as supportive analytic tools for conventional methods, and as primary analytic tools in discovery phase research. We conclude that despite their differences from the classic null-hypothesis testing approach-or perhaps because of them-SLT methods may hold value as a statistically rigorous approach to exploratory regression. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
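
    The regularization-plus-cross-validation idea discussed in this abstract can be sketched with standard scikit-learn tools: an L1-penalized logistic regression whose penalty strength is tuned by internal cross-validation, with an outer cross-validation loop estimating expected prediction error. The item matrix, outcome, and sample sizes below are synthetic placeholders, not the personality cohort used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_people, n_items = 1000, 200                                    # hypothetical item pool
X = rng.integers(1, 6, size=(n_people, n_items)).astype(float)   # Likert-type items
y = rng.integers(0, 2, size=n_people)                            # hypothetical mortality flag

# The L1 penalty drives most item weights to zero; the penalty strength is
# chosen by internal cross-validation to minimize expected prediction error
# rather than to maximize within-sample likelihood.
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="saga", Cs=8, cv=5,
                         scoring="roc_auc", max_iter=5000),
)

# Outer cross-validation approximates out-of-sample predictive accuracy (EPE).
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())

# Items retained by the penalized fit form the criterion-keyed scale.
model.fit(X, y)
coefs = model.named_steps["logisticregressioncv"].coef_.ravel()
print(np.flatnonzero(coefs != 0).size, "items selected")
```

    Supervised principal components and boosting, the other two algorithms discussed, follow the same logic of trading model complexity against cross-validated prediction error.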

  20. LARGE-SCALE PREDICTIONS OF MOBILE SOURCE CONTRIBUTIONS TO CONCENTRATIONS OF TOXIC AIR POLLUTANTS

    EPA Science Inventory

    This presentation shows concentrations and deposition of toxic air pollutants predicted by a 3-D air quality model, the Community Multiscale Air Quality (CMAQ) modeling system. Contributions from both on-road and non-road mobile sources are analyzed.

  1. Nonlocal and collective relaxation in stellar systems

    NASA Technical Reports Server (NTRS)

    Weinberg, Martin D.

    1993-01-01

    The modal response of stellar systems to fluctuations at large scales is presently investigated by means of analytic theory and n-body simulation; the stochastic excitation of these modes is shown to increase the relaxation rate even for a system which is moderately far from instability. The n-body simulations, when designed to suppress relaxation at small scales, clearly show the effects of large-scale fluctuations. It is predicted that large-scale fluctuations will be largest for such marginally bound systems as forming star clusters and associations.

  2. Pretest predictions for the response of a 1:8-scale steel LWR containment building model to static overpressurization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clauss, D.B.

    The analyses used to predict the behavior of a 1:8-scale model of a steel LWR containment building to static overpressurization are described and results are presented. Finite strain, large displacement, and nonlinear material properties were accounted for using finite element methods. Three-dimensional models were needed to analyze the penetrations, which included operable equipment hatches, personnel lock representations, and a constrained pipe. It was concluded that the scale model would fail due to leakage caused by large deformations of the equipment hatch sleeves. 13 refs., 34 figs., 1 tab.

  3. A large-scale evaluation of computational protein function prediction

    PubMed Central

    Radivojac, Predrag; Clark, Wyatt T; Ronnen Oron, Tal; Schnoes, Alexandra M; Wittkop, Tobias; Sokolov, Artem; Graim, Kiley; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa; Pandey, Gaurav; Yunes, Jeffrey M; Talwalkar, Ameet S; Repo, Susanna; Souza, Michael L; Piovesan, Damiano; Casadio, Rita; Wang, Zheng; Cheng, Jianlin; Fang, Hai; Gough, Julian; Koskinen, Patrik; Törönen, Petri; Nokso-Koivisto, Jussi; Holm, Liisa; Cozzetto, Domenico; Buchan, Daniel W A; Bryson, Kevin; Jones, David T; Limaye, Bhakti; Inamdar, Harshal; Datta, Avik; Manjari, Sunitha K; Joshi, Rajendra; Chitale, Meghana; Kihara, Daisuke; Lisewski, Andreas M; Erdin, Serkan; Venner, Eric; Lichtarge, Olivier; Rentzsch, Robert; Yang, Haixuan; Romero, Alfonso E; Bhat, Prajwal; Paccanaro, Alberto; Hamp, Tobias; Kassner, Rebecca; Seemayer, Stefan; Vicedo, Esmeralda; Schaefer, Christian; Achten, Dominik; Auer, Florian; Böhm, Ariane; Braun, Tatjana; Hecht, Maximilian; Heron, Mark; Hönigschmid, Peter; Hopf, Thomas; Kaufmann, Stefanie; Kiening, Michael; Krompass, Denis; Landerer, Cedric; Mahlich, Yannick; Roos, Manfred; Björne, Jari; Salakoski, Tapio; Wong, Andrew; Shatkay, Hagit; Gatzmann, Fanny; Sommer, Ingolf; Wass, Mark N; Sternberg, Michael J E; Škunca, Nives; Supek, Fran; Bošnjak, Matko; Panov, Panče; Džeroski, Sašo; Šmuc, Tomislav; Kourmpetis, Yiannis A I; van Dijk, Aalt D J; ter Braak, Cajo J F; Zhou, Yuanpeng; Gong, Qingtian; Dong, Xinran; Tian, Weidong; Falda, Marco; Fontana, Paolo; Lavezzo, Enrico; Di Camillo, Barbara; Toppo, Stefano; Lan, Liang; Djuric, Nemanja; Guo, Yuhong; Vucetic, Slobodan; Bairoch, Amos; Linial, Michal; Babbitt, Patricia C; Brenner, Steven E; Orengo, Christine; Rost, Burkhard; Mooney, Sean D; Friedberg, Iddo

    2013-01-01

    Automated annotation of protein function is challenging. As the number of sequenced genomes rapidly grows, the overwhelming majority of protein products can only be annotated computationally. If computational predictions are to be relied upon, it is crucial that the accuracy of these methods be high. Here we report the results from the first large-scale community-based Critical Assessment of protein Function Annotation (CAFA) experiment. Fifty-four methods representing the state-of-the-art for protein function prediction were evaluated on a target set of 866 proteins from eleven organisms. Two findings stand out: (i) today’s best protein function prediction algorithms significantly outperformed widely-used first-generation methods, with large gains on all types of targets; and (ii) although the top methods perform well enough to guide experiments, there is significant need for improvement of currently available tools. PMID:23353650

  4. Validation of self-reported figural drawing scales against anthropometric measurements in adults.

    PubMed

    Dratva, Julia; Bertelsen, Randi; Janson, Christer; Johannessen, Ane; Benediktsdóttir, Bryndis; Bråbäck, Lennart; Dharmage, Shyamali C; Forsberg, Bertil; Gislason, Thorarinn; Jarvis, Debbie; Jogi, Rain; Lindberg, Eva; Norback, Dan; Omenaas, Ernst; Skorge, Trude D; Sigsgaard, Torben; Toren, Kjell; Waatevik, Marie; Wieslander, Gundula; Schlünssen, Vivi; Svanes, Cecilie; Real, Francisco Gomez

    2016-08-01

    The aim of the present study was to validate figural drawing scales depicting extremely lean to extremely obese subjects to obtain proxies for BMI and waist circumference in postal surveys. Reported figural scales and anthropometric data from a large population-based postal survey were validated with measured anthropometric data from the same individuals by means of receiver-operating characteristic curves and a BMI prediction model. Adult participants in a Scandinavian cohort study first recruited in 1990 and followed up twice since. Individuals aged 38-66 years with complete data for BMI (n 1580) and waist circumference (n 1017). Median BMI and waist circumference increased exponentially with increasing figural scales. Receiver-operating characteristic curve analyses showed a high predictive ability to identify individuals with BMI > 25·0 kg/m2 in both sexes. The optimal figural scales for identifying overweight or obese individuals with a correct detection rate were 4 and 5 in women, and 5 and 6 in men, respectively. The prediction model explained 74 % of the variance among women and 62 % among men. Predicted BMI differed only marginally from objectively measured BMI. Figural drawing scales explained a large part of the anthropometric variance in this population and showed a high predictive ability for identifying overweight/obese subjects. These figural scales can be used with confidence as proxies of BMI and waist circumference in settings where objective measures are not feasible.
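
    The receiver-operating characteristic analysis used to pick an optimal figural-scale cut point can be illustrated with a short sketch. The figural scores and BMI values below are simulated placeholders; the cut point is chosen here by Youden's J, one common criterion, which may differ from the criterion used in the study.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
# Hypothetical data: self-reported figural scale (1-9) and measured BMI
figural = rng.integers(1, 10, size=500)
bmi = 18 + 1.6 * figural + rng.normal(0, 2.5, size=500)
overweight = (bmi > 25.0).astype(int)

# ROC analysis of the figural scale as a screening proxy for BMI > 25
fpr, tpr, thresholds = roc_curve(overweight, figural)
print("AUC:", roc_auc_score(overweight, figural))

# Pick the cut point maximizing Youden's J = sensitivity + specificity - 1
j = tpr - fpr
print("optimal figural-scale cut point:", thresholds[np.argmax(j)])
```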

  5. Ecosystem resilience despite large-scale altered hydro climatic conditions

    USDA-ARS?s Scientific Manuscript database

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  6. The plume head-continental lithosphere interaction using a tectonically realistic formulation for the lithosphere

    NASA Astrophysics Data System (ADS)

    Burov, E.; Guillou-Frottier, L.

    2005-05-01

    Current debates on the existence of mantle plumes largely originate from interpretations of supposed signatures of plume-induced surface topography that are compared with predictions of geodynamic models of plume-lithosphere interactions. These models often inaccurately predict surface evolution: in general, they assume a fixed upper surface and consider the lithosphere as a single viscous layer. In nature, the surface evolution is affected by the elastic-brittle-ductile deformation, by a free upper surface and by the layered structure of the lithosphere. We take a step towards reconciling mantle- and tectonic-scale studies by introducing a tectonically realistic continental plate model in large-scale plume-lithosphere interaction. This model includes (i) a natural free surface boundary condition, (ii) an explicit elastic-viscous(ductile)-plastic(brittle) rheology and (iii) a stratified structure of continental lithosphere. The numerical experiments demonstrate a number of important differences from predictions of conventional models. In particular, this relates to plate bending, mechanical decoupling of crustal and mantle layers and tension-compression instabilities, which produce transient topographic signatures such as uplift and subsidence at large (>500 km) and small scale (300-400, 200-300 and 50-100 km). The mantle plumes do not necessarily produce detectable large-scale topographic highs but often generate only alternating small-scale surface features that could otherwise be attributed to regional tectonics. A single large-wavelength deformation, predicted by conventional models, develops only for a very cold and thick lithosphere. Distinct topographic wavelengths or temporally spaced events observed in the East African rift system, as well as over the French Massif Central, can be explained by a single plume impinging at the base of the continental lithosphere, without invoking complex asthenospheric upwelling.

  7. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.

  8. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should in principle tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields; however, because the driving large-scale fields are generally available at a much lower frequency than the model time step (e.g., 6-hourly analyses) and only a basic interpolation between the fields is used, the optimum nudging time differs from zero while remaining smaller than the predictability time.
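
    The nudging discussed here is Newtonian relaxation of the model state toward driving fields, with the nudging time as the tunable parameter. The toy sketch below illustrates the mechanism only; the tendency function and driving signal are invented stand-ins, not the quasi-geostrophic model of the study.

```python
import numpy as np

def nudged_step(x, f, x_driving, dt, tau):
    """One forward-Euler step of dx/dt = f(x) - (x - x_driving)/tau.

    tau is the nudging (relaxation) time: a small tau pins the model to the
    driving fields, a large tau lets the model evolve freely.
    """
    return x + dt * (f(x) - (x - x_driving) / tau)

# Toy example: an arbitrary nonlinear tendency nudged toward a slow signal
f = lambda x: np.sin(3.0 * x)          # stand-in for the model dynamics
dt, tau = 0.01, 0.5
x = 0.1
for n in range(1000):
    x_drv = np.cos(0.01 * n)           # slowly varying "driving" field
    x = nudged_step(x, f, x_drv, dt, tau)
print(x)
```

    Spectral nudging applies the same relaxation term only to the large-scale (low-wavenumber) components of the state, which is why its optimal nudging time behaves differently from the indiscriminate case.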

  9. Improving Disease Prediction by Incorporating Family Disease History in Risk Prediction Models with Large-Scale Genetic Data.

    PubMed

    Gim, Jungsoo; Kim, Wonji; Kwak, Soo Heon; Choi, Hosik; Park, Changyi; Park, Kyong Soo; Kwon, Sunghoon; Park, Taesung; Won, Sungho

    2017-11-01

    Despite the many successes of genome-wide association studies (GWAS), the known susceptibility variants identified by GWAS have modest effect sizes, leading to notable skepticism about the effectiveness of building a risk prediction model from large-scale genetic data. However, in contrast to genetic variants, the family history of diseases has been largely accepted as an important risk factor in clinical diagnosis and risk prediction. Nevertheless, the complicated structures of the family history of diseases have limited their application in clinical practice. Here, we developed a new method that enables incorporation of the general family history of diseases with a liability threshold model, and propose a new analysis strategy for risk prediction with penalized regression analysis that incorporates both large numbers of genetic variants and clinical risk factors. Application of our model to type 2 diabetes in the Korean population (1846 cases and 1846 controls) demonstrated that single-nucleotide polymorphisms accounted for 32.5% of the variation explained by the predicted risk scores in the test data set, and incorporation of family history led to an additional 6.3% improvement in prediction. Our results illustrate that family medical history provides valuable information on the variation of complex diseases and improves prediction performance. Copyright © 2017 by the Genetics Society of America.
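
    A minimal sketch of the general strategy — penalized regression over many genetic variants with family history added as an extra predictor — is given below. It uses synthetic data and ordinary L2-penalized logistic regression; the liability-threshold treatment of family history developed in the paper is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n, p = 3692, 500                          # cases+controls and SNP count (synthetic)
snps = rng.integers(0, 3, size=(n, p)).astype(float)     # 0/1/2 genotype coding
family_history = rng.integers(0, 2, size=(n, 1)).astype(float)
y = rng.integers(0, 2, size=n)            # synthetic disease status

X_geno = snps
X_full = np.hstack([snps, family_history])

clf = LogisticRegression(penalty="l2", C=0.1, max_iter=2000)
auc_geno = cross_val_score(clf, X_geno, y, cv=5, scoring="roc_auc").mean()
auc_full = cross_val_score(clf, X_full, y, cv=5, scoring="roc_auc").mean()
print("AUC, SNPs only:            ", auc_geno)
print("AUC, SNPs + family history:", auc_full)
```

    On real data the gap between the two cross-validated scores quantifies the added predictive value of family history, analogous to the 6.3% improvement reported in the abstract.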

  10. Optimizing BAO measurements with non-linear transformations of the Lyman-α forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xinkang; Font-Ribera, Andreu; Seljak, Uroš, E-mail: xinkang.wang@berkeley.edu, E-mail: afont@lbl.gov, E-mail: useljak@berkeley.edu

    2015-04-01

    We explore the effect of applying a non-linear transformation to the Lyman-α forest transmitted flux F=e{sup −τ} and the ability of analytic models to predict the resulting clustering amplitude. Both the large-scale bias of the transformed field (signal) and the amplitude of small scale fluctuations (noise) can be arbitrarily modified, but we were unable to find a transformation that increases significantly the signal-to-noise ratio on large scales using Taylor expansion up to the third order. In particular, however, we achieve a 33% improvement in signal to noise for Gaussianized field in transverse direction. On the other hand, we explore anmore » analytic model for the large-scale biasing of the Lyα forest, and present an extension of this model to describe the biasing of the transformed fields. Using hydrodynamic simulations we show that the model works best to describe the biasing with respect to velocity gradients, but is less successful in predicting the biasing with respect to large-scale density fluctuations, especially for very nonlinear transformations.« less

  11. Predicting agricultural impacts of large-scale drought: 2012 and the case for better modeling

    USDA-ARS?s Scientific Manuscript database

    We present an example of a simulation-based forecast for the 2012 U.S. maize growing season produced as part of a high-resolution, multi-scale, predictive mechanistic modeling study designed for decision support, risk management, and counterfactual analysis. The simulations undertaken for this analy...

  12. Ensemble Kalman filters for dynamical systems with unresolved turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grooms, Ian, E-mail: grooms@cims.nyu.edu; Lee, Yoonsang; Majda, Andrew J.

    Ensemble Kalman filters are developed for turbulent dynamical systems where the forecast model does not resolve all the active scales of motion. Coarse-resolution models are intended to predict the large-scale part of the true dynamics, but observations invariably include contributions from both the resolved large scales and the unresolved small scales. The error due to the contribution of unresolved scales to the observations, called ‘representation’ or ‘representativeness’ error, is often included as part of the observation error, in addition to the raw measurement error, when estimating the large-scale part of the system. It is here shown how stochastic superparameterization (a multiscale method for subgridscale parameterization) can be used to provide estimates of the statistics of the unresolved scales. In addition, a new framework is developed wherein small-scale statistics can be used to estimate both the resolved and unresolved components of the solution. The one-dimensional test problem from dispersive wave turbulence used here is computationally tractable yet is particularly difficult for filtering because of the non-Gaussian extreme event statistics and substantial small-scale turbulence: a shallow energy spectrum proportional to k^(−5/6) (where k is the wavenumber) results in two-thirds of the climatological variance being carried by the unresolved small scales. Because the unresolved scales contain so much energy, filters that ignore the representation error fail utterly to provide meaningful estimates of the system state. Inclusion of a time-independent climatological estimate of the representation error in a standard framework leads to inaccurate estimates of the large-scale part of the signal; accurate estimates of the large scales are only achieved by using stochastic superparameterization to provide evolving, large-scale dependent predictions of the small-scale statistics. Again, because the unresolved scales contain so much energy, even an accurate estimate of the large-scale part of the system does not provide an accurate estimate of the true state. By providing simultaneous estimates of both the large- and small-scale parts of the solution, the new framework is able to provide accurate estimates of the true system state.
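
    The baseline treatment of representation error mentioned here — adding it to the observation-error covariance of a standard ensemble Kalman filter — can be sketched as below. This is a generic perturbed-observation EnKF analysis step with placeholder dimensions, not the stochastic superparameterization framework of the paper.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, r_instrument, r_representation):
    """Perturbed-observation EnKF update with inflated observation error.

    ensemble : (n_members, n_state) forecast ensemble of the resolved scales
    y_obs    : (n_obs,) observations containing resolved + unresolved signal
    H        : (n_obs, n_state) linear observation operator
    """
    n_members, _ = ensemble.shape
    R = np.diag(r_instrument + r_representation)   # total obs-error covariance

    x_mean = ensemble.mean(axis=0)
    X = (ensemble - x_mean).T                      # state anomalies
    Y = H @ X                                      # anomalies in observation space

    P_yy = Y @ Y.T / (n_members - 1) + R
    P_xy = X @ Y.T / (n_members - 1)
    K = P_xy @ np.linalg.solve(P_yy, np.eye(len(y_obs)))   # Kalman gain

    rng = np.random.default_rng(0)
    analysis = np.empty_like(ensemble)
    for m in range(n_members):
        y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R)
        analysis[m] = ensemble[m] + K @ (y_pert - H @ ensemble[m])
    return analysis

# Tiny example: 20-member ensemble, 8-dimensional state, 4 observations
rng = np.random.default_rng(1)
ens = rng.normal(size=(20, 8))
H = np.zeros((4, 8)); H[np.arange(4), np.arange(4)] = 1.0
y = rng.normal(size=4)
post = enkf_analysis(ens, y, H, r_instrument=np.full(4, 0.1),
                     r_representation=np.full(4, 0.5))
print(post.mean(axis=0))
```

    The paper's point is that a static r_representation is insufficient when the unresolved scales carry most of the variance; their framework replaces it with evolving, flow-dependent small-scale statistics.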

  13. Basic numerical competences in large-scale assessment data: Structure and long-term relevance.

    PubMed

    Hirsch, Stefa; Lambert, Katharina; Coppens, Karien; Moeller, Korbinian

    2018-03-01

    Basic numerical competences are seen as building blocks for later numerical and mathematical achievement. The current study aimed at investigating the structure of early numeracy reflected by different basic numerical competences in kindergarten and its predictive value for mathematical achievement 6 years later using data from large-scale assessment. This allowed analyses based on considerably large sample sizes (N > 1700). A confirmatory factor analysis indicated that a model differentiating five basic numerical competences at the end of kindergarten fitted the data better than a one-factor model of early numeracy representing a comprehensive number sense. In addition, these basic numerical competences were observed to reliably predict performance in a curricular mathematics test in Grade 6 even after controlling for influences of general cognitive ability. Thus, our results indicated a differentiated view on early numeracy considering basic numerical competences in kindergarten reflected in large-scale assessment data. Consideration of different basic numerical competences allows for evaluating their specific predictive value for later mathematical achievement but also mathematical learning difficulties. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Predicting the planform configuration of the braided Toklat River, AK with a suite of rule-based models

    USGS Publications Warehouse

    Podolak, Charles J.

    2013-01-01

    An ensemble of rule-based models was constructed to assess possible future braided river planform configurations for the Toklat River in Denali National Park and Preserve, Alaska. This approach combined an analysis of large-scale influences on stability with several reduced-complexity models to produce the predictions at a practical level for managers concerned about the persistence of bank erosion while acknowledging the great uncertainty in any landscape prediction. First, a model of confluence angles reproduced observed angles of a major confluence, but showed limited susceptibility to a major rearrangement of the channel planform downstream. Second, a probabilistic map of channel locations was created with a two-parameter channel avulsion model. The predicted channel belt location was concentrated in the same area as the current channel belt. Finally, a suite of valley-scale channel and braid plain characteristics were extracted from a light detection and ranging (LiDAR)-derived surface. The characteristics demonstrated large-scale stabilizing topographic influences on channel planform. The combination of independent analyses increased confidence in the conclusion that the Toklat River braided planform is a dynamically stable system due to large and persistent valley-scale influences, and that a range of avulsive perturbations are likely to result in a relatively unchanged planform configuration in the short term.

  15. Wall Modeled Large Eddy Simulation of Airfoil Trailing Edge Noise

    NASA Astrophysics Data System (ADS)

    Kocheemoolayil, Joseph; Lele, Sanjiva

    2014-11-01

    Large eddy simulation (LES) of airfoil trailing edge noise has largely been restricted to low Reynolds numbers due to prohibitive computational cost. Wall modeled LES (WMLES) is a computationally cheaper alternative that makes full-scale Reynolds numbers relevant to large wind turbines accessible. A systematic investigation of trailing edge noise prediction using WMLES is conducted. Detailed comparisons are made with experimental data. The stress boundary condition from a wall model does not constrain the fluctuating velocity to vanish at the wall. This limitation has profound implications for trailing edge noise prediction. The simulation over-predicts the intensity of fluctuating wall pressure and far-field noise. An improved wall model formulation that minimizes the over-prediction of fluctuating wall pressure is proposed and carefully validated. The flow configurations chosen for the study are from the workshop on benchmark problems for airframe noise computations. The large eddy simulation database is used to examine the adequacy of scaling laws that quantify the dependence of trailing edge noise on Mach number, Reynolds number and angle of attack. Simplifying assumptions invoked in engineering approaches towards predicting trailing edge noise are critically evaluated. We gratefully acknowledge financial support from GE Global Research and thank Cascade Technologies Inc. for providing access to their massively-parallel large eddy simulation framework.

  16. Large-scale Estimates of Leaf Area Index from Active Remote Sensing Laser Altimetry

    NASA Astrophysics Data System (ADS)

    Hopkinson, C.; Mahoney, C.

    2016-12-01

    Leaf area index (LAI) is a key parameter that describes the spatial distribution of foliage within forest canopies, which in turn controls numerous relationships between the ground, canopy, and atmosphere. LAI retrieval has been demonstrated successfully using in-situ digital hemispherical photography (DHP) and airborne laser scanning (ALS) data; however, field and ALS acquisitions are often spatially limited (100's km2) and costly. Large-scale (>1000's km2) retrievals have been demonstrated with optical sensors; however, accuracies remain uncertain due to the sensors' inability to penetrate the canopy. The spaceborne Geoscience Laser Altimeter System (GLAS) provides a possible solution for retrieving large-scale derivations whilst simultaneously penetrating the canopy. LAI retrieved by multiple DHP from 6 Australian sites, representing a cross-section of Australian ecosystems, were employed to model ALS LAI, which in turn were used to infer LAI from GLAS data at 5 other sites. An optimally filtered GLAS dataset was then employed in conjunction with a host of supplementary data to build a Random Forest (RF) model to infer predictions (and uncertainties) of LAI at a 250 m resolution across the forested regions of Australia. Predictions were validated against ALS-based LAI from 20 sites (R2=0.64, RMSE=1.1 m2m-2); MODIS-based LAI were also assessed against these sites (R2=0.30, RMSE=1.78 m2m-2) to demonstrate the strength of GLAS-based predictions. The large-scale nature of the current predictions was also leveraged to demonstrate large-scale relationships of LAI with other environmental characteristics, such as canopy height, elevation, and slope. The need for such wide-scale quantification of LAI is key in the assessment and modification of forest management strategies across Australia. Such work also assists Australia's Terrestrial Ecosystem Research Network in fulfilling their government-issued mandates.
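
    The Random Forest step — predicting gridded LAI from GLAS and ancillary predictors, with a per-cell uncertainty — can be sketched with scikit-learn. The predictor names and data below are synthetic placeholders; using the spread of individual trees as the uncertainty proxy is one common choice and may differ from the study's approach.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Hypothetical predictors per 250 m cell: GLAS canopy height, elevation,
# slope, mean annual precipitation (columns are placeholders).
X = rng.normal(size=(5000, 4))
lai = 2.0 + 0.8 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(0, 0.5, 5000)

X_tr, X_te, y_tr, y_te = train_test_split(X, lai, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)

pred = rf.predict(X_te)
# Spread of the individual trees as a simple per-cell uncertainty proxy
per_tree = np.stack([t.predict(X_te) for t in rf.estimators_])
uncertainty = per_tree.std(axis=0)

rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print("RMSE:", rmse, " mean predictive spread:", uncertainty.mean())
```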

  17. Direct Computation of Sound Radiation by Jet Flow Using Large-scale Equations

    NASA Technical Reports Server (NTRS)

    Mankbadi, R. R.; Shih, S. H.; Hixon, D. R.; Povinelli, L. A.

    1995-01-01

    Jet noise is directly predicted using large-scale equations. The computational domain is extended in order to directly capture the radiated field. As in conventional large-eddy-simulations, the effect of the unresolved scales on the resolved ones is accounted for. Special attention is given to boundary treatment to avoid spurious modes that can render the computed fluctuations totally unacceptable. Results are presented for a supersonic jet at Mach number 2.1.

  18. Dynamical systems proxies of atmospheric predictability and mid-latitude extremes

    NASA Astrophysics Data System (ADS)

    Messori, Gabriele; Faranda, Davide; Caballero, Rodrigo; Yiou, Pascal

    2017-04-01

    Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. Many extremes (e.g. storms, heatwaves, cold spells, heavy precipitation) are tied to specific patterns of midlatitude atmospheric circulation. The ability to identify these patterns and use them to enhance the predictability of the extremes is therefore a topic of crucial societal and economic value. We propose a novel predictability pathway for extreme events, by building upon recent advances in dynamical systems theory. We use two simple dynamical systems metrics - local dimension and persistence - to identify sets of similar large-scale atmospheric flow patterns which present a coherent temporal evolution. When these patterns correspond to weather extremes, they therefore afford a particularly good forward predictability. We specifically test this technique on European winter temperatures, whose variability largely depends on the atmospheric circulation in the North Atlantic region. We find that our dynamical systems approach provides predictability of large-scale temperature extremes up to one week in advance.
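
    The local dimension metric referred to here is commonly estimated from extreme-value statistics of log-distances to analogue states. The sketch below is a rough illustration of that estimator on synthetic fields, recalled from the general literature rather than taken from this abstract; the persistence metric (an extremal-index estimate) is omitted, and the threshold choice is arbitrary.

```python
import numpy as np

def local_dimension(field_snapshots, quantile=0.98):
    """Instantaneous local dimension for each snapshot of a circulation field.

    For a reference state x_t, distances to all other states are converted to
    g = -log(dist); exceedances of g above a high quantile are approximately
    exponential, and the local dimension is the inverse of their mean excess.
    """
    n = len(field_snapshots)
    flat = field_snapshots.reshape(n, -1)
    dims = np.empty(n)
    for t in range(n):
        dist = np.linalg.norm(flat - flat[t], axis=1)
        g = -np.log(dist[np.arange(n) != t])          # exclude self-distance
        thresh = np.quantile(g, quantile)
        excess = g[g > thresh] - thresh
        dims[t] = 1.0 / excess.mean()
    return dims

# Toy example on random "circulation maps"
snaps = np.random.default_rng(4).normal(size=(500, 20, 30))
print("median local dimension:", np.median(local_dimension(snaps)))
```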

  19. Structural Similitude and Scaling Laws for Plates and Shells: A Review

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Starnes, J. H., Jr.; Rezaeepazhand, J.

    2000-01-01

    This paper deals with the development and use of scaled-down models in order to predict the structural behavior of large prototypes. The concept is fully described and examples are presented which demonstrate its applicability to beam-plates, plates and cylindrical shells of laminated construction. The concept is based on the use of field equations, which govern the response behavior of both the small model as well as the large prototype. The conditions under which the experimental data of a small model can be used to predict the behavior of a large prototype are called scaling laws or similarity conditions and the term that best describes the process is structural similitude. Moreover, since the term scaling is used to describe the effect of size on strength characteristics of materials, a discussion is included which should clarify the difference between "scaling law" and "size effect". Finally, a historical review of all published work in the broad area of structural similitude is presented for completeness.

  20. Patterns and multi-scale drivers of phytoplankton species richness in temperate peri-urban lakes.

    PubMed

    Catherine, Arnaud; Selma, Maloufi; Mouillot, David; Troussellier, Marc; Bernard, Cécile

    2016-07-15

    Local species richness (SR) is a key characteristic affecting ecosystem functioning. Yet, the mechanisms regulating phytoplankton diversity in freshwater ecosystems are not fully understood, especially in peri-urban environments where anthropogenic pressures strongly impact the quality of aquatic ecosystems. To address this issue, we sampled the phytoplankton communities of 50 lakes in the Paris area (France) characterized by a large gradient of physico-chemical and catchment-scale characteristics. We used large phytoplankton datasets to describe phytoplankton diversity patterns and applied a machine-learning algorithm to test the degree to which species richness patterns are potentially controlled by environmental factors. Selected environmental factors were studied at two scales: the lake-scale (e.g. nutrients concentrations, water temperature, lake depth) and the catchment-scale (e.g. catchment, landscape and climate variables). Then, we used a variance partitioning approach to evaluate the interaction between lake-scale and catchment-scale variables in explaining local species richness. Finally, we analysed the residuals of predictive models to identify potential vectors of improvement of phytoplankton species richness predictive models. Lake-scale and catchment-scale drivers provided similar predictive accuracy of local species richness (R(2)=0.458 and 0.424, respectively). Both models suggested that seasonal temperature variations and nutrient supply strongly modulate local species richness. Integrating lake- and catchment-scale predictors in a single predictive model did not provide increased predictive accuracy; therefore suggesting that the catchment-scale model probably explains observed species richness variations through the impact of catchment-scale variables on in-lake water quality characteristics. Models based on catchment characteristics, which include simple and easy to obtain variables, provide a meaningful way of predicting phytoplankton species richness in temperate lakes. This approach may prove useful and cost-effective for the management and conservation of aquatic ecosystems. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  2. Development and Evaluation of Season-ahead Precipitation and Streamflow Predictions for Sectoral Management in Western Ethiopia

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Alexander, S.; WU, S.

    2017-12-01

    Skillful season-ahead predictions conditioned on local and large-scale hydro-climate variables can provide valuable knowledge to farmers and reservoir operators, enabling informed water resource allocation and management decisions. In Ethiopia, the potential for advancing agriculture and hydropower management, and subsequently economic growth, is substantial, yet evidence suggests a weak adoption of prediction information by sectoral audiences. To address common critiques, including skill, scale, and uncertainty, probabilistic forecasts are developed at various scales - temporally and spatially - for the Finchaa hydropower dam and the Koga agricultural scheme in an attempt to promote uptake and application. Significant prediction skill is evident across scales, particularly for statistical models. This raises questions regarding other potential barriers to forecast utilization at community scales, which are also addressed.

  3. ASSESSING THE PREDICTIVE CAPABILITY OF LANDSCAPE SAMPLING UNITS OF VARYING SCALE IN THE ANALYSIS OF ESTUARINE CONDITION

    EPA Science Inventory

    Landscape structure metrics are often used to predict water and sediment quality of lakes, streams, and estuaries; however, the sampling units used to generate the landscape metrics are often at an irrelevant spatial scale. They are either too large (i.e., an entire watershed) or...

  4. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models by using data-adaptive decomposition methods that convey memory effects via time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016]. In particular, new results on DAH-MSM modeling and prediction of Arctic Sea Ice, as well as decadal predictions of near-surface Earth temperatures, will be presented.

  5. Analytic prediction of baryonic effects from the EFT of large scale structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewandowski, Matthew; Perko, Ashley; Senatore, Leonardo, E-mail: mattlew@stanford.edu, E-mail: perko@stanford.edu, E-mail: senatore@stanford.edu

    2015-05-01

    The large scale structures of the universe will likely be the next leading source of cosmological information. It is therefore crucial to understand their behavior. The Effective Field Theory of Large Scale Structures provides a consistent way to perturbatively predict the clustering of dark matter at large distances. The fact that baryons move distances comparable to dark matter allows us to infer that baryons at large distances can be described in a similar formalism: the backreaction of short-distance non-linearities and of star-formation physics at long distances can be encapsulated in an effective stress tensor, characterized by a few parameters. The functional form of baryonic effects can therefore be predicted. In the power spectrum the leading contribution goes as ∝ k^2 P(k), with P(k) being the linear power spectrum and with the numerical prefactor depending on the details of the star-formation physics. We also perform the resummation of the contribution of the long-wavelength displacements, allowing us to consistently predict the effect of the relative motion of baryons and dark matter. We compare our predictions with simulations that contain several implementations of baryonic physics, finding percent agreement up to relatively high wavenumbers such as k ≅ 0.3 h Mpc^(−1) or k ≅ 0.6 h Mpc^(−1), depending on the order of the calculation. Our results open a novel way to understand baryonic effects analytically, as well as to interface with simulations.

  6. Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lotts, Christine G.; Sleight, David W.

    1999-01-01

    This paper demonstrates the progressive failure analysis capability in NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with a 7-in. long discrete source damage was analyzed from initial loading to final failure, including the geometric and material nonlinearities. Predictions using different mesh sizes, different saw cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions have a reasonably good correlation with the test result.

  7. Development of Computational Aeroacoustics Code for Jet Noise and Flow Prediction

    NASA Astrophysics Data System (ADS)

    Keith, Theo G., Jr.; Hixon, Duane R.

    2002-07-01

    Accurate prediction of jet fan and exhaust plume flow and noise generation and propagation is very important in developing advanced aircraft engines that will pass current and future noise regulations. In jet fan flows as well as exhaust plumes, two major sources of noise are present: large-scale, coherent instabilities and small-scale turbulent eddies. In previous work for the NASA Glenn Research Center, three strategies have been explored in an effort to computationally predict the noise radiation from supersonic jet exhaust plumes. In order of increasing computational expense, these are: 1) Linearized Euler equations (LEE). 2) Very Large Eddy Simulations (VLES). 3) Large Eddy Simulations (LES). The first method solves the linearized Euler equations (LEE). These equations are obtained by linearizing about a given mean flow and neglecting viscous effects. In this way, the noise from large-scale instabilities can be found for a given mean flow. The linearized Euler equations are computationally inexpensive, and have produced good noise results for supersonic jets where the large-scale instability noise dominates, as well as for the tone noise from a jet engine blade row. However, these linear equations do not predict the absolute magnitude of the noise; instead, only the relative magnitude is predicted. Also, the predicted disturbances do not modify the mean flow, removing a physical mechanism by which the amplitude of the disturbance may be controlled. Recent research on isolated airfoils indicates that this may not affect the solution greatly at low frequencies. The second method addresses some of the concerns raised by the LEE method. In this approach, called Very Large Eddy Simulation (VLES), the unsteady Reynolds-averaged Navier-Stokes equations are solved directly using a high-accuracy computational aeroacoustics numerical scheme. With the addition of a two-equation turbulence model and the use of a relatively coarse grid, the numerical solution is effectively filtered into a directly calculated mean flow with the small-scale turbulence being modeled, and an unsteady large-scale component that is also being directly calculated. In this way, the unsteady disturbances are calculated in a nonlinear way, with a direct effect on the mean flow. This method is not as fast as the LEE approach, but does have many advantages to recommend it; however, like the LEE approach, only the effect of the largest unsteady structures will be captured. An initial calculation was performed on a supersonic jet exhaust plume, with promising results, but the calculation was hampered by the explicit time marching scheme that was employed. This explicit scheme required a very small time step to resolve the nozzle boundary layer, which caused a long run time. Current work is focused on testing a lower-order implicit time marching method to combat this problem.

  8. Large scale fire whirls: Can their formation be predicted?

    Treesearch

    J. Forthofer; Bret Butler

    2010-01-01

    Large scale fire whirls have not traditionally been recognized as a frequent phenomenon on wildland fires. However, there are anecdotal data suggesting that they can and do occur with some regularity. This paper presents a brief summary of this information and an analysis of the causal factors leading to their formation.

  9. Integrating SMOS brightness temperatures with a new conceptual spatially distributed hydrological model for improving flood and drought predictions at large scale.

    NASA Astrophysics Data System (ADS)

    Hostache, Renaud; Rains, Dominik; Chini, Marco; Lievens, Hans; Verhoest, Niko E. C.; Matgen, Patrick

    2017-04-01

    Motivated by climate change and its impact on the scarcity or excess of water in many parts of the world, several agencies and research institutions have taken initiatives in monitoring and predicting the hydrologic cycle at a global scale. Such a monitoring/prediction effort is important for understanding the vulnerability to extreme hydrological events and for providing early warnings. This can be based on an optimal combination of hydro-meteorological models and remote sensing, in which satellite measurements can be used as forcing or calibration data or for regularly updating the model states or parameters. Many advances have been made in these domains and the near future will bring new opportunities with respect to remote sensing as a result of the increasing number of spaceborne sensors enabling the large-scale monitoring of water resources. Alongside these advances, there is currently a tendency to refine and further complicate physically-based hydrologic models to better capture the hydrologic processes at hand. However, this may not necessarily be beneficial for large-scale hydrology, as computational efforts increase significantly as a result. A novel thematic science question to be investigated is therefore whether a flexible conceptual model can match the performance of a complex physically-based model for hydrologic simulations at large scale. In this context, the main objective of this study is to investigate how innovative techniques that allow for the estimation of soil moisture from satellite data can help in reducing errors and uncertainties in large-scale conceptual hydro-meteorological modelling. A spatially distributed conceptual hydrologic model has been set up based on recent developments of the SUPERFLEX modelling framework. As it requires limited computational efforts, this model enables early warnings for large areas. Using as forcings the ERA-Interim public dataset and coupled with the CMEM radiative transfer model, SUPERFLEX is capable of predicting runoff, soil moisture, and SMOS-like brightness temperature time series. Such a model is traditionally calibrated using only discharge measurements. In this study we designed a multi-objective calibration procedure based on both discharge measurements and SMOS-derived brightness temperature observations in order to evaluate the added value of remotely sensed soil moisture data in the calibration process. As a test case we set up the SUPERFLEX model for the large-scale Murray-Darling catchment in Australia (about 1 million km2). When compared to in situ soil moisture time series, model predictions show good agreement, resulting in correlation coefficients exceeding 70 % and Root Mean Squared Errors below 1 %. When benchmarked with the physically based land surface model CLM, SUPERFLEX exhibits similar performance levels. By adapting the runoff routing function within the SUPERFLEX model, the predicted discharge results in a Nash-Sutcliffe Efficiency exceeding 0.7 over both the calibration and the validation periods.

  10. Biotic and abiotic factors predicting the global distribution and population density of an invasive large mammal

    PubMed Central

    Lewis, Jesse S.; Farnsworth, Matthew L.; Burdett, Chris L.; Theobald, David M.; Gray, Miranda; Miller, Ryan S.

    2017-01-01

    Biotic and abiotic factors are increasingly acknowledged to synergistically shape broad-scale species distributions. However, the relative importance of biotic and abiotic factors in predicting species distributions is unclear. In particular, biotic factors, such as predation and vegetation, including those resulting from anthropogenic land-use change, are underrepresented in species distribution modeling, but could improve model predictions. Using generalized linear models and model selection techniques, we analyzed 129 estimates of population density of wild pigs (Sus scrofa) from 5 continents to evaluate the relative importance, magnitude, and direction of biotic and abiotic factors in predicting population density of an invasive large mammal with a global distribution. Incorporating diverse biotic factors, including agriculture, vegetation cover, and large carnivore richness, into species distribution modeling substantially improved model fit and predictions. Abiotic factors, including precipitation and potential evapotranspiration, were also important predictors. The predictive map of population density revealed wide-ranging potential for an invasive large mammal to expand its distribution globally. This information can be used to proactively create conservation/management plans to control future invasions. Our study demonstrates that the ongoing paradigm shift, which recognizes that both biotic and abiotic factors shape species distributions across broad scales, can be advanced by incorporating diverse biotic factors. PMID:28276519
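
    The comparison of biotic-only, abiotic-only and combined generalized linear models can be sketched with statsmodels, ranking candidate models by AIC. All predictor values below are synthetic stand-ins for the covariates named in the abstract, and AIC ranking is only one of several model-selection criteria the authors may have used.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 129
# Synthetic stand-ins for the predictors discussed in the abstract
agriculture = rng.uniform(0, 1, n)        # biotic (land use)
veg_cover   = rng.uniform(0, 1, n)        # biotic
carnivores  = rng.integers(0, 6, n)       # biotic (large carnivore richness)
precip      = rng.normal(1000, 300, n)    # abiotic
pet         = rng.normal(900, 200, n)     # abiotic (potential evapotranspiration)
log_density = 0.5 * agriculture + 0.3 * veg_cover + rng.normal(0, 0.3, n)

def fit(cols):
    X = sm.add_constant(np.column_stack(cols))
    return sm.GLM(log_density, X, family=sm.families.Gaussian()).fit()

models = {
    "biotic":   fit([agriculture, veg_cover, carnivores]),
    "abiotic":  fit([precip, pet]),
    "combined": fit([agriculture, veg_cover, carnivores, precip, pet]),
}
for name, m in models.items():
    print(f"{name:9s} AIC = {m.aic:.1f}")
```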

  11. Preliminary design, analysis, and costing of a dynamic scale model of the NASA space station

    NASA Technical Reports Server (NTRS)

    Gronet, M. J.; Pinson, E. D.; Voqui, H. L.; Crawley, E. F.; Everman, M. R.

    1987-01-01

    The difficulty of testing the next generation of large flexible space structures on the ground places an emphasis on other means for validating predicted on-orbit dynamic behavior. Scale model technology represents one way of verifying analytical predictions with ground test data. This study investigates the preliminary design, scaling and cost trades for a Space Station dynamic scale model. The scaling of nonlinear joint behavior is studied from theoretical and practical points of view. Suspension system interaction trades are conducted for the ISS Dual Keel Configuration and Build-Up Stages suspended in the proposed NASA/LaRC Large Spacecraft Laboratory. Key issues addressed are scaling laws, replication vs. simulation of components, manufacturing, suspension interactions, joint behavior, damping, articulation capability, and cost. These issues are the subject of parametric trades versus the scale model factor. The results of these detailed analyses are used to recommend scale factors for four different scale model options, each with varying degrees of replication. Potential problems in constructing and testing the scale model are identified, and recommendations for further study are outlined.

  12. Use of Fuzzy rainfall-runoff predictions for claypan watersheds with conservation buffers in Northeast Missouri

    NASA Astrophysics Data System (ADS)

    Anomaa Senaviratne, G. M. M. M.; Udawatta, Ranjith P.; Anderson, Stephen H.; Baffaut, Claire; Thompson, Allen

    2014-09-01

    Fuzzy rainfall-runoff models are often used to forecast floods or water supply in large catchments, but applications at small, field-scale agricultural watersheds are limited. The study objectives were to develop, calibrate, and validate a fuzzy rainfall-runoff model using long-term data from three adjacent field-scale row crop watersheds (1.65-4.44 ha) with intermittent discharge in the claypan soils of Northeast Missouri. The watersheds were monitored for a six-year calibration period starting in 1991 (pre-buffer period). Thereafter, two of them were treated with upland contour grass and agroforestry (tree + grass) buffers (4.5 m wide, 36.5 m apart) to study water quality benefits. The fuzzy system was based on the Mamdani method using MATLAB 7.10.0. The model predicted event-based runoff with r2 and Nash-Sutcliffe Coefficient (NSC) values greater than 0.65 for both calibration and validation. The pre-buffer fuzzy system predicted event-based runoff for 30-50 times larger corn/soybean watersheds with r2 values of 0.82 and 0.68 and NSC values of 0.77 and 0.53, respectively. The runoff predicted by the fuzzy system closely agreed with values predicted by the physically based Agricultural Policy Environmental eXtender (APEX) model for the pre-buffer watersheds. The fuzzy rainfall-runoff model has the potential for runoff predictions at field-scale watersheds with minimal input. It could also up-scale predictions to large-scale watersheds to evaluate the benefits of conservation practices.
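
    Mamdani-style inference of the kind used here can be illustrated with a deliberately tiny sketch: two hypothetical rules mapping rainfall depth and antecedent wetness to runoff, with min/max operators and centroid defuzzification. The membership functions, rules, and ranges below are invented for illustration and are not the calibrated fuzzy system of the study.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_runoff(rain_mm, wetness):
    """Tiny two-rule Mamdani system: rainfall + antecedent wetness -> runoff (mm).

    Rule 1: IF rain is heavy AND soil is wet THEN runoff is high.
    Rule 2: IF rain is light OR  soil is dry THEN runoff is low.
    """
    z = np.linspace(0.0, 50.0, 501)                 # runoff output domain (mm)

    heavy = tri(rain_mm, 20, 60, 100)
    light = tri(rain_mm, 0, 0.01, 25)
    wet   = tri(wetness, 0.5, 1.0, 1.5)
    dry   = tri(wetness, -0.5, 0.0, 0.6)

    w1 = min(heavy, wet)                            # AND -> min
    w2 = max(light, dry)                            # OR  -> max

    high_set = np.minimum(tri(z, 15, 35, 50), w1)   # clip consequents by rule firing
    low_set  = np.minimum(tri(z, 0, 5, 20), w2)
    aggregate = np.maximum(high_set, low_set)       # max aggregation

    if aggregate.sum() == 0:
        return 0.0
    return float((z * aggregate).sum() / aggregate.sum())  # centroid defuzzification

print(mamdani_runoff(rain_mm=45.0, wetness=0.8))
```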

  13. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first principle models and solving large system of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time-scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  14. Application of Large-Scale Database-Based Online Modeling to Plant State Long-Term Estimation

    NASA Astrophysics Data System (ADS)

    Ogawa, Masatoshi; Ogai, Harutoshi

    Recently, attention has been drawn to a local modeling technique known as "Just-In-Time (JIT) modeling". To apply JIT modeling online to a large database, "Large-scale database-based Online Modeling (LOM)" has been proposed. LOM is a technique that makes the retrieval of neighboring data more efficient by using both "stepwise selection" and quantization. In order to predict the long-term state of the plant without using future data of manipulated variables, an Extended Sequential Prediction method of LOM (ESP-LOM) has been proposed. In this paper, the LOM and the ESP-LOM are introduced.
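
    The JIT idea referenced here — retrieve the records nearest the current operating point and fit a small local model on demand — can be sketched as follows. The retrieval-efficiency machinery of LOM (stepwise selection, quantization) is not reproduced, and all data, dimensions and parameter names are placeholders.

```python
import numpy as np

def jit_predict(database_X, database_y, query, k=50, ridge=1e-3):
    """Just-In-Time (lazy) local modeling: for each query, retrieve the k
    nearest records in the database and fit a small local linear model.
    """
    # 1. Retrieve neighbours of the current operating point
    dist = np.linalg.norm(database_X - query, axis=1)
    idx = np.argsort(dist)[:k]
    Xn, yn = database_X[idx], database_y[idx]

    # 2. Fit a local (ridge-regularized) linear model on the neighbours only
    A = np.hstack([Xn, np.ones((k, 1))])
    coef = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ yn)

    # 3. Predict at the query point, then discard the local model
    return np.append(query, 1.0) @ coef

# Synthetic stand-in for a large historical plant database
rng = np.random.default_rng(6)
X_db = rng.normal(size=(100000, 5))
y_db = X_db @ np.array([1.0, -0.5, 0.2, 0.0, 0.3]) + rng.normal(0, 0.1, 100000)
print(jit_predict(X_db, y_db, query=np.zeros(5)))
```

    Because a fresh local model is built for every query, the cost is dominated by neighbour retrieval, which is exactly the step LOM accelerates with stepwise selection and quantization.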

  15. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  16. Preferential pathways in complex fracture systems and their influence on large scale transport

    NASA Astrophysics Data System (ADS)

    Willmann, M.; Mañé, R.; Tyukhova, A.

    2017-12-01

    Many subsurface applications in complex fracture systems require large-scale predictions. Precise predictions are difficult because of the existence of preferential pathways at different scales. The intrinsic complexity of fracture systems increases within fractured sedimentary formations, because the coupling between fractures and matrix must also be taken into account. This interplay between the fracture system and the sedimentary matrix is strongly controlled by the actual fracture aperture of an individual fracture, and an effective aperture cannot easily be determined because of the preferential pathways along the fracture plane. We investigate the influence of these preferential pathways on large-scale solute transport and upscale the aperture. By explicitly modeling flow and particle tracking in individual fractures, we develop a new effective transport aperture, weighted by the aperture along the preferential paths: a Lagrangian aperture. We show that this new aperture is consistently larger than existing definitions of effective flow and transport apertures. Finally, we apply our results to a fractured sedimentary formation in Northern Switzerland.

  17. Large-scale transportation network congestion evolution prediction using deep learning theory.

    PubMed

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data become more and more ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous volumes of high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.

  18. Large-Scale Transportation Network Congestion Evolution Prediction Using Deep Learning Theory

    PubMed Central

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of these approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration processes. With the development of Intelligent Transportation Systems (ITS) and the Internet of Things (IoT), transportation data become more and more ubiquitous. This has triggered a series of data-driven studies investigating transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous volumes of high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy can reach as high as 88% within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation. PMID:25780910
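    The deep architecture in this study couples a Restricted Boltzmann Machine with a recurrent layer; reproducing that exactly is beyond an abstract, but the sequence-to-next-state formulation it rests on can be illustrated with a plain recurrent network. The PyTorch sketch below is a toy with assumed dimensions and random data, not the authors' model.

        import torch
        import torch.nn as nn

        class CongestionRNN(nn.Module):
            """Toy formulation: past congestion states of every link in -> probability
            that each link is congested at the next time step out."""
            def __init__(self, n_links, hidden=64):
                super().__init__()
                self.rnn = nn.RNN(input_size=n_links, hidden_size=hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_links)

            def forward(self, x):                    # x: (batch, time, n_links) of 0/1 flags
                out, _ = self.rnn(x)
                return torch.sigmoid(self.head(out[:, -1]))

        model = CongestionRNN(n_links=100)
        x = torch.randint(0, 2, (32, 12, 100)).float()   # 12 hypothetical past intervals
        y = torch.randint(0, 2, (32, 100)).float()       # observed next-interval states
        loss = nn.functional.binary_cross_entropy(model(x), y)
        loss.backward()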

  19. Nightside Detection of a Large-Scale Thermospheric Wave Generated by a Solar Eclipse

    NASA Astrophysics Data System (ADS)

    Harding, B. J.; Drob, D. P.; Buriti, R. A.; Makela, J. J.

    2018-04-01

    The generation of a large-scale wave in the upper atmosphere caused by a solar eclipse was first predicted in the 1970s, but the experimental evidence remains sparse and comprises mostly indirect observations. This study presents observations of the wind component of a large-scale thermospheric wave generated by the 21 August 2017 total solar eclipse. In contrast with previous studies, the observations are made on the nightside, after the eclipse ended. A ground-based interferometer located in northeastern Brazil is used to monitor the Doppler shift of the 630.0-nm airglow emission, providing direct measurements of the wind and temperature in the thermosphere, where eclipse effects are expected to be the largest. A disturbance is seen in the zonal and meridional wind which is at or above the 90% significance level based on the measured 30-day variability. These observations are compared with a first principles numerical model calculation from the Thermosphere-Ionosphere-Mesosphere-Electrodynamics General Circulation Model, which predicted the propagation of a large-scale wave well into the nightside. The modeled disturbance matches well the difference between the wind measurements and the 30-day median, though the measured perturbation (˜60 m/s) is larger than the prediction (38 m/s) for the meridional wind. No clear evidence for the wave is seen in the temperature data, however.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seljak, Uroš, E-mail: useljak@berkeley.edu

    On large scales a nonlinear transformation of matter density field can be viewed as a biased tracer of the density field itself. A nonlinear transformation also modifies the redshift space distortions in the same limit, giving rise to a velocity bias. In models with primordial nongaussianity a nonlinear transformation generates a scale dependent bias on large scales. We derive analytic expressions for the large scale bias, the velocity bias and the redshift space distortion (RSD) parameter β, as well as the scale dependent bias from primordial nongaussianity for a general nonlinear transformation. These biases can be expressed entirely in terms of the one point distribution function (PDF) of the final field and the parameters of the transformation. The analysis shows that one can view the large scale bias different from unity and primordial nongaussianity bias as a consequence of converting higher order correlations in density into 2-point correlations of its nonlinear transform. Our analysis allows one to devise nonlinear transformations with nearly arbitrary bias properties, which can be used to increase the signal in the large scale clustering limit. We apply the results to the ionizing equilibrium model of Lyman-α forest, in which Lyman-α flux F is related to the density perturbation δ via a nonlinear transformation. Velocity bias can be expressed as an average over the Lyman-α flux PDF. At z = 2.4 we predict the velocity bias of -0.1, compared to the observed value of −0.13±0.03. Bias and primordial nongaussianity bias depend on the parameters of the transformation. Measurements of bias can thus be used to constrain these parameters, and for reasonable values of the ionizing background intensity we can match the predictions to observations. Matching to the observed values we predict the ratio of primordial nongaussianity bias to bias to have the opposite sign and lower magnitude than the corresponding values for the highly biased galaxies, but this depends on the model parameters and can also vanish or change the sign.

  1. Rapid high-throughput characterisation, classification and selection of recombinant mammalian cell line phenotypes using intact cell MALDI-ToF mass spectrometry fingerprinting and PLS-DA modelling.

    PubMed

    Povey, Jane F; O'Malley, Christopher J; Root, Tracy; Martin, Elaine B; Montague, Gary A; Feary, Marc; Trim, Carol; Lang, Dietmar A; Alldread, Richard; Racher, Andrew J; Smales, C Mark

    2014-08-20

    Despite many advances in the generation of high producing recombinant mammalian cell lines over the last few decades, cell line selection and development are often slowed by the inability to predict a cell line's phenotypic characteristics (e.g. growth or recombinant protein productivity) at larger scale (large volume bioreactors) using data from early cell line construction at small culture scale. Here we describe the development of an intact cell MALDI-ToF mass spectrometry fingerprinting method for mammalian cells early in the cell line construction process whereby the resulting mass spectrometry data are used to predict the phenotype of mammalian cell lines at larger culture scale using a Partial Least Squares Discriminant Analysis (PLS-DA) model. Using MALDI-ToF mass spectrometry, a library of mass spectrometry fingerprints was generated for individual cell lines at the 96 deep well plate stage of cell line development. The growth and productivity of these cell lines were evaluated in a 10L bioreactor model of Lonza's large-scale (up to 20,000L) fed-batch cell culture processes. Using the mass spectrometry information at the 96 deep well plate stage and phenotype information at the 10L bioreactor scale, a PLS-DA model was developed to predict the productivity of unknown cell lines at the 10L scale based upon their MALDI-ToF fingerprint at the 96 deep well plate scale. This approach provides the basis for the very early prediction of cell lines' performance in cGMP manufacturing-scale bioreactors and the foundation for methods and models for predicting other mammalian cell phenotypes from rapid, intact-cell mass spectrometry based measurements. Copyright © 2014 Elsevier B.V. All rights reserved.
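    As a rough illustration of the modelling step only (not of the MALDI-ToF workflow itself), a PLS-DA classifier can be built by regressing dummy-coded class labels on the spectral fingerprints and assigning each sample to the highest-scoring class. The Python sketch below uses scikit-learn's PLS regression on synthetic, hypothetical data.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        # hypothetical data: rows are cell lines, columns are binned m/z intensities,
        # labels are productivity classes later observed at bioreactor scale (0 = low, 1 = high)
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 500))
        y = rng.integers(0, 2, size=60)

        Y = np.column_stack([y == 0, y == 1]).astype(float)   # dummy-coded classes for PLS-DA
        pls = PLSRegression(n_components=5).fit(X, Y)

        scores = pls.predict(X)                    # continuous class scores
        predicted = scores.argmax(axis=1)          # assign each fingerprint to the top class
        print((predicted == y).mean())             # in-sample accuracy of the toy model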

  2. The role of the airline transportation network in the prediction and predictability of global epidemics.

    PubMed

    Colizza, Vittoria; Barrat, Alain; Barthélemy, Marc; Vespignani, Alessandro

    2006-02-14

    The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment.
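    The authors' framework couples detailed census and air-travel data; as a hedged sketch of the underlying mechanism only — stochastic SIR dynamics in each city, coupled by travel along a flux matrix — the following Python snippet uses three hypothetical cities and made-up parameters.

        import numpy as np

        rng = np.random.default_rng(0)

        def step(S, I, R, beta, gamma, W, dt=1.0):
            """One day of a stochastic metapopulation SIR: local transmission and
            recovery, then redistribution of infectious travellers along W, whose
            rows sum to 1 and include the probability of staying put (W[i, i])."""
            N = np.maximum(S + I + R, 1)
            new_inf = rng.binomial(S, 1.0 - np.exp(-beta * I / N * dt))
            new_rec = rng.binomial(I, 1.0 - np.exp(-gamma * dt))
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
            moved = np.array([rng.multinomial(I[i], W[i]) for i in range(len(I))])
            return S, moved.sum(axis=0), R

        # three hypothetical cities, one seed case, assumed travel matrix W
        S = np.array([999_999, 500_000, 2_000_000]); I = np.array([1, 0, 0]); R = np.zeros(3, int)
        W = np.array([[0.98, 0.01, 0.01], [0.02, 0.97, 0.01], [0.005, 0.005, 0.99]])
        for day in range(60):
            S, I, R = step(S, I, R, beta=0.5, gamma=0.25, W=W)
        print(I)   # rerunning gives different outbreak sizes: the stochasticity the paper quantifies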

  3. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction.

    PubMed

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-04-10

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
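    The time-space matrix formulation can be made concrete with a few lines of PyTorch; the sketch below is a toy stand-in (hypothetical link count, random inputs, arbitrary layer sizes) for the idea of treating past speeds as a one-channel image and regressing next-interval speeds, not the architecture the paper tuned.

        import torch
        import torch.nn as nn

        class SpeedCNN(nn.Module):
            """Past speeds as a (time x links) image in, next-interval speed per link out."""
            def __init__(self, n_links, n_steps):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.head = nn.Linear(32 * (n_steps // 4) * (n_links // 4), n_links)

            def forward(self, x):                    # x: (batch, 1, n_steps, n_links)
                return self.head(self.features(x).flatten(1))

        model = SpeedCNN(n_links=64, n_steps=32)
        x = torch.rand(8, 1, 32, 64)                 # hypothetical normalized speed images
        print(model(x).shape)                        # torch.Size([8, 64])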

  4. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  5. A Ground-Based Research Vehicle for Base Drag Studies at Subsonic Speeds

    NASA Technical Reports Server (NTRS)

    Diebler, Corey; Smith, Mark

    2002-01-01

    A ground research vehicle (GRV) has been developed to study the base drag on large-scale vehicles at subsonic speeds. Existing models suggest that base drag is dependent upon vehicle forebody drag, and for certain configurations, the total drag of a vehicle can be reduced by increasing its forebody drag. Although these models work well for small projectile shapes, studies have shown that they do not provide accurate predictions when applied to large-scale vehicles. Experiments are underway at the NASA Dryden Flight Research Center to collect data at Reynolds numbers up to a maximum of 3 × 10^7, and to formulate a new model for predicting the base drag of trucks, buses, motor homes, reentry vehicles, and other large-scale vehicles. Preliminary tests have shown errors as great as 70 percent compared to Hoerner's two-dimensional base drag prediction. This report describes the GRV and its capabilities, details the studies currently underway at NASA Dryden, and presents preliminary results of both the effort to formulate a new base drag model and the investigation into a method of reducing total drag by manipulating forebody drag.

  6. Learning Traffic as Images: A Deep Convolutional Neural Network for Large-Scale Transportation Network Speed Prediction

    PubMed Central

    Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng

    2017-01-01

    This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks. PMID:28394270

  7. Anisotropies of the cosmic microwave background in nonstandard cold dark matter models

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Silk, Joseph

    1992-01-01

    Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.

  8. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    PubMed

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchal GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.

  9. Achievement in Large-Scale National Numeracy Assessment: An Ecological Study of Motivation and Student, Home, and School Predictors

    ERIC Educational Resources Information Center

    Martin, Andrew J.; Lazendic, Goran

    2018-01-01

    With the rise of large-scale academic assessment programs around the world, there is a need to better understand the factors predicting students' achievement in these assessment exercises. This investigation into national numeracy assessment drew on ecological and transactional conceptualizing involving student, student/home, and school factors.…

  10. From catchment scale hydrologic processes to numerical models and robust predictions of climate change impacts at regional scales

    NASA Astrophysics Data System (ADS)

    Wagener, T.

    2017-12-01

    Current societal problems and questions demand that we increasingly build hydrologic models for regional or even continental scale assessment of global change impacts. Such models offer new opportunities for scientific advancement, for example by enabling comparative hydrology or connectivity studies, and for improved support of water management decisions, since we might better understand regional impacts on water resources from large-scale phenomena such as droughts. On the other hand, we are faced with epistemic uncertainties when we move up in scale. The term epistemic uncertainty describes those uncertainties that are not well determined by historical observations. This lack of determination can be because the future is not like the past (e.g. due to climate change), because the historical data is unreliable (e.g. because it is imperfectly recorded from proxies or missing), or because it is scarce (either because measurements are not available at the right scale or there is no observation network available at all). In this talk I will explore: (1) how we might build a bridge between what we have learned about catchment scale processes and hydrologic model development and evaluation at larger scales; (2) how we can understand the impact of epistemic uncertainty in large-scale hydrologic models; and (3) how we might utilize large-scale hydrologic predictions to understand climate change impacts, e.g. on infectious disease risk.

  11. Coordinated Parameterization Development and Large-Eddy Simulation for Marine and Arctic Cloud-Topped Boundary Layers

    NASA Technical Reports Server (NTRS)

    Bretherton, Christopher S.

    2002-01-01

    The goal of this project was to compare observations of marine and arctic boundary layers with: (1) parameterization systems used in climate and weather forecast models; and (2) two and three dimensional eddy resolving (LES) models for turbulent fluid flow. Based on this comparison, we hoped to better understand, predict, and parameterize the boundary layer structure and cloud amount, type, and thickness as functions of large scale conditions that are predicted by global climate models. The principal achievements of the project were as follows: (1) Development of a novel boundary layer parameterization for large-scale models that better represents the physical processes in marine boundary layer clouds; and (2) Comparison of column output from the ECMWF global forecast model with observations from the SHEBA experiment. Overall the forecast model did predict most of the major precipitation events and synoptic variability observed over the year of observation of the SHEBA ice camp.

  12. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles.

    PubMed

    Lampa, Samuel; Alvarsson, Jonathan; Spjuth, Ola

    2016-01-01

    Predictive modelling in drug discovery is challenging to automate as it often contains multiple analysis steps and might involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid in many of these challenges, but the currently available systems are lacking in the functionality needed to enable agile and flexible predictive modelling. We here present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
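    The SciLuigi extension itself is not reproduced here; as a hedged illustration of the kind of dependency graph such workflow systems express (parameter sweeps and cross-validation folds as tasks with declared inputs and outputs), the sketch below uses plain Luigi with hypothetical task names and file paths.

        import luigi

        class PrepareData(luigi.Task):
            fold = luigi.IntParameter()

            def output(self):
                return luigi.LocalTarget(f"data/fold{self.fold}.csv")

            def run(self):
                with self.output().open("w") as f:
                    f.write("feature,label\n1,0\n2,1\n")   # placeholder data split

        class TrainModel(luigi.Task):
            """One modelling step of a hypothetical cross-validated parameter sweep."""
            fold = luigi.IntParameter()
            cost = luigi.FloatParameter()

            def requires(self):
                return PrepareData(fold=self.fold)

            def output(self):
                return luigi.LocalTarget(f"models/fold{self.fold}_c{self.cost}.txt")

            def run(self):
                with self.input().open() as fin, self.output().open("w") as fout:
                    fout.write(f"trained on {len(fin.readlines())} rows, cost={self.cost}\n")

        if __name__ == "__main__":
            # the scheduler resolves dependencies and reruns only tasks with missing outputs
            luigi.build([TrainModel(fold=f, cost=c) for f in range(3) for c in (0.1, 1.0)],
                        local_scheduler=True)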

  13. Aqueous Two-Phase Systems at Large Scale: Challenges and Opportunities.

    PubMed

    Torres-Acosta, Mario A; Mayolo-Deloisa, Karla; González-Valdez, José; Rito-Palomares, Marco

    2018-06-07

    Aqueous two-phase systems (ATPS) have proved to be an efficient and integrative operation to enhance recovery of industrially relevant bioproducts. After ATPS discovery, a variety of works have been published regarding their scaling from 10 to 1000 L. Although ATPS have achieved high recovery and purity yields, there is still a gap between their bench-scale use and potential industrial applications. In this context, this review paper critically analyzes ATPS scale-up strategies to enhance the potential industrial adoption. In particular, large-scale operation considerations, different phase separation procedures, the available optimization techniques (univariate, response surface methodology, and genetic algorithms) to maximize recovery and purity and economic modeling to predict large-scale costs, are discussed. ATPS intensification to increase the amount of sample to process at each system, developing recycling strategies and creating highly efficient predictive models, are still areas of great significance that can be further exploited with the use of high-throughput techniques. Moreover, the development of novel ATPS can maximize their specificity increasing the possibilities for the future industry adoption of ATPS. This review work attempts to present the areas of opportunity to increase ATPS attractiveness at industrial levels. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Proceedings of the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at Pollard Auditorium, Oak Ridge, Tennessee

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugh, C.E.; Bass, B.R.; Keeney, J.A.

    This report contains 40 papers that were presented at the Joint IAEA/CSNI Specialists' Meeting on Fracture Mechanics Verification by Large-Scale Testing held at the Pollard Auditorium, Oak Ridge, Tennessee, during the week of October 26-29, 1992. The papers are printed in the order of their presentation in each session and describe recent large-scale fracture (brittle and/or ductile) experiments, analyses of these experiments, and comparisons between predictions and experimental results. The goal of the meeting was to allow international experts to examine the fracture behavior of various materials and structures under conditions relevant to nuclear reactor components and operating environments. The emphasis was on the ability of various fracture models and analysis methods to predict the wide range of experimental data now available. The individual papers have been cataloged separately.

  15. Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.; hide

    2001-01-01

    It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence from a compilation of over two decades of accurate satellite data that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. The results of several current climate model simulations fail to predict this large observed variation in the tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on interannual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal time scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.

  16. Numerical simulation of a plane turbulent mixing layer, with applications to isothermal, rapid reactions

    NASA Technical Reports Server (NTRS)

    Lin, P.; Pratt, D. T.

    1987-01-01

    A hybrid method has been developed for the numerical prediction of turbulent mixing in a spatially-developing, free shear layer. Most significantly, the computation incorporates the effects of large-scale structures, Schmidt number and Reynolds number on mixing, which have been overlooked in the past. In flow field prediction, large-eddy simulation was conducted by a modified 2-D vortex method with subgrid-scale modeling. The predicted mean velocities, shear layer growth rates, Reynolds stresses, and the RMS of longitudinal velocity fluctuations were found to be in good agreement with experiments, although the lateral velocity fluctuations were overpredicted. In scalar transport, the Monte Carlo method was extended to the simulation of the time-dependent pdf transport equation. For the first time, the mixing frequency in Curl's coalescence/dispersion model was estimated by using Broadwell and Breidenthal's theory of micromixing, which involves Schmidt number, Reynolds number and the local vorticity. Numerical tests were performed for a gaseous case and an aqueous case. Evidence that pure freestream fluids are entrained into the layer by large-scale motions was found in the predicted pdf. Mean concentration profiles were found to be insensitive to Schmidt number, while the unmixedness was higher for higher Schmidt number. Applications were made to mixing layers with isothermal, fast reactions. The predicted difference in product thickness of the two cases was in reasonable quantitative agreement with experimental measurements.
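    Curl's coalescence/dispersion model referred to above lends itself to a very small sketch: notional particles carrying a scalar composition are paired at random at a prescribed mixing frequency and each selected pair relaxes to its mean. The Python snippet below illustrates only that step, with an assumed constant mixing frequency; estimating that frequency from Schmidt number, Reynolds number and local vorticity, as in the study, is not shown.

        import numpy as np

        rng = np.random.default_rng(0)

        def curl_mixing_step(phi, omega_mix, dt):
            """One coalescence/dispersion step: a number of particle pairs set by the
            mixing frequency is drawn, and each selected pair mixes to its mean."""
            n = len(phi)
            n_pairs = rng.poisson(0.5 * omega_mix * dt * n)
            for _ in range(n_pairs):
                i, j = rng.choice(n, size=2, replace=False)
                phi[i] = phi[j] = 0.5 * (phi[i] + phi[j])
            return phi

        # two initially unmixed streams of notional particles
        phi = np.concatenate([np.zeros(500), np.ones(500)])
        for _ in range(200):
            phi = curl_mixing_step(phi, omega_mix=5.0, dt=0.01)
        print(phi.std())   # unmixedness decays as mixing proceeds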

  17. Dancing to CHANGA: a self-consistent prediction for close SMBH pair formation time-scales following galaxy mergers

    NASA Astrophysics Data System (ADS)

    Tremmel, M.; Governato, F.; Volonteri, M.; Quinn, T. R.; Pontzen, A.

    2018-04-01

    We present the first self-consistent prediction for the distribution of formation time-scales for close supermassive black hole (SMBH) pairs following galaxy mergers. Using ROMULUS25, the first large-scale cosmological simulation to accurately track the orbital evolution of SMBHs within their host galaxies down to sub-kpc scales, we predict an average formation rate density of close SMBH pairs of 0.013 cMpc-3 Gyr-1. We find that it is relatively rare for galaxy mergers to result in the formation of close SMBH pairs with sub-kpc separation and those that do form are often the result of Gyr of orbital evolution following the galaxy merger. The likelihood and time-scale to form a close SMBH pair depends strongly on the mass ratio of the merging galaxies, as well as the presence of dense stellar cores. Low stellar mass ratio mergers with galaxies that lack a dense stellar core are more likely to become tidally disrupted and deposit their SMBH at large radii without any stellar core to aid in their orbital decay, resulting in a population of long-lived `wandering' SMBHs. Conversely, SMBHs in galaxies that remain embedded within a stellar core form close pairs in much shorter time-scales on average. This time-scale is a crucial, though often ignored or very simplified, ingredient to models predicting SMBH mergers rates and the connection between SMBH and star formation activity.

  18. Predicting Regional Drought on Sub-Seasonal to Decadal Time Scales

    NASA Technical Reports Server (NTRS)

    Schubert, Siegfried; Wang, Hailan; Suarez, Max; Koster, Randal

    2011-01-01

    Drought occurs on a wide range of time scales, and within a variety of different types of regional climates. It is driven foremost by an extended period of reduced precipitation, but it is the impacts on such quantities as soil moisture, streamflow and crop yields that are often most important from a user's perspective. While recognizing that different users have different needs for drought information, it is nevertheless important to understand that progress in predicting drought and satisfying such user needs largely hinges on our ability to improve predictions of precipitation. This talk reviews our current understanding of the physical mechanisms that drive precipitation variations on subseasonal to decadal time scales, and the implications for predictability and prediction skill. Examples are given highlighting the phenomena and mechanisms controlling precipitation on monthly (e.g., stationary Rossby waves, soil moisture), seasonal (ENSO) and decadal time scales (PD and AMO).

  19. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    NASA Astrophysics Data System (ADS)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited area model simulation. The limited area model is a two-layer quasi-geostrophic model on the beta-plane, driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. Then, the effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In both sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
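    The Lyapunov-exponent diagnostic described above amounts to fitting the early exponential growth of the separation between a reference run and a perturbed run. A minimal Python sketch, with hypothetical array names and an assumed choice of fitting window:

        import numpy as np

        def leading_lyapunov(reference, perturbed, dt):
            """Estimate the leading Lyapunov exponent from two model trajectories
            (arrays of shape (n_steps, n_state)) started from slightly different states."""
            sep = np.linalg.norm(perturbed - reference, axis=1)
            growth = np.log(sep / sep[0])
            t = np.arange(len(sep)) * dt
            window = slice(1, len(sep) // 3)          # early, roughly exponential phase
            lam = np.polyfit(t[window], growth[window], 1)[0]
            return lam                                 # predictability time ~ 1 / lam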

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schreck, S.; Sant, T.; Micallef, D.

    Wind turbine structures and components suffer excessive loads and premature failures when key aerodynamic phenomena are not well characterized, fail to be understood, or are inaccurately predicted. Turbine blade rotational augmentation remains incompletely characterized and understood, thus limiting robust prediction for design. Pertinent rotational augmentation research including experimental, theoretical, and computational work has been pursued for some time, but large scale wind tunnel testing is a relatively recent development for investigating wind turbine blade aerodynamics. Because of their large scale and complementary nature, the MEXICO and UAE Phase VI wind tunnel experiments offer unprecedented synergies to better characterize and understand rotational augmentation of blade aerodynamics.

  1. Advances and trends in computational structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1986-01-01

    Recent developments in computational structural mechanics are reviewed with reference to computational needs for future structures technology, advances in computational models for material behavior, discrete element technology, assessment and control of numerical simulations of structural response, hybrid analysis, and techniques for large-scale optimization. Research areas in computational structural mechanics which have high potential for meeting future technological needs are identified. These include prediction and analysis of the failure of structural components made of new materials, development of computational strategies and solution methodologies for large-scale structural calculations, and assessment of reliability and adaptive improvement of response predictions.

  2. Predictability of Circulation Transitions (Observed and Modeled): Non-diffusive Dynamics, Markov Chains and Error Growth.

    NASA Astrophysics Data System (ADS)

    Straus, D. M.

    2006-12-01

    The transitions between portions of the state space of the large-scale flow are studied from daily wintertime data over the Pacific-North America region, using the NCEP reanalysis data set (54 winters) and very large suites of hindcasts made with the COLA atmospheric GCM with observed SST (55 members for each of 18 winters). The partition of the large-scale state space is guided by cluster analysis, whose statistical significance and relationship to SST are reviewed (Straus and Molteni, 2004; Straus, Corti and Molteni, 2006). The global nature of the flow through state space is determined using Markov chains (Crommelin, 2004). In particular, the non-diffusive part of the flow is contrasted between nature (small data sample) and the AGCM (large data sample). The intrinsic error growth associated with different portions of the state space is studied through sets of identical twin AGCM simulations. The goal is to obtain realistic estimates of predictability times for large-scale transitions that should be useful in long-range forecasting.
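    The Markov-chain step described above reduces, in its simplest form, to counting transitions between cluster labels on successive days and normalising the rows; the asymmetric part of the resulting probability flux is what flags non-diffusive, preferred-direction circulation through state space. The Python sketch below uses a random label sequence purely as a placeholder.

        import numpy as np

        def transition_matrix(labels, n_states):
            """Maximum-likelihood transition matrix from a daily sequence of regime labels."""
            counts = np.zeros((n_states, n_states))
            for a, b in zip(labels[:-1], labels[1:]):
                counts[a, b] += 1
            return counts / counts.sum(axis=1, keepdims=True)

        rng = np.random.default_rng(3)
        labels = rng.integers(0, 4, size=5400)          # placeholder for 54 winters of daily regimes
        P = transition_matrix(labels, 4)
        pi = np.bincount(labels, minlength=4) / len(labels)
        flux = pi[:, None] * P                          # probability flux between regimes
        print(0.5 * (flux - flux.T))                    # antisymmetric part: non-diffusive transitions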

  3. Confirmation of general relativity on large scales from weak lensing and galaxy velocities.

    PubMed

    Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E; Lombriser, Lucas; Smith, Robert E

    2010-03-11

    Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, E(G), that combines measures of large-scale gravitational lensing, galaxy clustering and structure growth rate. The combination is insensitive to 'galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of E(G) different from the general relativistic prediction because, in these theories, the 'gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that E(G) = 0.39 +/- 0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of E(G) approximately 0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.
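    For context, the general relativistic value quoted above follows, at linear order and independently of scale, from background quantities alone. A hedged sketch of the standard relation, assuming a flat ΛCDM background and the usual growth-rate approximation:

        E_G \simeq \frac{\Omega_{m,0}}{f(z)}, \qquad f(z) \approx \Omega_m(z)^{0.55}

    With Ω_m,0 of roughly 0.25-0.3 evaluated at the effective redshift of the lens sample, this gives E_G near 0.4, the value against which the measurement is compared.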

  4. Confirmation of general relativity on large scales from weak lensing and galaxy velocities

    NASA Astrophysics Data System (ADS)

    Reyes, Reinabelle; Mandelbaum, Rachel; Seljak, Uros; Baldauf, Tobias; Gunn, James E.; Lombriser, Lucas; Smith, Robert E.

    2010-03-01

    Although general relativity underlies modern cosmology, its applicability on cosmological length scales has yet to be stringently tested. Such a test has recently been proposed, using a quantity, EG, that combines measures of large-scale gravitational lensing, galaxy clustering and structure growth rate. The combination is insensitive to `galaxy bias' (the difference between the clustering of visible galaxies and invisible dark matter) and is thus robust to the uncertainty in this parameter. Modified theories of gravity generally predict values of EG different from the general relativistic prediction because, in these theories, the `gravitational slip' (the difference between the two potentials that describe perturbations in the gravitational metric) is non-zero, which leads to changes in the growth of structure and the strength of the gravitational lensing effect. Here we report that EG = 0.39+/-0.06 on length scales of tens of megaparsecs, in agreement with the general relativistic prediction of EG ~ 0.4. The measured value excludes a model within the tensor-vector-scalar gravity theory, which modifies both Newtonian and Einstein gravity. However, the relatively large uncertainty still permits models within f(R) theory, which is an extension of general relativity. A fivefold decrease in uncertainty is needed to rule out these models.

  5. Accurate prediction of personalized olfactory perception from large-scale chemoinformatic features.

    PubMed

    Li, Hongyang; Panwar, Bharat; Omenn, Gilbert S; Guan, Yuanfang

    2018-02-01

    The olfactory stimulus-percept problem has been studied for more than a century, yet it is still hard to precisely predict the odor given the large-scale chemoinformatic features of an odorant molecule. A major challenge is that the perceived qualities vary greatly among individuals due to different genetic and cultural backgrounds. Moreover, the combinatorial interactions between multiple odorant receptors and diverse molecules significantly complicate the olfaction prediction. Many attempts have been made to establish structure-odor relationships for intensity and pleasantness, but no models are available to predict the personalized multi-odor attributes of molecules. In this study, we describe our winning algorithm for predicting individual and population perceptual responses to various odorants in the DREAM Olfaction Prediction Challenge. We find that a random forest model consisting of multiple decision trees is well suited to this prediction problem, given the large feature spaces and high variability of perceptual ratings among individuals. Integrating both population and individual perceptions into our model effectively reduces the influence of noise and outliers. By analyzing the importance of each chemical feature, we find that a small set of low- and nondegenerative features is sufficient for accurate prediction. Our random forest model successfully predicts personalized odor attributes of structurally diverse molecules. This model together with the top discriminative features has the potential to extend our understanding of olfactory perception mechanisms and provide an alternative for rational odorant design.
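    The modelling core described above — a random forest regressor over chemoinformatic descriptors, with feature importances used to isolate a small discriminative subset — can be sketched in a few lines of Python. The data below are synthetic stand-ins, not the DREAM challenge data.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # hypothetical stand-in: rows are molecules, columns are chemoinformatic descriptors,
        # the target is a panel-averaged perceptual rating (e.g. intensity)
        rng = np.random.default_rng(4)
        X = rng.normal(size=(400, 1000))
        y = X[:, :5] @ np.array([1.0, -0.5, 0.3, 0.2, 0.4]) + rng.normal(scale=0.5, size=400)

        model = RandomForestRegressor(n_estimators=500, random_state=0, n_jobs=-1)
        model.fit(X[:300], y[:300])
        print(np.corrcoef(model.predict(X[300:]), y[300:])[0, 1])   # held-out correlation

        # ranking descriptors by importance mirrors the finding that a small
        # feature subset carries most of the predictive signal
        print(np.argsort(model.feature_importances_)[::-1][:10])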

  6. Predicting Hydrologic Function With Aquatic Gene Fragments

    NASA Astrophysics Data System (ADS)

    Good, S. P.; URycki, D. R.; Crump, B. C.

    2018-03-01

    Recent advances in microbiology techniques, such as genetic sequencing, allow for rapid and cost-effective collection of large quantities of genetic information carried within water samples. Here we posit that the unique composition of aquatic DNA material within a water sample contains relevant information about hydrologic function at multiple temporal scales. In this study, machine learning was used to develop discharge prediction models trained on the relative abundance of bacterial taxa classified into operational taxonomic units (OTUs) based on 16S rRNA gene sequences from six large arctic rivers. We term this approach "genohydrology," and show that OTU relative abundances can be used to predict river discharge at monthly and longer timescales. Based on a single DNA sample from each river, the average Nash-Sutcliffe efficiency (NSE) for predicted mean monthly discharge values throughout the year was 0.84, while the NSE for predicted discharge values across different return intervals was 0.67. These are considerable improvements over predictions based only on the area-scaled mean specific discharge of five similar rivers, which had average NSE values of 0.64 and -0.32 for seasonal and recurrence interval discharge values, respectively. The genohydrology approach demonstrates that genetic diversity within the aquatic microbiome is a large and underutilized data resource with benefits for prediction of hydrologic function.

  7. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    NASA Astrophysics Data System (ADS)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, these affect measurements of clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of the statistical errors, and show that the degradation in the parameters can be recovered if we employ the prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to a larger wave number in the nonlinear regime can be included.

  8. Downscaling ocean conditions with application to the Gulf of Maine, Scotian Shelf and adjacent deep ocean

    NASA Astrophysics Data System (ADS)

    Katavouta, Anna; Thompson, Keith R.

    2016-08-01

    The overall goal is to downscale ocean conditions predicted by an existing global prediction system and evaluate the results using observations from the Gulf of Maine, Scotian Shelf and adjacent deep ocean. The first step is to develop a one-way nested regional model and evaluate its predictions using observations from multiple sources including satellite-borne sensors of surface temperature and sea level, CTDs, Argo floats and moored current meters. It is shown that the regional model predicts more realistic fields than the global system on the shelf because it has higher resolution and includes tides that are absent from the global system. However, in deep water the regional model misplaces deep ocean eddies and meanders associated with the Gulf Stream. This is not because the regional model's dynamics are flawed but rather is the result of internally generated variability in deep water that leads to decoupling of the regional model from the global system. To overcome this problem, the next step is to spectrally nudge the regional model to the large scales (length scales > 90 km) of the global system. It is shown this leads to more realistic predictions off the shelf. Wavenumber spectra show that even though spectral nudging constrains the large scales, it does not suppress the variability on small scales; on the contrary, it favours the formation of eddies with length scales below the cutoff wavelength of the spectral nudging.
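    Spectral nudging as described above can be reduced to a simple recipe: compute the difference between the regional and driving fields, keep only the Fourier components with wavelengths longer than the cutoff (90 km here), and relax the regional field toward the driver on those scales only. The Python sketch below does this in one dimension with assumed grid spacing and relaxation time; the actual system applies it in two dimensions inside the ocean model's time stepping.

        import numpy as np

        def spectral_nudge(regional, driver, dx_km, cutoff_km=90.0, tau_s=6 * 3600.0, dt_s=600.0):
            """Relax only wavelengths longer than cutoff_km toward the driving field."""
            k = np.fft.fftfreq(regional.shape[-1], d=dx_km)      # cycles per km
            keep = np.abs(k) < 1.0 / cutoff_km                    # large-scale wavenumbers only
            diff_hat = np.fft.fft(driver - regional, axis=-1)
            diff_hat[..., ~keep] = 0.0
            increment = np.real(np.fft.ifft(diff_hat, axis=-1))
            return regional + (dt_s / tau_s) * increment

        # hypothetical 1-D demo: 2000 km domain, 10 km grid spacing
        x = np.arange(0.0, 2000.0, 10.0)
        driver = np.sin(2 * np.pi * x / 1000.0)                               # large-scale signal
        regional = np.sin(2 * np.pi * x / 1000.0 + 0.5) + 0.3 * np.sin(2 * np.pi * x / 50.0)
        nudged = spectral_nudge(regional, driver, dx_km=10.0)
        print(np.abs(nudged - regional).max())                                # small per-step increment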

  9. A simple phenomenological model for grain clustering in turbulence

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-01-01

    We propose a simple model for density fluctuations of aerodynamic grains, embedded in a turbulent, gravitating gas disc. The model combines a calculation for the behaviour of a group of grains encountering a single turbulent eddy, with a hierarchical approximation of the eddy statistics. This makes analytic predictions for a range of quantities including: distributions of grain densities, power spectra and correlation functions of fluctuations, and maximum grain densities reached. We predict how these scale as a function of grain drag time t_s, spatial scale, grain-to-gas mass ratio ρ̃, strength of turbulence α, and detailed disc properties. We test these against numerical simulations with various turbulence-driving mechanisms. The simulations agree well with the predictions, spanning t_s Ω ~ 10^-4 to 10, ρ̃ ~ 0 to 3, α ~ 10^-10 to 10^-2. Results from `turbulent concentration' simulations and laboratory experiments are also predicted as a special case. Vortices on a wide range of scales disperse and concentrate grains hierarchically. For small grains this is most efficient in eddies with turnover time comparable to the stopping time, but fluctuations are also damped by local gas-grain drift. For large grains, shear and gravity lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. The grain density distribution has a log-Poisson shape, with fluctuations for large grains up to factors ≳1000. We provide simple analytic expressions for the predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.

  10. Predicting Species Distributions Using Record Centre Data: Multi-Scale Modelling of Habitat Suitability for Bat Roosts.

    PubMed

    Bellamy, Chloe; Altringham, John

    2015-01-01

    Conservation increasingly operates at the landscape scale. For this to be effective, we need landscape scale information on species distributions and the environmental factors that underpin them. Species records are becoming increasingly available via data centres and online portals, but they are often patchy and biased. We demonstrate how such data can yield useful habitat suitability models, using bat roost records as an example. We analysed the effects of environmental variables at eight spatial scales (500 m - 6 km) on roost selection by eight bat species (Pipistrellus pipistrellus, P. pygmaeus, Nyctalus noctula, Myotis mystacinus, M. brandtii, M. nattereri, M. daubentonii, and Plecotus auritus) using the presence-only modelling software MaxEnt. Modelling was carried out on a selection of 418 data centre roost records from the Lake District National Park, UK. Target group pseudoabsences were selected to reduce the impact of sampling bias. Multi-scale models, combining variables measured at their best performing spatial scales, were used to predict roosting habitat suitability, yielding models with useful predictive abilities. Small areas of deciduous woodland consistently increased roosting habitat suitability, but other habitat associations varied between species and scales. Pipistrellus were positively related to built environments at small scales, and depended on large-scale woodland availability. The other, more specialist, species were highly sensitive to human-altered landscapes, avoiding even small rural towns. The strength of many relationships at large scales suggests that bats are sensitive to habitat modifications far from the roost itself. The fine resolution, large extent maps will aid targeted decision-making by conservationists and planners. We have made available an ArcGIS toolbox that automates the production of multi-scale variables, to facilitate the application of our methods to other taxa and locations. Habitat suitability modelling has the potential to become a standard tool for supporting landscape-scale decision-making as relevant data and open source, user-friendly, and peer-reviewed software become widely available.

  11. An imperative need for global change research in tropical forests.

    PubMed

    Zhou, Xuhui; Fu, Yuling; Zhou, Lingyan; Li, Bo; Luo, Yiqi

    2013-09-01

    Tropical forests play a crucial role in regulating regional and global climate dynamics, and model projections suggest that rapid climate change may result in forest dieback or savannization. However, these predictions are largely based on results from leaf-level studies. How tropical forests respond and feedback to climate change is largely unknown at the ecosystem level. Several complementary approaches have been used to evaluate the effects of climate change on tropical forests, but the results are conflicting, largely due to confounding effects of multiple factors. Although altered precipitation and nitrogen deposition experiments have been conducted in tropical forests, large-scale warming and elevated carbon dioxide (CO2) manipulations are completely lacking, leaving many hypotheses and model predictions untested. Ecosystem-scale experiments to manipulate temperature and CO2 concentration individually or in combination are thus urgently needed to examine their main and interactive effects on tropical forests. Such experiments will provide indispensable data and help gain essential knowledge on biogeochemical, hydrological and biophysical responses and feedbacks of tropical forests to climate change. These datasets can also inform regional and global models for predicting future states of tropical forests and climate systems. The success of such large-scale experiments in natural tropical forests will require an international framework to coordinate collaboration so as to meet the challenges in cost, technological infrastructure and scientific endeavor.

  12. Extensions of Island Biogeography Theory predict the scaling of functional trait composition with habitat area and isolation.

    PubMed

    Jacquet, Claire; Mouillot, David; Kulbicki, Michel; Gravel, Dominique

    2017-02-01

    The Theory of Island Biogeography (TIB) predicts how area and isolation influence species richness equilibrium on insular habitats. However, the TIB remains silent about functional trait composition and provides no information on the scaling of functional diversity with area, an observation that is now documented in many systems. To fill this gap, we develop a probabilistic approach to predict the distribution of a trait as a function of habitat area and isolation, extending the TIB beyond the traditional species-area relationship. We compare model predictions to the body-size distribution of piscivorous and herbivorous fishes found on tropical reefs worldwide. We find that small and isolated reefs have a higher proportion of large-sized species than large and connected reefs. We also find that knowledge of species body-size and trophic position improves the predictions of fish occupancy on tropical reefs, supporting both the allometric and trophic theory of island biogeography. The integration of functional ecology to island biogeography is broadly applicable to any functional traits and provides a general probabilistic approach to study the scaling of trait distribution with habitat area and isolation. © 2016 John Wiley & Sons Ltd/CNRS.

  13. Large-Scale Coherent Vortex Formation in Two-Dimensional Turbulence

    NASA Astrophysics Data System (ADS)

    Orlov, A. V.; Brazhnikov, M. Yu.; Levchenko, A. A.

    2018-04-01

    The evolution of a vortex flow excited by an electromagnetic technique in a thin layer of a conducting liquid was studied experimentally. Small-scale vortices, excited at the pumping scale, merge with time due to the nonlinear interaction and produce large-scale structures—the inverse energy cascade is formed. The dependence of the energy spectrum in the developed inverse cascade is well described by the Kraichnan law k^(-5/3). At large scales, the inverse cascade is limited by cell sizes, and a large-scale coherent vortex flow is formed, which occupies almost the entire area of the experimental cell. The radial profile of the azimuthal velocity of the coherent vortex immediately after the pumping was switched off has been established for the first time. Inside the vortex core, the azimuthal velocity grows linearly along a radius and reaches a constant value outside the core, which agrees well with the theoretical prediction.

  14. A first large-scale flood inundation forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie

    2013-11-04

    At present continental to global scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode revealed that it is crucial to account for basin-wide hydrological response time when assessing lead time performance, notwithstanding structural limitations in the hydrological model and possibly large inaccuracies in precipitation data.

  15. Vorticity, backscatter and counter-gradient transport predictions using two-level simulation of turbulent flows

    NASA Astrophysics Data System (ADS)

    Ranjan, R.; Menon, S.

    2018-04-01

    The two-level simulation (TLS) method evolves both the large- and the small-scale fields in a two-scale approach and has shown good predictive capabilities in both isotropic and wall-bounded high Reynolds number (Re) turbulent flows in the past. The sensitivity and ability of this modelling approach to predict fundamental features (such as backscatter, counter-gradient turbulent transport, small-scale vorticity, etc.) seen in high Re turbulent flows are assessed here by using two direct numerical simulation (DNS) datasets corresponding to a forced isotropic turbulence at Taylor's microscale-based Reynolds number Reλ ≈ 433 and a fully developed turbulent flow in a periodic channel at friction Reynolds number Reτ ≈ 1000. It is shown that TLS captures the dynamics of local co-/counter-gradient transport and backscatter at the requisite scales of interest. These observations are further confirmed through a posteriori investigation of the flow in a periodic channel at Reτ = 2000. The results reveal that the TLS method can capture both the large- and the small-scale flow physics in a consistent manner, and at a reduced overall cost when compared to the estimated DNS or wall-resolved LES cost.

  16. The Costs of Carnivory

    PubMed Central

    Carbone, Chris; Teacher, Amber; Rowcliffe, J. Marcus

    2007-01-01

    Mammalian carnivores fall into two broad dietary groups: smaller carnivores (<20 kg) that feed on very small prey (invertebrates and small vertebrates) and larger carnivores (>20 kg) that specialize in feeding on large vertebrates. We develop a model that predicts the mass-related energy budgets and limits of carnivore size within these groups. We show that the transition from small to large prey can be predicted by the maximization of net energy gain; larger carnivores achieve a higher net gain rate by concentrating on large prey. However, because it requires more energy to pursue and subdue large prey, this leads to a 2-fold step increase in energy expenditure, as well as increased intake. Across all species, energy expenditure and intake both follow a three-fourths scaling with body mass. However, when each dietary group is considered individually they both display a shallower scaling. This suggests that carnivores at the upper limits of each group are constrained by intake and adopt energy conserving strategies to counter this. Given predictions of expenditure and estimates of intake, we predict a maximum carnivore mass of approximately a ton, consistent with the largest extinct species. Our approach provides a framework for understanding carnivore energetics, size, and extinction dynamics. PMID:17227145

  17. On Matrix Sampling and Imputation of Context Questionnaires with Implications for the Generation of Plausible Values in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Kaplan, David; Su, Dan

    2016-01-01

    This article presents findings on the consequences of matrix sampling of context questionnaires for the generation of plausible values in large-scale assessments. Three studies are conducted. Study 1 uses data from PISA 2012 to examine several different forms of missing data imputation within the chained equations framework: predictive mean…

  18. Large-Scale Dynamics of the Magnetospheric Boundary: Comparisons between Global MHD Simulation Results and ISTP Observations

    NASA Technical Reports Server (NTRS)

    Berchem, J.; Raeder, J.; Ashour-Abdalla, M.; Frank, L. A.; Paterson, W. R.; Ackerson, K. L.; Kokubun, S.; Yamamoto, T.; Lepping, R. P.

    1998-01-01

    Understanding the large-scale dynamics of the magnetospheric boundary is an important step towards achieving the ISTP mission's broad objective of assessing the global transport of plasma and energy through the geospace environment. Our approach is based on three-dimensional global magnetohydrodynamic (MHD) simulations of the solar wind-magnetosphere-ionosphere system, and consists of using interplanetary magnetic field (IMF) and plasma parameters measured by solar wind monitors upstream of the bow shock as input to the simulations for predicting the large-scale dynamics of the magnetospheric boundary. The validity of these predictions is tested by comparing local data streams with time series measured by downstream spacecraft crossing the magnetospheric boundary. In this paper, we review results from several case studies which confirm that our MHD model reproduces very well the large-scale motion of the magnetospheric boundary. The first case illustrates the complexity of the magnetic field topology that can occur at the dayside magnetospheric boundary for periods of northward IMF with strong Bx and By components. The second comparison reviewed combines dynamic and topological aspects in an investigation of the evolution of the distant tail at 200 R_E from the Earth.

  19. Large eddy simulation of orientation and rotation of ellipsoidal particles in isotropic turbulent flows

    NASA Astrophysics Data System (ADS)

    Chen, Jincai; Jin, Guodong; Zhang, Jian

    2016-03-01

    The rotational motion and orientational distribution of ellipsoidal particles in turbulent flows are of significance in environmental and engineering applications. Whereas the translational motion of an ellipsoidal particle is controlled by the turbulent motions at large scales, its rotational motion is determined by the fluid velocity gradient tensor at small scales, which raises a challenge when predicting the rotational dispersion of ellipsoidal particles using the large eddy simulation (LES) method due to the lack of subgrid scale (SGS) fluid motions. We report the effects of the SGS fluid motions on orientational and rotational statistics, such as the alignment between the long axis of the ellipsoidal particles and the vorticity and the mean rotational energy at various aspect ratios, compared against those obtained with direct numerical simulation (DNS) and filtered DNS. The performances of a stochastic differential equation (SDE) model for the SGS velocity gradient seen by the particles and the approximate deconvolution method (ADM) for LES are investigated. It is found that the missing SGS fluid motions in LES flow fields have significant effects on the rotational statistics of ellipsoidal particles. Alignment between the particles and the vorticity is weakened, and the rotational energy of the particles is reduced in LES. The SGS-SDE model leads to a large error in predicting the alignment between the particles and the vorticity and over-predicts the rotational energy of rod-like particles. The ADM significantly improves the rotational energy prediction of particles in LES.

  20. Ice Accretion Test Results for Three Large-Scale Swept-Wing Models in the NASA Icing Research Tunnel

    NASA Technical Reports Server (NTRS)

    Broeren, Andy; Potapczuk, Mark; Lee, Sam; Malone, Adam; Paul, Ben; Woodard, Brian

    2016-01-01

    The design and certification of modern transport airplanes for flight in icing conditions increasingly relies on three-dimensional numerical simulation tools for ice accretion prediction. There is currently no publicly available, high-quality ice accretion database upon which to evaluate the performance of icing simulation tools for large-scale swept wings that are representative of modern commercial transport airplanes. The purpose of this presentation is to present the results of a series of icing wind tunnel test campaigns whose aim was to provide an ice accretion database for large-scale, swept wings.

  1. Optical correlator using very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators

    NASA Technical Reports Server (NTRS)

    Turner, Richard M.; Jared, David A.; Sharp, Gary D.; Johnson, Kristina M.

    1993-01-01

    The use of 2-kHz 64 x 64 very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators as the input and filter planes of a VanderLugt-type optical correlator is discussed. Liquid-crystal layer thickness variations that are present in the devices are analyzed, and the effects on correlator performance are investigated through computer simulations. Experimental results from the very-large-scale integrated circuit/ferroelectric-liquid-crystal optical-correlator system are presented and are consistent with the level of performance predicted by the simulations.

  2. Principle of Parsimony, Fake Science, and Scales

    NASA Astrophysics Data System (ADS)

    Yeh, T. C. J.; Wan, L.; Wang, X. S.

    2017-12-01

    Considering difficulties in predicting exact motions of water molecules, and the scale of our interests (bulk behaviors of many molecules), Fick's law (diffusion concept) has been created to predict the solute diffusion process in space and time. G.I. Taylor (1921) demonstrated that the random motion of the molecules reaches the Fickian regime in less than a second if our sampling scale is large enough to reach the ergodic condition. Fick's law is widely accepted for describing molecular diffusion as such. This fits the definition of the parsimony principle at the scale of our concern. Similarly, the advection-dispersion or convection-dispersion equation (ADE or CDE) has been found quite satisfactory for analysis of concentration breakthroughs of solute transport in uniformly packed soil columns. This is attributed to the solute often being released over the entire cross-section of the column, which has sampled many pore-scale heterogeneities and met the ergodicity assumption. Further, the uniformly packed column contains a large number of stationary pore-size heterogeneities. The solute thus reaches the Fickian regime after traveling a short distance along the column. Moreover, breakthrough curves are concentrations integrated over the column cross-section (the scale of our interest), and they meet the ergodicity assumption embedded in the ADE and CDE. To the contrary, scales of heterogeneity in most groundwater pollution problems evolve as contaminants travel. They are much larger than the scale of our observations and our interests, so that the ergodic and the Fickian conditions are difficult to meet. Upscaling Fick's law for solute dispersion, and deriving universal rules of dispersion for field- or basin-scale pollution migrations, are merely misuse of the parsimony principle and lead to fake science (i.e., the development of theories for predicting processes that cannot be observed). The appropriate principle of parsimony for these situations dictates mapping large-scale heterogeneities as detailed as possible and adapting Fick's law for effects of the small-scale heterogeneity resulting from our inability to characterize it in detail.

  3. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    NASA Astrophysics Data System (ADS)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2018-01-01

    Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions which are based on a statistical link identified from observations between local weather and a set of large-scale predictors. As physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and non-frequent weather configurations.
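
    A minimal sketch of the two-stage idea follows (illustrative only, not the authors' implementation): analogs of the prediction day are selected by distance on geopotential-height fields, and a binomial GLM for occurrence plus a Gamma GLM for non-zero amounts are then fitted on those analog days. Array shapes, predictor names, and the analog count are assumptions.

```python
# Sketch: analog selection followed by two GLMs (occurrence and amount).
import numpy as np
import statsmodels.api as sm

def find_analogs(target_field, archive_fields, k=100):
    """Indices of the k archive days whose geopotential fields are closest."""
    d = np.linalg.norm(archive_fields - target_field, axis=(1, 2))
    return np.argsort(d)[:k]

def fit_two_stage(predictors, precip, analog_idx):
    """Occurrence (binomial) and non-zero amount (Gamma) models on analog days."""
    X = sm.add_constant(predictors[analog_idx])
    p = precip[analog_idx]
    occurrence = sm.GLM((p > 0).astype(float), X,
                        family=sm.families.Binomial()).fit()
    wet = p > 0
    amount = sm.GLM(p[wet], X[wet],
                    family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    return occurrence, amount

# Toy archive: 3000 days of 10x10 geopotential fields and 3 local predictors.
rng = np.random.default_rng(0)
fields = rng.standard_normal((3000, 10, 10))
predictors = rng.standard_normal((3000, 3))
precip = np.where(rng.random(3000) < 0.4, rng.gamma(2.0, 3.0, 3000), 0.0)

idx = find_analogs(fields[-1], fields[:-1], k=200)
occ_model, amt_model = fit_two_stage(predictors, precip, idx)
```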

  4. Multilevel landscape utilization of the Siberian flying squirrel: Scale effects on species habitat use.

    PubMed

    Remm, Jaanus; Hanski, Ilpo K; Tuominen, Sakari; Selonen, Vesa

    2017-10-01

    Animals use and select habitat at multiple hierarchical levels and at different spatial scales within each level. Still, there is little knowledge on the scale effects at different spatial levels of species occupancy patterns. The objective of this study was to examine nonlinear effects and optimal-scale landscape characteristics that affect occupancy of the Siberian flying squirrel, Pteromys volans , in South- and Mid-Finland. We used presence-absence data ( n  = 10,032 plots of 9 ha) and novel approach to separate the effects on site-, landscape-, and regional-level occupancy patterns. Our main results were: landscape variables predicted the placement of population patches at least twice as well as they predicted the occupancy of particular sites; the clear optimal value of preferred habitat cover for species landscape-level abundance is a surprisingly low value (10% within a 4 km buffer); landscape metrics exert different effects on species occupancy and abundance in high versus low population density regions of our study area. We conclude that knowledge of regional variation in landscape utilization will be essential for successful conservation of the species. The results also support the view that large-scale landscape variables have high predictive power in explaining species abundance. Our study demonstrates the complex response of species occurrence at different levels of population configuration on landscape structure. The study also highlights the need for data in large spatial scale to increase the precision of biodiversity mapping and prediction of future trends.

  5. Numerical Upscaling of Solute Transport in Fractured Porous Media Based on Flow Aligned Blocks

    NASA Astrophysics Data System (ADS)

    Leube, P.; Nowak, W.; Sanchez-Vila, X.

    2013-12-01

    High-contrast or fractured-porous media (FPM) pose one of the largest unresolved challenges for simulating large hydrogeological systems. The high contrast in advective transport between fast conduits and low-permeability rock matrix, including complex mass transfer processes, leads to the typical complex characteristics of early bulk arrivals and long tailings. Adequate direct representation of FPM requires enormous numerical resolutions. For large scales, e.g. the catchment scale, and when allowing for uncertainty in the fracture network architecture or in matrix properties, computational costs quickly reach an intractable level. In such cases, multi-scale simulation techniques have become useful tools. They allow decreasing the complexity of models by aggregating and transferring their parameters to coarser scales and so drastically reduce the computational costs. However, these advantages come at a loss of detail and accuracy. In this work, we develop and test a new multi-scale or upscaled modeling approach based on block upscaling. The novelty is that individual blocks are defined by and aligned with the local flow coordinates. We choose a multi-rate mass transfer (MRMT) model to represent the remaining sub-block non-Fickian behavior within these blocks on the coarse scale. To make the scale transition simple and to save computational costs, we capture sub-block features by temporal moments (TM) of block-wise particle arrival times to be matched with the MRMT model. By predicting spatial mass distributions of injected tracers in a synthetic test scenario, our coarse-scale solution matches the corresponding fine-scale reference solution reasonably well. For predicting higher TM orders (such as arrival time and effective dispersion), the prediction accuracy steadily decreases. This is compensated to some extent by the MRMT model. If the MRMT model becomes too complex, it loses its effect. We also found that prediction accuracy is sensitive to the choice of the effective dispersion coefficients and to the block resolution. A key advantage of the flow-aligned blocks is that the small-scale velocity field is reproduced quite accurately on the block scale through their flow alignment. Thus, the block-scale transverse dispersivities remain of a similar magnitude to local ones, and they do not have to represent macroscopic uncertainty. Also, the flow-aligned blocks minimize numerical dispersion when solving the large-scale transport problem.
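
    For readers unfamiliar with temporal moments, the quantities being matched are simple statistics of particle arrival times. The short sketch below (illustrative, with synthetic arrival times) computes the first moment and second central moment for one block, i.e. the mean arrival time and the spread that the MRMT parameters are tuned to reproduce.

```python
# Sketch: temporal moments of block-wise particle arrival times.
import numpy as np

def temporal_moments(arrival_times):
    """Mean arrival time (first moment) and variance (second central moment)."""
    t = np.asarray(arrival_times, dtype=float)
    m1 = t.mean()
    m2c = ((t - m1) ** 2).mean()
    return m1, m2c

# Toy arrival-time sample: an early bulk arrival plus a long tail,
# the signature of fractured-porous transport described above.
rng = np.random.default_rng(0)
arrivals = np.concatenate([rng.normal(10.0, 1.0, 900),     # fast fracture paths
                           rng.lognormal(3.5, 0.5, 100)])  # matrix-induced tailing
print(temporal_moments(arrivals))
```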

  6. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit Reynolds stress model. Fortunately, the numerical error assessment at most of the axial stations used to compare with measurements clearly indicated that the scale-resolving simulations were improving (i.e. approaching the measured values) as the grid was refined. Hence, unlike a Reynolds-averaged simulation, the hybrid approach provides a mechanism to the end-user for reducing model-form errors.

  7. Origin of the Two Scales of Wind Ripples on Mars

    NASA Technical Reports Server (NTRS)

    Lapotre, Mathieu G. A.; Ewing, Ryan C.; Lamb, Michael P.; Fischer, Woodward W.; Grotzinger, John P.; Rubin, David M.; Lewis, Kevin W.; Day, Mackenzie; Gupta, Sanjeev; Banham, Steeve G.; hide

    2016-01-01

    Earth's sandy deserts host two main types of bedforms - decimeter-scale ripples and larger dunes. Years of orbital observations on Mars also confirmed the existence of two modes of active eolian bedforms - meter-scale ripples, and dunes. By analogy to terrestrial ripples, which are thought to form from a grain mechanism, it was hypothesized that large martian ripples also formed from grain impacts, but spaced further apart due to elongated saltation trajectories from the lower martian gravity and different atmospheric properties. However, the Curiosity rover recently documented the coexistence of three scales of bedforms in Gale crater. Because a grain impact mechanism cannot readily explain two distinct and coeval ripple modes in similar sand sizes, a new mechanism seems to be required to explain one of the scales of ripples. Small ripples are most similar to Earth's impact ripples, with straight crests and subdued profiles. In contrast, large martian ripples are sinuous and asymmetric, with lee slopes dominated by grain flows and grainfall deposits. Thus, large martian ripples resemble current ripples formed underwater on Earth, suggesting that they may form from a fluid-drag mechanism. To test this hypothesis, we develop a scaling relation to predict the spacing of fluid-drag ripples from an extensive flume data compilation. The size of large martian ripples is predicted by our scaling relation when adjusted for martian atmospheric properties. Specifically, we propose that the wavelength of martian wind-drag ripples arises from the high kinematic viscosity of the low-density atmosphere. Because fluid density controls drag-ripple size, our scaling relation can help constrain paleoatmospheric density from wind-drag ripple stratification.

  8. Origin of the two scales of wind ripples on Mars

    NASA Astrophysics Data System (ADS)

    Lapotre, M. G. A.; Ewing, R. C.; Lamb, M. P.; Fischer, W. W.; Grotzinger, J. P.; Rubin, D. M.; Lewis, K. W.; Ballard, M.; Day, M. D.; Gupta, S.; Banham, S.; Bridges, N.; Des Marais, D. J.; Fraeman, A. A.; Grant, J. A., III; Ming, D. W.; Mischna, M.; Rice, M. S.; Sumner, D. Y.; Vasavada, A. R.; Yingst, R. A.

    2016-12-01

    Earth's sandy deserts host two main types of bedforms - decimeter-scale ripples and larger dunes. Years of orbital observations on Mars also confirmed the existence of two modes of active eolian bedforms - meter-scale ripples, and dunes. By analogy to terrestrial ripples, which are thought to form from a grain mechanism, it was hypothesized that large martian ripples also formed from grain impacts, but spaced further apart due to elongated saltation trajectories from the lower martian gravity and different atmospheric properties. However, the Curiosity rover recently documented the coexistence of three scales of bedforms in Gale crater. Because a grain impact mechanism cannot readily explain two distinct and coeval ripple modes in similar sand sizes, a new mechanism seems to be required to explain one of the scales of ripples. Small ripples are most similar to Earth's impact ripples, with straight crests and subdued profiles. In contrast, large martian ripples are sinuous and asymmetric, with lee slopes dominated by grain flows and grainfall deposits. Thus, large martian ripples resemble current ripples formed underwater on Earth, suggesting that they may form from a fluid-drag mechanism. To test this hypothesis, we develop a scaling relation to predict the spacing of fluid-drag ripples from an extensive flume data compilation. The size of large martian ripples is predicted by our scaling relation when adjusted for martian atmospheric properties. Specifically, we propose that the wavelength of martian wind-drag ripples arises from the high kinematic viscosity of the low-density atmosphere. Because fluid density controls drag-ripple size, our scaling relation can help constrain paleoatmospheric density from wind-drag ripple stratification.

  9. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    NASA Astrophysics Data System (ADS)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising to apply not only to gas storage in MOFs but in many other material science projects.
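
    The claim that prediction accuracy improves with sample size can be pictured with a learning curve. The sketch below is not the authors' workflow; the descriptors, target property, and regressor are placeholders standing in for MOF features and a gas-storage quantity of interest.

```python
# Sketch: learning curve for a surrogate model of a MOF property.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve

rng = np.random.default_rng(0)
X = rng.random((2000, 6))          # placeholder descriptors (pore size, surface area, ...)
y = 3 * X[:, 0] + 2 * X[:, 1] ** 2 + rng.normal(0, 0.1, 2000)  # toy uptake property

sizes, _, test_scores = learning_curve(
    RandomForestRegressor(n_estimators=100, random_state=0), X, y,
    train_sizes=np.linspace(0.1, 1.0, 5), cv=5)

for n, score in zip(sizes, test_scores.mean(axis=1)):
    print(f"{n:5d} training samples -> cross-validated R^2 = {score:.3f}")
```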

  10. Grindability measurements on low-rank fuels. [Prediction of large pulverizer performance from small scale test equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peipho, R.R.; Dougan, D.R.

    1981-01-01

    Experience has shown that the grinding characteristics of low-rank coals are best determined by testing them in a pulverizer. Test results from a small Babcock and Wilcox MPS-32 pulverizer, used to predict large, full-scale pulverizer performance, are presented. The MPS-32 apparatus, test procedure, and evaluation of test results are described. The test data show that the Hardgrove apparatus and the ASTM test method must be used with great caution when considering low-rank fuels. The MPS-32 meets the needs for real-machine simulation but with some disadvantages. A smaller pulverizer is desirable. 1 ref.

  11. Power-law scaling in Bénard-Marangoni convection at large Prandtl numbers

    NASA Astrophysics Data System (ADS)

    Boeck, Thomas; Thess, André

    2001-08-01

    Bénard-Marangoni convection at large Prandtl numbers is found to exhibit steady (nonturbulent) behavior in numerical experiments over a very wide range of Marangoni numbers Ma far away from the primary instability threshold. A phenomenological theory, taking into account the different character of thermal boundary layers at the bottom and at the free surface, is developed. It predicts a power-law scaling for the nondimensional velocity (Peclet number) and heat flux (Nusselt number) of the form Pe ~ Ma^(2/3), Nu ~ Ma^(2/9). This prediction is in good agreement with two-dimensional direct numerical simulations up to Ma = 3.2×10^5.
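
    As a quick numerical illustration of how such exponents are extracted (synthetic data, not the paper's simulations), a log-log least-squares fit recovers the scaling exponents of Pe and Nu with Ma:

```python
# Sketch: recover power-law exponents Pe ~ Ma^a, Nu ~ Ma^b from noisy toy data.
import numpy as np

Ma = np.logspace(3, 5.5, 12)                                   # toy Marangoni numbers
Pe = 0.1 * Ma**(2/3) * (1 + 0.02 * np.random.default_rng(1).standard_normal(12))
Nu = 0.8 * Ma**(2/9) * (1 + 0.02 * np.random.default_rng(2).standard_normal(12))

a = np.polyfit(np.log(Ma), np.log(Pe), 1)[0]   # slope of log-log fit, expected near 2/3
b = np.polyfit(np.log(Ma), np.log(Nu), 1)[0]   # expected near 2/9
print(f"fitted exponents: Pe ~ Ma^{a:.3f}, Nu ~ Ma^{b:.3f}")
```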

  12. SVM and SVM Ensembles in Breast Cancer Prediction.

    PubMed

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers.
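
    A minimal scikit-learn sketch of the two ensemble configurations highlighted above follows (the paper does not specify an implementation; the dataset choice, ensemble sizes, and preprocessing are assumptions). It pairs a bagged linear-kernel SVM and a boosted RBF-kernel SVM on the small Wisconsin breast cancer dataset bundled with scikit-learn.

```python
# Sketch: bagged linear-kernel SVM vs boosted RBF-kernel SVM ensembles.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

bagged_linear = make_pipeline(
    StandardScaler(),
    BaggingClassifier(SVC(kernel="linear"), n_estimators=10, random_state=0),
)
boosted_rbf = make_pipeline(
    StandardScaler(),
    AdaBoostClassifier(SVC(kernel="rbf", probability=True),
                       n_estimators=10, random_state=0),
)

for name, clf in [("bagging + linear SVM", bagged_linear),
                  ("boosting + RBF SVM", boosted_rbf)]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```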

  13. SVM and SVM Ensembles in Breast Cancer Prediction

    PubMed Central

    Huang, Min-Wei; Chen, Chih-Wen; Lin, Wei-Chao; Ke, Shih-Wen; Tsai, Chih-Fong

    2017-01-01

    Breast cancer is an all too common disease in women, making how to effectively predict it an active research problem. A number of statistical and machine learning techniques have been employed to develop various breast cancer prediction models. Among them, support vector machines (SVM) have been shown to outperform many related techniques. To construct the SVM classifier, it is first necessary to decide the kernel function, and different kernel functions can result in different prediction performance. However, there have been very few studies focused on examining the prediction performances of SVM based on different kernel functions. Moreover, it is unknown whether SVM classifier ensembles which have been proposed to improve the performance of single classifiers can outperform single SVM classifiers in terms of breast cancer prediction. Therefore, the aim of this paper is to fully assess the prediction performance of SVM and SVM ensembles over small and large scale breast cancer datasets. The classification accuracy, ROC, F-measure, and computational times of training SVM and SVM ensembles are compared. The experimental results show that linear kernel based SVM ensembles based on the bagging method and RBF kernel based SVM ensembles with the boosting method can be the better choices for a small scale dataset, where feature selection should be performed in the data pre-processing stage. For a large scale dataset, RBF kernel based SVM ensembles based on boosting perform better than the other classifiers. PMID:28060807

  14. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.

  15. Ecological niche modeling as a new paradigm for large-scale investigations of diversity and distribution of birds

    Treesearch

    A. Townsend Peterson; Daniel A. Kluza

    2005-01-01

    Large-scale assessments of the distribution and diversity of birds have been challenged by the need for a robust methodology for summarizing or predicting species' geographic distributions (e.g. Beard et al. 1999, Manel et al. 1999, Saveraid et al. 2001). Methodologies used in such studies have at times been inappropriate, or even more frequently limited in their...

  16. Soil organic carbon - a large scale paired catchment assessment

    NASA Astrophysics Data System (ADS)

    Kunkel, V.; Hancock, G. R.; Wells, T.

    2016-12-01

    Soil organic carbon (SOC) concentration can vary both spatially and temporally, driven by differences in soil properties, topography and climate. However, most studies have focused on point-scale data sets, with a paucity of studies examining larger-scale catchments. Here we examine the spatial and temporal distribution of SOC for two large catchments, the Krui (575 km2) and Merriwa River (675 km2) catchments (New South Wales, Australia), which have similar shape, soils, topography and orientation. We show that the SOC distribution is very similar for both catchments and that elevation (and the associated increase in soil moisture) is a major influence on SOC. We also show that there is little change in SOC from the initial assessment in 2006 to 2015 despite a major drought from 2003 to 2010 and extreme rainfall events in 2007 and 2010; SOC concentration therefore appears robust. However, we found significant relationships between erosion and deposition patterns (as quantified using 137Cs) and SOC for both catchments, again demonstrating a strong geomorphic relationship. Vegetation across the catchments was assessed using remote sensing (Landsat and MODIS). Vegetation patterns were temporally consistent, with above-ground biomass increasing with elevation. SOC could be predicted using both these low and high resolution remote sensing platforms. Results indicate that, although moderate resolution (250 m) allows for reasonable prediction of the spatial distribution of SOC, the higher resolution (30 m) improved the strength of the SOC-NDVI relationship. The relationship between SOC and 137Cs, as a surrogate for the erosion and deposition of SOC, suggested that sediment transport and deposition influence the distribution of SOC within the catchment. The findings demonstrate that over the large catchment scale and at the decadal time scale, SOC is relatively constant and can largely be predicted by topography.

  17. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials.

  18. Combining classifiers to predict gene function in Arabidopsis thaliana using large-scale gene expression measurements.

    PubMed

    Lan, Hui; Carson, Rachel; Provart, Nicholas J; Bonner, Anthony J

    2007-09-21

    Arabidopsis thaliana is the model species of current plant genomic research with a genome size of 125 Mb and approximately 28,000 genes. The function of half of these genes is currently unknown. The purpose of this study is to infer gene function in Arabidopsis using machine-learning algorithms applied to large-scale gene expression data sets, with the goal of identifying genes that are potentially involved in plant response to abiotic stress. Using in house and publicly available data, we assembled a large set of gene expression measurements for A. thaliana. Using those genes of known function, we first evaluated and compared the ability of basic machine-learning algorithms to predict which genes respond to stress. Predictive accuracy was measured using ROC50 and precision curves derived through cross validation. To improve accuracy, we developed a method for combining these classifiers using a weighted-voting scheme. The combined classifier was then trained on genes of known function and applied to genes of unknown function, identifying genes that potentially respond to stress. Visual evidence corroborating the predictions was obtained using electronic Northern analysis. Three of the predicted genes were chosen for biological validation. Gene knockout experiments confirmed that all three are involved in a variety of stress responses. The biological analysis of one of these genes (At1g16850) is presented here, where it is shown to be necessary for the normal response to temperature and NaCl. Supervised learning methods applied to large-scale gene expression measurements can be used to predict gene function. However, the ability of basic learning methods to predict stress response varies widely and depends heavily on how much dimensionality reduction is used. Our method of combining classifiers can improve the accuracy of such predictions - in this case, predictions of genes involved in stress response in plants - and it effectively chooses the appropriate amount of dimensionality reduction automatically. The method provides a useful means of identifying genes in A. thaliana that potentially respond to stress, and we expect it would be useful in other organisms and for other gene functions.
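
    The combination step can be pictured with a small weighted-voting sketch. The exact weighting scheme used by the authors is not reproduced here; the choice of weights derived from cross-validated accuracy, the base classifiers, and the toy expression data are all assumptions.

```python
# Sketch: combine base classifiers by accuracy-weighted voting.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def weighted_vote(classifiers, X_train, y_train, X_unknown, cv=5):
    """Predict labels for genes of unknown function by weighted majority vote."""
    weights, votes = [], []
    for clf in classifiers:
        weights.append(cross_val_score(clf, X_train, y_train, cv=cv).mean())
        votes.append(clf.fit(X_train, y_train).predict(X_unknown))
    weights, votes = np.array(weights), np.array(votes)   # (n_clf,), (n_clf, n_genes)
    score = (weights[:, None] * votes).sum(axis=0) / weights.sum()
    return (score >= 0.5).astype(int)                      # 1 = predicted stress-responsive

# Toy expression matrix: rows = genes, columns = expression measurements.
rng = np.random.default_rng(0)
X_known, y_known = rng.standard_normal((200, 20)), rng.integers(0, 2, 200)
X_unknown = rng.standard_normal((50, 20))
pred = weighted_vote([GaussianNB(), KNeighborsClassifier(), SVC()],
                     X_known, y_known, X_unknown)
```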

  19. The cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, Joseph

    1992-01-01

    A review of the implications of the spectrum and anisotropy of the cosmic microwave background for cosmology. Thermalization and processes generating spectral distortions are discussed. Anisotropy predictions are described and compared with observational constraints. If the evidence for large-scale power in the galaxy distribution in excess of that predicted by the cold dark matter model is vindicated, and the observed structure originated via gravitational instabilities of primordial density fluctuations, the predicted amplitude of microwave background anisotropies on angular scales of a degree and larger must be at least several parts in 10^6.

  20. Predicting coral bleaching in response to environmental stressors using 8 years of global-scale data.

    PubMed

    Yee, Susan Harrell; Barron, Mace G

    2010-02-01

    Coral reefs have experienced extensive mortality over the past few decades as a result of temperature-induced mass bleaching events. There is an increasing realization that other environmental factors, including water mixing, solar radiation, water depth, and water clarity, interact with temperature to either exacerbate bleaching or protect coral from mass bleaching. The relative contribution of these factors to variability in mass bleaching at a global scale has not been quantified, but can provide insights when making large-scale predictions of mass bleaching events. Using data from 708 bleaching surveys across the globe, a framework was developed to predict the probability of moderate or severe bleaching as a function of key environmental variables derived from global-scale remote-sensing data. The ability of models to explain spatial and temporal variability in mass bleaching events was quantified. Results indicated approximately 20% improved accuracy of predictions of bleaching when solar radiation and water mixing, in addition to elevated temperature, were incorporated into models, but predictive accuracy was variable among regions. Results provide insights into the effects of environmental parameters on bleaching at a global scale.
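
    In outline, the framework amounts to a binary-response model of bleaching probability driven by remotely sensed stressors. The sketch below uses a logistic regression on synthetic data; the predictor set, units, and coefficients are placeholders, not values from the study.

```python
# Sketch: probability of moderate/severe bleaching as a logistic function of stressors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 708                                           # number of bleaching surveys
X = np.column_stack([
    rng.normal(1.0, 0.8, n),    # placeholder thermal stress (e.g. degree heating weeks)
    rng.normal(450, 40, n),     # placeholder surface solar radiation (W m^-2)
    rng.normal(7.0, 2.0, n),    # placeholder wind speed as a proxy for water mixing (m s^-1)
])
logit = -2.0 + 1.2 * X[:, 0] + 0.004 * (X[:, 1] - 450) - 0.15 * (X[:, 2] - 7)
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))     # 1 = moderate or severe bleaching

model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(model.params)                                # fitted effect of each stressor
```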

  1. Predicting protein-protein interactions on a proteome scale by matching evolutionary and structural similarities at interfaces using PRISM.

    PubMed

    Tuncbag, Nurcan; Gursoy, Attila; Nussinov, Ruth; Keskin, Ozlem

    2011-08-11

    Prediction of protein-protein interactions at the structural level on the proteome scale is important because it allows prediction of protein function, helps drug discovery and takes steps toward genome-wide structural systems biology. We provide a protocol (termed PRISM, protein interactions by structural matching) for large-scale prediction of protein-protein interactions and assembly of protein complex structures. The method consists of two components: rigid-body structural comparisons of target proteins to known template protein-protein interfaces and flexible refinement using a docking energy function. The PRISM rationale follows our observation that globally different protein structures can interact via similar architectural motifs. PRISM predicts binding residues by using structural similarity and evolutionary conservation of putative binding residue 'hot spots'. Ultimately, PRISM could help to construct cellular pathways and functional, proteome-scale annotation. PRISM is implemented in Python and runs in a UNIX environment. The program accepts Protein Data Bank-formatted protein structures and is available at http://prism.ccbb.ku.edu.tr/prism_protocol/.

  2. Large-eddy simulation of a boundary layer with concave streamwise curvature

    NASA Technical Reports Server (NTRS)

    Lund, Thomas S.

    1994-01-01

    Turbulence modeling continues to be one of the most difficult problems in fluid mechanics. Existing prediction methods are well developed for certain classes of simple equilibrium flows, but are still not entirely satisfactory for a large category of complex non-equilibrium flows found in engineering practice. Direct and large-eddy simulation (LES) approaches have long been believed to have great potential for the accurate prediction of difficult turbulent flows, but the associated computational cost has been prohibitive for practical problems. This remains true for direct simulation but is no longer clear for large-eddy simulation. Advances in computer hardware, numerical methods, and subgrid-scale modeling have made it possible to conduct LES for flows of practical interest at Reynolds numbers in the range of laboratory experiments. The objective of this work is to apply LES and the dynamic subgrid-scale model to the flow of a boundary layer over a concave surface.

  3. The Large-scale Coronal Structure of the 2017 August 21 Great American Eclipse: An Assessment of Solar Surface Flux Transport Model Enabled Predictions and Observations

    NASA Astrophysics Data System (ADS)

    Nandy, Dibyendu; Bhowmik, Prantika; Yeates, Anthony R.; Panda, Suman; Tarafder, Rajashik; Dash, Soumyaranjan

    2018-01-01

    On 2017 August 21, a total solar eclipse swept across the contiguous United States, providing excellent opportunities for diagnostics of the Sun’s corona. The Sun’s coronal structure is notoriously difficult to observe except during solar eclipses; thus, theoretical models must be relied upon for inferring the underlying magnetic structure of the Sun’s outer atmosphere. These models are necessary for understanding the role of magnetic fields in the heating of the corona to a million degrees and the generation of severe space weather. Here we present a methodology for predicting the structure of the coronal field based on model forward runs of a solar surface flux transport model, whose predicted surface field is utilized to extrapolate future coronal magnetic field structures. This prescription was applied to the 2017 August 21 solar eclipse. A post-eclipse analysis shows good agreement between model simulated and observed coronal structures and their locations on the limb. We demonstrate that slow changes in the Sun’s surface magnetic field distribution driven by long-term flux emergence and its evolution governs large-scale coronal structures with a (plausibly cycle-phase dependent) dynamical memory timescale on the order of a few solar rotations, opening up the possibility for large-scale, global corona predictions at least a month in advance.

  4. Detection of right-to-left shunts: comparison between the International Consensus and Spencer Logarithmic Scale criteria.

    PubMed

    Lao, Annabelle Y; Sharma, Vijay K; Tsivgoulis, Georgios; Frey, James L; Malkoff, Marc D; Navarro, Jose C; Alexandrov, Andrei V

    2008-10-01

    International Consensus Criteria (ICC) consider right-to-left shunt (RLS) present when Transcranial Doppler (TCD) detects even one microbubble (microB). The Spencer Logarithmic Scale (SLS) offers more grades of RLS, with detection of >30 microB corresponding to a large shunt. We compared the yield of ICC and SLS in detection and quantification of a large RLS. We prospectively evaluated paradoxical embolism in consecutive patients with ischemic strokes or transient ischemic attack (TIA) using injections of 9 cc saline agitated with 1 cc of air. Results were classified according to ICC [negative (no microB), grade I (1-20 microB), grade II (>20 microB or "shower" appearance of microB), and grade III ("curtain" appearance of microB)] and SLS criteria [negative (no microB), grade I (1-10 microB), grade II (11-30 microB), grade III (31-100 microB), grade IV (101-300 microB), grade V (>300 microB)]. The RLS size was defined as large (>4 mm) using diameter measurement of the septal defects on transesophageal echocardiography (TEE). TCD comparison to TEE showed 24 true positive, 48 true negative, 4 false positive, and 2 false negative cases (sensitivity 92.3%, specificity 92.3%, positive predictive value (PPV) 85.7%, negative predictive value (NPV) 96%, and accuracy 92.3%) for any RLS presence. Both ICC and SLS were 100% sensitive for detection of large RLS. ICC and SLS criteria yielded false positive rates of 24.4% and 7.7%, respectively, when compared to TEE. Although both grading scales provide agreement as to any shunt presence, using Spencer Scale grade III or higher can decrease by one-half the number of false positive TCD diagnoses to predict large RLS on TEE.
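
    The two grading schemes quoted above can be summarized in a small helper function. This is a count-only sketch: the ICC "shower" and "curtain" qualifiers are visual patterns and are deliberately ignored here.

```python
# Sketch: map a microbubble count to the ICC and Spencer grades quoted above.
def icc_grade(n_microbubbles):
    if n_microbubbles == 0:
        return "negative"
    if n_microbubbles <= 20:
        return "I"
    return "II"   # grade III ("curtain") requires the visual pattern, not just a count

def spencer_grade(n_microbubbles):
    for grade, upper in (("negative", 0), ("I", 10), ("II", 30),
                         ("III", 100), ("IV", 300)):
        if n_microbubbles <= upper:
            return grade
    return "V"

# A large right-to-left shunt corresponds to Spencer grade III or higher,
# i.e. more than 30 microbubbles.
assert spencer_grade(45) == "III" and icc_grade(45) == "II"
```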

  5. Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder

    NASA Technical Reports Server (NTRS)

    Baurle, R. A.

    2016-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.

  6. Ensemble modeling to predict habitat suitability for a large-scale disturbance specialist

    Treesearch

    Quresh S. Latif; Victoria A. Saab; Jonathan G. Dudley; Jeff P. Hollenbeck

    2013-01-01

    To conserve habitat for disturbance specialist species, ecologists must identify where individuals will likely settle in newly disturbed areas. Habitat suitability models can predict which sites at new disturbances will most likely attract specialists. Without validation data from newly disturbed areas, however, the best approach for maximizing predictive accuracy can...

  7. Potential climatic impacts and reliability of large-scale offshore wind farms

    NASA Astrophysics Data System (ADS)

    Wang, Chien; Prinn, Ronald G.

    2011-04-01

    The vast availability of wind power has fueled substantial interest in this renewable energy source as a potential near-zero greenhouse gas emission technology for meeting future world energy needs while addressing the climate change issue. However, in order to provide even a fraction of the estimated future energy needs, a large-scale deployment of wind turbines (several million) is required. The consequent environmental impacts, and the inherent reliability of such a large-scale usage of intermittent wind power would have to be carefully assessed, in addition to the need to lower the high current unit wind power costs. Our previous study (Wang and Prinn 2010 Atmos. Chem. Phys. 10 2053) using a three-dimensional climate model suggested that a large deployment of wind turbines over land to meet about 10% of predicted world energy needs in 2100 could lead to a significant temperature increase in the lower atmosphere over the installed regions. A global-scale perturbation to the general circulation patterns as well as to the cloud and precipitation distribution was also predicted. In the later study reported here, we conducted a set of six additional model simulations using an improved climate model to further address the potential environmental and intermittency issues of large-scale deployment of offshore wind turbines for differing installation areas and spatial densities. In contrast to the previous land installation results, the offshore wind turbine installations are found to cause a surface cooling over the installed offshore regions. This cooling is due principally to the enhanced latent heat flux from the sea surface to lower atmosphere, driven by an increase in turbulent mixing caused by the wind turbines which was not entirely offset by the concurrent reduction of mean wind kinetic energy. We found that the perturbation of the large-scale deployment of offshore wind turbines to the global climate is relatively small compared to the case of land-based installations. However, the intermittency caused by the significant seasonal wind variations over several major offshore sites is substantial, and demands further options to ensure the reliability of large-scale offshore wind power. The method that we used to simulate the offshore wind turbine effect on the lower atmosphere involved simply increasing the ocean surface drag coefficient. While this method is consistent with several detailed fine-scale simulations of wind turbines, it still needs further study to ensure its validity. New field observations of actual wind turbine arrays are definitely required to provide ultimate validation of the model predictions presented here.

  8. Acoustic scaling: A re-evaluation of the acoustic model of Manchester Studio 7

    NASA Astrophysics Data System (ADS)

    Walker, R.

    1984-12-01

    The reasons for the reconstruction and re-evaluation of the acoustic scale model of a large music studio are discussed. The design and construction of the model using mechanical and structural considerations, rather than purely acoustic absorption criteria, is described and the results obtained are given. The results confirm that structural elements within the studio gave rise to unexpected and unwanted low-frequency acoustic absorption. The results also show that, at least for the relatively well understood mechanisms of sound energy absorption, physical modelling of the structural and internal components gives an acoustically accurate scale model, within the usual tolerances of acoustic design. The poor reliability of measurements of acoustic absorption coefficients is well illustrated. The conclusion is reached that such acoustic scale modelling is a valid and, for large scale projects, financially justifiable technique for predicting fundamental acoustic effects. It is not appropriate for the prediction of fine details because such small details are unlikely to be reproduced exactly at a different size without extensive measurements of the material's performance at both scales.

  9. Prediction and monitoring of monsoon intraseasonal oscillations over Indian monsoon region in an ensemble prediction system using CFSv2

    NASA Astrophysics Data System (ADS)

    Abhilash, S.; Sahai, A. K.; Borah, N.; Chattopadhyay, R.; Joseph, S.; Sharmila, S.; De, S.; Goswami, B. N.; Kumar, Arun

    2014-05-01

    An ensemble prediction system (EPS) is devised for the extended range prediction (ERP) of monsoon intraseasonal oscillations (MISO) of Indian summer monsoon (ISM) using National Centers for Environmental Prediction Climate Forecast System model version 2 at T126 horizontal resolution. The EPS is formulated by generating 11 member ensembles through the perturbation of atmospheric initial conditions. The hindcast experiments were conducted at every 5-day interval for 45 days lead time starting from 16th May to 28th September during 2001-2012. The general simulation of ISM characteristics and the ERP skill of the proposed EPS at pentad mean scale are evaluated in the present study. Though the EPS underestimates both the mean and variability of ISM rainfall, it simulates the northward propagation of MISO reasonably well. It is found that the signal-to-noise ratio of the forecasted rainfall becomes unity by about 18 days. The potential predictability error of the forecasted rainfall saturates by about 25 days. Though useful deterministic forecasts could be generated up to 2nd pentad lead, significant correlations are found even up to 4th pentad lead. The skill in predicting large-scale MISO, which is assessed by comparing the predicted and observed MISO indices, is found to be ~17 days. It is noted that the prediction skill of actual rainfall is closely related to the prediction of large-scale MISO amplitude as well as the initial conditions related to the different phases of MISO. An analysis of categorical prediction skills reveals that break is more skillfully predicted, followed by active and then normal. The categorical probability skill scores suggest that useful probabilistic forecasts could be generated even up to 4th pentad lead.
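
    As a rough illustration of the two diagnostics quoted above (an ensemble signal-to-noise ratio that falls to unity near 18 days, and a correlation skill for MISO indices), the sketch below computes both from a hindcast array. The array shapes, the random stand-in data and the helper names are assumptions for demonstration only, not the study's code.

    ```python
    import numpy as np

    def signal_to_noise(forecasts):
        """
        Ensemble signal-to-noise ratio as a function of lead time.
        `forecasts` has shape (n_starts, n_members, n_leads).
        Signal: variance of the ensemble mean across start dates.
        Noise: ensemble spread (variance across members), averaged over starts.
        """
        ens_mean = forecasts.mean(axis=1)              # (n_starts, n_leads)
        signal = ens_mean.var(axis=0)
        noise = forecasts.var(axis=1).mean(axis=0)
        return signal / noise

    def correlation_skill(pred_index, obs_index):
        """Correlation between predicted and observed index at each lead."""
        n_leads = pred_index.shape[1]
        return np.array([np.corrcoef(pred_index[:, l], obs_index[:, l])[0, 1]
                         for l in range(n_leads)])

    # Random stand-in hindcasts: 60 start dates, 11 members, 45 daily leads.
    rng = np.random.default_rng(0)
    fcst = rng.normal(size=(60, 11, 45))
    obs = fcst.mean(axis=1) + rng.normal(scale=0.5, size=(60, 45))

    print("S/N at leads 1, 10, 20 days:",
          np.round(signal_to_noise(fcst)[[0, 9, 19]], 2))
    print("correlation skill, pentads 1-4:",
          np.round(correlation_skill(fcst.mean(axis=1), obs)[[4, 9, 14, 19]], 2))
    ```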

  10. War, space, and the evolution of Old World complex societies.

    PubMed

    Turchin, Peter; Currie, Thomas E; Turner, Edward A L; Gavrilets, Sergey

    2013-10-08

    How did human societies evolve from small groups, integrated by face-to-face cooperation, to huge anonymous societies of today, typically organized as states? Why is there so much variation in the ability of different human populations to construct viable states? Existing theories are usually formulated as verbal models and, as a result, do not yield sharply defined, quantitative predictions that could be unambiguously tested with data. Here we develop a cultural evolutionary model that predicts where and when the largest-scale complex societies arose in human history. The central premise of the model, which we test, is that costly institutions that enabled large human groups to function without splitting up evolved as a result of intense competition between societies-primarily warfare. Warfare intensity, in turn, depended on the spread of historically attested military technologies (e.g., chariots and cavalry) and on geographic factors (e.g., rugged landscape). The model was simulated within a realistic landscape of the Afroeurasian landmass and its predictions were tested against a large dataset documenting the spatiotemporal distribution of historical large-scale societies in Afroeurasia between 1,500 BCE and 1,500 CE. The model-predicted pattern of spread of large-scale societies was very similar to the observed one. Overall, the model explained 65% of variance in the data. An alternative model, omitting the effect of diffusing military technologies, explained only 16% of variance. Our results support theories that emphasize the role of institutions in state-building and suggest a possible explanation why a long history of statehood is positively correlated with political stability, institutional quality, and income per capita.

  11. War, space, and the evolution of Old World complex societies

    PubMed Central

    Turchin, Peter; Currie, Thomas E.; Turner, Edward A. L.; Gavrilets, Sergey

    2013-01-01

    How did human societies evolve from small groups, integrated by face-to-face cooperation, to huge anonymous societies of today, typically organized as states? Why is there so much variation in the ability of different human populations to construct viable states? Existing theories are usually formulated as verbal models and, as a result, do not yield sharply defined, quantitative predictions that could be unambiguously tested with data. Here we develop a cultural evolutionary model that predicts where and when the largest-scale complex societies arose in human history. The central premise of the model, which we test, is that costly institutions that enabled large human groups to function without splitting up evolved as a result of intense competition between societies—primarily warfare. Warfare intensity, in turn, depended on the spread of historically attested military technologies (e.g., chariots and cavalry) and on geographic factors (e.g., rugged landscape). The model was simulated within a realistic landscape of the Afroeurasian landmass and its predictions were tested against a large dataset documenting the spatiotemporal distribution of historical large-scale societies in Afroeurasia between 1,500 BCE and 1,500 CE. The model-predicted pattern of spread of large-scale societies was very similar to the observed one. Overall, the model explained 65% of variance in the data. An alternative model, omitting the effect of diffusing military technologies, explained only 16% of variance. Our results support theories that emphasize the role of institutions in state-building and suggest a possible explanation why a long history of statehood is positively correlated with political stability, institutional quality, and income per capita. PMID:24062433

  12. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2007-09-30

    deserts of the world: Arabian Gulf, Sea of Japan, China Sea, Mediterranean Sea, and the Tropical Atlantic Ocean. NAAPS also accurately predicts the...fate of large-scale smoke and pollution plumes. With its global and continuous coverage, ...origin of dust plumes impacting naval operations in the Red Sea, Mediterranean, eastern Atlantic, Gulf of Guinea, Sea of Japan, Yellow Sea, and East

  13. Exploring Entrainment Patterns of Human Emotion in Social Media

    PubMed Central

    Luo, Chuan; Zhang, Zhu

    2016-01-01

    Emotion entrainment, which is generally defined as the synchronous convergence of human emotions, performs many important social functions. However, the specific mechanisms of emotion entrainment beyond in-person interactions, and how human emotions evolve under different entrainment patterns in large-scale social communities, are still unknown. In this paper, we aim to examine massive emotion entrainment patterns and understand the underlying mechanisms in the context of social media. As modeling emotion dynamics on a large scale is often challenging, we elaborate a pragmatic framework to characterize and quantify the entrainment phenomenon. By applying this framework to datasets from two large-scale social media platforms, we find that the emotions of online users entrain through social networks. We further uncover that online users often form their relations via dual entrainment, while maintaining them through single entrainment. Remarkably, the emotions of online users are more convergent in nonreciprocal entrainment. Building on these findings, we develop an entrainment-augmented model for emotion prediction. Experimental results suggest that entrainment patterns inform emotion proximity in dyads, and encoding their associations promotes emotion prediction. This work can further help us to understand the underlying dynamic process of large-scale online interactions and make more reasonable decisions regarding emergency situations, epidemic diseases, and political campaigns in cyberspace. PMID:26953692

  14. Exploring Entrainment Patterns of Human Emotion in Social Media.

    PubMed

    He, Saike; Zheng, Xiaolong; Zeng, Daniel; Luo, Chuan; Zhang, Zhu

    2016-01-01

    Emotion entrainment, which is generally defined as the synchronous convergence of human emotions, performs many important social functions. However, the specific mechanisms of emotion entrainment beyond in-person interactions, and how human emotions evolve under different entrainment patterns in large-scale social communities, are still unknown. In this paper, we aim to examine massive emotion entrainment patterns and understand the underlying mechanisms in the context of social media. As modeling emotion dynamics on a large scale is often challenging, we elaborate a pragmatic framework to characterize and quantify the entrainment phenomenon. By applying this framework to datasets from two large-scale social media platforms, we find that the emotions of online users entrain through social networks. We further uncover that online users often form their relations via dual entrainment, while maintaining them through single entrainment. Remarkably, the emotions of online users are more convergent in nonreciprocal entrainment. Building on these findings, we develop an entrainment-augmented model for emotion prediction. Experimental results suggest that entrainment patterns inform emotion proximity in dyads, and encoding their associations promotes emotion prediction. This work can further help us to understand the underlying dynamic process of large-scale online interactions and make more reasonable decisions regarding emergency situations, epidemic diseases, and political campaigns in cyberspace.

  15. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
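
    A way to picture the outcome of such a homogenization is that the large-scale coefficient is an average of the small-scale motility dominated by the slow (low-motility) habitat, for example a harmonic-type mean rather than an arithmetic one. The snippet below illustrates that contrast with invented motility values; it is a sketch of the idea, not the paper's derivation.

    ```python
    import numpy as np

    def harmonic_mean(mu):
        """Harmonic average of a field of small-scale motility coefficients."""
        mu = np.asarray(mu, dtype=float)
        return mu.size / np.sum(1.0 / mu)

    # Invented small-scale motilities (m^2/day) for three habitat types covering
    # 50% / 30% / 20% of a landscape cell.
    motility = np.repeat([5.0, 50.0, 200.0], [50, 30, 20])

    print("arithmetic mean:", round(motility.mean(), 2))
    print("harmonic mean:  ", round(harmonic_mean(motility), 2))
    ```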

  16. Dynamic Smagorinsky model on anisotropic grids

    NASA Technical Reports Server (NTRS)

    Scotti, A.; Meneveau, C.; Fatica, M.

    1996-01-01

    Large Eddy Simulation (LES) of complex-geometry flows often involves highly anisotropic meshes. To examine the performance of the dynamic Smagorinsky model in a controlled fashion on such grids, simulations of forced isotropic turbulence are performed using highly anisotropic discretizations. The resulting model coefficients are compared with a theoretical prediction (Scotti et al., 1993). Two extreme cases are considered: pancake-like grids, for which two directions are poorly resolved compared to the third, and pencil-like grids, where one direction is poorly resolved when compared to the other two. For pancake-like grids the dynamic model yields the results expected from the theory (increasing coefficient with increasing aspect ratio), whereas for pencil-like grids the dynamic model does not agree with the theoretical prediction (with detrimental effects only on smallest resolved scales). A possible explanation of the departure is attempted, and it is shown that the problem may be circumvented by using an isotropic test-filter at larger scales. Overall, all models considered give good large-scale results, confirming the general robustness of the dynamic and eddy-viscosity models. But in all cases, the predictions were poor for scales smaller than that of the worst resolved direction.
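
    The anisotropy question studied here is often framed in terms of an effective filter width. A minimal sketch follows, combining the usual Smagorinsky eddy viscosity with the aspect-ratio correction commonly attributed to Scotti, Meneveau and Lilly (1993); the model constant, grid spacings and strain-rate magnitude below are illustrative assumptions.

    ```python
    import numpy as np

    def anisotropy_correction(a1, a2):
        """
        Aspect-ratio correction for the effective filter width on anisotropic
        cells, in the form commonly attributed to Scotti, Meneveau & Lilly (1993):
        f = cosh(sqrt(4/27 * (ln(a1)**2 - ln(a1)*ln(a2) + ln(a2)**2))).
        """
        l1, l2 = np.log(a1), np.log(a2)
        return np.cosh(np.sqrt(4.0 / 27.0 * (l1 * l1 - l1 * l2 + l2 * l2)))

    def smagorinsky_nu_t(dx, dy, dz, strain_rate_mag, c_s=0.16):
        """Smagorinsky eddy viscosity with an anisotropy-corrected filter width."""
        sides = sorted([dx, dy, dz], reverse=True)
        a1, a2 = sides[1] / sides[0], sides[2] / sides[0]
        delta = (dx * dy * dz) ** (1.0 / 3.0) * anisotropy_correction(a1, a2)
        return (c_s * delta) ** 2 * strain_rate_mag

    # Pencil-like cell: one direction much coarser than the other two (assumed values).
    print(smagorinsky_nu_t(dx=0.01, dy=0.01, dz=0.08, strain_rate_mag=50.0))
    ```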

  17. Drainage networks after wildfire

    USGS Publications Warehouse

    Kinner, D.A.; Moody, J.A.

    2005-01-01

    Predicting runoff and erosion from watersheds burned by wildfires requires an understanding of the three-dimensional structure of both hillslope and channel drainage networks. We investigate the small- and large-scale structures of drainage networks using field studies and computer analysis of a 30-m digital elevation model. Topologic variables were derived from a composite 30-m DEM, which included 14 order-6 watersheds within the Pikes Peak batholith. Both topologic and hydraulic variables were measured in the field in two smaller burned watersheds (3.7 and 7.0 hectares) located within one of the order-6 watersheds burned by the 1996 Buffalo Creek Fire in Central Colorado. Horton ratios of topologic variables (stream number, drainage area, stream length, and stream slope) for small-scale and large-scale watersheds are shown to scale geometrically with stream order (i.e., to be scale invariant). However, the ratios derived for the large-scale drainage networks could not be used to predict the rill and gully drainage network structure. Hydraulic variables (width, depth, cross-sectional area, and bed roughness) for small-scale drainage networks were found to be scale invariant across 3 to 4 stream orders. The relation between hydraulic radius and cross-sectional area is similar for rills and gullies, suggesting that their geometry can be treated similarly in hydraulic modeling. Additionally, the rills and gullies have relatively small width-to-depth ratios, implying that sidewall friction may be important to the erosion and evolutionary process relative to main-stem channels.
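
    The Horton-type scale invariance referred to above amounts to a quantity varying geometrically with stream order, so the ratio can be recovered from a linear fit of log-transformed values against order. The sketch below does this for stream counts with invented numbers, purely to show the computation.

    ```python
    import numpy as np

    def bifurcation_ratio(stream_counts):
        """
        Fit Horton's law of stream numbers, N(order) ~ R_B**(-order), by linear
        regression of log counts against stream order, and return R_B.
        """
        orders = np.arange(1, len(stream_counts) + 1)
        slope, _ = np.polyfit(orders, np.log(stream_counts), 1)
        return np.exp(-slope)

    # Invented stream counts for orders 1..5 (not the paper's data).
    stream_counts = [320, 75, 18, 4, 1]
    print("bifurcation ratio R_B ~", round(bifurcation_ratio(stream_counts), 2))
    ```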

  18. Combining Flux Balance and Energy Balance Analysis for Large-Scale Metabolic Network: Biochemical Circuit Theory for Analysis of Large-Scale Metabolic Networks

    NASA Technical Reports Server (NTRS)

    Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Predicting behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
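
    For readers unfamiliar with FBA, the core computation is a linear program: maximize a target flux subject to steady-state mass balance and flux bounds. The toy network and bounds below are invented for illustration, and the thermodynamic (EBA) constraints discussed in the abstract are not included in this sketch.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy flux balance analysis: maximize a "biomass" flux subject to the
    # steady-state constraint S @ v = 0 and capacity bounds on each reaction.
    # The 2-metabolite, 4-reaction network is invented purely for illustration.

    #              uptake  convert  biomass  export
    S = np.array([[  1.0,    -1.0,     0.0,     0.0],   # metabolite A
                  [  0.0,     1.0,    -1.0,    -1.0]])  # metabolite B

    bounds = [(0, 10)] * 4                  # lower/upper bounds on each flux
    c = np.array([0.0, 0.0, -1.0, 0.0])     # linprog minimizes, so use -v_biomass

    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
    print("optimal fluxes:", np.round(res.x, 3))   # expect biomass flux = 10
    ```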

  19. Icing Simulation Research Supporting the Ice-Accretion Testing of Large-Scale Swept-Wing Models

    NASA Technical Reports Server (NTRS)

    Yadlin, Yoram; Monnig, Jaime T.; Malone, Adam M.; Paul, Bernard P.

    2018-01-01

    The work summarized in this report is a continuation of NASA's Large-Scale, Swept-Wing Test Articles Fabrication; Research and Test Support for NASA IRT contract (NNC10BA05-NNC14TA36T) performed by Boeing under the NASA Research and Technology for Aerospace Propulsion Systems (RTAPS) contract. In the study conducted under RTAPS, a series of icing tests was carried out in the Icing Research Tunnel (IRT) to characterize ice formations on large-scale swept wings representative of modern commercial transport airplanes. The outcome of that campaign was a large database of ice-accretion geometries that can be used for subsequent aerodynamic evaluation in other experimental facilities and for validation of ice-accretion prediction codes.

  20. Stream Flow Prediction by Remote Sensing and Genetic Programming

    NASA Technical Reports Server (NTRS)

    Chang, Ni-Bin

    2009-01-01

    A genetic programming (GP)-based, nonlinear modeling structure relates soil moisture with synthetic-aperture-radar (SAR) images to present representative soil moisture estimates at the watershed scale. Surface soil moisture measurement is difficult to obtain over a large area due to a variety of soil permeability values and soil textures. Point measurements can be used on a small-scale area, but it is impossible to acquire such information effectively in large-scale watersheds. This model exhibits the capacity to assimilate SAR images and relevant geoenvironmental parameters to measure soil moisture.

  1. Prediction and Monitoring of Monsoon Intraseasonal Oscillations over Indian Monsoon Region in an Ensemble Prediction System using CFSv2

    NASA Astrophysics Data System (ADS)

    Borah, Nabanita; Sukumarpillai, Abhilash; Sahai, Atul Kumar; Chattopadhyay, Rajib; Joseph, Susmitha; De, Soumyendu; Nath Goswami, Bhupendra; Kumar, Arun

    2014-05-01

    An ensemble prediction system (EPS) is devised for the extended range prediction (ERP) of monsoon intraseasonal oscillations (MISO) of Indian summer monsoon (ISM) using NCEP Climate Forecast System model version 2 at T126 horizontal resolution. The EPS is formulated by producing 11 member ensembles through the perturbation of atmospheric initial conditions. The hindcast experiments were conducted at every 5-day interval for 45 days lead time starting from 16th May to 28th September during 2001-2012. The general simulation of ISM characteristics and the ERP skill of the proposed EPS at pentad mean scale are evaluated in the present study. Though the EPS underestimates both the mean and variability of ISM rainfall, it simulates the northward propagation of MISO reasonably well. It is found that the signal-to-noise ratio becomes unity by about 18 days and the predictability error saturates by about 25 days. Though useful deterministic forecasts could be generated up to 2nd pentad lead, significant correlations are observed even up to 4th pentad lead. The skill in predicting large-scale MISO, which is assessed by comparing the predicted and observed MISO indices, is found to be ~17 days. It is noted that the prediction skill of actual rainfall is closely related to the prediction of the amplitude of large-scale MISO as well as the initial conditions related to the different phases of MISO. Categorical prediction skills reveal that break is more skillfully predicted, followed by active and then normal. The categorical probability skill scores suggest that useful probabilistic forecasts could be generated even up to 4th pentad lead.

  2. Prediction and Monitoring of Monsoon Intraseasonal Oscillations over Indian Monsoon Region in an Ensemble Prediction System using CFSv2

    NASA Astrophysics Data System (ADS)

    Borah, N.; Abhilash, S.; Sahai, A. K.; Chattopadhyay, R.; Joseph, S.; Sharmila, S.; de, S.; Goswami, B.; Kumar, A.

    2013-12-01

    An ensemble prediction system (EPS) is devised for the extended range prediction (ERP) of monsoon intraseasonal oscillations (MISOs) of Indian summer monsoon (ISM) using NCEP Climate Forecast System model version 2 at T126 horizontal resolution. The EPS is formulated by producing 11 member ensembles through the perturbation of atmospheric initial conditions. The hindcast experiments were conducted at every 5-day interval for 45 days lead time starting from 16th May to 28th September during 2001-2012. The general simulation of ISM characteristics and the ERP skill of the proposed EPS at pentad mean scale are evaluated in the present study. Though the EPS underestimates both the mean and variability of ISM rainfall, it simulates the northward propagation of MISO reasonably well. It is found that the signal-to-noise ratio becomes unity by about 18 days and the predictability error saturates by about 25 days. Though useful deterministic forecasts could be generated up to 2nd pentad lead, significant correlations are observed even up to 4th pentad lead. The skill in predicting large-scale MISO, which is assessed by comparing the predicted and observed MISO indices, is found to be ~17 days. It is noted that the prediction skill of actual rainfall is closely related to the prediction of the amplitude of large-scale MISO as well as the initial conditions related to the different phases of MISO. Categorical prediction skills reveal that break is more skillfully predicted, followed by active and then normal. The categorical probability skill scores suggest that useful probabilistic forecasts could be generated even up to 4th pentad lead.

  3. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  4. Integrating environmental covariates and crop modeling into the genomic selection framework to predict genotype by environment interactions.

    PubMed

    Heslot, Nicolas; Akdemir, Deniz; Sorrells, Mark E; Jannink, Jean-Luc

    2014-02-01

    Development of models to predict genotype by environment interactions, in unobserved environments, using environmental covariates, a crop model and genomic selection. Application to a large winter wheat dataset. Genotype by environment interaction (G*E) is one of the key issues when analyzing phenotypes. The use of environment data to model G*E has long been a subject of interest but is limited by the same problems as those addressed by genomic selection methods: a large number of correlated predictors each explaining a small amount of the total variance. In addition, non-linear responses of genotypes to stresses are expected to further complicate the analysis. Using a crop model to derive stress covariates from daily weather data for predicted crop development stages, we propose an extension of the factorial regression model to genomic selection. This model is further extended to the marker level, enabling the modeling of quantitative trait loci (QTL) by environment interaction (Q*E), on a genome-wide scale. A newly developed ensemble method, soft rule fit, was used to improve this model and capture non-linear responses of QTL to stresses. The method is tested using a large winter wheat dataset, representative of the type of data available in a large-scale commercial breeding program. Accuracy in predicting genotype performance in unobserved environments for which weather data were available increased by 11.1% on average and the variability in prediction accuracy decreased by 10.8%. By leveraging agronomic knowledge and the large historical datasets generated by breeding programs, this new model provides insight into the genetic architecture of genotype by environment interactions and could predict genotype performance based on past and future weather scenarios.
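
    The modelling idea of crossing marker effects with environment-level stress covariates can be sketched with a plain ridge regression on marker main effects, covariate main effects and marker-by-covariate interaction terms. This is a simplified stand-in for the factorial-regression and soft rule fit machinery described above; all data below are simulated and the dimensions are arbitrary.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n_geno, n_env, n_markers, n_cov = 50, 8, 40, 3

    markers = rng.integers(0, 2, size=(n_geno, n_markers)).astype(float)  # genotypes
    env_cov = rng.normal(size=(n_env, n_cov))                             # stress covariates

    # One row per genotype-in-environment: marker effects, covariate effects and
    # all marker x covariate interaction terms (the Q*E part of the model).
    rows, y = [], []
    for g in range(n_geno):
        for e in range(n_env):
            inter = np.outer(markers[g], env_cov[e]).ravel()
            rows.append(np.concatenate([markers[g], env_cov[e], inter]))
            # Synthetic phenotype with a few "true" main and interaction effects.
            y.append(1.5 * markers[g, 0] + 2.0 * env_cov[e, 0]
                     + 3.0 * markers[g, 1] * env_cov[e, 1] + rng.normal(scale=0.5))

    X, y = np.asarray(rows), np.asarray(y)
    model = Ridge(alpha=1.0).fit(X, y)
    print("in-sample R^2:", round(model.score(X, y), 3))
    ```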

  5. Modelling the large-scale redshift-space 3-point correlation function of galaxies

    NASA Astrophysics Data System (ADS)

    Slepian, Zachary; Eisenstein, Daniel J.

    2017-08-01

    We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.

  6. Predicting Positive and Negative Relationships in Large Social Networks.

    PubMed

    Wang, Guan-Nan; Gao, Hui; Chen, Lian; Mensah, Dennis N A; Fu, Yan

    2015-01-01

    In a social network, users hold and express positive and negative attitudes (e.g. support/opposition) towards other users. Those attitudes exhibit some kind of binary relationships among the users, which play an important role in social network analysis. However, some of those binary relationships are likely to be latent as the scale of the social network increases. The problem of predicting latent binary relationships has recently begun to draw researchers' attention. In this paper, we propose a machine learning algorithm for predicting positive and negative relationships in social networks, inspired by structural balance theory and social status theory. More specifically, we show that when two users in the network have fewer common neighbors, the prediction accuracy of the relationship between them deteriorates. Accordingly, in the training phase, we propose a segment-based training framework to divide the training data into two subsets according to the number of common neighbors between users, and build a prediction model for each subset based on support vector machines (SVM). Moreover, to deal with large-scale social network data, we employ a sampling strategy that selects a small amount of training data while maintaining high prediction accuracy. We compare our algorithm with traditional algorithms and adaptive boosting of them. Experimental results on typical data sets show that our algorithm can deal with large social networks and consistently outperforms other methods.
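
    The segment-based training idea can be sketched as follows: split the training pairs by their number of common neighbors, fit one SVM per segment, and route each test pair to the model for its segment. The features, labels and threshold below are synthetic placeholders, not the paper's data or settings.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)

    # Each example is a user pair: a feature vector, a common-neighbor count and
    # a label of +1 (positive tie) or -1 (negative tie). All data are synthetic.
    n = 400
    features = rng.normal(size=(n, 5))
    common_neighbors = rng.poisson(3, size=n)
    labels = np.where(features[:, 0] + 0.1 * common_neighbors + rng.normal(size=n) > 0, 1, -1)

    THRESHOLD = 3  # assumed split point on the common-neighbor count

    models = {}
    for name, mask in [("few", common_neighbors < THRESHOLD),
                       ("many", common_neighbors >= THRESHOLD)]:
        models[name] = SVC(kernel="rbf").fit(features[mask], labels[mask])

    def predict_sign(pair_features, n_common):
        """Route a user pair to the SVM trained on its common-neighbor segment."""
        model = models["few"] if n_common < THRESHOLD else models["many"]
        return model.predict(pair_features.reshape(1, -1))[0]

    print(predict_sign(rng.normal(size=5), n_common=1))
    ```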

  7. Large-Scale Earthquake Countermeasures Act and the Earthquake Prediction Council in Japan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rikitake, T.

    1979-08-07

    The Large-Scale Earthquake Countermeasures Act was enacted in Japan in December 1978. This act aims at mitigating earthquake hazards by designating an area to be an area under intensified measures against earthquake disaster, such designation being based on long-term earthquake prediction information, and by issuing an earthquake warning statement based on imminent prediction information, when possible. In an emergency case as defined by the law, the prime minister will be empowered to take various actions which cannot be taken at ordinary times. For instance, he may ask the Self-Defense Force to come into the earthquake-threatened area before the earthquake occurrence. A Prediction Council has been formed in order to evaluate premonitory effects that might be observed over the Tokai area, which was designated an area under intensified measures against earthquake disaster some time in June 1979. An extremely dense observation network has been constructed over the area.

  8. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
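
    One common way to write such a model is an eddy-viscosity term plus a nondissipative term built from products of the resolved strain-rate and rotation-rate tensors. The sketch below uses a commutator-type nonlinear term; whether this matches the exact formulation in the work above is not asserted, and the constants and velocity gradients are invented.

    ```python
    import numpy as np

    def sgs_stress(grad_u, delta, c_e=0.1, c_n=0.05):
        """
        Sketch of a subgrid-scale stress with a dissipative eddy-viscosity part
        plus a nondissipative nonlinear part built from the strain-rate (S) and
        rotation-rate (W) tensors:
            tau = -2 * nu_e * S + c_n * delta**2 * (S @ W - W @ S)
        The constants c_e, c_n and this exact form are illustrative assumptions.
        """
        S = 0.5 * (grad_u + grad_u.T)
        W = 0.5 * (grad_u - grad_u.T)
        nu_e = (c_e * delta) ** 2 * np.sqrt(2.0 * np.sum(S * S))
        return -2.0 * nu_e * S + c_n * delta ** 2 * (S @ W - W @ S)

    # Velocity-gradient tensor at one grid point (made-up, trace-free values).
    grad_u = np.array([[0.0, 1.0, 0.0],
                       [0.2, 0.1, 0.0],
                       [0.0, 0.0, -0.1]])
    print(np.round(sgs_stress(grad_u, delta=0.02), 8))
    ```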

  9. Preliminary measurement of the noise from the 2/9 scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, J. H.

    1985-01-01

    Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis 8- by 6-Foot Wind Tunnel. The maximum blade passing tone decreases from the peak level when going to higher helical tip Mach numbers. This noise reduction points to the use of higher propeller speeds as a possible method to reduce airplane cabin noise while maintaining high flight speed and efficiency. Comparison of the SR-7A blade passing noise with the noise of the similarly designed SR-3 propeller shows good agreement as expected. The SR-7A propeller is slightly noisier than the SR-3 model in the plane of rotation at the cruise condition. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with design predictions. The prediction method is conservative in the sense that it overpredicts the projected model data.

  10. Cruise noise of the 2/9th scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.; Stang, David B.

    1987-01-01

    Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out at high helical tip Mach numbers, points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.

  11. Cruise noise of the 2/9 scale model of the Large-scale Advanced Propfan (LAP) propeller, SR-7A

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.; Stang, David B.

    1987-01-01

    Noise data on the Large-scale Advanced Propfan (LAP) propeller model SR-7A were taken in the NASA Lewis Research Center 8 x 6 foot Wind Tunnel. The maximum blade passing tone noise first rises with increasing helical tip Mach number to a peak level, then remains the same or decreases from its peak level when going to higher helical tip Mach numbers. This trend was observed for operation at both constant advance ratio and approximately equal thrust. This noise reduction, or leveling out at high helical tip Mach numbers, points to the use of higher propeller tip speeds as a possible method to limit airplane cabin noise while maintaining high flight speed and efficiency. Projections of the tunnel model data are made to the full-scale LAP propeller mounted on the test bed aircraft and compared with predictions. The prediction method is found to be somewhat conservative in that it slightly overpredicts the projected model data at the peak.

  12. On the Computation of Sound by Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Streett, Craig L.; Sarkar, Sutanu

    1997-01-01

    The effect of the small scales on the source term in Lighthill's acoustic analogy is investigated, with the objective of determining the accuracy of large-eddy simulations when applied to studies of flow-generated sound. The distribution of the turbulent quadrupole is predicted accurately if models that take into account the trace of the SGS stresses are used. Its spatial distribution is also correct, indicating that the low-wave-number (or frequency) part of the sound spectrum can be predicted well by LES. Filtering, however, removes the small-scale fluctuations that contribute significantly to the higher derivatives in space and time of Lighthill's stress tensor T(sub ij). The rms fluctuations of the filtered derivatives are substantially lower than those of the unfiltered quantities. The small scales, however, are not strongly correlated, and are not expected to contribute significantly to the far-field sound; separate modeling of the subgrid-scale density fluctuations might, however, be required in some configurations.

  13. Identification and Functional Prediction of Large Intergenic Noncoding RNAs (lincRNAs) in Rainbow Trout (Oncorhynchus mykiss)

    USDA-ARS?s Scientific Manuscript database

    Long noncoding RNAs (lncRNAs) have been recognized in recent years as key regulators of diverse cellular processes. Genome-wide large-scale projects have uncovered thousands of lncRNAs in many model organisms. Large intergenic noncoding RNAs (lincRNAs) are lncRNAs that are transcribed from intergeni...

  14. Characterization and prediction of extreme events in turbulence

    NASA Astrophysics Data System (ADS)

    Fonda, Enrico; Iyer, Kartik P.; Sreenivasan, Katepalli R.

    2017-11-01

    Extreme events in Nature such as tornadoes, large floods and strong earthquakes are rare but can have devastating consequences. The predictability of these events is very limited at present. Extreme events in turbulence are the very large events at small scales that are intermittent in character. We examine events in the energy dissipation rate and enstrophy that are several tens to thousands of times the mean value. To this end we use our DNS database of homogeneous and isotropic turbulence with Taylor Reynolds numbers spanning a decade, computed with different small-scale resolutions and different box sizes, and study the predictability of these events using machine learning. We start with an aggressive data augmentation to virtually increase the number of these rare events by two orders of magnitude and train a deep convolutional neural network to predict their occurrence in an independent data set. The goal of the work is to explore whether extreme events can be predicted with greater assurance than can be done by conventional methods (e.g., D.A. Donzis & K.R. Sreenivasan, J. Fluid Mech. 647, 13-26, 2010).
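
    A schematic version of the workflow described above (label sub-volumes as extreme by a threshold on the dissipation field, augment the rare positive samples, and score them with a small 3D convolutional network) is sketched below. The threshold, box size, synthetic field and network architecture are all illustrative assumptions, and no training loop is shown.

    ```python
    import numpy as np
    import torch
    import torch.nn as nn

    def extract_boxes(field, box=16, threshold=10.0):
        """Cut a 3D field into sub-boxes; label a box 1 if its peak exceeds
        `threshold` times the global mean (an assumed definition of 'extreme')."""
        mean, n = field.mean(), field.shape[0] // box
        boxes, labels = [], []
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    sub = field[i*box:(i+1)*box, j*box:(j+1)*box, k*box:(k+1)*box]
                    boxes.append(sub)
                    labels.append(float(sub.max() > threshold * mean))
        return np.stack(boxes), np.array(labels)

    def augment_positives(boxes, labels):
        """Flip the rare positive boxes along each spatial axis to multiply them."""
        pos = boxes[labels == 1]
        flips = [np.flip(pos, axis=a) for a in (1, 2, 3)]
        return (np.concatenate([boxes] + flips),
                np.concatenate([labels, np.ones(3 * len(pos))]))

    class ExtremeEventCNN(nn.Module):
        """Small 3D CNN scoring the probability that a box holds an extreme event."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                nn.Linear(16, 1),
            )

        def forward(self, x):
            return torch.sigmoid(self.net(x))

    field = np.random.default_rng(0).gamma(1.0, size=(64, 64, 64))   # stand-in data
    boxes, labels = augment_positives(*extract_boxes(field))
    scores = ExtremeEventCNN()(torch.tensor(boxes[:4, None], dtype=torch.float32))
    print(scores.shape)   # torch.Size([4, 1])
    ```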

  15. Using High Resolution Remotely Sensed Data to Predict Territory Occupancy and Microrefugia for a Habitat Specialist, the American Pika (Ochotona princeps)

    NASA Astrophysics Data System (ADS)

    Beers, A.; Ray, C.

    2015-12-01

    Climate change is likely to affect mountainous areas unevenly due to the complex interactions between topography, vegetation, and the accumulation of snow and ice. This heterogeneity will complicate relationships between species presence and large-scale drivers such as precipitation and make predicting habitat extent and connectivity much more difficult. We studied the potential for fine-scale variation in climate and habitat use throughout the year in the American pika (Ochotona princeps), a talus specialist of mountainous western North America known for strong microhabitat affiliation. Not all areas of talus are likely to be equally hospitable, which may reduce connectivity more than predicted by large-scale occupancy drivers. We used high-resolution remotely sensed data to create metrics of the terrain and land cover in the Niwot Ridge (NWT) LTER site in Colorado. We hypothesized that pikas preferentially use heterogeneous terrain, as it might foster greater snow accumulation, and used radio telemetry to test this with radio-collared pikas. Pikas use heterogeneous terrain during snow-covered periods and less heterogeneous areas during the summer. This suggests that not all areas of talus habitat are equally suitable as shelter from extreme conditions but that pikas need more than just shelter from winter cold. With these results, we created a predictive map using the same habitat metrics to model the extent of suitable habitat across the NWT area. These strong effects of terrain on pika habitat use and territory occupancy show the great utility that high-resolution remotely sensed data can have in ecological applications. With increasing effects of climate change in mountainous regions, this modeling approach is crucial for quantifying habitat connectivity at both small and large scales and for identifying potential refugia for threatened or isolated species.

  16. Inflationary tensor fossils in large-scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimastrogiovanni, Emanuela; Fasiello, Matteo; Jeong, Donghui

    Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  17. Modeling near-wall turbulent flows

    NASA Astrophysics Data System (ADS)

    Marusic, Ivan; Mathis, Romain; Hutchins, Nicholas

    2010-11-01

    The near-wall region of turbulent boundary layers is a crucial region for turbulence production, but it is also a region that becomes increasingly difficult to access and make measurements in as the Reynolds number becomes very high. Consequently, it is desirable to model the turbulence in this region. Recent studies have shown that the classical description, with inner (wall) scaling alone, is insufficient to explain the behaviour of the streamwise turbulence intensities with increasing Reynolds number. Here we will review our recent near-wall model (Marusic et al., Science 329, 2010), where the near-wall turbulence is predicted given information from only the large-scale signature at a single measurement point in the logarithmic layer, considerably far from the wall. The model is consistent with the Townsend attached eddy hypothesis in that the large-scale structures associated with the log-region are felt all the way down to the wall, but also includes a non-linear amplitude modulation effect of the large structures on the near-wall turbulence. Detailed predicted spectra across the entire near-wall region will be presented, together with other higher order statistics over a large range of Reynolds numbers varying from laboratory to atmospheric flows.
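
    The type of predictive model described here combines a stored small-scale "universal" signal with the measured large-scale log-region signal through superposition and amplitude modulation. The sketch below shows that structure with made-up signals and calibration constants; it is not the published calibration.

    ```python
    import numpy as np

    def predict_near_wall(u_universal, u_outer_large, alpha=0.6, beta=0.3):
        """
        Inner-outer style prediction of a near-wall streamwise velocity signal:
        a stored "universal" small-scale signal is superposed with, and amplitude-
        modulated by, the large-scale signal measured in the logarithmic region:
            u_pred = u_universal * (1 + beta * u_outer) + alpha * u_outer
        The constants and signals here are illustrative, not the published calibration.
        """
        return u_universal * (1.0 + beta * u_outer_large) + alpha * u_outer_large

    t = np.linspace(0.0, 10.0, 2000)
    u_small = 0.8 * np.sin(40 * t) + 0.3 * np.sin(95 * t)   # stand-in universal signal
    u_large = 1.2 * np.sin(1.5 * t)                         # stand-in log-region signal

    u_pred = predict_near_wall(u_small, u_large)
    print("predicted rms:", round(float(u_pred.std()), 3))
    ```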

  18. StructRNAfinder: an automated pipeline and web server for RNA families prediction.

    PubMed

    Arias-Carrasco, Raúl; Vásquez-Morán, Yessenia; Nakaya, Helder I; Maracaja-Coutinho, Vinicius

    2018-02-17

    The function of many noncoding RNAs (ncRNAs) depends upon their secondary structures. Over the last decades, several methodologies have been developed to predict such structures or to use them to functionally annotate RNAs into RNA families. However, to fully perform this analysis, researchers should utilize multiple tools, which requires the constant parsing and processing of several intermediate files. This makes the large-scale prediction and annotation of RNAs a daunting task even for researchers with good computational or bioinformatics skills. We present an automated pipeline named StructRNAfinder that predicts and annotates RNA families in transcript or genome sequences. This single tool not only displays the sequence/structural consensus alignments for each RNA family, according to the Rfam database, but also provides a taxonomic overview for each assigned functional RNA. Moreover, we implemented a user-friendly web service that allows researchers to upload their own nucleotide sequences in order to perform the whole analysis. Finally, we provided a stand-alone version of StructRNAfinder to be used in large-scale projects. The tool was developed under the GNU General Public License (GPLv3) and is freely available at http://structrnafinder.integrativebioinformatics.me . The main advantage of StructRNAfinder lies in its large-scale processing and integration of the data obtained by each tool and database employed along the workflow; the several files generated are consolidated into user-friendly reports, useful for downstream analyses and data exploration.

  19. A new energy transfer model for turbulent free shear flow

    NASA Technical Reports Server (NTRS)

    Liou, William W.-W.

    1992-01-01

    A new model for the energy transfer mechanism in the large-scale turbulent kinetic energy equation is proposed. An estimate of the characteristic length scale of the energy-containing large structures is obtained from the wavelength associated with the structures predicted by a weakly nonlinear analysis for turbulent free shear flows. With the inclusion of the proposed energy transfer model, the weakly nonlinear wave models for the turbulent large-scale structures are self-contained and are likely to be independent of flow geometry. The model is tested against a plane mixing layer. Reasonably good agreement is achieved. Finally, it is shown, using the Liapunov function method, that the balance between the production and the drainage of the kinetic energy of the turbulent large-scale structures is asymptotically stable as their amplitude saturates. The saturation of the wave amplitude provides an alternative indicator for flow self-similarity.

  20. Measuring the Large-scale Solar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.

    2017-12-01

    The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory, the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.

  1. Use of DES in mildly separated internal flow: dimples in a turbulent channel

    NASA Astrophysics Data System (ADS)

    Tay, Chien Ming Jonathan; Khoo, Boo Cheong; Chew, Yong Tian

    2017-12-01

    Detached eddy simulation (DES) is investigated as a means to study an array of shallow dimples with depth-to-diameter ratios of 1.5% and 5% in a turbulent channel. The DES captures large-scale flow features relatively well, but is unable to predict skin friction accurately due to the modelling of the flow near the wall. The current work instead relies on the accuracy of DES in predicting large-scale flow features, as well as its well-documented reliability in predicting flow separation regions, to support the proposed mechanism that dimples reduce drag by introducing spanwise flow components near the wall through the addition of streamwise vorticity. Profiles of the turbulent energy budget show the stabilising effect of the dimples on the flow. The presence of flow separation, however, modulates the net drag reduction. Increasing the Reynolds number can reduce the size of the separated region, and experiments show that this increases the overall drag reduction.

  2. In Situ Burning of Oil Spills

    PubMed Central

    Evans, David D.; Mulholland, George W.; Baum, Howard R.; Walton, William D.; McGrattan, Kevin B.

    2001-01-01

    For more than a decade NIST conducted research to understand, measure and predict the important features of burning oil on water. Results of that research have been included in nationally recognized guidelines for approval of intentional burning. NIST measurements and predictions have played a major role in establishing in situ burning as a primary oil spill response method. Data are given for pool fire burning rates, smoke yield, smoke particulate size distribution, smoke aging, and polycyclic aromatic hydrocarbon content of the smoke for crude and fuel oil fires with effective diameters up to 17.2 m. New user-friendly software, ALOFT, was developed to quantify the large-scale features and trajectory of wind blown smoke plumes in the atmosphere and estimate the ground level smoke particulate concentrations. Predictions using the model were tested successfully against data from large-scale tests. ALOFT software is being used by oil spill response teams to help assess the potential impact of intentional burning. PMID:27500022

  3. HOW THE DENSITY ENVIRONMENT CHANGES THE INFLUENCE OF THE DARK MATTER–BARYON STREAMING VELOCITY ON COSMOLOGICAL STRUCTURE FORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Kyungjin, E-mail: kjahn@chosun.ac.kr

    We study the dynamical effect of the relative velocity between dark matter and baryonic fluids, which remained supersonic after the epoch of recombination. The impact of this supersonic motion on the formation of cosmological structures was first formulated by Tseliakhovich and Hirata, in terms of the linear theory of small-scale fluctuations coupled to large-scale relative velocities in mean-density regions. In their formalism, they limited the large-scale density environment to be that of the global mean density. We improve on their formulation by allowing variation in the density environment as well as the relative velocities. This leads to a new type of coupling between large-scale and small-scale modes. We find that the small-scale fluctuation grows in a biased way: faster in the overdense environment and slower in the underdense environment. We also find that the net effect on the global power spectrum of the density fluctuation is to boost its overall amplitude from the prediction by Tseliakhovich and Hirata. Correspondingly, the conditional mass function of cosmological halos and the halo bias parameter are both affected in a similar way. The discrepancy between our prediction and that of Tseliakhovich and Hirata is significant, and therefore, the related cosmology and high-redshift astrophysics should be revisited. The mathematical formalism of this study can be used for generating cosmological initial conditions of small-scale perturbations in generic, overdense (underdense) background patches.

  4. Measures for a transdimensional multiverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schwartz-Perlov, Delia; Vilenkin, Alexander, E-mail: dperlov@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu

    2010-06-01

    The multiverse/landscape paradigm that has emerged from eternal inflation and string theory, describes a large-scale multiverse populated by ''pocket universes'' which come in a huge variety of different types, including different dimensionalities. In order to make predictions in the multiverse, we need a probability measure. In (3+1)d landscapes, the scale factor cutoff measure has been previously shown to have a number of attractive properties. Here we consider possible generalizations of this measure to a transdimensional multiverse. We find that a straightforward extension of scale factor cutoff to the transdimensional case gives a measure that strongly disfavors large amounts of slow-roll inflation and predicts low values for the density parameter Ω, in conflict with observations. A suitable generalization, which retains all the good properties of the original measure, is the ''volume factor'' cutoff, which regularizes the infinite spacetime volume using cutoff surfaces of constant volume expansion factor.

  5. Predictive validity and psychiatric nursing staff's perception of the clinical usefulness of the French version of the Dynamic Appraisal of Situational Aggression.

    PubMed

    Dumais, Alexandre; Larue, Caroline; Michaud, Cécile; Goulet, Marie-Hélène

    2012-10-01

    This study seeks to evaluate the predictive validity of the French version of the Dynamic Appraisal of Situational Aggression (DASAfr) and psychiatric nurses' perceptions of the clinical usefulness of the scale. The study was conducted in a 12-bed psychiatric intensive care unit in a large adult general psychiatric hospital. We found that the total score on the DASAfr has acceptable predictive accuracy for aggression against others and against staff and for seclusion with restraints; predictive accuracy was poorer for aggression against objects. Moreover, the nurses thought the scale would be useful to their practice and, indeed, the team still uses the DASAfr.

  6. Void probability as a function of the void's shape and scale-invariant models

    NASA Technical Reports Server (NTRS)

    Elizalde, E.; Gaztanaga, E.

    1991-01-01

    The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for some elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.

  7. Hi-C Chromatin Interaction Networks Predict Co-expression in the Mouse Cortex

    PubMed Central

    Hulsman, Marc; Lelieveldt, Boudewijn P. F.; de Ridder, Jeroen; Reinders, Marcel

    2015-01-01

    The three dimensional conformation of the genome in the cell nucleus influences important biological processes such as gene expression regulation. Recent studies have shown a strong correlation between chromatin interactions and gene co-expression. However, predicting gene co-expression from frequent long-range chromatin interactions remains challenging. We address this by characterizing the topology of the cortical chromatin interaction network using scale-aware topological measures. We demonstrate that based on these characterizations it is possible to accurately predict spatial co-expression between genes in the mouse cortex. Consistent with previous findings, we find that the chromatin interaction profile of a gene-pair is a good predictor of their spatial co-expression. However, the accuracy of the prediction can be substantially improved when chromatin interactions are described using scale-aware topological measures of the multi-resolution chromatin interaction network. We conclude that, for co-expression prediction, it is necessary to take into account different levels of chromatin interactions ranging from direct interaction between genes (i.e. small-scale) to chromatin compartment interactions (i.e. large-scale). PMID:25965262

  8. Field Assessment Stroke Triage for Emergency Destination: A Simple and Accurate Prehospital Scale to Detect Large Vessel Occlusion Strokes.

    PubMed

    Lima, Fabricio O; Silva, Gisele S; Furie, Karen L; Frankel, Michael R; Lev, Michael H; Camargo, Érica C S; Haussen, Diogo C; Singhal, Aneesh B; Koroshetz, Walter J; Smith, Wade S; Nogueira, Raul G

    2016-08-01

    Patients with large vessel occlusion strokes (LVOS) may be better served by direct transfer to endovascular-capable centers, avoiding hazardous delays between primary and comprehensive stroke centers. However, accurate stroke field triage remains challenging. We aimed to develop a simple field scale to identify LVOS. The Field Assessment Stroke Triage for Emergency Destination (FAST-ED) scale was based on items of the National Institutes of Health Stroke Scale (NIHSS) with higher predictive value for LVOS and tested in the Screening Technology and Outcomes Project in Stroke (STOPStroke) cohort, in which patients underwent computed tomographic angiography within the first 24 hours of stroke onset. LVOS were defined by total occlusions involving the intracranial internal carotid artery, middle cerebral artery-M1, middle cerebral artery-M2, or basilar arteries. Patients with partial, bihemispheric, and anterior+posterior circulation occlusions were excluded. Receiver operating characteristic curve, sensitivity, specificity, positive predictive value, and negative predictive value of FAST-ED were compared with the NIHSS, Rapid Arterial Occlusion Evaluation (RACE) scale, and Cincinnati Prehospital Stroke Severity (CPSS) scale. LVO was detected in 240 of the 727 qualifying patients (33%). FAST-ED had accuracy comparable to the NIHSS in predicting LVO and higher accuracy than RACE and CPSS (area under the receiver operating characteristic curve: FAST-ED=0.81 as reference; NIHSS=0.80, P=0.28; RACE=0.77, P=0.02; and CPSS=0.75, P=0.002). A FAST-ED ≥4 had sensitivity of 0.60, specificity of 0.89, positive predictive value of 0.72, and negative predictive value of 0.82 versus RACE ≥5 of 0.55, 0.87, 0.68, and 0.79, and CPSS ≥2 of 0.56, 0.85, 0.65, and 0.78, respectively. FAST-ED is a simple scale that, if successfully validated in the field, may be used by emergency medical professionals to identify LVOS in the prehospital setting, enabling rapid triage of patients. © 2016 American Heart Association, Inc.
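
    The cut-point statistics quoted above (sensitivity, specificity, positive and negative predictive value at a score threshold) can be reproduced from any labelled cohort with a few lines of code. The sketch below uses invented scores and outcomes purely to show the computation.

    ```python
    import numpy as np

    def cutoff_diagnostics(scores, has_lvo, cutoff):
        """Sensitivity, specificity, PPV and NPV of the rule `score >= cutoff`."""
        pred = np.asarray(scores) >= cutoff
        truth = np.asarray(has_lvo, dtype=bool)
        tp, fn = np.sum(pred & truth), np.sum(~pred & truth)
        fp, tn = np.sum(pred & ~truth), np.sum(~pred & ~truth)
        return {"sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn)}

    # Invented prehospital scores and angiography outcomes, for illustration only.
    scores  = [0, 2, 5, 4, 1, 3, 6, 2, 4, 0, 5, 1]
    has_lvo = [0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
    print(cutoff_diagnostics(scores, has_lvo, cutoff=4))
    ```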

  9. Wetlands as large-scale nature-based solutions: status and future challenges for research and management

    NASA Astrophysics Data System (ADS)

    Thorslund, Josefin; Jarsjö, Jerker; Destouni, Georgia

    2017-04-01

    Wetlands are often considered as nature-based solutions that can provide a multitude of services of great social, economic and environmental value to humankind. The services may include recreation, greenhouse gas sequestration, contaminant retention, coastal protection, groundwater level and soil moisture regulation, flood regulation and biodiversity support. Changes in land-use, water use and climate can all impact wetland functions and occur at scales extending well beyond the local scale of an individual wetland. However, in practical applications, management decisions usually regard and focus on individual wetland sites and local conditions. To understand the potential usefulness and services of wetlands as larger-scale nature-based solutions, e.g. for mitigating negative impacts from large-scale change pressures, one needs to understand the combined function of multiple wetlands at the relevant large scales. We here systematically investigate if and to what extent research so far has addressed the large-scale dynamics of landscape systems with multiple wetlands, which are likely to be relevant for understanding impacts of regional to global change. Our investigation regards key changes and impacts of relevance for nature-based solutions, such as large-scale nutrient and pollution retention, flow regulation and coastal protection. Although such large-scale knowledge is still limited, evidence suggests that the aggregated functions and effects of multiple wetlands in the landscape can differ considerably from those observed at individual wetlands. Such scale differences may have important implications for wetland function-effect predictability and management under large-scale change pressures and impacts, such as those of climate change.

  10. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    novel method of simultaneous real-time measurements of ice-nucleating particle concentrations and size-resolved chemical composition of individual...is to develop a practical predictive capability for visibility and weather effects of aerosol particles for the entire globe for timely use in...prediction follows that used in numerical weather prediction, namely real-time assessment for initialization of first-principles models. The Naval

  11. THE FUTURE OF TOXICOLOGY-PREDICTIVE TOXICOLOGY ...

    EPA Pesticide Factsheets

    A chemistry approach to predictive toxicology relies on structure−activity relationship (SAR) modeling to predict biological activity from chemical structure. Such approaches have proven capabilities when applied to well-defined toxicity end points or regions of chemical space. These approaches are less well-suited, however, to the challenges of global toxicity prediction, i.e., to predicting the potential toxicity of structurally diverse chemicals across a wide range of end points of regulatory and pharmaceutical concern. New approaches that have the potential to significantly improve capabilities in predictive toxicology are elaborating the “activity” portion of the SAR paradigm. Recent advances in two areas of endeavor are particularly promising. Toxicity data informatics relies on standardized data schema, developed for particular areas of toxicological study, to facilitate data integration and enable relational exploration and mining of data across both historical and new areas of toxicological investigation. Bioassay profiling refers to large-scale high-throughput screening approaches that use chemicals as probes to broadly characterize biological response space, extending the concept of chemical “properties” to the biological activity domain. The effective capture and representation of legacy and new toxicity data into mineable form and the large-scale generation of new bioassay data in relation to chemical toxicity, both employing chemical stru

  12. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present the strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale.
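
    Flow calibration at gauged locations in SWAT-type studies is commonly judged with objective functions such as the Nash-Sutcliffe efficiency. The abstract does not list the criteria used here, so the following is only a generic sketch:

```python
# Generic sketch of a streamflow calibration objective often used with
# SWAT-type models (the study's own criteria are not stated in the abstract).
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is perfect."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )

# Hypothetical usage with monthly streamflow at one gauge:
# print(nash_sutcliffe(obs_monthly_flow, swat_monthly_flow))
```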

  13. Logistic regression accuracy across different spatial and temporal scales for a wide-ranging species, the marbled murrelet

    Treesearch

    Carolyn B. Meyer; Sherri L. Miller; C. John Ralph

    2004-01-01

    The scale at which habitat variables are measured affects the accuracy of resource selection functions in predicting animal use of sites. We used logistic regression models for a wide-ranging species, the marbled murrelet (Brachyramphus marmoratus), in a large region in California to address how much changing the spatial or temporal scale of...

  14. Multi-scale properties of large eddy simulations: correlations between resolved-scale velocity-field increments and subgrid-scale quantities

    NASA Astrophysics Data System (ADS)

    Linkmann, Moritz; Buzzicotti, Michele; Biferale, Luca

    2018-06-01

    We provide analytical and numerical results concerning multi-scale correlations between the resolved velocity field and the subgrid-scale (SGS) stress-tensor in large eddy simulations (LES). Following previous studies for Navier-Stokes equations, we derive the exact hierarchy of LES equations governing the spatio-temporal evolution of velocity structure functions of any order. The aim is to assess the influence of the subgrid model on the inertial range intermittency. We provide a series of predictions, within the multifractal theory, for the scaling of correlations involving the SGS stress, and we compare them against numerical results from high-resolution Smagorinsky LES and from a-priori filtered data generated from direct numerical simulations (DNS). We find that LES data generally agree very well with filtered DNS results and with the multifractal prediction for all leading terms in the balance equations. Discrepancies are measured for some of the sub-leading terms involving cross-correlation between resolved velocity increments and the SGS tensor or the SGS energy transfer, suggesting that there must be room to improve the SGS modelling to further extend the inertial range properties for any fixed LES resolution.
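
    For reference, the Smagorinsky closure used in the high-resolution LES mentioned above models the deviatoric SGS stress with an eddy viscosity built from the resolved strain rate. This is the textbook form of the model, quoted for context rather than taken from the paper:

```latex
% Standard Smagorinsky SGS model (textbook form)
\tau_{ij} - \tfrac{1}{3}\tau_{kk}\,\delta_{ij}
  = -2\,(C_s \Delta)^2\,|\bar{S}|\,\bar{S}_{ij},
\qquad
\bar{S}_{ij} = \tfrac{1}{2}\left(\partial_j \bar{u}_i + \partial_i \bar{u}_j\right),
\qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}.
```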

  15. Large-scale brain network coupling predicts acute nicotine abstinence effects on craving and cognitive function.

    PubMed

    Lerman, Caryn; Gu, Hong; Loughead, James; Ruparel, Kosha; Yang, Yihong; Stein, Elliot A

    2014-05-01

    Interactions of large-scale brain networks may underlie cognitive dysfunctions in psychiatric and addictive disorders. The aims were to test the hypothesis that the strength of coupling among 3 large-scale brain networks (salience, executive control, and default mode) reflects the state of nicotine withdrawal (vs smoking satiety) and predicts abstinence-induced craving and cognitive deficits, and to develop a resource allocation index (RAI) that reflects the combined strength of interactions among the 3 large-scale networks. A within-subject functional magnetic resonance imaging study in an academic medical center compared resting-state functional connectivity coherence strength after 24 hours of abstinence and after smoking satiety. We examined the relationship of abstinence-induced changes in the RAI with alterations in subjective, behavioral, and neural functions. We included 37 healthy smoking volunteers, aged 19 to 61 years, for analyses. The intervention was 24 hours of abstinence vs smoking satiety, and the outcomes were inter-network connectivity strength (primary) and its relationship with subjective, behavioral, and neural measures of nicotine withdrawal during abstinence vs smoking satiety states (secondary). The RAI was significantly lower in the abstinent compared with the smoking satiety states (left RAI, P = .002; right RAI, P = .04), suggesting weaker inhibition between the default mode and salience networks. Weaker inter-network connectivity (reduced RAI) predicted abstinence-induced cravings to smoke (r = -0.59; P = .007) and less suppression of default mode activity during performance of a subsequent working memory task (ventromedial prefrontal cortex, r = -0.66, P = .003; posterior cingulate cortex, r = -0.65, P = .001). Alterations in coupling of the salience and default mode networks and the inability to disengage from the default mode network may be critical in cognitive/affective alterations that underlie nicotine dependence.

  16. Estimating large carnivore populations at global scale based on spatial predictions of density and distribution – Application to the jaguar (Panthera onca)

    PubMed Central

    Robinson, Hugh S.; Abarca, Maria; Zeller, Katherine A.; Velasquez, Grisel; Paemelaere, Evi A. D.; Goldberg, Joshua F.; Payan, Esteban; Hoogesteijn, Rafael; Boede, Ernesto O.; Schmidt, Krzysztof; Lampo, Margarita; Viloria, Ángel L.; Carreño, Rafael; Robinson, Nathaniel; Lukacs, Paul M.; Nowak, J. Joshua; Salom-Pérez, Roberto; Castañeda, Franklin; Boron, Valeria; Quigley, Howard

    2018-01-01

    Broad scale population estimates of declining species are desired for conservation efforts. However, for many secretive species including large carnivores, such estimates are often difficult. Based on published density estimates obtained through camera trapping, presence/absence data, and globally available predictive variables derived from satellite imagery, we modelled density and occurrence of a large carnivore, the jaguar, across the species' entire range. We then combined these models in a hierarchical framework to estimate the total population. Our models indicate that potential jaguar density is best predicted by measures of primary productivity, with the highest densities in the most productive tropical habitats and a clear declining gradient with distance from the equator. Jaguar distribution, in contrast, is determined by the combined effects of human impacts and environmental factors: probability of jaguar occurrence increased with forest cover, mean temperature, and annual precipitation and declined with increases in human footprint index and human density. Probability of occurrence was also significantly higher for protected areas than outside of them. We estimated the world's jaguar population at 173,000 (95% CI: 138,000–208,000) individuals, mostly concentrated in the Amazon Basin; elsewhere, populations tend to be small and fragmented. The high number of jaguars results from the large total area still occupied (almost 9 million km2) and low human densities (< 1 person/km2) coinciding with high primary productivity in the core area of jaguar range. Our results show the importance of protected areas for jaguar persistence. We conclude that combining modelling of density and distribution can reveal ecological patterns and processes at global scales, can provide robust estimates for use in species assessments, and can guide broad-scale conservation actions. PMID:29579129
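
    In its simplest form, the hierarchical combination described above amounts to weighting predicted density by predicted probability of occurrence and summing over the range. A toy sketch, with all arrays, values and the cell area hypothetical:

```python
# Toy sketch of combining density and occurrence predictions into a total
# population estimate; arrays and the cell area are hypothetical.
import numpy as np

def population_estimate(density_per_km2, p_occurrence, cell_area_km2):
    """Sum expected abundance over grid cells: density * P(occurrence) * area."""
    return float(np.sum(density_per_km2 * p_occurrence * cell_area_km2))

# Hypothetical usage for a 3-cell range (density in individuals per km2):
# density = np.array([0.025, 0.010, 0.002])
# p_occ   = np.array([0.9, 0.6, 0.1])
# print(population_estimate(density, p_occ, cell_area_km2=100.0))
```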

  17. Prediction of miRNA targets.

    PubMed

    Oulas, Anastasis; Karathanasis, Nestoras; Louloupi, Annita; Pavlopoulos, Georgios A; Poirazi, Panayiota; Kalantidis, Kriton; Iliopoulos, Ioannis

    2015-01-01

    Computational methods for miRNA target prediction are currently undergoing extensive review and evaluation. There is still a great need for improvement of these tools and bioinformatics approaches are looking towards high-throughput experiments in order to validate predictions. The combination of large-scale techniques with computational tools will not only provide greater credence to computational predictions but also lead to the better understanding of specific biological questions. Current miRNA target prediction tools utilize probabilistic learning algorithms, machine learning methods and even empirical biologically defined rules in order to build models based on experimentally verified miRNA targets. Large-scale protein downregulation assays and next-generation sequencing (NGS) are now being used to validate methodologies and compare the performance of existing tools. Tools that exhibit greater correlation between computational predictions and protein downregulation or RNA downregulation are considered the state of the art. Moreover, efficiency in prediction of miRNA targets that are concurrently verified experimentally provides additional validity to computational predictions and further highlights the competitive advantage of specific tools and their efficacy in extracting biologically significant results. In this review paper, we discuss the computational methods for miRNA target prediction and provide a detailed comparison of methodologies and features utilized by each specific tool. Moreover, we provide an overview of current state-of-the-art high-throughput methods used in miRNA target prediction.

  18. An integrated approach to reconstructing genome-scale transcriptional regulatory networks

    DOE PAGES

    Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.; ...

    2015-02-27

    Transcriptional regulatory networks (TRNs) program cells to dynamically alter their gene expression in response to changing internal or environmental conditions. In this study, we develop a novel workflow for generating large-scale TRN models that integrates comparative genomics data, global gene expression analyses, and intrinsic properties of transcription factors (TFs). An assessment of this workflow using benchmark datasets for the well-studied γ-proteobacterium Escherichia coli showed that it outperforms expression-based inference approaches, having a significantly larger area under the precision-recall curve. Further analysis indicated that this integrated workflow captures different aspects of the E. coli TRN than expression-based approaches, potentially making them highly complementary. We leveraged this new workflow and observations to build a large-scale TRN model for the α-Proteobacterium Rhodobacter sphaeroides that comprises 120 gene clusters, 1211 genes (including 93 TFs), 1858 predicted protein-DNA interactions and 76 DNA binding motifs. We found that ~67% of the predicted gene clusters in this TRN are enriched for functions ranging from photosynthesis or central carbon metabolism to environmental stress responses. We also found that members of many of the predicted gene clusters were consistent with prior knowledge in R. sphaeroides and/or other bacteria. Experimental validation of predictions from this R. sphaeroides TRN model showed that high precision and recall were also obtained for TFs involved in photosynthesis (PpsR), carbon metabolism (RSP_0489) and iron homeostasis (RSP_3341). In addition, this integrative approach enabled generation of TRNs with increased information content relative to R. sphaeroides TRN models built via other approaches. We also show how this approach can be used to simultaneously produce TRN models for each related organism used in the comparative genomics analysis. Our results highlight the advantages of integrating comparative genomics of closely related organisms with gene expression data to assemble large-scale TRN models with high-quality predictions.
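
    The benchmark comparison above is scored with the area under the precision-recall curve. A minimal sketch of how such a score is computed for scored regulator-target predictions against a gold standard; the input arrays are hypothetical:

```python
# Sketch of scoring predicted TF-target interactions against a gold standard
# with the area under the precision-recall curve. Inputs are hypothetical.
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

def aupr(y_true, y_score):
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    return auc(recall, precision)

# Hypothetical usage:
# y_true  = np.array([1, 0, 1, 0, 0, 1])               # in gold standard?
# y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.8])   # predicted confidence
# print(aupr(y_true, y_score))
```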

  19. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing.

    PubMed

    Lim, Hansaim; Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; He, Di; Zhuang, Luke; Meng, Patrick; Xie, Lei

    2016-10-01

    Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we present a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and side effect prediction. The software and benchmark are available at https://github.com/hansaimlim/REMAP.
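
    REMAP itself is available at the linked repository. Purely to illustrate the underlying idea, the sketch below implements a plain weighted one-class matrix factorization, where observed chemical-protein pairs are weighted more heavily than unobserved ones; it omits the dual (chemical/protein similarity) regularization that gives REMAP its name and is not the REMAP code:

```python
# Illustrative weighted one-class matrix factorization for chemical-protein
# interaction scoring. NOT the REMAP implementation; it omits the dual
# similarity-based regularization and uses simple full-gradient descent.
import numpy as np

def factorize(R, rank=10, w_pos=1.0, w_neg=0.1, reg=0.01, lr=0.01, epochs=50, seed=0):
    """R is a 0/1 chemical-by-protein interaction matrix."""
    rng = np.random.default_rng(seed)
    n_chem, n_prot = R.shape
    U = 0.1 * rng.standard_normal((n_chem, rank))
    V = 0.1 * rng.standard_normal((n_prot, rank))
    W = np.where(R > 0, w_pos, w_neg)        # down-weight unobserved pairs
    for _ in range(epochs):
        E = W * (R - U @ V.T)                # weighted residual
        U += lr * (E @ V - reg * U)
        V += lr * (E.T @ U - reg * V)
    return U @ V.T                           # predicted interaction scores

# Hypothetical usage on a tiny toy matrix:
# R = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 1]], dtype=float)
# scores = factorize(R, rank=2)
```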

  20. Large-Scale Off-Target Identification Using Fast and Accurate Dual Regularized One-Class Collaborative Filtering and Its Application to Drug Repurposing

    PubMed Central

    Poleksic, Aleksandar; Yao, Yuan; Tong, Hanghang; Meng, Patrick; Xie, Lei

    2016-01-01

    Target-based screening is one of the major approaches in drug discovery. Besides the intended target, unexpected drug off-target interactions often occur, and many of them have not been recognized and characterized. The off-target interactions can be responsible for either therapeutic or side effects. Thus, identifying the genome-wide off-targets of lead compounds or existing drugs will be critical for designing effective and safe drugs, and providing new opportunities for drug repurposing. Although many computational methods have been developed to predict drug-target interactions, they are either less accurate than the one that we are proposing here or computationally too intensive, thereby limiting their capability for large-scale off-target identification. In addition, the performances of most machine learning based algorithms have been mainly evaluated to predict off-target interactions in the same gene family for hundreds of chemicals. It is not clear how these algorithms perform in terms of detecting off-targets across gene families on a proteome scale. Here, we present a fast and accurate off-target prediction method, REMAP, which is based on a dual regularized one-class collaborative filtering algorithm, to explore continuous chemical space, protein space, and their interactome on a large scale. When tested in a reliable, extensive, and cross-gene family benchmark, REMAP outperforms the state-of-the-art methods. Furthermore, REMAP is highly scalable. It can screen a dataset of 200 thousand chemicals against 20 thousand proteins within 2 hours. Using the reconstructed genome-wide target profile as the fingerprint of a chemical compound, we predicted that seven FDA-approved drugs can be repurposed as novel anti-cancer therapies. The anti-cancer activity of six of them is supported by experimental evidence. Thus, REMAP is a valuable addition to the existing in silico toolbox for drug target identification, drug repurposing, phenotypic screening, and side effect prediction. The software and benchmark are available at https://github.com/hansaimlim/REMAP. PMID:27716836

  1. Data-based discharge extrapolation: estimating annual discharge for a partially gauged large river basin from its small sub-basins

    NASA Astrophysics Data System (ADS)

    Gong, L.

    2013-12-01

    Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new purely data-based scale-extrapolation method is proposed, to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
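
    In its simplest form, scale extrapolation of this kind transfers the area-specific discharge of a representative set of small sub-basins to the whole basin area. A toy sketch, with hypothetical variable names and data, not the paper's exact procedure for selecting sub-basin sets:

```python
# Toy sketch of data-based scale extrapolation: estimate large-basin annual
# discharge from the specific discharge of selected small sub-basins.
# Variable names and data are hypothetical.
import numpy as np

def extrapolate_discharge(sub_q, sub_area, basin_area):
    """sub_q: annual discharge volumes of sub-basins; sub_area: their areas."""
    specific_discharge = np.sum(sub_q) / np.sum(sub_area)   # e.g. mm per year
    return specific_discharge * basin_area

# Hypothetical usage with several candidate sub-basin sets:
# estimates = [extrapolate_discharge(q, a, basin_area) for q, a in candidate_sets]
# print(np.mean(estimates), np.std(estimates))   # spread brackets uncertainty
```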

  2. Evaluation of Penalized and Nonpenalized Methods for Disease Prediction with Large-Scale Genetic Data.

    PubMed

    Won, Sungho; Choi, Hosik; Park, Suyeon; Lee, Juyoung; Park, Changyi; Kwon, Sunghoon

    2015-01-01

    Owing to recent improvements in genotyping technology, large-scale genetic data can be utilized to identify disease susceptibility loci, and this success has substantially improved our understanding of complex diseases. However, in spite of these successes, most of the genetic effects for many complex diseases were found to be very small, which has been a major hurdle to building disease prediction models. Recently, many statistical methods based on penalized regressions have been proposed to tackle the so-called "large P and small N" problem. Penalized regressions, including the least absolute shrinkage and selection operator (LASSO) and ridge regression, limit the space of parameters, and this constraint enables the estimation of effects for a very large number of SNPs. Various extensions have been suggested, and, in this report, we compare their accuracy by applying them to several complex diseases. Our results show that penalized regressions are usually robust and provide better accuracy than the existing methods, at least for the diseases under consideration.
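
    As a concrete example of the class of methods compared, a penalized logistic regression on a SNP genotype matrix can be fit with off-the-shelf tools. A minimal sketch with hypothetical data; the study's own pipeline and tuning are not described here:

```python
# Minimal sketch of LASSO (L1) and ridge (L2) penalized logistic regression
# for disease prediction from a SNP matrix. Data and settings are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: n_samples x n_snps genotype matrix (0/1/2 minor-allele counts); y: case/control
# X, y = load_genotypes()   # hypothetical loader

def fit_penalized(X, y, penalty="l1", C=0.1):
    solver = "liblinear" if penalty == "l1" else "lbfgs"
    model = LogisticRegression(penalty=penalty, C=C, solver=solver, max_iter=5000)
    return cross_val_score(model, X, y, cv=5, scoring="roc_auc")

# print(fit_penalized(X, y, "l1").mean(), fit_penalized(X, y, "l2").mean())
```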

  3. Landscapes for Energy and Wildlife: Conservation Prioritization for Golden Eagles across Large Spatial Scales

    PubMed Central

    Tack, Jason D.; Fedy, Bradley C.

    2015-01-01

    Proactive conservation planning for species requires the identification of important spatial attributes across ecologically relevant scales in a model-based framework. However, it is often difficult to develop predictive models, as the explanatory data required for model development across regional management scales is rarely available. Golden eagles are a large-ranging predator of conservation concern in the United States that may be negatively affected by wind energy development. Thus, identifying landscapes least likely to pose conflict between eagles and wind development via shared space prior to development will be critical for conserving populations in the face of imposing development. We used publically available data on golden eagle nests to generate predictive models of golden eagle nesting sites in Wyoming, USA, using a suite of environmental and anthropogenic variables. By overlaying predictive models of golden eagle nesting habitat with wind energy resource maps, we highlight areas of potential conflict among eagle nesting habitat and wind development. However, our results suggest that wind potential and the relative probability of golden eagle nesting are not necessarily spatially correlated. Indeed, the majority of our sample frame includes areas with disparate predictions between suitable nesting habitat and potential for developing wind energy resources. Map predictions cannot replace on-the-ground monitoring for potential risk of wind turbines on wildlife populations, though they provide industry and managers a useful framework to first assess potential development. PMID:26262876

  4. Landscapes for energy and wildlife: conservation prioritization for golden eagles across large spatial scales

    USGS Publications Warehouse

    Tack, Jason D.; Fedy, Bradley C.

    2015-01-01

    Proactive conservation planning for species requires the identification of important spatial attributes across ecologically relevant scales in a model-based framework. However, it is often difficult to develop predictive models, as the explanatory data required for model development across regional management scales is rarely available. Golden eagles are a large-ranging predator of conservation concern in the United States that may be negatively affected by wind energy development. Thus, identifying landscapes least likely to pose conflict between eagles and wind development via shared space prior to development will be critical for conserving populations in the face of imposing development. We used publically available data on golden eagle nests to generate predictive models of golden eagle nesting sites in Wyoming, USA, using a suite of environmental and anthropogenic variables. By overlaying predictive models of golden eagle nesting habitat with wind energy resource maps, we highlight areas of potential conflict among eagle nesting habitat and wind development. However, our results suggest that wind potential and the relative probability of golden eagle nesting are not necessarily spatially correlated. Indeed, the majority of our sample frame includes areas with disparate predictions between suitable nesting habitat and potential for developing wind energy resources. Map predictions cannot replace on-the-ground monitoring for potential risk of wind turbines on wildlife populations, though they provide industry and managers a useful framework to first assess potential development.

  5. Landscapes for Energy and Wildlife: Conservation Prioritization for Golden Eagles across Large Spatial Scales.

    PubMed

    Tack, Jason D; Fedy, Bradley C

    2015-01-01

    Proactive conservation planning for species requires the identification of important spatial attributes across ecologically relevant scales in a model-based framework. However, it is often difficult to develop predictive models, as the explanatory data required for model development across regional management scales is rarely available. Golden eagles are a large-ranging predator of conservation concern in the United States that may be negatively affected by wind energy development. Thus, identifying landscapes least likely to pose conflict between eagles and wind development via shared space prior to development will be critical for conserving populations in the face of imposing development. We used publically available data on golden eagle nests to generate predictive models of golden eagle nesting sites in Wyoming, USA, using a suite of environmental and anthropogenic variables. By overlaying predictive models of golden eagle nesting habitat with wind energy resource maps, we highlight areas of potential conflict among eagle nesting habitat and wind development. However, our results suggest that wind potential and the relative probability of golden eagle nesting are not necessarily spatially correlated. Indeed, the majority of our sample frame includes areas with disparate predictions between suitable nesting habitat and potential for developing wind energy resources. Map predictions cannot replace on-the-ground monitoring for potential risk of wind turbines on wildlife populations, though they provide industry and managers a useful framework to first assess potential development.

  6. Scientific goals of the Cooperative Multiscale Experiment (CME)

    NASA Technical Reports Server (NTRS)

    Cotton, William

    1993-01-01

    Mesoscale Convective Systems (MCS) form the focus of CME. Recent developments in global climate models, the urgent need to improve the representation of the physics of convection, radiation, the boundary layer, and orography, and the surge of interest in coupling hydrologic, chemistry, and atmospheric models of various scales, have emphasized the need for a broad interdisciplinary and multi-scale approach to understanding and predicting MCS's and their interactions with processes at other scales. The role of mesoscale systems in the large-scale atmospheric circulation, the representation of organized convection and other mesoscale flux sources in terms of bulk properties, and the mutually consistent treatment of water vapor, clouds, radiation, and precipitation, are all key scientific issues concerning which CME will seek to increase understanding. The manner in which convective, mesoscale, and larger scale processes interact to produce and organize MCS's, the moisture cycling properties of MCS's, and the use of coupled cloud/mesoscale models to better understand these processes, are also major objectives of CME. Particular emphasis will be placed on the multi-scale role of MCS's in the hydrological cycle and in the production and transport of chemical trace constituents. The scientific goals of the CME consist of the following: understand how the large and small scales of motion influence the location, structure, intensity, and life cycles of MCS's; understand processes and conditions that determine the relative roles of balanced (slow manifold) and unbalanced (fast manifold) circulations in the dynamics of MCS's throughout their life cycles; assess the predictability of MCS's and improve the quantitative forecasting of precipitation and severe weather events; quantify the upscale feedback of MCS's to the large-scale environment and determine interrelationships between MCS occurrence and variations in the large-scale flow and surface forcing; provide a data base for initialization and verification of coupled regional, mesoscale/hydrologic, mesoscale/chemistry, and prototype mesoscale/cloud-resolving models for prediction of severe weather, ceilings, and visibility; provide a data base for initialization and validation of cloud-resolving models, and for assisting in the fabrication, calibration, and testing of cloud and MCS parameterization schemes; and provide a data base for validation of four dimensional data assimilation schemes and algorithms for retrieving cloud and state parameters from remote sensing instrumentation.

  7. Downscaling ocean conditions with application to the Gulf of Maine, Scotian Shelf and adjacent deep ocean

    NASA Astrophysics Data System (ADS)

    Katavouta, Anna; Thompson, Keith

    2017-04-01

    A high resolution regional model (1/36 degree) of the Gulf of Maine, Scotian Shelf and adjacent deep ocean (GoMSS) is developed to downscale ocean conditions from an existing global operational system. First, predictions from the regional GoMSS model in a one-way nesting setup are evaluated using observations from multiple sources including satellite-borne sensors of surface temperature and sea level, CTDs, Argo floats and moored current meters. It is shown that on the shelf, the regional model predicts more realistic fields than the global system because it has higher resolution and includes tides that are absent from the global system. However, in deep water the regional model misplaces deep ocean eddies and meanders associated with the Gulf Stream. This is because of unrealistic internally generated variability (associated with the one-way nesting setup) that leads to decoupling of the regional model from the global system in the deep water. To overcome this problem, the large scales (length scales > 90 km) of the regional model are spectrally nudged towards the global system fields. This leads to more realistic predictions off the shelf. Wavenumber spectra show that even though spectral nudging constrains the large scales, it does not suppress the variability on small scales; on the contrary, it favours the formation of eddies with length scales below the cut-off wavelength of the spectral nudging.
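
    Spectral nudging of the kind described adds a relaxation term that acts only on the large-scale (here, > 90 km) part of the regional fields. The expression below is a schematic, illustrative form with a relaxation time tau, not the exact implementation used in GoMSS:

```latex
% Schematic spectral-nudging term: relax only scales above the cutoff
% wavelength toward the driving global solution (illustrative form).
\frac{\partial \psi}{\partial t}
  = \mathcal{M}(\psi)
  - \frac{1}{\tau}\,\mathcal{L}_{>90\,\mathrm{km}}\!\left(\psi - \psi_{\mathrm{global}}\right),
```

    where M denotes the regional model tendencies and L_{>90 km} a low-pass filter that retains only scales longer than the cutoff wavelength.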

  8. Application of renormalization group theory to the large-eddy simulation of transitional boundary layers

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.

    1990-01-01

    An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.

  9. Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation

    DTIC Science & Technology

    2016-08-02

    Briefing title and outline only: Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation (Tamer M. Wasfy, Paramsothy Jayakumar, Dave...). Outline topics: NRMM; objectives; soft soils; review of physics-based soil models; MBD/DEM modeling formulation (joint and contact constraints, DEM cohesive soil model); cone penetrometer experiment; vehicle-soil model; vehicle mobility DOE procedure; simulation results; concluding remarks.

  10. Non-invasive Prediction of Pork Loin Tenderness

    USDA-ARS?s Scientific Manuscript database

    The present experiment was conducted to develop a non-invasive method to predict tenderness of pork loins. Boneless pork loins (n = 901) were evaluated either on line on the loin boning and trimming line of large-scale commercial plants (n = 465) or at the U.S. Meat Animal Research Center abattoir ...

  11. Evaluation of the synoptic and mesoscale predictive capabilities of a mesoscale atmospheric simulation system

    NASA Technical Reports Server (NTRS)

    Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K.; Keyser, D. A.; Mccumber, M. C.

    1983-01-01

    The overall performance characteristics of a limited area, hydrostatic, fine (52 km) mesh, primitive equation, numerical weather prediction model are determined in anticipation of satellite data assimilations with the model. The synoptic and mesoscale predictive capabilities of version 2.0 of this model, the Mesoscale Atmospheric Simulation System (MASS 2.0), were evaluated. The two part study is based on a sample of approximately thirty 12h and 24h forecasts of atmospheric flow patterns during spring and early summer. The synoptic scale evaluation results benchmark the performance of MASS 2.0 against that of an operational, synoptic scale weather prediction model, the Limited area Fine Mesh (LFM). The large sample allows for the calculation of statistically significant measures of forecast accuracy and the determination of systematic model errors. The synoptic scale benchmark is required before unsmoothed mesoscale forecast fields can be seriously considered.

  12. Macroweather Predictions and Climate Projections using Scaling and Historical Observations

    NASA Astrophysics Data System (ADS)

    Hébert, R.; Lovejoy, S.; Del Rio Amador, L.

    2017-12-01

    There are two fundamental time scales that are pertinent to decadal forecasts and multidecadal projections. The first is the lifetime of planetary-scale structures, about 10 days (equal to the deterministic predictability limit), and the second is, in the Anthropocene, the scale at which the forced anthropogenic variability exceeds the internal variability (around 16-18 years). These two time scales define three regimes of variability: weather, macroweather and climate, which are respectively characterized by increasing, decreasing and then increasing variability with scale. We discuss how macroweather temperature variability can be skilfully predicted to its theoretical stochastic predictability limits by exploiting its long-range memory with the Stochastic Seasonal and Interannual Prediction System (StocSIPS). At multi-decadal timescales, the temperature response to forcing is approximately linear and this can be exploited to make projections with a Green's function, or Climate Response Function (CRF). To make the problem tractable, we exploit the temporal scaling symmetry and restrict our attention to global mean forcing and temperature response using a scaling CRF characterized by the scaling exponent H and an inner scale of linearity τ. An aerosol linear scaling factor α and a non-linear volcanic damping exponent ν were introduced to account for the large uncertainty in these forcings. We estimate the model and forcing parameters by Bayesian inference using historical data and these allow us to analytically calculate a median (and likely 66% range) for the transient climate response and for the equilibrium climate sensitivity: 1.6K ([1.5,1.8]K) and 2.4K ([1.9,3.4]K), respectively. Aerosol forcing typically has large uncertainty and we find a modern (2005) forcing very likely range (90%) of [-1.0, -0.3] Wm-2 with median at -0.7 Wm-2. Projecting to 2100, we find that to keep the warming below 1.5 K, future emissions must undergo cuts similar to Representative Concentration Pathway (RCP) 2.6, for which the probability to remain under 1.5 K is 48%. RCP 4.5 and RCP 8.5-like futures overshoot with very high probability. This underscores that over the next century, the state of the environment will be strongly influenced by past, present and future economic policies.
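
    The projection step described above convolves the forcing with a scaling climate response function. The expression below is an illustrative form consistent with the parameters named in the abstract (exponent H, inner scale of linearity tau), quoted as a sketch rather than the paper's exact formula:

```latex
% Illustrative scaling Climate Response Function: global temperature response
% as a convolution of the forcing F with a power-law kernel G.
T(t) = \int_{0}^{\infty} G(s)\, F(t - s)\, ds,
\qquad
G(s) \propto \left(1 + \frac{s}{\tau}\right)^{H - 1}.
```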

  13. Patterns of soil community structure differ by scale and ecosystem type along a large-scale precipitation gradient

    USDA-ARS?s Scientific Manuscript database

    Climate models predict increased variability in precipitation regimes, which will likely increase frequency/duration of drought. Reductions in soil moisture affect physical and chemical characteristics of the soil habitat and can influence soil organisms such as mites and nematodes. These organisms ...

  14. A new framework to increase the efficiency of large-scale solar power plants.

    NASA Astrophysics Data System (ADS)

    Alimohammadi, Shahrouz; Kleissl, Jan P.

    2015-11-01

    A new framework to estimate the spatio-temporal behavior of solar power is introduced, which predicts the statistical behavior of power output at utility-scale photovoltaic (PV) power plants. The framework is based on spatio-temporal Gaussian Process Regression (Kriging) models, which incorporate satellite data with the UCSD version of the Weather Research and Forecasting model. This framework is designed to improve the efficiency of large-scale solar power plants. The results are also validated against measurements from local pyranometer sensors, and improvements are observed in different scenarios.
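
    A minimal sketch of the regression component (ordinary Gaussian process regression, i.e. Kriging) using generic tools; the kernel, inputs, and any coupling to the forecast model are simplified, hypothetical stand-ins for the actual framework:

```python
# Minimal Gaussian process regression (Kriging) sketch for predicting
# irradiance/power from scattered space-time observations. Kernel choice,
# inputs and data are hypothetical stand-ins for the paper's framework.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# X_train: (n, 3) array of (x, y, t) coordinates; y_train: observed irradiance
# X_train, y_train, X_query = load_site_data()   # hypothetical loader

def fit_kriging(X_train, y_train):
    kernel = 1.0 * RBF(length_scale=[5.0, 5.0, 0.5]) + WhiteKernel(noise_level=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    return gp.fit(X_train, y_train)

# gp = fit_kriging(X_train, y_train)
# mean, std = gp.predict(X_query, return_std=True)   # prediction + uncertainty
```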

  15. Cosmic microwave background probes models of inflation

    NASA Technical Reports Server (NTRS)

    Davis, Richard L.; Hodges, Hardy M.; Smoot, George F.; Steinhardt, Paul J.; Turner, Michael S.

    1992-01-01

    Inflation creates both scalar (density) and tensor (gravity wave) metric perturbations. We find that the tensor-mode contribution to the cosmic microwave background anisotropy on large-angular scales can only exceed that of the scalar mode in models where the spectrum of perturbations deviates significantly from scale invariance. If the tensor mode dominates at large-angular scales, then the value of DeltaT/T predicted on 1 deg is less than if the scalar mode dominates, and, for cold-dark-matter models, bias factors greater than 1 can be made consistent with Cosmic Background Explorer (COBE) DMR results.

  16. The Stochastic predictability limits of GCM internal variability and the Stochastic Seasonal to Interannual Prediction System (StocSIPS)

    NASA Astrophysics Data System (ADS)

    Del Rio Amador, Lenin; Lovejoy, Shaun

    2017-04-01

    Over the past ten years, a key advance in our understanding of atmospheric variability is the discovery that between the weather and climate regimes lies an intermediate "macroweather" regime, spanning the range of scales from ≈10 days to ≈30 years. Macroweather statistics are characterized by two fundamental symmetries: scaling and the factorization of the joint space-time statistics. In the time domain, the scaling has low intermittency with the additional property that successive fluctuations tend to cancel. In space, on the contrary, the scaling has high (multifractal) intermittency corresponding to the existence of different climate zones. These properties have fundamental implications for macroweather forecasting: a) the temporal scaling implies that the system has a long-range memory that can be exploited for forecasting; b) the low temporal intermittency implies that mathematically well-established (Gaussian) forecasting techniques can be used; and c) the statistical factorization property implies that although spatial correlations (including teleconnections) may be large, if long enough time series are available, they are not necessarily useful in improving forecasts. Theoretically, these conditions imply the existence of stochastic predictability limits. In this talk, we show that these limits apply to GCMs. Based on these statistical implications, we developed the Stochastic Seasonal and Interannual Prediction System (StocSIPS) for the prediction of temperature from regional to global scales and from one-month to multi-year horizons. One of the main components of StocSIPS is the separation and prediction of both the internal and externally forced variabilities. In order to test the theoretical assumptions and consequences for predictability and predictions, we use 41 different CMIP5 model outputs from preindustrial control runs that have fixed external forcings, so that their variability is purely internally generated. We first show that these statistical assumptions hold with relatively good accuracy, and then we perform hindcasts at global and regional scales from monthly to annual time resolutions using StocSIPS. We obtained excellent agreement between the hindcast Mean Square Skill Score (MSSS) and the theoretical stochastic limits. We also show the application of StocSIPS to the prediction of average global temperature and compare our results with those obtained using multi-model ensemble approaches. StocSIPS has numerous advantages including a) higher MSSS for large time horizons, b) convergence to the real (not model) climate, c) much higher computational speed, d) no need for data assimilation, e) no ad hoc post-processing, and f) no need for downscaling.

  17. Prediction of Broadband Shock-Associated Noise Including Propagation Effects

    NASA Technical Reports Server (NTRS)

    Miller, Steven; Morris, Philip J.

    2012-01-01

    An acoustic analogy is developed based on the Euler equations for broadband shock-associated noise (BBSAN) that directly incorporates the vector Green's function of the linearized Euler equations and a steady Reynolds-Averaged Navier-Stokes solution (SRANS) to describe the mean flow. The vector Green's function allows the BBSAN propagation through the jet shear layer to be determined. The large-scale coherent turbulence is modeled by two-point second order velocity cross-correlations. Turbulent length and time scales are related to the turbulent kinetic energy and dissipation rate. An adjoint vector Green's function solver is implemented to determine the vector Green's function based on a locally parallel mean flow at different streamwise locations. The newly developed acoustic analogy can be simplified to one that uses the Green's function associated with the Helmholtz equation, which is consistent with a previous formulation by the authors. A large number of predictions are generated using three different nozzles over a wide range of fully-expanded jet Mach numbers and jet stagnation temperatures. These predictions are compared with experimental data from multiple jet noise experimental facilities. In addition, two models for the so-called fine-scale mixing noise are included in the comparisons. Improved BBSAN predictions are obtained relative to other models that do not include propagation effects.

  18. Clinical Scales Do Not Reliably Identify Acute Ischemic Stroke Patients With Large-Artery Occlusion.

    PubMed

    Turc, Guillaume; Maïer, Benjamin; Naggara, Olivier; Seners, Pierre; Isabel, Clothilde; Tisserand, Marie; Raynouard, Igor; Edjlali, Myriam; Calvet, David; Baron, Jean-Claude; Mas, Jean-Louis; Oppenheim, Catherine

    2016-06-01

    It remains debated whether clinical scores can help identify acute ischemic stroke patients with large-artery occlusion and hence improve triage in the era of thrombectomy. We aimed to determine the accuracy of published clinical scores to predict large-artery occlusion. We assessed the performance of 13 clinical scores to predict large-artery occlusion in consecutive patients with acute ischemic stroke undergoing clinical examination and magnetic resonance or computed tomographic angiography ≤6 hours of symptom onset. When no cutoff was published, we used the cutoff maximizing the sum of sensitivity and specificity in our cohort. We also determined, for each score, the cutoff associated with a false-negative rate ≤10%. Of 1004 patients (median National Institutes of Health Stroke Scale score, 7; range, 0-40), 328 (32.7%) had an occlusion of the internal carotid artery, M1 segment of the middle cerebral artery, or basilar artery. The highest accuracy (79%; 95% confidence interval, 77-82) was observed for National Institutes of Health Stroke Scale score ≥11 and Rapid Arterial Occlusion Evaluation Scale score ≥5. However, these cutoffs were associated with false-negative rates >25%. Cutoffs associated with a false-negative rate ≤10% were 5, 1, and 0 for the National Institutes of Health Stroke Scale, Rapid Arterial Occlusion Evaluation Scale, and Cincinnati Prehospital Stroke Severity Scale, respectively. Using published cutoffs for triage would result in a loss of opportunity for ≥20% of patients with large-artery occlusion who would be inappropriately sent to a center lacking neurointerventional facilities. Conversely, using cutoffs reducing the false-negative rate to 10% would result in sending almost every patient to a comprehensive stroke center. Our findings, therefore, suggest that intracranial arterial imaging should be performed in all patients with acute ischemic stroke presenting within 6 hours of symptom onset. © 2016 American Heart Association, Inc.
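
    The two cutoff rules used in this analysis (maximize sensitivity plus specificity when no cutoff was published, and the largest cutoff keeping the false-negative rate at or below 10%) are straightforward to express. A sketch with hypothetical data arrays:

```python
# Sketch of the two cutoff-selection rules described above, applied to a
# clinical score against angiography-confirmed occlusion. Data hypothetical.
import numpy as np

def best_youden_cutoff(scores, truth):
    """Cutoff maximizing sensitivity + specificity."""
    best, best_sum = None, -np.inf
    for c in np.unique(scores):
        pred = scores >= c
        sens = np.sum(pred & truth) / np.sum(truth)
        spec = np.sum(~pred & ~truth) / np.sum(~truth)
        if sens + spec > best_sum:
            best, best_sum = c, sens + spec
    return best

def max_cutoff_with_fnr(scores, truth, fnr_max=0.10):
    """Largest cutoff whose false-negative rate stays at or below fnr_max."""
    ok = [c for c in np.unique(scores)
          if np.sum((scores < c) & truth) / np.sum(truth) <= fnr_max]
    return max(ok) if ok else None

# Hypothetical usage:
# scores = np.array([...]); lao = np.array([...], dtype=bool)
# print(best_youden_cutoff(scores, lao), max_cutoff_with_fnr(scores, lao))
```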

  19. Self-Consistent Field Theories for the Role of Large Length-Scale Architecture in Polymers

    NASA Astrophysics Data System (ADS)

    Wu, David

    At large length-scales, the architecture of polymers can be described by a coarse-grained specification of the distribution of branch points and monomer types within a molecule. This includes molecular topology (e.g., cyclic or branched) as well as distances between branch points or chain ends. Design of large length-scale molecular architecture is appealing because it offers a universal strategy, independent of monomer chemistry, to tune properties. Non-linear analogs of linear chains differ in molecular-scale properties, such as mobility, entanglements, and surface segregation in blends that are well-known to impact rheological, dynamical, thermodynamic and surface properties including adhesion and wetting. We have used Self-Consistent Field (SCF) theories to describe a number of phenomena associated with large length-scale polymer architecture. We have predicted the surface composition profiles of non-linear chains in blends with linear chains. These predictions are in good agreement with experimental results, including from neutron scattering, on a range of well-controlled branched (star, pom-pom and end-branched) and cyclic polymer architectures. Moreover, the theory allows explanation of the segregation and conformations of branched polymers in terms of effective surface potentials acting on the end and branch groups. However, for cyclic chains, which have no end or junction points, a qualitatively different topological mechanism based on conformational entropy drives cyclic chains to a surface, consistent with recent neutron reflectivity experiments. We have also used SCF theory to calculate intramolecular and intermolecular correlations for polymer chains in the bulk, dilute solution, and trapped at a liquid-liquid interface. Predictions of chain swelling in dilute star polymer solutions compare favorably with existing PRISM theory and swelling at an interface helps explain recent measurements of chain mobility at an oil-water interface. In collaboration with: Renfeng Hu, Colorado School of Mines, and Mark Foster, University of Akron. This work was supported by NSF Grants No. CBET- 0730692 and No. CBET-0731319.

  20. Resolution, Scales and Predictability: Is High Resolution Detrimental To Predictability At Extended Forecast Times?

    NASA Astrophysics Data System (ADS)

    Mesinger, F.

    The traditional views hold that high-resolution limited area models (LAMs) down- scale large-scale lateral boundary information, and that predictability of small scales is short. Inspection of various rms fits/errors has contributed to these views. It would follow that the skill of LAMs should visibly deteriorate compared to that of their driver models at more extended forecast times. The limited area Eta Model at NCEP has an additional handicap of being driven by LBCs of the previous Avn global model run, at 0000 and 1200 UTC estimated to amount to about an 8 h loss in accuracy. This should make its relative skill compared to that of the Avn deteriorate even faster. These views are challenged by various Eta results including rms fits to raobs out to 84 h. It is argued that it is the largest scales that contribute the most to the skill of the Eta relative to that of the Avn.

  1. Exploring cosmic homogeneity with the BOSS DR12 galaxy sample

    NASA Astrophysics Data System (ADS)

    Ntelis, Pierros; Hamilton, Jean-Christophe; Le Goff, Jean-Marc; Burtin, Etienne; Laurent, Pierre; Rich, James; Guillermo Busca, Nicolas; Tinker, Jeremy; Aubourg, Eric; du Mas des Bourboux, Hélion; Bautista, Julian; Palanque Delabrouille, Nathalie; Delubac, Timothée; Eftekharzadeh, Sarah; Hogg, David W.; Myers, Adam; Vargas-Magaña, Mariana; Pâris, Isabelle; Petitjean, Patrick; Rossi, Graziano; Schneider, Donald P.; Tojeiro, Rita; Yeche, Christophe

    2017-06-01

    In this study, we probe the transition to cosmic homogeneity in the Large Scale Structure (LSS) of the Universe using the CMASS galaxy sample of the BOSS spectroscopic survey, which covers the largest effective volume to date, 3 h-3 Gpc3 at 0.43 <= z <= 0.7. We study the scaled counts-in-spheres, N(<r), and the fractal correlation dimension, D2(r), to assess the homogeneity scale of the universe. Defining the homogeneity scale RH by the requirement D2(r) > 2.97 for r > RH, we find RH = (63.3±0.7) h-1 Mpc, in agreement at the percentage level with the prediction of the ΛCDM model, RH = 62.0 h-1 Mpc. Thanks to the large cosmic depth of the survey, we investigate the redshift evolution of the transition-to-homogeneity scale and find agreement with the ΛCDM prediction. Finally, we find that D2 is compatible with 3 at scales larger than 300 h-1 Mpc in all redshift bins. These results consolidate the Cosmological Principle and represent a precise consistency test of the ΛCDM model.
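
    The two statistics named above are related in a standard way: the fractal correlation dimension is the logarithmic slope of the counts-in-spheres, and homogeneity corresponds to D2 approaching 3. The definitions below are the standard ones, quoted for clarity rather than taken verbatim from the paper:

```latex
% Standard definitions used in counts-in-spheres homogeneity analyses.
N(<r) \propto r^{D_2(r)},
\qquad
D_2(r) \equiv \frac{d \ln N(<r)}{d \ln r},
\qquad
\text{homogeneity} \;\Longleftrightarrow\; D_2(r) \to 3,
```

    with the scaled counts-in-spheres obtained by dividing N(<r) by the expectation for a homogeneous distribution, so that it tends to 1 on homogeneous scales.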

  2. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  3. A gravitational puzzle.

    PubMed

    Caldwell, Robert R

    2011-12-28

    The challenge to understand the physical origin of the cosmic acceleration is framed as a problem of gravitation: specifically, does the relationship between stress-energy and space-time curvature differ on large scales from the predictions of general relativity? In this article, we describe efforts to model and test a generalized relationship between the matter and the metric using cosmological observations. Late-time tracers of large-scale structure, including the cosmic microwave background, weak gravitational lensing, and clustering, are shown to provide good tests of the proposed solution. Current data are very close to providing a critical test, leaving only a small window in parameter space in the case that the generalized relationship is scale free above galactic scales.

  4. Cold dark matter and degree-scale cosmic microwave background anisotropy statistics after COBE

    NASA Technical Reports Server (NTRS)

    Gorski, Krzysztof M.; Stompor, Radoslaw; Juszkiewicz, Roman

    1993-01-01

    We conduct a Monte Carlo simulation of the cosmic microwave background (CMB) anisotropy in the UCSB South Pole 1991 degree-scale experiment. We examine cold dark matter cosmology with large-scale structure seeded by the Harrison-Zel'dovich hierarchy of Gaussian-distributed primordial inhomogeneities normalized to the COBE-DMR measurement of large-angle CMB anisotropy. We find it statistically implausible (in the sense of a low cumulative probability, F < 5 percent, of not measuring a cosmological delta-T/T signal) that the degree-scale cosmological CMB anisotropy predicted in such models could have escaped detection at the level of sensitivity achieved in the South Pole 1991 experiment.

  5. Will COBE challenge the inflationary paradigm - Cosmic microwave background anisotropies versus large-scale streaming motions revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorski, K.M.

    1991-03-01

    The relation between cosmic microwave background (CMB) anisotropies and large-scale galaxy streaming motions is examined within the framework of inflationary cosmology. The minimal Sachs and Wolfe (1967) CMB anisotropies at large angular scales in models with an initial Harrison-Zel'dovich spectrum of inhomogeneity normalized to the local large-scale bulk flow, which are independent of the Hubble constant and the specific nature of dark matter, are found to be within the anticipated ultimate sensitivity limits of COBE's Differential Microwave Radiometer experiment. For example, the most likely value of the quadrupole coefficient is predicted to be a2 not less than 7 x 10 to the -6th, where equality applies to the limiting minimal model. If (1) COBE's DMR instruments perform well throughout the two-year period; (2) the anisotropy data are not marred by systematic errors; (3) the large-scale motions retain their present observational status; (4) there is no statistical conspiracy in the sense of the measured bulk flow being of untypically high and the large-scale anisotropy of untypically low amplitude; and (5) the low-order multipoles in the all-sky primordial fireball temperature map are not detected, the inflationary paradigm will have to be questioned. 19 refs.

  6. Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations

    NASA Astrophysics Data System (ADS)

    Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik

    2017-02-01

    The utility and accuracy of computational modeling often require direct validation against experimental measurements. The work presented here is motivated by a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. The measurements are performed over complex, physiologically realistic 3D geometries, and large data sets are acquired with microscopic resolution over macroscopic distances.

  7. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.

  8. Large-scale solar magnetic fields and H-alpha patterns

    NASA Technical Reports Server (NTRS)

    Mcintosh, P. S.

    1972-01-01

    Coronal and interplanetary magnetic fields computed from measurements of large-scale photospheric magnetic fields suffer from interruptions in day-to-day observations and the limitation of using only measurements made near the solar central meridian. Procedures were devised for inferring the lines of polarity reversal from H-alpha solar patrol photographs that map the same large-scale features found on Mt. Wilson magnetograms. These features may be monitored without interruption by combining observations from the global network of observatories associated with NOAA's Space Environment Services Center. The patterns of inferred magnetic fields may be followed accurately as far as 60 deg from central meridian. Such patterns will be used to improve predictions of coronal features during the next solar eclipse.

  9. Void probability as a function of the void's shape and scale-invariant models. [in studies of spatial galactic distribution]

    NASA Technical Reports Server (NTRS)

    Elizalde, E.; Gaztanaga, E.

    1992-01-01

    The dependence of counts in cells on the shape of the cell for the large-scale galaxy distribution is studied. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for certain elongated cells. A phenomenological scale-invariant model for the observed distribution of counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.

  10. Inflation in the standard cosmological model

    NASA Astrophysics Data System (ADS)

    Uzan, Jean-Philippe

    2015-12-01

    The inflationary paradigm is now part of the standard cosmological model as a description of its primordial phase. While its original motivation was to solve the standard problems of the hot big bang model, it was soon understood that it offers a natural theory for the origin of the large-scale structure of the universe. Most models rely on a slow-rolling scalar field and enjoy very generic predictions. Moreover, all the matter of the universe is produced by the decay of the inflaton field at the end of inflation during a phase of reheating. These predictions can be (and are) tested through their imprint on the large-scale structure and in particular the cosmic microwave background. Inflation stands as a window in physics where both general relativity and quantum field theory are at work and which can be observationally studied. It connects cosmology with high-energy physics. Today most models are constructed within extensions of the standard model, such as supersymmetry or string theory. Inflation also disrupts our vision of the universe, in particular with the ideas of chaotic inflation and eternal inflation that tend to promote the image of a very inhomogeneous universe with fractal structure on a large scale. This idea is also at the heart of further speculations, such as the multiverse. This introduction summarizes the connections between inflation and the hot big bang model and details the basics of its dynamics and predictions.

  11. Test of Gravity on Large Scales with Weak Gravitational Lensing and Clustering Measurements of SDSS Luminous Red Galaxies

    NASA Astrophysics Data System (ADS)

    Reyes, Reinabelle; Mandelbaum, R.; Seljak, U.; Gunn, J.; Lombriser, L.

    2009-01-01

    We perform a test of gravity on large scales (5-50 Mpc/h) using 70,000 luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS) DR7 with redshifts 0.16

  12. Energy spectrum of tearing mode turbulence in sheared background field

    NASA Astrophysics Data System (ADS)

    Hu, Di; Bhattacharjee, Amitava; Huang, Yi-Min

    2018-06-01

    The energy spectrum of tearing mode turbulence in a sheared background magnetic field is studied in this work. We consider the scenario where the nonlinear interaction of overlapping large-scale modes excites a broad spectrum of small-scale modes, generating tearing mode turbulence. The spectrum of such turbulence is of interest since it is relevant to the small-scale back-reaction on the large-scale field. The turbulence we discuss here differs from traditional MHD turbulence mainly in two aspects. One is the existence of many linearly stable small-scale modes which cause an effective damping during the energy cascade. The other is the scale-independent anisotropy induced by the large-scale modes tilting the sheared background field, as opposed to the scale-dependent anisotropy frequently encountered in traditional critically balanced turbulence theories. Due to these two differences, the energy spectrum deviates from a simple power law and takes the form of a power law multiplied by an exponential falloff. Numerical simulations are carried out using visco-resistive MHD equations to verify our theoretical predictions, and a reasonable agreement is found between the numerical results and our model.
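
    The predicted spectral form, a power law multiplied by an exponential falloff, can be fit to a measured spectrum in a few lines; the functional form E(k) = C k^(-alpha) exp(-k/k_c) and the synthetic data below are illustrative assumptions, not the paper's actual parameters or data.

        import numpy as np
        from scipy.optimize import curve_fit

        def spectrum_model(k, c, alpha, k_c):
            """Power law times exponential falloff, the form suggested for tearing mode turbulence."""
            return c * k ** (-alpha) * np.exp(-k / k_c)

        # Synthetic spectrum standing in for simulation output.
        k = np.logspace(0, 2, 40)
        rng = np.random.default_rng(1)
        e_k = spectrum_model(k, 1.0, 2.5, 30.0) * rng.lognormal(sigma=0.05, size=k.size)

        popt, _ = curve_fit(spectrum_model, k, e_k, p0=(1.0, 2.0, 20.0))
        print("fitted (C, alpha, k_c):", popt)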

  13. Large-Scale Brain Network Coupling Predicts Total Sleep Deprivation Effects on Cognitive Capacity

    PubMed Central

    Wang, Lubin; Zhai, Tianye; Zou, Feng; Ye, Enmao; Jin, Xiao; Li, Wuju; Qi, Jianlin; Yang, Zheng

    2015-01-01

    Interactions between large-scale brain networks have received much attention in the study of cognitive dysfunction of the human brain. In this paper, we aimed to test the hypothesis that the coupling strength of large-scale brain networks reflects the pressure for sleep and predicts cognitive performance; we refer to this measure as the sleep pressure index (SPI). Fourteen healthy subjects underwent this within-subject functional magnetic resonance imaging (fMRI) study during rested wakefulness (RW) and after 36 h of total sleep deprivation (TSD). Self-reported scores of sleepiness were higher for TSD than for RW. A subsequent working memory (WM) task showed that WM performance was lower after 36 h of TSD. Moreover, the SPI was developed based on the coupling strength of the salience network (SN) and the default mode network (DMN). A significant increase of the SPI was observed after 36 h of TSD, suggesting stronger pressure for sleep. In addition, the SPI was significantly correlated with both the visual analogue scale score of sleepiness and WM performance. These results show that alterations in SN-DMN coupling might be critical to the cognitive alterations that underlie lapses after TSD. Further studies may validate the SPI as a potential clinical biomarker to assess the impact of sleep deprivation. PMID:26218521
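
    As a hedged illustration of the kind of index described (not the authors' exact definition), the sketch below computes an SN-DMN coupling strength as the Pearson correlation of two mean network time series; the array shapes, masks, and sign convention are assumptions.

        import numpy as np

        def network_timeseries(voxel_ts, mask):
            """Average the time series of all voxels belonging to a network mask."""
            return voxel_ts[mask].mean(axis=0)

        def coupling_strength(ts_a, ts_b):
            """Pearson correlation between two network time series."""
            return np.corrcoef(ts_a, ts_b)[0, 1]

        # Toy data: 1000 voxels x 200 time points, with boolean network masks.
        rng = np.random.default_rng(0)
        voxel_ts = rng.standard_normal((1000, 200))
        sn_mask = np.zeros(1000, dtype=bool); sn_mask[:100] = True
        dmn_mask = np.zeros(1000, dtype=bool); dmn_mask[100:250] = True

        spi = coupling_strength(network_timeseries(voxel_ts, sn_mask),
                                network_timeseries(voxel_ts, dmn_mask))
        print("SN-DMN coupling (illustrative SPI):", spi)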

  14. Inner-outer predictive wall model for wall-bounded turbulence in hypersonic flow

    NASA Astrophysics Data System (ADS)

    Martin, M. Pino; Helm, Clara M.

    2017-11-01

    The inner-outer predictive wall model of Mathis et al. is modified for hypersonic turbulent boundary layers. The model is based on a modulation of the energized motions in the inner layer by large-scale momentum fluctuations in the logarithmic layer. Using direct numerical simulation (DNS) data of turbulent boundary layers with free-stream Mach numbers 3 to 10, it is shown that the variation of the fluid properties in the compressible flows leads to large Reynolds number (Re) effects in the outer layer and facilitates the modulation observed in high-Re incompressible flows. The modulation effect by the large scales increases with increasing free-stream Mach number. The model is extended to include spanwise and wall-normal velocity fluctuations and is generalized through Morkovin scaling. Temperature fluctuations are modeled using an appropriate Reynolds Analogy. Density fluctuations are calculated using an equation of state and a scaling with Mach number. DNS data are used to obtain the universal signal and parameters. The model is tested by using the universal signal to reproduce the flow conditions of Mach 3 and Mach 7 turbulent boundary layer DNS data and comparing turbulence statistics between the modeled flow and the DNS data. This work is supported by the Air Force Office of Scientific Research under Grant FA9550-17-1-0104.
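
    A minimal sketch of an inner-outer interaction model in the spirit of Mathis et al. is given below: the predicted inner-layer fluctuation is the universal signal, amplitude-modulated by and superposed with the large-scale outer signal. The coefficient values and the signals are placeholders; the DNS calibration and the compressibility extensions described in the abstract are not shown.

        import numpy as np

        def predict_inner_signal(u_star, u_outer_large, alpha, beta):
            """Inner-outer model: superposition plus amplitude modulation of the
            universal inner signal u_star by the large-scale outer signal."""
            return u_star * (1.0 + beta * u_outer_large) + alpha * u_outer_large

        # Placeholder signals at one wall-normal station (time series form).
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 2000)
        u_star = rng.standard_normal(t.size)       # universal small-scale signal
        u_ol = np.sin(2.0 * np.pi * 0.2 * t)       # large-scale (log-layer) footprint
        alpha, beta = 0.6, 0.3                     # assumed calibration constants

        u_predicted = predict_inner_signal(u_star, u_ol, alpha, beta)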

  15. Downscaling ocean conditions: Experiments with a quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Katavouta, A.; Thompson, K. R.

    2013-12-01

    The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scales in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid and spectral nudging to practical ocean forecasting, and projecting changes in ocean conditions on climate time scales, is discussed briefly.

  16. Extended-range high-resolution dynamical downscaling over a continental-scale spatial domain with atmospheric and surface nudging

    NASA Astrophysics Data System (ADS)

    Husain, S. Z.; Separovic, L.; Yu, W.; Fernig, D.

    2014-12-01

    Extended-range high-resolution mesoscale simulations with limited-area atmospheric models when applied to downscale regional analysis fields over large spatial domains can provide valuable information for many applications including the weather-dependent renewable energy industry. Long-term simulations over a continental-scale spatial domain, however, require mechanisms to control the large-scale deviations in the high-resolution simulated fields from the coarse-resolution driving fields. As enforcement of the lateral boundary conditions is insufficient to restrict such deviations, large scales in the simulated high-resolution meteorological fields are therefore spectrally nudged toward the driving fields. Different spectral nudging approaches, including the appropriate nudging length scales as well as the vertical profiles and temporal relaxations for nudging, have been investigated to propose an optimal nudging strategy. Impacts of time-varying nudging and generation of hourly analysis estimates are explored to circumvent problems arising from the coarse temporal resolution of the regional analysis fields. Although controlling the evolution of the atmospheric large scales generally improves the outputs of high-resolution mesoscale simulations within the surface layer, the prognostically evolving surface fields can nevertheless deviate from their expected values leading to significant inaccuracies in the predicted surface layer meteorology. A forcing strategy based on grid nudging of the different surface fields, including surface temperature, soil moisture, and snow conditions, toward their expected values obtained from a high-resolution offline surface scheme is therefore proposed to limit any considerable deviation. Finally, wind speed and temperature at wind turbine hub height predicted by different spectrally nudged extended-range simulations are compared against observations to demonstrate possible improvements achievable using higher spatiotemporal resolution.
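
    A compact sketch of the spectral-nudging idea discussed above, relaxing only the large-scale (low-wavenumber) part of a simulated field toward the driving field, is shown below for a doubly periodic 2-D field; the cutoff wavenumber and relaxation time are illustrative, and an operational LAM would apply this on limited-area transforms with the vertical and temporal weightings described in the abstract.

        import numpy as np

        def spectral_nudge(field, driver, k_cut, dt, tau):
            """Relax wavenumbers below k_cut toward the driver field (doubly periodic sketch)."""
            f_hat, d_hat = np.fft.fft2(field), np.fft.fft2(driver)
            kx = np.fft.fftfreq(field.shape[0])[:, None]
            ky = np.fft.fftfreq(field.shape[1])[None, :]
            large_scale = np.sqrt(kx ** 2 + ky ** 2) <= k_cut
            f_hat[large_scale] += (dt / tau) * (d_hat[large_scale] - f_hat[large_scale])
            return np.real(np.fft.ifft2(f_hat))

        rng = np.random.default_rng(0)
        high_res = rng.standard_normal((128, 128))   # simulated high-resolution field
        driving = rng.standard_normal((128, 128))    # coarse driving analysis on the same grid
        nudged = spectral_nudge(high_res, driving, k_cut=0.05, dt=600.0, tau=6 * 3600.0)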

  17. A small-scale, rolled-membrane microfluidic artificial lung designed towards future large area manufacturing.

    PubMed

    Thompson, A J; Marks, L H; Goudie, M J; Rojas-Pena, A; Handa, H; Potkay, J A

    2017-03-01

    Artificial lungs have been used in the clinic for multiple decades to supplement patient pulmonary function. Recently, small-scale microfluidic artificial lungs (μAL) have been demonstrated with large surface area to blood volume ratios, biomimetic blood flow paths, and pressure drops compatible with pumpless operation. Initial small-scale microfluidic devices with blood flow rates in the μl/min to ml/min range have exhibited excellent gas transfer efficiencies; however, current manufacturing techniques may not be suitable for scaling up to human applications. Here, we present a new manufacturing technology for a microfluidic artificial lung in which the structure is assembled via a continuous "rolling" and bonding procedure from a single, patterned layer of polydimethyl siloxane (PDMS). This method is demonstrated in a small-scale four-layer device, but is expected to easily scale to larger area devices. The presented devices have a biomimetic branching blood flow network, 10 μm tall artificial capillaries, and a 66 μm thick gas transfer membrane. Gas transfer efficiency in blood was evaluated over a range of blood flow rates (0.1-1.25 ml/min) for two different sweep gases (pure O2, atmospheric air). The achieved gas transfer data closely follow predicted theoretical values for oxygenation and CO2 removal, while pressure drop is marginally higher than predicted. This work is the first step in developing a scalable method for creating large area microfluidic artificial lungs. Although designed for microfluidic artificial lungs, the presented technique is expected to result in the first manufacturing method capable of simply and easily creating large area microfluidic devices from PDMS.

  18. Spatial scale and distribution of neurovascular signals underlying decoding of orientation and eye of origin from fMRI data

    PubMed Central

    Harrison, Charlotte; Jackson, Jade; Oh, Seung-Mock; Zeringyte, Vaida

    2016-01-01

    Multivariate pattern analysis of functional magnetic resonance imaging (fMRI) data is widely used, yet the spatial scales and origin of neurovascular signals underlying such analyses remain unclear. We compared decoding performance for stimulus orientation and eye of origin from fMRI measurements in human visual cortex with predictions based on the columnar organization of each feature and estimated the spatial scales of patterns driving decoding. Both orientation and eye of origin could be decoded significantly above chance in early visual areas (V1–V3). Contrary to predictions based on a columnar origin of response biases, decoding performance for eye of origin in V2 and V3 was not significantly lower than that in V1, nor did decoding performance for orientation and eye of origin differ significantly. Instead, response biases for both features showed large-scale organization, evident as a radial bias for orientation, and a nasotemporal bias for eye preference. To determine whether these patterns could drive classification, we quantified the effect on classification performance of binning voxels according to visual field position. Consistent with large-scale biases driving classification, binning by polar angle yielded significantly better decoding performance for orientation than random binning in V1–V3. Similarly, binning by hemifield significantly improved decoding performance for eye of origin. Patterns of orientation and eye preference bias in V2 and V3 showed a substantial degree of spatial correlation with the corresponding patterns in V1, suggesting that response biases in these areas originate in V1. Together, these findings indicate that multivariate classification results need not reflect the underlying columnar organization of neuronal response selectivities in early visual areas. NEW & NOTEWORTHY Large-scale response biases can account for decoding of orientation and eye of origin in human early visual areas V1–V3. For eye of origin this pattern is a nasotemporal bias; for orientation it is a radial bias. Differences in decoding performance across areas and stimulus features are not well predicted by differences in columnar-scale organization of each feature. Large-scale biases in extrastriate areas are spatially correlated with those in V1, suggesting biases originate in primary visual cortex. PMID:27903637

  19. Bayesian Hierarchical Modeling for Big Data Fusion in Soil Hydrology

    NASA Astrophysics Data System (ADS)

    Mohanty, B.; Kathuria, D.; Katzfuss, M.

    2016-12-01

    Soil moisture datasets from remote sensing (RS) platforms (such as SMOS and SMAP) and reanalysis products from land surface models are typically available on a coarse spatial granularity of several square km. Ground-based sensors, on the other hand, provide observations on a finer spatial scale (meter scale or less) but are sparsely available. Soil moisture exhibits high variability due to complex interactions between geologic, topographic, vegetation and atmospheric variables. Hydrologic processes usually occur at a scale of 1 km or less, and therefore spatially ubiquitous and temporally periodic soil moisture products at this scale are required to aid local decision makers in agriculture, weather prediction and reservoir operations. Past literature has largely focused on downscaling RS soil moisture for a small extent of a field or a watershed, and hence the applicability of such products has been limited. The present study employs a spatial Bayesian Hierarchical Model (BHM) to derive soil moisture products at a spatial scale of 1 km for the state of Oklahoma by fusing point-scale Mesonet data and coarse-scale RS data for soil moisture and its auxiliary covariates such as precipitation, topography, soil texture and vegetation. It is seen that the BHM handles change-of-support problems easily while providing accurate quantification of the uncertainty arising from measurement errors and imperfect retrieval algorithms. The computational challenge arising from the large number of measurements is tackled by utilizing basis function approaches and likelihood approximations. The BHM can be considered a complex Bayesian extension of traditional geostatistical prediction methods (such as Kriging) for large datasets in the presence of uncertainties.
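
    The BHM described above is a full Bayesian treatment; as a much simpler stand-in that conveys the geostatistical flavour, the sketch below performs simple-kriging-style prediction of soil moisture anomalies from sparse point observations with an assumed exponential covariance. The covariance parameters, nugget, background mean, and grid are all illustrative assumptions.

        import numpy as np

        def exp_cov(d, sill=1.0, range_km=50.0):
            """Exponential covariance model (assumed, not fitted)."""
            return sill * np.exp(-d / range_km)

        def simple_krige(obs_xy, obs_val, pred_xy, nugget=0.05):
            """Simple kriging of anomalies about a known (assumed) mean."""
            d_oo = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
            d_op = np.linalg.norm(obs_xy[:, None] - pred_xy[None, :], axis=-1)
            K = exp_cov(d_oo) + nugget * np.eye(len(obs_xy))
            weights = np.linalg.solve(K, exp_cov(d_op))
            return weights.T @ obs_val

        # Toy Mesonet-like observations (x, y in km; soil moisture in m3/m3).
        obs_xy = np.array([[10.0, 20.0], [40.0, 35.0], [70.0, 10.0], [55.0, 60.0]])
        obs_val = np.array([0.22, 0.18, 0.25, 0.20]) - 0.21   # anomalies about an assumed mean
        grid = np.array([[x, y] for x in range(0, 80, 10) for y in range(0, 80, 10)], float)
        pred = simple_krige(obs_xy, obs_val, grid) + 0.21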

  20. Genome-wide computational prediction and analysis of core promoter elements across plant monocots and dicots

    USDA-ARS?s Scientific Manuscript database

    Transcription initiation, essential to gene expression regulation, involves recruitment of basal transcription factors to the core promoter elements (CPEs). The distribution of currently known CPEs across plant genomes is largely unknown. This is the first large scale genome-wide report on the compu...

  1. Hele-Shaw scaling properties of low-contrast Saffman-Taylor flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiFrancesco, M. W.; Maher, J. V.

    1989-07-01

    We have measured variations of Saffman-Taylor flows by changing dimensionless surface tension B alone and by changing B in conjunction with changes in dimensionless viscosity contrast A. Our low-aspect-ratio cell permits close study of the linear- and early nonlinear-flow regimes. Our critical binary-liquid sample allows study of very low values of A. The predictions of linear stability analysis work well for predicting which length scales are important, but discrepancies are observed for growth rates. We observe an empirical scaling law for growth of the Fourier modes of the patterns in the linear regime. The observed front propagation velocity for side-wall disturbances is constantly 2±1 in dimensionless units, a value consistent with the predictions of Langer and of van Saarloos. Patterns in both the linear and nonlinear regimes collapse impressively under the scaling suggested by the Hele-Shaw equations. Violations of scaling due to wetting phenomena are not evident here, presumably because the wetting properties of the two phases of the critical binary liquid are so similar; thus direct comparison with large-scale Hele-Shaw simulations should be meaningful.

  2. Hydroclimatic drivers, Water-borne Diseases, and Population Vulnerability in Bengal Delta

    NASA Astrophysics Data System (ADS)

    Akanda, A. S.; Jutla, A. S.

    2012-04-01

    Water-borne diarrheal disease outbreaks in the Bengal Delta region, such as cholera, rotavirus, and dysentery, show distinct seasonal peaks and spatial signatures in their origin and progression. However, the mechanisms behind these seasonal phenomena, especially the role of regional climatic and hydrologic processes behind the disease outbreaks, are not fully understood. Overall diarrheal disease prevalence and the population vulnerability to transmission mechanisms thus remain severely underestimated. Recent findings suggest that diarrheal incidence in the spring is strongly associated with scarcity of freshwater flow volumes, while the abundance of water in monsoon show strong positive correlation with autumn diarrheal burden. The role of large-scale ocean-atmospheric processes that tend to modulate meteorological, hydrological, and environmental conditions over large regions and the effects on the ecological states conducive to the vectors and triggers of diarrheal outbreaks over large geographic regions are not well understood. We take a large scale approach to conduct detailed diagnostic analyses of a range of climate, hydrological, and ecosystem variables to investigate their links to outbreaks, occurrence, and transmission of the most prevalent water-borne diarrheal diseases. We employ satellite remote sensing data products to track coastal ecosystems and plankton processes related to cholera outbreaks. In addition, we investigate the effect of large scale hydroclimatic extremes (e.g., droughts and floods, El Nino) to identify how diarrheal transmission and epidemic outbreaks are most likely to respond to shifts in climatic, hydrologic, and ecological changes over coming decades. We argue that controlling diarrheal disease burden will require an integrated predictive surveillance approach - a combination of prediction and prevention - with recent advances in climate-based predictive capabilities and demonstrated successes in primary and tertiary prevention in endemic regions.

  3. Seasonal prediction of US summertime ozone using statistical analysis of large scale climate patterns.

    PubMed

    Shen, Lu; Mickley, Loretta J

    2017-03-07

    We develop a statistical model to predict June-July-August (JJA) daily maximum 8-h average (MDA8) ozone concentrations in the eastern United States based on large-scale climate patterns during the previous spring. We find that anomalously high JJA ozone in the East is correlated with these springtime patterns: warm tropical Atlantic and cold northeast Pacific sea surface temperatures (SSTs), as well as positive sea level pressure (SLP) anomalies over Hawaii and negative SLP anomalies over the Atlantic and North America. We then develop a linear regression model to predict JJA MDA8 ozone from 1980 to 2013, using the identified SST and SLP patterns from the previous spring. The model explains ∼45% of the variability in JJA MDA8 ozone concentrations and ∼30% variability in the number of JJA ozone episodes (>70 ppbv) when averaged over the eastern United States. This seasonal predictability results from large-scale ocean-atmosphere interactions. Warm tropical Atlantic SSTs can trigger diabatic heating in the atmosphere and influence the extratropical climate through stationary wave propagation, leading to greater subsidence, less precipitation, and higher temperatures in the East, which increases surface ozone concentrations there. Cooler SSTs in the northeast Pacific are also associated with more summertime heatwaves and high ozone in the East. On average, models participating in the Atmospheric Model Intercomparison Project fail to capture the influence of this ocean-atmosphere interaction on temperatures in the eastern United States, implying that such models would have difficulty simulating the interannual variability of surface ozone in this region.
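
    A minimal regression of the kind described, JJA ozone regressed on springtime SST/SLP indices, could be set up as below; the index names and synthetic values are placeholders for the patterns identified in the paper, not the study's data.

        import numpy as np

        # Hypothetical springtime predictors (standardized indices) and JJA MDA8 ozone (ppbv),
        # one entry per year; all values are synthetic placeholders.
        rng = np.random.default_rng(0)
        n_years = 34                                   # 1980-2013
        trop_atl_sst = rng.standard_normal(n_years)
        ne_pac_sst = rng.standard_normal(n_years)
        hawaii_slp = rng.standard_normal(n_years)
        ozone = (55 + 3.0 * trop_atl_sst - 2.0 * ne_pac_sst + 1.5 * hawaii_slp
                 + rng.normal(scale=2.0, size=n_years))

        X = np.column_stack([np.ones(n_years), trop_atl_sst, ne_pac_sst, hawaii_slp])
        coeffs, *_ = np.linalg.lstsq(X, ozone, rcond=None)
        predicted = X @ coeffs
        r2 = 1 - np.sum((ozone - predicted) ** 2) / np.sum((ozone - ozone.mean()) ** 2)
        print("regression coefficients:", coeffs, "explained variance:", r2)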

  4. Seasonal prediction of US summertime ozone using statistical analysis of large scale climate patterns

    PubMed Central

    Mickley, Loretta J.

    2017-01-01

    We develop a statistical model to predict June–July–August (JJA) daily maximum 8-h average (MDA8) ozone concentrations in the eastern United States based on large-scale climate patterns during the previous spring. We find that anomalously high JJA ozone in the East is correlated with these springtime patterns: warm tropical Atlantic and cold northeast Pacific sea surface temperatures (SSTs), as well as positive sea level pressure (SLP) anomalies over Hawaii and negative SLP anomalies over the Atlantic and North America. We then develop a linear regression model to predict JJA MDA8 ozone from 1980 to 2013, using the identified SST and SLP patterns from the previous spring. The model explains ∼45% of the variability in JJA MDA8 ozone concentrations and ∼30% variability in the number of JJA ozone episodes (>70 ppbv) when averaged over the eastern United States. This seasonal predictability results from large-scale ocean–atmosphere interactions. Warm tropical Atlantic SSTs can trigger diabatic heating in the atmosphere and influence the extratropical climate through stationary wave propagation, leading to greater subsidence, less precipitation, and higher temperatures in the East, which increases surface ozone concentrations there. Cooler SSTs in the northeast Pacific are also associated with more summertime heatwaves and high ozone in the East. On average, models participating in the Atmospheric Model Intercomparison Project fail to capture the influence of this ocean–atmosphere interaction on temperatures in the eastern United States, implying that such models would have difficulty simulating the interannual variability of surface ozone in this region. PMID:28223483

  5. Genome wide analysis of flowering time trait in multiple environments via high-throughput genotyping technique in Brassica napus L.

    PubMed

    Li, Lun; Long, Yan; Zhang, Libin; Dalton-Morgan, Jessica; Batley, Jacqueline; Yu, Longjiang; Meng, Jinling; Li, Maoteng

    2015-01-01

    The prediction of the flowering time (FT) trait in Brassica napus based on genome-wide markers, and the detection of the underlying genetic factors, is important not only for oilseed producers around the world but also for other crops grown in rotation with B. napus in China. In previous studies, the low density and heterogeneity of the markers used hindered genomic selection in B. napus and comprehensive mapping of FT-related loci. In this study, a high-density genome-wide SNP set was genotyped from a doubled-haploid population of B. napus. We first performed genomic prediction of FT traits in B. napus using SNPs across the genome under ten environments in three geographic regions via eight existing genomic prediction models. The results showed that all the models achieved comparably high accuracies, verifying the feasibility of genomic prediction in B. napus. Next, we performed large-scale mapping of FT-related loci among the three regions and found 437 associated SNPs, some of which represent known FT genes, such as AP1 and PHYE. The genes tagged by the associated SNPs were enriched in biological processes involved in the formation of flowers. Epistasis analysis showed significant interactions between detected loci, even among some known FT-related genes. All the results show that our large-scale, high-density genotype data are of great practical and scientific value for B. napus. To the best of our knowledge, this is the first evaluation of genomic selection models in B. napus based on a high-density SNP dataset and large-scale mapping of FT loci.

  6. Examining Chaotic Convection with Super-Parameterization Ensembles

    NASA Astrophysics Data System (ADS)

    Jones, Todd R.

    This study investigates a variety of features present in a new configuration of the Community Atmosphere Model (CAM) variant, SP-CAM 2.0. The new configuration (multiple-parameterization-CAM, MP-CAM) changes the manner in which the super-parameterization (SP) concept represents physical tendency feedbacks to the large scale by using the mean of 10 independent two-dimensional cloud-permitting model (CPM) curtains in each global model column instead of the conventional single CPM curtain. The climates of the SP and MP configurations are examined to investigate any significant differences caused by the application of convective physical tendencies that are more deterministic in nature, paying particular attention to extreme precipitation events and large-scale weather systems, such as the Madden-Julian Oscillation (MJO). A number of small but significant changes in the mean state climate are uncovered, and it is found that the new formulation degrades MJO performance. Despite these deficiencies, the ensemble of possible realizations of convective states in the MP configuration allows for analysis of uncertainty in the small-scale solution, permitting examination of the weather regimes and physical mechanisms associated with strong, chaotic convection. Methods of quantifying precipitation predictability are explored, and use of the most reliable of these leads to the conclusion that poor precipitation predictability is most directly related to the proximity of the global climate model column state to atmospheric critical points. Secondarily, the predictability is tied to the availability of potential convective energy, the presence of mesoscale convective organization on the CPM grid, and the directive power of the large scales.

  7. Predicting Early School Achievement with the EDI: A Longitudinal Population-Based Study

    ERIC Educational Resources Information Center

    Forget-Dubois, Nadine; Lemelin, Jean-Pascal; Boivin, Michel; Dionne, Ginette; Seguin, Jean R.; Vitaro, Frank; Tremblay, Richard E.

    2007-01-01

    School readiness tests are significant predictors of early school achievement. Measuring school readiness on a large scale would be necessary for the implementation of intervention programs at the community level. However, assessment of school readiness is costly and time consuming. This study assesses the predictive value of a school readiness…

  8. Perceptions of Crowding: Predicting at the Residence, Neighborhood, and City Levels.

    ERIC Educational Resources Information Center

    Schmidt, Donald E.; And Others

    1979-01-01

    Details the results of a large-scale field study aimed at testing two theories on human crowding. Found that psychological factors are increasingly important for the prediction of crowding as one moved from the immediate residence to the less immediate city level. Implications, limitations and further results are discussed. (Author/MA)

  9. Massive neutrinos and the pancake theory of galaxy formation

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    Three problems encountered by the pancake theory of galaxy formation in a massive neutrino-dominated universe are discussed. A nonlinear model for pancakes is shown to reconcile the data with the predicted coherence length and velocity field, and minimal predictions are given of the contribution from the large-scale matter distribution.

  10. Comparing large-scale hydrological model predictions with observed streamflow in the Pacific Northwest: effects of climate and groundwater

    Treesearch

    Mohammad Safeeq; Guillaume S. Mauger; Gordon E. Grant; Ivan Arismendi; Alan F. Hamlet; Se-Yeun Lee

    2014-01-01

    Assessing uncertainties in hydrologic models can improve accuracy in predicting future streamflow. Here, simulated streamflows using the Variable Infiltration Capacity (VIC) model at coarse (1/16°) and fine (1/120°) spatial resolutions were evaluated against observed streamflows from 217 watersheds. In...

  11. Predicting Southern Appalachian overstory vegetation with digital terrain data

    Treesearch

    Paul V. Bolstad; Wayne Swank; James Vose

    1998-01-01

    Vegetation in mountainous regions responds to small-scale variation in terrain, largely due to effects on both temperature and soil moisture. However, there are few studies of quantitative, terrain-based methods for predicting vegetation composition. This study investigated relationships between forest composition, elevation, and a derived index of terrain shape, and...

  12. On Predictability of System Anomalies in Real World

    DTIC Science & Technology

    2011-08-01

    distributed system SETI@home [44]. Different from the above work, this work focuses on quantifying the predictability of real-world system anomalies. ... J.-M. Vincent, and D. Anderson, “Mining for statistical models of availability in large-scale distributed systems: An empirical study of SETI@home,” in Proc. of MASCOTS, Sept. 2009.

  13. Federated learning of predictive models from federated Electronic Health Records.

    PubMed

    Brisimi, Theodora S; Chen, Ruidi; Mela, Theofanie; Olshevsky, Alex; Paschalidis, Ioannis Ch; Shi, Wei

    2018-04-01

    In an era of "big data," computationally efficient and privacy-aware solutions for large-scale machine learning problems become crucial, especially in the healthcare domain, where large amounts of data are stored in different locations and owned by different entities. Past research has focused on centralized algorithms, which assume the existence of a central data repository (database) that stores and can process the data from all participants. Such an architecture, however, can be impractical when data are not centrally located: it does not scale well to very large datasets, and it introduces single-point-of-failure risks that could compromise the integrity and privacy of the data. Given data spread widely across hospitals and individuals, a decentralized, computationally scalable methodology is very much in need. We aim at solving a binary supervised classification problem to predict hospitalizations for cardiac events using a distributed algorithm. We seek to develop a general decentralized optimization framework enabling multiple data holders to collaborate and converge to a common predictive model, without explicitly exchanging raw data. We focus on the soft-margin l1-regularized sparse Support Vector Machine (sSVM) classifier. We develop an iterative cluster Primal Dual Splitting (cPDS) algorithm for solving the large-scale sSVM problem in a decentralized fashion. Such a distributed learning scheme is relevant for multi-institutional collaborations or peer-to-peer applications, allowing the data holders to collaborate, while keeping every participant's data private. We test cPDS on the problem of predicting hospitalizations due to heart diseases within a calendar year based on information in the patients' Electronic Health Records prior to that year. cPDS converges faster than centralized methods at the cost of some communication between agents. It also converges faster and with less communication overhead compared to an alternative distributed algorithm. In both cases, it achieves similar prediction accuracy measured by the Area Under the Receiver Operating Characteristic Curve (AUC) of the classifier. We extract important features discovered by the algorithm that are predictive of future hospitalizations, thus providing a way to interpret the classification results and inform prevention efforts. Copyright © 2018 Elsevier B.V. All rights reserved.
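
    The paper's cPDS algorithm is a decentralized primal-dual method; as a far simpler, centralized stand-in that only makes the objective concrete, the sketch below minimizes a soft-margin hinge loss with an l1 penalty by plain subgradient descent. The feature matrix, step size, and regularization constant are placeholders, not the study's settings.

        import numpy as np

        def ssvm_subgradient(X, y, lam=0.01, lr=0.01, epochs=200):
            """Soft-margin l1-regularized SVM via subgradient descent (centralized sketch)."""
            n, d = X.shape
            w, b = np.zeros(d), 0.0
            for _ in range(epochs):
                margins = y * (X @ w + b)
                active = margins < 1                  # points violating the margin
                grad_w = -(y[active, None] * X[active]).sum(axis=0) / n + lam * np.sign(w)
                grad_b = -y[active].sum() / n
                w -= lr * grad_w
                b -= lr * grad_b
            return w, b

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 10))            # e.g., EHR-derived features (synthetic)
        y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200))
        w, b = ssvm_subgradient(X, y)
        accuracy = np.mean(np.sign(X @ w + b) == y)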

  14. SChloro: directing Viridiplantae proteins to six chloroplastic sub-compartments.

    PubMed

    Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Casadio, Rita

    2017-02-01

    Chloroplasts are organelles found in plants and involved in several important cell processes. Similarly to other compartments in the cell, chloroplasts have an internal structure comprising several sub-compartments, where different proteins are targeted to perform their functions. Given the relation between protein function and localization, the availability of effective computational tools to predict protein sub-organelle localization is crucial for large-scale functional studies. In this paper we present SChloro, a novel machine-learning approach to predict protein sub-chloroplastic localization, based on targeting signal detection and membrane protein information. The proposed approach performs multi-label predictions discriminating six chloroplastic sub-compartments: inner membrane, outer membrane, stroma, thylakoid lumen, plastoglobule and thylakoid membrane. In comparative benchmarks, the proposed method outperforms current state-of-the-art methods in both single- and multi-compartment predictions, with an overall multi-label accuracy of 74%. The results demonstrate the relevance of the approach, which is a good candidate for integration into more general large-scale annotation pipelines for protein subcellular localization. The method is available as a web server at http://schloro.biocomp.unibo.it (contact: gigi@biocomp.unibo.it).

  15. Numerical prediction of the Mid-Atlantic states cyclone of 18-19 February 1979

    NASA Technical Reports Server (NTRS)

    Atlas, R.; Rosenberg, R.

    1982-01-01

    A series of forecast experiments was conducted to assess the accuracy of the GLAS model, and to determine the importance of large scale dynamical processes and diabatic heating to the cyclogenesis. The GLAS model correctly predicted intense coastal cyclogenesis and heavy precipitation. Repeated without surface heat and moisture fluxes, the model failed to predict any cyclone development. An extended range forecast, a forecast from the NMC analysis interpolated to the GLAS grid, and a forecast from the GLAS analysis with the surface moisture flux excluded predicted weak coastal low development. Diabatic heating resulting from oceanic fluxes significantly contributed to the generation of low level cyclonic vorticity and the intensification and slow rate of movement of an upper level ridge over the western Atlantic. As an upper level short wave trough approached this ridge, diabatic heating associated with the release of latent heat intensified, and the gradient of vorticity, vorticity advection and upper level divergence in advance of the trough were greatly increased, providing strong large scale forcing for the surface cyclogenesis.

  16. Optimization of a novel biophysical model using large scale in vivo antisense hybridization data displays improved prediction capabilities of structurally accessible RNA regions

    PubMed Central

    Vazquez-Anderson, Jorge; Mihailovic, Mia K.; Baldridge, Kevin C.; Reyes, Kristofer G.; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B.

    2017-01-01

    Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA–RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA–RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA–mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. PMID:28334800

  17. A Large-Scale Assessment of Nucleic Acids Binding Site Prediction Programs

    PubMed Central

    Miao, Zhichao; Westhof, Eric

    2015-01-01

    Computational prediction of nucleic acid binding sites in proteins is necessary to disentangle functional mechanisms in most biological processes and to explore binding mechanisms. Several strategies have been proposed, but the state-of-the-art approaches display a great diversity in i) the definition of nucleic acid binding sites; ii) the training and test datasets; iii) the algorithmic methods for the prediction strategies; iv) the performance measures; and v) the distribution and availability of the prediction programs. Here we report a large-scale assessment of 19 web servers and 3 stand-alone programs on 41 datasets including more than 5000 proteins derived from 3D structures of protein-nucleic acid complexes. Well-defined binary assessment criteria (specificity, sensitivity, precision, accuracy…) are applied. We found that i) the tools have been greatly improved over the years; ii) some of the approaches suffer from theoretical defects and there is still room for sorting out the essential mechanisms of binding; iii) RNA binding and DNA binding appear to follow similar driving forces; and iv) dataset bias may exist in some methods. PMID:26681179
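
    The binary assessment criteria named in the abstract reduce to simple counts over a confusion matrix; a sketch, with hypothetical per-residue prediction and annotation arrays, is:

        import numpy as np

        def binary_metrics(predicted, actual):
            """Sensitivity, specificity, precision and accuracy from binary labels."""
            predicted, actual = np.asarray(predicted, bool), np.asarray(actual, bool)
            tp = np.sum(predicted & actual)
            tn = np.sum(~predicted & ~actual)
            fp = np.sum(predicted & ~actual)
            fn = np.sum(~predicted & actual)
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "precision": tp / (tp + fp),
                "accuracy": (tp + tn) / (tp + tn + fp + fn),
            }

        # Hypothetical per-residue binding-site calls vs. the annotation from a 3D complex.
        pred = [1, 1, 0, 0, 1, 0, 1, 0]
        true = [1, 0, 0, 0, 1, 1, 1, 0]
        print(binary_metrics(pred, true))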

  18. Extension of Miles Equation for Ring Baffle Damping Predictions to Small Slosh Amplitudes and Large Baffle Widths

    NASA Technical Reports Server (NTRS)

    West, Jeff; Yang, H. Q.; Brodnick, Jacob; Sansone, Marco; Westra, Douglas

    2016-01-01

    The Miles equation has long been used to predict slosh damping in liquid propellant tanks due to ring baffles. The original work by Miles identifies defined limits to its range of application. Recent evaluations of the Space Launch System found that the Core Stage baffle designs fall outside the limits of applicability of the Miles equation. This paper describes work conducted by NASA/MSFC to develop methods to predict slosh damping from ring baffles for conditions under which the Miles equation is not applicable. For asymptotically small slosh amplitudes or, conversely, large baffle widths, an asymptotic expression for slosh damping was developed and calibrated using historical experimental sub-scale slosh damping data. For the parameter space that lies between the regions of applicability of the asymptotic expression and the Miles equation, Computational Fluid Dynamics simulations of slosh damping were used to develop an expression for slosh damping. The combined multi-regime slosh prediction methodology is shown to be smooth at regime boundaries and consistent with both sub-scale experimental slosh damping data and the results of validated Computational Fluid Dynamics predictions of slosh damping due to ring baffles.

  19. Covariation in Plant Functional Traits and Soil Fertility within Two Species-Rich Forests

    PubMed Central

    Liu, Xiaojuan; Swenson, Nathan G.; Wright, S. Joseph; Zhang, Liwen; Song, Kai; Du, Yanjun; Zhang, Jinlong; Mi, Xiangcheng; Ren, Haibao; Ma, Keping

    2012-01-01

    The distribution of plant species along environmental gradients is expected to be predictable based on organismal function. Plant functional trait research has shown that trait values generally vary predictably along broad-scale climatic and soil gradients. This work has also demonstrated that at any one point along these gradients there is a large amount of interspecific trait variation. The present research proposes that this variation may be explained by the local-scale sorting of traits along soil fertility and acidity axes. Specifically, we predicted that trait values associated with high resource acquisition and growth rates would be found on soils that are more fertile and less acidic. We tested the expected relationships at the species-level and quadrat-level (20×20 m) using two large forest plots in Panama and China that contain over 450 species combined. Predicted relationships between leaf area and wood density and soil fertility were supported in some instances, but the majority of the predicted relationships were rejected. Alternative resource axes, such as light gradients, therefore likely play a larger role in determining the interspecific variability in plant functional traits in the two forests studied. PMID:22509355

  20. Basin-Scale Hydrologic Impacts of CO2 Storage: Regulatory and Capacity Implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birkholzer, J.T.; Zhou, Q.

    Industrial-scale injection of CO2 into saline sedimentary basins will cause large-scale fluid pressurization and migration of native brines, which may affect valuable groundwater resources overlying the deep sequestration reservoirs. In this paper, we discuss how such basin-scale hydrologic impacts can (1) affect regulation of CO2 storage projects and (2) may reduce current storage capacity estimates. Our assessment arises from a hypothetical future carbon sequestration scenario in the Illinois Basin, which involves twenty individual CO2 storage projects in a core injection area suitable for long-term storage. Each project is assumed to inject five million tonnes of CO2 per year for 50 years. A regional-scale three-dimensional simulation model was developed for the Illinois Basin that captures both the local-scale CO2-brine flow processes and the large-scale groundwater flow patterns in response to CO2 storage. The far-field pressure buildup predicted for this selected sequestration scenario suggests that (1) the area that needs to be characterized in a permitting process may comprise a very large region within the basin if reservoir pressurization is considered, and (2) permits cannot be granted on a single-site basis alone because the near- and far-field hydrologic response may be affected by interference between individual sites. Our results also support recent studies in that environmental concerns related to near-field and far-field pressure buildup may be a limiting factor on CO2 storage capacity. In other words, estimates of storage capacity, if solely based on the effective pore volume available for safe trapping of CO2, may have to be revised based on assessments of pressure perturbations and their potential impact on caprock integrity and groundwater resources, respectively. We finally discuss some of the challenges in making reliable predictions of large-scale hydrologic impacts related to CO2 sequestration projects.

  1. Tidal interactions in the expanding universe - The formation of prolate systems

    NASA Technical Reports Server (NTRS)

    Binney, J.; Silk, J.

    1979-01-01

    The study estimates the magnitude of the anisotropy that can be tidally induced in neighboring initially spherical protostructures, be they protogalaxies, protoclusters, or even uncollapsed density enhancements in the large-scale structure of the universe. It is shown that the linear analysis of tidal interactions developed by Peebles (1969) predicts that the anisotropy energy of a perturbation grows to first order in a small dimensionless parameter, whereas the net angular momentum acquired is of second order. A simple model is presented for the growth of anisotropy by tidal interactions during the nonlinear stage of the development of perturbations. A possible observational test is described of the alignment predicted by the model between the orientations of large-scale perturbations and the positions of neighboring density enhancements.

  2. Numerical Simulation of a High Mach Number Jet Flow

    NASA Technical Reports Server (NTRS)

    Hayder, M. Ehtesham; Turkel, Eli; Mankbadi, Reda R.

    1993-01-01

    The recent efforts to develop accurate numerical schemes for transitional and turbulent flows are motivated, among other factors, by the need for accurate prediction of flow noise. The success of developing the high-speed civil transport (HSCT) plane is contingent upon our understanding and suppression of the jet exhaust noise. The radiated sound can be directly obtained by solving the full (time-dependent) compressible Navier-Stokes equations. However, this requires computational storage that is beyond currently available machines. This difficulty can be overcome by limiting the solution domain to the near field, where the jet is nonlinear, and then using an acoustic analogy (e.g., Lighthill) to relate the far-field noise to the near-field sources. The latter requires obtaining the time-dependent flow field. The other difficulty in aeroacoustics computations is that at high Reynolds numbers the turbulent flow has a large range of scales. Direct numerical simulations (DNS) cannot obtain all the scales of motion at the high Reynolds numbers of technological interest. However, it is believed that the large-scale structure is more efficient than the small-scale structure in radiating noise. Thus, one can model the small scales and calculate the acoustically active scales. The large-scale structure in the noise-producing initial region of the jet can be viewed as wavelike in nature; the net radiated sound is then the net cancellation after integration over space. As such, aeroacoustics computations are highly sensitive to errors in computing the sound sources. It is therefore essential to use a high-order numerical scheme to predict the flow field. The present paper presents the first step in an ongoing effort to predict jet noise. The emphasis here is on accurate prediction of the unsteady flow field. We solve the full time-dependent Navier-Stokes equations by a high-order finite difference method. Time-accurate spatial simulations of both plane and axisymmetric jets are presented. Jet Mach numbers of 1.5 and 2.1 are considered. The Reynolds number in the simulations was about a million. Our numerical model is based on the 2-4 scheme by Gottlieb & Turkel. Bayliss et al. applied the 2-4 scheme in boundary layer computations. This scheme was also used by Ragab and Sheen to study the nonlinear development of supersonic instability waves in a mixing layer. In this study, we present two-dimensional direct simulation results for both plane and axisymmetric jets. These results are compared with linear theory predictions. These computations were made for the near-nozzle-exit region, and the velocity in the spanwise/azimuthal direction was assumed to be zero.

  3. Radiative PQ breaking and the Higgs boson mass

    NASA Astrophysics Data System (ADS)

    D'Eramo, Francesco; Hall, Lawrence J.; Pappadopulo, Duccio

    2015-06-01

    The small and negative value of the Standard Model Higgs quartic coupling at high scales can be understood in terms of anthropic selection on a landscape where large and negative values are favored: most universes have a very short-lived electroweak vacuum and typical observers are in universes close to the corresponding metastability boundary. We provide a simple example of such a landscape with a Peccei-Quinn symmetry breaking scale generated through dimensional transmutation and supersymmetry softly broken at an intermediate scale. Large and negative contributions to the Higgs quartic are typically generated on integrating out the saxion field. Cancellations among these contributions are forced by the anthropic requirement of a sufficiently long-lived electroweak vacuum, determining the multiverse distribution for the Higgs quartic in a similar way to that of the cosmological constant. This leads to a statistical prediction of the Higgs boson mass that, for a wide range of parameters, yields the observed value within the 1σ statistical uncertainty of ˜ 5 GeV originating from the multiverse distribution. The strong CP problem is solved and single-component axion dark matter is predicted, with an abundance that can be understood from environmental selection. A more general setting for the Higgs mass prediction is discussed.

  4. Transfer of movement sequences: bigger is better.

    PubMed

    Dean, Noah J; Kovacs, Attila J; Shea, Charles H

    2008-02-01

    Experiment 1 was conducted to determine if proportional transfer from "small to large" scale movements is as effective as transferring from "large to small." We hypothesize that the learning of larger scale movements will require the participant to learn to manage the generation, storage, and dissipation of forces better than when practicing smaller scale movements. Thus, we predict an advantage for transfer of larger scale movements to smaller scale movements relative to transfer from smaller to larger scale movements. Experiment 2 was conducted to determine if adding a load to a smaller scale movement would enhance later transfer to a larger scale movement sequence. It was hypothesized that the added load would require the participants to consider the dynamics of the movement to a greater extent than without the load. The results replicated earlier findings of effective transfer from large to small movements, but consistent with our hypothesis, transfer was less effective from small to large (Experiment 1). However, when a load was added during acquisition, transfer from small to large was enhanced even though the load was removed during the transfer test. These results are consistent with the notion that the transfer asymmetry noted in Experiment 1 was due to factors related to movement dynamics that were enhanced during practice of the larger scale movement sequence, but not during practice of the smaller scale movement sequence. The finding that the movement structure is unaffected by transfer direction while the movement dynamics are influenced by it is consistent with hierarchical models of sequence production.

  5. Energetic Consistency and Coupling of the Mean and Covariance Dynamics

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2008-01-01

    The dynamical state of the ocean and atmosphere is taken to be a large dimensional random vector in a range of large-scale computational applications, including data assimilation, ensemble prediction, sensitivity analysis, and predictability studies. In each of these applications, numerical evolution of the covariance matrix of the random state plays a central role, because this matrix is used to quantify uncertainty in the state of the dynamical system. Since atmospheric and ocean dynamics are nonlinear, there is no closed evolution equation for the covariance matrix, nor for the mean state. Therefore approximate evolution equations must be used. This article studies theoretical properties of the evolution equations for the mean state and covariance matrix that arise in the second-moment closure approximation (third- and higher-order moment discard). This approximation was introduced by EPSTEIN [1969] in an early effort to introduce a stochastic element into deterministic weather forecasting, and was studied further by FLEMING [1971a,b], EPSTEIN and PITCHER [1972], and PITCHER [1977], also in the context of atmospheric predictability. It has since fallen into disuse, with a simpler one being used in current large-scale applications. The theoretical results of this article make a case that this approximation should be reconsidered for use in large-scale applications, however, because the second moment closure equations possess a property of energetic consistency that the approximate equations now in common use do not possess. A number of properties of solutions of the second-moment closure equations that result from this energetic consistency will be established.
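
    As a schematic illustration of the closure (the article's exact formulation may differ; a quadratic nonlinearity with B symmetric in its last two indices is assumed here), for dynamics dx_i/dt = L_ij x_j + B_ijk x_j x_k, with mean μ = <x> and covariance P = <x' x'^T>, discarding third and higher moments gives

      dμ_i/dt = L_ij μ_j + B_ijk (μ_j μ_k + P_jk)
      dP_ij/dt = A_ik(μ) P_kj + P_ik A_jk(μ),   where A_ik(μ) = L_ik + 2 B_ikm μ_m.

    The covariance feeds back on the mean through the term B_ijk P_jk; this mean-covariance coupling is what underlies the energetic-consistency property analyzed in the article.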

  6. Zebrafish Whole-Adult-Organism Chemogenomics for Large-Scale Predictive and Discovery Chemical Biology

    PubMed Central

    Lam, Siew Hong; Mathavan, Sinnakarupan; Tong, Yan; Li, Haixia; Karuturi, R. Krishna Murthy; Wu, Yilian; Vega, Vinsensius B.; Liu, Edison T.; Gong, Zhiyuan

    2008-01-01

    The ability to perform large-scale, expression-based chemogenomics on whole adult organisms, as in invertebrate models (worm and fly), is highly desirable for a vertebrate model but its feasibility and potential has not been demonstrated. We performed expression-based chemogenomics on the whole adult organism of a vertebrate model, the zebrafish, and demonstrated its potential for large-scale predictive and discovery chemical biology. Focusing on two classes of compounds with wide implications to human health, polycyclic (halogenated) aromatic hydrocarbons [P(H)AHs] and estrogenic compounds (ECs), we generated robust prediction models that can discriminate compounds of the same class from those of different classes in two large independent experiments. The robust expression signatures led to the identification of biomarkers for potent aryl hydrocarbon receptor (AHR) and estrogen receptor (ER) agonists, respectively, and were validated in multiple targeted tissues. Knowledge-based data mining of human homologs of zebrafish genes revealed highly conserved chemical-induced biological responses/effects, health risks, and novel biological insights associated with AHR and ER that could be inferred to humans. Thus, our study presents an effective, high-throughput strategy of capturing molecular snapshots of chemical-induced biological states of a whole adult vertebrate that provides information on biomarkers of effects, deregulated signaling pathways, and possible affected biological functions, perturbed physiological systems, and increased health risks. These findings place zebrafish in a strategic position to bridge the wide gap between cell-based and rodent models in chemogenomics research and applications, especially in preclinical drug discovery and toxicology. PMID:18618001

  7. In silico prediction of splice-altering single nucleotide variants in the human genome.

    PubMed

    Jian, Xueqiu; Boerwinkle, Eric; Liu, Xiaoming

    2014-12-16

    In silico tools have been developed to predict variants that may have an impact on pre-mRNA splicing. The major limitation of the application of these tools to basic research and clinical practice is the difficulty in interpreting the output. Most tools only predict potential splice sites given a DNA sequence without measuring splicing signal changes caused by a variant. Another limitation is the lack of large-scale evaluation studies of these tools. We compared eight in silico tools on 2959 single nucleotide variants within splicing consensus regions (scSNVs) using receiver operating characteristic analysis. The Position Weight Matrix model and MaxEntScan outperformed other methods. Two ensemble learning methods, adaptive boosting and random forests, were used to construct models that take advantage of individual methods. Both models further improved prediction, with outputs of directly interpretable prediction scores. We applied our ensemble scores to scSNVs from the Catalogue of Somatic Mutations in Cancer database. Analysis showed that predicted splice-altering scSNVs are enriched in recurrent scSNVs and known cancer genes. We pre-computed our ensemble scores for all potential scSNVs across the human genome, providing a whole genome level resource for identifying splice-altering scSNVs discovered from large-scale sequencing studies.
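
    As an illustrative sketch only (synthetic data and a hypothetical feature layout; the paper's actual tool scores and settings are not reproduced), an ensemble score of this kind treats the individual tools' outputs as features and trains adaptive boosting and random forest classifiers whose class probabilities serve as directly interpretable scores:

      import numpy as np
      from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      # synthetic stand-in: 8 per-tool scores for 2959 scSNVs and binary splice-altering labels
      rng = np.random.default_rng(0)
      X = rng.normal(size=(2959, 8))
      y = (X[:, 0] + X[:, 1] + rng.normal(size=2959) > 0).astype(int)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
      ada = AdaBoostClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

      # ensemble scores in [0, 1] and their ROC performance
      print(roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]),
            roc_auc_score(y_te, ada.predict_proba(X_te)[:, 1]))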

  8. Remote Imaging Applied to Schistosomiasis Control: The Anning River Project

    NASA Technical Reports Server (NTRS)

    Seto, Edmund Y. W.; Maszle, Don R.; Spear, Robert C.; Gong, Peng

    1997-01-01

    The use of satellite imaging to remotely detect areas of high risk for transmission of infectious disease is an appealing prospect for large-scale monitoring of these diseases. The detection of large-scale environmental determinants of disease risk, often called landscape epidemiology, has been motivated by several authors (Pavlovsky 1966; Meade et al. 1988). The basic notion is that large-scale factors such as population density, air temperature, hydrological conditions, soil type, and vegetation can determine in a coarse fashion the local conditions contributing to disease vector abundance and human contact with disease agents. These large-scale factors can often be remotely detected by sensors or cameras mounted on satellite or aircraft platforms and can thus be used in a predictive model to mark high risk areas of transmission and to target control or monitoring efforts. A review of satellite technologies for this purpose was recently presented by Washino and Wood (1994) and Hay (1997) and Hay et al. (1997).

  9. Large-scale three-dimensional phase-field simulations for phase coarsening at ultrahigh volume fraction on high-performance architectures

    NASA Astrophysics Data System (ADS)

    Yan, Hui; Wang, K. G.; Jones, Jim E.

    2016-06-01

    A parallel algorithm for large-scale three-dimensional phase-field simulations of phase coarsening is developed and implemented on high-performance architectures. From the large-scale simulations, a new kinetics of phase coarsening in the region of ultrahigh volume fraction is found. The parallel implementation is capable of harnessing the greater computer power available from high-performance architectures. The parallelized code enables an increase in three-dimensional simulation system size up to a 512^3 grid cube. Through the parallelized code, practical runtimes can be achieved for three-dimensional large-scale simulations, and the statistical significance of the results from these high-resolution parallel simulations is greatly improved over that obtainable from serial simulations. A detailed performance analysis of speed-up and scalability is presented, showing good scalability which improves with increasing problem size. In addition, a model for prediction of runtime is developed, which shows good agreement with actual run times from numerical tests.
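
    The abstract does not give the form of the runtime-prediction model, so the sketch below is only a generic illustration with made-up timings: fit a power-law model T ≈ a · N^b to measured wall-clock times by least squares in log space and report the parallel speed-up.

      import numpy as np

      # hypothetical wall-clock times (s) for simulations on N^3 grids
      N = np.array([64, 128, 256, 512])
      T = np.array([12.0, 95.0, 780.0, 6300.0])

      # fit log T = log a + b log N
      b, log_a = np.polyfit(np.log(N), np.log(T), 1)
      predicted_1024 = np.exp(log_a) * 1024 ** b

      # speed-up relative to a (hypothetical) serial reference time
      speedup = 6300.0 / 410.0
      print(predicted_1024, speedup)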

  10. NASA/FAA general aviation crash dynamics program

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Hayduk, R. J.; Carden, H. D.

    1981-01-01

    The program involves controlled full scale crash testing, nonlinear structural analyses to predict large deflection elastoplastic response, and load attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements on collapsing structure. Tests on typical full scale aircraft and on full and subscale structural components are performed to verify the analyses and to demonstrate load attenuating concepts. A special apparatus was built to test emergency locator transmitters when attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full scale aircraft crash tests.

  11. Spatial Pattern of Standing Timber Value across the Brazilian Amazon

    PubMed Central

    Ahmed, Sadia E.; Ewers, Robert M.

    2012-01-01

    The Amazon is a globally important system, providing a host of ecosystem services from climate regulation to food sources. It is also home to a quarter of all global diversity. Large swathes of forest are removed each year, and many models have attempted to predict the spatial patterns of this forest loss. The spatial patterns of deforestation are determined largely by the patterns of roads that open access to frontier areas and expansion of the road network in the Amazon is largely determined by profit seeking logging activities. Here we present predictions for the spatial distribution of standing value of timber across the Amazon. We show that the patterns of timber value reflect large-scale ecological gradients, determining the spatial distribution of functional traits of trees which are, in turn, correlated with timber values. We expect that understanding the spatial patterns of timber value across the Amazon will aid predictions of logging movements and thus predictions of potential future road developments. These predictions in turn will be of great use in estimating the spatial patterns of deforestation in this globally important biome. PMID:22590520

  12. Testing the consistency of three-point halo clustering in Fourier and configuration space

    NASA Astrophysics Data System (ADS)

    Hoffmann, K.; Gaztañaga, E.; Scoccimarro, R.; Crocce, M.

    2018-05-01

    We compare reduced three-point correlations Q of matter, haloes (as proxies for galaxies) and their cross-correlations, measured in a total simulated volume of ~100 (h^-1 Gpc)^3, to predictions from leading order perturbation theory on a large range of scales in configuration space. Predictions for haloes are based on the non-local bias model, employing linear (b1) and non-linear (c2, g2) bias parameters, which have been constrained previously from the bispectrum in Fourier space. We also study predictions from two other bias models, one local (g2 = 0) and one in which c2 and g2 are determined by b1 via approximately universal relations. Overall, measurements and predictions agree when Q is derived for triangles with (r1 r2 r3)^(1/3) ≳ 60 h^-1 Mpc, where r1, r2, r3 are the sizes of the triangle legs. Predictions for Q_matter, based on the linear power spectrum, show significant deviations from the measurements at the BAO scale (given our small measurement errors), which strongly decrease when adding a damping term or using the non-linear power spectrum, as expected. Predictions for Q_halo agree best with measurements at large scales when considering non-local contributions. The universal bias model works well for haloes and might therefore also be useful for tightening constraints on b1 from Q in galaxy surveys. Such constraints are independent of the amplitude of matter density fluctuations (σ8) and hence break the degeneracy between b1 and σ8 present in galaxy two-point correlations.
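
    For reference, the reduced three-point correlation used here is commonly defined from the three-point function ζ and the two-point function ξ of the triangle sides (a standard definition, stated for context rather than quoted from the paper):

      Q(r1, r2, r3) = ζ(r1, r2, r3) / [ξ(r1) ξ(r2) + ξ(r2) ξ(r3) + ξ(r3) ξ(r1)]

    At leading order with purely local bias, the halo and matter statistics are related by Q_halo ≈ (Q_matter + c2)/b1; the non-local (g2) contributions studied in the paper modify this relation.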

  13. Scaling laws for perturbations in the ocean-atmosphere system following large CO2 emissions

    NASA Astrophysics Data System (ADS)

    Towles, N.; Olson, P.; Gnanadesikan, A.

    2015-01-01

    Scaling relationships are derived for the perturbations to atmosphere and ocean variables from large transient CO2 emissions. Using the carbon cycle model LOSCAR (Zeebe et al., 2009; Zeebe, 2012b) we calculate perturbations to atmosphere temperature and total carbon, ocean temperature, total ocean carbon, pH, and alkalinity, marine sediment carbon, plus carbon-13 isotope anomalies in the ocean and atmosphere resulting from idealized CO2 emission events. The peak perturbations in the atmosphere and ocean variables are then fit to power law functions of the form γ D^α E^β, where D is the event duration, E is its total carbon emission, and γ is a coefficient. Good power law fits are obtained for most system variables for E up to 50 000 PgC and D up to 100 kyr. However, these power laws deviate substantially from predictions based on simplified equilibrium considerations. For example, although all of the peak perturbations increase with emission rate E/D, we find no evidence of emission-rate-only scaling (α + β = 0), a prediction of the long-term equilibrium between CO2 input by volcanism and CO2 removal by silicate weathering. Instead, our scaling yields α + β ≃ 1 for total ocean and atmosphere carbon and 0 < α + β < 1 for most of the other system variables. The deviations in these scaling laws from equilibrium predictions are mainly due to the multitude and diversity of time scales that govern the exchange of carbon between marine sediments, the ocean, and the atmosphere.
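
    As a worked illustration of how such exponents can be extracted (a generic sketch with made-up numbers, not the study's data), the coefficients of P = γ D^α E^β follow from an ordinary least-squares fit in log space:

      import numpy as np

      # hypothetical peak perturbations P for events of duration D (kyr) and size E (PgC)
      D = np.array([1.0, 3.0, 10.0, 30.0, 100.0])
      E = np.array([500.0, 1000.0, 5000.0, 10000.0, 50000.0])
      P = np.array([0.8, 1.1, 2.9, 3.6, 9.5])

      # log P = log γ + α log D + β log E
      A = np.column_stack([np.ones_like(D), np.log(D), np.log(E)])
      (log_gamma, alpha, beta), *_ = np.linalg.lstsq(A, np.log(P), rcond=None)
      print(np.exp(log_gamma), alpha, beta)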

  14. Simulation of all-scale atmospheric dynamics on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Smolarkiewicz, Piotr K.; Szmelter, Joanna; Xiao, Feng

    2016-10-01

    The advance of massively parallel computing in the nineteen nineties and beyond encouraged finer grid intervals in numerical weather-prediction models. This has improved resolution of weather systems and enhanced the accuracy of forecasts, while setting the trend for development of unified all-scale atmospheric models. This paper first outlines the historical background to a wide range of numerical methods advanced in the process. Next, the trend is illustrated with a technical review of a versatile nonoscillatory forward-in-time finite-volume (NFTFV) approach, proven effective in simulations of atmospheric flows from small-scale dynamics to global circulations and climate. The outlined approach exploits the synergy of two specific ingredients: the MPDATA methods for the simulation of fluid flows based on the sign-preserving properties of upstream differencing; and the flexible finite-volume median-dual unstructured-mesh discretisation of the spatial differential operators comprising PDEs of atmospheric dynamics. The paper consolidates the concepts leading to a family of generalised nonhydrostatic NFTFV flow solvers that include soundproof PDEs of incompressible Boussinesq, anelastic and pseudo-incompressible systems, common in large-eddy simulation of small- and meso-scale dynamics, as well as all-scale compressible Euler equations. Such a framework naturally extends predictive skills of large-eddy simulation to the global atmosphere, providing a bottom-up alternative to the reverse approach pursued in the weather-prediction models. Theoretical considerations are substantiated by calculations attesting to the versatility and efficacy of the NFTFV approach. Some prospective developments are also discussed.

  15. Formation of outflow channels on Mars: Testing the origin of Reull Vallis in Hesperia Planum by large-scale lava-ice interactions and top-down melting

    NASA Astrophysics Data System (ADS)

    Cassanelli, James P.; Head, James W.

    2018-05-01

    The Reull Vallis outflow channel is a segmented system of fluvial valleys which originates from the volcanic plains of the Hesperia Planum region of Mars. Explanation of the formation of the Reull Vallis outflow channel by canonical catastrophic groundwater release models faces difficulties with generating sufficient hydraulic head, requiring unreasonably high aquifer permeability, and from limited recharge sources. Recent work has proposed that large-scale lava-ice interactions could serve as an alternative mechanism for outflow channel formation on the basis of predictions of regional ice sheet formation in areas that also underwent extensive contemporaneous volcanic resurfacing. Here we assess in detail the potential formation of outflow channels by large-scale lava-ice interactions through an applied case study of the Reull Vallis outflow channel system, selected for its close association with the effusive volcanic plains of the Hesperia Planum region. We first review the geomorphology of the Reull Vallis system to outline criteria that must be met by the proposed formation mechanism. We then assess local and regional lava heating and loading conditions and generate model predictions for the formation of Reull Vallis to test against the outlined geomorphic criteria. We find that successive events of large-scale lava-ice interactions that melt ice deposits, which then undergo re-deposition due to climatic mechanisms, best explains the observed geomorphic criteria, offering improvements over previously proposed formation models, particularly in the ability to supply adequate volumes of water.

  16. Large Scale Processes and Extreme Floods in Brazil

    NASA Astrophysics Data System (ADS)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in the last years as a new tool to improve the traditional, stationary based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activities. We investigate for individual sites the exceedance probability in which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  17. Biogeographic affinity helps explain productivity-richness relationships at regional and local scales

    USGS Publications Warehouse

    Harrison, S.; Grace, J.B.

    2007-01-01

    The unresolved question of what causes the observed positive relationship between large-scale productivity and species richness has long interested ecologists and evolutionists. Here we examine a potential explanation that we call the biogeographic affinity hypothesis, which proposes that the productivity-richness relationship is a function of species' climatic tolerances that in turn are shaped by the earth's climatic history combined with evolutionary niche conservatism. Using botanical data from regions and sites across California, we find support for a key prediction of this hypothesis, namely, that the productivity-species richness relationship differs strongly and predictably among groups of higher taxa on the basis of their biogeographic affinities (i.e., between families or genera primarily associated with north-temperate, semiarid, or desert zones). We also show that a consideration of biogeographic affinity can yield new insights on how productivity-richness patterns at large geographic scales filter down to affect patterns of species richness and composition within local communities. ?? 2007 by The University of Chicago. All rights reserved.

  18. Ground Motion Simulation for a Large Active Fault System using Empirical Green's Function Method and the Strong Motion Prediction Recipe - a Case Study of the Noubi Fault Zone -

    NASA Astrophysics Data System (ADS)

    Kuriyama, M.; Kumamoto, T.; Fujita, M.

    2005-12-01

    The 1995 Hyogo-ken Nambu Earthquake near Kobe, Japan, spurred research on strong motion prediction. To mitigate damage caused by large earthquakes, a highly precise method of predicting future strong motion waveforms is required. In this study, we applied the empirical Green's function method to forward modeling in order to simulate strong ground motion in the Noubi Fault zone and examine issues related to strong motion prediction for large faults. Source models for the scenario earthquakes were constructed using the recipe of strong motion prediction (Irikura and Miyake, 2001; Irikura et al., 2003). To calculate the asperity area ratio of a large fault zone, the results of a scaling model, a scaling model with 22% asperity by area, and a cascade model were compared, and several rupture points and segmentation parameters were examined for certain cases. A small earthquake (Mw 4.6) that occurred in northern Fukui Prefecture in 2004 was used as the empirical Green's function, and the source spectrum of this small event was found to agree with the omega-square scaling law. The Nukumi, Neodani, and Umehara segments of the 1891 Noubi Earthquake were targeted in the present study. The positions of the asperity areas and rupture starting points were based on the horizontal displacement distributions reported by Matsuda (1974) and the fault branching pattern and rupture direction model proposed by Nakata and Goto (1998). Asymmetry in the damage maps for the Noubi Earthquake was then examined. We compared the maximum horizontal velocities for cases with different rupture starting points. In one case, rupture started at the center of the Nukumi Fault, while in another, rupture started at the southeastern edge of the Umehara Fault; the scaling model showed an approximately 2.1-fold difference between these cases at observation point FKI005 of K-Net. This difference is considered to be related to the directivity effect associated with the direction of rupture propagation. Moreover, the horizontal velocities obtained by assuming the cascade model were underestimated by more than one standard deviation of the empirical relation of Si and Midorikawa (1999). The scaling and cascade models showed an approximately 6.4-fold difference for the case in which the rupture started at the southeastern edge of the Umehara Fault, at observation point GIF020. This difference is considerably larger than the effect of different rupture starting points, and shows that it is important to base scenario earthquake assumptions on active fault datasets before establishing the source characterization model. The distribution map of seismic intensity for the 1891 Noubi Earthquake also suggests that the synthetic waveforms in the southeastern Noubi Fault zone may be underestimated. Our results indicate that outer fault parameters (e.g., earthquake moment) related to the construction of scenario earthquakes influence strong motion prediction more strongly than inner fault parameters such as the rupture starting point. Based on these methods, we will predict strong motion for the approximately 140 to 150 km long Itoigawa-Shizuoka Tectonic Line.
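
    For context, the omega-square source model mentioned above corresponds to a displacement source spectrum that is flat below a corner frequency f_c and decays as f^-2 above it (the standard Brune-type form, quoted here for reference rather than from the paper):

      S(f) = Ω_0 / [1 + (f/f_c)^2],   with the low-frequency level Ω_0 proportional to the seismic moment M_0.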

  19. Co-evolutionary Analysis of Domains in Interacting Proteins Reveals Insights into Domain–Domain Interactions Mediating Protein–Protein Interactions

    PubMed Central

    Jothi, Raja; Cherukuri, Praveen F.; Tasneem, Asba; Przytycka, Teresa M.

    2006-01-01

    Recent advances in functional genomics have helped generate large-scale high-throughput protein interaction data. Such networks, though extremely valuable towards molecular level understanding of cells, do not provide any direct information about the regions (domains) in the proteins that mediate the interaction. Here, we performed co-evolutionary analysis of domains in interacting proteins in order to understand the degree of co-evolution of interacting and non-interacting domains. Using a combination of sequence and structural analysis, we analyzed protein–protein interactions in F1-ATPase, Sec23p/Sec24p, DNA-directed RNA polymerase and nuclear pore complexes, and found that interacting domain pair(s) for a given interaction exhibits higher level of co-evolution than the noninteracting domain pairs. Motivated by this finding, we developed a computational method to test the generality of the observed trend, and to predict large-scale domain–domain interactions. Given a protein–protein interaction, the proposed method predicts the domain pair(s) that is most likely to mediate the protein interaction. We applied this method on the yeast interactome to predict domain–domain interactions, and used known domain–domain interactions found in PDB crystal structures to validate our predictions. Our results show that the prediction accuracy of the proposed method is statistically significant. Comparison of our prediction results with those from two other methods reveals that only a fraction of predictions are shared by all the three methods, indicating that the proposed method can detect known interactions missed by other methods. We believe that the proposed method can be used with other methods to help identify previously unrecognized domain–domain interactions on a genome scale, and could potentially help reduce the search space for identifying interaction sites. PMID:16949097
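
    As a simplified illustration of scoring co-evolution for a candidate domain pair (not the authors' exact procedure), one common approach is to compare the evolutionary distance matrices of the two domain alignments over a shared set of species and score the pair by their correlation; the distance matrices below are synthetic stand-ins.

      import numpy as np

      def coevolution_score(dist_a, dist_b):
          # Pearson correlation between the upper triangles of two inter-species
          # evolutionary distance matrices (same species order assumed)
          iu = np.triu_indices_from(dist_a, k=1)
          return np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]

      # hypothetical distance matrices for two domains over 6 shared species
      rng = np.random.default_rng(1)
      base = rng.random((6, 6)); base = (base + base.T) / 2; np.fill_diagonal(base, 0.0)
      noise = rng.random((6, 6)) * 0.1; noise = (noise + noise.T) / 2; np.fill_diagonal(noise, 0.0)
      print(coevolution_score(base, base + noise))  # an interacting-like pair scores high

    For a given protein-protein interaction, the domain pair with the highest such score would be the one predicted to mediate the interaction.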

  20. Quantum-gravity predictions for the fine-structure constant

    NASA Astrophysics Data System (ADS)

    Eichhorn, Astrid; Held, Aaron; Wetterich, Christof

    2018-07-01

    Asymptotically safe quantum fluctuations of gravity can uniquely determine the value of the gauge coupling for a large class of grand unified models. In turn, this makes the electromagnetic fine-structure constant calculable. The balance of gravity and matter fluctuations results in a fixed point for the running of the gauge coupling. It is approached as the momentum scale is lowered in the transplanckian regime, leading to a uniquely predicted value of the gauge coupling at the Planck scale. The precise value of the predicted fine-structure constant depends on the matter content of the grand unified model. It is proportional to the gravitational fluctuation effects for which computational uncertainties remain to be settled.

  1. Large eddy simulation modelling of combustion for propulsion applications.

    PubMed

    Fureby, C

    2009-07-28

    Predictive modelling of turbulent combustion is important for the development of air-breathing engines, internal combustion engines, furnaces and for power generation. Significant advances in modelling non-reactive turbulent flows are now possible with the development of large eddy simulation (LES), in which the large energetic scales of the flow are resolved on the grid while modelling the effects of the small scales. Here, we discuss the use of combustion LES in predictive modelling of propulsion applications such as gas turbine, ramjet and scramjet engines. The LES models used are described in some detail and are validated against laboratory data, of which results from two cases are presented. These validated LES models are then applied to an annular multi-burner gas turbine combustor and a simplified scramjet combustor, for which some additional experimental data are available. For these cases, good agreement with the available reference data is obtained, and the LES predictions are used to elucidate the flow physics in such devices to further enhance our knowledge of these propulsion systems. Particular attention is focused on the influence of the combustion chemistry, turbulence-chemistry interaction, self-ignition, flame holding, burner-to-burner interactions and combustion oscillations.

  2. StePS: Stereographically Projected Cosmological Simulations

    NASA Astrophysics Data System (ADS)

    Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László

    2018-05-01

    StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modifies the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and naive O(N2) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P3M) algorithms with similar spatial and mass resolution. The N2 force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
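
    A direct O(N^2) force calculation of the kind StePS relies on is straightforward to write down; the sketch below is a minimal, softened Newtonian pairwise sum (illustrative only, with G set to 1 and no stereographic projection or boundary treatment):

      import numpy as np

      def direct_forces(pos, mass, eps=0.01):
          # accelerations from a naive O(N^2) pairwise gravity sum (G = 1)
          acc = np.zeros_like(pos)
          for i in range(len(pos)):
              d = pos - pos[i]                          # vectors to all other particles
              r2 = (d ** 2).sum(axis=1) + eps ** 2      # softened squared distances
              r2[i] = np.inf                            # exclude self-interaction
              acc[i] = (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
          return acc

      rng = np.random.default_rng(0)
      pos = rng.standard_normal((256, 3))
      mass = np.ones(256) / 256
      a = direct_forces(pos, mass)

    The simplicity of this double loop is what makes the method easy to port to graphics hardware, as noted above.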

  3. Scalar decay in two-dimensional chaotic advection and Batchelor-regime turbulence

    NASA Astrophysics Data System (ADS)

    Fereday, D. R.; Haynes, P. H.

    2004-12-01

    This paper considers the decay in time of an advected passive scalar in a large-scale flow. The relation between the decay predicted by "Lagrangian stretching theories," which consider evolution of the scalar field within a small fluid element and then average over many such elements, and that observed at large times in numerical simulations, associated with emergence of a "strange eigenmode," is discussed. Qualitative arguments are supported by results from numerical simulations of scalar evolution in two-dimensional spatially periodic, time aperiodic flows, which highlight the differences between the actual behavior and that predicted by the Lagrangian stretching theories. In some cases the decay rate of the scalar variance is different from the theoretical prediction and determined globally, and in other cases it apparently matches the theoretical prediction. An updated theory for the wavenumber spectrum of the scalar field and a theory for the probability distribution of the scalar concentration are presented. The wavenumber spectrum and the probability density function both depend on the decay rate of the variance, but can otherwise be calculated from the statistics of the Lagrangian stretching history. In cases where the variance decay rate is not determined by the Lagrangian stretching theory, the wavenumber spectrum for scales that are much smaller than the length scale of the flow but much larger than the diffusive scale is argued to vary as k^(-1+ρ), where k is wavenumber, and ρ is a positive number which depends on the decay rate of the variance γ_2 and on the Lagrangian stretching statistics. The probability density function for the scalar concentration is argued to have algebraic tails, with exponent roughly -3 and with a cutoff that is determined by diffusivity κ and scales roughly as κ^(-1/2), and these predictions are shown to be in good agreement with numerical simulations.

  4. Factors Affecting Volunteering among Older Rural and City Dwelling Adults in Australia

    ERIC Educational Resources Information Center

    Warburton, Jeni; Stirling, Christine

    2007-01-01

    In the absence of large scale Australian studies of volunteering among older adults, this study compared the relevance of two theoretical approaches--social capital theory and sociostructural resources theory--to predict voluntary activity in relation to a large national database. The paper explores volunteering by older people (aged 55+) in order…

  5. Hierarchical spatial models for predicting tree species assemblages across large domains

    Treesearch

    Andrew O. Finley; Sudipto Banerjee; Ronald E. McRoberts

    2009-01-01

    Spatially explicit data layers of tree species assemblages, referred to as forest types or forest type groups, are a key component in large-scale assessments of forest sustainability, biodiversity, timber biomass, carbon sinks and forest health monitoring. This paper explores the utility of coupling georeferenced national forest inventory (NFI) data with readily...

  6. Large-area forest inventory regression modeling: spatial scale considerations

    Treesearch

    James A. Westfall

    2015-01-01

    In many forest inventories, statistical models are employed to predict values for attributes that are difficult and/or time-consuming to measure. In some applications, models are applied across a large geographic area, which assumes the relationship between the response variable and predictors is ubiquitously invariable within the area. The extent to which this...

  7. A cosmic superfluid phase

    NASA Technical Reports Server (NTRS)

    Gradwohl, Ben-Ami

    1991-01-01

    The universe may have undergone a superfluid-like phase during its evolution, resulting from the injection of nontopological charge into the spontaneously broken vacuum. In the presence of vortices this charge is identified with angular momentum. This leads to turbulent domains on the scale of the correlation length. By restoring the symmetry at low temperatures, the vortices dissociate and push the charges to the boundaries of these domains. The model can be scaled (phenomenologically) to very low energies; it can be incorporated in a late-time phase transition and form large-scale structure in the boundary layers of the correlation volumes. The novel feature of the model lies in the fact that the dark matter is endowed with coherent motion. The possibility of identifying this flow around superfluid vortices with the observed large-scale bulk motion is discussed. If this identification is possible, then the definite prediction can be made that a more extended map of peculiar velocities would have to reveal large-scale circulations in the flow pattern.

  8. FDTD method for laser absorption in metals for large scale problems.

    PubMed

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grids. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.
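
    For orientation, the sketch below is a bare-bones one-dimensional FDTD (Yee) update in vacuum, in normalized units; it illustrates the leapfrog E/H update the method is built on, not the wavelength-rescaling treatment of metals proposed in the article.

      import numpy as np

      nx, nt = 400, 1000
      Ez = np.zeros(nx)
      Hy = np.zeros(nx - 1)
      S = 0.5  # Courant number (c*dt/dx) in normalized units

      for n in range(nt):
          Hy += S * (Ez[1:] - Ez[:-1])          # update H from the curl of E
          Ez[1:-1] += S * (Hy[1:] - Hy[:-1])    # update E from the curl of H
          Ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source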

  9. Use of Direct and Indirect Estimates of Crown Dimensions to Predict One Seed Juniper Woody Biomass Yield for Alternative Energy Uses

    USDA-ARS?s Scientific Manuscript database

    Throughout the western United States there is increased interest in utilizing woodland biomass as an alternative energy source. We conducted a pilot study to predict one seed juniper (Juniperus monosperma) chip yield from tree-crown dimensions measured on the ground or derived from Very Large Scale ...

  10. NOAA's world-class weather and climate prediction center opens at

    Science.gov Websites

    The new center supports forecasting of currents and large-scale rain and snow storms. Billions of earth observations from around the world flow into its operations. Investing in this center is an investment in our human capital, serving as a world-class facility.

  11. Exploring cosmic homogeneity with the BOSS DR12 galaxy sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ntelis, Pierros; Hamilton, Jean-Christophe; Busca, Nicolas Guillermo

    2017-06-01

    In this study, we probe the transition to cosmic homogeneity in the Large Scale Structure (LSS) of the Universe using the CMASS galaxy sample of the BOSS spectroscopic survey, which covers the largest effective volume to date, 3 h^-3 Gpc^3 at 0.43 ≤ z ≤ 0.7. We study the scaled counts-in-spheres, N(<r), and the fractal correlation dimension, D_2(r), to assess the homogeneity scale of the universe using a Landy and Szalay inspired estimator. Defining the scale of transition to homogeneity as the scale at which D_2(r) reaches 3 within 1%, i.e. D_2(r) > 2.97 for r > R_H, we find R_H = (63.3±0.7) h^-1 Mpc, in agreement at the percentage level with the prediction of the ΛCDM model, R_H = 62.0 h^-1 Mpc. Thanks to the large cosmic depth of the survey, we investigate the redshift evolution of the transition-to-homogeneity scale and find agreement with the ΛCDM prediction. Finally, we find that D_2 is compatible with 3 at scales larger than 300 h^-1 Mpc in all redshift bins. These results consolidate the Cosmological Principle and represent a precise consistency test of the ΛCDM model.
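
    The fractal correlation dimension used in such homogeneity analyses is the logarithmic slope of the scaled counts-in-spheres (a standard definition, given here for context):

      D_2(r) = d ln N(<r) / d ln r

    so an unclustered, homogeneous distribution with N(<r) ∝ r^3 gives D_2 = 3, and R_H is the radius beyond which D_2(r) remains within 1% of that value.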

  12. Predicting above-ground density and distribution of small mammal prey species at large spatial scales

    Treesearch

    Lucretia E. Olson; John R. Squires; Robert J. Oakleaf; Zachary P. Wallace; Patricia L. Kennedy

    2017-01-01

    Grassland and shrub-steppe ecosystems are increasingly threatened by anthropogenic activities. Loss of native habitats may negatively impact important small mammal prey species. Little information, however, is available on the impact of habitat variability on density of small mammal prey species at broad spatial scales. We examined the relationship between small mammal...

  13. Large-scale particle acceleration by magnetic reconnection during solar flares

    NASA Astrophysics Data System (ADS)

    Li, X.; Guo, F.; Li, H.; Li, G.; Li, S.

    2017-12-01

    Magnetic reconnection that triggers explosive magnetic energy release has been widely invoked to explain the large-scale particle acceleration during solar flares. While great efforts have been spent in studying the acceleration mechanism in small-scale kinetic simulations, there have been rare studies that make predictions to acceleration in the large scale comparable to the flare reconnection region. Here we present a new arrangement to study this problem. We solve the large-scale energetic-particle transport equation in the fluid velocity and magnetic fields from high-Lundquist-number MHD simulations of reconnection layers. This approach is based on examining the dominant acceleration mechanism and pitch-angle scattering in kinetic simulations. Due to the fluid compression in reconnection outflows and merging magnetic islands, particles are accelerated to high energies and develop power-law energy distributions. We find that the acceleration efficiency and power-law index depend critically on upstream plasma beta and the magnitude of guide field (the magnetic field component perpendicular to the reconnecting component) as they influence the compressibility of the reconnection layer. We also find that the accelerated high-energy particles are mostly concentrated in large magnetic islands, making the islands a source of energetic particles and high-energy emissions. These findings may provide explanations for acceleration process in large-scale magnetic reconnection during solar flares and the temporal and spatial emission properties observed in different flare events.
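
    The transport equation referred to above is, in its standard Parker form for the isotropic part of the particle distribution f(x, p, t) (quoted here as the conventional form; the simulations may include additional terms),

      ∂f/∂t + u·∇f = ∇·(κ ∇f) + (1/3)(∇·u) ∂f/∂ln p + Q

    where u is the MHD flow velocity, κ the spatial diffusion coefficient set by pitch-angle scattering, and Q a source term; in compressive flows (∇·u < 0), such as reconnection outflows and merging islands, the (1/3)(∇·u) ∂f/∂ln p term acts as a systematic accelerator and produces the power-law spectra described.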

  14. Effect of turbulence modelling to predict combustion and nanoparticle production in the flame assisted spray dryer based on computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Septiani, Eka Lutfi; Widiyastuti, W.; Winardi, Sugeng; Machmudah, Siti; Nurtono, Tantular; Kusdianto

    2016-02-01

    Flame-assisted spray dryers are widely used for large-scale production of nanoparticles because of their capability. A numerical approach is needed to predict combustion and particle production during scale-up and optimization, owing to the difficulty of experimental observation and its relatively high cost. Computational Fluid Dynamics (CFD) can resolve the momentum, energy and mass transfer, making it more efficient than experiment in terms of time and cost. Here, two turbulence models, k-ɛ and Large Eddy Simulation, were compared and applied to a flame-assisted spray dryer system. The energy source for particle drying was the combustion of LPG as fuel with air as oxidizer and carrier gas, modelled as non-premixed combustion in the simulation. Silica particles produced from a silica sol precursor solution were used for the particle modelling. From several points of comparison, i.e. flame contour, temperature distribution and particle size distribution, the Large Eddy Simulation turbulence model provided the closest agreement with the experimental results.

  15. Sensitivity of local air quality to the interplay between small- and large-scale circulations: a large-eddy simulation study

    NASA Astrophysics Data System (ADS)

    Wolf-Grosse, Tobias; Esau, Igor; Reuder, Joachim

    2017-06-01

    Street-level urban air pollution is a challenging concern for modern urban societies. Pollution dispersion models assume that the concentrations decrease monotonically with rising wind speed. This convenient assumption breaks down when applied to flows with local recirculations such as those found in topographically complex coastal areas. This study looks at a practically important and sufficiently common case of air pollution in a coastal valley city. Here, the observed concentrations are determined by the interaction between large-scale topographically forced and local-scale breeze-like recirculations. Analysis of a long observational dataset in Bergen, Norway, revealed that the most extreme cases of recurring wintertime air pollution episodes were accompanied by increased large-scale wind speeds above the valley. Contrary to the theoretical assumption and intuitive expectations, the maximum NO2 concentrations were not found for the lowest 10 m ERA-Interim wind speeds but in situations with wind speeds of 3 m s^-1. To explain this phenomenon, we investigated empirical relationships between the large-scale forcing and the local wind and air quality parameters. We conducted 16 large-eddy simulation (LES) experiments with the Parallelised Large-Eddy Simulation Model (PALM) for atmospheric and oceanic flows. The LES accounted for the realistic relief and coastal configuration as well as for the large-scale forcing and local surface condition heterogeneity in Bergen. They revealed that emerging local breeze-like circulations strongly enhance the urban ventilation and dispersion of the air pollutants in situations with weak large-scale winds. Slightly stronger large-scale winds, however, can counteract these local recirculations, leading to enhanced surface air stagnation. Furthermore, this study looks at the concrete impact of the relative configuration of warmer water bodies in the city and the major transport corridor. We found that a relatively small local water body acted as a barrier for the horizontal transport of air pollutants from the largest street in the valley and along the valley bottom, transporting them vertically instead and hence diluting them. We found that the stable stratification accumulates the street-level pollution from the transport corridor in shallow air pockets near the surface. The polluted air pockets are transported by the local recirculations to other less polluted areas with only slow dilution. This combination of relatively long distance and complex transport paths together with weak dispersion is not sufficiently resolved in classical air pollution models. The findings have important implications for air quality predictions over urban areas. Any prediction not resolving these, or similar local dynamic features, might not be able to correctly simulate the dispersion of pollutants in cities.

  16. Predicted deep-sea coral habitat suitability for the U.S. West coast.

    PubMed

    Guinotte, John M; Davies, Andrew J

    2014-01-01

    Regional scale habitat suitability models provide finer scale resolution and more focused predictions of where organisms may occur. Previous modelling approaches have focused primarily on local and/or global scales, while regional scale models have been relatively few. In this study, regional scale predictive habitat models are presented for deep-sea corals for the U.S. West Coast (California, Oregon and Washington). Model results are intended to aid in future research or mapping efforts and to assess potential coral habitat suitability both within and outside existing bottom trawl closures (i.e. Essential Fish Habitat (EFH)) and identify suitable habitat within U.S. National Marine Sanctuaries (NMS). Deep-sea coral habitat suitability was modelled at 500 m×500 m spatial resolution using a range of physical, chemical and environmental variables known or thought to influence the distribution of deep-sea corals. Using a spatial partitioning cross-validation approach, maximum entropy models identified slope, temperature, salinity and depth as important predictors for most deep-sea coral taxa. Large areas of highly suitable deep-sea coral habitat were predicted both within and outside of existing bottom trawl closures and NMS boundaries. Predicted habitat suitability over regional scales are not currently able to identify coral areas with pin point accuracy and probably overpredict actual coral distribution due to model limitations and unincorporated variables (i.e. data on distribution of hard substrate) that are known to limit their distribution. Predicted habitat results should be used in conjunction with multibeam bathymetry, geological mapping and other tools to guide future research efforts to areas with the highest probability of harboring deep-sea corals. Field validation of predicted habitat is needed to quantify model accuracy, particularly in areas that have not been sampled.

  18. Predicting ecosystem dynamics at regional scales: an evaluation of a terrestrial biosphere model for the forests of northeastern North America.

    PubMed

    Medvigy, David; Moorcroft, Paul R

    2012-01-19

    Terrestrial biosphere models are important tools for diagnosing both the current state of the terrestrial carbon cycle and forecasting terrestrial ecosystem responses to global change. While there are a number of ongoing assessments of the short-term predictive capabilities of terrestrial biosphere models using flux-tower measurements, to date there have been relatively few assessments of their ability to predict longer term, decadal-scale biomass dynamics. Here, we present the results of a regional-scale evaluation of the Ecosystem Demography version 2 (ED2)-structured terrestrial biosphere model, evaluating the model's predictions against forest inventory measurements for the northeast USA and Quebec from 1985 to 1995. Simulations were conducted using a default parametrization, which used parameter values from the literature, and a constrained model parametrization, which had been developed by constraining the model's predictions against 2 years of measurements from a single site, Harvard Forest (42.5° N, 72.1° W). The analysis shows that the constrained model parametrization offered marked improvements over the default model formulation, capturing large-scale variation in patterns of biomass dynamics despite marked differences in climate forcing, land-use history and species-composition across the region. These results imply that data-constrained parametrizations of structured biosphere models such as ED2 can be successfully used for regional-scale ecosystem prediction and forecasting. We also assess the model's ability to capture sub-grid scale heterogeneity in the dynamics of biomass growth and mortality of different sizes and types of trees, and then discuss the implications of these analyses for further reducing the remaining biases in the model's predictions.

  19. Molecular Structure-Based Large-Scale Prediction of Chemical-Induced Gene Expression Changes.

    PubMed

    Liu, Ruifeng; AbdulHameed, Mohamed Diwan M; Wallqvist, Anders

    2017-09-25

    The quantitative structure-activity relationship (QSAR) approach has been used to model a wide range of chemical-induced biological responses. However, it had not been utilized to model chemical-induced genomewide gene expression changes until very recently, owing to the complexity of training and evaluating a very large number of models. To address this issue, we examined the performance of a variable nearest neighbor (v-NN) method that uses information on near neighbors conforming to the principle that similar structures have similar activities. Using a data set of gene expression signatures of 13 150 compounds derived from cell-based measurements in the NIH Library of Integrated Network-based Cellular Signatures program, we were able to make predictions for 62% of the compounds in a 10-fold cross validation test, with a correlation coefficient of 0.61 between the predicted and experimentally derived signatures-a reproducibility rivaling that of high-throughput gene expression measurements. To evaluate the utility of the predicted gene expression signatures, we compared the predicted and experimentally derived signatures in their ability to identify drugs known to cause specific liver, kidney, and heart injuries. Overall, the predicted and experimentally derived signatures had similar receiver operating characteristics, whose areas under the curve ranged from 0.71 to 0.77 and 0.70 to 0.73, respectively, across the three organ injury models. However, detailed analyses of enrichment curves indicate that signatures predicted from multiple near neighbors outperformed those derived from experiments, suggesting that averaging information from near neighbors may help improve the signal from gene expression measurements. Our results demonstrate that the v-NN method can serve as a practical approach for modeling large-scale, genomewide, chemical-induced, gene expression changes.
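
    A minimal sketch of the variable nearest-neighbor idea as described (similar structures have similar activities) might look like the following: predict a compound's gene expression signature as the similarity-weighted average of the signatures of its structural near neighbors, and abstain when no neighbor exceeds the similarity threshold. The fingerprints, threshold and weighting here are illustrative assumptions, not the paper's exact settings.

      import numpy as np

      def tanimoto(a, b):
          # Tanimoto similarity of two binary fingerprint vectors
          ab = np.minimum(a, b).sum()
          return ab / (a.sum() + b.sum() - ab)

      def vnn_predict(query_fp, train_fps, train_signatures, t=0.6):
          # similarity-weighted average signature over neighbors with similarity >= t;
          # returns None (no prediction made) if no training compound is similar enough
          sims = np.array([tanimoto(query_fp, fp) for fp in train_fps])
          keep = sims >= t
          if not keep.any():
              return None
          w = sims[keep]
          return (w[:, None] * train_signatures[keep]).sum(axis=0) / w.sum()

      # synthetic stand-in data: 100 compounds, 256-bit fingerprints, 978-gene signatures
      rng = np.random.default_rng(0)
      train_fps = (rng.random((100, 256)) > 0.9).astype(int)
      train_sigs = rng.standard_normal((100, 978))
      pred = vnn_predict(train_fps[0], train_fps, train_sigs)

    Declining to predict when no sufficiently similar neighbor exists is what limits coverage (62% of compounds in the paper's cross-validation) while keeping the predictions reliable.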

  20. A Comparison of Hybrid Reynolds Averaged Navier Stokes/Large Eddy Simulation (RANS/LES) and Unsteady RANS Predictions of Separated Flow for a Variable Speed Power Turbine Blade Operating with Low Inlet Turbulence Levels

    DTIC Science & Technology

    2017-10-01

    The facility is a large-scale cascade that allows detailed flow field surveys and blade surface measurements [10-12]. The facility has a continuous run ... structured grids at 2 flow conditions, cruise and takeoff, of the VSPT blade. Computations were run in parallel on a Department of Defense ...

  1. Multi-scale enhancement of climate prediction over land by improving the model sensitivity to vegetation variability

    NASA Astrophysics Data System (ADS)

    Alessandri, A.; Catalano, F.; De Felice, M.; Hurk, B. V. D.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2017-12-01

    Here we demonstrate, for the first time, that the implementation of a realistic representation of vegetation in Earth System Models (ESMs) can significantly improve climate simulation and prediction across multiple time-scales. The effective sub-grid vegetation fractional coverage varies seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence, and therefore affects biophysical parameters such as the surface resistance to evapotranspiration, albedo, roughness length, and soil field capacity. To adequately represent this effect in the EC-Earth ESM, we included an exponential dependence of the vegetation cover on the Leaf Area Index. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th century) simulations and retrospective predictions to the decadal (5-year), seasonal (2-4 month) and weather (4 day) time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown in boreal winter over the middle-to-high latitudes of Canada, the western US, Eastern Europe, Russia and eastern Siberia, due to the implemented time-varying shadowing effect of tree vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions, the improved representation of vegetation cover consistently corrects the winter warm biases and improves the climate change sensitivity, the decadal potential predictability, and the skill of forecasts at seasonal and weather time-scales. Significant improvements in the prediction of 2 m temperature and rainfall are also shown over transitional land-surface hot spots. Both the potential predictability at the decadal time-scale and the seasonal-forecast skill are enhanced over the Sahel, the North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved representation of surface evapotranspiration. These results are discussed in a peer-reviewed paper recently accepted for publication in Climate Dynamics (Alessandri et al., 2017; doi:10.1007/s00382-017-3766-y).
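
    The exponential dependence mentioned above is commonly written in a Lambert-Beer form (the coefficient actually used in EC-Earth may differ; this is an illustrative form only):

      C_eff = C_max (1 - exp(-k · LAI))

    where C_max is the maximum vegetation fraction of the grid cell, LAI the leaf area index, and k an extinction coefficient typically around 0.5; as the canopy grows or senesces, C_eff and the associated albedo, roughness length and evapotranspiration resistance follow the LAI.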

  2. Mesh refinement in a two-dimensional large eddy simulation of a forced shear layer

    NASA Technical Reports Server (NTRS)

    Claus, R. W.; Huang, P. G.; Macinnes, J. M.

    1989-01-01

    A series of large eddy simulations of a forced shear layer was carried out and compared with experimental data. Several mesh densities were examined to separate the effect of numerical inaccuracy from modeling deficiencies. The turbulence model that was used to represent small-scale, 3-D motions correctly predicted some gross features of the flow field, but appears to be structurally incorrect. The main effect of mesh refinement was to act as a filter on the scale of vortices that developed from the inflow boundary conditions.

  3. Filter size definition in anisotropic subgrid models for large eddy simulation on irregular grids

    NASA Astrophysics Data System (ADS)

    Abbà, Antonella; Campaniello, Dario; Nini, Michele

    2017-06-01

    The definition of the characteristic filter size to be used in subgrid-scale models for large eddy simulation on irregular grids remains an open problem. We investigate several approaches to defining the filter length for anisotropic subgrid-scale models and propose a tensorial formulation based on the inertia ellipsoid of the grid element. The results demonstrate an improvement in the prediction of several key features of the flow when the anisotropy of the grid is explicitly taken into account through the tensorial filter size.
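
    As a rough, hedged illustration of what a tensorial filter size built from the inertia ellipsoid of a grid element could look like, the sketch below forms a symmetric filter tensor from the second moments of an element's vertices. It is our own schematic, not the authors' formulation; the factor of two used to turn eigenvalues into axis lengths is an arbitrary normalization.

```python
import numpy as np

def filter_tensor(vertices):
    """vertices: (n, 3) array of element node coordinates."""
    r = vertices - vertices.mean(axis=0)
    m = r.T @ r / len(vertices)                 # second-moment (inertia-like) tensor
    eigval, eigvec = np.linalg.eigh(m)
    widths = 2.0 * np.sqrt(eigval)              # ellipsoid semi-axes as directional widths
    return eigvec @ np.diag(widths) @ eigvec.T  # symmetric filter-size tensor

# example: a hexahedral element stretched 4:1 in the x direction
hexa = np.array([[x, y, z] for x in (0.0, 4.0) for y in (0.0, 1.0) for z in (0.0, 1.0)])
print(filter_tensor(hexa))                      # directional widths of roughly 4, 1, 1
```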

  4. On the interest of combining an analog model to a regression model for the adaptation of the downscaling link. Application to probabilistic prediction of precipitation over France.

    NASA Astrophysics Data System (ADS)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2016-04-01

    Scenarios of surface weather required for impact studies have to be unbiased and adapted to the space and time scales of the considered hydro-systems. Surface weather scenarios obtained from global climate models and/or numerical weather prediction models are therefore not directly appropriate: outputs of these models have to be post-processed, which is often done with Statistical Downscaling Methods (SDMs). Among SDMs, approaches based on regression are often applied. For a given station, a regression link can be established between a set of large-scale atmospheric predictors and the surface weather variable, and this link is then used for prediction. However, the physical processes generating surface weather vary in time; this is well known for precipitation, for instance. The most relevant predictors and the regression link are thus also likely to vary in time, and better prediction skill is classically obtained with a seasonal stratification of the data. Another strategy is to identify the most relevant predictor set and establish the regression link from dates that are similar, or analog, to the target date. In practice, these dates can be selected with an analog model. In this study, we explore the possibility of improving the local performance of an analog model, where the analogy is applied to the 1000 and 500 hPa geopotential heights, using additional local-scale predictors for the probabilistic prediction of the Safran precipitation over France. For each prediction day, the prediction is obtained from two GLM regression models, for the occurrence and the quantity of precipitation, whose predictors and parameters are estimated from the analog dates. Firstly, the resulting combined model noticeably increases prediction performance by adapting the downscaling link for each prediction day. Secondly, the selected predictors for a given prediction depend on the large-scale situation and on the considered region. Finally, even with such an adaptive predictor identification, the downscaling link appears to be robust: for a given prediction day, predictors selected for different locations of a given region are similar and the regression parameters are consistent within the region of interest.
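
    A minimal sketch of the combined analog/regression idea is given below, under assumed data shapes: analog dates are selected by similarity of the large-scale geopotential fields, and day-specific models for precipitation occurrence and amount are then fitted on those dates only. The function names, wet-day threshold, number of analogs, and the use of scikit-learn estimators in place of a formal GLM implementation are illustrative choices, not the authors' setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def fit_analog_glms(target_geo, geo_archive, predictors, precip,
                    n_analogs=50, wet_threshold=0.1):
    # 1) analogy on the large-scale fields (e.g. geopotential anomaly patterns)
    dist = np.linalg.norm(geo_archive - target_geo, axis=1)
    analogs = np.argsort(dist)[:n_analogs]

    # 2) occurrence model fitted on the analog dates only
    wet = precip[analogs] > wet_threshold
    occ_model = LogisticRegression().fit(predictors[analogs], wet.astype(int))

    # 3) amount model fitted on the wet analog dates (log-transformed totals)
    wet_days = analogs[wet]
    amt_model = LinearRegression().fit(predictors[wet_days], np.log(precip[wet_days]))
    return occ_model, amt_model

# synthetic archive: 1000 days, 20 geopotential features, 5 local-scale predictors
rng = np.random.default_rng(1)
geo = rng.normal(size=(1000, 20))
local = rng.normal(size=(1000, 5))
rain = np.where(rng.random(1000) < 0.4, rng.gamma(2.0, 3.0, size=1000), 0.0)
occ_m, amt_m = fit_analog_glms(geo[0], geo, local, rain)
print(occ_m.predict_proba(local[:1]), amt_m.predict(local[:1]))
```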

  5. Ecologic and Geographic Distribution of Filovirus Disease

    PubMed Central

    Bauer, John T.; Mills, James N.

    2004-01-01

    We used ecologic niche modeling of outbreaks and sporadic cases of filovirus-associated hemorrhagic fever (HF) to provide a large-scale perspective on the geographic and ecologic distributions of Ebola and Marburg viruses. We predicted that filovirus would occur across the Afrotropics: Ebola HF in the humid rain forests of central and western Africa, and Marburg HF in the drier and more open areas of central and eastern Africa. Most of the predicted geographic extent of Ebola HF has been observed; Marburg HF has the potential to occur farther south and east. Ecologic conditions appropriate for Ebola HF are also present in Southeast Asia and the Philippines, where Ebola Reston is hypothesized to be distributed. This first large-scale ecologic analysis provides a framework for a more informed search for taxa that could constitute the natural reservoir for this virus family. PMID:15078595

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giacinti, Gwenael; Kirk, John G.

    We calculate the large-scale cosmic-ray (CR) anisotropies predicted for a range of Goldreich–Sridhar (GS) and isotropic models of interstellar turbulence, and compare them with IceTop data. In general, the predicted CR anisotropy is not a pure dipole; the cold spots reported at 400 TeV and 2 PeV are consistent with a GS model that contains a smooth deficit of parallel-propagating waves and a broad resonance function, though some other possibilities cannot, as yet, be ruled out. In particular, isotropic fast magnetosonic wave turbulence can match the observations at high energy, but cannot accommodate an energy dependence in the shape of the CR anisotropy. Our findings suggest that improved data on the large-scale CR anisotropy could provide a valuable probe of the properties, notably the power spectrum, of the interstellar turbulence within a few tens of parsecs from Earth.

  7. Regression-based season-ahead drought prediction for southern Peru conditioned on large-scale climate variables

    NASA Astrophysics Data System (ADS)

    Mortensen, Eric; Wu, Shu; Notaro, Michael; Vavrus, Stephen; Montgomery, Rob; De Piérola, José; Sánchez, Carlos; Block, Paul

    2018-01-01

    Located at a complex topographic, climatic, and hydrologic crossroads, southern Peru is a semiarid region that exhibits high spatiotemporal variability in precipitation. The economic viability of the region hinges on this water, yet southern Peru is prone to water scarcity caused by seasonal meteorological drought. Meteorological droughts in this region are often triggered during El Niño episodes; however, other large-scale climate mechanisms also play a noteworthy role in controlling the region's hydrologic cycle. An extensive season-ahead precipitation prediction model is developed to help bolster the existing capacity of stakeholders to plan for and mitigate deleterious impacts of drought. In addition to existing climate indices, large-scale climatic variables, such as sea surface temperature, are investigated to identify potential drought predictors. A principal component regression framework is applied to 11 potential predictors to produce an ensemble forecast of regional January-March precipitation totals. Model hindcasts of 51 years, compared to climatology and another model conditioned solely on an El Niño-Southern Oscillation index, achieve notable skill and perform better for several metrics, including ranked probability skill score and a hit-miss statistic. The information provided by the developed model and ancillary modeling efforts, such as extending the lead time of and spatially disaggregating precipitation predictions to the local level as well as forecasting the number of wet-dry days per rainy season, may further assist regional stakeholders and policymakers in preparing for drought.
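
    The principal component regression framework described above can be sketched as follows. The toy code compresses a set of candidate predictors into leading components, regresses seasonal totals on them, and scores leave-one-year-out hindcasts with a simple correlation. The 11 predictors and 51 hindcast years follow the abstract; the number of retained components, the synthetic data, and the skill metric are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
years, n_pred = 51, 11
X = rng.normal(size=(years, n_pred))                   # candidate large-scale predictors
y = 2.0 * X[:, 0] - X[:, 3] + rng.normal(size=years)   # synthetic JFM precipitation totals

hindcasts = []
for left_out in range(years):
    train = np.delete(np.arange(years), left_out)
    pca = PCA(n_components=3).fit(X[train])             # leading principal components
    model = LinearRegression().fit(pca.transform(X[train]), y[train])
    hindcasts.append(model.predict(pca.transform(X[[left_out]]))[0])

print(f"leave-one-out hindcast correlation: {np.corrcoef(hindcasts, y)[0, 1]:.2f}")
```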

  8. Prediction of Soil Organic Carbon at the European Scale by Visible and Near InfraRed Reflectance Spectroscopy.

    PubMed

    Stevens, Antoine; Nocita, Marco; Tóth, Gergely; Montanarella, Luca; van Wesemael, Bas

    2013-01-01

    Soil organic carbon is a key soil property related to soil fertility, aggregate stability and the exchange of CO2 with the atmosphere. Existing soil maps and inventories can rarely be used to monitor the state and evolution in soil organic carbon content due to their poor spatial resolution, lack of consistency and high updating costs. Visible and Near Infrared diffuse reflectance spectroscopy is an alternative method to provide cheap and high-density soil data. However, there are still some uncertainties on its capacity to produce reliable predictions for areas characterized by large soil diversity. Using a large-scale EU soil survey of about 20,000 samples and covering 23 countries, we assessed the performance of reflectance spectroscopy for the prediction of soil organic carbon content. The best calibrations achieved a root mean square error ranging from 4 to 15 g C kg(-1) for mineral soils and a root mean square error of 50 g C kg(-1) for organic soil materials. Model errors are shown to be related to the levels of soil organic carbon and variations in other soil properties such as sand and clay content. Although errors are ∼5 times larger than the reproducibility error of the laboratory method, reflectance spectroscopy provides unbiased predictions of the soil organic carbon content. Such estimates could be used for assessing the mean soil organic carbon content of large geographical entities or countries. This study is a first step towards providing uniform continental-scale spectroscopic estimations of soil organic carbon, meeting an increasing demand for information on the state of the soil that can be used in biogeochemical models and the monitoring of soil degradation.

  9. Prediction of Soil Organic Carbon at the European Scale by Visible and Near InfraRed Reflectance Spectroscopy

    PubMed Central

    Stevens, Antoine; Nocita, Marco; Tóth, Gergely; Montanarella, Luca; van Wesemael, Bas

    2013-01-01

    Soil organic carbon is a key soil property related to soil fertility, aggregate stability and the exchange of CO2 with the atmosphere. Existing soil maps and inventories can rarely be used to monitor the state and evolution in soil organic carbon content due to their poor spatial resolution, lack of consistency and high updating costs. Visible and Near Infrared diffuse reflectance spectroscopy is an alternative method to provide cheap and high-density soil data. However, there are still some uncertainties on its capacity to produce reliable predictions for areas characterized by large soil diversity. Using a large-scale EU soil survey of about 20,000 samples and covering 23 countries, we assessed the performance of reflectance spectroscopy for the prediction of soil organic carbon content. The best calibrations achieved a root mean square error ranging from 4 to 15 g C kg−1 for mineral soils and a root mean square error of 50 g C kg−1 for organic soil materials. Model errors are shown to be related to the levels of soil organic carbon and variations in other soil properties such as sand and clay content. Although errors are ∼5 times larger than the reproducibility error of the laboratory method, reflectance spectroscopy provides unbiased predictions of the soil organic carbon content. Such estimates could be used for assessing the mean soil organic carbon content of large geographical entities or countries. This study is a first step towards providing uniform continental-scale spectroscopic estimations of soil organic carbon, meeting an increasing demand for information on the state of the soil that can be used in biogeochemical models and the monitoring of soil degradation. PMID:23840459

  10. A De-Novo Genome Analysis Pipeline (DeNoGAP) for large-scale comparative prokaryotic genomics studies.

    PubMed

    Thakur, Shalabh; Guttman, David S

    2016-06-30

    Comparative analysis of whole genome sequence data from closely related prokaryotic species or strains is becoming an increasingly important and accessible approach for addressing both fundamental and applied biological questions. While a number of excellent tools have been developed for performing this task, most scale poorly when faced with hundreds of genome sequences, and many require extensive manual curation. We have developed a de-novo genome analysis pipeline (DeNoGAP) for the automated, iterative and high-throughput analysis of data from comparative genomics projects involving hundreds of whole genome sequences. The pipeline is designed to perform reference-assisted and de novo gene prediction, homolog protein family assignment, ortholog prediction, functional annotation, and pan-genome analysis using a range of proven tools and databases. While most existing methods scale quadratically with the number of genomes, since they rely on pairwise comparisons among predicted protein sequences, DeNoGAP scales linearly, since the homology assignment is based on iteratively refined hidden Markov models. This iterative clustering strategy enables DeNoGAP to handle a very large number of genomes using minimal computational resources. Moreover, the modular structure of the pipeline permits easy updates as new analysis programs become available. DeNoGAP integrates bioinformatics tools and databases for comparative analysis of a large number of genomes. The pipeline offers tools and algorithms for annotation and analysis of completed and draft genome sequences. The pipeline is developed using Perl, BioPerl and SQLite on Ubuntu Linux version 12.04 LTS. The software package currently includes a script for automated installation of the necessary external programs on Ubuntu Linux; however, the pipeline should also be compatible with other Linux and Unix systems once the necessary external programs are installed. DeNoGAP is freely available at https://sourceforge.net/projects/denogap/.

  11. Predicting Large-scale Effects During Cookoff of Plastic-Bonded Explosives (PBX 9501 PBX 9502 and LX-14)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hobbs, Michael L.; Kaneshige, Michael J.; Erikson, William W.

    In this study, we have made reasonable cookoff predictions of large-scale explosive systems by using pressure-dependent kinetics determined from small-scale experiments. Scale-up is determined by properly accounting for pressure generated from gaseous decomposition products and the volume that these reactive gases occupy, e.g. trapped within the explosive, the system, or vented. The pressure effect on the decomposition rates has been determined for different explosives by using both vented and sealed experiments at low densities. Low-density explosives are usually permeable to decomposition gases and can be used in both vented and sealed configurations to determine pressure-dependent reaction rates. In contrast, explosives that are near the theoretical maximum density (TMD) are not as permeable to decomposition gases, and pressure-dependent kinetics are difficult to determine. Ignition in explosives at high densities can be predicted by using pressure-dependent rates determined from the low-density experiments as long as gas volume changes associated with bulk thermal expansion are also considered. In the current work, cookoff of the plastic-bonded explosives PBX 9501 and PBX 9502 is reviewed and new experimental work on LX-14 is presented. Reactive gases are formed inside these heated explosives causing large internal pressures. The pressure is released differently for each of these explosives. For PBX 9501, permeability is increased and internal pressure is relieved as the nitroplasticizer melts and decomposes. Internal pressure in PBX 9502 is relieved as the material is damaged by cracks and spalling. For LX-14, internal pressure is not relieved until the explosive thermally ignites. The current paper is an extension of work presented at the 26th ICDERS symposium [1].

  12. The Large Local Hole in the Galaxy Distribution: The 2MASS Galaxy Angular Power Spectrum

    NASA Astrophysics Data System (ADS)

    Frith, W. J.; Outram, P. J.; Shanks, T.

    2005-06-01

    We present new evidence for a large deficiency in the local galaxy distribution situated in the ˜4000 deg² APM survey area. We use models guided by the 2dF Galaxy Redshift Survey (2dFGRS) n(z) as a probe of the underlying large-scale structure. We first check the usefulness of this technique by comparing the 2dFGRS n(z) model prediction with the K-band and B-band number counts extracted from the 2MASS and 2dFGRS parent catalogues over the 2dFGRS Northern and Southern declination strips, before turning to a comparison with the APM counts. We find that the APM counts in both the B and K bands indicate a deficiency in the local galaxy distribution of ˜30% to z ≈ 0.1 over the entire APM survey area. We examine the implied significance of such a large local hole, considering several possible forms for the real-space correlation function. We find that such a deficiency in the APM survey area indicates an excess of power at large scales over what is expected from the correlation function observed in the 2dFGRS or predicted from ΛCDM Hubble Volume mock catalogues. In order to check further the clustering at large scales in the 2MASS data, we have calculated the angular power spectrum for 2MASS galaxies. Although in the linear regime (l<30) ΛCDM models can give a good fit to the 2MASS angular power spectrum, over a wider range (l<100) the power spectrum from Hubble Volume mock catalogues suggests that scale-dependent bias may be needed for ΛCDM to fit. However, the modest increase in large-scale power observed in the 2MASS angular power spectrum is still not enough to explain the local hole. If the APM survey area really is 25% deficient in galaxies out to z ≈ 0.1, explanations for the disagreement with observed galaxy clustering statistics include the possibilities that the galaxy clustering is non-Gaussian on large scales or that the 2MASS volume is still too small to represent a 'fair sample' of the Universe. Extending the 2dFGRS redshift survey over the whole APM area would resolve many of the remaining questions about the existence and interpretation of this local hole.

  13. Module discovery by exhaustive search for densely connected, co-expressed regions in biomolecular interaction networks.

    PubMed

    Colak, Recep; Moser, Flavia; Chu, Jeffrey Shih-Chieh; Schönhuth, Alexander; Chen, Nansheng; Ester, Martin

    2010-10-25

    Computational prediction of functionally related groups of genes (functional modules) from large-scale data is an important issue in computational biology. Gene expression experiments and interaction networks are well studied large-scale data sources, available for many not yet exhaustively annotated organisms. It has been well established that, when these two data sources are analyzed jointly, modules are often reflected by highly interconnected (dense) regions in the interaction networks whose participating genes are co-expressed. However, the tractability of the problem had remained unclear, and methods by which to exhaustively search for such constellations had not been presented. We provide an algorithmic framework, referred to as Densely Connected Biclustering (DECOB), by which the aforementioned search problem becomes tractable. To benchmark the predictive power inherent to the approach, we computed all co-expressed, dense regions in physical protein and genetic interaction networks from human and yeast. An automated filtering procedure reduces our output to smaller collections of modules, comparable to state-of-the-art approaches. Our results performed favorably in a fair benchmarking competition which adheres to standard criteria. We demonstrate the usefulness of an exhaustive module search by using the unreduced output to more quickly perform GO-term-related function prediction tasks. We point out the advantages of our exhaustive output by predicting functional relationships in two examples. We demonstrate that the computation of all densely connected and co-expressed regions in interaction networks is an approach to module discovery of considerable value. Beyond confirming the well-established hypothesis that such co-expressed, densely connected interaction network regions reflect functional modules, we open up novel computational ways to comprehensively analyze the modular organization of an organism based on prevalent and largely available large-scale datasets. Software and data sets are available at http://www.sfu.ca/~ester/software/DECOB.zip.
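
    The two criteria that DECOB combines, interaction-network density and co-expression, can be conveyed with a small scoring function. The sketch below is not the exhaustive search algorithm itself; the toy graph, expression vectors, and any threshold one might apply to the scores are placeholders.

```python
import itertools
import numpy as np
import networkx as nx

def module_score(genes, graph, expr):
    """Return (edge density, mean pairwise co-expression) for a candidate gene set."""
    density = nx.density(graph.subgraph(genes))
    corrs = [np.corrcoef(expr[a], expr[b])[0, 1]
             for a, b in itertools.combinations(genes, 2)]
    return density, float(np.mean(corrs))

# toy interaction network and expression profiles over 10 conditions
g = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")])
rng = np.random.default_rng(2)
expr = {gene: rng.normal(size=10) for gene in g.nodes}
print(module_score(["A", "B", "C"], g, expr))
```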

  14. Stream Discharge and Evapotranspiration Responses to Climate Change and Their Associated Uncertainties in a Large Semi-Arid Basin

    NASA Astrophysics Data System (ADS)

    Bassam, S.; Ren, J.

    2017-12-01

    Predicting future water availability in watersheds is very important for proper water resources management, especially in semi-arid regions with scarce water resources. Hydrological models have been considered as powerful tools in predicting future hydrological conditions in watershed systems in the past two decades. Streamflow and evapotranspiration are the two important components in watershed water balance estimation as the former is the most commonly-used indicator of the overall water budget estimation, and the latter is the second biggest component of water budget (biggest outflow from the system). One of the main concerns in watershed scale hydrological modeling is the uncertainties associated with model prediction, which could arise from errors in model parameters and input meteorological data, or errors in model representation of the physics of hydrological processes. Understanding and quantifying these uncertainties are vital to water resources managers for proper decision making based on model predictions. In this study, we evaluated the impacts of different climate change scenarios on the future stream discharge and evapotranspiration, and their associated uncertainties, throughout a large semi-arid basin using a stochastically-calibrated, physically-based, semi-distributed hydrological model. The results of this study could provide valuable insights in applying hydrological models in large scale watersheds, understanding the associated sensitivity and uncertainties in model parameters, and estimating the corresponding impacts on interested hydrological process variables under different climate change scenarios.

  15. Large-Scale Brain Network Coupling Predicts Acute Nicotine Abstinence Effects on Craving and Cognitive Function

    PubMed Central

    Lerman, Caryn; Gu, Hong; Loughead, James; Ruparel, Kosha; Yang, Yihong; Stein, Elliot A.

    2014-01-01

    IMPORTANCE Interactions of large-scale brain networks may underlie cognitive dysfunctions in psychiatric and addictive disorders. OBJECTIVES To test the hypothesis that the strength of coupling among 3 large-scale brain networks (salience, executive control, and default mode) will reflect the state of nicotine withdrawal (vs smoking satiety) and will predict abstinence-induced craving and cognitive deficits, and to develop a resource allocation index (RAI) that reflects the combined strength of interactions among the 3 large-scale networks. DESIGN, SETTING, AND PARTICIPANTS A within-subject functional magnetic resonance imaging study in an academic medical center compared resting-state functional connectivity coherence strength after 24 hours of abstinence and after smoking satiety. We examined the relationship of abstinence-induced changes in the RAI with alterations in subjective, behavioral, and neural functions. We included 37 healthy smoking volunteers, aged 19 to 61 years, for analyses. INTERVENTIONS Twenty-four hours of abstinence vs smoking satiety. MAIN OUTCOMES AND MEASURES Inter-network connectivity strength (primary) and the relationship with subjective, behavioral, and neural measures of nicotine withdrawal during abstinence vs smoking satiety states (secondary). RESULTS The RAI was significantly lower in the abstinent compared with the smoking satiety states (left RAI, P = .002; right RAI, P = .04), suggesting weaker inhibition between the default mode and salience networks. Weaker inter-network connectivity (reduced RAI) predicted abstinence-induced cravings to smoke (r = −0.59; P = .007) and less suppression of default mode activity during performance of a subsequent working memory task (ventromedial prefrontal cortex, r = −0.66, P = .003; posterior cingulate cortex, r = −0.65, P = .001). CONCLUSIONS AND RELEVANCE Alterations in coupling of the salience and default mode networks and the inability to disengage from the default mode network may be critical in cognitive/affective alterations that underlie nicotine dependence.

  16. Scale dependence of halo bispectrum from non-Gaussian initial conditions in cosmological N-body simulations

    NASA Astrophysics Data System (ADS)

    Nishimichi, Takahiro; Taruya, Atsushi; Koyama, Kazuya; Sabiu, Cristiano

    2010-07-01

    We study the halo bispectrum from non-Gaussian initial conditions. Based on a set of large N-body simulations starting from initial density fields with local-type non-Gaussianity, we find that the halo bispectrum exhibits a strong dependence on the shape and scale of Fourier space triangles near squeezed configurations at large scales. The amplitude of the halo bispectrum roughly scales as f_NL^2. The resultant scaling on the triangular shape is consistent with that predicted by Jeong & Komatsu based on perturbation theory. We systematically investigate this dependence with varying redshifts and halo mass thresholds. It is shown that the f_NL dependence of the halo bispectrum is stronger for more massive haloes at higher redshifts. This feature can be a useful discriminator of inflation scenarios in future deep and wide galaxy redshift surveys.

  17. Scale-dependent habitat use by a large free-ranging predator, the Mediterranean fin whale

    NASA Astrophysics Data System (ADS)

    Cotté, Cédric; Guinet, Christophe; Taupier-Letage, Isabelle; Mate, Bruce; Petiau, Estelle

    2009-05-01

    Since the heterogeneity of oceanographic conditions drives abundance, distribution, and availability of prey, it is essential to understand how foraging predators interact with their dynamic environment at various spatial and temporal scales. We examined the spatio-temporal relationships between oceanographic features and abundance of fin whales (Balaenoptera physalus), the largest free-ranging predator in the Western Mediterranean Sea (WM), through two independent approaches. First, spatial modeling was used to estimate whale density, using waiting distance (the distance between detections) for fin whales along ferry routes across the WM, in relation to remotely sensed oceanographic parameters. At a large scale (basin and year), fin whales exhibited fidelity to the northern WM with a summer-aggregated and winter-dispersed pattern. At mesoscale (20-100 km), whales were found in colder, saltier (from an on-board system) and dynamic areas defined by steep altimetric and temperature gradients. Second, using an independent fin whale satellite tracking dataset, we showed that tracked whales were indeed preferentially located in favorable habitats, i.e. in areas of high predicted densities as identified by our previous model using oceanographic data contemporaneous with the tracking period. We suggest that the large-scale fidelity corresponds to the temporally and spatially predictable habitat of the whales' favorite prey, the northern krill (Meganyctiphanes norvegica), while mesoscale relationships are likely to identify areas of high prey concentration and availability.

  18. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks

    PubMed Central

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-01-01

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction. PMID:28672867
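
    A toy model in the spirit of the architecture described above (a per-frame convolutional feature extractor feeding an LSTM) is sketched below in PyTorch. It is not the authors' SRCN implementation: the grid size, channel counts, hidden size, and single-step forecasting head are arbitrary choices made for illustration.

```python
import torch
import torch.nn as nn

class TinySRCN(nn.Module):
    def __init__(self, grid=32, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                            # grid -> grid/2
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())      # 16 * 4 * 4 features per frame
        self.lstm = nn.LSTM(16 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, grid * grid)      # predict the next speed image

    def forward(self, x):                               # x: (batch, time, 1, grid, grid)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)   # spatial features per frame
        out, _ = self.lstm(feats)                           # temporal dynamics
        return self.head(out[:, -1])                        # forecast from the last step

x = torch.randn(2, 6, 1, 32, 32)                        # 2 samples, 6 past frames
print(TinySRCN()(x).shape)                              # torch.Size([2, 1024])
```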

  19. Spatiotemporal Recurrent Convolutional Networks for Traffic Prediction in Transportation Networks.

    PubMed

    Yu, Haiyang; Wu, Zhihai; Wang, Shuqin; Wang, Yunpeng; Ma, Xiaolei

    2017-06-26

    Predicting large-scale transportation network traffic has become an important and challenging topic in recent decades. Inspired by the domain knowledge of motion prediction, in which the future motion of an object can be predicted based on previous scenes, we propose a network grid representation method that can retain the fine-scale structure of a transportation network. Network-wide traffic speeds are converted into a series of static images and input into a novel deep architecture, namely, spatiotemporal recurrent convolutional networks (SRCNs), for traffic forecasting. The proposed SRCNs inherit the advantages of deep convolutional neural networks (DCNNs) and long short-term memory (LSTM) neural networks. The spatial dependencies of network-wide traffic can be captured by DCNNs, and the temporal dynamics can be learned by LSTMs. An experiment on a Beijing transportation network with 278 links demonstrates that SRCNs outperform other deep learning-based algorithms in both short-term and long-term traffic prediction.

  20. Ecology, distribution, and predictive occurrence modeling of Palmer's chipmunk (Tamias palmeri): a high-elevation small mammal endemic to the Spring Mountains in southern Nevada, USA

    USGS Publications Warehouse

    Lowrey, Chris E.; Longshore, Kathleen M.; Riddle, Brett R.; Mantooth, Stacy

    2016-01-01

    Although montane sky islands surrounded by desert scrub and shrub steppe comprise a large part of the biological diversity of the Basin and Range Province of southwestern North America, comprehensive ecological and population demographic studies of high-elevation small mammals within these areas are rare. Here, we examine the ecology and population parameters of Palmer's chipmunk (Tamias palmeri) in the Spring Mountains of southern Nevada, and present a predictive GIS-based distribution and probability-of-occurrence model at both home range and geographic spatial scales. Logistic regression analyses and Akaike Information Criterion model selection identified forest type, slope, and distance to water sources as predictive of chipmunk occurrence at the geographic scale. At the home range scale, increasing population density, decreasing overstory canopy cover, and decreasing understory canopy cover contributed to increased survival rates.

  1. Evaluating scaling models in biology using hierarchical Bayesian approaches

    PubMed Central

    Price, Charles A; Ogle, Kiona; White, Ethan P; Weitz, Joshua S

    2009-01-01

    Theoretical models for allometric relationships between organismal form and function are typically tested by comparing a single predicted relationship with empirical data. Several prominent models, however, predict more than one allometric relationship, and comparisons among alternative models have not taken this into account. Here we evaluate several different scaling models of plant morphology within a hierarchical Bayesian framework that simultaneously fits multiple scaling relationships to three large allometric datasets. The scaling models include: inflexible universal models derived from biophysical assumptions (e.g. elastic similarity or fractal networks), a flexible variation of a fractal network model, and a highly flexible model constrained only by basic algebraic relationships. We demonstrate that variation in intraspecific allometric scaling exponents is inconsistent with the universal models, and that more flexible approaches that allow for biological variability at the species level outperform universal models, even when accounting for relative increases in model complexity. PMID:19453621

  2. Unchained Melody: Revisiting the Estimation of SF-6D Values

    PubMed Central

    Craig, Benjamin M.

    2015-01-01

    Purpose In the original SF-6D valuation study, the analytical design inherited conventions that detrimentally affected its ability to predict values on a quality-adjusted life year (QALY) scale. Our objective is to estimate UK values for SF-6D states using the original data and multi-attribute utility (MAU) regression after addressing its limitations and to compare the revised SF-6D and EQ-5D value predictions. Methods Using the unaltered data (611 respondents, 3503 SG responses), the parameters of the original MAU model were re-estimated under 3 alternative error specifications, known as the instant, episodic, and angular random utility models. Value predictions on a QALY scale were compared to EQ-5D-3L predictions using the 1996 Health Survey for England. Results Contrary to the original results, the revised SF-6D value predictions range below 0 QALYs (i.e., worse than death) and agree largely with EQ-5D predictions after adjusting for scale. Although a QALY is defined as a year in optimal health, the SF-6D sets a higher standard for optimal health than the EQ-5D-3L; therefore, it has larger units on a QALY scale by construction (20.9% more). Conclusions Much of the debate in health valuation has focused on differences between preference elicitation tasks, sampling, and instruments. After correcting errant econometric practices and adjusting for differences in QALY scale between the EQ-5D and SF-6D values, the revised predictions demonstrate convergent validity, making them more suitable for UK economic evaluations compared to original estimates.

  3. Insect density-plant density relationships: a modified view of insect responses to resource concentrations.

    PubMed

    Andersson, Petter; Löfstedt, Christer; Hambäck, Peter A

    2013-12-01

    Habitat area is an important predictor of spatial variation in animal densities. However, the area often correlates with the quantity of resources within habitats, complicating our understanding of the factors shaping animal distributions. We addressed this problem by investigating densities of insect herbivores in habitat patches with a constant area but varying numbers of plants. Using a mathematical model, predictions of scale-dependent immigration and emigration rates for insects into patches with different densities of host plants were derived. Moreover, a field experiment was conducted where the scaling properties of odour-mediated attraction in relation to the number of odour sources were estimated, in order to derive a prediction of immigration rates of olfactory searchers. The theoretical model predicted that we should expect immigration rates of contact and visual searchers to be determined by patch area, with a steep scaling coefficient, μ = -1. The field experiment suggested that olfactory searchers should show a less steep scaling coefficient, with μ ≈ -0.5. A parameter estimation and analysis of published data revealed a correspondence between observations and predictions, and density-variation among groups could largely be explained by search behaviour. Aphids showed scaling coefficients corresponding to the prediction for contact/visual searchers, whereas moths, flies and beetles corresponded to the prediction for olfactory searchers. As density responses varied considerably among groups, and variation could be explained by a certain trait, we conclude that a general theory of insect responses to habitat heterogeneity should be based on shared traits, rather than a general prediction for all species.

  4. Estimated allele substitution effects underlying genomic evaluation models depend on the scaling of allele counts.

    PubMed

    Bouwman, Aniek C; Hayes, Ben J; Calus, Mario P L

    2017-10-30

    Genomic evaluation is used to predict direct genomic values (DGV) for selection candidates in breeding programs, but also to estimate allele substitution effects (ASE) of single nucleotide polymorphisms (SNPs). Scaling of allele counts influences the estimated ASE, because scaling of allele counts results in less shrinkage towards the mean for low minor allele frequency (MAF) variants. Scaling may become relevant for estimating ASE as more low-MAF variants will be used in genomic evaluations. We show the impact of scaling on estimates of ASE using real data and a theoretical framework, and in terms of power, model fit and predictive performance. In a dairy cattle dataset with 630 K SNP genotypes, the correlation between DGV for stature from a random regression model using centered allele counts (RRc) and centered and scaled allele counts (RRcs) was 0.9988, whereas the overall correlation between ASE using RRc and RRcs was 0.27. The main difference in ASE between both methods was found for SNPs with a MAF lower than 0.01. Both the ratio (ASE from RRcs/ASE from RRc) and the regression coefficient (regression of ASE from RRcs on ASE from RRc) were much higher than 1 for low-MAF SNPs. Derived equations showed that scenarios with a high heritability, a large number of individuals and a small number of variants have lower ratios between ASE from RRc and RRcs. We also investigated the optimal scaling parameter [from -1 (RRcs) to 0 (RRc) in steps of 0.1] in the bovine stature dataset. We found that the log-likelihood was maximized with a scaling parameter of -0.8, while the mean squared error of prediction was minimized with a scaling parameter of -1, i.e., RRcs. Large differences in estimated ASE were observed for low-MAF SNPs when allele counts were scaled or not scaled, because there is less shrinkage towards the mean for scaled allele counts. We derived a theoretical framework that shows that the difference in ASE due to shrinkage is heavily influenced by the power of the data. Increasing the power results in smaller differences in ASE whether allele counts are scaled or not.
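
    The shrinkage effect described above can be illustrated with a small ridge-regression toy (our own construction, not the paper's random regression model): SNP effects estimated from centered allele counts are compared with effects estimated from centered-and-scaled counts after back-transforming to the allele-count scale. The sample sizes, penalty value, and MAF cutoff for "rare" markers are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 500, 200
maf = rng.uniform(0.02, 0.5, m)                          # includes low-MAF markers
X = rng.binomial(2, maf, size=(n, m)).astype(float)      # allele counts 0/1/2
y = X @ rng.normal(0.0, 0.2, m) + rng.normal(0.0, 1.0, n)

def ridge(Z, y, lam=50.0):
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ (y - y.mean()))

Xc = X - X.mean(axis=0)                                  # centered counts (RRc-like)
sd = Xc.std(axis=0)
ase_c = ridge(Xc, y)                                     # effects on the allele-count scale
ase_cs = ridge(Xc / sd, y) / sd                          # scaled fit, back-transformed

rare = maf < 0.05
ratio = np.abs(ase_cs / ase_c)                           # >1 means less shrinkage under scaling
print("median ratio, rare SNPs:  ", round(float(np.median(ratio[rare])), 2))
print("median ratio, common SNPs:", round(float(np.median(ratio[~rare])), 2))
```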

  5. Using the Personality Assessment Inventory Antisocial and Borderline Features Scales to Predict Behavior Change.

    PubMed

    Penson, Brittany N; Ruchensky, Jared R; Morey, Leslie C; Edens, John F

    2016-11-01

    A substantial amount of research has examined the developmental trajectory of antisocial behavior and, in particular, the relationship between antisocial behavior and maladaptive personality traits. However, research typically has not controlled for previous behavior (e.g., past violence) when examining the utility of personality measures, such as self-report scales of antisocial and borderline traits, in predicting future behavior (e.g., subsequent violence). Examination of the potential interactive effects of measures of both antisocial and borderline traits also is relatively rare in longitudinal research predicting adverse outcomes. The current study utilizes a large sample of youthful offenders ( N = 1,354) from the Pathways to Desistance project to examine the separate effects of the Personality Assessment Inventory Antisocial Features (ANT) and Borderline Features (BOR) scales in predicting future offending behavior as well as trends in other negative outcomes (e.g., substance abuse, violence, employment difficulties) over a 1-year follow-up period. In addition, an ANT × BOR interaction term was created to explore the predictive effects of secondary psychopathy. ANT and BOR both explained unique variance in the prediction of various negative outcomes even after controlling for past indicators of those same behaviors during the preceding year.

  6. Voltage Imaging of Waking Mouse Cortex Reveals Emergence of Critical Neuronal Dynamics

    PubMed Central

    Scott, Gregory; Fagerholm, Erik D.; Mutoh, Hiroki; Leech, Robert; Sharp, David J.; Shew, Woodrow L.

    2014-01-01

    Complex cognitive processes require neuronal activity to be coordinated across multiple scales, ranging from local microcircuits to cortex-wide networks. However, multiscale cortical dynamics are not well understood because few experimental approaches have provided sufficient support for hypotheses involving multiscale interactions. To address these limitations, we used, in experiments involving mice, genetically encoded voltage indicator imaging, which measures cortex-wide electrical activity at high spatiotemporal resolution. Here we show that, as mice recovered from anesthesia, scale-invariant spatiotemporal patterns of neuronal activity gradually emerge. We show for the first time that this scale-invariant activity spans four orders of magnitude in awake mice. In contrast, we found that the cortical dynamics of anesthetized mice were not scale invariant. Our results bridge empirical evidence from disparate scales and support theoretical predictions that the awake cortex operates in a dynamical regime known as criticality. The criticality hypothesis predicts that small-scale cortical dynamics are governed by the same principles as those governing larger-scale dynamics. Importantly, these scale-invariant principles also optimize certain aspects of information processing. Our results suggest that during the emergence from anesthesia, criticality arises as information processing demands increase. We expect that, as measurement tools advance toward larger scales and greater resolution, the multiscale framework offered by criticality will continue to provide quantitative predictions and insight on how neurons, microcircuits, and large-scale networks are dynamically coordinated in the brain. PMID:25505314

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Gang

    Mid-latitude extreme weather events are responsible for a large part of climate-related damage. Yet large uncertainties remain in climate model projections of heat waves, droughts, and heavy rain/snow events on regional scales, limiting our ability to effectively use these projections for climate adaptation and mitigation. These uncertainties can be attributed to both the lack of spatial resolution in the models and the lack of a dynamical understanding of these extremes. The approach of this project is to relate the fine-scale features to the large scales in current climate simulations, seasonal re-forecasts, and climate change projections in a very wide range of models, including the atmospheric and coupled models of ECMWF over a range of horizontal resolutions (125 to 10 km), aqua-planet configurations of the Model for Prediction Across Scales and High Order Method Modeling Environments (resolutions ranging from 240 km to 7.5 km) with various physics suites, and selected CMIP5 model simulations. The large-scale circulation will be quantified both on the basis of the well-tested preferred circulation regime approach and on very recently developed measures, the finite-amplitude Wave Activity (FAWA) and its spectrum. The fine-scale structures related to extremes will be diagnosed following the latest approaches in the literature. The goal is to use the large-scale measures as indicators of the probability of occurrence of the finer-scale structures, and hence extreme events. These indicators will then be applied to the CMIP5 models and time-slice projections of a future climate.

  8. The Seasonal Predictability of Extreme Wind Events in the Southwest United States

    NASA Astrophysics Data System (ADS)

    Seastrand, Simona Renee

    Extreme wind events are a common phenomenon in the Southwest United States. Entities such as the United States Air Force (USAF) find the Southwest appealing for many reasons, primarily for an expansive, unpopulated, and electronically unpolluted space for large-scale training and testing. However, wind events can create hazards for the USAF: surface wind gusts can affect the take-off and landing of all aircraft, can tip the airframes of large wing-surface aircraft during maneuvers close to the ground, and can even affect weapons systems. This dissertation comprises three sections intended to further our knowledge and understanding of wind events in the Southwest. The first section builds a climatology of wind events for seven locations in the Southwest during the twelve 3-month seasons of the year and further examines the wind events in relation to terrain and the large-scale flow of the atmosphere. The second section builds upon the first by taking the wind events and generating mid-level composites for each of the twelve 3-month seasons. In the third section, teleconnections identified in the second section as consistent with the large-scale circulation were used as predictor variables to build a Poisson regression model for each of the twelve 3-month seasons. The purpose of this research is to increase our understanding of the climatology of extreme wind events, increase our understanding of how the large-scale circulation influences extreme wind events, and create a model to enhance predictability of extreme wind events in the Southwest. Knowledge from this work will help protect personnel and property associated with not only the USAF, but all those in the Southwest.

  9. Flood events across the North Atlantic region - past development and future perspectives

    NASA Astrophysics Data System (ADS)

    Matti, Bettina; Dieppois, Bastien; Lawler, Damian; Dahlke, Helen E.; Lyon, Steve W.

    2016-04-01

    Flood events have a large impact on humans, both socially and economically. An increase in winter and spring flooding across much of northern Europe in recent years has raised the question of whether the underlying hydro-climatic drivers of flood events are changing. Predicting the manifestation of such changes is difficult due to the natural variability and fluctuations in northern hydrological systems caused by large-scale atmospheric circulations, especially under altered climate conditions. Improving knowledge of the complexity of these hydrological systems and their interactions with climate is essential to determine the drivers of flood events and to predict changes in these drivers under altered climate conditions. This is particularly true for the North Atlantic region, where both physical catchment properties and large-scale atmospheric circulations have a profound influence on floods. This study explores changes in streamflow across North Atlantic region catchments. An emphasis is placed on high-flow events, namely the timing and magnitude of past flood events, and selected flood percentiles were tested for stationarity by applying a flood frequency analysis. The issue of non-stationarity of flood return periods is important when linking streamflow to large-scale atmospheric circulations: natural fluctuations in these circulations are found to have a strong influence on the outcome, causing natural variability in streamflow records. Long time series and a multi-temporal approach allow drivers of floods to be determined and streamflow to be linked to large-scale atmospheric circulations. Exploring changes in selected hydrological signatures, consistency was found across much of the North Atlantic region, suggesting a shift in flow regime. The lack of an overall regional pattern suggests that how catchments respond to changes in climatic drivers is strongly influenced by their physical characteristics. A better understanding of hydrological response to climate drivers is essential, for example, for forecasting purposes.

  10. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    PubMed

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty (the fidelity of predictions at each mapped pixel) but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new efficient approximating algorithm that can generate for the first time the necessary joint simulation of prevalence values across the very large prediction spaces needed for global-scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
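
    Why joint simulation matters for aggregated summaries can be seen in a toy example: when prediction errors are positively correlated in space, the uncertainty of a regional mean computed from jointly simulated fields is larger than what treating pixels as independent would suggest. The covariance model, prevalence level, and one-dimensional grid below are arbitrary illustrations, not the paper's model for P. falciparum.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100                                                      # pixels along a transect
x = np.linspace(0.0, 10.0, n)
cov = 0.04 * np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)  # exponential spatial covariance

draws = rng.multivariate_normal(np.full(n, 0.3), cov, size=2000)  # joint realizations
print("joint-simulation s.d. of the regional mean:",
      round(float(draws.mean(axis=1).std()), 4))
print("s.d. if pixels were treated as independent:",
      round(float(np.sqrt(cov.diagonal().sum()) / n), 4))
```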

  11. The relationship of large fire occurrence with drought and fire danger indices in the western USA, 1984-2008: The role of temporal scale

    Treesearch

    Karin L. Riley; John T. Abatzoglou; Isaac C. Grenfell; Anna E. Klene; Faith Ann Heinsch

    2013-01-01

    The relationship between large fire occurrence and drought has important implications for fire prediction under current and future climates. This study’s primary objective was to evaluate correlations between drought and fire-danger- rating indices representing short- and long-term drought, to determine which had the strongest relationships with large fire occurrence...

  12. An invariability-area relationship sheds new light on the spatial scaling of ecological stability.

    PubMed

    Wang, Shaopeng; Loreau, Michel; Arnoldi, Jean-Francois; Fang, Jingyun; Rahman, K Abd; Tao, Shengli; de Mazancourt, Claire

    2017-05-19

    The spatial scaling of stability is key to understanding ecological sustainability across scales and the sensitivity of ecosystems to habitat destruction. Here we propose the invariability-area relationship (IAR) as a novel approach to investigate the spatial scaling of stability. The shape and slope of IAR are largely determined by patterns of spatial synchrony across scales. When synchrony decays exponentially with distance, IARs exhibit three phases, characterized by steeper increases in invariability at both small and large scales. Such triphasic IARs are observed for primary productivity from plot to continental scales. When synchrony decays as a power law with distance, IARs are quasilinear on a log-log scale. Such quasilinear IARs are observed for North American bird biomass at both species and community levels. The IAR provides a quantitative tool to predict the effects of habitat loss on population and ecosystem stability and to detect regime shifts in spatial ecological systems, which are goals of relevance to conservation and policy.
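
    A rough sketch of how an IAR might be constructed from gridded time series is given below, assuming invariability is defined as the squared temporal mean of the aggregated variable divided by its temporal variance. The synthetic productivity field, the nested square windows, and the log-log slope fit are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(4)
T, L = 200, 64
field = 10.0 + rng.normal(size=(T, L, L))             # synthetic per-cell productivity series

def invariability(ts):
    return ts.mean() ** 2 / ts.var()                   # inverse squared coefficient of variation

areas, invs = [], []
for side in (1, 2, 4, 8, 16, 32, 64):                 # nested square windows
    total = field[:, :side, :side].sum(axis=(1, 2))   # aggregate over the window
    areas.append(side * side)
    invs.append(invariability(total))

slope = np.polyfit(np.log(areas), np.log(invs), 1)[0]
print(f"log-log IAR slope ~ {slope:.2f}")              # ~1 for spatially uncorrelated cells
```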

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kownacki, Corey; Ma, Ernest; Pollard, Nicholas

    The [SU(3)]^4 quartification model of Babu, Ma, and Willenbrock (BMW), proposed in 2003, predicts a confining leptonic color SU(2) gauge symmetry, which becomes strong at the keV scale. It also predicts the existence of three families of half-charged leptons (hemions) below the TeV scale. These hemions are confined to form bound states which are not so easy to discover at the Large Hadron Collider (LHC). But, just as J/ψ and Υ appeared as sharp resonances in e⁻e⁺ colliders of the 20th century, the corresponding 'hemionium' states are expected at a future e⁻e⁺ collider of the 21st century.

  14. Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction.

    PubMed

    Soleimani, Hossein; Hensman, James; Saria, Suchi

    2017-08-21

    Missing data and noisy observations pose significant challenges for reliably predicting events from irregularly sampled multivariate time series (longitudinal) data. Imputation methods, which are typically used for completing the data prior to event prediction, lack a principled mechanism to account for the uncertainty due to missingness. Alternatively, state-of-the-art joint modeling techniques can be used for jointly modeling the longitudinal and event data and computing event probabilities conditioned on the longitudinal observations. These approaches, however, make strong parametric assumptions and do not easily scale to multivariate signals with many observations. Our proposed approach consists of several key innovations. First, we develop a flexible and scalable joint model based upon sparse multiple-output Gaussian processes. Unlike state-of-the-art joint models, the proposed model can explain highly challenging structure including non-Gaussian noise while scaling to large data. Second, we derive an optimal policy for predicting events using the distribution of the event occurrence estimated by the joint model. The derived policy trades off the cost of a delayed detection versus incorrect assessments and abstains from making decisions when the estimated event probability does not satisfy the derived confidence criteria. Experiments on a large dataset show that the proposed framework significantly outperforms state-of-the-art techniques in event prediction.
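
    The flavor of the uncertainty-aware decision rule described above can be conveyed with a tiny abstaining policy. This is a simplification of our own, not the paper's derived optimal policy: it alarms, clears, or abstains depending on the estimated event probability and the width of its credible interval, and all thresholds are placeholders.

```python
def decide(p_mean, p_low, p_high, alarm_at=0.6, clear_at=0.2, max_width=0.3):
    """p_low/p_high: credible-interval bounds on the event probability."""
    if p_high - p_low > max_width:
        return "abstain"          # too uncertain: defer and collect more observations
    if p_mean >= alarm_at:
        return "alarm"            # accept some false alarms to avoid delayed detection
    if p_mean <= clear_at:
        return "no event"
    return "abstain"              # ambiguous middle ground

for case in [(0.70, 0.60, 0.80), (0.70, 0.30, 0.95), (0.10, 0.05, 0.20)]:
    print(case, "->", decide(*case))
```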

  15. Effect of helicity on the correlation time of large scales in turbulent flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2017-11-01

    Solutions of the forced Navier-Stokes equation have been conjectured to thermalize at scales larger than the forcing scale, similar to an absolute equilibrium obtained for the spectrally truncated Euler equation. Using direct numerical simulations of Taylor-Green flows and general-periodic helical flows, we present results on the probability density function, energy spectrum, autocorrelation function, and correlation time that compare the two systems. In the case of highly helical flows, we derive an analytic expression describing the correlation time for the absolute equilibrium of helical flows that is different from the E^(-1/2) k^(-1) scaling law of weakly helical flows. This model predicts a new helicity-based scaling law for the correlation time, τ(k) ~ H^(-1/2) k^(-1/2). This scaling law is verified in simulations of the truncated Euler equation. In simulations of the Navier-Stokes equations, the large-scale modes of forced Taylor-Green symmetric flows (with zero total helicity and large separation of scales) follow the same properties as absolute equilibrium, including a τ(k) ~ E^(-1/2) k^(-1) scaling for the correlation time. General-periodic helical flows also show similarities between the two systems; however, the largest scales of the forced flows deviate from the absolute equilibrium solutions.

  16. Allometric Scaling of the Active Hematopoietic Stem Cell Pool across Mammals

    PubMed Central

    Dingli, David; Pacheco, Jorge M.

    2006-01-01

    Background Many biological processes are characterized by allometric relations of the type Y = Y_0 M^b between an observable Y and body mass M, which pervade at multiple levels of organization. With regard to the hematopoietic stem cell pool, there is experimental evidence that the size of the hematopoietic stem cell pool is conserved in mammals. However, demands for blood cell formation vary across mammals and thus the size of the active stem cell compartment could vary across species. Methodology/Principal Findings Here we investigate the allometric scaling of the hematopoietic system in a large group of mammalian species using reticulocyte counts as a marker of the active stem cell pool. Our model predicts that the total number of active stem cells, in an adult mammal, scales with body mass with the exponent ¾. Conclusion/Significance The scaling predicted here provides an intuitive justification of the Hayflick hypothesis and supports the current view of a small active stem cell pool supported by a large, quiescent reserve. The present scaling shows excellent agreement with the available (indirect) data for smaller mammals. The small size of the active stem cell pool enhances the role of stochastic effects in the overall dynamics of the hematopoietic system.

  17. Large density expansion of a hydrodynamic theory for self-propelled particles

    NASA Astrophysics Data System (ADS)

    Ihle, T.

    2015-07-01

    Recently, an Enskog-type kinetic theory for Vicsek-type models for self-propelled particles has been proposed [T. Ihle, Phys. Rev. E 83, 030901 (2011)]. This theory is based on an exact equation for a Markov chain in phase space and is not limited to small density. Previously, the hydrodynamic equations were derived from this theory and its transport coefficients were given in terms of infinite series. Here, I show that the transport coefficients take a simple form in the large density limit. This allows me to analytically evaluate the well-known density instability of the polarly ordered phase near the flocking threshold at moderate and large densities. The growth rate of a longitudinal perturbation is calculated and several scaling regimes, including three different power laws, are identified. It is shown that at large densities, the restabilization of the ordered phase at smaller noise is analytically accessible within the range of validity of the hydrodynamic theory. Analytical predictions for the width of the unstable band, the maximum growth rate, and for the wave number below which the instability occurs are given. In particular, the system size below which spatial perturbations of the homogeneous ordered state are stable is predicted to scale as √M, where M is the average number of collision partners. The typical time scale until the instability becomes visible is calculated and is proportional to M.

  18. Structure and evolution of the large scale solar and heliospheric magnetic fields. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hoeksema, J. T.

    1984-01-01

    Structure and evolution of large scale photospheric and coronal magnetic fields in the interval 1976-1983 were studied using observations from the Stanford Solar Observatory and a potential field model. The solar wind in the heliosphere is organized into large regions in which the magnetic field has a component either toward or away from the sun. The model predicts the location of the current sheet separating these regions. Near solar minimum, in 1976, the current sheet lay within a few degrees of the solar equator, having two extensions north and south of the equator. Soon after minimum the latitudinal extent began to increase. The sheet extended to at least 50 deg from 1978 through 1983. The complex structure near maximum occasionally included multiple current sheets. Large scale structures persist for up to two years during the entire interval. To minimize errors in determining the structure of the heliospheric field, particular attention was paid to decreasing the distorting effects of rapid field evolution, finding the optimum source surface radius, determining the correction to the sun's polar field, and handling missing data. The predicted structure agrees with direct interplanetary field measurements taken near the ecliptic and with coronameter and interplanetary scintillation measurements which infer the three dimensional interplanetary magnetic structure. During most of the solar cycle the heliospheric field cannot be adequately described as a dipole.

  19. Numerical Modeling of Propellant Boil-Off in a Cryogenic Storage Tank

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Steadman, T. E.; Maroney, J. L.; Sass, J. P.; Fesmire, J. E.

    2007-01-01

    A numerical model to predict boil-off of stored propellant in large spherical cryogenic tanks has been developed. Accurate prediction of tank boil-off rates for different thermal insulation systems was the goal of this collaboration effort. The Generalized Fluid System Simulation Program, integrating flow analysis and conjugate heat transfer for solving complex fluid system problems, was used to create the model. Calculation of tank boil-off rate requires simultaneous simulation of heat transfer processes among liquid propellant, vapor ullage space, and tank structure. The reference tank for the boil-off model was the 850,000 gallon liquid hydrogen tank at Launch Complex 39B (LC-39B) at Kennedy Space Center, which is under study for future infrastructure improvements to support the Constellation program. The methodology employed in the numerical model was validated using a sub-scale model and tank. Experimental test data from a 1/15th scale version of the LC-39B tank using both liquid hydrogen and liquid nitrogen were used to anchor the analytical predictions of the sub-scale model. Favorable correlations between sub-scale model and experimental test data have provided confidence in full-scale tank boil-off predictions. These methods are now being used in the preliminary design for other cases, including future launch vehicles.

  20. Extensions and evaluations of a general quantitative theory of forest structure and dynamics

    PubMed Central

    Enquist, Brian J.; West, Geoffrey B.; Brown, James H.

    2009-01-01

    Here, we present the second part of a quantitative theory for the structure and dynamics of forests under demographic and resource steady state. The theory is based on individual-level allometric scaling relations for how trees use resources, fill space, and grow. These scale up to determine emergent properties of diverse forests, including size–frequency distributions, spacing relations, canopy configurations, mortality rates, population dynamics, successional dynamics, and resource flux rates. The theory uniquely makes quantitative predictions for both stand-level scaling exponents and normalizations. We evaluate these predictions by compiling and analyzing macroecological datasets from several tropical forests. The close match between theoretical predictions and data suggests that forests are organized by a set of very general scaling rules. Our mechanistic theory is based on allometric scaling relations, is complementary to “demographic theory,” but is fundamentally different in approach. It provides a quantitative baseline for understanding deviations from predictions due to other factors, including disturbance, variation in branching architecture, asymmetric competition, resource limitation, and other sources of mortality, which are not included in the deliberately simplified theory. The theory should apply to a wide range of forests despite large differences in abiotic environment, species diversity, and taxonomic and functional composition. PMID:19363161

  1. Estimation of net ecosystem carbon exchange for the conterminous United States by combining MODIS and AmeriFlux data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.

    Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely sensed data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board the National Aeronautics and Space Administration's (NASA) Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a modified regression tree approach. The predictive model was trained and validated using eddy flux NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE well (r = 0.73, p < 0.001). We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day interval in 2005 using spatially explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE as determined from measurements and the literature. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide an independent dataset from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets over large areas.
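
    A schematic, in Python with scikit-learn, of the upscaling idea described above: train a tree-based regression model on tower-footprint NEE with coincident MODIS-derived predictors, then apply it cell by cell over a gridded domain. The arrays, predictor names, and the use of a plain decision tree (the study uses a modified regression tree approach) are illustrative assumptions.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(0)

        # Placeholder training data: rows = 8-day tower observations,
        # columns = MODIS-derived predictors (e.g., EVI, LST, land-cover code).
        X_train = rng.random((500, 3))
        y_train = rng.normal(size=500)          # NEE at the tower footprint

        model = DecisionTreeRegressor(max_depth=6).fit(X_train, y_train)

        # Apply the trained model to every 1 km x 1 km grid cell for one 8-day period.
        X_grid = rng.random((10_000, 3))        # gridded MODIS predictors
        nee_map = model.predict(X_grid)         # wall-to-wall NEE estimate
        print(nee_map.shape)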

  2. Estimation of Net Ecosystem Carbon Exchange for the Conterminous United States by Combining MODIS and AmeriFlux Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.

    Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely-sensed data from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on board NASA's Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a regression tree approach. The predictive model was trained and validated using NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE reasonably well at the site level. We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day period in 2005 using spatially-explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide an independent dataset from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets for large areas.

  3. PHENOstruct: Prediction of human phenotype ontology terms using heterogeneous data sources.

    PubMed

    Kahanda, Indika; Funk, Christopher; Verspoor, Karin; Ben-Hur, Asa

    2015-01-01

    The human phenotype ontology (HPO) was recently developed as a standardized vocabulary for describing the phenotype abnormalities associated with human diseases. At present, only a small fraction of human protein-coding genes have HPO annotations. However, researchers believe that a large portion of currently unannotated genes are related to disease phenotypes. Therefore, it is important to predict gene-HPO term associations using accurate computational methods. In this work we demonstrate the performance advantage of the structured SVM approach, which was shown to be highly effective for Gene Ontology term prediction, in comparison to several baseline methods. Furthermore, we highlight a collection of informative data sources suitable for the problem of predicting gene-HPO associations, including large scale literature mining data.

  4. Are more complex physiological models of forest ecosystems better choices for plot and regional predictions?

    Treesearch

    Wenchi Jin; Hong S. He; Frank R. Thompson

    2016-01-01

    Process-based forest ecosystem models vary from simple physiological, to complex physiological, to hybrid empirical-physiological models. Previous studies indicate that complex models provide the best prediction at plot scale with a temporal extent of less than 10 years; however, it is largely untested whether complex models outperform the other two types of models...

  5. Using Data Mining to Predict K-12 Students' Performance on Large-Scale Assessment Items Related to Energy

    ERIC Educational Resources Information Center

    Liu, Xiufeng; Ruiz, Miguel E.

    2008-01-01

    This article reports a study on using data mining to predict K-12 students' competence levels on test items related to energy. Data sources are the 1995 Third International Mathematics and Science Study (TIMSS), 1999 TIMSS-Repeat, 2003 Trend in International Mathematics and Science Study (TIMSS), and the National Assessment of Educational…

  6. A Predictive Model for the Magnetic Field in the Heliosphere and Acceleration of Suprathermal Particles in the Solar Wind

    NASA Technical Reports Server (NTRS)

    Fisk, L. A.

    2005-01-01

    The purpose of this grant was to develop a theoretical understanding of the processes by which open magnetic flux undergoes large-scale transport in the solar corona, and to use this understanding to develop a predictive model for the heliospheric magnetic field, the configuration for which is determined by such motions.

  7. Optimization of a novel biophysical model using large scale in vivo antisense hybridization data displays improved prediction capabilities of structurally accessible RNA regions.

    PubMed

    Vazquez-Anderson, Jorge; Mihailovic, Mia K; Baldridge, Kevin C; Reyes, Kristofer G; Haning, Katie; Cho, Seung Hee; Amador, Paul; Powell, Warren B; Contreras, Lydia M

    2017-05-19

    Current approaches to design efficient antisense RNAs (asRNAs) rely primarily on a thermodynamic understanding of RNA-RNA interactions. However, these approaches depend on structure predictions and have limited accuracy, arguably due to overlooking important cellular environment factors. In this work, we develop a biophysical model to describe asRNA-RNA hybridization that incorporates in vivo factors using large-scale experimental hybridization data for three model RNAs: a group I intron, CsrB and a tRNA. A unique element of our model is the estimation of the availability of the target region to interact with a given asRNA using a differential entropic consideration of suboptimal structures. We showcase the utility of this model by evaluating its prediction capabilities in four additional RNAs: a group II intron, Spinach II, 2-MS2 binding domain and glgC 5′ UTR. Additionally, we demonstrate the applicability of this approach to other bacterial species by predicting sRNA-mRNA binding regions in two newly discovered, though uncharacterized, regulatory RNAs. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  8. Seasonal prediction of lightning activity in North Western Venezuela: Large-scale versus local drivers

    NASA Astrophysics Data System (ADS)

    Muñoz, Á. G.; Díaz-Lobatón, J.; Chourio, X.; Stock, M. J.

    2016-05-01

    The Lake Maracaibo Basin in North Western Venezuela has the highest annual lightning rate of any place in the world (~200 fl km^-2 yr^-1), whose electrical discharges occasionally impact human and animal lives (e.g., cattle) and frequently affect economic activities like oil and natural gas exploitation. Lightning activity is so common in this region that it has a proper name: Catatumbo Lightning (plural). Although short-term lightning forecasts are now common in different parts of the world, to the best of the authors' knowledge, seasonal prediction of lightning activity is still non-existent. This research discusses the relative role of both large-scale and local climate drivers as modulators of lightning activity in the region, and presents a formal predictability study at seasonal scale. Analysis of the Catatumbo Lightning Regional Mode, defined in terms of the second Empirical Orthogonal Function of monthly Lightning Imaging Sensor (LIS-TRMM) and Optical Transient Detector (OTD) satellite data for North Western South America, permits the identification of potential predictors at seasonal scale via a Canonical Correlation Analysis. Lightning activity in North Western Venezuela responds to well defined sea-surface temperature patterns (e.g., El Niño-Southern Oscillation, Atlantic Meridional Mode) and changes in the low-level meridional wind field that are associated with the Inter-Tropical Convergence Zone migrations, the Caribbean Low Level Jet and tropical cyclone activity, but it is also linked to local drivers like convection triggered by the topographic configuration and the effect of the Maracaibo Basin Nocturnal Low Level Jet. The analysis indicates that at seasonal scale the relative contribution of the large-scale drivers is more important than the local (basin-wide) ones, due to the synoptic control imposed by the former. Furthermore, meridional CAPE transport at 925 mb is identified as the best potential predictor for lightning activity in the Lake Maracaibo Basin. It is found that the predictive skill is slightly higher for the minimum lightning season (Jan-Feb) than for the maximum one (Sep-Oct), but that in general the skill is high enough to be useful for decision-making processes related to human safety, oil and natural gas exploitation, energy and food security.
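
    A minimal illustration, with synthetic arrays and scikit-learn, of the predictor-identification step mentioned above: a Canonical Correlation Analysis linking a candidate large-scale predictor field to the lightning regional-mode time series. Array shapes and variable names are placeholders, not the study's datasets.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(1)

        # Placeholder data: 120 months of a flattened predictor field (50 grid
        # points) and of two lightning regional-mode (EOF) time series.
        X = rng.normal(size=(120, 50))   # e.g., SST or 925 mb meridional wind anomalies
        Y = rng.normal(size=(120, 2))    # lightning regional-mode time series

        cca = CCA(n_components=2).fit(X, Y)
        X_c, Y_c = cca.transform(X, Y)
        # Correlation of the leading canonical pair indicates potential seasonal skill.
        print(np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1])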

  9. Allometric scaling for predicting human clearance of bisphenol A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collet, Séverine H., E-mail: s.collet@envt.fr; Picard-Hagen, Nicole, E-mail: n.hagen-picard@envt.fr; Lacroix, Marlène Z., E-mail: m.lacroix@envt.fr

    The investigation of interspecies differences in bisphenol A (BPA) pharmacokinetics (PK) may be useful for translating findings from animal studies to humans, identifying major processes involved in BPA clearance mechanisms, and predicting BPA PK parameters in man. For the first time, a large range of species in terms of body weight, from 0.02 kg (mice) to 495 kg (horses), was used to predict BPA clearance in man by an allometric approach. BPA PK was evaluated after intravenous administration of BPA in horses, sheep, pigs, dogs, rats and mice. A non-compartmental analysis was used to estimate plasma clearance and steady-state volume of distribution and predict BPA PK parameters in humans from allometric scaling. In all the species investigated, BPA plasma clearance was high and of the same order of magnitude as their respective hepatic blood flow. By allometric scaling, the human clearance was estimated to be 1.79 L/min (equivalent to 25.6 mL/kg/min) with a 95% prediction interval of 0.36 to 8.83 L/min. Our results support the hypothesis that there are highly efficient hepatic mechanisms of BPA clearance in man. - Highlights: • Allometric scaling was used to predict BPA pharmacokinetic parameters in humans. • In all species, BPA plasma clearance approached hepatic blood flow. • Human BPA clearance was estimated to be 1.79 L/min.
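
    A sketch of the allometric extrapolation described above: fit log(CL) = log(a) + b*log(M) across species and evaluate the fit at a human body mass. The per-species clearance values below are invented placeholders for illustration; only the 1.79 L/min human estimate quoted in the abstract comes from the study.

        import numpy as np

        # Hypothetical per-species data: body mass (kg) and plasma clearance (L/min).
        mass = np.array([0.02, 0.25, 10.0, 40.0, 495.0])   # mouse, rat, dog, sheep, horse
        cl = np.array([0.002, 0.015, 0.35, 1.2, 9.0])

        b, log_a = np.polyfit(np.log(mass), np.log(cl), 1)   # slope, intercept
        cl_human = np.exp(log_a) * 70.0 ** b                 # evaluate the fit at 70 kg
        print(f"allometric exponent b = {b:.2f}, predicted human CL = {cl_human:.2f} L/min")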

  10. Management applications of discontinuity theory

    EPA Science Inventory

    1. Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict, and entail large degrees of uncertainty. Such uncertainty requires robust tools for management...

  11. Feasibility of large-scale power plants based on thermoelectric effects

    NASA Astrophysics Data System (ADS)

    Liu, Liping

    2014-12-01

    Heat resources of small temperature difference are easily accessible, free and enormous on the Earth. Thermoelectric effects provide the technology for converting these heat resources directly into electricity. We present designs for electricity generators based on thermoelectric effects that utilize heat resources of small temperature difference, e.g., ocean water at different depths and geothermal resources, and conclude that large-scale power plants based on thermoelectric effects are feasible and economically competitive. The key observation is that the power factor of thermoelectric materials, unlike the figure of merit, can be improved by orders of magnitude upon laminating good conductors and good thermoelectric materials. The predicted large-scale power generators based on thermoelectric effects, if validated, will have the advantages of the scalability, renewability, and free supply of heat resources of small temperature difference on the Earth.

  12. Global-Scale Hydrology: Simple Characterization of Complex Simulation

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.

    1999-01-01

    Atmospheric general circulation models (AGCMs) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points at the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.

  13. Moral parochialism and contextual contingency across seven societies

    PubMed Central

    Fessler, Daniel M. T.; Barrett, H. Clark; Kanovsky, Martin; Stich, Stephen; Holbrook, Colin; Henrich, Joseph; Bolyanatz, Alexander H.; Gervais, Matthew M.; Gurven, Michael; Kushnick, Geoff; Pisor, Anne C.; von Rueden, Christopher; Laurence, Stephen

    2015-01-01

    Human moral judgement may have evolved to maximize the individual's welfare given parochial culturally constructed moral systems. If so, then moral condemnation should be more severe when transgressions are recent and local, and should be sensitive to the pronouncements of authority figures (who are often arbiters of moral norms), as the fitness pay-offs of moral disapproval will primarily derive from the ramifications of condemning actions that occur within the immediate social arena. Correspondingly, moral transgressions should be viewed as less objectionable if they occur in other places or times, or if local authorities deem them acceptable. These predictions contrast markedly with those derived from prevailing non-evolutionary perspectives on moral judgement. Both classes of theories predict purportedly species-typical patterns, yet to our knowledge, no study to date has investigated moral judgement across a diverse set of societies, including a range of small-scale communities that differ substantially from large highly urbanized nations. We tested these predictions in five small-scale societies and two large-scale societies, finding substantial evidence of moral parochialism and contextual contingency in adults' moral judgements. Results reveal an overarching pattern in which moral condemnation reflects a concern with immediate local considerations, a pattern consistent with a variety of evolutionary accounts of moral judgement. PMID:26246545

  14. Moral parochialism and contextual contingency across seven societies.

    PubMed

    Fessler, Daniel M T; Barrett, H Clark; Kanovsky, Martin; Stich, Stephen; Holbrook, Colin; Henrich, Joseph; Bolyanatz, Alexander H; Gervais, Matthew M; Gurven, Michael; Kushnick, Geoff; Pisor, Anne C; von Rueden, Christopher; Laurence, Stephen

    2015-08-22

    Human moral judgement may have evolved to maximize the individual's welfare given parochial culturally constructed moral systems. If so, then moral condemnation should be more severe when transgressions are recent and local, and should be sensitive to the pronouncements of authority figures (who are often arbiters of moral norms), as the fitness pay-offs of moral disapproval will primarily derive from the ramifications of condemning actions that occur within the immediate social arena. Correspondingly, moral transgressions should be viewed as less objectionable if they occur in other places or times, or if local authorities deem them acceptable. These predictions contrast markedly with those derived from prevailing non-evolutionary perspectives on moral judgement. Both classes of theories predict purportedly species-typical patterns, yet to our knowledge, no study to date has investigated moral judgement across a diverse set of societies, including a range of small-scale communities that differ substantially from large highly urbanized nations. We tested these predictions in five small-scale societies and two large-scale societies, finding substantial evidence of moral parochialism and contextual contingency in adults' moral judgements. Results reveal an overarching pattern in which moral condemnation reflects a concern with immediate local considerations, a pattern consistent with a variety of evolutionary accounts of moral judgement. © 2015 The Authors.

  15. Large-scale exploration and analysis of drug combinations.

    PubMed

    Li, Peng; Huang, Chao; Fu, Yingxue; Wang, Jinan; Wu, Ziyin; Ru, Jinlong; Zheng, Chunli; Guo, Zihu; Chen, Xuetong; Zhou, Wei; Zhang, Wenjuan; Li, Yan; Chen, Jianxin; Lu, Aiping; Wang, Yonghua

    2015-06-15

    Drug combinations are a promising strategy for combating complex diseases by improving the efficacy and reducing corresponding side effects. Currently, a widely studied problem in pharmacology is to predict effective drug combinations, either through empirical screening in the clinic or through purely experimental trials. However, the large-scale prediction of drug combinations by a systems method is rarely considered. We report a systems pharmacology framework to predict drug combinations (PreDCs) based on a computational model, termed the probability ensemble approach (PEA), for analysis of both the efficacy and adverse effects of drug combinations. First, a Bayesian network integrating with a similarity algorithm is developed to model the combinations from drug molecular and pharmacological phenotypes, and the predictions are then assessed with both clinical efficacy and adverse effects. It is illustrated that PEA can predict the combination efficacy of drugs spanning different therapeutic classes with high specificity and sensitivity (AUC = 0.90), which was further validated by independent data or new experimental assays. PEA also evaluates the adverse effects (AUC = 0.95) quantitatively and detects the therapeutic indications for drug combinations. Finally, the PreDC database includes 1571 known and 3269 predicted optimal combinations as well as their potential side effects and therapeutic indications. The PreDC database is available at http://sm.nwsuaf.edu.cn/lsp/predc.php. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
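
    The abstract does not reproduce the PEA formula, so the following is only a generic illustration of the idea of ensembling several similarity-derived probabilities into a single combination score, here under a naive conditional-independence assumption; it is not the published model.

        import numpy as np

        def ensemble_probability(p_features):
            """Combine per-feature probabilities that a drug pair is an effective
            combination (illustrative naive-Bayes-style odds product)."""
            p = np.asarray(p_features, dtype=float)
            odds = np.prod(p / (1.0 - p))
            return odds / (1.0 + odds)

        # e.g., probabilities derived from molecular, target and phenotype similarity
        print(ensemble_probability([0.7, 0.6, 0.8]))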

  16. Improving Air Quality (and Weather) Predictions using Advanced Data Assimilation Techniques Applied to Coupled Models during KORUS-AQ

    NASA Astrophysics Data System (ADS)

    Carmichael, G. R.; Saide, P. E.; Gao, M.; Streets, D. G.; Kim, J.; Woo, J. H.

    2017-12-01

    Ambient aerosols are important air pollutants with direct impacts on human health and on the Earth's weather and climate systems through their interactions with radiation and clouds. Their role is dependent on their distributions of size, number, phase and composition, which vary significantly in space and time. There remain large uncertainties in simulated aerosol distributions due to uncertainties in emission estimates and in chemical and physical processes associated with their formation and removal. These uncertainties lead to large uncertainties in weather and air quality predictions and in estimates of health and climate change impacts. Despite these uncertainties and challenges, regional-scale coupled chemistry-meteorological models such as WRF-Chem have significant capabilities in predicting aerosol distributions and explaining aerosol-weather interactions. We explore the hypothesis that new advances in on-line, coupled atmospheric chemistry/meteorological models, and new emission inversion and data assimilation techniques applicable to such coupled models, can be applied in innovative ways using current and evolving observation systems to improve predictions of aerosol distributions at regional scales. We investigate the impacts of assimilating AOD from geostationary satellite (GOCI) and surface PM2.5 measurements on predictions of AOD and PM in Korea during KORUS-AQ through a series of experiments. The results suggest assimilating datasets from multiple platforms can improve the predictions of aerosol temporal and spatial distributions.

  17. Highly turbulent solutions of the Lagrangian-averaged Navier-Stokes alpha model and their large-eddy-simulation potential.

    PubMed

    Pietarila Graham, Jonathan; Holm, Darryl D; Mininni, Pablo D; Pouquet, Annick

    2007-11-01

    We compute solutions of the Lagrangian-averaged Navier-Stokes alpha (LANS-alpha) model for significantly higher Reynolds numbers (up to Re approximately 8300) than have previously been accomplished. This allows sufficient separation of scales to observe a Navier-Stokes inertial range followed by a second inertial range specific to the LANS-alpha model. Both fully helical and nonhelical flows are examined, up to Reynolds numbers of approximately 1300. Analysis of the third-order structure function scaling supports the predicted l^3 scaling; it corresponds to a k^(-1) scaling of the energy spectrum for scales smaller than alpha. The energy spectrum itself shows a different scaling, which goes as k^(+1). This latter spectrum is consistent with the absence of stretching in the subfilter scales due to the Taylor frozen-in hypothesis employed as a closure in the derivation of the LANS-alpha model. These two scalings are conjectured to coexist in different spatial portions of the flow. The l^3 [E(k) approximately k^(-1)] scaling is subdominant to k^(+1) in the energy spectrum, but the l^3 scaling is responsible for the direct energy cascade, as no cascade can result from motions with no internal degrees of freedom. We demonstrate verification of the prediction for the size of the LANS-alpha attractor resulting from this scaling. From this, we give a methodology either for arriving at grid-independent solutions for the LANS-alpha model, or for obtaining a formulation of the large eddy simulation optimal in the context of the alpha models. The fully converged grid-independent LANS-alpha model may not be the best approximation to a direct numerical simulation of the Navier-Stokes equations, since the minimum error is a balance between truncation errors and the approximation error due to using the LANS-alpha instead of the primitive equations. Furthermore, the small-scale behavior of the LANS-alpha model contributes to a reduction of flux at constant energy, leading to a shallower energy spectrum for large alpha. These small-scale features, however, do not preclude the LANS-alpha model from reproducing correctly the intermittency properties of the high-Reynolds-number flow.

  18. N-body simulations of gravitational redshifts and other relativistic distortions of galaxy clustering

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyu; Alam, Shadab; Croft, Rupert A. C.; Ho, Shirley; Giusarma, Elena

    2017-10-01

    Large redshift surveys of galaxies and clusters are providing the first opportunities to search for distortions in the observed pattern of large-scale structure due to such effects as gravitational redshift. We focus on non-linear scales and apply a quasi-Newtonian approach using N-body simulations to predict the small asymmetries in the cross-correlation function of two different galaxy populations. Following recent work by Bonvin et al., Zhao and Peacock and Kaiser on galaxy clusters, we include effects which enter at the same order as gravitational redshift: the transverse Doppler effect, light-cone effects, relativistic beaming, luminosity distance perturbation and wide-angle effects. We find that all these effects cause asymmetries in the cross-correlation functions. Quantifying these asymmetries, we find that the total effect is dominated by the gravitational redshift and luminosity distance perturbation at small and large scales, respectively. By adding additional subresolution modelling of galaxy structure to the large-scale structure information, we find that the signal is significantly increased, indicating that structure on the smallest scales is important and should be included. We report on comparison of our simulation results with measurements from the SDSS/BOSS galaxy redshift survey in a companion paper.

  19. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products along with ground observations provide another avenue for investigating how the precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Centers for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  20. Single-trabecula building block for large-scale finite element models of cancellous bone.

    PubMed

    Dagan, D; Be'ery, M; Gefen, A

    2004-07-01

    Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.

  1. Comprehensive modeling of microRNA targets predicts functional non-conserved and non-canonical sites.

    PubMed

    Betel, Doron; Koppal, Anjali; Agius, Phaedra; Sander, Chris; Leslie, Christina

    2010-01-01

    mirSVR is a new machine learning method for ranking microRNA target sites by a down-regulation score. The algorithm trains a regression model on sequence and contextual features extracted from miRanda-predicted target sites. In a large-scale evaluation, miRanda-mirSVR is competitive with other target prediction methods in identifying target genes and predicting the extent of their downregulation at the mRNA or protein levels. Importantly, the method identifies a significant number of experimentally determined non-canonical and non-conserved sites.
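
    mirSVR is described as a regression model trained on sequence and contextual features of predicted sites; a hedged sketch of that general setup with scikit-learn support vector regression is shown below. The feature names and training targets are placeholders, not the mirSVR feature set.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)

        # Placeholder features per predicted target site (e.g., seed pairing score,
        # site accessibility, local AU content) and observed log-fold down-regulation.
        X = rng.random((300, 3))
        y = -rng.random(300)                    # more negative = stronger down-regulation

        model = SVR(kernel="rbf", C=1.0, epsilon=0.05).fit(X, y)
        scores = model.predict(X[:5])           # down-regulation scores for candidate sites
        print(np.argsort(scores))               # rank candidate sites, strongest first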

  2. Prediction of the Arctic Oscillation in Boreal Winter by Dynamical Seasonal Forecasting Systems

    NASA Technical Reports Server (NTRS)

    Kang, Daehyun; Lee, Myong-In; Im, Jungho; Kim, Daehyun; Kim, Hye-Mi; Kang, Hyun-Suk; Shubert, Siegfried D.; Arriba, Albertom; MacLachlan, Craig

    2013-01-01

    This study assesses the prediction skill of the boreal winter Arctic Oscillation (AO) in state-of-the-art dynamical ensemble prediction systems (EPSs): the UKMO GloSea4, the NCEP CFSv2, and the NASA GEOS-5. Long-term reforecasts made with the EPSs are used to evaluate representations of the AO, and to examine skill scores for the deterministic and probabilistic forecast of the AO index. The reforecasts reproduce the observed changes in the large-scale patterns of the Northern Hemispheric surface temperature, upper-level wind, and precipitation according to the AO phase. Results demonstrate that all EPSs have better prediction skill than the persistence prediction for lead times of up to 3 months, suggesting a great potential for skillful prediction of the AO and the associated climate anomalies on seasonal time scales. It is also found that the deterministic and probabilistic forecast skill of the AO in the recent period (1997-2010) is higher than that in the earlier period (1983-1996).

  3. Linking Satellite-Derived Fire Counts to Satellite-Derived Weather Data in Fire Prediction Models to Forecast Extreme Fires in Siberia

    NASA Astrophysics Data System (ADS)

    Westberg, David; Soja, Amber; Stackhouse, Paul, Jr.

    2010-05-01

    Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Boreal systems contain the largest pool of terrestrial carbon, and Russia holds 2/3 of the global boreal forests. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under climate change scenarios. Meteorological parameters influence fire danger and fire is a catalyst for ecosystem change. Therefore to predict fire weather and ecosystem change, we must understand the factors that influence fire regimes and at what scale these are viable. Our data consists of NASA Langley Research Center (LaRC)-derived fire weather indices (FWI) and National Climatic Data Center (NCDC) surface station-derived FWI on a domain from 50°N-80°N latitude and 70°E-170°W longitude and the fire season from April through October for the years of 1999, 2002, and 2004. Both of these are calculated using the Canadian Forest Service (CFS) FWI, which is based on local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-noon) rainfall. The large-scale (1°) LaRC product uses NASA Goddard Earth Observing System version 4 (GEOS-4) reanalysis and NASA Global Precipitation Climatology Project (GEOS-4/GPCP) data to calculate FWI. CFS Natural Resources Canada uses Geographic Information Systems (GIS) to interpolate NCDC station data and calculate FWI. We compare the LaRC GEOS- 4/GPCP FWI and CFS NCDC FWI based on their fraction of 1° grid boxes that contain satellite-derived fire counts and area burned to the domain total number of 1° grid boxes with a common FWI category (very low to extreme). These are separated by International Geosphere-Biosphere Programme (IGBP) 1°x1° resolution vegetation types to determine and compare fire regimes in each FWI/ecosystem class and to estimate the fraction of each of the 18 IGBP ecosystems burned, which are dependent on the FWI. On days with fire counts, the domain total of 1°x1° grid boxes with and without daily fire counts and area burned are totaled. The fraction of 1° grid boxes with fire counts and area burned to the total number of 1° grid boxes having common FWI category and vegetation type are accumulated, and a daily mean for the burning season is calculated. The mean fire counts and mean area burned plots appear to be well related. The ultimate goal of this research is to assess the viability of large-scale (1°) data to be used to assess fire weather danger and fire regimes, so these data can be confidently used to predict future fire regimes using large-scale fire weather data. Specifically, we related large-scale fire weather, area burned, and the amount of fire-induced ecosystem change. Both the LaRC and CFS FWI showed gradual linear increase in fraction of grid boxes with fire counts and area burned with increasing FWI category, with an exponential increase in the higher FWI categories in some cases, for the majority of the vegetation types. Our analysis shows a direct correlation between increased fire activity and increased FWI, independent of time or the severity of the fire season. During normal and extreme fire seasons, we noticed the fraction of fire counts and area burned per 1° grid box increased with increasing FWI rating. Given this analysis, we are confident large-scale weather and climate data, in this case from the GEOS-4 reanalysis and the GPCP data sets, can be used to accurately assess future fire potential. 
This increases confidence in the ability of large-scale IPCC weather and climate scenarios to predict future fire regimes in boreal regions.

  4. Modeling High Temperature Deformation Behavior of Large-Scaled Mg-Al-Zn Magnesium Alloy Fabricated by Semi-continuous Casting

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Xia, Xiangsheng

    2015-09-01

    In order to improve the understanding of the hot deformation and dynamic recrystallization (DRX) behaviors of large-scaled AZ80 magnesium alloy fabricated by semi-continuous casting, compression tests were carried out in the temperature range from 250 to 400 °C and strain rate range from 0.001 to 0.1 s^-1 on a Gleeble 1500 thermo-mechanical machine. The effects of the temperature and strain rate on the hot deformation behavior have been expressed by means of the conventional hyperbolic sine equation, and the influence of the strain has been incorporated in the equation by considering its effect on different material constants for large-scaled AZ80 magnesium alloy. In addition, the DRX behavior has been discussed. The result shows that the deformation temperature and strain rate exerted remarkable influences on the flow stress. A constitutive equation of large-scaled AZ80 magnesium alloy for hot deformation at the steady-state stage (ɛ = 0.5) was established [equation not reproduced in this record]. The true stress-true strain curves predicted by the extracted model were in good agreement with the experimental results, thereby confirming the validity of the developed constitutive relation. The DRX kinetic model of large-scaled AZ80 magnesium alloy was established as X_d = 1 - exp[-0.95((ɛ - ɛ_c)/ɛ*)^2.4904]. The rate of DRX increases with increasing deformation temperature, and high temperature is beneficial for achieving complete DRX in the large-scaled AZ80 magnesium alloy.
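
    The DRX kinetic model quoted above can be evaluated directly; the sketch below also includes the generic hyperbolic-sine (Sellars-Tegart) constitutive form the abstract refers to, with placeholder material constants, since the fitted values are not reproduced in this record.

        import numpy as np

        def drx_fraction(strain, eps_c, eps_star):
            """DRX volume fraction from the kinetic model quoted above:
            X_d = 1 - exp[-0.95 * ((eps - eps_c) / eps_star) ** 2.4904].
            eps_c and eps_star must come from fitted data (placeholders here)."""
            x = np.clip((strain - eps_c) / eps_star, 0.0, None)
            return 1.0 - np.exp(-0.95 * x ** 2.4904)

        def sinh_law_strain_rate(sigma, T, A=1e10, alpha=0.01, n=5.0, Q=150e3):
            """Generic hyperbolic-sine form (all constants are placeholders):
            eps_dot = A * sinh(alpha * sigma)**n * exp(-Q / (R * T))."""
            R = 8.314
            return A * np.sinh(alpha * sigma) ** n * np.exp(-Q / (R * T))

        print(drx_fraction(strain=0.5, eps_c=0.1, eps_star=0.3))
        print(sinh_law_strain_rate(sigma=100.0, T=623.0))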

  5. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
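
    A small illustration of the storage/recomputation trade-off behind the binomial (in-memory) level of the scheme above: with c checkpoints and at most r forward recomputation sweeps, the classic bound allows an adjoint over C(c + r, c) time steps. The two-level combination below is a simplified assumption for illustration, not the paper's performance model.

        from math import comb

        def binomial_capacity(mem_ckpts: int, sweeps: int) -> int:
            # Griewank/Walther bound: steps adjoinable with mem_ckpts in-memory
            # checkpoints and at most `sweeps` forward recomputation sweeps.
            return comb(mem_ckpts + sweeps, mem_ckpts)

        def two_level_capacity(disk_ckpts: int, mem_ckpts: int, sweeps: int) -> int:
            # Simplified view: disk checkpoints cut the time horizon into segments,
            # each adjoined independently with in-memory binomial checkpointing.
            return (disk_ckpts + 1) * binomial_capacity(mem_ckpts, sweeps)

        print(binomial_capacity(10, 3))       # 286 steps with 10 checkpoints, 3 sweeps
        print(two_level_capacity(20, 10, 3))  # 6006 steps once 20 disk checkpoints are added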

  6. Steps Towards Understanding Large-scale Deformation of Gas Hydrate-bearing Sediments

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Deusner, C.; Haeckel, M.; Kossel, E.

    2016-12-01

    Marine sediments bearing gas hydrates are typically characterized by heterogeneity in the gas hydrate distribution and anisotropy in the sediment-gas hydrate fabric properties. Gas hydrates also contribute to the strength and stiffness of the marine sediment, and any disturbance in the thermodynamic stability of the gas hydrates is likely to affect the geomechanical stability of the sediment. Understanding mechanisms and triggers of large-strain deformation and failure of marine gas hydrate-bearing sediments is an area of extensive research, particularly in the context of marine slope-stability and industrial gas production. The ultimate objective is to predict severe deformation events such as regional-scale slope failure or excessive sand production by using numerical simulation tools. The development of such tools essentially requires a careful analysis of thermo-hydro-chemo-mechanical behavior of gas hydrate-bearing sediments at lab-scale, and its stepwise integration into reservoir-scale simulators through definition of effective variables, use of suitable constitutive relations, and application of scaling laws. One of the focus areas of our research is to understand the bulk coupled behavior of marine gas hydrate systems with contributions from micro-scale characteristics, transport-reaction dynamics, and structural heterogeneity through experimental flow-through studies using high-pressure triaxial test systems and advanced tomographical tools (CT, ERT, MRI). We combine these studies to develop mathematical model and numerical simulation tools which could be used to predict the coupled hydro-geomechanical behavior of marine gas hydrate reservoirs in a large-strain framework. Here we will present some of our recent results from closely co-ordinated experimental and numerical simulation studies with an objective to capture the large-deformation behavior relevant to different gas production scenarios. We will also report on a variety of mechanically relevant test scenarios focusing on effects of dynamic changes in gas hydrate saturation, highly uneven gas hydrate distributions, focused fluid migration and gas hydrate production through depressurization and CO2 injection.

  7. Spatial Structure of Above-Ground Biomass Limits Accuracy of Carbon Mapping in Rainforest but Large Scale Forest Inventories Can Help to Overcome.

    PubMed

    Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre

    2015-01-01

    Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environment influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, with a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. Accuracy of the best model including spatial effects was 90 Mg.ha-1 at the plot scale, but coarse graining up to 2-km resolution allowed mapping AGB with an accuracy better than 50 Mg.ha-1. Whatever the resolution, no agreement was found with available pan-tropical reference maps. We concluded that the combined weak autocorrelation and weak environmental effect limit the accuracy of AGB maps in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. Waiting for this, using large forest inventories with a low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution.

  8. FAST MAGNETIC FIELD AMPLIFICATION IN THE EARLY UNIVERSE: GROWTH OF COLLISIONLESS PLASMA INSTABILITIES IN TURBULENT MEDIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falceta-Gonçalves, D.; Kowal, G.

    2015-07-20

    In this work we report on a numerical study of the cosmic magnetic field amplification due to collisionless plasma instabilities. The collisionless magnetohydrodynamic equations derived account for the pressure anisotropy that leads, in specific conditions, to the firehose and mirror instabilities. We study the time evolution of seed fields in turbulence under the influence of such instabilities. An approximate analytical time evolution of the magnetic field is provided. The numerical simulations and the analytical predictions are compared. We found that (i) amplification of the magnetic field was efficient in firehose-unstable turbulent regimes, but not in the mirror-unstable models; (ii) the growth rate of the magnetic energy density is much faster than the turbulent dynamo; and (iii) the efficient amplification occurs at small scales. The analytical prediction for the correlation between the growth timescales and pressure anisotropy is confirmed by the numerical simulations. These results reinforce the idea that pressure anisotropies, driven naturally in a turbulent collisionless medium (e.g., the intergalactic medium), could efficiently amplify the magnetic field in the early universe (post-recombination era), previous to the collapse of the first large-scale gravitational structures. This mechanism, though fast for the small-scale fields (∼kpc scales), is unable to provide relatively strong magnetic fields at large scales. Other mechanisms that were not accounted for here (e.g., collisional turbulence once instabilities are quenched, velocity shear, or gravitationally induced inflows of gas into galaxies and clusters) could operate afterward to build up large-scale coherent field structures in the long time evolution.

  9. Dynamo theory prediction of solar activity

    NASA Technical Reports Server (NTRS)

    Schatten, Kenneth H.

    1988-01-01

    The dynamo theory technique to predict decadal time scale solar activity variations is introduced. The technique was developed following puzzling correlations involved with geomagnetic precursors of solar activity. Based upon this, a dynamo theory method was developed to predict solar activity. The method was used successfully in solar cycle 21 by Schatten, Scherrer, Svalgaard, and Wilcox, after testing with 8 prior solar cycles. Schatten and Sofia used the technique to predict an exceptionally large cycle, peaking early (in 1990) with a sunspot value near 170, likely the second largest on record. Sunspot numbers are increasing, suggesting that: (1) a large cycle is developing, and (2) that the cycle may even surpass the largest cycle (19). A Sporer Butterfly method shows that the cycle can now be expected to peak in the latter half of 1989, consistent with an amplitude comparable to the value predicted near the last solar minimum.

  10. The Prediction of Broadband Shock-Associated Noise Including Propagation Effects

    NASA Technical Reports Server (NTRS)

    Miller, Steven; Morris, Philip J.

    2011-01-01

    An acoustic analogy is developed based on the Euler equations for broadband shock-associated noise (BBSAN) that directly incorporates the vector Green's function of the linearized Euler equations and a steady Reynolds-Averaged Navier-Stokes solution (SRANS) as the mean flow. The vector Green's function allows the BBSAN propagation through the jet shear layer to be determined. The large-scale coherent turbulence is modeled by two-point second order velocity cross-correlations. Turbulent length and time scales are related to the turbulent kinetic energy and dissipation. An adjoint vector Green's function solver is implemented to determine the vector Green's function based on a locally parallel mean flow at streamwise locations of the SRANS solution. However, the developed acoustic analogy could easily be based on any adjoint vector Green's function solver, such as one that makes no assumptions about the mean flow. The newly developed acoustic analogy can be simplified to one that uses the Green's function associated with the Helmholtz equation, which is consistent with the formulation of Morris and Miller (AIAAJ 2010). A large number of predictions are generated using three different nozzles over a wide range of fully expanded Mach numbers and jet stagnation temperatures. These predictions are compared with experimental data from multiple jet noise labs. In addition, two models for the so-called 'fine-scale' mixing noise are included in the comparisons. Improved BBSAN predictions are obtained relative to other models that do not include the propagation effects, especially in the upstream direction of the jet.

  11. (Finite) statistical size effects on compressive strength.

    PubMed

    Weiss, Jérôme; Girard, Lucas; Gimbert, Florent; Amitrano, David; Vandembroucq, Damien

    2014-04-29

    The larger structures are, the lower their mechanical strength. Already discussed by Leonardo da Vinci and Edmé Mariotte several centuries ago, size effects on strength remain of crucial importance in modern engineering for the elaboration of safety regulations in structural design or the extrapolation of laboratory results to geophysical field scales. Under tensile loading, statistical size effects are traditionally modeled with a weakest-link approach. One of its prominent results is a prediction of vanishing strength at large scales that can be quantified in the framework of extreme value statistics. Despite a frequent use outside its range of validity, this approach remains the dominant tool in the field of statistical size effects. Here we focus on compressive failure, which concerns a wide range of geophysical and geotechnical situations. We show on historical and recent experimental data that weakest-link predictions are not obeyed. In particular, the mechanical strength saturates at a nonzero value toward large scales. Accounting explicitly for the elastic interactions between defects during the damage process, we build a formal analogy of compressive failure with the depinning transition of an elastic manifold. This critical transition interpretation naturally entails finite-size scaling laws for the mean strength and its associated variability. Theoretical predictions are in remarkable agreement with measurements reported for various materials such as rocks, ice, coal, or concrete. This formalism, which can also be extended to the flowing instability of granular media under multiaxial compression, has important practical consequences for future design rules.
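
    For contrast with the compressive results above, the tensile weakest-link baseline can be written in one line: under Weibull statistics the mean strength decays as sigma ~ V^(-1/m) and therefore vanishes at large scales. The sketch below shows only that baseline; the paper's point is that compressive strength instead saturates at a nonzero value. All parameter values are illustrative.

        import numpy as np

        def weibull_mean_strength(volume, sigma0=10.0, v0=1.0, m=6.0):
            """Weakest-link (Weibull) mean strength: sigma0 * (v0 / V)**(1/m).
            sigma0, v0 and the Weibull modulus m are illustrative placeholders."""
            return sigma0 * (v0 / np.asarray(volume, dtype=float)) ** (1.0 / m)

        for v in (1.0, 1e3, 1e6):
            print(f"V = {v:>9.0f}  ->  mean strength = {weibull_mean_strength(v):.2f}")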

  12. The feasibility of using a universal Random Forest model to map tree height across different locations and vegetation types

    NASA Astrophysics Data System (ADS)

    Su, Y.; Guo, Q.; Jin, S.; Gao, S.; Hu, T.; Liu, J.; Xue, B. L.

    2017-12-01

    Tree height is an important forest structure parameter for understanding forest ecosystems and improving the accuracy of global carbon stock quantification. Light detection and ranging (LiDAR) can provide accurate tree height measurements, but its use in large-scale tree height mapping is limited by its spatial availability. Random Forest (RF) has been one of the most commonly used algorithms for mapping large-scale tree height through the fusion of LiDAR and other remotely sensed datasets. However, how the variances in vegetation types, geolocations and spatial scales of different study sites influence the RF results is still a question that needs to be addressed. In this study, we selected 16 study sites across four vegetation types in the United States (U.S.) fully covered by airborne LiDAR data, and the area of each site was 100 km2. The LiDAR-derived canopy height models (CHMs) were used as the ground truth to train the RF algorithm to predict canopy height from other remotely sensed variables, such as Landsat TM imagery, terrain information and climate surfaces. To address the abovementioned question, 22 models were run under different combinations of vegetation types, geolocations and spatial scales. The results show that the RF model trained at one specific location or vegetation type cannot be used to predict tree height in other locations or vegetation types. However, by training the RF model using samples from all locations and vegetation types, a universal model can be achieved for predicting canopy height across different locations and vegetation types. Moreover, the number of training samples and the targeted spatial resolution of the canopy height product have noticeable influence on the RF prediction accuracy.
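
    The workflow described here (train a Random Forest on LiDAR-derived canopy heights using spectral, terrain and climate predictors, pooling samples from all sites to obtain a "universal" model) can be sketched with scikit-learn. The predictor columns, sample sizes and synthetic data below are placeholders for illustration only.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    # Hypothetical predictor table pooled across all sites and vegetation types
    # (columns stand in for Landsat TM bands, elevation, slope, climate surfaces).
    rng = np.random.default_rng(0)
    X = rng.random((5000, 8))
    y = 30.0 * X[:, 0] + 5.0 * X[:, 5] + rng.normal(0.0, 2.0, 5000)   # stand-in for CHM height (m)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    rf = RandomForestRegressor(n_estimators=500, min_samples_leaf=5, n_jobs=-1, random_state=0)
    rf.fit(X_tr, y_tr)

    rmse = np.sqrt(mean_squared_error(y_te, rf.predict(X_te)))
    print(f"held-out RMSE: {rmse:.2f} m")
    ```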

  13. Developing a Framework for Seamless Prediction of Sub-Seasonal to Seasonal Extreme Precipitation Events in the United States.

    NASA Astrophysics Data System (ADS)

    Rosendahl, D. H.; Ćwik, P.; Martin, E. R.; Basara, J. B.; Brooks, H. E.; Furtado, J. C.; Homeyer, C. R.; Lazrus, H.; Mcpherson, R. A.; Mullens, E.; Richman, M. B.; Robinson-Cook, A.

    2017-12-01

    Extreme precipitation events cause significant damage to homes, businesses, infrastructure, and agriculture, as well as many injuries and fatalities as a result of fast-moving water or waterborne diseases. In the USA, these natural hazard events claimed the lives of more than 300 people during 2015 - 2016 alone, with total damage reaching $24.4 billion. Prior studies of extreme precipitation events have focused on the sub-daily to sub-weekly timeframes. However, many decisions for planning, preparing and resilience-building require sub-seasonal to seasonal timeframes (S2S; 14 to 90 days), but adequate forecasting tools for prediction do not exist. Therefore, the goal of this newly funded project is an enhancement in understanding of the large-scale forcing and dynamics of S2S extreme precipitation events in the United States, and improved capability for modeling and predicting such events. Here, we describe the project goals, objectives, and research activities that will take place over the next 5 years. In this project, a unique team of scientists and stakeholders will identify and understand weather and climate processes connected with the prediction of S2S extreme precipitation events by answering these research questions: 1) What are the synoptic patterns associated with, and characteristic of, S2S extreme precipitation events in the contiguous U.S.? 2) What role, if any, do large-scale modes of climate variability play in modulating these events? 3) How predictable are S2S extreme precipitation events across temporal scales? 4) How do we create an informative prediction of S2S extreme precipitation events for policymaking and planning? This project will use observational data, high-resolution radar composites, dynamical climate models and workshops that engage stakeholders (water resource managers, emergency managers and tribal environmental professionals) in co-production of knowledge. The overarching result of this project will be predictive models to reduce the societal and economic impacts of extreme precipitation events. Another outcome will include statistical and co-production frameworks, which could be applied across other meteorological extremes, all time scales and in other parts of the world to increase resilience to extreme meteorological events.

  14. The EMCC / DARPA Massively Parallel Electromagnetic Scattering Project

    NASA Technical Reports Server (NTRS)

    Woo, Alex C.; Hill, Kueichien C.

    1996-01-01

    The Electromagnetic Code Consortium (EMCC) was sponsored by the Advanced Research Projects Agency (ARPA) to demonstrate the effectiveness of massively parallel computing in large scale radar signature predictions. The EMCC/ARPA project consisted of three parts.

  15. Predictive wind turbine simulation with an adaptive lattice Boltzmann method for moving boundaries

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2016-09-01

    Operating horizontal axis wind turbines create large-scale turbulent wake structures that affect the power output of downwind turbines considerably. The computational prediction of this phenomenon is challenging as efficient low dissipation schemes are necessary that represent the vorticity production by the moving structures accurately and that are able to transport wakes without significant artificial decay over distances of several rotor diameters. We have developed a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that considers these requirements rather naturally and enables first principle simulations of wake-turbine interaction phenomena at reasonable computational costs. The paper describes the employed computational techniques and presents validation simulations for the Mexnext benchmark experiments as well as simulations of the wake propagation in the Scaled Wind Farm Technology (SWIFT) array consisting of three Vestas V27 turbines in triangular arrangement.

  16. Supersonic jet noise generated by large scale instabilities

    NASA Technical Reports Server (NTRS)

    Seiner, J. M.; Mclaughlin, D. K.; Liu, C. H.

    1982-01-01

    The role of large scale wavelike structures as the major mechanism for supersonic jet noise emission is examined. Using aerodynamic and acoustic data for low Reynolds number supersonic jets (Reynolds numbers at and below 70,000), comparisons are made with flow fluctuation and acoustic measurements in high Reynolds number, supersonic jets. These comparisons show that a similar physical mechanism governs the generation of sound emitted in the principal noise direction. These experimental data are further compared with a linear instability theory whose prediction for the axial location of peak wave amplitude agrees satisfactorily with measured phase-averaged flow fluctuation data in the low Reynolds number jets. In the high Reynolds number flow, theory and experiment differ as to the axial location of peak flow fluctuations, and the theory predicts an apparent origin for sound emission far upstream of the measured acoustic data.

  17. Large-scale wind-tunnel investigation of a close-coupled canard-delta-wing fighter model through high angles of attack

    NASA Technical Reports Server (NTRS)

    Stoll, F.; Koenig, D. G.

    1983-01-01

    Data obtained through very high angles of attack from a large-scale, subsonic wind-tunnel test of a close-coupled canard-delta-wing fighter model are analyzed. The canard delays wing leading-edge vortex breakdown, even for angles of attack at which the canard is completely stalled. A vortex-lattice method was applied which gave good predictions of lift and pitching moment up to an angle of attack of about 20 deg, where vortex-breakdown effects on performance become significant. Pitch-control inputs generally retain full effectiveness up to the angle of attack of maximum lift, beyond which effectiveness drops off rapidly. A high-angle-of-attack prediction method gives good estimates of lift and drag for the completely stalled aircraft. Roll asymmetry observed at zero sideslip is apparently caused by an asymmetry in the model support structure.

  18. Voltage collapse in complex power grids

    PubMed Central

    Simpson-Porco, John W.; Dörfler, Florian; Bullo, Francesco

    2016-01-01

    A large-scale power grid's ability to transfer energy from producers to consumers is constrained by both the network structure and the nonlinear physics of power flow. Violations of these constraints have been observed to result in voltage collapse blackouts, where nodal voltages slowly decline before precipitously falling. However, methods to test for voltage collapse are dominantly simulation-based, offering little theoretical insight into how grid structure influences stability margins. For a simplified power flow model, here we derive a closed-form condition under which a power network is safe from voltage collapse. The condition combines the complex structure of the network with the reactive power demands of loads to produce a node-by-node measure of grid stress, a prediction of the largest nodal voltage deviation, and an estimate of the distance to collapse. We extensively test our predictions on large-scale systems, highlighting how our condition can be leveraged to increase grid stability margins. PMID:26887284

  19. Nonlinear Reynolds stress model for turbulent shear flows

    NASA Technical Reports Server (NTRS)

    Barton, J. Michael; Rubinstein, R.; Kirtley, K. R.

    1991-01-01

    A nonlinear algebraic Reynolds stress model, derived using the renormalization group, is applied to equilibrium homogeneous shear flow and fully developed flow in a square duct. The model, which is quadratically nonlinear in the velocity gradients, successfully captures the large-scale inhomogeneity and anisotropy of the flows studied. The ratios of normal stresses, as well as the actual magnitudes of the stresses are correctly predicted for equilibrium homogeneous shear flow. Reynolds normal stress anisotropy and attendant turbulence driven secondary flow are predicted for a square duct. Profiles of mean velocity and normal stresses are in good agreement with measurements. Very close to walls, agreement with measurements diminishes. The model has the benefit of containing no arbitrary constants; all values are determined directly from the theory. It seems that near wall behavior is influenced by more than the large scale anisotropy accommodated in the current model. More accurate near wall calculations may well require a model for anisotropic dissipation.

  20. Coral mass spawning predicted by rapid seasonal rise in ocean temperature

    PubMed Central

    Maynard, Jeffrey A.; Edwards, Alasdair J.; Guest, James R.; Rahbek, Carsten

    2016-01-01

    Coral spawning times have been linked to multiple environmental factors; however, to what extent these factors act as generalized cues across multiple species and large spatial scales is unknown. We used a unique dataset of coral spawning from 34 reefs in the Indian and Pacific Oceans to test if month of spawning and peak spawning month in assemblages of Acropora spp. can be predicted by sea surface temperature (SST), photosynthetically available radiation, wind speed, current speed, rainfall or sunset time. Contrary to the classic view that high mean SST initiates coral spawning, we found rapid increases in SST to be the best predictor in both cases (month of spawning: R2 = 0.73, peak: R2 = 0.62). Our findings suggest that a rapid increase in SST provides the dominant proximate cue for coral mass spawning over large geographical scales. We hypothesize that coral spawning is ultimately timed to ensure optimal fertilization success. PMID:27170709

  1. Assessment of village-wise groundwater draft for irrigation: a field-based study in hard-rock aquifers of central India

    NASA Astrophysics Data System (ADS)

    Ray, R. K.; Syed, T. H.; Saha, Dipankar; Sarkar, B. C.; Patre, A. K.

    2017-12-01

    Extracted groundwater, 90% of which is used for irrigated agriculture, is central to the socio-economic development of India. A lack of regulation or implementation of regulations, alongside unrecorded extraction, often leads to overexploitation of large-scale common-pool resources like groundwater. Inevitably, management of groundwater extraction (draft) for irrigation is critical for sustainability of aquifers and the society at large. However, existing assessments of groundwater draft, which are mostly available at large spatial scales, are inadequate for managing groundwater resources that are primarily exploited by stakeholders at much finer scales. This study presents an estimate, projection and analysis of fine-scale groundwater draft in the Seonath-Kharun interfluve of central India. Using field surveys of instantaneous discharge from irrigation wells and boreholes, annual groundwater draft for irrigation in this area is estimated to be 212 × 10⁶ m³, most of which (89%) is withdrawn during the non-monsoon season. However, the density of wells/boreholes, and consequent extraction of groundwater, is controlled by the existing hydrogeological conditions. Based on trends in the number of abstraction structures (1982-2011), groundwater draft for the year 2020 is projected to be approximately 307 × 10⁶ m³; hence, groundwater draft for irrigation in the study area is predicted to increase by ~44% within a span of 8 years. Central to the work presented here is the approach for estimation and prediction of groundwater draft at finer scales, which can be extended to critical groundwater zones of the country.
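
    The structure of the estimate (instantaneous discharge from surveyed wells scaled by pumping duration and the number of abstraction structures) and of the projection quoted above can be reproduced as a back-of-the-envelope calculation. All per-well numbers below are hypothetical; only the aggregate 212 and 307 × 10⁶ m³ figures come from the abstract.

    ```python
    # Hypothetical per-well survey figures; only the 212 and 307 x 10^6 m^3 totals are from the abstract.
    wells = 30000                  # number of irrigation wells/boreholes (illustrative)
    discharge_m3_per_hr = 12.0     # mean instantaneous discharge from field surveys (illustrative)
    hours_per_day = 6.0
    pumping_days = 100             # mostly non-monsoon irrigation days (illustrative)

    annual_draft = wells * discharge_m3_per_hr * hours_per_day * pumping_days
    print(f"estimated annual draft ~ {annual_draft / 1e6:.0f} x 10^6 m^3")

    # Projection logic reported in the abstract: 212 -> ~307 x 10^6 m^3 by 2020
    growth = (307.0 - 212.0) / 212.0
    print(f"implied increase ~ {100 * growth:.0f}%")   # ~45%, consistent with the ~44% quoted
    ```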

  2. Large-scale structure in brane-induced gravity. I. Perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scoccimarro, Roman

    2009-11-15

    We study the growth of subhorizon perturbations in brane-induced gravity using perturbation theory. We solve for the linear evolution of perturbations taking advantage of the symmetry under gauge transformations along the extra-dimension to decouple the bulk equations in the quasistatic approximation, which we argue may be a better approximation at large scales than thought before. We then study the nonlinearities in the bulk and brane equations, concentrating on the workings of the Vainshtein mechanism by which the theory becomes general relativity (GR) at small scales. We show that at the level of the power spectrum, to a good approximation, the effect of nonlinearities in the modified gravity sector may be absorbed into a renormalization of the gravitational constant. Since the relation between the lensing potential and density perturbations is entirely unaffected by the extra physics in these theories, the modified gravity can be described in this approximation by a single function, an effective gravitational constant for nonrelativistic motion that depends on space and time. We develop a resummation scheme to calculate it, and provide predictions for the nonlinear power spectrum. At the level of the large-scale bispectrum, the leading order corrections are obtained by standard perturbation theory techniques, and show that the suppression of the brane-bending mode leads to characteristic signatures in the non-Gaussianity generated by gravity, generic to models that become GR at small scales through second-derivative interactions. We compare the predictions in this work to numerical simulations in a companion paper.

  3. Expediting SRM assay development for large-scale targeted proteomics experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Chaochao; Shi, Tujin; Brown, Joseph N.

    2014-08-22

    Due to their high sensitivity and specificity, targeted proteomics measurements, e.g. selected reaction monitoring (SRM), are becoming increasingly popular for biological and translational applications. Selection of optimal transitions and optimization of collision energy (CE) are important assay development steps for achieving sensitive detection and accurate quantification; however, these steps can be labor-intensive, especially for large-scale applications. Herein, we explored several options for accelerating SRM assay development evaluated in the context of a relatively large set of 215 synthetic peptide targets. We first showed that HCD fragmentation is very similar to CID in triple quadrupole (QQQ) instrumentation, and by selection of the top six y fragment ions from HCD spectra, >86% of top transitions optimized from direct infusion on a QQQ instrument are covered. We also demonstrated that the CE calculated by existing prediction tools was less accurate for +3 precursors, and a significant increase in intensity for transitions could be obtained using a new CE prediction equation constructed from the present experimental data. Overall, our study illustrates the feasibility of expediting the development of larger numbers of high-sensitivity SRM assays through automation of transition selection and accurate prediction of optimal CE to improve both SRM throughput and measurement quality.
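
    The two assay-development steps described here, picking the most intense y-ion transitions from HCD spectra and predicting collision energy from a linear equation in precursor m/z per charge state, can be sketched as follows. The slope/intercept coefficients and example peaks are placeholders, not the values fitted in the paper.

    ```python
    # Placeholder CE coefficients per precursor charge (assumptions, not the paper's fit)
    CE_COEFFS = {2: (0.034, 3.3), 3: (0.044, 3.5)}

    def predict_ce(precursor_mz, charge):
        """Linear collision-energy model CE = a * (m/z) + b, one (a, b) pair per charge state."""
        a, b = CE_COEFFS[charge]
        return a * precursor_mz + b

    def top_y_transitions(hcd_peaks, n=6):
        """Select the n most intense y-ion fragments from an HCD spectrum as SRM transitions.
        hcd_peaks: iterable of (ion_label, fragment_mz, intensity)."""
        y_ions = [p for p in hcd_peaks if p[0].startswith("y")]
        return sorted(y_ions, key=lambda p: p[2], reverse=True)[:n]

    peaks = [("y7", 845.4, 1.0e6), ("b4", 430.2, 6.0e5), ("y5", 617.3, 9.0e5), ("y9", 1022.5, 4.0e5)]
    print(top_y_transitions(peaks, n=2))
    print(f"CE for a 2+ precursor at m/z 752.4: {predict_ce(752.4, 2):.1f}")
    ```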

  4. Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes

    NASA Astrophysics Data System (ADS)

    Tsoy, A. S.; Snegirev, A. Yu.

    2015-09-01

    The model and the computer code FDS, albeit widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of numerical resolution of the large scale turbulent pulsations on the accuracy of predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as that of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, thereby excessively focusing the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase is shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.

  5. Seasonal forecasting of lightning and thunderstorm activity in tropical and temperate regions of the world.

    PubMed

    Dowdy, Andrew J

    2016-02-11

    Thunderstorms are convective systems characterised by the occurrence of lightning. Lightning and thunderstorm activity has been increasingly studied in recent years in relation to the El Niño/Southern Oscillation (ENSO) and various other large-scale modes of atmospheric and oceanic variability. Large-scale modes of variability can sometimes be predictable several months in advance, suggesting potential for seasonal forecasting of lightning and thunderstorm activity in various regions throughout the world. To investigate this possibility, seasonal lightning activity in the world's tropical and temperate regions is examined here in relation to numerous different large-scale modes of variability. Of the seven modes of variability examined, ENSO has the strongest relationship with lightning activity during each individual season, with relatively little relationship for the other modes of variability. A measure of ENSO variability (the NINO3.4 index) is significantly correlated to local lightning activity at 53% of locations for one or more seasons throughout the year. Variations in atmospheric parameters commonly associated with thunderstorm activity are found to provide a plausible physical explanation for the variations in lightning activity associated with ENSO. It is demonstrated that there is potential for accurately predicting lightning and thunderstorm activity several months in advance in various regions throughout the world.
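
    The core statistical step described here (correlating a seasonal-mean ENSO index with local lightning activity and counting the fraction of locations with a significant relationship) can be sketched as follows; the synthetic data and the 100-cell grid are illustrative stand-ins, not the study's dataset.

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    years, cells = 20, 100
    nino34 = rng.normal(0.0, 1.0, years)            # seasonal-mean NINO3.4 index (synthetic)

    lightning = np.empty((years, cells))            # seasonal lightning activity per grid cell (synthetic)
    for j in range(cells):
        slope = 0.8 if j < 53 else 0.0              # make roughly half the cells ENSO-sensitive
        lightning[:, j] = slope * nino34 + rng.normal(0.0, 1.0, years)

    significant = [pearsonr(nino34, lightning[:, j])[1] < 0.05 for j in range(cells)]
    print(f"NINO3.4 significantly correlated with lightning at {100 * np.mean(significant):.0f}% of locations")
    ```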

  6. Seasonal forecasting of lightning and thunderstorm activity in tropical and temperate regions of the world

    PubMed Central

    Dowdy, Andrew J.

    2016-01-01

    Thunderstorms are convective systems characterised by the occurrence of lightning. Lightning and thunderstorm activity has been increasingly studied in recent years in relation to the El Niño/Southern Oscillation (ENSO) and various other large-scale modes of atmospheric and oceanic variability. Large-scale modes of variability can sometimes be predictable several months in advance, suggesting potential for seasonal forecasting of lightning and thunderstorm activity in various regions throughout the world. To investigate this possibility, seasonal lightning activity in the world’s tropical and temperate regions is examined here in relation to numerous different large-scale modes of variability. Of the seven modes of variability examined, ENSO has the strongest relationship with lightning activity during each individual season, with relatively little relationship for the other modes of variability. A measure of ENSO variability (the NINO3.4 index) is significantly correlated to local lightning activity at 53% of locations for one or more seasons throughout the year. Variations in atmospheric parameters commonly associated with thunderstorm activity are found to provide a plausible physical explanation for the variations in lightning activity associated with ENSO. It is demonstrated that there is potential for accurately predicting lightning and thunderstorm activity several months in advance in various regions throughout the world. PMID:26865431

  7. Massive superclusters as a probe of the nature and amplitude of primordial density fluctuations

    NASA Technical Reports Server (NTRS)

    Kaiser, N.; Davis, M.

    1985-01-01

    It is pointed out that correlation studies of galaxy positions have been widely used in the search for information about the large-scale matter distribution. The study of rare condensations on large scales provides an approach to extend the existing knowledge of large-scale structure into the weakly clustered regime. Shane (1975) provides a description of several apparent massive condensations within the Shane-Wirtanen catalog, taking into account the Serpens-Virgo cloud and the Corona cloud. In the present study, a description is given of a model for estimating the frequency of condensations which evolve from initially Gaussian fluctuations. This model is applied to the Corona cloud to estimate its 'rareness' and thereby estimate the rms density contrast on this mass scale. An attempt is made to find a conflict between the density fluctuations derived from the Corona cloud and independent constraints. A comparison is conducted of the estimate and the density fluctuations predicted to arise in a universe dominated by cold dark matter.

  8. A 100,000 Scale Factor Radar Range.

    PubMed

    Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser

    2017-12-19

    The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutters, this common approach is not practical. The use of computer simulations is also not viable since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties, and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000, and using near infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example of measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility and the possibility of performing fast, convenient, and inexpensive measurements.
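
    Electromagnetic scale modeling works because Maxwell's equations are scale-invariant: shrinking the geometry by a factor s requires illuminating it at a wavelength s times shorter, and the measured cross section scales back up by s². The full-scale radar frequency and the model RCS below are assumptions for illustration; only the 100,000 scale factor and the near-infrared regime come from the abstract.

    ```python
    scale = 100_000                    # geometric scale factor from the abstract

    c = 3.0e8                          # speed of light, m/s
    f_full = 3.0e9                     # assumed full-scale radar frequency (S-band ballpark), Hz
    f_model = scale * f_full           # 3e14 Hz, i.e. a wavelength of about 1 micron (near infrared)
    print(f"model wavelength ~ {c / f_model * 1e6:.1f} um")

    sigma_model = 2.5e-9               # hypothetical RCS measured on the miniature model, m^2
    sigma_full = sigma_model * scale**2
    print(f"inferred full-scale RCS ~ {sigma_full:.0f} m^2")
    ```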

  9. Modeling of yield surface evolution in uniaxial and biaxial loading conditions using a prestrained large scale specimen

    NASA Astrophysics Data System (ADS)

    Zaman, Shakil Bin; Barlat, Frédéric; Kim, Jin Hwan

    2018-05-01

    Large-scale advanced high strength steel (AHSS) sheet specimens were deformed in uniaxial tension, using a novel grip system mounted on an MTS universal tension machine. After pre-strain, they were used as a pre-strained material to examine the anisotropic response in biaxial tension tests with various load ratios, and orthogonal tension tests at 45° and 90° from the pre-strain axis. The flow curve and the instantaneous r-value of the pre-strained steel in each of the aforementioned uniaxial testing conditions were also measured and compared with those of the undeformed steel. Furthermore, an exhaustive analysis of the yield surface was also conducted, and the results prior to and after pre-strain were represented and compared. The homogeneous anisotropic hardening (HAH) model [1] was employed to predict the behavior of the pre-strained material. It was found that the HAH-predicted flow curves after non-linear strain path change and the yield loci after uniaxial pre-strain were in good agreement with the experiments, while the r-value evolution after strain path change was qualitatively well predicted.

  10. Continental Tele-connections of ET and Precipitation: Tractable Models, Viable Management, and Potential Monitoring.

    NASA Astrophysics Data System (ADS)

    Selker, J. S.; Higgins, C. W.; Tai, L. C. M.

    2014-12-01

    The linkage between large-scale manipulation of land cover and resulting patterns of precipitation has been a long-standing problem. For example, what is the impact of the Columbia River project's 2,700 km^2 irrigated area (applying approximately 300 m^3/s) on the down-wind continental rainfall in North America? Similarly, can we identify places on earth where planting large-scale runoff-reducing forests might increase down-wind precipitation, thus leading to magnified carbon capture? In this talk we present an analytical Lagrangian framework for the prediction of incremental increases in down-wind precipitation due to land surface evaporation and transpiration. We compare these predictions to recently published rainfall recycling values from the literature. Focus is on the Columbia basin (Pacific Northwest of the USA), with extensions to East Africa. We further explore the monitoring requirements for verification of any such impact, and see if the planned TAHMO African Observatory (TAHMO.org) has the potential to document any such processes over the 25-year and 1,000 km scales.

  11. Insights from triangulation of two purchase choice elicitation methods to predict social decision making in healthcare.

    PubMed

    Whitty, Jennifer A; Rundle-Thiele, Sharyn R; Scuffham, Paul A

    2012-03-01

    Discrete choice experiments (DCEs) and the Juster scale are accepted methods for the prediction of individual purchase probabilities. Nevertheless, these methods have seldom been applied to a social decision-making context. To gain an overview of social decisions for a decision-making population through data triangulation, these two methods were used to understand purchase probability in a social decision-making context. We report an exploratory social decision-making study of pharmaceutical subsidy in Australia. A DCE and selected Juster scale profiles were presented to current and past members of the Australian Pharmaceutical Benefits Advisory Committee and its Economic Subcommittee. Across 66 observations derived from 11 respondents for 6 different pharmaceutical profiles, there was a small overall median difference of 0.024 in the predicted probability of public subsidy (p = 0.003), with the Juster scale predicting the higher likelihood. While consistency was observed at the extremes of the probability scale, the funding probability differed over the mid-range of profiles. There was larger variability in the DCE than Juster predictions within each individual respondent, suggesting the DCE is better able to discriminate between profiles. However, large variation was observed between individuals in the Juster scale but not DCE predictions. It is important to use multiple methods to obtain a complete picture of the probability of purchase or public subsidy in a social decision-making context until further research can elaborate on our findings. This exploratory analysis supports the suggestion that the mixed logit model, which was used for the DCE analysis, may fail to adequately account for preference heterogeneity in some contexts.

  12. Large-scale deformed QRPA calculations of the gamma-ray strength function based on a Gogny force

    NASA Astrophysics Data System (ADS)

    Martini, M.; Goriely, S.; Hilaire, S.; Péru, S.; Minato, F.

    2016-01-01

    The dipole excitations of nuclei play an important role in nuclear astrophysics processes in connection with the photoabsorption and the radiative neutron capture that take place in stellar environment. We present here the results of a large-scale axially-symmetric deformed QRPA calculation of the γ-ray strength function based on the finite-range Gogny force. The newly determined γ-ray strength is compared with experimental photoabsorption data for spherical as well as deformed nuclei. Predictions of γ-ray strength functions and Maxwellian-averaged neutron capture rates for Sn isotopes are also discussed.

  13. Site-level habitat models for the endemic, threatened Cheat Mountain salamander (Plethodon nettingi): the importance of geophysical and biotic attributes for predicting occurrence

    Treesearch

    Lester O. Dillard; Kevin R. Russell; W. Mark Ford

    2008-01-01

    The federally threatened Cheat Mountain salamander (Plethodon nettingi; hereafter CMS) is known to occur in approximately 70 small, scattered populations in the Allegheny Mountains of eastern West Virginia, USA. Current conservation and management efforts on federal, state, and private lands involving CMS largely rely on small scale, largely...

  14. Words That Fascinate the Listener: Predicting Affective Ratings of On-Line Lectures

    ERIC Educational Resources Information Center

    Weninger, Felix; Staudt, Pascal; Schuller, Björn

    2013-01-01

    In a large scale study on 843 transcripts of Technology, Entertainment and Design (TED) talks, the authors address the relation between word usage and categorical affective ratings of lectures by a large group of internet users. Users rated the lectures by assigning one or more predefined tags which relate to the affective state evoked in the…

  15. The Predictability of Large-Scale, Short-Period Variability in the Philippine Sea and the Influence of Such Variability on Long-Range acoustic Propagation

    DTIC Science & Technology

    2015-03-31

  16. Effects of Eddy Viscosity on Time Correlations in Large Eddy Simulation

    NASA Technical Reports Server (NTRS)

    He, Guowei; Rubinstein, R.; Wang, Lian-Ping; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    Subgrid-scale (SGS) models for large eddy simulation (LES) have generally been evaluated by their ability to predict single-time statistics of turbulent flows such as kinetic energy and Reynolds stresses. Recent applications of large eddy simulation to the evaluation of sound sources in turbulent flows, a problem in which time correlations determine the frequency distribution of acoustic radiation, suggest that subgrid models should also be evaluated by their ability to predict time correlations in turbulent flows. This paper compares the two-point, two-time Eulerian velocity correlation evaluated from direct numerical simulation (DNS) with that evaluated from LES, using a spectral eddy viscosity, for isotropic homogeneous turbulence. It is found that the LES fields are too coherent, in the sense that their time correlations decay more slowly than the corresponding time correlations in the DNS fields. This observation is confirmed by theoretical estimates of time correlations using the Taylor expansion technique. The reason for the slower decay is that the eddy viscosity does not include the random backscatter, which decorrelates fluid motion at large scales. An effective eddy viscosity associated with time correlations is formulated, to which the eddy viscosity associated with energy transfer is a leading order approximation.
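
    The diagnostic at the heart of the comparison is a normalized two-time velocity correlation, whose decay rate distinguishes the overly coherent LES fields from DNS. A minimal sketch of that diagnostic for a single velocity record is given below; in the paper the correlation is two-point as well as two-time and is computed from spectral DNS/LES fields, which this toy example does not attempt to reproduce.

    ```python
    import numpy as np

    def time_correlation(u, max_lag):
        """Normalized Eulerian time correlation C(tau) = <u'(t) u'(t+tau)> / <u'^2>
        for one velocity record; comparing its decay for DNS vs LES fields is the
        evaluation criterion discussed in the abstract."""
        u = u - u.mean()
        var = np.mean(u * u)
        return np.array([np.mean(u[: len(u) - lag] * u[lag:]) / var for lag in range(max_lag)])

    rng = np.random.default_rng(2)
    u = np.cumsum(rng.normal(size=4000)) * 0.01    # synthetic stand-in for a velocity signal
    print(np.round(time_correlation(u, 5), 3))
    ```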

  17. Advancement of proprotor technology. Task 2: Wind-tunnel test results

    NASA Technical Reports Server (NTRS)

    1971-01-01

    An advanced-design 25-foot-diameter flightworthy proprotor was tested in the NASA-Ames Large-Scale Wind Tunnel. These tests have verified and confirmed the theory and design solutions developed as part of the Army Composite Aircraft Program. This report presents the test results and compares them with theoretical predictions. During performance tests, the results met or exceeded predictions. Hover thrust 15 percent greater than the predicted maximum was measured. In airplane mode, propulsive efficiencies (some of which exceeded 90 percent) agreed with theory.

  18. Hydrodynamic predictions for 5.44 TeV Xe+Xe collisions

    NASA Astrophysics Data System (ADS)

    Giacalone, Giuliano; Noronha-Hostler, Jacquelyn; Luzum, Matthew; Ollitrault, Jean-Yves

    2018-03-01

    We argue that relativistic hydrodynamics is able to make robust predictions for soft particle production in Xe+Xe collisions at the CERN Large Hadron Collider (LHC). The change of system size from Pb+Pb to Xe+Xe provides a unique opportunity to test the scaling laws inherent to fluid dynamics. Using event-by-event hydrodynamic simulations, we make quantitative predictions for several observables: mean transverse momentum, anisotropic flow coefficients, and their fluctuations. Results are shown as a function of collision centrality.

  19. The Teacher Sense of Efficacy Scale: Validation Evidence and Behavioral Prediction. WCER Working Paper No. 2006-7

    ERIC Educational Resources Information Center

    Heneman, Herbert G., III; Kimball, Steven; Milanowski, Anthony

    2006-01-01

    The present study contributes to knowledge of the construct validity of the short form of the Teacher Sense of Efficacy Scale (and by extension, given their similar content and psychometric properties, to the long form). The authors' research involves: (1) examining the psychometric properties of the TSES on a large sample of elementary, middle,…

  20. Numerical Modeling of STARx for Ex Situ Soil Remediation

    NASA Astrophysics Data System (ADS)

    Gerhard, J.; Solinger, R. L.; Grant, G.; Scholes, G.

    2016-12-01

    Growing stockpiles of soils contaminated with petroleum hydrocarbons are an outstanding problem worldwide. Self-sustaining Treatment for Active Remediation (STAR) is an emerging technology based on smouldering combustion that has been successfully deployed for in situ remediation. STAR has also been developed for ex situ applications (STARx). This work used a two-dimensional numerical model to systematically explore the sensitivity of ex situ remedial performance to key design and operational parameters. First, the model was calibrated and validated against pilot scale experiments, providing confidence that the rate and extent of treatment were correctly predicted. Simulations then investigated sensitivity of remedial performance to injected air flux, contaminant saturation, system configuration, heterogeneity of intrinsic permeability, heterogeneity of contaminant saturation, and system scale. Remedial performance was predicted to be most sensitive to the injected air flux, with higher air fluxes achieving higher treatment rates and remediating larger fractions of the initial contaminant mass. The uniformity of the advancing smouldering front was predicted to be highly dependent on effective permeability contrasts between treated and untreated sections of the contaminant pack. As a result, increased heterogeneity (of intrinsic permeability in particular) is predicted to lower remedial performance. Full-scale systems were predicted to achieve treatment rates an order of magnitude higher than the pilot scale for similar contaminant saturation and injected air flux. This work contributed to the large scale STARx treatment system that is being tested at a field site in Fall 2016.

  1. Linear Scaling Density Functional Calculations with Gaussian Orbitals

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.

    1999-01-01

    Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.

  2. High-Resolution Subtropical Summer Precipitation Derived from Dynamical Downscaling of the NCEP-DOE Reanalysis: How Much Small-Scale Information Is Added by a Regional Model?

    NASA Technical Reports Server (NTRS)

    Lim, Young-Kwon; Stefanova, Lydia B.; Chan, Steven C.; Schubert, Siegfried D.; OBrien, James J.

    2010-01-01

    This study assesses the regional-scale summer precipitation produced by the dynamical downscaling of analyzed large-scale fields. The main goal of this study is to investigate how much the regional model adds smaller scale precipitation information that the large-scale fields do not resolve. The modeling region for this study covers the southeastern United States (Florida, Georgia, Alabama, South Carolina, and North Carolina) where the summer climate is subtropical in nature, with a heavy influence of regional-scale convection. The coarse resolution (2.5° latitude/longitude) large-scale atmospheric variables from the National Centers for Environmental Prediction (NCEP)/DOE reanalysis (R2) are downscaled using the NCEP Environmental Climate Prediction Center regional spectral model (RSM) to produce precipitation at 20 km resolution for 16 summer seasons (1990-2005). The RSM produces realistic details in the regional summer precipitation at 20 km resolution. Compared to R2, the RSM-produced monthly precipitation shows better agreement with observations. There is a reduced wet bias and a more realistic spatial pattern of the precipitation climatology compared with the interpolated R2 values. The root mean square errors of the monthly R2 precipitation are reduced over 93% (1,697) of the 1,821 grid points in the five states. The temporal correlation also improves over 92% (1,675) of all grid points, such that the domain-averaged correlation increases from 0.38 (R2) to 0.55 (RSM). The RSM accurately reproduces the first two observed eigenmodes, compared with the R2 product for which the second mode is not properly reproduced. The spatial patterns for wet versus dry summer years are also successfully simulated in RSM. For shorter time scales, the RSM resolves heavy rainfall events and their frequency better than R2. Correlation and categorical classification (above/near/below average) for the monthly frequency of heavy precipitation days is also significantly improved by the RSM.
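
    The added-value question is answered with per-gridpoint verification statistics (RMSE and temporal correlation of monthly precipitation) computed for both the coarse reanalysis and the downscaled field against observations. A sketch of that bookkeeping with synthetic fields is shown below; only the count of 1,821 grid points is taken from the abstract, everything else is illustrative.

    ```python
    import numpy as np

    def gridpoint_skill(model, obs):
        """Per-gridpoint RMSE and temporal correlation for monthly precipitation.
        model, obs: arrays of shape (n_months, n_gridpoints)."""
        rmse = np.sqrt(np.mean((model - obs) ** 2, axis=0))
        ma, oa = model - model.mean(axis=0), obs - obs.mean(axis=0)
        corr = (ma * oa).mean(axis=0) / (ma.std(axis=0) * oa.std(axis=0))
        return rmse, corr

    rng = np.random.default_rng(3)
    obs = rng.gamma(2.0, 3.0, size=(48, 1821))               # 16 summers x 3 months, 1,821 grid points
    coarse = obs + rng.normal(0.0, 3.0, obs.shape) + 2.0     # wetter, noisier large-scale field (synthetic)
    downscaled = obs + rng.normal(0.0, 1.5, obs.shape)       # smaller-error downscaled field (synthetic)

    rmse_c, corr_c = gridpoint_skill(coarse, obs)
    rmse_d, corr_d = gridpoint_skill(downscaled, obs)
    print(f"RMSE reduced at {100 * np.mean(rmse_d < rmse_c):.0f}% of grid points")
    print(f"correlation improved at {100 * np.mean(corr_d > corr_c):.0f}% of grid points")
    ```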

  3. Large ejecta fragments from asteroids. [Abstract only

    NASA Technical Reports Server (NTRS)

    Asphaug, E.

    1994-01-01

    The asteroid 4 Vesta, with its unique basaltic crust, remains a key mystery of planetary evolution. A localized olivine feature suggests excavation of subcrustal material in a crater or impact basin comparable in size to the planetary radius (R_Vesta ≈ 280 km). Furthermore, a 'clan' of small asteroids associated with Vesta (by spectral and orbital similarities) may be ejecta from this impact and direct parents of the basaltic achondrites. To escape, these smaller (about 4-7 km) asteroids had to be ejected at speeds greater than the escape velocity, v_esc ≈ 350 m/s. This evidence that large fragments were ejected at high speed from Vesta has not been reconciled with the present understanding of impact physics. Analytical spallation models predict that an impactor capable of ejecting these 'chips off Vesta' would be almost the size of Vesta! Such an impact would lead to the catastrophic disruption of both bodies. A simpler analysis is outlined, based on comparison with cratering on Mars, and it is shown that Vesta could survive an impact capable of ejecting kilometer-scale fragments at sufficient speed. To what extent does Vesta survive the formation of such a large crater? This is best addressed using a hydrocode such as SALE 2D with centroidal gravity to predict velocities subsequent to impact. The fragmentation outcome and velocities subsequent to the impact are described to demonstrate that Vesta survives without large-scale disassembly or overturning of the crust. Vesta and its clan represent a valuable dataset for testing fragmentation hydrocodes such as SALE 2D and SPH 3D at planetary scales. Resolution required to directly model spallation 'chips' on a body 100 times as large is now marginally possible on modern workstations. These boundaries are important in near-surface ejection processes and in large-scale disruption leading to asteroid families and stripped cores.

  4. High precision predictions for exclusive VH production at the LHC

    DOE PAGES

    Li, Ye; Liu, Xiaohui

    2014-06-04

    We present a resummation-improved prediction for pp → VH + 0 jets at the Large Hadron Collider. We focus on highly-boosted final states in the presence of a jet veto to suppress the tt¯ background. In this case, conventional fixed-order calculations are plagued by the existence of large Sudakov logarithms α_s^n log^m(p_T^veto/Q) for Q ~ m_V + m_H which lead to unreliable predictions as well as large theoretical uncertainties, and thus limit the accuracy when comparing experimental measurements to the Standard Model. In this work, we show that the resummation of Sudakov logarithms beyond the next-to-next-to-leading-log accuracy, combined with the next-to-next-to-leading order calculation, reduces the scale uncertainty and stabilizes the perturbative expansion in the region where the vector bosons carry large transverse momentum. Thus, our result improves the precision with which Higgs properties can be determined from LHC measurements using boosted Higgs techniques.

  5. Predicting the cosmological constant with the scale-factor cutoff measure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Simone, Andrea; Guth, Alan H.; Salem, Michael P.

    2008-09-15

    It is well known that anthropic selection from a landscape with a flat prior distribution of cosmological constant Λ gives a reasonable fit to observation. However, a realistic model of the multiverse has a physical volume that diverges with time, and the predicted distribution of Λ depends on how the spacetime volume is regulated. A very promising method of regulation uses a scale-factor cutoff, which avoids a number of serious problems that arise in other approaches. In particular, the scale-factor cutoff avoids the 'youngness problem' (high probability of living in a much younger universe) and the 'Q and G catastrophes' (high probability for the primordial density contrast Q and gravitational constant G to have extremely large or small values). We apply the scale-factor cutoff measure to the probability distribution of Λ, considering both positive and negative values. The results are in good agreement with observation. In particular, the scale-factor cutoff strongly suppresses the probability for values of Λ that are more than about 10 times the observed value. We also discuss qualitatively the prediction for the density parameter Ω, indicating that with this measure there is a possibility of detectable negative curvature.

  6. Retrieving cosmological signal using cosmic flows

    NASA Astrophysics Data System (ADS)

    Bouillot, V.; Alimi, J.-M.

    2011-12-01

    To understand the origin of the anomalously high bulk flow at large scales, we use very large simulations in various cosmological models. To disentangle between cosmological and environmental effects, we select samples with bulk flow profiles similar to the observational data of Watkins et al. (2009), which exhibit a maximum in the bulk flow at 53 h^{-1} Mpc. The estimation of the cosmological parameters Ω_M and σ_8, done on those samples, is correct from the rms mass fluctuation, whereas this estimation gives completely false values when done on bulk flow measurements, hence showing a dependence of velocity fields on larger scales. By drawing a clear link between velocity fields at 53 h^{-1} Mpc and asymmetric patterns of the density field at 85 h^{-1} Mpc, we show that the bulk flow can depend largely on the environment. The retrieval of the cosmological signal is achieved by studying the convergence of the bulk flow towards the linear prediction at very large scales (~150 h^{-1} Mpc).
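
    For reference, the bulk flow at a given scale is simply the magnitude of the mean peculiar velocity of tracers within a sphere of that radius around the observer. A minimal sketch with mock tracers is given below; the mock catalogue and the uniform weighting are illustrative simplifications of what is done with the simulation samples.

    ```python
    import numpy as np

    def bulk_flow(positions, velocities, radius):
        """Magnitude of the mean peculiar velocity of tracers inside a sphere of
        `radius` (h^-1 Mpc) centred on the observer at the origin."""
        inside = np.linalg.norm(positions, axis=1) < radius
        return np.linalg.norm(velocities[inside].mean(axis=0))

    rng = np.random.default_rng(4)
    pos = rng.uniform(-150.0, 150.0, size=(20000, 3))                  # mock tracer positions, h^-1 Mpc
    vel = rng.normal(0.0, 300.0, size=(20000, 3)) + [80.0, 0.0, 0.0]   # mock peculiar velocities, km/s
    for radius in (53.0, 85.0, 150.0):
        print(f"bulk flow within {radius:.0f} h^-1 Mpc: {bulk_flow(pos, vel, radius):.0f} km/s")
    ```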

  7. Cryptic biodiversity loss linked to global climate change

    NASA Astrophysics Data System (ADS)

    Bálint, M.; Domisch, S.; Engelhardt, C. H. M.; Haase, P.; Lehrian, S.; Sauer, J.; Theissinger, K.; Pauls, S. U.; Nowak, C.

    2011-09-01

    Global climate change (GCC) significantly affects distributional patterns of organisms, and considerable impacts on biodiversity are predicted for the next decades. Inferred effects include large-scale range shifts towards higher altitudes and latitudes, facilitation of biological invasions and species extinctions. Alterations of biotic patterns caused by GCC have usually been predicted on the scale of taxonomically recognized morphospecies. However, the effects of climate change at the most fundamental level of biodiversity--intraspecific genetic diversity--remain elusive. Here we show that the use of morphospecies-based assessments of GCC effects will result in underestimations of the true scale of biodiversity loss. Species distribution modelling and assessments of mitochondrial DNA variability in nine montane aquatic insect species in Europe indicate that future range contractions will be accompanied by severe losses of cryptic evolutionary lineages and genetic diversity within these lineages. These losses greatly exceed those at the scale of morphospecies. We also document that the extent of range reduction may be a useful proxy when predicting losses of genetic diversity. Our results demonstrate that intraspecific patterns of genetic diversity should be considered when estimating the effects of climate change on biodiversity.

  8. Extracting Primordial Non-Gaussianity from Large Scale Structure in the Post-Planck Era

    NASA Astrophysics Data System (ADS)

    Dore, Olivier

    Astronomical observations have become a unique tool to probe fundamental physics. Cosmology, in particular, emerged as a data-driven science whose phenomenological modeling has achieved great success: in the post-Planck era, key cosmological parameters are measured to percent precision. A single model reproduces a wealth of astronomical observations involving very distinct physical processes at different times. This success leads to fundamental physical questions. One of the most salient is the origin of the primordial perturbations that grew to form the large-scale structures we now observe. More and more cosmological observables point to inflationary physics as the origin of the structure observed in the universe. Inflationary physics predicts the statistical properties of the primordial perturbations, which are thought to be slightly non-Gaussian. The detection of this small deviation from Gaussianity represents the next frontier in early Universe physics. To measure it would provide direct, unique and quantitative insights about the physics at play when the Universe was only a fraction of a second old, thus probing energies untouchable otherwise. On par with the well-known relic gravitational wave radiation -- the famous ``B-modes'' -- it is one of the few probes of inflation. This departure from Gaussianity leads to a very specific signature in the large-scale clustering of galaxies. Observing large-scale structure, we can thus establish a direct connection with fundamental theories of the early universe. In the post-Planck era, large-scale structures are our most promising pathway to measuring this primordial signal. Current estimates suggest that the next generation of space or ground based large scale structure surveys (e.g. the ESA EUCLID or NASA WFIRST missions) might enable a detection of this signal. This potentially huge payoff requires us to solidify the theoretical predictions supporting these measurements. Even if the exact signal we are looking for is of unknown amplitude, it is obvious that we must measure it as well as these groundbreaking data sets will permit. We propose to develop the supporting theoretical work to the point where the complete non-Gaussian signature can be extracted from these data sets. We will do so by developing three complementary directions: - We will develop the appropriate formalism to measure and model galaxy clustering on the largest scales. - We will study the impact of non-Gaussianity on higher-order statistics, the most promising statistics for our purpose. - We will make explicit the connection between these observables and the microphysics of a large class of inflation models, but also identify fundamental limitations to this interpretation.

  9. Water, Carbon, and Nutrient Cycling Following Insect-induced Tree Mortality: How Well Do Plot-scale Observations Predict Ecosystem-Scale Response?

    NASA Astrophysics Data System (ADS)

    Brooks, P. D.; Barnard, H. R.; Biederman, J. A.; Borkhuu, B.; Edburg, S. L.; Ewers, B. E.; Gochis, D. J.; Gutmann, E. D.; Harpold, A. A.; Hicke, J. A.; Pendall, E.; Reed, D. E.; Somor, A. J.; Troch, P. A.

    2011-12-01

    Widespread tree mortality caused by insect infestations and drought has impacted millions of hectares across western North America in recent years. Although previous work on post-disturbance responses (e.g. experimental manipulations, fire, and logging) provides insight into how water and biogeochemical cycles may respond to insect infestations and drought, we find that the unique nature of these drivers of tree mortality complicates extrapolation to larger scales. Building from previous work on forest disturbance, we present a conceptual model of how temporal changes in forest structure impact the individual components of energy balance, hydrologic partitioning, and biogeochemical cycling and the interactions among them. We evaluate and refine this model using integrated observations and process modeling on multiple scales including plot, stand, flux tower footprint, hillslope, and catchment to identify scaling relationships and emergent patterns in hydrological and biogeochemical responses. Our initial results suggest that changes in forest structure at point or plot scales largely have predictable effects on energy, water, and biogeochemical cycles that are well captured by land surface, hydrological, and biogeochemical models. However, observations from flux towers and nested catchments suggest that both the hydrological and biogeochemical effects observed at tree and plot scales may be attenuated or exacerbated at larger scales. Compensatory processes are associated with attenuation (e.g. as transpiration decreases, evaporation and sublimation increase), whereas both attenuation and exacerbation may result from nonlinear scaling behavior across transitions in topography and ecosystem structure that affect the redistribution of energy, water, and solutes. Consequently, the effects of widespread tree mortality on ecosystem services of water supply and carbon sequestration will likely depend on how spatial patterns in mortality severity across the landscape affect large-scale hydrological partitioning.

  10. Applied Distributed Model Predictive Control for Energy Efficient Buildings and Ramp Metering

    NASA Astrophysics Data System (ADS)

    Koehler, Sarah Muraoka

    Industrial large-scale control problems present an interesting algorithmic design challenge. A number of controllers must cooperate in real-time on a network of embedded hardware with limited computing power in order to maximize system efficiency while respecting constraints and despite communication delays. Model predictive control (MPC) can automatically synthesize a centralized controller which optimizes an objective function subject to a system model, constraints, and predictions of disturbance. Unfortunately, the computations required by model predictive controllers for large-scale systems often limit its industrial implementation only to medium-scale slow processes. Distributed model predictive control (DMPC) enters the picture as a way to decentralize a large-scale model predictive control problem. The main idea of DMPC is to split the computations required by the MPC problem amongst distributed processors that can compute in parallel and communicate iteratively to find a solution. Some popularly proposed solutions are distributed optimization algorithms such as dual decomposition and the alternating direction method of multipliers (ADMM). However, these algorithms ignore two practical challenges: substantial communication delays present in control systems and also problem non-convexity. This thesis presents two novel and practically effective DMPC algorithms. The first DMPC algorithm is based on a primal-dual active-set method which achieves fast convergence, making it suitable for large-scale control applications which have a large communication delay across its communication network. In particular, this algorithm is suited for MPC problems with a quadratic cost, linear dynamics, forecasted demand, and box constraints. We measure the performance of this algorithm and show that it significantly outperforms both dual decomposition and ADMM in the presence of communication delay. The second DMPC algorithm is based on an inexact interior point method which is suited for nonlinear optimization problems. The parallel computation of the algorithm exploits iterative linear algebra methods for the main linear algebra computations in the algorithm. We show that the splitting of the algorithm is flexible and can thus be applied to various distributed platform configurations. The two proposed algorithms are applied to two main energy and transportation control problems. The first application is energy efficient building control. Buildings represent 40% of energy consumption in the United States. Thus, it is significant to improve the energy efficiency of buildings. The goal is to minimize energy consumption subject to the physics of the building (e.g. heat transfer laws), the constraints of the actuators as well as the desired operating constraints (thermal comfort of the occupants), and heat load on the system. In this thesis, we describe the control systems of forced air building systems in practice. We discuss the "Trim and Respond" algorithm which is a distributed control algorithm that is used in practice, and show that it performs similarly to a one-step explicit DMPC algorithm. Then, we apply the novel distributed primal-dual active-set method and provide extensive numerical results for the building MPC problem. The second main application is the control of ramp metering signals to optimize traffic flow through a freeway system. This application is particularly important since urban congestion has more than doubled in the past few decades. 
The ramp metering problem is to maximize freeway throughput subject to freeway dynamics (derived from mass conservation), actuation constraints, freeway capacity constraints, and predicted traffic demand. In this thesis, we develop a hybrid model predictive controller for ramp metering that is guaranteed to be persistently feasible and stable. This contrasts with previous work on MPC for ramp metering, where such guarantees are absent. We apply a smoothing method to the hybrid model predictive controller and then use the inexact interior point method to solve this nonlinear, non-convex ramp metering problem.
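
    As a concrete baseline for the distributed schemes mentioned above, the following is a minimal sketch of dual decomposition on a toy resource-coupled problem: each subsystem minimizes a local quadratic cost under box constraints, and a shared price coordinates the coupling constraint. It is illustrative only (it is not the thesis's primal-dual active-set or inexact interior point algorithm), and all costs, bounds, and the coupling demand are invented.

```python
import numpy as np

# Toy coupled problem: minimize sum_i 0.5*q_i*u_i^2 + c_i*u_i
# subject to lb_i <= u_i <= ub_i and sum_i u_i = demand.
q = np.array([2.0, 1.0, 4.0])        # local quadratic cost weights (illustrative)
c = np.array([0.5, -1.0, 0.0])       # local linear cost terms
lb, ub = np.zeros(3), np.full(3, 2.0)
demand = 3.0

lam, step = 0.0, 0.4                 # dual price and subgradient step size
for k in range(200):
    # Each subsystem solves its local box-constrained problem in closed form, in parallel.
    u = np.clip(-(c + lam) / q, lb, ub)
    residual = u.sum() - demand      # coupling-constraint violation
    lam += step * residual           # price update broadcast back to the subsystems
    if abs(residual) < 1e-6:
        break

print(k, u, lam)                     # iteration count, allocation, and converged price
```

    Every price update requires a communication round trip, which is why iteration counts (and hence communication delay) dominate the practical comparison between such baselines and the methods developed in the thesis.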

  11. Supermassive Black Holes and Galaxy Evolution

    NASA Technical Reports Server (NTRS)

    Merritt, D.

    2004-01-01

    Supermassive black holes appear to be generic components of galactic nuclei. The formation and growth of black holes is intimately connected with the evolution of galaxies on a wide range of scales. For instance, mergers between galaxies containing nuclear black holes would produce supermassive binaries which eventually coalesce via the emission of gravitational radiation. The formation and decay of these binaries is expected to produce a number of observable signatures in the stellar distribution. Black holes can also affect the large-scale structure of galaxies by perturbing the orbits of stars that pass through the nucleus. Large-scale N-body simulations are beginning to generate testable predictions about these processes which will allow us to draw inferences about the formation history of supermassive black holes.

  12. Predicting protein functions from redundancies in large-scale protein interaction networks

    NASA Technical Reports Server (NTRS)

    Samanta, Manoj Pratim; Liang, Shoudan

    2003-01-01

    Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
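
    A common way to score such shared-partner overlaps is a hypergeometric tail probability for observing at least the measured number of common neighbors by chance; the sketch below illustrates the general idea (the paper's exact statistic may differ), and the network size and degrees are invented.

```python
from scipy.stats import hypergeom

def shared_partner_pvalue(n_total, deg_a, deg_b, n_shared):
    """P-value for observing >= n_shared common interaction partners between
    two proteins with deg_a and deg_b partners in a network of n_total
    proteins, under a random (hypergeometric) null model."""
    # sf(k - 1, ...) gives P(X >= k) for the hypergeometric distribution
    return hypergeom.sf(n_shared - 1, n_total, deg_a, deg_b)

# Illustrative numbers only: ~6000 yeast proteins, two proteins sharing 12 partners.
print(shared_partner_pvalue(6000, 40, 55, 12))
```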

  13. Large scale mass redistribution and surface displacement from GRACE and SLR

    NASA Astrophysics Data System (ADS)

    Cheng, M.; Ries, J. C.; Tapley, B. D.

    2012-12-01

    Mass transport between the atmosphere, ocean and solid earth results in temporal variations of the Earth's gravity field and loading-induced deformation of the Earth. Recent space-borne observations, such as the GRACE mission, are providing extremely high precision temporal variations of the gravity field. Results from 10 years of GRACE data show significant annual variations of large-scale vertical and horizontal displacements over the Amazon, the Himalayan region and South Asia, Africa, and Russia, with amplitudes of a few mm. Improved understanding from monitoring and modeling of the large-scale mass redistribution and the Earth's response is critical for all studies in the geosciences, in particular for determination of the Terrestrial Reference System (TRS), including geocenter motion. This paper will report results for the observed seasonal variations in the 3-dimensional surface displacements of SLR and GPS tracking stations and compare them with predictions from the time series of GRACE monthly gravity solutions.

  14. Where and why hyporheic exchange is important: Inferences from a parsimonious, physically-based river network model

    NASA Astrophysics Data System (ADS)

    Gomez-Velez, J. D.; Harvey, J. W.

    2014-12-01

    Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data as well as models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically-based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS is a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, bankfull width, median grain size, sinuosity, channel slope, and regional groundwater gradients. Multi-scale hyporheic flow is computed based on combining simple but powerful analytical and numerical expressions that have been previously published. We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bedforms dominates hyporheic fluxes and turnover rates along the river corridor. Moreover, the hyporheic zone's potential for biogeochemical transformations is comparable across stream orders, but the abundance of lower-order channels results in a considerably higher cumulative effect for low-order streams. Thus, vertical exchange beneath submerged bedforms has more potential for biogeochemical transformations than lateral exchange beneath banks, although lateral exchange through meanders may be important in large rivers. These results have implications for predicting outcomes of river and basin management practices.

  15. Season-ahead water quality forecasts for the Schuylkill River, Pennsylvania

    NASA Astrophysics Data System (ADS)

    Block, P. J.; Leung, K.

    2013-12-01

    Anticipating and preparing for elevated water quality parameter levels in critical water sources, using weather forecasts, is not uncommon. In this study, we explore the feasibility of extending this prediction scale to a season-ahead for the Schuylkill River in Philadelphia, utilizing both statistical and dynamical prediction models, to characterize the season. This advance information has relevance for recreational activities, ecosystem health, and water treatment, as the Schuylkill provides 40% of Philadelphia's water supply. The statistical model associates large-scale climate drivers with streamflow and water quality parameter levels; numerous variables from NOAA's CFSv2 model are evaluated for the dynamical approach. A multi-model combination is also assessed. Results indicate moderately skillful prediction of average summertime total coliform and wintertime turbidity, using season-ahead oceanic and atmospheric variables, predominantly from the North Atlantic Ocean. Models predicting the number of elevated turbidity events across the wintertime season are also explored.

  16. Equivalent isotropic scattering formulation for transient short-pulse radiative transfer in anisotropic scattering planar media.

    PubMed

    Guo, Z; Kumar, S

    2000-08-20

    An isotropic scaling formulation is evaluated for transient radiative transfer in a one-dimensional planar slab subject to collimated and/or diffuse irradiation. The Monte Carlo method is used to implement the equivalent scattering and exact simulations of the transient short-pulse radiation transport through forward and backward anisotropic scattering planar media. The scaled equivalent isotropic scattering results are compared with predictions of anisotropic scattering in various problems. It is found that the equivalent isotropic scaling law is not appropriate for backward-scattering media in transient radiative transfer. Even for an optically diffuse medium, the differences in temporal transmittance and reflectance profiles between predictions of backward anisotropic scattering and equivalent isotropic scattering are large. Additionally, for both forward and backward anisotropic scattering media, the transient equivalent isotropic results are strongly affected by the change of photon flight time, owing to the change of flight direction associated with the isotropic scaling technique.
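
    For reference, the equivalent isotropic scattering (similarity) relations being evaluated are commonly written in terms of the asymmetry factor g as follows; this is the standard steady-state form in generic notation, not necessarily the paper's exact expressions:

```latex
\sigma_s^{*} = (1-g)\,\sigma_s, \qquad
\kappa_a^{*} = \kappa_a, \qquad
\beta^{*} = \kappa_a + (1-g)\,\sigma_s, \qquad
\omega^{*} = \frac{(1-g)\,\omega}{1-g\,\omega}
```

    That is, the anisotropically scattering medium is replaced by an isotropic one with a reduced scattering coefficient; the abstract's point is that this replacement distorts photon flight times and hence the transient transmittance and reflectance, especially for backward-scattering media.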

  17. Pollutant Transport and Fate: Relations Between Flow-paths and Downstream Impacts of Human Activities

    NASA Astrophysics Data System (ADS)

    Thorslund, J.; Jarsjo, J.; Destouni, G.

    2017-12-01

    The quality of freshwater resources is increasingly impacted by human activities. Humans also extensively change the structure of landscapes, which may alter natural hydrological processes. To manage and maintain freshwater of good quality, it is critical to understand how pollutants are released into, and transported and transformed within, the hydrological system. Some key scientific questions include: What are the net downstream impacts of pollutants across different hydroclimatic and human disturbance conditions, and on different scales? What are the functions of, and interactions between, components of the landscape, such as wetlands, in mitigating pollutant load delivery to downstream recipients? We explore these questions by synthesizing results from several relevant case study examples of intensely human-impacted hydrological systems. These case study sites have been specifically evaluated in terms of the net impact of human activities on pollutant input to the aquatic system, as well as flow-path distributions through wetlands as a potential ecosystem service of pollutant mitigation. Results show that although individual wetlands have high retention capacity, efficient net retention effects were not always achieved at the larger landscape scale. Evidence suggests that the function of wetlands as mitigation solutions for pollutant loads is largely controlled by large-scale parallel and circular flow-paths, through which multiple wetlands are interconnected in the landscape. To achieve net mitigation effects at large scale, a large fraction of the polluted large-scale flows must be transported through multiple connected wetlands. Although such large-scale flow interactions are critical for assessing water pollution spreading and fate through the landscape, our synthesis shows a frequent lack of knowledge at such scales. We suggest ways forward for addressing the mismatch between the large scales at which key pollutant pressures and water quality changes take place and the relatively small scales at which most studies and implementations are currently made. These suggestions can help bridge critical knowledge gaps, as needed for improving water quality predictions and mitigation solutions under human and environmental changes.

  18. Structural similitude and design of scaled down laminated models

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Rezaeepazhand, J.

    1993-01-01

    The excellent mechanical properties of laminated composite structures make them prime candidates for a wide variety of applications in aerospace, mechanical and other branches of engineering. The enormous design flexibility of advanced composites is obtained at the cost of a large number of design parameters. Due to the complexity of these systems and the lack of complete design-based information, designers tend to be conservative in their designs. Furthermore, any new design is extensively evaluated experimentally until it achieves the necessary reliability, performance and safety. However, the experimental evaluation of composite structures is costly and time consuming. Consequently, it is extremely useful if a full-scale structure can be replaced by a similar scaled-down model which is much easier to work with. Furthermore, a dramatic reduction in cost and time can be achieved if available experimental data for a specific structure can be used to predict the behavior of a group of similar systems. This study investigates problems associated with the design of scaled models. Such a study is important since it provides the necessary scaling laws and the factors which affect the accuracy of the scale models. Similitude theory is employed to develop the necessary similarity conditions (scaling laws). Scaling laws provide the relationship between a full-scale structure and its scale model, and can be used to extrapolate the experimental data of a small, inexpensive, and testable model into design information for a large prototype. Because of the large number of design parameters, identification of the principal scaling laws by the conventional method (dimensional analysis) is tedious. Similitude theory based on the governing equations of the structural system is more direct and simpler in execution. The difficulty of making completely similar scale models often leads to accepting a certain type of distortion from exact duplication of the prototype (partial similarity). Both complete and partial similarity are discussed. The procedure consists of systematically observing the effect of each parameter and the corresponding scaling laws. Then acceptable intervals and limitations for these parameters and scaling laws are discussed. In each case, a set of valid scaling factors and corresponding response scaling laws that accurately predict the response of prototypes from experimental models is introduced. The examples used include rectangular laminated plates under destabilizing loads applied individually, the vibrational characteristics of the same plates, as well as cylindrical bending of beam-plates.

  19. Toward large eddy simulation of turbulent flow over an airfoil

    NASA Technical Reports Server (NTRS)

    Choi, Haecheon

    1993-01-01

    The flow field over an airfoil contains several distinct flow characteristics, e.g. laminar, transitional, and turbulent boundary layer flow, flow separation, unstable free shear layers, and a wake. This diversity of flow regimes taxes the presently available Reynolds-averaged turbulence models. Such models are generally tuned to predict a particular flow regime, and adjustments are necessary for the prediction of a different flow regime. Similar difficulties are likely to emerge when the large eddy simulation technique is applied with the widely used Smagorinsky model. This model has not been successful in correctly representing different turbulent flow fields with a single universal constant and has an incorrect near-wall behavior. Germano et al. (1991) and Ghosal, Lund & Moin have developed a new subgrid-scale model, the dynamic model, which is very promising in alleviating many of the persistent inadequacies of the Smagorinsky model: the model coefficient is computed dynamically as the calculation progresses rather than being input a priori. The model has been remarkably successful in the prediction of several turbulent and transitional flows. We plan to simulate turbulent flow over a '2D' airfoil using the large eddy simulation technique. Our primary objective is to assess the performance of the newly developed dynamic subgrid-scale model for computation of complex flows about aircraft components and to compare the results with those obtained using the Reynolds-averaged approach and with experiments. The present computation represents the first application of large eddy simulation to a flow of aeronautical interest and a key demonstration of the capabilities of the large eddy simulation technique.
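
    As background, the fixed-constant Smagorinsky model and the dynamic procedure that replaces its constant can be summarized as follows (standard formulas, not specific to this report):

```latex
\nu_t = (C_s\,\Delta)^2\,\lvert\bar{S}\rvert, \qquad
\lvert\bar{S}\rvert = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
C_d = \frac{\langle L_{ij} M_{ij}\rangle}{\langle M_{ij} M_{ij}\rangle}, \qquad
L_{ij} = \widehat{\bar{u}_i \bar{u}_j} - \hat{\bar{u}}_i\,\hat{\bar{u}}_j
```

    Here the hat denotes a coarser test filter, L_ij is the resolved stress computable from the simulated field, and M_ij collects the difference of the modeled stresses at the grid and test filter levels (Lilly's least-squares form), so the coefficient adapts locally to the flow instead of being a single universal constant.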

  20. Quartified leptonic color, bound states, and future electron–positron collider

    DOE PAGES

    Kownacki, Corey; Ma, Ernest; Pollard, Nicholas; ...

    2017-04-04

    The [SU(3)]^4 quartification model of Babu, Ma, and Willenbrock (BMW), proposed in 2003, predicts a confining leptonic color SU(2) gauge symmetry, which becomes strong at the keV scale. Also, it predicts the existence of three families of half-charged leptons (hemions) below the TeV scale. These hemions are confined to form bound states which are not so easy to discover at the Large Hadron Collider (LHC). But, just as J/ψ and Υ appeared as sharp resonances in e⁻e⁺ colliders of the 20th century, the corresponding ‘hemionium’ states are expected at a future e⁻e⁺ collider of the 21st century.

  1. Quantitative Earthquake Prediction on Global and Regional Scales

    NASA Astrophysics Data System (ADS)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different sizes. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolating its trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities for prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, helps avoid basic errors in earthquake prediction claims and suggests rules and recipes for adequate classification, comparison and optimization of earthquake predictions. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing, aimed at prediction of the largest earthquakes worldwide, has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing the spatial uncertainty down to 1-3 source dimensions, although at the cost of additional failures-to-predict. Despite the limited accuracy, considerable damage could be prevented by timely, knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes, can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and mega-earthquakes of M9.0+. Monitoring at regional scales may require application of a recently proposed scheme for the spatial stabilization of the intermediate-term middle-range predictions. The scheme guarantees a more objective and reliable diagnosis of times of increased probability and is less restrictive with respect to input seismic data. It makes feasible the reestablishment of seismic monitoring aimed at prediction of large magnitude earthquakes in the Caucasus and Central Asia, which, to our regret, was discontinued in 1991. The first results of that monitoring (1986-1990) were encouraging, at least for M6.5+.

  2. Multi-granularity Bandwidth Allocation for Large-Scale WDM/TDM PON

    NASA Astrophysics Data System (ADS)

    Gao, Ziyue; Gan, Chaoqin; Ni, Cuiping; Shi, Qiongling

    2017-12-01

    WDM (wavelength-division multiplexing)/TDM (time-division multiplexing) PON (passive optical network) is viewed as a promising solution for delivering multiple services and applications, such as high-definition video, video conferencing and data traffic. Considering real-time transmission, QoS (quality of service) requirements and the differentiated services model, a multi-granularity dynamic bandwidth allocation (DBA) scheme operating in both the wavelength and time domains is proposed in this paper for large-scale hybrid WDM/TDM PON. The proposed scheme achieves load balance by using bandwidth prediction. Based on the bandwidth prediction, wavelength assignment can be realized fairly and effectively to satisfy the different demands of the various traffic classes. In particular, the allocation of residual bandwidth further augments the DBA and makes full use of the bandwidth resources in the network. To further improve network performance, two schemes named extending the cycle of one free wavelength (ECoFW) and large bandwidth shrinkage (LBS) are proposed, which prevent transmission interruptions when a user employs more than one wavelength. The simulation results show the effectiveness of the proposed scheme.
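
    The role of bandwidth prediction in such a DBA can be illustrated with a greatly simplified, hypothetical sketch (not the paper's ECoFW/LBS schemes): each ONU's next-cycle demand is forecast by exponential smoothing, and the wavelength capacity is then granted proportionally, with residual capacity redistributed.

```python
def forecast(prev_forecast, observed, alpha=0.3):
    """Exponentially smoothed demand forecast for the next polling cycle."""
    return alpha * observed + (1 - alpha) * prev_forecast

def allocate(demands, capacity):
    """Grant bandwidth proportionally to predicted demand, capped at the request,
    then hand residual capacity to ONUs that are still unsatisfied."""
    total = sum(demands)
    grants = [min(d, capacity * d / total) if total > 0 else 0.0 for d in demands]
    residual = capacity - sum(grants)
    for i, d in enumerate(demands):          # simple residual redistribution
        extra = min(d - grants[i], residual)
        grants[i] += extra
        residual -= extra
    return grants

# Toy example: three ONUs sharing a 1 Gb/s wavelength (all numbers illustrative).
predicted = [forecast(400, 500), forecast(200, 100), forecast(300, 700)]
print(allocate(predicted, 1000.0))
```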

  3. Plate motions and deformations from geologic and geodetic data

    NASA Technical Reports Server (NTRS)

    Jordan, Thomas H.

    1990-01-01

    An analysis of geodetic data in the vicinity of the Crustal Dynamics Program (CDP) site at Vandenberg Air Force Base (VNDN) is presented. The utility of space-geodetic data in the monitoring of transient strains associated with earthquakes in tectonically active areas like California is investigated. Of particular interest is the possibility that space-geodetic methods may be able to provide critical new data on deformations precursory to large seismic events. Although earthquake precursory phenomena are not well understood, the monitoring of small strains in the vicinity of active faults is a promising technique for studying the mechanisms that nucleate large earthquakes and, ultimately, for earthquake prediction. Space-geodetic techniques are now capable of measuring baselines of tens to hundreds of kilometers with a precision of a few parts in 10^8. Within the next few years, it will be possible to record and analyze large-scale strain variations with this precision continuously in real time. Thus, space-geodetic techniques may become tools for earthquake prediction. In anticipation of this capability, several questions related to the temporal and spatial scales associated with subseismic deformation transients are examined.

  4. Fractal Tempo Fluctuation and Pulse Prediction

    PubMed Central

    Rankin, Summer K.; Large, Edward W.; Fink, Philip W.

    2010-01-01

    We investigated people's ability to adapt to the fluctuating tempi of music performance. In Experiment 1, four pieces from different musical styles were chosen, and performances were recorded from a skilled pianist who was instructed to play with natural expression. Spectral and rescaled range analyses on interbeat interval time-series revealed long-range (1/f type) serial correlations and fractal scaling in each piece. Stimuli for Experiment 2 included two of the performances from Experiment 1, with mechanical versions serving as controls. Participants tapped the beat at ¼- and ⅛-note metrical levels, successfully adapting to large tempo fluctuations in both performances. Participants predicted the structured tempo fluctuations, with superior performance at the ¼-note level. Thus, listeners may exploit long-range correlations and fractal scaling to predict tempo changes in music. PMID:25190901

  5. Derivation and precision of mean field electrodynamics with mesoscale fluctuations

    NASA Astrophysics Data System (ADS)

    Zhou, Hongzhe; Blackman, Eric G.

    2018-06-01

    Mean field electrodynamics (MFE) facilitates practical modelling of secular, large scale properties of astrophysical or laboratory systems with fluctuations. Practitioners commonly assume wide scale separation between mean and fluctuating quantities, to justify equality of ensemble and spatial or temporal averages. Often however, real systems do not exhibit such scale separation. This raises two questions: (I) What are the appropriate generalized equations of MFE in the presence of mesoscale fluctuations? (II) How precise are theoretical predictions from MFE? We address both by first deriving the equations of MFE for different types of averaging, along with mesoscale correction terms that depend on the ratio of averaging scale to variation scale of the mean. We then show that even if these terms are small, predictions of MFE can still have a significant precision error. This error has an intrinsic contribution from the dynamo input parameters and a filtering contribution from differences in the way observations and theory are projected through the measurement kernel. Minimizing the sum of these contributions can produce an optimal scale of averaging that makes the theory maximally precise. The precision error is important to quantify when comparing to observations because it quantifies the resolution of predictive power. We exemplify these principles for galactic dynamos, comment on broader implications, and identify possibilities for further work.

  6. Species richness and biomass explain spatial turnover in ecosystem functioning across tropical and temperate ecosystems.

    PubMed

    Barnes, Andrew D; Weigelt, Patrick; Jochum, Malte; Ott, David; Hodapp, Dorothee; Haneda, Noor Farikhah; Brose, Ulrich

    2016-05-19

    Predicting ecosystem functioning at large spatial scales rests on our ability to scale up from local plots to landscapes, but this is highly contingent on our understanding of how functioning varies through space. Such an understanding has been hampered by a strong experimental focus of biodiversity-ecosystem functioning research restricted to small spatial scales. To address this limitation, we investigate the drivers of spatial variation in multitrophic energy flux-a measure of ecosystem functioning in complex communities-at the landscape scale. We use a structural equation modelling framework based on distance matrices to test how spatial and environmental distances drive variation in community energy flux via four mechanisms: species composition, species richness, niche complementarity and biomass. We found that in both a tropical and a temperate study region, geographical and environmental distance indirectly influence species richness and biomass, with clear evidence that these are the dominant mechanisms explaining variability in community energy flux over spatial and environmental gradients. Our results reveal that species composition and trait variability may become redundant in predicting ecosystem functioning at the landscape scale. Instead, we demonstrate that species richness and total biomass may best predict rates of ecosystem functioning at larger spatial scales. © 2016 The Author(s).

  7. Development and Evaluation of the Sugar-Sweetened Beverages Media Literacy (SSB-ML) Scale and Its Relationship With SSB Consumption.

    PubMed

    Chen, Yvonnes; Porter, Kathleen J; Estabrooks, Paul A; Zoellner, Jamie

    2017-10-01

    Understanding how adults' media literacy skill sets impact their sugar-sweetened beverage (SSB) intake provides insight into designing effective interventions to enhance their critical analysis of marketing messages and thus improve their healthy beverage choices. However, a media literacy scale focusing on SSBs is lacking. This cross-sectional study uses baseline data from a large randomized controlled trial to (a) describe the psychometric properties of the SSB Media Literacy (SSB-ML) scale and its subdomains, (b) examine how the scale varies across demographic variables, and (c) examine the scale's concurrent validity in predicting SSB consumption. Results from 293 adults in rural southwestern Virginia (81.6% female, 94.0% White, 54.1% receiving SNAP and/or WIC benefits, average 410 SSB kcal daily) show that the overall SSB-ML scale and its subdomains have strong internal consistencies (Cronbach's alphas ranging from 0.65 to 0.83). The Representation & Reality domain significantly predicted SSB kilocalories, after controlling for demographic variables. This study has implications for the assessment and inclusion of context-specific media literacy skills in behavioral interventions.

  8. Cognitive Rationalizations for Tanning-Bed Use: A Preliminary Exploration

    PubMed Central

    Banerjee, Smita C.; Hay, Jennifer L.; Greene, Kathryn

    2016-01-01

    Objectives: To examine the construct and predictive utility of an adapted cognitive rationalization scale for tanning-bed use. Methods: Current/former tanning-bed-using undergraduate students (N = 216; 87.6% females; 78.4% white) at a large northeastern university participated in a survey. A cognitive rationalization for tanning-bed use scale was adapted. Standardized self-report measures of past tanning-bed use, advantages of tanning, perceived vulnerability to photoaging, tanning-bed use dependence, and tanning-bed use intention were also administered. Results: The cognitive rationalization scale exhibited strong construct and predictive validity. Current tanners and tanning-bed-use-dependent participants endorsed rationalizations more strongly than did former tanners and not-tanning-bed-use-dependent participants, respectively. Conclusions: Findings indicate that cognitive rationalizations help explain the discrepancy between inconsistent cognitions. PMID:23985280

  9. Superfluidity, Bose-Einstein condensation, and structure in one-dimensional Luttinger liquids

    NASA Astrophysics Data System (ADS)

    Vranješ Markić, L.; Vrcan, H.; Zuhrianda, Z.; Glyde, H. R.

    2018-01-01

    We report diffusion Monte Carlo (DMC) and path integral Monte Carlo (PIMC) calculations of the properties of a one-dimensional (1D) Bose quantum fluid. The equation of state, the superfluid fraction ρ_S/ρ_0, the one-body density matrix n(x), the pair distribution function g(x), and the static structure factor S(q) are evaluated. The aim is to test Luttinger liquid (LL) predictions for 1D fluids over a wide range of fluid density and LL parameter K. The 1D Bose fluid examined is a single chain of 4He atoms confined to a line in the center of a narrow nanopore. The atoms cannot exchange positions in the nanopore, the criterion for 1D. The fluid density is varied from the spinodal density, where the 1D liquid is unstable to droplet formation, to the density of bulk liquid 4He. In this range, K varies from K > 2 at low density, where a robust superfluid is predicted, to K < 0.5, where fragile 1D superflow and solidlike peaks in S(q) are predicted. For uniform pore walls, ρ_S/ρ_0 scales as predicted by LL theory. The n(x) and g(x) show long-range oscillations and decay with x as predicted by LL theory. The amplitude of the oscillations is large at high density (small K) and small at low density (large K). The K values obtained from different properties agree well, verifying the internal structure of LL theory. In the presence of disorder, ρ_S/ρ_0 does not scale as predicted by LL theory. A single v_J parameter in the LL theory that recovers LL scaling was not found. The one-body density matrix (OBDM) in disorder is well predicted by LL theory. The "dynamical" superfluid fraction, ρ_SD/ρ_0, is determined. The physics of the deviation from LL theory in disorder and the "dynamical" ρ_SD/ρ_0 are discussed.

  10. Water and salt balance modelling to predict the effects of land-use changes in forested catchments. 3. The large catchment model

    NASA Astrophysics Data System (ADS)

    Sivapalan, Murugesu; Viney, Neil R.; Jeevaraj, Charles G.

    1996-03-01

    This paper presents an application of a long-term, large catchment-scale, water balance model developed to predict the effects of forest clearing in the south-west of Western Australia. The conceptual model simulates the basic daily water balance fluxes in forested catchments before and after clearing. The large catchment is divided into a number of sub-catchments (1-5 km² in area), which are taken as the fundamental building blocks of the large catchment model. The responses of the individual subcatchments to rainfall and pan evaporation are conceptualized in terms of three inter-dependent subsurface stores A, B and F, which are considered to represent the moisture states of the subcatchments. Details of the subcatchment-scale water balance model have been presented earlier in Part 1 of this series of papers. The response of any subcatchment is a function of its local moisture state, as measured by the local values of the stores. The variations of the initial values of the stores among the subcatchments are described in the large catchment model through simple, linear equations involving a number of similarity indices representing topography, mean annual rainfall and level of forest clearing. The model is applied to the Conjurunup catchment, a medium-sized (39.6 km²) catchment in the south-west of Western Australia. The catchment has been heterogeneously (in space and time) cleared for bauxite mining and subsequently rehabilitated. For this application, the catchment is divided into 11 subcatchments. The model parameters are estimated by calibration, by comparing observed and predicted runoff values, over an 18-year period, for the large catchment and two of the subcatchments. Excellent fits are obtained.

  11. Detectability of large-scale power suppression in the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan

    2010-12-01

    Suppression in primordial power on the Universe’s largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a large-volume (~100 Gpc³) galaxy redshift survey. A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.

  12. Using the abbreviated injury severity and Glasgow Coma Scale scores to predict 2-week mortality after traumatic brain injury.

    PubMed

    Timmons, Shelly D; Bee, Tiffany; Webb, Sharon; Diaz-Arrastia, Ramon R; Hesdorffer, Dale

    2011-11-01

    Prediction of outcome after traumatic brain injury (TBI) remains elusive. We tested the use of a single hospital Glasgow Coma Scale (GCS) Score, GCS Motor Score, and the Head component of the Abbreviated Injury Scale (AIS) Score to predict 2-week cumulative mortality in a large cohort of TBI patients admitted to the eight U.S. Level I trauma centers in the TBI Clinical Trials Network. Data on 2,808 TBI patients were entered into a centralized database. These TBI patients were categorized as severe (GCS score, 3-8), moderate (9-12), or complicated mild (13-15 with positive computed tomography findings). Intubation and chemical paralysis were recorded. The cumulative incidence of mortality in the first 2 weeks after head injury was calculated using Kaplan-Meier survival analysis. Cox proportional hazards regression was used to estimate the magnitude of the risk for 2-week mortality. Two-week cumulative mortality was independently predicted by GCS, GCS Motor Score, and Head AIS. GCS Severity Category and GCS Motor Score were stronger predictors of 2-week mortality than Head AIS. There was also an independent effect of age (<60 vs. ≥60) on mortality after controlling for both GCS and Head AIS Scores. Anatomic and physiologic scales are useful in the prediction of mortality after TBI. We did not demonstrate any added benefit to combining the total GCS or GCS Motor Scores with the Head AIS Score in the short-term prediction of death after TBI.
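
    To illustrate the survival-analysis approach described above (follow-up with administrative censoring at two weeks and Cox proportional hazards), the sketch below fits a Cox model to simulated data; the variable names, effect sizes, and data are entirely invented and are not the study's.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300
gcs = rng.integers(3, 16, n)                 # total GCS score, 3-15
head_ais = rng.integers(1, 6, n)             # Head AIS severity, 1-5
age_ge_60 = rng.integers(0, 2, n)            # age >= 60 indicator

# Hypothetical hazard: lower GCS, higher Head AIS, and older age shorten survival.
risk = 0.25 * (15 - gcs) + 0.4 * head_ais + 0.5 * age_ge_60
time_to_death = rng.exponential(60.0 / np.exp(risk - risk.mean()))
days = np.minimum(time_to_death, 14.0)       # 2-week follow-up window
died = (time_to_death <= 14.0).astype(int)   # administrative censoring at 14 days

df = pd.DataFrame(dict(days=days, died=died, gcs=gcs,
                       head_ais=head_ais, age_ge_60=age_ge_60))
cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="died")
cph.print_summary()                          # hazard ratios for GCS, Head AIS, age group
```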

  13. Street Level Hydrology: An Urban Application of the WRF-Hydro Framework in Denver, Colorado

    NASA Astrophysics Data System (ADS)

    Read, L.; Hogue, T. S.; Salas, F. R.; Gochis, D.

    2015-12-01

    Urban flood modeling at the watershed scale carries unique challenges in routing complexity, data resolution, social and political issues, and land surface - infrastructure interactions. The ability to accurately trace and predict the flow of water through the urban landscape enables better emergency response management, floodplain mapping, and data for future urban infrastructure planning and development. These services are of growing importance as urban population is expected to continue increasing by 1.84% per year for the next 25 years, increasing the vulnerability of urban regions to damages and loss of life from floods. Although a range of watershed-scale models have been applied in specific urban areas to examine these issues, there is a trend towards national scale hydrologic modeling enabled by supercomputing resources to understand larger system-wide hydrologic impacts and feedbacks. As such it is important to address how urban landscapes can be represented in large scale modeling processes. The current project investigates how coupling terrain and infrastructure routing can improve flow prediction and flooding events over the urban landscape. We utilize the WRF-Hydro modeling framework and a high-resolution terrain routing grid with the goal of compiling standard data needs necessary for fine scale urban modeling and dynamic flood forecasting in the urban setting. The city of Denver is selected as a case study, as it has experienced several large flooding events in the last five years and has an urban annual population growth rate of 1.5%, one of the highest in the U.S. Our work highlights the hydro-informatic challenges associated with linking channel networks and drainage infrastructure in an urban area using the WRF-Hydro modeling framework and high resolution urban models for short-term flood prediction.

  14. Simulation-Based Height of Burst Map for Asteroid Airburst Damage Prediction

    NASA Technical Reports Server (NTRS)

    Aftosmis, Michael J.; Mathias, Donovan L.; Tarano, Ana M.

    2017-01-01

    Entry and breakup models predict that airburst in the Earth's atmosphere is likely for asteroids up to approximately 200 meters in diameter. Objects of this size can deposit over 250 megatons of energy into the atmosphere. Fast-running ground damage prediction codes for such events rely heavily upon methods developed from nuclear weapons research to estimate the damage potential for an airburst at altitude. (Collins, 2005; Mathias, 2017; Hills and Goda, 1993). In particular, these tools rely upon the powerful yield scaling laws developed for point-source blasts that are used in conjunction with a Height of Burst (HOB) map to predict ground damage for an airburst of a specific energy at a given altitude. While this approach works extremely well for yields as large as tens of megatons, it becomes less accurate as yields increase to the hundreds of megatons potentially released by larger airburst events. This study revisits the assumptions underlying this approach and shows how atmospheric buoyancy becomes important as yield increases beyond a few megatons. We then use large-scale three-dimensional simulations to construct numerically generated height of burst maps that are appropriate at the higher energy levels associated with the entry of asteroids with diameters of hundreds of meters. These numerically generated HOB maps can then be incorporated into engineering methods for damage prediction, significantly improving their accuracy for asteroids with diameters greater than 80-100 m.
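
    The point-source scaling underlying classical height-of-burst maps is cube-root yield scaling (a standard result, stated generically here): distances at which a given peak overpressure occurs grow with the cube root of the released energy E, so a single map in scaled coordinates covers all yields:

```latex
\frac{R_1}{R_2} = \left(\frac{E_1}{E_2}\right)^{1/3}, \qquad
\bar{h} = \frac{h}{E^{1/3}}, \quad \bar{r} = \frac{r}{E^{1/3}}, \qquad
p_{\max} = f(\bar{h}, \bar{r})
```

    It is the yield-independence of f that the study finds breaking down above a few megatons, once buoyant rise of the hot burst products becomes dynamically important, which motivates the numerically generated maps.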

  15. Improving parallel I/O autotuning with performance modeling

    DOE PAGES

    Behzad, Babak; Byna, Surendra; Wild, Stefan M.; ...

    2014-01-01

    Various layers of the parallel I/O subsystem offer tunable parameters for improving I/O performance on large-scale computers. However, searching through a large parameter space is challenging. We are working towards an autotuning framework for determining the parallel I/O parameters that can achieve good I/O performance for different data write patterns. In this paper, we characterize parallel I/O and discuss the development of predictive models for use in effectively reducing the parameter space. Furthermore, applying our technique on tuning an I/O kernel derived from a large-scale simulation code shows that the search time can be reduced from 12 hours to 2 hours, while achieving a 54X I/O performance speedup.
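
    A generic illustration of using a predictive model to prune an I/O-tuning parameter space (the parameter names, ranges, and synthetic performance model below are hypothetical, not the paper's): benchmark a small sample of configurations, fit a surrogate, and keep only the configurations the surrogate ranks highest for real evaluation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical tunables: stripe count, stripe size (MiB), collective buffer (MiB), aggregators.
grid = np.array([[sc, ss, cb, ag]
                 for sc in (4, 8, 16, 32, 64)
                 for ss in (1, 4, 8, 16)
                 for cb in (1, 4, 16)
                 for ag in (1, 2, 4, 8)])

# Pretend we benchmarked a small random subset of configurations (write bandwidth, MiB/s).
idx = rng.choice(len(grid), size=40, replace=False)
observed_bw = 50 + 3 * grid[idx, 0] + 10 * np.log2(grid[idx, 1]) + rng.normal(0, 5, 40)

# Fit a surrogate, score the full space, and keep the most promising candidates.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(grid[idx], observed_bw)
predicted = model.predict(grid)
top_candidates = grid[np.argsort(predicted)[-10:]]
print(top_candidates)   # reduced set to evaluate with actual I/O benchmark runs
```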

  16. Predicting violence and recidivism in a large sample of males on probation or parole.

    PubMed

    Prell, Lettie; Vitacco, Michael J; Zavodny, Denis

    This study evaluated the utility of items and scales from the Iowa Violence and Victimization Instrument in a sample of 1,961 males from the state of Iowa who were on probation or released from prison to parole supervision. This is the first study to examine the potential of the Iowa Violence and Victimization Instrument to predict criminal offenses. The males were followed for 30 months immediately following their admission to probation or parole. AUC analyses indicated fair to good predictive power for the Iowa Violence and Victimization Instrument for charges of violence and victimization, but chance-level predictive power for drug offenses. Notably, both scales of the instrument performed equally well at the 30-month follow-up. Items on the Iowa Violence and Victimization Instrument not only predicted violence, but are straightforward to score. Violence management strategies are discussed as they relate to the current findings, including the potential to expand the measure to other jurisdictions and populations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul

    2016-04-01

    The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning retrospective predictions at the decadal (5-years), seasonal and sub-seasonal time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and sub-seasonal time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.
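
    The "exponential dependence of the vegetation cover on the LAI" mentioned above is commonly parameterized with a Lambert-Beer type relation; the generic form is shown here, and the coefficient value is an assumption rather than a number taken from the paper:

```latex
c_{\mathrm{veg}} = 1 - e^{-k\,\mathrm{LAI}}, \qquad k \approx 0.5
```

    With such a relation, sparse canopies (low LAI) expose more bare ground or snow while dense canopies saturate toward full cover, which is what allows LAI variability to modulate albedo, surface roughness, and the snow-shadowing effect described above.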

  18. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, A.; Catalano, F.; De Felice, M.; van den Hurk, B.; Doblas-Reyes, F. J.; Boussetta, S.; Balsamo, G.; Miller, P. A.

    2016-12-01

    The European consortium earth system model (EC-Earth; http://www.ec-earth.org) has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  19. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-08-01

    The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (twentieth century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2 m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  20. Multi-scale enhancement of climate prediction over land by increasing the model sensitivity to vegetation variability in EC-Earth

    NASA Astrophysics Data System (ADS)

    Alessandri, Andrea; Catalano, Franco; De Felice, Matteo; Van Den Hurk, Bart; Doblas Reyes, Francisco; Boussetta, Souhail; Balsamo, Gianpaolo; Miller, Paul A.

    2017-04-01

    The EC-Earth earth system model has been recently developed to include the dynamics of vegetation. In its original formulation, vegetation variability is simply operated by the Leaf Area Index (LAI), which affects climate basically by changing the vegetation physiological resistance to evapotranspiration. This coupling has been found to have only a weak effect on the surface climate modeled by EC-Earth. In reality, the effective sub-grid vegetation fractional coverage will vary seasonally and at interannual time-scales in response to leaf-canopy growth, phenology and senescence. Therefore it affects biophysical parameters such as the albedo, surface roughness and soil field capacity. To adequately represent this effect in EC-Earth, we included an exponential dependence of the vegetation cover on the LAI. By comparing two sets of simulations performed with and without the new variable fractional-coverage parameterization, spanning from centennial (20th Century) simulations and retrospective predictions to the decadal (5-years), seasonal and weather time-scales, we show for the first time a significant multi-scale enhancement of vegetation impacts in climate simulation and prediction over land. Particularly large effects at multiple time scales are shown over boreal winter middle-to-high latitudes over Canada, West US, Eastern Europe, Russia and eastern Siberia due to the implemented time-varying shadowing effect by tree-vegetation on snow surfaces. Over Northern Hemisphere boreal forest regions the improved representation of vegetation cover tends to correct the winter warm biases, improves the climate change sensitivity, the decadal potential predictability as well as the skill of forecasts at seasonal and weather time-scales. Significant improvements of the prediction of 2m temperature and rainfall are also shown over transitional land surface hot spots. Both the potential predictability at decadal time-scale and seasonal-forecasts skill are enhanced over Sahel, North American Great Plains, Nordeste Brazil and South East Asia, mainly related to improved performance in the surface evapotranspiration.

  1. High flexibility of DNA on short length scales probed by atomic force microscopy.

    PubMed

    Wiggins, Paul A; van der Heijden, Thijn; Moreno-Herrero, Fernando; Spakowitz, Andrew; Phillips, Rob; Widom, Jonathan; Dekker, Cees; Nelson, Philip C

    2006-11-01

    The mechanics of DNA bending on intermediate length scales (5-100 nm) plays a key role in many cellular processes, and is also important in the fabrication of artificial DNA structures, but previous experimental studies of DNA mechanics have focused on longer length scales than these. We use high-resolution atomic force microscopy on individual DNA molecules to obtain a direct measurement of the bending energy function appropriate for scales down to 5 nm. Our measurements imply that the elastic energy of highly bent DNA conformations is lower than predicted by classical elasticity models such as the worm-like chain (WLC) model. For example, we found that on short length scales, spontaneous large-angle bends are many times more prevalent than predicted by the WLC model. We test our data and model with an interlocking set of consistency checks. Our analysis also shows how our model is compatible with previous experiments, which have sometimes been viewed as confirming the WLC.
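
    For context, the worm-like chain benchmark against which these measurements are compared assigns a quadratic bending energy, so bend angles θ accumulated over a short contour length ℓ are Gaussian-distributed; the planar (2D) form appropriate to surface-adsorbed molecules is shown here as a standard result, not the paper's refined model:

```latex
\frac{E_{\mathrm{bend}}}{k_B T} = \frac{\ell_p}{2}\int_0^{L}\left(\frac{d\theta}{ds}\right)^{2} ds,
\qquad
P_{\mathrm{WLC}}(\theta;\ell) \propto \exp\!\left(-\frac{\ell_p\,\theta^{2}}{2\,\ell}\right)
```

    With a persistence length ℓ_p of roughly 50 nm for DNA, the reported excess of spontaneous large-angle bends at length scales of a few nanometers corresponds to a measured bend-angle distribution with a heavier tail than this Gaussian prediction.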

  2. Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA)

    NASA Technical Reports Server (NTRS)

    Lichtwardt, Jonathan; Paciano, Eric; Jameson, Tina; Fong, Robert; Marshall, David

    2012-01-01

    With the very recent advent of NASA's Environmentally Responsible Aviation Project (ERA), which is dedicated to designing aircraft that will reduce the impact of aviation on the environment, there is a need for research and development of methodologies to minimize fuel burn and emissions and to reduce community noise produced by regional airliners. ERA tackles airframe technology, propulsion technology, and vehicle systems integration to meet performance objectives in the time frame for the aircraft to be at a Technology Readiness Level (TRL) of 4-6 by the year 2020 (deemed N+2). The preceding project that pursued similar goals was NASA's Subsonic Fixed Wing (SFW) project. SFW focused on conducting research to improve prediction methods and technologies that will produce lower noise, lower emissions, and higher performing subsonic aircraft for the Next Generation Air Transportation System. The work described in this investigation was performed under NASA Research Announcement (NRA) contract #NNL07AA55C, funded by Subsonic Fixed Wing. The project started in 2007 with a specific goal of conducting a large-scale wind tunnel test along with the development of new and improved predictive codes for advanced powered-lift concepts. Many of the predictive codes were incorporated to refine the wind tunnel model outer mold line design. The goal of the large-scale wind tunnel test was to investigate powered-lift technologies and provide an experimental database to validate current and future modeling techniques. The powered-lift concepts investigated were a Circulation Control (CC) wing in conjunction with over-the-wing mounted engines, which entrain the exhaust to further increase the lift generated by CC technologies alone. The NRA was a five-year effort; during the first year the objective was to select and refine CESTOL concepts and then to complete a preliminary design of a large-scale wind tunnel model for the large-scale test. During the second, third, and fourth years the large-scale wind tunnel model was designed, manufactured, and calibrated. During the fifth year the large-scale wind tunnel test was conducted. This technical memo will describe all phases of the Advanced Model for Extreme Lift and Improved Aeroacoustics (AMELIA) project and provide a brief summary of the background and modeling efforts involved in the NRA. The conceptual designs considered for this project and the decision process for the selected configuration adapted for a wind tunnel model will be briefly discussed. The internal configuration of AMELIA is described, along with the internal measurements chosen to satisfy the requirement of obtaining a database of experimental data for future computational model validation. The external experimental techniques that were employed during the test, along with the large-scale wind tunnel test facility, are covered in great detail. Experimental measurements in the database include forces and moments, surface pressure distributions, local skin friction measurements, boundary and shear layer velocity profiles, far-field acoustic data, and noise signatures from turbofan propulsion simulators. Results for circulation control performance, over-the-wing mounted engine performance, and the combined performance are also discussed in great detail.

  3. High Resolution Forecasts in the Florida Straits: Predicting the Modulations of the Florida Current and Connectivity Around South Florida and Cuba

    NASA Astrophysics Data System (ADS)

    Kourafalou, V.; Kang, H.; Perlin, N.; Le Henaff, M.; Lamkin, J. T.

    2016-02-01

    Connectivity around the South Florida coastal regions and between South Florida and Cuba is largely influenced by a) local coastal processes and b) circulation in the Florida Straits, which is controlled by the larger-scale Florida Current variability. Prediction of the physical connectivity is a necessary component for several activities that require ocean forecasts, such as oil spill response, fisheries research, and search and rescue. This requires a predictive system that can accommodate the intense coastal-to-offshore interactions and the linkages to the complex regional circulation. The Florida Straits, South Florida and Florida Keys Hybrid Coordinate Ocean Model is such a regional ocean predictive system, covering a large area over the Florida Straits and the adjacent land areas and representing both coastal and oceanic processes. The real-time ocean forecast system is high resolution (~900 m), embedded in larger-scale predictive models. It includes detailed coastal bathymetry and high-resolution/high-frequency atmospheric forcing, and provides 7-day forecasts, updated daily (see: http://coastalmodeling.rsmas.miami.edu/). The unprecedented high resolution and coastal detail of this system add value to global forecasts through downscaling and allow a variety of applications. Examples will be presented, focusing on the period of a 2015 fisheries cruise around the coastal areas of Cuba, where model predictions helped guide measurements of biophysical connectivity under intense variability of the mesoscale eddy field and the subsequent Florida Current meandering.

  4. First Pass Annotation of Promoters on Human Chromosome 22

    PubMed Central

    Scherf, Matthias; Klingenhoff, Andreas; Frech, Kornelie; Quandt, Kerstin; Schneider, Ralf; Grote, Korbinian; Frisch, Matthias; Gailus-Durner, Valérie; Seidel, Alexander; Brack-Werner, Ruth; Werner, Thomas

    2001-01-01

    The publication of the first almost complete sequence of a human chromosome (chromosome 22) is a major milestone in human genomics. Together with the sequence, an excellent annotation of genes was published which certainly will serve as an information resource for numerous future projects. We noted that the annotation did not cover regulatory regions; in particular, no promoter annotation has been provided. Here we present an analysis of the complete published chromosome 22 sequence for promoters. A recent breakthrough in specific in silico prediction of promoter regions enabled us to attempt large-scale prediction of promoter regions on chromosome 22. Scanning of sequence databases revealed only 20 experimentally verified promoters, of which 10 were correctly predicted by our approach. Nearly 40% of our 465 predicted promoter regions are supported by the currently available gene annotation. Promoter finding also provides a biologically meaningful method for “chromosomal scaffolding”, by which long genomic sequences can be divided into segments starting with a gene. As one example, the combination of promoter region prediction with exon/intron structure predictions greatly enhances the specificity of de novo gene finding. The present study demonstrates that it is possible to identify promoters in silico on the chromosomal level with sufficient reliability for experimental planning and indicates that a wealth of information about regulatory regions can be extracted from current large-scale (megabase) sequencing projects. Results are available on-line at http://genomatix.gsf.de/chr22/. PMID:11230158

  5. Hadoop-Based Distributed System for Online Prediction of Air Pollution Based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Ghaemi, Z.; Farnaghi, M.; Alimohammadi, A.

    2015-12-01

    The critical impact of air pollution on human health and the environment on the one hand, and the complexity of pollutant concentration behavior on the other, lead scientists to look for advanced techniques for monitoring and predicting urban air quality. Additionally, recent developments in data measurement techniques have led to the collection of various types of data about air quality. Such data are extremely voluminous and, to be useful, must be processed at high velocity. Due to the complexity of big data analysis, especially for dynamic applications, online forecasting of pollutant concentration trends within a reasonable processing time is still an open problem. The purpose of this paper is to present an online forecasting approach based on Support Vector Machine (SVM) to predict the air quality one day in advance. In order to meet the computational requirements of large-scale data analysis, distributed computing based on the Hadoop platform has been employed to leverage the processing power of multiple processing units. The MapReduce programming model is adopted for massive parallel processing in this study. Based on the online algorithm and the Hadoop framework, an online forecasting system is designed to predict the air pollution of Tehran for the next 24 hours. The results have been assessed on the basis of processing time and efficiency. Accurate predictions of air pollutant indicator levels within an acceptable processing time show that the presented approach is well suited to large-scale air pollution prediction problems.
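
    The record does not give the feature set or the Hadoop/MapReduce implementation, so the following is only a minimal single-machine sketch of SVM-based next-day forecasting using lagged values of a pollutant indicator as features; the lag length, kernel settings, and synthetic series are assumptions for illustration.

```python
# Minimal single-machine sketch of next-day air-quality prediction with an SVM.
# Feature construction (lagged daily concentrations) and the synthetic series are
# hypothetical; the paper's actual Hadoop/MapReduce pipeline is not reproduced here.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_absolute_error

def make_lagged_features(series, n_lags=7):
    """Build a supervised dataset: previous n_lags days -> next day's value."""
    X, y = [], []
    for t in range(n_lags, len(series)):
        X.append(series[t - n_lags:t])
        y.append(series[t])
    return np.asarray(X), np.asarray(y)

# Synthetic stand-in for a daily pollutant-indicator series.
rng = np.random.default_rng(0)
pollutant = 50 + 10 * np.sin(np.arange(1000) / 30) + rng.normal(0, 5, 1000)

X, y = make_lagged_features(pollutant, n_lags=7)
split = int(0.8 * len(X))                      # chronological train/test split
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("one-day-ahead MAE:", mean_absolute_error(y[split:], pred))
```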

  6. Predicting hydrologic function with the streamwater microbiome

    NASA Astrophysics Data System (ADS)

    Good, S. P.; URycki, D. R.; Crump, B. C.

    2017-12-01

    Recent advances in microbiology allow for rapid and cost-effective determination of the presence of a nearly limitless number of bacterial (and other) species within a water sample. Here, we posit that the quasi-unique taxonomic composition of the aquatic microbiome is an emergent property of a catchment that contains information about hydrologic function at multiple temporal and spatial scales, and term this approach 'genohydrology.' As a first genohydrology case study, we show that the relative abundance of bacterial species within different operational taxonomic units (OTUs) from six large arctic rivers can be used to predict river discharge at monthly and longer timescales. Using only OTU abundance information and a machine-learning algorithm trained on OTU and discharge data from the other five rivers, our genohydrology approach is able to predict mean monthly discharge values throughout the year with an average Nash-Sutcliffe efficiency (NSE) of 0.50, while the recurrence interval of extreme flows at longer time scales in these rivers was predicted with an NSE of 0.04. This approach demonstrates considerable improvement over prediction of these quantities in each river based only on discharge data from the other five (our null hypothesis), which had average NSE values of -1.19 and -5.50 for the seasonal and recurrence-interval discharge values, respectively. Overall, the genohydrology approach demonstrates that bacterial diversity within the aquatic microbiome is a large and underutilized data resource with benefits for prediction of hydrologic function.
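
    A small sketch of the leave-one-river-out evaluation and the Nash-Sutcliffe efficiency (NSE) score described above. The specific learning algorithm is not named in the record, so a random-forest regressor stands in, and the OTU table and discharge values are synthetic placeholders.

```python
# Leave-one-river-out prediction of monthly discharge from OTU relative abundances,
# scored with Nash-Sutcliffe efficiency. Data shapes and the learner are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def nse(observed, predicted):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    return 1.0 - np.sum((observed - predicted) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Hypothetical data: 6 rivers, 12 monthly samples each, 200 OTU relative abundances.
rng = np.random.default_rng(1)
otu_tables = [rng.dirichlet(np.ones(200), size=12) for _ in range(6)]
discharges = [rng.lognormal(mean=8, sigma=0.5, size=12) for _ in range(6)]

scores = []
for held_out in range(6):                       # leave one river out
    X_train = np.vstack([otu_tables[r] for r in range(6) if r != held_out])
    y_train = np.concatenate([discharges[r] for r in range(6) if r != held_out])
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
    pred = model.predict(otu_tables[held_out])
    scores.append(nse(discharges[held_out], pred))
print("mean leave-one-river-out NSE:", np.mean(scores))
```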

  7. An operational global-scale ocean thermal analysis system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clancy, R. M.; Pollak, K.D.; Phoebus, P.A.

    1990-04-01

    The Optimum Thermal Interpolation System (OTIS) is an ocean thermal analysis system designed for operational use at FNOC. It is based on the optimum interpolation data assimilation technique and functions in an analysis-prediction-analysis data assimilation cycle with the TOPS mixed-layer model. OTIS provides a rigorous framework for combining real-time data, climatology, and predictions from numerical ocean prediction models to produce a large-scale synoptic representation of ocean thermal structure. The techniques and assumptions used in OTIS are documented, and results of operational tests of global-scale OTIS at FNOC are presented. The tests involved comparisons of OTIS against an existing operational ocean thermal structure model and were conducted during February, March, and April 1988. Qualitative comparison of the two products suggests that OTIS gives a more realistic representation of subsurface anomalies and horizontal gradients and that it also gives a more accurate analysis of the thermal structure, with improvements largest below the mixed layer. 37 refs.

  8. Cost Prediction Using a Survival Grouping Algorithm: An Application to Incident Prostate Cancer Cases.

    PubMed

    Onukwugha, Eberechukwu; Qi, Ran; Jayasekera, Jinani; Zhou, Shujia

    2016-02-01

    Prognostic classification approaches are commonly used in clinical practice to predict health outcomes. However, there has been limited focus on use of the general approach for predicting costs. We applied a grouping algorithm designed for large-scale data sets and multiple prognostic factors to investigate whether it improves cost prediction among older Medicare beneficiaries diagnosed with prostate cancer. We analysed the linked Surveillance, Epidemiology and End Results (SEER)-Medicare data, which included data from 2000 through 2009 for men diagnosed with incident prostate cancer between 2000 and 2007. We split the survival data into two data sets (D0 and D1) of equal size. We trained the classifier of the Grouping Algorithm for Cancer Data (GACD) on D0 and tested it on D1. The prognostic factors included cancer stage, age, race and performance status proxies. We calculated the average difference between observed D1 costs and predicted D1 costs at 5 years post-diagnosis with and without the GACD. The sample included 110,843 men with prostate cancer. The median age of the sample was 74 years, and 10% were African American. The average difference (mean absolute error [MAE]) per person between the real and predicted total 5-year cost was US$41,525 (MAE US$41,790; 95% confidence interval [CI] US$41,421-42,158) with the GACD and US$43,113 (MAE US$43,639; 95% CI US$43,062-44,217) without the GACD. The 5-year cost prediction without grouping resulted in a sample overestimate of US$79,544,508. The grouping algorithm developed for complex, large-scale data improves the prediction of 5-year costs. The prediction accuracy could be improved by utilization of a richer set of prognostic factors and refinement of categorical specifications.

  9. Large-scale prediction of adverse drug reactions using chemical, biological, and phenotypic properties of drugs.

    PubMed

    Liu, Mei; Wu, Yonghui; Chen, Yukun; Sun, Jingchun; Zhao, Zhongming; Chen, Xue-wen; Matheny, Michael Edwin; Xu, Hua

    2012-06-01

    Adverse drug reaction (ADR) is one of the major causes of failure in drug development. Severe ADRs that go undetected until the post-marketing phase of a drug often lead to patient morbidity. Accurate prediction of potential ADRs is required in the entire life cycle of a drug, including early stages of drug design, different phases of clinical trials, and post-marketing surveillance. Many studies have utilized either chemical structures or molecular pathways of the drugs to predict ADRs. Here, the authors propose a machine-learning-based approach for ADR prediction by integrating the phenotypic characteristics of a drug, including indications and other known ADRs, with the drug's chemical structures and biological properties, including protein targets and pathway information. A large-scale study was conducted to predict 1385 known ADRs of 832 approved drugs, and five machine-learning algorithms for this task were compared. This evaluation, based on a fivefold cross-validation, showed that the support vector machine algorithm outperformed the others. Of the three types of information, phenotypic data were the most informative for ADR prediction. When biological and phenotypic features were added to the baseline chemical information, the ADR prediction model achieved significant improvements in area under the curve (from 0.9054 to 0.9524), precision (from 43.37% to 66.17%), and recall (from 49.25% to 63.06%). Most importantly, the proposed model successfully predicted the ADRs associated with withdrawal of rofecoxib and cerivastatin. The results suggest that phenotypic information on drugs is valuable for ADR prediction. Moreover, they demonstrate that different models that combine chemical, biological, or phenotypic information can be built from approved drugs, and they have the potential to detect clinically important ADRs in both preclinical and post-marketing phases.
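
    The record reports cross-validated AUC for models built on different feature sets. The sketch below illustrates that comparison with an SVM and fivefold cross-validation; the feature matrices (fingerprint bits, target/pathway indicators, indication/ADR indicators) are random placeholders, not the study's data.

```python
# Compare cross-validated AUC for an SVM trained on chemical-only features versus
# chemical + biological + phenotypic features, for one ADR label. All data are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_drugs = 300
chem = rng.integers(0, 2, size=(n_drugs, 166))    # e.g., structural fingerprint bits
bio = rng.integers(0, 2, size=(n_drugs, 80))      # e.g., protein-target/pathway indicators
pheno = rng.integers(0, 2, size=(n_drugs, 120))   # e.g., indication / known-ADR indicators
y = rng.integers(0, 2, size=n_drugs)              # 1 = drug associated with this ADR

def cv_auc(X, y):
    clf = SVC(kernel="rbf", probability=True, random_state=0)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

print("chemical only      :", cv_auc(chem, y))
print("chem + bio + pheno :", cv_auc(np.hstack([chem, bio, pheno]), y))
```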

  10. Climate, Water, and Human Health: Large Scale Hydroclimatic Controls in Forecasting Cholera Epidemics

    NASA Astrophysics Data System (ADS)

    Akanda, A. S.; Jutla, A. S.; Islam, S.

    2009-12-01

    Although cholera has ravaged the continents through seven global pandemics in past centuries, the seasonal and interannual variability of its outbreaks remains a mystery. Previous studies have focused on the role of various environmental and climatic factors, but provided little or no predictive capability. Recent findings suggest a more prominent role of large-scale hydroclimatic extremes - droughts and floods - and attempt to explain the seasonality and the unique dual cholera peaks in the Bengal Delta region of South Asia. We investigate the seasonal and interannual nature of cholera epidemiology in three geographically distinct locations within the region to identify the larger-scale hydroclimatic controls that can set the ecological and environmental ‘stage’ for outbreaks and have significant memory on a seasonal scale. Here we show that two distinctly different cholera transmission mechanisms, pre- and post-monsoon, related to large-scale climatic controls prevail in the region. An implication of our findings is that extreme climatic events such as prolonged droughts, record floods, and major cyclones may cause major disruption in the ecosystem and trigger large epidemics. We postulate that a quantitative understanding of the large-scale hydroclimatic controls and dominant processes with significant system memory will form the basis for forecasting such epidemic outbreaks. A multivariate regression method using these predictor variables to develop probabilistic forecasts of cholera outbreaks will be explored. Forecasts from such a system with a seasonal lead time are likely to have measurable impact on early cholera detection and prevention efforts in endemic regions.

  11. 2:1 for naturalness at the LHC?

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Nima; Blum, Kfir; D'Agnolo, Raffaele Tito; Fan, JiJi

    2013-01-01

    A large enhancement of a factor of 1.5 - 2 in Higgs production and decay in the diphoton channel, with little deviation in the ZZ channel, can only plausibly arise from a loop of new charged particles with large couplings to the Higgs. We show that, allowing only new fermions with marginal interactions at the weak scale, the required Yukawa couplings for a factor of 2 enhancement are so large that the Higgs quartic coupling is pushed to large negative values in the UV, triggering an unacceptable vacuum instability far beneath the 10 TeV scale. An enhancement by a factor of 1.5 can be accommodated if the charged particles are lighter than 150 GeV, within reach of discovery in almost all cases in the 8 TeV run at the LHC, and in even the most difficult cases at 14 TeV. Thus if the diphoton enhancement survives further scrutiny, and no charged particles beneath 150 GeV are found, there must be new bosons far beneath the 10 TeV scale. This would unambiguously rule out a large class of fine-tuned theories for physics beyond the Standard Model, including split SUSY and many of its variants, and provide strong circumstantial evidence for a natural theory of electroweak symmetry breaking at the TeV scale. Alternately, theories with only a single fine-tuned Higgs and new fermions at the weak scale, with no additional scalars or gauge bosons up to a cutoff much larger than the 10 TeV scale, unambiguously predict that the hints for a large diphoton enhancement in the current data will disappear.

  12. What initial condition of inflation would suppress the large-scale CMB spectrum?

    DOE PAGES

    Chen, Pisin; Lin, Yu -Hsiang

    2016-01-08

    There is an apparent power deficit relative to the ΛCDM prediction of the cosmic microwave background spectrum at large scales, which, though not yet statistically significant, persists from WMAP to Planck data. Proposals that invoke some form of initial condition for the inflation have been made to address this apparent power suppression, albeit with conflicting conclusions. By studying the curvature perturbations of a scalar field in the Friedmann-Lemaître-Robertson-Walker universe parameterized by the equation of state parameter w, we find that the large-scale spectrum at the end of inflation reflects the superhorizon spectrum of the initial state. The large-scale spectrum is suppressed if the universe begins with the adiabatic vacuum in a superinflation (w < –1) or positive-pressure (w > 0) era. In the latter case, there is however no causal mechanism to establish the initial adiabatic vacuum. On the other hand, as long as the universe begins with the adiabatic vacuum in an era with –1 < w < 0, even if there exists an intermediate positive-pressure era, the large-scale spectrum would be enhanced rather than suppressed. Finally, we calculate the spectrum of a two-stage inflation model with a two-field potential and show that the result agrees with that obtained from the ad hoc single-field analysis.

  13. Asynchrony, Fragmentation, and Scale Determine Benefits of Landscape Heterogeneity to Mobile Herbivores

    USDA-ARS's Scientific Manuscript database

    Fragmentation of landscapes into spatially isolated parts is a prevailing source of environmental change worldwide. However, predicting the consequences of fragmentation for populations remains problematic, in large measure because the mechanisms translating landscape change into population performa...

  14. Disease Modeling via Large-Scale Network Analysis

    DTIC Science & Technology

    2015-05-20

    A central goal of genetics is to learn how the genotype of an organism determines its phenotype. We address the implicit problem of predicting the association of genes with phenotypes. In the past, we have developed predictive methods general enough to apply to potentially any genetic trait, along with guarantees for the methods.

  15. Mapping multi-scale vascular plant richness in a forest landscape with integrated LiDAR and hyperspectral remote-sensing.

    PubMed

    Hakkenberg, C R; Zhu, K; Peet, R K; Song, C

    2018-02-01

    The central role of floristic diversity in maintaining habitat integrity and ecosystem function has propelled efforts to map and monitor its distribution across forest landscapes. While biodiversity studies have traditionally relied largely on ground-based observations, the immensity of the task of generating accurate, repeatable, and spatially-continuous data on biodiversity patterns at large scales has stimulated the development of remote-sensing methods for scaling up from field plot measurements. One such approach is through integrated LiDAR and hyperspectral remote-sensing. However, despite their efficiencies in cost and effort, LiDAR-hyperspectral sensors are still highly constrained in structurally- and taxonomically-heterogeneous forests - especially when species' cover is smaller than the image resolution, intertwined with neighboring taxa, or otherwise obscured by overlapping canopy strata. In light of these challenges, this study goes beyond the remote characterization of upper canopy diversity to instead model total vascular plant species richness in a continuous-cover North Carolina Piedmont forest landscape. We focus on two related, but parallel, tasks. First, we demonstrate an application of predictive biodiversity mapping, using nonparametric models trained with spatially-nested field plots and aerial LiDAR-hyperspectral data, to predict spatially-explicit landscape patterns in floristic diversity across seven spatial scales between 0.01 and 900 m². Second, we employ bivariate parametric models to test the significance of individual, remotely-sensed predictors of plant richness to determine how parameter estimates vary with scale. Cross-validated results indicate that predictive models were able to account for 15-70% of variance in plant richness, with LiDAR-derived estimates of topography and forest structural complexity, as well as spectral variance in hyperspectral imagery explaining the largest portion of variance in diversity levels. Importantly, bivariate tests provide evidence of scale-dependence among predictors, such that remotely-sensed variables significantly predict plant richness only at spatial scales that sufficiently subsume geolocational imprecision between remotely-sensed and field data, and best align with stand components including plant size and density, as well as canopy gaps and understory growth patterns. Beyond their insights into the scale-dependent patterns and drivers of plant diversity in Piedmont forests, these results highlight the potential of remotely-sensible essential biodiversity variables for mapping and monitoring landscape floristic diversity from air- and space-borne platforms. © 2017 by the Ecological Society of America.
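
    The predictive-mapping step can be illustrated as follows: a nonparametric regressor trained on plot-level richness with LiDAR- and hyperspectral-derived predictors, scored by cross-validated variance explained. The record does not name its nonparametric model, so a random forest and the predictor names below are assumptions, and the data are synthetic.

```python
# Sketch of predictive biodiversity mapping: plot-level richness regressed on
# remotely-sensed predictors and evaluated by cross-validated R^2. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_plots = 120
X = np.column_stack([
    rng.normal(250, 60, n_plots),     # elevation (LiDAR-derived topography)
    rng.normal(15, 5, n_plots),       # canopy height
    rng.normal(0.4, 0.1, n_plots),    # structural complexity metric
    rng.normal(0.02, 0.01, n_plots),  # spectral variance of hyperspectral bands
])
richness = (0.05 * X[:, 0] + 40 * X[:, 2]
            + 300 * X[:, 3] + rng.normal(0, 3, n_plots))  # synthetic response

model = RandomForestRegressor(n_estimators=500, random_state=0)
r2 = cross_val_score(model, X, richness, cv=5, scoring="r2")
print("cross-validated R2 per fold:", np.round(r2, 2))
```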

  16. Prediction of drug indications based on chemical interactions and chemical similarities.

    PubMed

    Huang, Guohua; Lu, Yin; Lu, Changhong; Zheng, Mingyue; Cai, Yu-Dong

    2015-01-01

    Discovering potential indications of novel or approved drugs is a key step in drug development. Previous computational approaches can be categorized as disease-centric or drug-centric based on the starting point of the problem, or as small-scale or large-scale applications according to the diversity of the datasets. Here, a classifier has been constructed to predict the indications of a drug, based on the assumption that interactive/associated drugs, or drugs with similar structures, are more likely to target the same diseases, using a large drug indication dataset. To examine the classifier, it was run five times on a dataset of 1,573 drugs retrieved from the Comprehensive Medicinal Chemistry database and evaluated by 5-fold cross-validation, yielding five 1st-order prediction accuracies that were all approximately 51.48%. Meanwhile, the model yielded an accuracy rate of 50.00% for the 1st-order prediction in an independent test on a dataset of 32 other drugs for which drug repositioning has been confirmed. Interestingly, some clinically repurposed drug indications that were not included in the datasets are successfully identified by our method. These results suggest that our method may become a useful tool to associate novel molecules with new indications or alternative indications with existing drugs.
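
    The classifier is not specified beyond the similarity assumption, so the following is a hedged sketch of one way such a scheme could look: candidate indications for a query drug are scored by similarity-weighted votes from other drugs known to have each indication, with the top score taken as the 1st-order prediction. The Tanimoto similarity, fingerprints, and indication matrix are illustrative placeholders, not the published method.

```python
# Similarity-voting sketch: rank candidate indications for a query drug by summing
# the structural similarities of other drugs that carry each indication. Synthetic data.
import numpy as np

rng = np.random.default_rng(9)
n_drugs, n_indications, n_bits = 50, 10, 64
fingerprints = rng.integers(0, 2, size=(n_drugs, n_bits)).astype(bool)
known = rng.random((n_drugs, n_indications)) < 0.15      # drug x indication matrix

def tanimoto(a, b):
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def predict_indication(query_idx):
    sims = np.array([tanimoto(fingerprints[query_idx], fingerprints[j])
                     if j != query_idx else 0.0 for j in range(n_drugs)])
    scores = sims @ known.astype(float)                   # similarity-weighted votes
    return int(np.argmax(scores))                         # 1st-order prediction

print("top-ranked indication for drug 0:", predict_indication(0))
```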

  17. Prediction of Drug Indications Based on Chemical Interactions and Chemical Similarities

    PubMed Central

    Huang, Guohua; Lu, Yin; Lu, Changhong; Cai, Yu-Dong

    2015-01-01

    Discovering potential indications of novel or approved drugs is a key step in drug development. Previous computational approaches can be categorized as disease-centric or drug-centric based on the starting point of the problem, or as small-scale or large-scale applications according to the diversity of the datasets. Here, a classifier has been constructed to predict the indications of a drug, based on the assumption that interactive/associated drugs, or drugs with similar structures, are more likely to target the same diseases, using a large drug indication dataset. To examine the classifier, it was run five times on a dataset of 1,573 drugs retrieved from the Comprehensive Medicinal Chemistry database and evaluated by 5-fold cross-validation, yielding five 1st-order prediction accuracies that were all approximately 51.48%. Meanwhile, the model yielded an accuracy rate of 50.00% for the 1st-order prediction in an independent test on a dataset of 32 other drugs for which drug repositioning has been confirmed. Interestingly, some clinically repurposed drug indications that were not included in the datasets are successfully identified by our method. These results suggest that our method may become a useful tool to associate novel molecules with new indications or alternative indications with existing drugs. PMID:25821813

  18. Application of portable XRF and VNIR sensors for rapid assessment of soil heavy metal pollution.

    PubMed

    Hu, Bifeng; Chen, Songchao; Hu, Jie; Xia, Fang; Xu, Junfeng; Li, Yan; Shi, Zhou

    2017-01-01

    Rapid heavy metal soil surveys at large scale with high sampling density cannot be conducted with traditional laboratory physical and chemical analyses because of the high cost, low efficiency and heavy workload involved. This study explored a rapid approach to assess heavy metal contamination in 301 farmland soils from Fuyang in Zhejiang Province, in the southern Yangtze River Delta, China, using portable proximal soil sensors. Portable X-ray fluorescence spectroscopy (PXRF) was used to determine total soil heavy metal concentrations, while soil pH was predicted by portable visible-near infrared spectroscopy (PVNIR). Zn, Cu and Pb were successfully predicted by PXRF (R2 >0.90 and RPD >2.50) while As and Ni were predicted with less accuracy (R2 <0.75 and RPD <1.40). The pH values were well predicted by PVNIR. Classification of heavy metal contamination grades in farmland soils was conducted based on previous results; the Kappa coefficient was 0.87, which showed that the combination of PXRF and PVNIR is an effective and rapid method to determine the degree of soil heavy metal pollution. This study provides a new approach to assess soil heavy metal pollution; this method will facilitate large-scale surveys of soil heavy metal pollution.
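
    A minimal sketch of the agreement statistic reported above: pollution grades derived from sensor-predicted concentrations are compared with grades from laboratory reference values using Cohen's kappa. The grade thresholds and the simulated Zn values are assumptions, not the study's data.

```python
# Compare contamination grades from sensor predictions against laboratory reference
# grades with Cohen's kappa. Thresholds and concentrations are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def to_grade(zn_mg_kg, thresholds=(100.0, 200.0, 300.0)):
    """Map a Zn concentration to a pollution grade 0-3 using assumed thresholds."""
    return int(np.searchsorted(thresholds, zn_mg_kg))

rng = np.random.default_rng(3)
lab_zn = rng.lognormal(mean=5.0, sigma=0.4, size=301)      # reference concentrations
sensor_zn = lab_zn * rng.normal(1.0, 0.1, size=301)        # sensor predictions with noise

lab_grades = [to_grade(v) for v in lab_zn]
sensor_grades = [to_grade(v) for v in sensor_zn]
print("Cohen's kappa:", round(cohen_kappa_score(lab_grades, sensor_grades), 2))
```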

  19. bpRNA: large-scale automated annotation and analysis of RNA secondary structure.

    PubMed

    Danaee, Padideh; Rouches, Mason; Wiley, Michelle; Deng, Dezhong; Huang, Liang; Hendrix, David

    2018-05-09

    While RNA secondary structure prediction from sequence data has made remarkable progress, there is a need for improved strategies for annotating the features of RNA secondary structures. Here, we present bpRNA, a novel annotation tool capable of parsing RNA structures, including complex pseudoknot-containing RNAs, to yield an objective, precise, compact, unambiguous, easily-interpretable description of all loops, stems, and pseudoknots, along with the positions, sequence, and flanking base pairs of each such structural feature. We also introduce several new informative representations of RNA structure types to improve structure visualization and interpretation. We have further used bpRNA to generate a web-accessible meta-database, 'bpRNA-1m', of over 100 000 single-molecule, known secondary structures; this database is both more fully and accurately annotated and over 20 times larger than existing databases. We use a subset of the database with highly similar (≥90% identical) sequences filtered out to report on statistical trends in sequence, flanking base pairs, and length. Both the bpRNA method and the bpRNA-1m database will be valuable resources, both for analysis of individual RNA molecules and for large-scale analyses such as updating RNA energy parameters for computational thermodynamic predictions, improving machine learning models for structure prediction, and benchmarking structure-prediction algorithms.
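
    As a much-simplified illustration of the kind of structural annotation bpRNA performs, the sketch below pairs positions from a dot-bracket string and groups consecutive base pairs into stems. The real tool also classifies bulges, internal and multibranch loops, and pseudoknots, none of which this toy parser handles.

```python
# Toy dot-bracket parser: build the base-pair map and group consecutive pairs into stems.
def pair_map(dot_bracket):
    """Return dict position -> paired position (0-based) for matched '(' / ')'."""
    stack, pairs = [], {}
    for i, ch in enumerate(dot_bracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            j = stack.pop()
            pairs[i], pairs[j] = j, i
    return pairs

def stems(dot_bracket):
    """Group base pairs (i, j) into stems: runs where i increases and j decreases by 1."""
    pairs = sorted((i, j) for i, j in pair_map(dot_bracket).items() if i < j)
    groups, current = [], []
    for i, j in pairs:
        if current and i == current[-1][0] + 1 and j == current[-1][1] - 1:
            current.append((i, j))
        else:
            if current:
                groups.append(current)
            current = [(i, j)]
    if current:
        groups.append(current)
    return groups

structure = "..((((...((((....))))...)))).."
for k, stem in enumerate(stems(structure), 1):
    print(f"stem {k}: pairs {stem[0]} .. {stem[-1]}, length {len(stem)}")
```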

  20. Ecosystem heterogeneity determines the ecological resilience of the Amazon to climate change

    PubMed Central

    Longo, Marcos; Baccini, Alessandro; Phillips, Oliver L.; Lewis, Simon L.; Alvarez-Dávila, Esteban; Segalin de Andrade, Ana Cristina; Brienen, Roel J. W.; Erwin, Terry L.; Feldpausch, Ted R.; Monteagudo Mendoza, Abel Lorenzo; Nuñez Vargas, Percy; Prieto, Adriana; Silva-Espejo, Javier Eduardo; Malhi, Yadvinder; Moorcroft, Paul R.

    2016-01-01

    Amazon forests, which store ∼50% of tropical forest carbon and play a vital role in global water, energy, and carbon cycling, are predicted to experience both longer and more intense dry seasons by the end of the 21st century. However, the climate sensitivity of this ecosystem remains uncertain: several studies have predicted large-scale die-back of the Amazon, whereas several more recent studies predict that the biome will remain largely intact. Combining remote-sensing and ground-based observations with a size- and age-structured terrestrial ecosystem model, we explore the sensitivity and ecological resilience of these forests to changes in climate. We demonstrate that water stress operating at the scale of individual plants, combined with spatial variation in soil texture, explains observed patterns of variation in ecosystem biomass, composition, and dynamics across the region, and strongly influences the ecosystem’s resilience to changes in dry season length. Specifically, our analysis suggests that in contrast to existing predictions of either stability or catastrophic biomass loss, the Amazon forest’s response to a drying regional climate is likely to be an immediate, graded, heterogeneous transition from high-biomass moist forests to transitional dry forests and woody savannah-like states. Fire, logging, and other anthropogenic disturbances may, however, exacerbate these climate change-induced ecosystem transitions. PMID:26711984

  1. Ecosystem heterogeneity determines the ecological resilience of the Amazon to climate change.

    PubMed

    Levine, Naomi M; Zhang, Ke; Longo, Marcos; Baccini, Alessandro; Phillips, Oliver L; Lewis, Simon L; Alvarez-Dávila, Esteban; Segalin de Andrade, Ana Cristina; Brienen, Roel J W; Erwin, Terry L; Feldpausch, Ted R; Monteagudo Mendoza, Abel Lorenzo; Nuñez Vargas, Percy; Prieto, Adriana; Silva-Espejo, Javier Eduardo; Malhi, Yadvinder; Moorcroft, Paul R

    2016-01-19

    Amazon forests, which store ∼ 50% of tropical forest carbon and play a vital role in global water, energy, and carbon cycling, are predicted to experience both longer and more intense dry seasons by the end of the 21st century. However, the climate sensitivity of this ecosystem remains uncertain: several studies have predicted large-scale die-back of the Amazon, whereas several more recent studies predict that the biome will remain largely intact. Combining remote-sensing and ground-based observations with a size- and age-structured terrestrial ecosystem model, we explore the sensitivity and ecological resilience of these forests to changes in climate. We demonstrate that water stress operating at the scale of individual plants, combined with spatial variation in soil texture, explains observed patterns of variation in ecosystem biomass, composition, and dynamics across the region, and strongly influences the ecosystem's resilience to changes in dry season length. Specifically, our analysis suggests that in contrast to existing predictions of either stability or catastrophic biomass loss, the Amazon forest's response to a drying regional climate is likely to be an immediate, graded, heterogeneous transition from high-biomass moist forests to transitional dry forests and woody savannah-like states. Fire, logging, and other anthropogenic disturbances may, however, exacerbate these climate change-induced ecosystem transitions.

  2. Reduced-order prediction of rogue waves in two-dimensional deep-water waves

    NASA Astrophysics Data System (ADS)

    Farazmand, Mohammad; Sapsis, Themistoklis P.

    2017-07-01

    We consider the problem of large wave prediction in two-dimensional water waves. Such waves form due to the synergistic effect of dispersive mixing of smaller wave groups and the action of localized nonlinear wave interactions that leads to focusing. Instead of a direct simulation approach, we rely on the decomposition of the wave field into a discrete set of localized wave groups with optimal length scales and amplitudes. Due to the short-term character of the prediction, these wave groups do not interact and therefore their dynamics can be characterized individually. Using direct numerical simulations of the governing envelope equations we precompute the expected maximum elevation for each of those wave groups. The combination of the wave field decomposition algorithm, which provides information about the statistics of the system, and the precomputed map for the expected wave group elevation, which encodes dynamical information, allows (i) for understanding of how the probability of occurrence of rogue waves changes as the spectrum parameters vary, (ii) the computation of a critical length scale characterizing wave groups with high probability of evolving to rogue waves, and (iii) the formulation of a robust and parsimonious reduced-order prediction scheme for large waves. We assess the validity of this scheme in several cases of ocean wave spectra.

  3. Application of portable XRF and VNIR sensors for rapid assessment of soil heavy metal pollution

    PubMed Central

    Hu, Bifeng; Chen, Songchao; Hu, Jie; Xia, Fang; Xu, Junfeng; Li, Yan; Shi, Zhou

    2017-01-01

    Rapid heavy metal soil surveys at large scale with high sampling density cannot be conducted with traditional laboratory physical and chemical analyses because of the high cost, low efficiency and heavy workload involved. This study explored a rapid approach to assess heavy metal contamination in 301 farmland soils from Fuyang in Zhejiang Province, in the southern Yangtze River Delta, China, using portable proximal soil sensors. Portable X-ray fluorescence spectroscopy (PXRF) was used to determine total soil heavy metal concentrations, while soil pH was predicted by portable visible-near infrared spectroscopy (PVNIR). Zn, Cu and Pb were successfully predicted by PXRF (R2 >0.90 and RPD >2.50) while As and Ni were predicted with less accuracy (R2 <0.75 and RPD <1.40). The pH values were well predicted by PVNIR. Classification of heavy metal contamination grades in farmland soils was conducted based on previous results; the Kappa coefficient was 0.87, which showed that the combination of PXRF and PVNIR is an effective and rapid method to determine the degree of soil heavy metal pollution. This study provides a new approach to assess soil heavy metal pollution; this method will facilitate large-scale surveys of soil heavy metal pollution. PMID:28234944

  4. Framework for Smart Electronic Health Record-Linked Predictive Models to Optimize Care for Complex Digestive Diseases

    DTIC Science & Technology

    2014-07-01

    Only fragments of this record's abstract are recoverable. They cite the development of a large-scale de-identified DNA biobank to enable personalized medicine (Roden DM, Pulley JM, Basford MA, et al., Clin Pharmacol) and describe a large healthcare system incorporating clinical information from a 20-hospital setting (both academic and community hospitals) of a university health system.

  5. Wind-tunnel/flight correlation study of aerodynamic characteristics of a large flexible supersonic cruise airplane (XB-70-1). 3: A comparison between characteristics predicted from wind-tunnel measurements and those measured in flight

    NASA Technical Reports Server (NTRS)

    Arnaiz, H. H.; Peterson, J. B., Jr.; Daugherty, J. C.

    1980-01-01

    A program was undertaken by NASA to evaluate the accuracy of a method for predicting the aerodynamic characteristics of large supersonic cruise airplanes. This program compared predicted and flight-measured lift, drag, angle of attack, and control surface deflection for the XB-70-1 airplane for 14 flight conditions with a Mach number range from 0.76 to 2.56. The predictions were derived from the wind-tunnel test data of a 0.03-scale model of the XB-70-1 airplane fabricated to represent the aeroelastically deformed shape at a 2.5 Mach number cruise condition. Corrections for shape variations at the other Mach numbers were included in the prediction. For most cases, differences between predicted and measured values were within the accuracy of the comparison. However, there were significant differences at transonic Mach numbers. At a Mach number of 1.06 differences were as large as 27 percent in the drag coefficients and 20 deg in the elevator deflections. A brief analysis indicated that a significant part of the difference between drag coefficients was due to the incorrect prediction of the control surface deflection required to trim the airplane.

  6. Validity of the two-level model for Viterbi decoder gap-cycle performance

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Arnold, S.

    1990-01-01

    A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.

  7. A study on large-scale nudging effects in regional climate model simulation

    NASA Astrophysics Data System (ADS)

    Yhang, Yoo-Bin; Hong, Song-You

    2011-05-01

    The large-scale nudging effects on the East Asian summer monsoon (EASM) are examined using the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM). The NCEP/DOE reanalysis data are used to provide large-scale forcings for RSM simulations, configured with an approximately 50-km grid over East Asia, centered on the Korean peninsula. The RSM with a variant of spectral nudging, the scale-selective bias correction (SSBC), is forced by perfect boundary conditions during the summers (June-July-August) from 1979 to 2004. The two summers of 2000 and 2004 are investigated in detail to demonstrate the impact of SSBC on precipitation. It is found that the effect of SSBC on the simulated seasonal precipitation is in general neutral, without a discernible advantage. Although errors in large-scale circulation for both 2000 and 2004 are reduced by using the SSBC method, the impact on simulated precipitation is found to be negative in the summer of 2000 and positive in the summer of 2004. One possible reason for these differing effects is that precipitation in the summer of 2004 is characterized by strong baroclinicity, while precipitation in 2000 is caused by thermodynamic instability. The reduction of convective rainfall over the oceans by the application of the SSBC method seems to play an important role in the modeled atmosphere.

  8. Deformation of leaky-dielectric fluid globules under strong electric fields: Boundary layers and jets at large Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Schnitzer, Ory; Frankel, Itzchak; Yariv, Ehud

    2013-11-01

    In Taylor's theory of electrohydrodynamic drop deformation (Proc. R. Soc. Lond. A, vol. 291, 1966, pp. 159-166), inertia is neglected at the outset, resulting in a fluid velocity that scales as the square of the applied-field magnitude. For large drops, with increasing field strength the Reynolds number predicted by this scaling may actually become large, suggesting the need for a complementary large-Reynolds-number investigation. Balancing viscous stresses and electrical shear forces in this limit reveals a different velocity scaling, with the 4/3-power of the applied-field magnitude. We focus here on the flow over a gas bubble. It is essentially confined to two boundary layers propagating from the poles to the equator, where they collide to form a radial jet. At leading order in the Capillary number, the bubble deforms due to (i) Maxwell stresses; (ii) the hydrodynamic boundary-layer pressure associated with centripetal acceleration; and (iii) the intense pressure distribution acting over the narrow equatorial deflection zone, appearing as a concentrated load. Remarkably, the unique flow topology and associated scalings allow a closed-form expression for this deformation to be obtained through application of integral mass and momentum balances. On the bubble scale, the concentrated pressure load is manifested in the appearance of a non-smooth equatorial dimple.

  9. An investigation of small scales of turbulence in a boundary layer at high Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Wallace, James M.; Ong, L.; Balint, J.-L.

    1993-01-01

    The assumption that turbulence at large wave-numbers is isotropic and has universal spectral characteristics which are independent of the flow geometry, at least for high Reynolds numbers, has been a cornerstone of closure theories as well as of the most promising recent development in the effort to predict turbulent flows, viz. large eddy simulations. This hypothesis was first advanced by Kolmogorov based on the supposition that turbulent kinetic energy cascades down the scales (up the wave-numbers) of turbulence and that, if the number of these cascade steps is sufficiently large (i.e. the wave-number range is large), then the effects of anisotropies at the large scales are lost in the energy transfer process. Experimental attempts were repeatedly made to verify this fundamental assumption. However, Van Atta has recently suggested that an examination of the scalar and velocity gradient fields is necessary to definitively verify this hypothesis or prove it to be unfounded. Of course, this must be carried out in a flow with a sufficiently high Reynolds number to provide the necessary separation of scales and thereby unambiguously allow for the possibility of local isotropy at large wave-numbers. An opportunity to use our 12-sensor hot-wire probe to address this issue directly was made available at the 80'x120' wind tunnel at the NASA Ames Research Center, which is normally used for full-scale aircraft tests. An initial report on this high Reynolds number experiment and progress toward its evaluation is presented.

  10. Large Eddy Simulation Study for Fluid Disintegration and Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2011-01-01

    A new modeling approach is based on the concept of large eddy simulation (LES) within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce in the simulation the physics lost because the computation only resolves the large scales. These models are called subgrid (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equations, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that, without the additional term in the momentum equation, the high density-gradient magnitude regions, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these high density-gradient magnitude regions were experimentally shown to redistribute turbulence in the flow. It was also inferred that, without the additional term in the energy equation, the heat flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines the necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term in the momentum equation, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not totally satisfactory. With a model for the additional term in the momentum equation, the predictions of the constant-coefficient Smagorinsky and constant-coefficient Scale-Similarity models were improved to a certain extent; however, most of the improvement was obtained for the Gradient model. The previously derived model and a newly developed model for the additional term in the momentum equation were both tested, with the new model proving even more successful than the previous model at reproducing the high density-gradient magnitude regions. Several dynamic SGS-flux models, in which the SGS-flux model coefficient is computed as part of the simulation, were tested in conjunction with the new model for this additional term in the momentum equation. The most successful dynamic model was a "mixed" model combining the Smagorinsky and Gradient models. This work is directly applicable to simulations of gas turbine engines (aeronautics) and rocket engines (astronautics).

  11. Quantifying effectiveness of failure prediction and response in HPC systems : methodology and example.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre

    2010-06-01

    Effective failure prediction and mitigation strategies in high-performance computing systems could provide huge gains in resilience of tightly coupled large-scale scientific codes. These gains would come from prediction-directed process migration and resource servicing, intelligent resource allocation, and checkpointing driven by failure predictors rather than at regular intervals based on nominal mean time to failure. Given probabilistic associations of outlier behavior in hardware-related metrics with eventual failure in hardware, system software, and/or applications, this paper explores approaches for quantifying the effects of prediction and mitigation strategies and demonstrates these using actual production system data. We describe context-relevant methodologies for determining the accuracy and cost-benefit of predictors. While many research studies have quantified the expected impact of growing system size, and the associated shortened mean time to failure (MTTF), on application performance in large-scale high-performance computing (HPC) platforms, there has been little if any work to quantify the possible gains from predicting system resource failures with significant but imperfect accuracy. This possibly stems from HPC system complexity and the fact that, to date, no one has established any good predictors of failure in these systems. Our work in the OVIS project aims to discover these predictors via a variety of data collection techniques and statistical analysis methods that yield probabilistic predictions. The question then is, 'How good or useful are these predictions?' We investigate methods for answering this question in a general setting, and illustrate them using a specific failure predictor discovered on a production system at Sandia.

  12. Using the Positive and Negative Syndrome Scale (PANSS) to Define Different Domains of Negative Symptoms

    PubMed Central

    Khan, Anzalee; Keefe, Richard S. E.

    2017-01-01

    Background: Reduced emotional experience and expression are two domains of negative symptoms. The authors assessed these two domains of negative symptoms using previously developed Positive and Negative Syndrome Scale (PANSS) factors. Using an existing dataset, the authors predicted three different elements of everyday functioning (social, vocational, and everyday activities) with these two factors, as well as with performance on measures of functional capacity. Methods: A large (n=630) sample of people with schizophrenia was used as the data source of this study. Using regression analyses, the authors predicted the three different aspects of everyday functioning, first with just the two Positive and Negative Syndrome Scale factors and then with a global negative symptom factor. Finally, we added neurocognitive performance and functional capacity as predictors. Results: The Positive and Negative Syndrome Scale reduced emotional experience factor accounted for 21 percent of the variance in everyday social functioning, while reduced emotional expression accounted for no variance. The total Positive and Negative Syndrome Scale negative symptom factor accounted for less variance (19%) than the reduced experience factor alone. The Positive and Negative Syndrome Scale expression factor accounted for, at most, one percent of the variance in any of the functional outcomes, with or without the addition of other predictors. Implications: Reduced emotional experience measured with the Positive and Negative Syndrome Scale, often referred to as “avolition and anhedonia,” specifically predicted impairments in social outcomes. Further, reduced experience predicted social impairments better than emotional expression or the total Positive and Negative Syndrome Scale negative symptom factor. In this cross-sectional study, reduced emotional experience was specifically related with social outcomes, accounting for essentially no variance in work or everyday activities, and being the sole meaningful predictor of impairment in social outcomes. PMID:29410933

  13. An examination of the predictive validity of the risk matrix 2000 in England and wales.

    PubMed

    Barnett, Georgia D; Wakeling, Helen C; Howard, Philip D

    2010-12-01

    This study examined the predictive validity of an actuarial risk-assessment tool with convicted sexual offenders in England and Wales. A modified version of the RM2000/s scale and the RM2000 v and c scales (Thornton et al., 2003) were examined for accuracy in predicting proven sexual, nonsexual violent, and combined sexual and/or nonsexual violent reoffending in a sample of sexual offenders who had either started a community sentence or been released from prison into the community by March 2007. Rates of proven reoffending were examined at 2 years for the majority of the sample (n = 4,946) and at 4 years (n = 578) for those for whom these data were available. The predictive validity of the RM2000 scales was also explored for different subgroups of sexual offenders to assess the robustness of the tool. Both the modified RM2000/s and the complete v and c scales effectively classified offenders into distinct risk categories that differed significantly in rates of proven sexual and/or nonsexual violent reoffending. Survival analyses on the RM2000/s and v scales (N = 9,284) indicated that the higher-risk groups offended more quickly and at a higher rate than lower-risk groups. The relative predictive validity of the RM2000/s, v, and c scales, as calculated using Receiver Operating Characteristic (ROC) analyses, was moderate (.68) for RM2000/s and large for both the RM2000/c (.73) and RM2000/v (.80) at the 2-year follow-up. RM2000/s was moderately accurate in predicting relative risk of proven sexual reoffending for a variety of subgroups of sexual offenders.
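
    The accuracy statistic used above is the area under the ROC curve for an ordinal risk score against proven reoffending at follow-up. A minimal illustration with simulated risk bands and outcomes (not the study's data) is shown below.

```python
# Compute ROC AUC for an ordinal risk category against a binary reoffending outcome.
# Risk bands, base rates, and sample size are simulated placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
n = 5000
risk_category = rng.integers(0, 4, size=n)                 # e.g., low .. very high risk bands
p_reoffend = np.array([0.02, 0.05, 0.12, 0.25])[risk_category]  # reoffending rises with band
reoffended = rng.random(n) < p_reoffend

print("AUC:", round(roc_auc_score(reoffended, risk_category), 2))
```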

  14. Forced synchronization of large-scale circulation to increase predictability of surface states

    NASA Astrophysics Data System (ADS)

    Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Wiegerinck, Wim; Duane, Gregory

    2016-04-01

    Numerical models are key tools in the projection of future climate change. The lack of perfect initial conditions and of perfect knowledge of the laws of physics, as well as inherent chaotic behavior, limit predictions. Conceptually, the atmospheric variables can be decomposed into a predictable component (signal) and an unpredictable component (noise). In ensemble prediction, the anomaly of the ensemble mean is regarded as the signal and the ensemble spread as the noise. Naturally, the prediction skill will be higher if the signal-to-noise ratio (SNR) is larger in the initial conditions. We run two ensemble experiments in order to explore a way to increase the SNR of surface winds and temperature. One ensemble experiment is an AGCM with prescribed sea surface temperature (SST); the other is an AGCM with both prescribed SST and nudging of the high-level temperature and winds to ERA-Interim. Each ensemble has 30 members. A larger SNR is expected and found over the tropical ocean in the first experiment because the tropical circulation is associated with convection and the associated surface wind convergence, as these are to a large extent driven by the SST. However, a small SNR is found over the high-latitude ocean and land surface due to the chaotic and non-synchronized atmosphere states. In the second experiment, the higher-level temperature and winds are forced to be synchronized (nudged to reanalysis), and hence a larger SNR of surface winds and temperature is expected. Furthermore, different nudging coefficients are also tested in order to understand the limitations of synchronizing both the large-scale circulation and the surface states. These experiments will be useful for developing strategies to synchronize the 3-D states of atmospheric models that can later be used to build a super model.
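
    The signal-to-noise decomposition described above can be sketched as follows for an ensemble field: the signal is taken as the anomaly of the ensemble mean and the noise as the ensemble spread. The array shapes and the variance-ratio definition of SNR are assumptions for illustration.

```python
# Ensemble signal-to-noise ratio: variance (in time) of the ensemble-mean anomaly
# divided by the time-averaged ensemble spread, per grid point. Synthetic data.
import numpy as np

def ensemble_snr(field, climatology):
    """field: (members, time, lat, lon); climatology: (time, lat, lon)."""
    ens_mean = field.mean(axis=0)                          # (time, lat, lon)
    signal_var = (ens_mean - climatology).var(axis=0)      # variance of the signal in time
    noise_var = field.var(axis=0).mean(axis=0)             # ensemble spread, time-averaged
    return signal_var / noise_var

rng = np.random.default_rng(2)
members, ntime, nlat, nlon = 30, 120, 10, 20
field = rng.normal(size=(members, ntime, nlat, nlon))
climatology = np.zeros((ntime, nlat, nlon))
print("median SNR:", round(float(np.median(ensemble_snr(field, climatology))), 3))
```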

  15. A New Stochastic Approach to Predict Peak and Residual Shear Strength of Natural Rock Discontinuities

    NASA Astrophysics Data System (ADS)

    Casagrande, D.; Buzzi, O.; Giacomini, A.; Lambert, C.; Fenton, G.

    2018-01-01

    Natural discontinuities are known to play a key role in the stability of rock masses. However, it is a non-trivial task to estimate the shear strength of large discontinuities. Because of the inherent difficulty of accessing the full surface of large in situ discontinuities, researchers and engineers tend to work on small-scale specimens. As a consequence, the results are often plagued by the well-known scale effect. A new approach is proposed here to predict the shear strength of discontinuities; it has the potential to avoid the scale effect. The rationale of the approach is as follows: a major parameter that governs the shear strength of a discontinuity within a rock mass is roughness, which can be accounted for by surveying the discontinuity surface. However, this is typically not possible for discontinuities contained within the rock mass, where only traces are visible. For natural surfaces, it can be assumed that traces are, to some extent, representative of the surface. It is here proposed to use the available 2D information (from a visible trace, referred to as a seed trace) and a random field model to create a large number of synthetic surfaces (3D data sets). The shear strength of each synthetic surface can then be estimated using a semi-analytical model. By using a large number of synthetic surfaces and a Monte Carlo strategy, a meaningful shear strength distribution can be obtained. This paper presents the validation of the semi-analytical mechanistic model required to support the new approach for prediction of discontinuity shear strength. The model can predict both peak and residual shear strength. The second part of the paper lays the foundation of a random field model to support the creation of synthetic surfaces having statistical properties in line with those of the seed trace data. The paper concludes that it is possible to obtain a reasonable estimate of the peak and residual shear strength of the discontinuities tested from the information of a single trace, without having access to the whole surface.

  16. Forecasting of magnitude and duration of currency crises based on the analysis of distortions of fractal scaling in exchange rate fluctuations

    NASA Astrophysics Data System (ADS)

    Uritskaya, Olga Y.

    2005-05-01

    Results of fractal stability analysis of daily exchange rate fluctuations of more than 30 floating currencies for a 10-year period are presented. It is shown for the first time that small- and large-scale dynamical instabilities of national monetary systems correlate with deviations of the detrended fluctuation analysis (DFA) exponent from the value 1.5 predicted by the efficient market hypothesis. The observed dependence is used for classification of long-term stability of floating exchange rates as well as for revealing various forms of distortion of stable currency dynamics prior to large-scale crises. A normal range of DFA exponents consistent with crisis-free long-term exchange rate fluctuations is determined, and several typical scenarios of unstable currency dynamics with DFA exponents fluctuating beyond the normal range are identified. It is shown that monetary crashes are usually preceded by prolonged periods of abnormal (decreased or increased) DFA exponent, with the after-crash exponent tending to the value 1.5 indicating a more reliable exchange rate dynamics. Statistically significant regression relations (R=0.99, p<0.01) between duration and magnitude of currency crises and the degree of distortion of monofractal patterns of exchange rate dynamics are found. It is demonstrated that the parameters of these relations characterizing small- and large-scale crises are nearly equal, which implies a common instability mechanism underlying these events. The obtained dependences have been used as a basic ingredient of a forecasting technique which provided correct in-sample predictions of monetary crisis magnitude and duration over various time scales. The developed technique can be recommended for real-time monitoring of dynamical stability of floating exchange rate systems and creating advanced early-warning-system models for currency crisis prevention.
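    A minimal implementation of the statistic underlying this analysis, detrended fluctuation analysis, applied to a synthetic random-walk exchange-rate series, for which the exponent should sit close to the 1.5 benchmark quoted in the abstract. The window sizes and the test series are illustrative assumptions.

```python
# Illustrative first-order DFA on a synthetic exchange-rate series.
import numpy as np

def dfa_exponent(series, scales=(8, 16, 32, 64, 128, 256)):
    y = np.cumsum(series - np.mean(series))                # profile of the input series
    log_s, log_f = [], []
    for s in scales:
        n_seg = len(y) // s
        resid = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            resid.append(np.mean((seg - trend) ** 2))
        log_s.append(np.log(s))
        log_f.append(0.5 * np.log(np.mean(resid)))
    return np.polyfit(log_s, log_f, 1)[0]                  # slope = DFA exponent

rng = np.random.default_rng(2)
rate = 100 + np.cumsum(rng.normal(0, 0.3, 4000))           # random-walk rate (EMH-like)
print(round(dfa_exponent(rate), 2))                        # close to the 1.5 benchmark
```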

  17. MiKlip-PRODEF: Probabilistic Decadal Forecast for Central and Western Europe

    NASA Astrophysics Data System (ADS)

    Reyers, Mark; Haas, Rabea; Ludwig, Patrick; Pinto, Joaquim

    2013-04-01

    The demand for skilful climate predictions on time scales of several years to decades has increased in recent years, in particular for economic, societal, and political purposes. Within the BMBF MiKlip consortium, a decadal prediction system on global to local scales is currently being developed. The subproject PRODEF is part of MiKlip Module C, which aims at the regionalisation of decadal predictability for Central and Western Europe. In PRODEF, a combined statistical-dynamical downscaling (SDD) and a probabilistic forecast tool are developed and applied to the new Earth system model of the Max Planck Institute Hamburg (MPI-ESM), which is part of the CMIP5 experiment. The focus is on the decadal predictability of windstorms, related wind gusts, and wind energy potentials. SDD combines the benefits of high-resolution dynamical downscaling and purely statistical downscaling of GCM output; hence, the SDD approach is used to obtain a very large ensemble of highly resolved decadal forecasts. With respect to the focal points of PRODEF, a clustering of temporally evolving atmospheric fields, a circulation weather type (CWT) analysis, and an analysis of storm damage indices are applied to the full ensemble of the decadal hindcast experiments of the MPI-ESM in its lower resolution (MPI-ESM-LR). The ensemble consists of up to ten realisations per yearly initialised decadal hindcast experiment for the period 1960-2010 (altogether 287 realisations). Representatives of CWTs/clusters and single storm episodes are dynamically downscaled with the regional climate model COSMO-CLM at a horizontal resolution of 0.22°. For each model grid point, the distributions of local climate parameters (e.g. surface wind gusts) are determined for different periods (e.g. each decade) by recombining dynamically downscaled episodes weighted with the respective weather-type frequencies. The applicability of the SDD approach is illustrated with examples of decadal forecasts of the MPI-ESM. We are able to perform a bias correction of the frequencies of large-scale weather types and to quantify the uncertainties of decadal predictability on large and local scales arising from different initial conditions. Furthermore, probability density functions of local parameters (e.g. wind gusts) for different periods and decades derived from the SDD approach are compared to observations and reanalysis data. Skill scores are used to quantify the decadal predictability for different lead-time periods and to analyse whether the SDD approach shows systematic errors for some regions.
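    The recombination step, building a local gust distribution by weighting per-weather-type downscaled samples with the weather-type frequencies of a given decade, can be sketched as a simple mixture draw. The weather types, sample pools, and frequencies below are stand-ins, not MiKlip data.

```python
# Illustrative recombination of downscaled episodes weighted by CWT frequencies.
import numpy as np

rng = np.random.default_rng(3)

# Downscaled gust samples (m/s) at one grid point, keyed by CWT (stand-in data).
gusts_by_cwt = {
    "westerly": rng.gamma(shape=9.0, scale=2.0, size=500),
    "blocking": rng.gamma(shape=6.0, scale=1.5, size=500),
    "cyclonic": rng.gamma(shape=12.0, scale=2.2, size=500),
}

# Relative frequency of each CWT in the decade of interest (from the GCM hindcast).
cwt_freq = {"westerly": 0.55, "blocking": 0.25, "cyclonic": 0.20}

def recombined_distribution(gusts_by_cwt, cwt_freq, n=10_000, rng=rng):
    """Mixture draw: pick a CWT by its frequency, then a gust from that CWT's pool."""
    types = list(cwt_freq)
    probs = np.array([cwt_freq[t] for t in types])
    picks = rng.choice(len(types), size=n, p=probs)
    return np.array([rng.choice(gusts_by_cwt[types[k]]) for k in picks])

sample = recombined_distribution(gusts_by_cwt, cwt_freq)
print(np.percentile(sample, [50, 95, 99]))   # median and high quantiles of local gusts
```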

  18. Full-scale testing and progressive damage modeling of sandwich composite aircraft fuselage structure

    NASA Astrophysics Data System (ADS)

    Leone, Frank A., Jr.

    A comprehensive experimental and computational investigation was conducted to characterize the fracture behavior and structural response of large sandwich composite aircraft fuselage panels containing artificial damage in the form of holes and notches. Full-scale tests were conducted in which panels were subjected to quasi-static combined pressure, hoop, and axial loading up to failure. The panels were constructed using plain-weave carbon/epoxy prepreg face sheets and a Nomex honeycomb core. Panel deformation and notch-tip damage development were monitored during the tests using several techniques, including optical observations, strain gages, digital image correlation (DIC), acoustic emission (AE), and frequency response (FR). Additional pretest and posttest inspections were performed via thermography, computer-aided tap tests, ultrasound, x-radiography, and scanning electron microscopy. A framework to simulate damage progression and to predict residual strength using the finite element (FE) method was developed. DIC provided local and full-field strain fields corresponding to changes in the state of damage and identified the strain components driving damage progression. AE was monitored during loading of all panels, and data analysis methodologies were developed to enable real-time determination of damage initiation, progression, and severity in large composite structures. The FR technique was developed and evaluated for its potential as a real-time nondestructive inspection method applicable to large composite structures. Due to the large disparity in scale between the fuselage panels and the artificial damage, a global/local analysis was performed. The global FE models fully represented the specific geometries, composite lay-ups, and loading mechanisms of the full-scale tests. A progressive damage model was implemented in the local FE models, allowing the gradual failure of elements in the vicinity of the artificial damage. A set of modifications to the definitions of the local FE model boundary conditions is proposed and developed to address several issues related to the scalability of progressive damage modeling concepts, especially with regard to full-scale fuselage structures. Notable improvements were observed in the ability of the FE models to predict the strength of damaged composite fuselage structures. Excellent agreement was established between the FE model predictions and the experimental results recorded by DIC, AE, FR, and visual observations.
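    A toy illustration of progressive damage by load redistribution, using a parallel fiber-bundle stand-in rather than the author's shell finite-element model: when an element's strain criterion is exceeded it is removed and its load is carried by the remaining elements, which can cascade to final failure. All material values are assumptions.

```python
# Illustrative progressive-failure cascade in an equal-load-sharing fiber bundle.
import numpy as np

rng = np.random.default_rng(4)
n = 200
strengths = rng.normal(0.015, 0.002, n)        # failure strains of the elements
stiff = np.full(n, 70e9)                       # element moduli (Pa), unit areas assumed
intact = np.ones(n, dtype=bool)

for total_load in np.linspace(1e10, 3e11, 300):   # total force ramp (N)
    while True:
        k = stiff[intact].sum()                   # remaining bundle stiffness
        if k == 0:
            break
        strain = total_load / k                   # equal strain, load shared by stiffness
        newly_failed = intact & (strengths < strain)
        if not newly_failed.any():
            break
        intact[newly_failed] = False              # remove failed elements, redistribute load
    if not intact.any():
        print(f"bundle fails (ultimate load) at ≈ {total_load:.2e} N")
        break
```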

  19. Predicting the Velocity Dispersions of the Dwarf Satellite Galaxies of Andromeda

    NASA Astrophysics Data System (ADS)

    McGaugh, Stacy S.

    2016-05-01

    Dwarf spheroidal galaxies in the Local Group are the faintest and most diffuse stellar systems known. They exhibit large mass discrepancies, making them popular laboratories for studying the missing mass problem. The PAndAS survey of M31 revealed dozens of new examples of such dwarfs. As these systems were discovered, it was possible to use the observed photometric properties to predict their stellar velocity dispersions with the modified gravity theory MOND. These predictions, made in advance of the observations, have since been largely confirmed. A unique feature of MOND is that a structurally identical dwarf will behave differently when it is or is not subject to the external field of a massive host like Andromeda. The role of this "external field effect" is critical in correctly predicting the velocity dispersions of dwarfs that deviate from empirical scaling relations. With continued improvement in the observational data, these systems could provide a test of the strong equivalence principle.
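    For an isolated dwarf deep in the MOND regime the predicted dispersion follows σ ≈ (4 G M a0 / 81)^(1/4); the sketch below evaluates that estimate and makes a crude check of whether the host's external field g_ex = V²/D exceeds the dwarf's internal field, in which case the external field effect matters and this simple estimate no longer applies. The dwarf mass, half-light radius, and host values are assumed for illustration.

```python
# Illustrative isolated deep-MOND dispersion estimate plus an external-field check.
import numpy as np

G = 4.301e-6        # gravitational constant in kpc (km/s)^2 / Msun
A0 = 3.7e3          # a0 ~ 1.2e-10 m/s^2 expressed in (km/s)^2 / kpc

def sigma_isolated_mond(m_star):
    """Deep-MOND, isolated-dwarf dispersion (km/s) for stellar mass m_star (Msun)."""
    return (4.0 * G * m_star * A0 / 81.0) ** 0.25

def external_field_dominates(m_star, r_half_kpc, v_host=230.0, d_host_kpc=150.0):
    """True if the host's field g_ex = V^2/D exceeds the dwarf's internal field."""
    g_in = np.sqrt(G * m_star * A0) / r_half_kpc     # deep-MOND internal acceleration
    g_ex = v_host ** 2 / d_host_kpc
    return g_ex > g_in

m = 1e6             # hypothetical dwarf stellar mass (Msun)
print(f"isolated MOND sigma ≈ {sigma_isolated_mond(m):.1f} km/s,",
      "EFE-dominated" if external_field_dominates(m, 0.3) else "isolated regime")
```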

  20. An economic prediction of the finer resolution level wavelet coefficients in electronic structure calculations.

    PubMed

    Nagy, Szilvia; Pipek, János

    2015-12-21

    In wavelet-based electronic structure calculations, introducing a new, finer resolution level is usually an expensive task, which is why a two-level approximation with a very fine starting resolution level is often used. This process results in large matrices to work with and a large number of coefficients to be stored. In our previous work we developed an adaptively refined solution scheme that determines the indices where the refined basis functions are to be included, and later a method for predicting the next, finer resolution-level coefficients in a very economical way. In the present contribution, we determine whether the method can be applied to predict not only the first, but also the other, higher resolution level coefficients. The energy expectation values of the predicted wave functions are also studied, as well as the scaling behaviour of the coefficients in the fine resolution limit.

  1. Measurement and prediction of flow through a replica segment of a mildly atherosclerotic coronary artery of man

    NASA Technical Reports Server (NTRS)

    Back, L. H.; Radbill, J. R.; Cho, Y. I.; Crawford, D. W.

    1986-01-01

    Pressure distributions were measured along a hollow vascular axisymmetric replica of a segment of the left circumflex coronary artery of man with mild, diffuse atherosclerotic disease. A large range of physiological Reynolds numbers, from about 60 to 500 and including the hyperemic response, was spanned in the flow investigation using a fluid simulating the kinematic viscosity of blood. Predicted pressure distributions from the numerical solution of the Navier-Stokes equations were similar in trend and magnitude to the measurements. Large variations in the predicted velocity profiles occurred along the lumen. The influence of the smaller-scale multiple flow obstacles along the wall (lesion variations) led to sharp spikes in the predicted wall shear stresses. Reynolds number similarity was discussed, and estimates were given of the likely time-averaged in vivo pressure drop and shear stress for a vessel segment.
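    A crude one-dimensional Poiseuille approximation (not the paper's Navier-Stokes solution) illustrates how a local lumen narrowing raises the pressure gradient and wall shear stress along an axisymmetric segment; the geometry, viscosity, and flow rate below are assumed values chosen so the Reynolds number falls in the quoted physiological range.

```python
# Illustrative 1D Poiseuille estimate of pressure drop and wall shear along a stenosed segment.
import numpy as np

mu = 3.5e-3                    # dynamic viscosity of the blood-analog fluid (Pa s)
rho = 1050.0                   # fluid density (kg/m^3)
Q = 1.0e-6                     # flow rate, 60 mL/min in m^3/s
x = np.linspace(0.0, 0.05, 500)                                 # 5 cm long segment
r = 1.5e-3 - 0.4e-3 * np.exp(-((x - 0.025) / 0.004) ** 2)       # mild stenosis in radius (m)

dpdx = 8.0 * mu * Q / (np.pi * r ** 4)                          # local Poiseuille gradient
delta_p = np.sum(0.5 * (dpdx[1:] + dpdx[:-1]) * np.diff(x))     # trapezoidal pressure drop (Pa)
tau_wall = 4.0 * mu * Q / (np.pi * r ** 3)                      # local wall shear stress (Pa)
re = rho * (Q / (np.pi * r[0] ** 2)) * (2 * r[0]) / mu          # inlet Reynolds number

print(f"Re ≈ {re:.0f}, pressure drop ≈ {delta_p:.0f} Pa, peak wall shear ≈ {tau_wall.max():.1f} Pa")
```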

  2. miRNAFold: a web server for fast miRNA precursor prediction in genomes.

    PubMed

    Tav, Christophe; Tempel, Sébastien; Poligny, Laurent; Tahi, Fariza

    2016-07-08

    Computational methods are required for the prediction of non-coding RNAs (ncRNAs), which are involved in many biological processes, especially at the post-transcriptional level. Among these ncRNAs, miRNAs have been widely studied, and biologists need efficient and fast tools for their identification. In particular, ab initio methods are usually required when predicting novel miRNAs. Here we present a web server dedicated to large-scale identification of miRNA precursors in genomes. It is based on an algorithm called miRNAFold that predicts miRNA hairpin structures quickly and with high sensitivity. miRNAFold is implemented as a web server with an intuitive and user-friendly interface, as well as a standalone version. The web server is freely available at: http://EvryRNA.ibisc.univ-evry.fr/miRNAFold. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
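    As a toy illustration of ab initio hairpin screening (not the miRNAFold algorithm itself), the sketch below slides a fixed window over a sequence and scores how well the 5' arm can base-pair with the 3' arm when the window is folded back on itself; the window length, loop size, and threshold are arbitrary assumptions.

```python
# Illustrative hairpin-candidate screen based on arm complementarity (toy heuristic).
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def arm_pairing_fraction(window, loop=8):
    """Fraction of paired positions when the window is folded back on itself."""
    arm = (len(window) - loop) // 2
    left, right = window[:arm], window[-arm:][::-1]
    paired = sum((a, b) in PAIRS for a, b in zip(left, right))
    return paired / arm if arm else 0.0

def candidate_hairpins(seq, win=80, step=5, threshold=0.7):
    rna = seq.upper().replace("T", "U")
    for i in range(0, len(rna) - win + 1, step):
        score = arm_pairing_fraction(rna[i:i + win])
        if score >= threshold:
            yield i, round(score, 2)

demo = "G" * 30 + "AUAUAUAU" + "C" * 30 + "ACGUACGUACGU"   # synthetic stem-loop plus tail
print(list(candidate_hairpins(demo, win=68)))
```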

  3. Theoretical prediction and impact of fundamental electric dipole moments

    DOE PAGES

    Ellis, Sebastian A. R.; Kane, Gordon L.

    2016-01-13

    The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10¹⁶ GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two-loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10⁻³⁰ e cm, and the neutron EDM should not be larger than about 5 × 10⁻²⁹ e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.

  4. Theoretical prediction and impact of fundamental electric dipole moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Sebastian A. R.; Kane, Gordon L.

    The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale ~O(10¹⁶ GeV). Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two-loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about 5 × 10⁻³⁰ e cm, and the neutron EDM should not be larger than about 5 × 10⁻²⁹ e cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. As a result, we comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.

  5. Organizational Commitment and Nurses' Characteristics as Predictors of Job Involvement.

    PubMed

    Alammar, Kamila; Alamrani, Mashael; Alqahtani, Sara; Ahmad, Muayyad

    2016-01-01

    To predict nurses' job involvement on the basis of their organizational commitment and personal characteristics at a large tertiary hospital in Saudi Arabia. A cross-sectional correlational design was used. Data were collected in 2015, using a structured questionnaire, from a convenience sample of 558 nurses working at a large tertiary hospital in Riyadh, Saudi Arabia. All commitment scales had significant relationships. Multiple linear regression analysis revealed that the model predicted a sizeable proportion of the variance in nurses' job involvement (p < 0.001). High organizational commitment enhances job involvement, which may lead to greater organizational stability and effectiveness.
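    A minimal sketch of the kind of multiple linear regression reported here, predicting job involvement from commitment subscales and a nurse characteristic; the data are synthetic and the variable names and effect sizes are assumptions, not the study's results.

```python
# Illustrative multiple linear regression on synthetic survey-style data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(5)
n = 558
affective = rng.normal(3.5, 0.7, n)          # commitment subscales (Likert-scale means)
continuance = rng.normal(3.0, 0.8, n)
normative = rng.normal(3.2, 0.7, n)
years_experience = rng.integers(1, 30, n)    # a personal characteristic

X = np.column_stack([affective, continuance, normative, years_experience])
y = 0.6 * affective + 0.2 * normative + 0.01 * years_experience + rng.normal(0, 0.5, n)

model = LinearRegression().fit(X, y)
print("R^2 =", round(r2_score(y, model.predict(X)), 2))
print("coefficients:", np.round(model.coef_, 2))
```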

  6. Using the Positive and Negative Syndrome Scale (PANSS) to Define Different Domains of Negative Symptoms: Prediction of Everyday Functioning by Impairments in Emotional Expression and Emotional Experience.

    PubMed

    Harvey, Philip D; Khan, Anzalee; Keefe, Richard S E

    2017-12-01

    Background: Reduced emotional experience and expression are two domains of negative symptoms. The authors assessed these two domains using previously developed Positive and Negative Syndrome Scale (PANSS) factors. Using an existing dataset, the authors predicted three different elements of everyday functioning (social, vocational, and everyday activities) with these two factors, as well as with performance on measures of functional capacity. Methods: A large (n=630) sample of people with schizophrenia was used as the data source for this study. Using regression analyses, the authors predicted the three different aspects of everyday functioning, first with just the two PANSS factors and then with a global negative symptom factor. Finally, neurocognitive performance and functional capacity were added as predictors. Results: The PANSS reduced emotional experience factor accounted for 21 percent of the variance in everyday social functioning, while reduced emotional expression accounted for no variance. The total PANSS negative symptom factor accounted for less variance (19%) than the reduced experience factor alone. The PANSS expression factor accounted for, at most, one percent of the variance in any of the functional outcomes, with or without the addition of other predictors. Implications: Reduced emotional experience measured with the PANSS, often referred to as "avolition and anhedonia," specifically predicted impairments in social outcomes. Further, reduced experience predicted social impairments better than emotional expression or the total PANSS negative symptom factor. In this cross-sectional study, reduced emotional experience was specifically related to social outcomes, accounting for essentially no variance in work or everyday activities and being the sole meaningful predictor of impairment in social outcomes.

  7. Social welfare as small-scale help: evolutionary psychology and the deservingness heuristic.

    PubMed

    Petersen, Michael Bang

    2012-01-01

    Public opinion concerning social welfare is largely driven by perceptions of recipient deservingness. Extant research has argued that this heuristic is learned from a variety of cultural, institutional, and ideological sources. The present article provides evidence supporting a different view: that the deservingness heuristic is rooted in psychological categories that evolved over the course of human evolution to regulate small-scale exchanges of help. To test predictions made on the basis of this view, a method designed to measure social categorization is embedded in nationally representative surveys conducted in different countries. Across the national- and individual-level differences that extant research has used to explain the heuristic, people categorize welfare recipients on the basis of whether they are lazy or unlucky. This mode of categorization furthermore induces people to think about large-scale welfare politics as its presumed ancestral equivalent: small-scale help giving. The general implications for research on heuristics are discussed.

  8. Anisotropy of the Cosmic Microwave Background Radiation on Large and Medium Angular Scales

    NASA Technical Reports Server (NTRS)

    Houghton, Anthony; Timbie, Peter

    1998-01-01

    This grant has supported work at Brown University on measurements of the 2.7 K Cosmic Microwave Background Radiation (CMB). The goal has been to characterize the spatial variations in the temperature of the CMB in order to understand the formation of large-scale structure in the universe. We have concurrently pursued two measurements using millimeter-wave telescopes carried aloft by scientific balloons. Both systems operate over a range of wavelengths, chosen to allow spectral removal of foreground sources such as the atmosphere, Galaxy, etc. The angular resolution of approx. 25 arcminutes is near the angular scale at which the most structure is predicted by current models to be visible in the CMB angular power spectrum. The main goal is to determine the angular scale of this structure; in turn we can infer the density parameter, Omega, for the universe as well as other cosmological parameters, such as the Hubble constant.

  9. The statistics of primordial density fluctuations

    NASA Astrophysics Data System (ADS)

    Barrow, John D.; Coles, Peter

    1990-05-01

    The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with n not equal to 1, and this could be good news for large-scale structure.

  10. The one-loop matter bispectrum in the Effective Field Theory of Large Scale Structures

    DOE PAGES

    Angulo, Raul E.; Foreman, Simon; Schmittfull, Marcel; ...

    2015-10-14

    Given the importance of future large scale structure surveys for delivering new cosmological information, it is crucial to reliably predict their observables. The Effective Field Theory of Large Scale Structures (EFTofLSS) provides a manifestly convergent perturbative scheme to compute the clustering of dark matter in the weakly nonlinear regime in an expansion in k/k_NL, where k is the wavenumber of interest and k_NL is the wavenumber associated with the nonlinear scale. It has recently been shown that the EFTofLSS matches the dark matter power spectrum at redshift zero to the 1% level up to k ≃ 0.3 h Mpc⁻¹ and k ≃ 0.6 h Mpc⁻¹ at one and two loops, respectively, using only one counterterm that is fit to data. Similar results have been obtained for the momentum power spectrum at one loop. This is a remarkable improvement with respect to former analytical techniques. Here we study the prediction for the equal-time dark matter bispectrum at one loop. We find that at this order it is sufficient to consider the same counterterm that was measured in the power spectrum. Without any remaining free parameter, and in a cosmology for which k_NL is smaller than in the previously considered cases (σ₈ = 0.9), we find that the prediction from the EFTofLSS agrees very well with N-body simulations up to k ≃ 0.25 h Mpc⁻¹, given the accuracy of the measurements, which is of order a few percent at the highest k's of interest. While the fit is very good on average up to k ≃ 0.25 h Mpc⁻¹, it performs slightly worse on equilateral configurations, in agreement with the expectation that, for a given maximum k, equilateral triangles are the most nonlinear.
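    The tree-level (standard perturbation theory) matter bispectrum on which the one-loop EFTofLSS calculation builds can be written as B(k1,k2,k3) = 2 F2(k1,k2) P(k1) P(k2) + cyclic permutations; the sketch below evaluates it with a toy power-law linear spectrum. A real analysis would use a Boltzmann-code spectrum and add the loop and counterterm contributions.

```python
# Illustrative tree-level SPT matter bispectrum with a toy power-law linear spectrum.
import numpy as np

def p_lin(k, a=2.0e4, n=-1.5):
    """Toy linear power spectrum in (Mpc/h)^3 (stand-in for CAMB/CLASS output)."""
    return a * k ** n

def f2(k1, k2, mu):
    """Second-order SPT kernel; mu is the cosine of the angle between k1 and k2."""
    return 5.0 / 7.0 + 0.5 * mu * (k1 / k2 + k2 / k1) + 2.0 / 7.0 * mu ** 2

def bispectrum_tree(k1, k2, k3):
    """Tree-level B(k1,k2,k3) for a closed triangle of wavenumbers (h/Mpc)."""
    def mu(a, b, c):                        # angle between sides a, b from the law of cosines
        return (c ** 2 - a ** 2 - b ** 2) / (2.0 * a * b)
    return (2.0 * f2(k1, k2, mu(k1, k2, k3)) * p_lin(k1) * p_lin(k2)
            + 2.0 * f2(k2, k3, mu(k2, k3, k1)) * p_lin(k2) * p_lin(k3)
            + 2.0 * f2(k3, k1, mu(k3, k1, k2)) * p_lin(k3) * p_lin(k1))

print(f"{bispectrum_tree(0.1, 0.1, 0.1):.3e}")   # equilateral configuration at k = 0.1 h/Mpc
```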

  11. The cross-over to magnetostrophic convection in planetary dynamo systems

    PubMed Central

    King, E. M.

    2017-01-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o²/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10³ in Earth's core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations. PMID:28413338
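    The cross-over estimate quoted in the abstract is a one-line evaluation of L_X ≈ (Λ_o²/Rm_o) D; with Λ_o ≈ 1, Rm_o ≈ 10³, and an assumed outer-core depth D ≈ 2260 km it reproduces the roughly 1/1000-of-system-scale figure.

```python
# Illustrative evaluation of the magnetostrophic cross-over length scale L_X.
def crossover_scale(elsasser=1.0, rm=1.0e3, d_km=2260.0):
    """Length scale below which magnetostrophic convection dynamics may occur."""
    return (elsasser ** 2 / rm) * d_km

lx = crossover_scale()
print(f"L_X ≈ {lx:.1f} km, i.e. ~1/{2260.0 / lx:.0f} of the system scale")
```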

  12. The cross-over to magnetostrophic convection in planetary dynamo systems.

    PubMed

    Aurnou, J M; King, E M

    2017-03-01

    Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o²/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10³ in Earth's core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations.

  13. Regional climates in the GISS general circulation model: Surface air temperature

    NASA Technical Reports Server (NTRS)

    Hewitson, Bruce

    1994-01-01

    One of the more viable research techniques for investigating global climate change and its environmental impacts is the use of general circulation models (GCMs). However, GCMs are currently unable to reliably predict the regional climate change resulting from global warming, and it is at the regional scale that predictions are required for understanding human and environmental responses. Regional climates in the extratropics are in large part governed by the synoptic-scale circulation, and the feasibility of using this interscale relationship is explored to provide a way of moving to grid-cell and sub-grid-cell scales in the model. The relationships between the daily circulation systems and surface air temperature at points across the continental United States are first developed in quantitative form using a multivariate index based on principal components analysis (PCA) of the surface circulation. These relationships are then validated by predicting daily temperature from the observed circulation and comparing the predicted values with the observed temperatures. The relationships predict surface temperature accurately over the major portion of the country in winter, and for half the country in summer. They are then applied to the surface synoptic circulation of the Goddard Institute for Space Studies (GISS) GCM control run, and a set of surface grid-cell temperatures is generated. These temperatures, based on the larger-scale validated circulation, may now be used with greater confidence at the regional scale. The generated temperatures are compared to those of the model and show that the model has regional errors of up to 10°C in individual grid cells.
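    A minimal sketch of the interscale idea with synthetic data: reduce daily circulation fields with principal components analysis, regress a grid cell's surface temperature on the leading components, and then check the regression on withheld days. The field sizes and the synthetic relationship are assumptions, not the GISS data.

```python
# Illustrative PCA-based statistical downscaling of a local temperature from circulation fields.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_days, ny, nx = 3000, 12, 18
circulation = rng.normal(size=(n_days, ny * nx))           # flattened daily circulation anomalies
true_pattern = rng.normal(size=ny * nx)                     # hidden link to the local temperature
temperature = circulation @ true_pattern * 0.05 + rng.normal(0, 1.0, n_days)

pca = PCA(n_components=10).fit(circulation)                 # leading circulation modes
scores = pca.transform(circulation)                         # daily PC scores (the multivariate index)
reg = LinearRegression().fit(scores[:2000], temperature[:2000])

pred = reg.predict(scores[2000:])                           # validate on withheld days
corr = np.corrcoef(pred, temperature[2000:])[0, 1]
print(f"validation correlation ≈ {corr:.2f}")
```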

  14. Development and Evaluation of the Sugar-Sweetened Beverages Media Literacy (SSB-ML) Scale and Its Relationship With SSB Consumption

    PubMed Central

    Chen, Yvonnes; Porter, Kathleen J.; Estabrooks, Paul A.; Zoellner, Jamie

    2017-01-01

    Understanding how adults’ media literacy skill sets affect their sugar-sweetened beverage (SSB) intake provides insight into designing effective interventions that enhance critical analysis of marketing messages and thus improve healthy beverage choices. However, a media literacy scale focusing on SSBs has been lacking. This cross-sectional study uses baseline data from a large randomized controlled trial to (a) describe the psychometric properties of an SSB Media Literacy (SSB-ML) scale and its subdomains, (b) examine how the scale varies across demographic variables, and (c) assess the scale’s concurrent validity for predicting SSB consumption. Results from 293 adults in rural southwestern Virginia (81.6% female, 94.0% White, 54.1% receiving SNAP and/or WIC benefits, consuming an average of 410 SSB kcal daily) show that the overall SSB-ML scale and its subdomains have strong internal consistencies (Cronbach’s alphas ranging from 0.65 to 0.83). The Representation & Reality domain significantly predicted SSB kilocalories after controlling for demographic variables. This study has implications for the assessment and inclusion of context-specific media literacy skills in behavioral interventions. PMID:27690635
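    The internal-consistency figure quoted for the SSB-ML scale is Cronbach's alpha; a minimal implementation on synthetic item-level responses is sketched below (the item count and response model are assumptions).

```python
# Illustrative Cronbach's alpha on synthetic item-level responses.
import numpy as np

def cronbach_alpha(items):
    """items: array (respondents, items) of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(size=(293, 1))                        # shared media-literacy trait
responses = latent + rng.normal(0, 0.8, size=(293, 8))    # 8 correlated Likert-type items
print(round(cronbach_alpha(responses), 2))                # alpha for these synthetic items (~0.9)
```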

  15. Measurement of kT splitting scales in W→ℓν events at [Formula: see text] with the ATLAS detector.

    PubMed

    Aad, G; Abajyan, T; Abbott, B; Abdallah, J; Abdel Khalek, S; Abdelalim, A A; Abdinov, O; Aben, R; Abi, B; Abolins, M; AbouZeid, O S; Abramowicz, H; Abreu, H; Acharya, B S; Adamczyk, L; Adams, D L; Addy, T N; Adelman, J; Adomeit, S; Adragna, P; Adye, T; Aefsky, S; Aguilar-Saavedra, J A; Agustoni, M; Ahlen, S P; Ahles, F; Ahmad, A; Ahsan, M; Aielli, G; Åkesson, T P A; Akimoto, G; Akimov, A V; Alam, M A; Albert, J; Albrand, S; Aleksa, M; Aleksandrov, I N; Alessandria, F; Alexa, C; Alexander, G; Alexandre, G; Alexopoulos, T; Alhroob, M; Aliev, M; Alimonti, G; Alison, J; Allbrooke, B M M; Allison, L J; Allport, P P; Allwood-Spiers, S E; Almond, J; Aloisio, A; Alon, R; Alonso, A; Alonso, F; Altheimer, A; Alvarez Gonzalez, B; Alviggi, M G; Amako, K; Amelung, C; Ammosov, V V; Amor Dos Santos, S P; Amorim, A; Amoroso, S; Amram, N; Anastopoulos, C; Ancu, L S; Andari, N; Andeen, T; Anders, C F; Anders, G; Anderson, K J; Andreazza, A; Andrei, V; Anduaga, X S; Angelidakis, S; Anger, P; Angerami, A; Anghinolfi, F; Anisenkov, A; Anjos, N; Annovi, A; Antonaki, A; Antonelli, M; Antonov, A; Antos, J; Anulli, F; Aoki, M; Aperio Bella, L; Apolle, R; Arabidze, G; Aracena, I; Arai, Y; Arce, A T H; Arfaoui, S; Arguin, J-F; Argyropoulos, S; Arik, E; Arik, M; Armbruster, A J; Arnaez, O; Arnal, V; Artamonov, A; Artoni, G; Arutinov, D; Asai, S; Ask, S; Åsman, B; Asquith, L; Assamagan, K; Astalos, R; Astbury, A; Atkinson, M; Auerbach, B; Auge, E; Augsten, K; Aurousseau, M; Avolio, G; Axen, D; Azuelos, G; Azuma, Y; Baak, M A; Baccaglioni, G; Bacci, C; Bach, A M; Bachacou, H; Bachas, K; Backes, M; Backhaus, M; Backus Mayes, J; Badescu, E; Bagnaia, P; Bai, Y; Bailey, D C; Bain, T; Baines, J T; Baker, O K; Baker, S; Balek, P; Balli, F; Banas, E; Banerjee, P; Banerjee, Sw; Banfi, D; Bangert, A; Bansal, V; Bansil, H S; Barak, L; Baranov, S P; Barber, T; Barberio, E L; Barberis, D; Barbero, M; Bardin, D Y; Barillari, T; Barisonzi, M; Barklow, T; Barlow, N; Barnett, B M; Barnett, R M; Baroncelli, A; Barone, G; Barr, A J; Barreiro, F; Barreiro Guimarães da Costa, J; Bartoldus, R; Barton, A E; Bartsch, V; Basye, A; Bates, R L; Batkova, L; Batley, J R; Battaglia, A; Battistin, M; Bauer, F; Bawa, H S; Beale, S; Beau, T; Beauchemin, P H; Beccherle, R; Bechtle, P; Beck, H P; Becker, K; Becker, S; Beckingham, M; Becks, K H; Beddall, A J; Beddall, A; Bedikian, S; Bednyakov, V A; Bee, C P; Beemster, L J; Beermann, T A; Begel, M; Behar Harpaz, S; Belanger-Champagne, C; Bell, P J; Bell, W H; Bella, G; Bellagamba, L; Bellomo, M; Belloni, A; Beloborodova, O; Belotskiy, K; Beltramello, O; Benary, O; Benchekroun, D; Bendtz, K; Benekos, N; Benhammou, Y; Benhar Noccioli, E; Benitez Garcia, J A; Benjamin, D P; Benoit, M; Bensinger, J R; Benslama, K; Bentvelsen, S; Berge, D; Bergeaas Kuutmann, E; Berger, N; Berghaus, F; Berglund, E; Beringer, J; Bernat, P; Bernhard, R; Bernius, C; Bernlochner, F U; Berry, T; Bertella, C; Bertin, A; Bertolucci, F; Besana, M I; Besjes, G J; Besson, N; Bethke, S; Bhimji, W; Bianchi, R M; Bianchini, L; Bianco, M; Biebel, O; Bieniek, S P; Bierwagen, K; Biesiada, J; Biglietti, M; Bilokon, H; Bindi, M; Binet, S; Bingul, A; Bini, C; Biscarat, C; Bittner, B; Black, C W; Black, J E; Black, K M; Blair, R E; Blanchard, J-B; Blazek, T; Bloch, I; Blocker, C; Blocki, J; Blum, W; Blumenschein, U; Bobbink, G J; Bobrovnikov, V S; Bocchetta, S S; Bocci, A; Boddy, C R; Boehler, M; Boek, J; Boek, T T; Boelaert, N; Bogaerts, J A; Bogdanchikov, A; Bogouch, A; Bohm, C; Bohm, J; Boisvert, V; Bold, T; Boldea, V; Bolnet, N M; 
Bomben, M; Bona, M; Boonekamp, M; Bordoni, S; Borer, C; Borisov, A; Borissov, G; Borjanovic, I; Borri, M; Borroni, S; Bortfeldt, J; Bortolotto, V; Bos, K; Boscherini, D; Bosman, M; Boterenbrood, H; Bouchami, J; Boudreau, J; Bouhova-Thacker, E V; Boumediene, D; Bourdarios, C; Bousson, N; Boutouil, S; Boveia, A; Boyd, J; Boyko, I R; Bozovic-Jelisavcic, I; Bracinik, J; Branchini, P; Brandt, A; Brandt, G; Brandt, O; Bratzler, U; Brau, B; Brau, J E; Braun, H M; Brazzale, S F; Brelier, B; Bremer, J; Brendlinger, K; Brenner, R; Bressler, S; Bristow, T M; Britton, D; Brochu, F M; Brock, I; Brock, R; Broggi, F; Bromberg, C; Bronner, J; Brooijmans, G; Brooks, T; Brooks, W K; Brown, G; Bruckman de Renstrom, P A; Bruncko, D; Bruneliere, R; Brunet, S; Bruni, A; Bruni, G; Bruschi, M; Bryngemark, L; Buanes, T; Buat, Q; Bucci, F; Buchanan, J; Buchholz, P; Buckingham, R M; Buckley, A G; Buda, S I; Budagov, I A; Budick, B; Bugge, L; Bulekov, O; Bundock, A C; Bunse, M; Buran, T; Burckhart, H; Burdin, S; Burgess, T; Burke, S; Busato, E; Büscher, V; Bussey, P; Buszello, C P; Butler, B; Butler, J M; Buttar, C M; Butterworth, J M; Buttinger, W; Byszewski, M; Cabrera Urbán, S; Caforio, D; Cakir, O; Calafiura, P; Calderini, G; Calfayan, P; Calkins, R; Caloba, L P; Caloi, R; Calvet, D; Calvet, S; Camacho Toro, R; Camarri, P; Cameron, D; Caminada, L M; Caminal Armadans, R; Campana, S; Campanelli, M; Canale, V; Canelli, F; Canepa, A; Cantero, J; Cantrill, R; Cao, T; Capeans Garrido, M D M; Caprini, I; Caprini, M; Capriotti, D; Capua, M; Caputo, R; Cardarelli, R; Carli, T; Carlino, G; Carminati, L; Caron, S; Carquin, E; Carrillo-Montoya, G D; Carter, A A; Carter, J R; Carvalho, J; Casadei, D; Casado, M P; Cascella, M; Caso, C; Castaneda-Miranda, E; Castillo Gimenez, V; Castro, N F; Cataldi, G; Catastini, P; Catinaccio, A; Catmore, J R; Cattai, A; Cattani, G; Caughron, S; Cavaliere, V; Cavalleri, P; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Ceradini, F; Cerqueira, A S; Cerri, A; Cerrito, L; Cerutti, F; Cetin, S A; Chafaq, A; Chakraborty, D; Chalupkova, I; Chan, K; Chang, P; Chapleau, B; Chapman, J D; Chapman, J W; Charlton, D G; Chavda, V; Chavez Barajas, C A; Cheatham, S; Chekanov, S; Chekulaev, S V; Chelkov, G A; Chelstowska, M A; Chen, C; Chen, H; Chen, S; Chen, X; Chen, Y; Cheng, Y; Cheplakov, A; Cherkaoui El Moursli, R; Chernyatin, V; Cheu, E; Cheung, S L; Chevalier, L; Chiefari, G; Chikovani, L; Childers, J T; Chilingarov, A; Chiodini, G; Chisholm, A S; Chislett, R T; Chitan, A; Chizhov, M V; Choudalakis, G; Chouridou, S; Chow, B K B; Christidi, I A; Christov, A; Chromek-Burckhart, D; Chu, M L; Chudoba, J; Ciapetti, G; Ciftci, A K; Ciftci, R; Cinca, D; Cindro, V; Ciocio, A; Cirilli, M; Cirkovic, P; Citron, Z H; Citterio, M; Ciubancan, M; Clark, A; Clark, P J; Clarke, R N; Cleland, W; Clemens, J C; Clement, B; Clement, C; Coadou, Y; Cobal, M; Coccaro, A; Cochran, J; Coffey, L; Cogan, J G; Coggeshall, J; Colas, J; Cole, S; Colijn, A P; Collins, N J; Collins-Tooth, C; Collot, J; Colombo, T; Colon, G; Compostella, G; Conde Muiño, P; Coniavitis, E; Conidi, M C; Consonni, S M; Consorti, V; Constantinescu, S; Conta, C; Conti, G; Conventi, F; Cooke, M; Cooper, B D; Cooper-Sarkar, A M; Cooper-Smith, N J; Copic, K; Cornelissen, T; Corradi, M; Corriveau, F; Cortes-Gonzalez, A; Cortiana, G; Costa, G; Costa, M J; Costanzo, D; Côté, D; Cottin, G; Courneyea, L; Cowan, G; Cox, B E; Cranmer, K; Crépé-Renaudin, S; Crescioli, F; Cristinziani, M; Crosetti, G; Cuciuc, C-M; Cuenca Almenar, C; Cuhadar Donszelmann, T; Cummings, J; 
Curatolo, M; Curtis, C J; Cuthbert, C; Cwetanski, P; Czirr, H; Czodrowski, P; Czyczula, Z; D'Auria, S; D'Onofrio, M; D'Orazio, A; Da Cunha Sargedas De Sousa, M J; Da Via, C; Dabrowski, W; Dafinca, A; Dai, T; Dallaire, F; Dallapiccola, C; Dam, M; Damiani, D S; Danielsson, H O; Dao, V; Darbo, G; Darlea, G L; Darmora, S; Dassoulas, J A; Davey, W; Davidek, T; Davidson, N; Davidson, R; Davies, E; Davies, M; Davignon, O; Davison, A R; Davygora, Y; Dawe, E; Dawson, I; Daya-Ishmukhametova, R K; De, K; de Asmundis, R; De Castro, S; De Cecco, S; de Graat, J; De Groot, N; de Jong, P; De La Taille, C; De la Torre, H; De Lorenzi, F; De Nooij, L; De Pedis, D; De Salvo, A; De Sanctis, U; De Santo, A; De Vivie De Regie, J B; De Zorzi, G; Dearnaley, W J; Debbe, R; Debenedetti, C; Dechenaux, B; Dedovich, D V; Degenhardt, J; Del Peso, J; Del Prete, T; Delemontex, T; Deliyergiyev, M; Dell'Acqua, A; Dell'Asta, L; Della Pietra, M; Della Volpe, D; Delmastro, M; Delsart, P A; Deluca, C; Demers, S; Demichev, M; Demirkoz, B; Denisov, S P; Derendarz, D; Derkaoui, J E; Derue, F; Dervan, P; Desch, K; Deviveiros, P O; Dewhurst, A; DeWilde, B; Dhaliwal, S; Dhullipudi, R; Di Ciaccio, A; Di Ciaccio, L; Di Donato, C; Di Girolamo, A; Di Girolamo, B; Di Luise, S; Di Mattia, A; Di Micco, B; Di Nardo, R; Di Simone, A; Di Sipio, R; Diaz, M A; Diehl, E B; Dietrich, J; Dietzsch, T A; Diglio, S; Dindar Yagci, K; Dingfelder, J; Dinut, F; Dionisi, C; Dita, P; Dita, S; Dittus, F; Djama, F; Djobava, T; do Vale, M A B; Do Valle Wemans, A; Doan, T K O; Dobbs, M; Dobos, D; Dobson, E; Dodd, J; Doglioni, C; Doherty, T; Dohmae, T; Doi, Y; Dolejsi, J; Dolezal, Z; Dolgoshein, B A; Donadelli, M; Donini, J; Dopke, J; Doria, A; Dos Anjos, A; Dotti, A; Dova, M T; Doyle, A T; Dressnandt, N; Dris, M; Dubbert, J; Dube, S; Dubreuil, E; Duchovni, E; Duckeck, G; Duda, D; Dudarev, A; Dudziak, F; Duerdoth, I P; Duflot, L; Dufour, M-A; Duguid, L; Dührssen, M; Dunford, M; Duran Yildiz, H; Düren, M; Duxfield, R; Dwuznik, M; Ebenstein, W L; Ebke, J; Eckweiler, S; Edson, W; Edwards, C A; Edwards, N C; Ehrenfeld, W; Eifert, T; Eigen, G; Einsweiler, K; Eisenhandler, E; Ekelof, T; El Kacimi, M; Ellert, M; Elles, S; Ellinghaus, F; Ellis, K; Ellis, N; Elmsheuser, J; Elsing, M; Emeliyanov, D; Enari, Y; Engelmann, R; Engl, A; Epp, B; Erdmann, J; Ereditato, A; Eriksson, D; Ernst, J; Ernst, M; Ernwein, J; Errede, D; Errede, S; Ertel, E; Escalier, M; Esch, H; Escobar, C; Espinal Curull, X; Esposito, B; Etienne, F; Etienvre, A I; Etzion, E; Evangelakou, D; Evans, H; Fabbri, L; Fabre, C; Facini, G; Fakhrutdinov, R M; Falciano, S; Fang, Y; Fanti, M; Farbin, A; Farilla, A; Farley, J; Farooque, T; Farrell, S; Farrington, S M; Farthouat, P; Fassi, F; Fassnacht, P; Fassouliotis, D; Fatholahzadeh, B; Favareto, A; Fayard, L; Federic, P; Fedin, O L; Fedorko, W; Fehling-Kaschek, M; Feligioni, L; Feng, C; Feng, E J; Fenyuk, A B; Ferencei, J; Fernando, W; Ferrag, S; Ferrando, J; Ferrara, V; Ferrari, A; Ferrari, P; Ferrari, R; Ferreira de Lima, D E; Ferrer, A; Ferrere, D; Ferretti, C; Ferretto Parodi, A; Fiascaris, M; Fiedler, F; Filipčič, A; Filthaut, F; Fincke-Keeler, M; Fiolhais, M C N; Fiorini, L; Firan, A; Fischer, J; Fisher, M J; Fitzgerald, E A; Flechl, M; Fleck, I; Fleischmann, P; Fleischmann, S; Fletcher, G T; Fletcher, G; Flick, T; Floderus, A; Flores Castillo, L R; Florez Bustos, A C; Flowerdew, M J; Fonseca Martin, T; Formica, A; Forti, A; Fortin, D; Fournier, D; Fowler, A J; Fox, H; Francavilla, P; Franchini, M; Franchino, S; Francis, D; Frank, T; Franklin, M; Franz, S; 
Fraternali, M; Fratina, S; French, S T; Friedrich, C; Friedrich, F; Froidevaux, D; Frost, J A; Fukunaga, C; Fullana Torregrosa, E; Fulsom, B G; Fuster, J; Gabaldon, C; Gabizon, O; Gadatsch, S; Gadfort, T; Gadomski, S; Gagliardi, G; Gagnon, P; Galea, C; Galhardo, B; Gallas, E J; Gallo, V; Gallop, B J; Gallus, P; Gan, K K; Gandrajula, R P; Gao, Y S; Gaponenko, A; Garay Walls, F M; Garberson, F; García, C; García Navarro, J E; Garcia-Sciveres, M; Gardner, R W; Garelli, N; Garonne, V; Gatti, C; Gaudio, G; Gaur, B; Gauthier, L; Gauzzi, P; Gavrilenko, I L; Gay, C; Gaycken, G; Gazis, E N; Ge, P; Gecse, Z; Gee, C N P; Geerts, D A A; Geich-Gimbel, Ch; Gellerstedt, K; Gemme, C; Gemmell, A; Genest, M H; Gentile, S; George, M; George, S; Gerbaudo, D; Gerlach, P; Gershon, A; Geweniger, C; Ghazlane, H; Ghodbane, N; Giacobbe, B; Giagu, S; Giangiobbe, V; Gianotti, F; Gibbard, B; Gibson, A; Gibson, S M; Gilchriese, M; Gillam, T P S; Gillberg, D; Gillman, A R; Gingrich, D M; Ginzburg, J; Giokaris, N; Giordani, M P; Giordano, R; Giorgi, F M; Giovannini, P; Giraud, P F; Giugni, D; Giunta, M; Gjelsten, B K; Gladilin, L K; Glasman, C; Glatzer, J; Glazov, A; Glonti, G L; Goddard, J R; Godfrey, J; Godlewski, J; Goebel, M; Goeringer, C; Goldfarb, S; Golling, T; Golubkov, D; Gomes, A; Gomez Fajardo, L S; Gonçalo, R; Goncalves Pinto Firmino Da Costa, J; Gonella, L; González de la Hoz, S; Gonzalez Parra, G; Gonzalez Silva, M L; Gonzalez-Sevilla, S; Goodson, J J; Goossens, L; Göpfert, T; Gorbounov, P A; Gordon, H A; Gorelov, I; Gorfine, G; Gorini, B; Gorini, E; Gorišek, A; Gornicki, E; Goshaw, A T; Gössling, C; Gostkin, M I; Gough Eschrich, I; Gouighri, M; Goujdami, D; Goulette, M P; Goussiou, A G; Goy, C; Gozpinar, S; Graber, L; Grabowska-Bold, I; Grafström, P; Grahn, K-J; Gramstad, E; Grancagnolo, F; Grancagnolo, S; Grassi, V; Gratchev, V; Gray, H M; Gray, J A; Graziani, E; Grebenyuk, O G; Greenshaw, T; Greenwood, Z D; Gregersen, K; Gregor, I M; Grenier, P; Griffiths, J; Grigalashvili, N; Grillo, A A; Grimm, K; Grinstein, S; Gris, Ph; Grishkevich, Y V; Grivaz, J-F; Grohs, J P; Grohsjean, A; Gross, E; Grosse-Knetter, J; Groth-Jensen, J; Grybel, K; Guest, D; Gueta, O; Guicheney, C; Guido, E; Guillemin, T; Guindon, S; Gul, U; Gunther, J; Guo, B; Guo, J; Gutierrez, P; Guttman, N; Gutzwiller, O; Guyot, C; Gwenlan, C; Gwilliam, C B; Haas, A; Haas, S; Haber, C; Hadavand, H K; Hadley, D R; Haefner, P; Hajduk, Z; Hakobyan, H; Hall, D; Halladjian, G; Hamacher, K; Hamal, P; Hamano, K; Hamer, M; Hamilton, A; Hamilton, S; Han, L; Hanagaki, K; Hanawa, K; Hance, M; Handel, C; Hanke, P; Hansen, J R; Hansen, J B; Hansen, J D; Hansen, P H; Hansson, P; Hara, K; Harenberg, T; Harkusha, S; Harper, D; Harrington, R D; Harris, O M; Hartert, J; Hartjes, F; Haruyama, T; Harvey, A; Hasegawa, S; Hasegawa, Y; Hassani, S; Haug, S; Hauschild, M; Hauser, R; Havranek, M; Hawkes, C M; Hawkings, R J; Hawkins, A D; Hayakawa, T; Hayashi, T; Hayden, D; Hays, C P; Hayward, H S; Haywood, S J; Head, S J; Heck, T; Hedberg, V; Heelan, L; Heim, S; Heinemann, B; Heisterkamp, S; Helary, L; Heller, C; Heller, M; Hellman, S; Hellmich, D; Helsens, C; Henderson, R C W; Henke, M; Henrichs, A; Henriques Correia, A M; Henrot-Versille, S; Hensel, C; Hernandez, C M; Hernández Jiménez, Y; Herrberg, R; Herten, G; Hertenberger, R; Hervas, L; Hesketh, G G; Hessey, N P; Hickling, R; Higón-Rodriguez, E; Hill, J C; Hiller, K H; Hillert, S; Hillier, S J; Hinchliffe, I; Hines, E; Hirose, M; Hirsch, F; Hirschbuehl, D; Hobbs, J; Hod, N; Hodgkinson, M C; Hodgson, P; Hoecker, A; 
Hoeferkamp, M R; Hoffman, J; Hoffmann, D; Hohlfeld, M; Holmgren, S O; Holy, T; Holzbauer, J L; Hong, T M; Hooft van Huysduynen, L; Hostachy, J-Y; Hou, S; Hoummada, A; Howard, J; Howarth, J; Hrabovsky, M; Hristova, I; Hrivnac, J; Hryn'ova, T; Hsu, P J; Hsu, S-C; Hu, D; Hubacek, Z; Hubaut, F; Huegging, F; Huettmann, A; Huffman, T B; Hughes, E W; Hughes, G; Huhtinen, M; Hülsing, T A; Hurwitz, M; Huseynov, N; Huston, J; Huth, J; Iacobucci, G; Iakovidis, G; Ibbotson, M; Ibragimov, I; Iconomidou-Fayard, L; Idarraga, J; Iengo, P; Igonkina, O; Ikegami, Y; Ikematsu, K; Ikeno, M; Iliadis, D; Ilic, N; Ince, T; Ioannou, P; Iodice, M; Iordanidou, K; Ippolito, V; Irles Quiles, A; Isaksson, C; Ishino, M; Ishitsuka, M; Ishmukhametov, R; Issever, C; Istin, S; Ivashin, A V; Iwanski, W; Iwasaki, H; Izen, J M; Izzo, V; Jackson, B; Jackson, J N; Jackson, P; Jaekel, M R; Jain, V; Jakobs, K; Jakobsen, S; Jakoubek, T; Jakubek, J; Jamin, D O; Jana, D K; Jansen, E; Jansen, H; Janssen, J; Jantsch, A; Janus, M; Jared, R C; Jarlskog, G; Jeanty, L; Jeng, G-Y; Jen-La Plante, I; Jennens, D; Jenni, P; Jeske, C; Jež, P; Jézéquel, S; Jha, M K; Ji, H; Ji, W; Jia, J; Jiang, Y; Jimenez Belenguer, M; Jin, S; Jinnouchi, O; Joergensen, M D; Joffe, D; Johansen, M; Johansson, K E; Johansson, P; Johnert, S; Johns, K A; Jon-And, K; Jones, G; Jones, R W L; Jones, T J; Joram, C; Jorge, P M; Joshi, K D; Jovicevic, J; Jovin, T; Ju, X; Jung, C A; Jungst, R M; Juranek, V; Jussel, P; Juste Rozas, A; Kabana, S; Kaci, M; Kaczmarska, A; Kadlecik, P; Kado, M; Kagan, H; Kagan, M; Kajomovitz, E; Kalinin, S; Kama, S; Kanaya, N; Kaneda, M; Kaneti, S; Kanno, T; Kantserov, V A; Kanzaki, J; Kaplan, B; Kapliy, A; Kar, D; Karagounis, M; Karakostas, K; Karnevskiy, M; Kartvelishvili, V; Karyukhin, A N; Kashif, L; Kasieczka, G; Kass, R D; Kastanas, A; Kataoka, Y; Katzy, J; Kaushik, V; Kawagoe, K; Kawamoto, T; Kawamura, G; Kazama, S; Kazanin, V F; Kazarinov, M Y; Keeler, R; Keener, P T; Kehoe, R; Keil, M; Keller, J S; Kenyon, M; Keoshkerian, H; Kepka, O; Kerschen, N; Kerševan, B P; Kersten, S; Kessoku, K; Keung, J; Khalil-Zada, F; Khandanyan, H; Khanov, A; Kharchenko, D; Khodinov, A; Khomich, A; Khoo, T J; Khoriauli, G; Khoroshilov, A; Khovanskiy, V; Khramov, E; Khubua, J; Kim, H; Kim, S H; Kimura, N; Kind, O; King, B T; King, M; King, R S B; Kirk, J; Kiryunin, A E; Kishimoto, T; Kisielewska, D; Kitamura, T; Kittelmann, T; Kiuchi, K; Kladiva, E; Klein, M; Klein, U; Kleinknecht, K; Klemetti, M; Klier, A; Klimek, P; Klimentov, A; Klingenberg, R; Klinger, J A; Klinkby, E B; Klioutchnikova, T; Klok, P F; Klous, S; Kluge, E-E; Kluge, T; Kluit, P; Kluth, S; Kneringer, E; Knoops, E B F G; Knue, A; Ko, B R; Kobayashi, T; Kobel, M; Kocian, M; Kodys, P; Koenig, S; Koetsveld, F; Koevesarki, P; Koffas, T; Koffeman, E; Kogan, L A; Kohlmann, S; Kohn, F; Kohout, Z; Kohriki, T; Koi, T; Kolanoski, H; Kolesnikov, V; Koletsou, I; Koll, J; Komar, A A; Komori, Y; Kondo, T; Köneke, K; König, A C; Kono, T; Kononov, A I; Konoplich, R; Konstantinidis, N; Kopeliansky, R; Koperny, S; Köpke, L; Kopp, A K; Korcyl, K; Kordas, K; Korn, A; Korol, A; Korolkov, I; Korolkova, E V; Korotkov, V A; Kortner, O; Kortner, S; Kostyukhin, V V; Kotov, S; Kotov, V M; Kotwal, A; Kourkoumelis, C; Kouskoura, V; Koutsman, A; Kowalewski, R; Kowalski, T Z; Kozanecki, W; Kozhin, A S; Kral, V; Kramarenko, V A; Kramberger, G; Krasny, M W; Krasznahorkay, A; Kraus, J K; Kravchenko, A; Kreiss, S; Krejci, F; Kretzschmar, J; Kreutzfeldt, K; Krieger, N; Krieger, P; Kroeninger, K; Kroha, H; Kroll, J; Kroseberg, J; 
Krstic, J; Kruchonak, U; Krüger, H; Kruker, T; Krumnack, N; Krumshteyn, Z V; Kruse, M K; Kubota, T; Kuday, S; Kuehn, S; Kugel, A; Kuhl, T; Kukhtin, V; Kulchitsky, Y; Kuleshov, S; Kuna, M; Kunkle, J; Kupco, A; Kurashige, H; Kurata, M; Kurochkin, Y A; Kus, V; Kuwertz, E S; Kuze, M; Kvita, J; Kwee, R; La Rosa, A; La Rotonda, L; Labarga, L; Lablak, S; Lacasta, C; Lacava, F; Lacey, J; Lacker, H; Lacour, D; Lacuesta, V R; Ladygin, E; Lafaye, R; Laforge, B; Lagouri, T; Lai, S; Laisne, E; Lambourne, L; Lampen, C L; Lampl, W; Lançon, E; Landgraf, U; Landon, M P J; Lang, V S; Lange, C; Lankford, A J; Lanni, F; Lantzsch, K; Lanza, A; Laplace, S; Lapoire, C; Laporte, J F; Lari, T; Larner, A; Lassnig, M; Laurelli, P; Lavorini, V; Lavrijsen, W; Laycock, P; Le Dortz, O; Le Guirriec, E; Le Menedeu, E; LeCompte, T; Ledroit-Guillon, F; Lee, H; Lee, J S H; Lee, S C; Lee, L; Lefebvre, M; Legendre, M; Legger, F; Leggett, C; Lehmacher, M; Lehmann Miotto, G; Leister, A G; Leite, M A L; Leitner, R; Lellouch, D; Lemmer, B; Lendermann, V; Leney, K J C; Lenz, T; Lenzen, G; Lenzi, B; Leonhardt, K; Leontsinis, S; Lepold, F; Leroy, C; Lessard, J-R; Lester, C G; Lester, C M; Levêque, J; Levin, D; Levinson, L J; Lewis, A; Lewis, G H; Leyko, A M; Leyton, M; Li, B; Li, B; Li, H; Li, H L; Li, S; Li, X; Liang, Z; Liao, H; Liberti, B; Lichard, P; Lie, K; Liebal, J; Liebig, W; Limbach, C; Limosani, A; Limper, M; Lin, S C; Linde, F; Linnemann, J T; Lipeles, E; Lipniacka, A; Lisovyi, M; Liss, T M; Lissauer, D; Lister, A; Litke, A M; Liu, D; Liu, J B; Liu, L; Liu, M; Liu, Y; Livan, M; Livermore, S S A; Lleres, A; Llorente Merino, J; Lloyd, S L; Lo Sterzo, F; Lobodzinska, E; Loch, P; Lockman, W S; Loddenkoetter, T; Loebinger, F K; Loevschall-Jensen, A E; Loginov, A; Loh, C W; Lohse, T; Lohwasser, K; Lokajicek, M; Lombardo, V P; Long, R E; Lopes, L; Lopez Mateos, D; Lorenz, J; Lorenzo Martinez, N; Losada, M; Loscutoff, P; Losty, M J; Lou, X; Lounis, A; Loureiro, K F; Love, J; Love, P A; Lowe, A J; Lu, F; Lubatti, H J; Luci, C; Lucotte, A; Ludwig, D; Ludwig, I; Ludwig, J; Luehring, F; Lukas, W; Luminari, L; Lund, E; Lundberg, B; Lundberg, J; Lundberg, O; Lund-Jensen, B; Lundquist, J; Lungwitz, M; Lynn, D; Lysak, R; Lytken, E; Ma, H; Ma, L L; Maccarrone, G; Macchiolo, A; Maček, B; Machado Miguens, J; Macina, D; Mackeprang, R; Madar, R; Madaras, R J; Maddocks, H J; Mader, W F; Madsen, A; Maeno, M; Maeno, T; Magnoni, L; Magradze, E; Mahboubi, K; Mahlstedt, J; Mahmoud, S; Mahout, G; Maiani, C; Maidantchik, C; Maio, A; Majewski, S; Makida, Y; Makovec, N; Mal, P; Malaescu, B; Malecki, Pa; Malecki, P; Maleev, V P; Malek, F; Mallik, U; Malon, D; Malone, C; Maltezos, S; Malyshev, V; Malyukov, S; Mamuzic, J; Manabe, A; Mandelli, L; Mandić, I; Mandrysch, R; Maneira, J; Manfredini, A; Manhaes de Andrade Filho, L; Manjarres Ramos, J A; Mann, A; Manning, P M; Manousakis-Katsikakis, A; Mansoulie, B; Mantifel, R; Mapelli, A; Mapelli, L; March, L; Marchand, J F; Marchese, F; Marchiori, G; Marcisovsky, M; Marino, C P; Marroquim, F; Marshall, Z; Marti, L F; Marti-Garcia, S; Martin, B; Martin, B; Martin, J P; Martin, T A; Martin, V J; Martin Dit Latour, B; Martinez, H; Martinez, M; Martinez Outschoorn, V; Martin-Haugh, S; Martyniuk, A C; Marx, M; Marzano, F; Marzin, A; Masetti, L; Mashimo, T; Mashinistov, R; Masik, J; Maslennikov, A L; Massa, I; Massol, N; Mastrandrea, P; Mastroberardino, A; Masubuchi, T; Matsunaga, H; Matsushita, T; Mättig, P; Mättig, S; Mattravers, C; Maurer, J; Maxfield, S J; Maximov, D A; Mazini, R; Mazur, M; Mazzaferro, L; 
Mazzanti, M; Mc Donald, J; Mc Kee, S P; McCarn, A; McCarthy, R L; McCarthy, T G; McCubbin, N A; McFarlane, K W; Mcfayden, J A; Mchedlidze, G; Mclaughlan, T; McMahon, S J; McPherson, R A; Meade, A; Mechnich, J; Mechtel, M; Medinnis, M; Meehan, S; Meera-Lebbai, R; Meguro, T; Mehlhase, S; Mehta, A; Meier, K; Meineck, C; Meirose, B; Melachrinos, C; Mellado Garcia, B R; Meloni, F; Mendoza Navas, L; Meng, Z; Mengarelli, A; Menke, S; Meoni, E; Mercurio, K M; Meric, N; Mermod, P; Merola, L; Meroni, C; Merritt, F S; Merritt, H; Messina, A; Metcalfe, J; Mete, A S; Meyer, C; Meyer, C; Meyer, J-P; Meyer, J; Meyer, J; Michal, S; Micu, L; Middleton, R P; Migas, S; Mijović, L; Mikenberg, G; Mikestikova, M; Mikuž, M; Miller, D W; Miller, R J; Mills, W J; Mills, C; Milov, A; Milstead, D A; Milstein, D; Minaenko, A A; Miñano Moya, M; Minashvili, I A; Mincer, A I; Mindur, B; Mineev, M; Ming, Y; Mir, L M; Mirabelli, G; Mitrevski, J; Mitsou, V A; Mitsui, S; Miyagawa, P S; Mjörnmark, J U; Moa, T; Moeller, V; Mohapatra, S; Mohr, W; Moles-Valls, R; Molfetas, A; Mönig, K; Monini, C; Monk, J; Monnier, E; Montejo Berlingen, J; Monticelli, F; Monzani, S; Moore, R W; Mora Herrera, C; Moraes, A; Morange, N; Morel, J; Moreno, D; Moreno Llácer, M; Morettini, P; Morgenstern, M; Morii, M; Morley, A K; Mornacchi, G; Morris, J D; Morvaj, L; Möser, N; Moser, H G; Mosidze, M; Moss, J; Mount, R; Mountricha, E; Mouraviev, S V; Moyse, E J W; Mueller, F; Mueller, J; Mueller, K; Mueller, T; Muenstermann, D; Müller, T A; Munwes, Y; Murray, W J; Mussche, I; Musto, E; Myagkov, A G; Myska, M; Nackenhorst, O; Nadal, J; Nagai, K; Nagai, R; Nagai, Y; Nagano, K; Nagarkar, A; Nagasaka, Y; Nagel, M; Nairz, A M; Nakahama, Y; Nakamura, K; Nakamura, T; Nakano, I; Namasivayam, H; Nanava, G; Napier, A; Narayan, R; Nash, M; Nattermann, T; Naumann, T; Navarro, G; Neal, H A; Nechaeva, P Yu; Neep, T J; Negri, A; Negri, G; Negrini, M; Nektarijevic, S; Nelson, A; Nelson, T K; Nemecek, S; Nemethy, P; Nepomuceno, A A; Nessi, M; Neubauer, M S; Neumann, M; Neusiedl, A; Neves, R M; Nevski, P; Newcomer, F M; Newman, P R; Nguyen, D H; Nguyen Thi Hong, V; Nickerson, R B; Nicolaidou, R; Nicquevert, B; Niedercorn, F; Nielsen, J; Nikiforou, N; Nikiforov, A; Nikolaenko, V; Nikolic-Audit, I; Nikolics, K; Nikolopoulos, K; Nilsen, H; Nilsson, P; Ninomiya, Y; Nisati, A; Nisius, R; Nobe, T; Nodulman, L; Nomachi, M; Nomidis, I; Norberg, S; Nordberg, M; Novakova, J; Nozaki, M; Nozka, L; Nuncio-Quiroz, A-E; Nunes Hanninger, G; Nunnemann, T; Nurse, E; O'Brien, B J; O'Neil, D C; O'Shea, V; Oakes, L B; Oakham, F G; Oberlack, H; Ocariz, J; Ochi, A; Ochoa, M I; Oda, S; Odaka, S; Odier, J; Ogren, H; Oh, A; Oh, S H; Ohm, C C; Ohshima, T; Okamura, W; Okawa, H; Okumura, Y; Okuyama, T; Olariu, A; Olchevski, A G; Olivares Pino, S A; Oliveira, M; Oliveira Damazio, D; Oliver Garcia, E; Olivito, D; Olszewski, A; Olszowska, J; Onofre, A; Onyisi, P U E; Oram, C J; Oreglia, M J; Oren, Y; Orestano, D; Orlando, N; Oropeza Barrera, C; Orr, R S; Osculati, B; Ospanov, R; Osuna, C; Otero Y Garzon, G; Ottersbach, J P; Ouchrif, M; Ouellette, E A; Ould-Saada, F; Ouraou, A; Ouyang, Q; Ovcharova, A; Owen, M; Owen, S; Ozcan, V E; Ozturk, N; Pacheco Pages, A; Padilla Aranda, C; Pagan Griso, S; Paganis, E; Pahl, C; Paige, F; Pais, P; Pajchel, K; Palacino, G; Paleari, C P; Palestini, S; Pallin, D; Palma, A; Palmer, J D; Pan, Y B; Panagiotopoulou, E; Panduro Vazquez, J G; Pani, P; Panikashvili, N; Panitkin, S; Pantea, D; Papadelis, A; Papadopoulou, Th D; Paramonov, A; Paredes Hernandez, D; Park, W; 
Parker, M A; Parodi, F; Parsons, J A; Parzefall, U; Pashapour, S; Pasqualucci, E; Passaggio, S; Passeri, A; Pastore, F; Pastore, Fr; Pásztor, G; Pataraia, S; Patel, N D; Pater, J R; Patricelli, S; Pauly, T; Pearce, J; Pedersen, M; Pedraza Lopez, S; Pedraza Morales, M I; Peleganchuk, S V; Pelikan, D; Peng, H; Penning, B; Penson, A; Penwell, J; Perez Cavalcanti, T; Perez Codina, E; Pérez García-Estañ, M T; Perez Reale, V; Perini, L; Pernegger, H; Perrino, R; Perrodo, P; Peshekhonov, V D; Peters, K; Peters, R F Y; Petersen, B A; Petersen, J; Petersen, T C; Petit, E; Petridis, A; Petridou, C; Petrolo, E; Petrucci, F; Petschull, D; Petteni, M; Pezoa, R; Phan, A; Phillips, P W; Piacquadio, G; Pianori, E; Picazio, A; Piccaro, E; Piccinini, M; Piec, S M; Piegaia, R; Pignotti, D T; Pilcher, J E; Pilkington, A D; Pina, J; Pinamonti, M; Pinder, A; Pinfold, J L; Pingel, A; Pinto, B; Pizio, C; Pleier, M-A; Pleskot, V; Plotnikova, E; Plucinski, P; Poblaguev, A; Poddar, S; Podlyski, F; Poettgen, R; Poggioli, L; Pohl, D; Pohl, M; Polesello, G; Policicchio, A; Polifka, R; Polini, A; Poll, J; Polychronakos, V; Pomeroy, D; Pommès, K; Pontecorvo, L; Pope, B G; Popeneciu, G A; Popovic, D S; Poppleton, A; Portell Bueso, X; Pospelov, G E; Pospisil, S; Potrap, I N; Potter, C J; Potter, C T; Poulard, G; Poveda, J; Pozdnyakov, V; Prabhu, R; Pralavorio, P; Pranko, A; Prasad, S; Pravahan, R; Prell, S; Pretzl, K; Price, D; Price, J; Price, L E; Prieur, D; Primavera, M; Proissl, M; Prokofiev, K; Prokoshin, F; Protopapadaki, E; Protopopescu, S; Proudfoot, J; Prudent, X; Przybycien, M; Przysiezniak, H; Psoroulas, S; Ptacek, E; Pueschel, E; Puldon, D; Purohit, M; Puzo, P; Pylypchenko, Y; Qian, J; Quadt, A; Quarrie, D R; Quayle, W B; Quilty, D; Raas, M; Radeka, V; Radescu, V; Radloff, P; Ragusa, F; Rahal, G; Rahimi, A M; Rajagopalan, S; Rammensee, M; Rammes, M; Randle-Conde, A S; Randrianarivony, K; Rangel-Smith, C; Rao, K; Rauscher, F; Rave, T C; Ravenscroft, T; Raymond, M; Read, A L; Rebuzzi, D M; Redelbach, A; Redlinger, G; Reece, R; Reeves, K; Reinsch, A; Reisinger, I; Relich, M; Rembser, C; Ren, Z L; Renaud, A; Rescigno, M; Resconi, S; Resende, B; Reznicek, P; Rezvani, R; Richter, R; Richter-Was, E; Ridel, M; Rieck, P; Rijssenbeek, M; Rimoldi, A; Rinaldi, L; Rios, R R; Ritsch, E; Riu, I; Rivoltella, G; Rizatdinova, F; Rizvi, E; Robertson, S H; Robichaud-Veronneau, A; Robinson, D; Robinson, J E M; Robson, A; Rocha de Lima, J G; Roda, C; Roda Dos Santos, D; Roe, A; Roe, S; Røhne, O; Rolli, S; Romaniouk, A; Romano, M; Romeo, G; Romero Adam, E; Rompotis, N; Roos, L; Ros, E; Rosati, S; Rosbach, K; Rose, A; Rose, M; Rosenbaum, G A; Rosendahl, P L; Rosenthal, O; Rosselet, L; Rossetti, V; Rossi, E; Rossi, L P; Rotaru, M; Roth, I; Rothberg, J; Rousseau, D; Royon, C R; Rozanov, A; Rozen, Y; Ruan, X; Rubbo, F; Rubinskiy, I; Ruckstuhl, N; Rud, V I; Rudolph, C; Rudolph, M S; Rühr, F; Ruiz-Martinez, A; Rumyantsev, L; Rurikova, Z; Rusakovich, N A; Ruschke, A; Rutherfoord, J P; Ruthmann, N; Ruzicka, P; Ryabov, Y F; Rybar, M; Rybkin, G; Ryder, N C; Saavedra, A F; Sadeh, I; Sadrozinski, H F-W; Sadykov, R; Safai Tehrani, F; Sakamoto, H; Salamanna, G; Salamon, A; Saleem, M; Salek, D; Salihagic, D; Salnikov, A; Salt, J; Salvachua Ferrando, B M; Salvatore, D; Salvatore, F; Salvucci, A; Salzburger, A; Sampsonidis, D; Sanchez, A; Sánchez, J; Sanchez Martinez, V; Sandaker, H; Sander, H G; Sanders, M P; Sandhoff, M; Sandoval, T; Sandoval, C; Sandstroem, R; Sankey, D P C; Sansoni, A; Santamarina Rios, C; Santoni, C; Santonico, R; Santos, H; 
Santoyo Castillo, I; Sapp, K; Saraiva, J G; Sarangi, T; Sarkisyan-Grinbaum, E; Sarrazin, B; Sarri, F; Sartisohn, G; Sasaki, O; Sasaki, Y; Sasao, N; Satsounkevitch, I; Sauvage, G; Sauvan, E; Sauvan, J B; Savard, P; Savinov, V; Savu, D O; Sawyer, L; Saxon, D H; Saxon, J; Sbarra, C; Sbrizzi, A; Scannicchio, D A; Scarcella, M; Schaarschmidt, J; Schacht, P; Schaefer, D; Schaelicke, A; Schaepe, S; Schaetzel, S; Schäfer, U; Schaffer, A C; Schaile, D; Schamberger, R D; Scharf, V; Schegelsky, V A; Scheirich, D; Schernau, M; Scherzer, M I; Schiavi, C; Schieck, J; Schillo, C; Schioppa, M; Schlenker, S; Schmidt, E; Schmieden, K; Schmitt, C; Schmitt, C; Schmitt, S; Schneider, B; Schnellbach, Y J; Schnoor, U; Schoeffel, L; Schoening, A; Schorlemmer, A L S; Schott, M; Schouten, D; Schovancova, J; Schram, M; Schroeder, C; Schroer, N; Schultens, M J; Schultes, J; Schultz-Coulon, H-C; Schulz, H; Schumacher, M; Schumm, B A; Schune, Ph; Schwartzman, A; Schwegler, Ph; Schwemling, Ph; Schwienhorst, R; Schwindling, J; Schwindt, T; Schwoerer, M; Sciacca, F G; Scifo, E; Sciolla, G; Scott, W G; Searcy, J; Sedov, G; Sedykh, E; Seidel, S C; Seiden, A; Seifert, F; Seixas, J M; Sekhniaidze, G; Sekula, S J; Selbach, K E; Seliverstov, D M; Sellden, B; Sellers, G; Seman, M; Semprini-Cesari, N; Serfon, C; Serin, L; Serkin, L; Serre, T; Seuster, R; Severini, H; Sfyrla, A; Shabalina, E; Shamim, M; Shan, L Y; Shank, J T; Shao, Q T; Shapiro, M; Shatalov, P B; Shaw, K; Sherwood, P; Shimizu, S; Shimojima, M; Shin, T; Shiyakova, M; Shmeleva, A; Shochet, M J; Short, D; Shrestha, S; Shulga, E; Shupe, M A; Sicho, P; Sidoti, A; Siegert, F; Sijacki, Dj; Silbert, O; Silva, J; Silver, Y; Silverstein, D; Silverstein, S B; Simak, V; Simard, O; Simic, Lj; Simion, S; Simioni, E; Simmons, B; Simoniello, R; Simonyan, M; Sinervo, P; Sinev, N B; Sipica, V; Siragusa, G; Sircar, A; Sisakyan, A N; Sivoklokov, S Yu; Sjölin, J; Sjursen, T B; Skinnari, L A; Skottowe, H P; Skovpen, K; Skubic, P; Slater, M; Slavicek, T; Sliwa, K; Smakhtin, V; Smart, B H; Smestad, L; Smirnov, S Yu; Smirnov, Y; Smirnova, L N; Smirnova, O; Smith, B C; Smith, K M; Smizanska, M; Smolek, K; Snesarev, A A; Snidero, G; Snow, S W; Snow, J; Snyder, S; Sobie, R; Sodomka, J; Soffer, A; Soh, D A; Solans, C A; Solar, M; Solc, J; Soldatov, E Yu; Soldevila, U; Solfaroli Camillocci, E; Solodkov, A A; Solovyanov, O V; Solovyev, V; Soni, N; Sood, A; Sopko, V; Sopko, B; Sosebee, M; Soualah, R; Soueid, P; Soukharev, A; South, D; Spagnolo, S; Spanò, F; Spighi, R; Spigo, G; Spiwoks, R; Spousta, M; Spreitzer, T; Spurlock, B; St Denis, R D; Stahlman, J; Stamen, R; Stanecka, E; Stanek, R W; Stanescu, C; Stanescu-Bellu, M; Stanitzki, M M; Stapnes, S; Starchenko, E A; Stark, J; Staroba, P; Starovoitov, P; Staszewski, R; Staude, A; Stavina, P; Steele, G; Steinbach, P; Steinberg, P; Stekl, I; Stelzer, B; Stelzer, H J; Stelzer-Chilton, O; Stenzel, H; Stern, S; Stewart, G A; Stillings, J A; Stockton, M C; Stoebe, M; Stoerig, K; Stoicea, G; Stonjek, S; Strachota, P; Stradling, A R; Straessner, A; Strandberg, J; Strandberg, S; Strandlie, A; Strang, M; Strauss, E; Strauss, M; Strizenec, P; Ströhmer, R; Strom, D M; Strong, J A; Stroynowski, R; Stugu, B; Stumer, I; Stupak, J; Sturm, P; Styles, N A; Su, D; Subramania, Hs; Subramaniam, R; Succurro, A; Sugaya, Y; Suhr, C; Suk, M; Sulin, V V; Sultansoy, S; Sumida, T; Sun, X; Sundermann, J E; Suruliz, K; Susinno, G; Sutton, M R; Suzuki, Y; Suzuki, Y; Svatos, M; Swedish, S; Swiatlowski, M; Sykora, I; Sykora, T; Ta, D; Tackmann, K; Taffard, A; Tafirout, R; 
Taiblum, N; Takahashi, Y; Takai, H; Takashima, R; Takeda, H; Takeshita, T; Takubo, Y; Talby, M; Talyshev, A; Tam, J Y C; Tamsett, M C; Tan, K G; Tanaka, J; Tanaka, R; Tanaka, S; Tanaka, S; Tanasijczuk, A J; Tani, K; Tannoury, N; Tapprogge, S; Tardif, D; Tarem, S; Tarrade, F; Tartarelli, G F; Tas, P; Tasevsky, M; Tassi, E; Tayalati, Y; Taylor, C; Taylor, F E; Taylor, G N; Taylor, W; Teinturier, M; Teischinger, F A; Teixeira Dias Castanheira, M; Teixeira-Dias, P; Temming, K K; Ten Kate, H; Teng, P K; Terada, S; Terashi, K; Terron, J; Testa, M; Teuscher, R J; Therhaag, J; Theveneaux-Pelzer, T; Thoma, S; Thomas, J P; Thompson, E N; Thompson, P D; Thompson, P D; Thompson, A S; Thomsen, L A; Thomson, E; Thomson, M; Thong, W M; Thun, R P; Tian, F; Tibbetts, M J; Tic, T; Tikhomirov, V O; Tikhonov, Y A; Timoshenko, S; Tiouchichine, E; Tipton, P; Tisserant, S; Todorov, T; Todorova-Nova, S; Toggerson, B; Tojo, J; Tokár, S; Tokushuku, K; Tollefson, K; Tomlinson, L; Tomoto, M; Tompkins, L; Toms, K; Tonoyan, A; Topfel, C; Topilin, N D; Torrence, E; Torres, H; Torró Pastor, E; Toth, J; Touchard, F; Tovey, D R; Tran, H L; Trefzger, T; Tremblet, L; Tricoli, A; Trigger, I M; Trincaz-Duvoid, S; Tripiana, M F; Triplett, N; Trischuk, W; Trocmé, B; Troncon, C; Trottier-McDonald, M; Trovatelli, M; True, P; Trzebinski, M; Trzupek, A; Tsarouchas, C; Tseng, J C-L; Tsiakiris, M; Tsiareshka, P V; Tsionou, D; Tsipolitis, G; Tsiskaridze, S; Tsiskaridze, V; Tskhadadze, E G; Tsukerman, I I; Tsulaia, V; Tsung, J-W; Tsuno, S; Tsybychev, D; Tua, A; Tudorache, A; Tudorache, V; Tuggle, J M; Turala, M; Turecek, D; Turk Cakir, I; Turra, R; Tuts, P M; Tykhonov, A; Tylmad, M; Tyndel, M; Tzanakos, G; Uchida, K; Ueda, I; Ueno, R; Ughetto, M; Ugland, M; Uhlenbrock, M; Ukegawa, F; Unal, G; Undrus, A; Unel, G; Ungaro, F C; Unno, Y; Urbaniec, D; Urquijo, P; Usai, G; Vacavant, L; Vacek, V; Vachon, B; Vahsen, S; Valencic, N; Valentinetti, S; Valero, A; Valery, L; Valkar, S; Valladolid Gallego, E; Vallecorsa, S; Valls Ferrer, J A; Van Berg, R; Van Der Deijl, P C; van der Geer, R; van der Graaf, H; Van Der Leeuw, R; van der Poel, E; van der Ster, D; van Eldik, N; van Gemmeren, P; Van Nieuwkoop, J; van Vulpen, I; Vanadia, M; Vandelli, W; Vaniachine, A; Vankov, P; Vannucci, F; Vari, R; Varnes, E W; Varol, T; Varouchas, D; Vartapetian, A; Varvell, K E; Vassilakopoulos, V I; Vazeille, F; Vazquez Schroeder, T; Veloso, F; Veneziano, S; Ventura, A; Ventura, D; Venturi, M; Venturi, N; Vercesi, V; Verducci, M; Verkerke, W; Vermeulen, J C; Vest, A; Vetterli, M C; Vichou, I; Vickey, T; Vickey Boeriu, O E; Viehhauser, G H A; Viel, S; Villa, M; Villaplana Perez, M; Vilucchi, E; Vincter, M G; Vinek, E; Vinogradov, V B; Virzi, J; Vitells, O; Viti, M; Vivarelli, I; Vives Vaque, F; Vlachos, S; Vladoiu, D; Vlasak, M; Vogel, A; Vokac, P; Volpi, G; Volpi, M; Volpini, G; von der Schmitt, H; von Radziewski, H; von Toerne, E; Vorobel, V; Vorwerk, V; Vos, M; Voss, R; Vossebeld, J H; Vranjes, N; Vranjes Milosavljevic, M; Vrba, V; Vreeswijk, M; Vu Anh, T; Vuillermet, R; Vukotic, I; Vykydal, Z; Wagner, W; Wagner, P; Wahlen, H; Wahrmund, S; Wakabayashi, J; Walch, S; Walder, J; Walker, R; Walkowiak, W; Wall, R; Waller, P; Walsh, B; Wang, C; Wang, H; Wang, H; Wang, J; Wang, J; Wang, K; Wang, R; Wang, S M; Wang, T; Wang, X; Warburton, A; Ward, C P; Wardrope, D R; Warsinsky, M; Washbrook, A; Wasicki, C; Watanabe, I; Watkins, P M; Watson, A T; Watson, I J; Watson, M F; Watts, G; Watts, S; Waugh, A T; Waugh, B M; Weber, M S; Webster, J S; Weidberg, A R; Weigell, P; 
Weingarten, J; Weiser, C; Wells, P S; Wenaus, T; Wendland, D; Weng, Z; Wengler, T; Wenig, S; Wermes, N; Werner, M; Werner, P; Werth, M; Wessels, M; Wetter, J; Weydert, C; Whalen, K; White, A; White, M J; White, S; Whitehead, S R; Whiteson, D; Whittington, D; Wicke, D; Wickens, F J; Wiedenmann, W; Wielers, M; Wienemann, P; Wiglesworth, C; Wiik-Fuchs, L A M; Wijeratne, P A; Wildauer, A; Wildt, M A; Wilhelm, I; Wilkens, H G; Will, J Z; Williams, E; Williams, H H; Williams, S; Willis, W; Willocq, S; Wilson, J A; Wilson, M G; Wilson, A; Wingerter-Seez, I; Winkelmann, S; Winklmeier, F; Wittgen, M; Wittig, T; Wittkowski, J; Wollstadt, S J; Wolter, M W; Wolters, H; Wong, W C; Wooden, G; Wosiek, B K; Wotschack, J; Woudstra, M J; Wozniak, K W; Wraight, K; Wright, M; Wrona, B; Wu, S L; Wu, X; Wu, Y; Wulf, E; Wynne, B M; Xella, S; Xiao, M; Xie, S; Xu, C; Xu, D; Xu, L; Yabsley, B; Yacoob, S; Yamada, M; Yamaguchi, H; Yamaguchi, Y; Yamamoto, A; Yamamoto, K; Yamamoto, S; Yamamura, T; Yamanaka, T; Yamauchi, K; Yamazaki, T; Yamazaki, Y; Yan, Z; Yang, H; Yang, H; Yang, U K; Yang, Y; Yang, Z; Yanush, S; Yao, L; Yasu, Y; Yatsenko, E; Ye, J; Ye, S; Yen, A L; Yilmaz, M; Yoosoofmiya, R; Yorita, K; Yoshida, R; Yoshihara, K; Young, C; Young, C J S; Youssef, S; Yu, D; Yu, D R; Yu, J; Yu, J; Yuan, L; Yurkewicz, A; Zabinski, B; Zaidan, R; Zaitsev, A M; Zambito, S; Zanello, L; Zanzi, D; Zaytsev, A; Zeitnitz, C; Zeman, M; Zemla, A; Zenin, O; Ženiš, T; Zerwas, D; Zevi Della Porta, G; Zhang, D; Zhang, H; Zhang, J; Zhang, L; Zhang, X; Zhang, Z; Zhao, L; Zhao, Z; Zhemchugov, A; Zhong, J; Zhou, B; Zhou, N; Zhou, Y; Zhu, C G; Zhu, H; Zhu, J; Zhu, Y; Zhuang, X; Zhuravlov, V; Zibell, A; Zieminska, D; Zimin, N I; Zimmermann, R; Zimmermann, S; Zimmermann, S; Zinonos, Z; Ziolkowski, M; Zitoun, R; Živković, L; Zmouchko, V V; Zobernig, G; Zoccoli, A; Zur Nedden, M; Zutshi, V; Zwalinski, L

    A measurement of splitting scales, as defined by the k_T clustering algorithm, is presented for final states containing a W boson produced in proton-proton collisions at a centre-of-mass energy of 7 TeV. The measurement is based on the full 2010 data sample corresponding to an integrated luminosity of 36 pb⁻¹ which was collected using the ATLAS detector at the CERN Large Hadron Collider. Cluster splitting scales are measured in events containing W bosons decaying to electrons or muons. The measurement comprises the four hardest splitting scales in a k_T cluster sequence of the hadronic activity accompanying the W boson, and ratios of these splitting scales. Backgrounds such as multi-jet and top-quark-pair production are subtracted and the results are corrected for detector effects. Predictions from various Monte Carlo event generators at particle level are compared to the data. Overall, reasonable agreement is found with all generators, but larger deviations between the predictions and the data are evident in the soft regions of the splitting scales.
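
    For readers unfamiliar with the observable, the sketch below illustrates the generic exclusive k_T splitting-scale definition on a toy event: d_ij = min(p_T,i², p_T,j²) ΔR_ij²/R² and d_iB = p_T,i², with a splitting scale √d recorded at each clustering step. This is a minimal sketch under stated assumptions; the particle list, radius parameter, and naive recombination scheme are illustrative and are not the ATLAS analysis code.

```python
import math

def kt_splitting_scales(particles, R=0.6):
    """particles: list of (pt, rapidity, phi). Returns sqrt(d) at each
    clustering step of a generic exclusive k_T sequence, hardest first.
    Hypothetical toy implementation, not the ATLAS measurement code."""
    objs = list(particles)
    scales = []
    while objs:
        # beam distances d_iB = pt_i^2
        d_beam = [(pt * pt, i, None) for i, (pt, y, phi) in enumerate(objs)]
        # pairwise distances d_ij = min(pt_i^2, pt_j^2) * dR_ij^2 / R^2
        d_pair = []
        for i in range(len(objs)):
            for j in range(i + 1, len(objs)):
                pti, yi, phii = objs[i]
                ptj, yj, phij = objs[j]
                dphi = abs(phii - phij)
                if dphi > math.pi:
                    dphi = 2 * math.pi - dphi
                dr2 = (yi - yj) ** 2 + dphi ** 2
                d_pair.append((min(pti, ptj) ** 2 * dr2 / R ** 2, i, j))
        dmin, i, j = min(d_beam + d_pair, key=lambda t: t[0])
        scales.append(math.sqrt(dmin))
        if j is None:                 # beam merge: remove particle i
            objs.pop(i)
        else:                         # pair merge: naive pt-weighted recombination
            pti, yi, phii = objs[i]
            ptj, yj, phij = objs[j]
            ptm = pti + ptj
            objs[j] = (ptm, (pti * yi + ptj * yj) / ptm,
                       (pti * phii + ptj * phij) / ptm)
            objs.pop(i)
    return sorted(scales, reverse=True)

# toy event: four particles (pt in GeV, rapidity, phi)
print(kt_splitting_scales([(40.0, 0.1, 0.2), (25.0, -0.3, 2.9),
                           (8.0, 1.2, 1.0), (5.0, -1.0, 4.5)]))
```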

  16. A priori testing of subgrid-scale models for large-eddy simulation of the atmospheric boundary layer

    NASA Astrophysics Data System (ADS)

    Juneja, Anurag; Brasseur, James G.

    1996-11-01

    Subgrid-scale models are generally developed assuming homogeneous isotropic turbulence with the filter cutoff lying in the inertial range. In the surface layer and capping inversion regions of the atmospheric boundary layer, the turbulence is strongly anisotropic and, in general, influenced by both buoyancy and shear. Furthermore, the integral scale motions are under-resolved in these regions. Herein we perform direct numerical simulations of shear and buoyancy-generated homogeneous anisotropic turbulence to compute and analyze the actual subgrid-resolved-scale (SGS-RS) dynamics as the filter cutoff moves into the energy-containing scales. These are compared with the SGS-RS dynamics predicted by Smagorinsky-based models with a focus on motivating improved closures. We find that, in general, the underlying assumption of such models, that the anisotropic part of the subgrid stress tensor be aligned with the resolved strain rate tensor, is a poor approximation. Similarly, we find poor alignment between the actual and predicted stress divergence, and find low correlations between the actual and modeled subgrid-scale contribution to the pressure and pressure gradient. Details will be given in the talk.
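
    The a priori test described above amounts to comparing the exact deviatoric subgrid stress, obtained by filtering a resolved velocity field, against the Smagorinsky prediction −2(C_sΔ)²|S̄|S̄_ij. Below is a minimal 2-D sketch of that comparison; the synthetic random field, crude box filter, filter width, and C_s value are illustrative assumptions standing in for the DNS data used in the study.

```python
import numpy as np

def box_filter(f, w):
    """Crude periodic box filter of roughly w cells per axis (toy stand-in)."""
    out = np.zeros_like(f)
    for dx in range(-(w // 2), w // 2 + 1):
        for dy in range(-(w // 2), w // 2 + 1):
            out += np.roll(np.roll(f, dx, axis=0), dy, axis=1)
    return out / (w // 2 * 2 + 1) ** 2

rng = np.random.default_rng(0)
n, L, w, cs = 128, 2 * np.pi, 8, 0.17
u = rng.standard_normal((n, n)); v = rng.standard_normal((n, n))  # toy 2-D "velocity", not DNS
dx = L / n

# exact subgrid stress tau_ij = <u_i u_j> - <u_i><u_j>, deviatoric part
uf, vf = box_filter(u, w), box_filter(v, w)
t11 = box_filter(u * u, w) - uf * uf
t22 = box_filter(v * v, w) - vf * vf
t12 = box_filter(u * v, w) - uf * vf
half_trace = 0.5 * (t11 + t22)
t11d, t22d = t11 - half_trace, t22 - half_trace

# resolved strain rate and Smagorinsky model m_ij = -2 (cs*Delta)^2 |S| S_ij
dudx, dudy = np.gradient(uf, dx, axis=0), np.gradient(uf, dx, axis=1)
dvdx, dvdy = np.gradient(vf, dx, axis=0), np.gradient(vf, dx, axis=1)
s11, s22, s12 = dudx, dvdy, 0.5 * (dudy + dvdx)
smag = np.sqrt(2 * (s11**2 + s22**2 + 2 * s12**2))
nu_t = (cs * w * dx) ** 2 * smag
m11, m22, m12 = -2 * nu_t * s11, -2 * nu_t * s22, -2 * nu_t * s12

# tensor-level correlation between exact and modelled deviatoric stress
num = (t11d * m11 + t22d * m22 + 2 * t12 * m12).mean()
den = np.sqrt((t11d**2 + t22d**2 + 2 * t12**2).mean() *
              (m11**2 + m22**2 + 2 * m12**2).mean())
print("tensor correlation:", num / den)
```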

  17. Regional climates in the GISS global circulation model - Synoptic-scale circulation

    NASA Technical Reports Server (NTRS)

    Hewitson, B.; Crane, R. G.

    1992-01-01

    A major weakness of current general circulation models (GCMs) is their perceived inability to predict reliably the regional consequences of a global-scale change, and it is these regional-scale predictions that are necessary for studies of human-environmental response. For large areas of the extratropics, the local climate is controlled by the synoptic-scale atmospheric circulation, and it is the purpose of this paper to evaluate the synoptic-scale circulation of the Goddard Institute for Space Studies (GISS) GCM. A methodology for validating the daily synoptic circulation using Principal Component Analysis is described, and the methodology is then applied to the GCM simulation of sea level pressure over the continental United States (excluding Alaska). The analysis demonstrates that the GISS 4 x 5 deg GCM Model II effectively simulates the synoptic-scale atmospheric circulation over the United States. The modes of variance describing the atmospheric circulation of the model are comparable to those found in the observed data, and these modes explain similar amounts of variance in their respective datasets. The temporal behavior of these circulation modes in the synoptic time frame is also comparable.
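
    A minimal sketch of the kind of EOF/PCA comparison described above: compute the leading modes of daily sea-level-pressure anomaly fields for two datasets ("observed" and "model") and compare explained variance and pattern congruence. The synthetic arrays, grid size, and congruence metric are placeholders for illustration, not GISS Model II output or the paper's exact methodology.

```python
import numpy as np

def leading_modes(slp, n_modes=3):
    """slp: (days, nlat*nlon) field. Returns (EOF patterns, explained variance %)."""
    anom = slp - slp.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    var = s**2 / (s**2).sum() * 100.0
    return vt[:n_modes], var[:n_modes]

rng = np.random.default_rng(1)
days, npts = 365, 20 * 30                            # one year, toy 20x30 grid
obs = rng.standard_normal((days, npts))              # stand-in for observed SLP anomalies
mod = obs + 0.5 * rng.standard_normal((days, npts))  # stand-in for GCM SLP anomalies

eof_obs, var_obs = leading_modes(obs)
eof_mod, var_mod = leading_modes(mod)

for k in range(3):
    # pattern congruence: |cosine| between observed and model EOF k
    c = abs(np.dot(eof_obs[k], eof_mod[k]) /
            (np.linalg.norm(eof_obs[k]) * np.linalg.norm(eof_mod[k])))
    print(f"mode {k+1}: obs {var_obs[k]:.1f}% vs model {var_mod[k]:.1f}% variance, "
          f"pattern congruence {c:.2f}")
```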

  18. Quantifying predictability in a model with statistical features of the atmosphere

    PubMed Central

    Kleeman, Richard; Majda, Andrew J.; Timofeyev, Ilya

    2002-01-01

    The Galerkin truncated inviscid Burgers equation has recently been shown by the authors to be a simple model with many degrees of freedom, with many statistical properties similar to those occurring in dynamical systems relevant to the atmosphere. These properties include long time-correlated, large-scale modes of low frequency variability and short time-correlated “weather modes” at smaller scales. The correlation scaling in the model extends over several decades and may be explained by a simple theory. Here a thorough analysis of the nature of predictability in the idealized system is developed by using a theoretical framework developed by R.K. This analysis is based on a relative entropy functional that has been shown elsewhere by one of the authors to measure the utility of statistical predictions precisely. The analysis is facilitated by the fact that most relevant probability distributions are approximately Gaussian if the initial conditions are assumed to be so. Rather surprisingly this holds for both the equilibrium (climatological) and nonequilibrium (prediction) distributions. We find that in most cases the absolute difference in the first moments of these two distributions (the “signal” component) is the main determinant of predictive utility variations. Contrary to conventional belief in the ensemble prediction area, the dispersion of prediction ensembles is generally of secondary importance in accounting for variations in utility associated with different initial conditions. This conclusion has potentially important implications for practical weather prediction, where traditionally most attention has focused on dispersion and its variability. PMID:12429863
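
    For Gaussian prediction and climatology distributions, the relative-entropy measure referred to above separates into a "signal" term driven by the shift of the ensemble mean and a "dispersion" term driven by the change in spread. The sketch below shows that decomposition for a single variable; the specific numbers are illustrative, not results from the paper.

```python
import math

def relative_entropy_gaussian(mu_p, var_p, mu_q, var_q):
    """Relative entropy of prediction N(mu_p, var_p) w.r.t. climatology N(mu_q, var_q),
    split into signal (mean shift) and dispersion (spread change) contributions."""
    signal = 0.5 * (mu_p - mu_q) ** 2 / var_q
    dispersion = 0.5 * (math.log(var_q / var_p) + var_p / var_q - 1.0)
    return signal + dispersion, signal, dispersion

# prediction ensemble with a shifted mean and mildly reduced spread (illustrative values)
total, sig, disp = relative_entropy_gaussian(mu_p=1.2, var_p=0.8, mu_q=0.0, var_q=1.0)
print(f"R = {total:.3f}  (signal {sig:.3f}, dispersion {disp:.3f})")
```

    With these illustrative numbers the signal term dominates, mirroring the paper's conclusion that the mean shift, not the ensemble dispersion, typically controls variations in predictive utility.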

  19. Initial conditions and modeling for simulations of shock driven turbulent material mixing

    DOE PAGES

    Grinstein, Fernando F.

    2016-11-17

    Here, we focus on the simulation of shock-driven material mixing driven by flow instabilities and initial conditions (IC). Beyond the complex multi-scale resolution issues of shocks and variable-density turbulence, we must address the equally difficult problem of predicting flow transition promoted by energy deposited at the material interfacial layer during the shock-interface interactions. Transition involves unsteady large-scale coherent-structure dynamics capturable by a large eddy simulation (LES) strategy, but not by an unsteady Reynolds-Averaged Navier–Stokes (URANS) approach based on developed equilibrium turbulence assumptions and single-point-closure modeling. On the engineering end of computations, such URANS approaches, with reduced 1D/2D dimensionality and coarser grids, tend to be preferred for faster turnaround in full-scale configurations.

  20. Public Health Crisis in War and Conflict - Health Security in Aggregate.

    PubMed

    Quinn, John; Zelený, Tomáš; Subramaniam, Rammika; Bencko, Vladimír

    2017-03-01

    Public health status of populations is multifactorial and, among other factors, is linked to war and conflict. A public health crisis can erupt when states go to war or are invaded; health security may be reduced for affected populations. This study reviews in aggregate multiple indices of human security, human development, and legitimacy of the state in order to describe a predictable global health portrait. A paradigm shift from large global powers to non-state actors and proxies affects regional influence through scaled conflict and presents major global health challenges for policy makers. Small-scale conflict with large-scale violence threatens health security for at-risk populations. The paper concludes that health security is directly proportional to state security. Copyright© by the National Institute of Public Health, Prague 2017
